Over the past two years, we’ve embarked on a quest to help the insights industry get better at harnessing passive mobile behavioral data. In 2015, we partnered with Research Now for an analysis of mobile wallet usage, using unlinked passive and survey-based data. This year, we teamed up with Research Now once again for research-on-research directly linking actual mobile traffic and app data to consumers’ self-reported online shopper journey behavior.
We asked over 1,000 shoppers, across a variety of Black Friday/Cyber Monday categories, a standard set of purchase journey survey questions immediately after the event, then again after 30 days, 60 days, and 90 days. We then compared their self-reported online and mobile behavior to the actual mobile app and website usage data from their smartphones.
The results deepened our understanding of how best to use (and not use) each respective data source, and how combining both can help our clients get closer to the truth than they could using any single source of information.
Here are a few things to consider if you find yourself tasked with a purchase journey project that uses one or both of these data sources as fuel for insights and recommendations:
Most people use multiple devices for a major purchase journey, and here’s why you should care:
- Any device tracking platform (even one claiming a 360° view) is likely missing some online behavior relevant to a given shopper journey. In our study, we captured behavior from each respondent’s primary smartphone, yet many of these consumers reported visiting websites we had no record of in our tracking data. Although they reported visiting these websites on their smartphones, it is likely that some of those visits happened on a personal computer, a tablet, a computer at work, etc.
Not all mobile usage is related to the purchase journey you care about:
- We saw cases of consumers whose behavioral data showed they’d visited big retail websites and mobile apps during the purchase journey but who did not report using these sites/apps as part of the journey we asked them about. This is a bigger problem with larger, more generalist mobile websites and apps (like Amazon, for this particular project, or like PayPal when we did the earlier Mobile Wallet study with a similar methodological exercise).
Human recall ain’t perfect. We all know this, but it’s important to understand when and where it’s less perfect, and where it’s actually sufficient for our purposes. Using survey sampling to analyze behaviors can be enormously valuable in many situations, but understand the limitations, and recognize when you’re asking for more detail than someone can accurately recall. Here are a few situations to consider:
- Asking whether a given retailer, brand, or major web property figured into the purchase journey at all will give you pretty good survey data to work with. Smaller retailers, websites, and apps will suffer more misses and recall failures, but recall itself is a proxy for influence: if you’re ultimately trying to figure out how best to influence a consumer’s purchase journey, self-reported recall of visits is a good signal, whereas relying on behavioral data alone may inflate the apparent impact of smaller properties on the final purchase journey.
- Asking people to remember whether they used the mobile app vs. the mobile website introduces more error into your data. Most websites are now mobile-optimized and look and feel like mobile apps, or will automatically hand users off to the native mobile app on their phone when possible.
- In this particular project, we saw evidence of a 35-50% improvement in survey-behavior match rates if we did not require respondents to differentiate the mobile website from the mobile app for the same retailer.
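The matching rule behind that last point can be sketched in a few lines of code. This is a hypothetical illustration (the function, property names, and data below are ours, not from the study): strict matching requires the respondent to name the exact touchpoint (app vs. site), while the looser rule collapses both into a single property per retailer before comparing self-report against the passive log.

```python
def match_rate(reported, observed, collapse=False):
    """Fraction of self-reported touchpoints confirmed in the passive log.

    collapse=True treats a retailer's mobile app and mobile site as one
    property, mirroring the looser matching rule described above.
    Touchpoints are strings like 'amazon_app' or 'amazon_site'.
    """
    def norm(touchpoint):
        # 'amazon_app' / 'amazon_site' -> 'amazon' when collapsing
        return touchpoint.rsplit("_", 1)[0] if collapse else touchpoint

    reported_n = {norm(t) for t in reported}
    observed_n = {norm(t) for t in observed}
    if not reported_n:
        return 0.0
    return len(reported_n & observed_n) / len(reported_n)

# One illustrative respondent: they recall using the Amazon app,
# but the passive log only shows the Amazon mobile site.
reported = ["amazon_app", "target_site"]
observed = ["amazon_site", "target_site"]

strict = match_rate(reported, observed)                 # 0.5: app vs. site mismatch counts against the match
loose = match_rate(reported, observed, collapse=True)   # 1.0: property-level match succeeds
```

Under the strict rule, a respondent who misremembers the app-vs-site distinction looks like a recall failure even though they correctly recalled the retailer; collapsing the distinction recovers those cases, which is the kind of effect the 35–50% figure above reflects.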
Does time-lapse matter? It depends.
- For certain activities (e.g., making minor purchases in a grocery store, or a TV viewing occasion), capturing in-the-moment feedback from consumers is critical for accuracy.
- In other situations where the process is bigger, involves more research, or is simply more memorable (e.g., buying a car, planning a wedding, or making a planned-for purchase tied to a Black Friday or Cyber Monday deal), you can get away with asking people about it further out from the actual event.
- In this particular project, we actually found no systematic evidence of recall deterioration when we ran the survey immediately after Black Friday/Cyber Monday vs. running it 30 days, 60 days, and 90 days after.
Working with passive mobile behavioral data (or any digital passive data) is challenging, no doubt. Trying to make hay by combining these data with primary research survey sampling, customer databases, transactional data, etc., can be even more challenging. But, like it or not, that’s where Insights is headed. We’ll continue to push the envelope in terms of best practices for navigating these types of engagements as Analytics teams, Insights departments, and Financial Planning and Strategy groups work together more seamlessly to provide senior executives with a “single version of the truth” — one that is more accurate than any previously siloed version.
Chris Neal leads CMB’s Tech Practice. He knows full well that data scientists and programmatic ad buying bots are analyzing his every click on every computing device and is perfectly OK with that as long as they serve up relevant ads. Nothing to hide!