Methodology matters. Perhaps this much is obvious, but as pressure mounts on market researchers to deliver better insights yesterday, squeezing the time available for planning, it’s worth re-emphasizing the impact of research approach on outcomes. Over the past year, I’ve come across numerous reminders of this while following this election cycle and the excellent coverage over at Nate Silver’s FiveThirtyEight.com. I’m not particularly politically engaged, and as the long, painful campaign has worn on I’ve become even less so; but I keep coming back to FiveThirtyEight, not for the politics, but because so much of the commentary is relevant to market research. I rarely visit the site (particularly the ‘chats’) without coming across an idea that inspires me or makes me more thoughtful about the research I’m doing day-to-day, and that idea generally centers on methodology. Here are a few examples:
In my day-to-day work, I would guesstimate that 90-95% of the studies I see are intended to capture a population more specific than the general population, making the screening criteria used to identify members of that population absolutely vital. Typically, these criteria consist of a series of questions (e.g., are you a decision maker, do you meet certain age and income qualifications, have you purchased something in category X before, would you consider using brand Y), and only respondents with the right pattern of responses get through.
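To make the idea concrete, the pattern-of-responses logic can be sketched as a simple qualification function. The specific questions and cutoffs below are hypothetical, not from any actual study:

```python
# A minimal sketch of a typical hard screener: a respondent either passes
# every criterion or is screened out entirely. All fields and cutoffs here
# are made-up examples.

def qualifies(resp: dict) -> bool:
    """Return True only if a respondent passes every screening criterion."""
    is_decision_maker = resp.get("decision_maker", False)
    age_ok = 25 <= resp.get("age", 0) <= 54          # hypothetical age band
    income_ok = resp.get("income", 0) >= 50_000      # hypothetical cutoff
    category_buyer = resp.get("bought_category_x", False)
    return is_decision_maker and age_ok and income_ok and category_buyer

respondents = [
    {"decision_maker": True, "age": 40, "income": 80_000, "bought_category_x": True},
    {"decision_maker": True, "age": 70, "income": 80_000, "bought_category_x": True},
]
qualified = [r for r in respondents if qualifies(r)]
print(len(qualified))  # only the first respondent passes
```

The all-or-nothing `and` chain is exactly what makes this style of screener brittle: one miss on any criterion and the respondent contributes nothing.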
But what if there were a better way to do this? Reading the above on FiveThirtyEight got me thinking about the kinds of studies in which using a probabilistic screener (and weighting the data accordingly) might actually be better than what we do now. These would be studies where the following is true:
“Yeah right,” you might say, “like we ever have robust enough data available on the exact behavior we’re interested in.” Well, this might be a perfect opportunity to incorporate the seemingly ever-increasing amounts of passive customer data into our surveys. It’s inspiring, at any rate, to think about how a more nuanced screener might make our research more predictive.
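Here is one way a probabilistic screener might look in miniature. This is my own hypothetical sketch, not a method from the article: each respondent gets a modeled probability of belonging to the target population (here just made-up numbers; in practice it might come from passive data), is screened in with that probability, and screened-in responses are weighted by the inverse of that probability, in the style of a Horvitz-Thompson estimator:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical respondents: p_target is each person's modeled probability
# of belonging to the population of interest; rating is a survey response.
respondents = [
    {"id": 1, "p_target": 0.9, "rating": 8},
    {"id": 2, "p_target": 0.5, "rating": 6},
    {"id": 3, "p_target": 0.2, "rating": 4},
    {"id": 4, "p_target": 0.7, "rating": 7},
]

# Screen each respondent in with probability p_target rather than a hard cut.
screened_in = [r for r in respondents if random.random() < r["p_target"]]

# Weight each screened-in response by 1 / p_target so that low-probability
# members who do make it in count for more, offsetting their rarity.
total_weight = sum(1 / r["p_target"] for r in screened_in)
weighted_mean = sum(r["rating"] / r["p_target"] for r in screened_in) / total_weight
print(round(weighted_mean, 2))
```

The key design choice is that nobody is discarded on a single missed criterion; uncertainty about membership is carried into the weights instead.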
Social Desirability Bias & More Creative Questioning
Social desirability is very much a market-research-101 topic, but that doesn’t mean it has been definitively solved, or that the same solution works in every case. The issue comes up a lot, not only in the context of respondent attitudes, but even more commonly when asking about demographics like income or age. There are lots of available solutions, some of which involve manipulating the data to ‘normalize’ it in some way, and some of which involve creative questioning like the example shown above. I think the right takeaways are:
Plus, brainstorming alternatives is fun! For example:
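One classic piece of creative questioning for sensitive topics (my own illustration, not necessarily the technique discussed above) is the randomized response method: each respondent privately flips a fair coin; heads, they answer “yes” regardless of the truth; tails, they answer truthfully. No individual answer reveals anything, but the population rate can be backed out from the aggregate:

```python
import random

random.seed(0)

# Randomized response sketch with hypothetical parameters.
# P(yes) = 0.5 * 1 + 0.5 * p_true, so p_true = 2 * P(yes) - 1.

TRUE_RATE = 0.30  # assumed true prevalence of the sensitive behavior
N = 100_000       # simulated sample size

def respond(truth: bool) -> bool:
    """Heads (prob 0.5): say yes no matter what. Tails: tell the truth."""
    return True if random.random() < 0.5 else truth

answers = [respond(random.random() < TRUE_RATE) for _ in range(N)]
p_yes = sum(answers) / N
estimate = 2 * p_yes - 1
print(round(estimate, 2))  # close to 0.30
```

The trade-off is variance: half the answers carry no information, so you need a larger sample for the same precision, which is the price of protecting the individual respondent.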
The Vital Importance of Context
At the heart of FiveThirtyEight’s commentary here is a reminder of the vital importance of context. It’s all very well to push respondents through a series of scales and return means or top box frequencies; but depending on the situation, that may tell only a small part of the story. What does an average rating of ‘6.5’ really tell you? In the end, without proper context, this kind of result has very little inherent meaning.
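A hypothetical illustration of the point: the same 6.5 average can be a triumph or a disappointment depending on the norm it is compared against. Benchmarking against historical category scores (made-up numbers below) is one simple way to supply that context:

```python
from bisect import bisect_left

# Made-up norms: average ratings from past studies in two categories.
norms = {
    "category_a": sorted([5.1, 5.4, 5.8, 6.0, 6.2, 6.3, 6.4]),
    "category_b": sorted([6.6, 6.8, 7.0, 7.2, 7.5, 7.8, 8.1]),
}

def percentile(score: float, benchmark: list) -> float:
    """Share of benchmark scores that fall below `score`."""
    return bisect_left(benchmark, score) / len(benchmark)

score = 6.5
for cat, bench in norms.items():
    print(cat, round(percentile(score, bench), 2))
# category_a: 1.0 -- beats every historical score in that category
# category_b: 0.0 -- below every historical score in that category
```

The identical number lands at opposite ends of the two distributions, which is precisely why a mean reported without a benchmark tells only a small part of the story.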
So how do we establish context? Some options (all of which rely on prior planning) include:
Wrapping this up, there are two takeaways that I’d like to leave you with: