It’s 3am on a Sunday morning, and I just wrote a 20-question survey that I want to share with you to better understand the agony and the ecstasy of one piece of the insight value chain – qualitative research.
I believe in the value of talking to real people about the products and services I help my clients develop and sell. I believe in being a member of a reality-based community. And I believe that evidence and empiricism often lead not only to incremental improvement, but also to great leaps of imagination and invention.
I believe in it so strongly that I’m building a business around it, and working only with clients who are willing to give evidence a shot.
Still, I feel saddled with a tradition of “research” for gathering this evidence. As a former partner in a research-based brand consultancy, I can tell you that the standard operating procedures of most researchers are built on tradition, on a desire to be taken seriously as something professional and almost science-y, balanced with a need to be flexible and creative and responsive.
But tradition is not the only reason that the standard operating procedure has become standard. The biggest reason we cling to the 8-person focus group, conducted in a grey room in front of a grey mirror with grey people, is that we are dependent on a research supply chain that is broken. And there are four reasons, all mutually dependent and a bit circular, why this chain is broken.
How Sample is Collected is Broken.
Random sampling of people walking past you on the street corner, mall intercepts, fliers on the bulletin board at the community center or grocery store, direct mail solicitation, robo-calling, or registering online – these are all legitimate, time-tested means of collecting sample.
Most recruiters focus on the most general of general population folks when they are standing in malls, or calling people at dinner time. There’s never any trouble finding stay-at-home moms, retirees, students or the marginally employed (which are, admittedly, a plentiful piece of the economy these days). But start getting specific – not just Walmart moms, but Walmart moms who own an iPhone and a Prius – and things get tricky.
Or start looking for people who aren’t at the mall at 3pm on a Wednesday: office managers, shipping managers, IT directors, CEOs, lawyers or accountants or doctors. Or for people who aren’t at home in front of the TV at 10am or 10pm – college students, retail workers, bartenders, people with social lives. It starts getting tougher, and then you have only two choices – pray for a list, or staff up to start dialing the phone book in search of the people you need to talk to.
Maybe you’re lucky, and there is a customer or prospect database. Many clients aren’t direct-to-consumer sellers, so they often don’t have this data. Yet even when they do, you begin to rethink the definition of luck: dummy phone numbers, misspelled email addresses, out-of-date addresses, incorrectly entered data. I’ve had to quote clients a 1000:1 ratio on some lists for the number of calls we’ll have to make to find a single respondent. Because even if the data is good, you don’t know the best time to reach them, whether they’ll be interested in participating, whether they’ll be available on your schedule, or whether they’ll even qualify once they’re put through the screener.
In other words, sample is tough to get, and even tougher to ensure is of quality.
Screeners and Profiles are Broken.
If you’ve ever drafted a screener you’ll know that they seem to go in one of two directions: too specific or too simplistic.
The too-specific screener may actually get you exactly the person you think you want to speak to (short of having an accurate, up-to-date, well-tended customer or prospect list). But it will strike fear and panic into the heart of any researcher or planner who has to sit patiently as the days go by without a single recruit – because of the sample problems I outlined above.
The too-specific screener also begins to set up every good-enough respondent as a scapegoat, branded with the “Not the Target” mark of the client or agency who is looking more for validation than for a learning experience.
On the other hand, the too-simplistic screener is a blunt instrument: “do you buy this product, and are you available on Monday?” may not give you enough information about the potential respondent to know whether they’ll be right for the kind of research you want to do.
Regardless of whether the screener is good or bad, the biggest problem with it is that it is a script from which we do not allow or trust our research partners to deviate. There’s no improvisation in recruiting – we quite literally say “TERMINATE” on screener questions where the wrong answer leads to disqualification. Not only is that a sudden stop to a phone call from a stranger, one that ends in what is unmistakably a rejection, but it’s also a phone call the recruiter doesn’t get paid for. Recruiters get paid per respondent recruited, not by time spent calling people or by effort. Therefore, recruiters want the most relaxed criteria they can get – ensuring they have to make the fewest number of calls to “fill the groups” – and therefore to get paid.
Project Management is Broken.
Because of all the difficulty in getting a large enough sample to recruit from – and in defining a subset of that sample that is specific enough to get you who you want, but not so specific that you can’t get anyone – researchers, strategists, and people like me are constrained in multiple ways.
- It’s batch-and-queue, baby. Screeners must be drafted and approved by clients. Then they have to be handed off to the recruiter, who will inevitably spend a day asking more questions to clarify the screener so they can preemptively reduce errors when the phone bank gets to work. Recruiters will almost uniformly tell you that all recruits take 2 weeks, some longer. This isn’t strictly speaking true, because they will almost always be able to “fill the group” – whether you give them a shitty list of 2000 names and 4 days, or no list at all and 2 weeks. They will generally recruit up to the last minute – and often don’t put the screener in field right away if they feel they have enough time to spare.
- It’s a black box. Once the screener has been “programmed” and the call center activated, there’s no transparency into progress. Once a day, beginning on whatever day they get the first confirmed respondent, most recruiters will begin to share an Excel spreadsheet with respondent names and their answers to the screener’s questions. If there are people being ‘terminated’, you don’t see them. Only if lots of people start to disqualify on one particular question will the recruiter call their client to talk about those disqualifications and ask to “relax the recruit” in order to make numbers. As a buyer of field services, I can’t see how those criteria are affecting the recruit, and so I can’t take action to help my recruiter course-correct. In the meantime, I’ll have nervous clients or colleagues wanting updates on the recruit, and I simply have to wait until the recruiter calls me back or emails me an update that is, almost by definition, an incomplete picture of the situation. But it’s no wonder that recruiters only give you daily updates – they’re working the phones too hard to share progress or check in with ideas about how to improve the situation.
Qualitative Research Design is Broken.
This is a topic I can go on and on about. But for the purposes of this post – study design is broken because it is created with the realities of the research supply chain in mind, and this can trump the quality of the learning experience. Rather than thinking through the right kinds of people to meet and learn from, we start thinking about cities: Which town will have early tech adopters aplenty? Where do people tend to shop at big box stores? Which cities over-index on soap opera viewership? The question of location is dictated by two considerations: the need to do face-to-face research that lots of people can observe, and the need to find a recruiting partner who has a sample database with our kinds of people in it. At the same time, we’re thinking about segments that are distinct enough from one another, but still all within the reach of our clients, and from whom we’ll learn something useful. The segments have also got to be distinctive enough that they look like different groups when you’re sitting on the other side of the glass, but not so unique that they’re a needle in a haystack.
In other words, we’re trying to guess what our recruiting partners can get us that is also relevant to our clients. And we default to using the recruiter’s crappy facilities because we want to keep them onside by paying them a rental fee plus the head-count fee, and besides, in for a penny, in for a pound. Plus it’s easier to corral our clients into a dark room with bowls of Chex Mix and a mini-fridge full of sodas than it is to have them tag along for every site visit, ride-along, and in-home interview.
How We Treat “Respondents” is Broken.
Even the way we treat respondents is batch and queue. It begins with a phone survey, followed by an email with instructions for getting to and preparing for a group. Respondents show up 15 minutes before the group begins, fill out more forms, and sit in a waiting room; names are called, and 1 or 2 people are left behind; the rest are directed into a room where they’re told to turn off their cell phones, sit in a chair, and put on a name tag. A moderator comes into the room and asks questions – depending on how good she is, it’ll either seem like an interrogation or a conversation. Sometimes it’ll be fun. We’ll provide sodas and food, but not schedule a break in the 2 hours for a trip to the bathroom. The moderator will cut people off if they talk too much, and then when the time is up, that’ll be it.
Respondents have the vague sense of being watched, but mostly forget about what sits behind the giant mirror behind the moderator, unless of course there’s a tap at the glass, or a note is passed in, or she asks someone to speak up because the microphones aren’t picking up soft or low voices. Then they’re ushered back into the waiting room, asked to sign for their ‘incentive’, and head home, not sure whether this was helpful to anyone, or what will come of it. And this is how to “get paid for your opinions.”
We spend about $100 a head just to find these people. And then we haggle with them about how much it’s worth to them to spend 90 minutes or 2 hours with us discussing topics ranging from the nearly irrelevant to the deeply personal and private. We assess their value – the 22-year-old part-time-employed mom is worth $75 for 2 hours, the 35-year-old IT director is worth $150. This isn’t about their value to our clients, but about their value in the world. We reason that the mom would be lucky to make $35 an hour, whereas the IT director might actually need to be paid a bit more to show up if his title is senior enough. We don’t stop to think that the mom spends thousands of dollars every year at her local grocery store, while the IT director may not actually be the one who signs off on the purchase of a new CRM system. Her value to a consumer packaged goods brand is definite; his is tenuous. But we don’t think about them as valued customers or prospects; we think about them as short-term employees. We should be engaging them as collaborators that our clients don’t otherwise have (or think they have, or want to have) direct access to.
Trust is Broken.
Why are we doing things this way? It’s a problem of trust. Clients don’t trust agencies to do ‘unbiased’ research on their own ideas. Agencies don’t trust researchers not to kill a good idea. Researchers don’t trust recruiters to get them the ‘right’ respondents. And we don’t trust respondents to be smart, creative, collaborative, or frankly, even experts in their own lives. So we over-process the process, we constantly question and negotiate the investment of money and time, and rather than making a true best effort, we do what we think is possible instead of what is best.
So why do we keep doing it?
Look, we – planners, strategists, designers, makers, brands – still commission qualitative research; we still write screeners, use recruiters, hire moderators, sit in back rooms and listen as questions on discussion guides are asked.
After all, we still need evidence. We need that gut check, that reality check. We need to learn *something* or risk making unfounded decisions – decisions based purely on personal taste or ego. While we all seem to have reached consensus on the notion that you can’t ask consumers what they will want in the future, we also seem to agree that the person who comes up with an idea is inherently biased in its favor, even when it’s shit. So we hope for the wisdom of the crowd in adjudicating the value of an idea.
Our path to this adjudication strips crowd wisdom of most of its value. It’s too stressful, too opaque, too costly, too time consuming and inefficient. So by the time everyone is huddled around laptops and gobbling down fistfuls of M&Ms on some Tuesday evening in Cleveland, it’s no wonder they’re not really listening. They’ve spent all their energy worrying about the recruit, fretting about the screeners and the guides, trying to keep costs down, and herding people onto planes to come watch dull people in dull rooms talk about dull things. The people we’ve recruited to participate are kept in the dark about our intentions, treated as ‘respondents’ rather than as partners or collaborators. And the people who recruited them are hidebound by their own business model, with little incentive or opportunity to collaborate, and a strong incentive to appear regimented when they’re really just tapdancing as fast as they can.
I – and my team – want to change that. We want to find out where the value chain is broken for you, and where there is still value in gathering evidence for insight and inspiration. We want to understand it from the perspective of time, money, satisfaction and utility. So that’s why I wrote a survey. I’d love it if you could fill it out or share it with others. When we get the results, we’ll share them with this community of people who do and buy qualitative research.
And we’ll keep thinking about the insight value chain, especially as it regards innovation and product/service design, and I’ll keep writing about stuff that pisses me off. You can be sure of that.