Written by Simon Svenstrup, Senior Researcher
2026-05-07
5 briefing mistakes that will ruin your qualitative study

Stop us if you’ve heard this one before: A brand manager sends over a brief. The stated objective is, unhelpfully, to "understand the consumer." The timeline is four weeks, and the fieldwork is supposed to kick off on Monday.
Six weeks later, the stakeholders receive the results and recommendations based on customer interviews. Do they understand the consumer?
Of course not. With such a vague objective, the output is equally imprecise. The study could never answer the actual business question, because the question itself was never the right one.
Insights professionals know that the value of a study is determined well before fieldwork begins, but they’re constantly being squeezed to confirm internal hypotheses, answer nebulous business questions, and provide results faster and with fewer resources.
We’ve worked with thousands of insights teams to design and conduct great studies. Unfortunately, we’ve also seen what happens when corners get cut. Here are the five most common ways qualitative research can go sideways, and what to do about it.
Mistake 1: The business problem is not a research question
"Understand the consumer" is not a brief. Neither is "explore the category" or "look at how people shop the aisle." We may all be curious about these things, but they are not testable.
A research question has a subject, a verb, and a decision attached: "Among urban parents who buy oat milk weekly, what would have to be true for them to switch to our refrigerated SKU?" That sentence tells you who is in the study, what is being learned, and which decision the answer feeds.
The translation from business problem to research question is the researcher's job. It’s normal for incoming briefs to include vague or untestable questions. But stakeholders must agree to a specific research question before running with anything.
Skipping this step is the most common reason a study fails to deliver valuable results.
Mistake 2: The screener contradicts the objective
Screening and recruitment are a fundamental part of qualitative research, but too often screening is treated as a procurement task rather than its own specialization. If the people in the study are not relevant to the research question, no amount of analytical rigor can make up for it.
For example, if the objective is to learn what would make lapsed buyers come back, but the screener requires a category purchase in the last three months, then the study has now effectively excluded the audience the objective named.
Another version: the objective is to test a premium positioning, but the screener has no income filter, so the sample skews toward shoppers who are price-led on the category.
The screener is a research instrument, not a recruitment form. Every criterion needs to earn its place to help meet the study objectives.
Mistake 3: The guide assumes the answer
Consumer research is plagued by biased questions, and biased questions skew your results. Investing time in your interview guide prevents most of them. Here are the most common ways it can go wrong.
| Problem | Biased question | How to fix it |
|---|---|---|
| Leading | "What do you love about the new packaging?" "Don't you think this packaging is more premium?" "How frustrating is the current checkout process?" | "Walk me through your reaction to the new packaging." "How would you describe this packaging to a friend?" "Tell me about the last time you checked out on this site." |
| Double-barrelled | "How does this make you feel, and would you buy it?" | Split it into two questions, separated by follow-up. |
| Primed | A five-minute brand introduction before the open exploration, so the respondent now knows the answer the room wants. | Save the brand reveal for after the open exploration. Start with what the respondent does, not what the brand says. |
| Hypothetical | "Would you buy this if it cost $20?" | Stated intent is a poor predictor of actual behavior. Instead, anchor in past behavior. "Walk me through the last time you bought something in this category. What did you pay?" |
| The "why" trap | "Why do you buy Brand X?" | Forces rationalization, not recall. Replace with behavior. "Walk me through your last purchase. What was on the shelf next to it?" |
Mistake 4: One-size-fits-all methodology
In research, habit can easily be mistaken for best practice. If your team has had success conducting studies with eight focus groups or 20 in-depth interviews, it can be tempting to simply repeat what has worked in the past.
But good research design tailors methodology to suit the audience, the business question, and the constraints.
If the audience is hard to recruit and geographically scattered, focus groups are the wrong instrument. If the topic is sensitive, group dynamics will distort the data. If the decision needs depth on individual journeys, ethnography or one-to-one interviews will be more insightful than any group format. And if the timeline is tight, digital methods like online interviews may be the most effective way to collect insights.
When the method is fixed before the question is clear, the study is already compromised. The brief should leave method open until the question and the audience are both locked.
Mistake 5: No one has defined what "decision-ready" looks like
Before doing any fieldwork, you should already know what a successful output will look like and which decision it will help inform. If you can’t communicate that in one sentence, the brief is not clear enough.
Decision-ready means the format of the output has been agreed before any outreach begins.
For example:
- A two-page memo with three positioning options ranked by emotional fit to help the brand team choose a launch direction.
- A jobs-to-be-done framework with the top three jobs ranked by frequency to help prioritize the next-quarter roadmap.
- A segmentation refresh with three rules to help rebuild audience targeting.
- A one-slide barrier-and-trigger summary for the lapsed-buyer audience, to help design a win-back campaign.
- A pricing-sensitivity map across three price points to help the commercial team land on a recommended retail price.
Key takeaways: 6 questions to pressure-test your brief
Before agreeing to run any qualitative study, here are six questions you can use to sanity check the effectiveness of your brief.
- What business decision does this study inform, and who owns that decision?
- What is the research question (maximum one sentence)?
- Who is the audience the decision is about, and does the screener match that audience?
- Read the discussion guide cold. Does any question contain its own answer?
- Has the method been chosen because the audience and the question demand it, or because it is the team's default?
- What does decision-ready output look like, and what decision will it affect?
Garbage in, garbage out. Good in, good out.
That leaves the AI question, which every insights leader is now being asked to answer.
When analysis that once took weeks takes hours, the pressure to skip the briefing stage and go straight to fieldwork becomes even stronger. But speed just accelerates whatever was already there, good thinking or bad. The researchers who will get the most from AI are the ones who use the time it saves to do the front-end work properly, so fieldwork kicks off with a brief that was worth running in the first place.
Creating an effective and methodologically sound study is the researcher's craft, and it is worth defending. So the next time "understand the consumer" lands in the queue, send it back. Get the question right first, then run.

