The Mom Test: ask questions that reveal the truth
Most user interviews fail before they start because we ask the wrong questions. The Mom Test by Rob Fitzpatrick offers three simple rules to get honest feedback from anyone, even your mom.

Why most user interviews produce useless data
Every founder and product manager has experienced it. You describe your idea to someone, they nod enthusiastically, tell you it sounds amazing, and say they would definitely use it. You walk away feeling validated. Then you build it and nobody signs up.
This happens because humans are polite. When someone shares an idea they are clearly excited about, our natural instinct is to be supportive rather than honest. Your mom would never tell you your startup idea is terrible, and most other people will not either. The problem is not that people are liars. The problem is that we ask questions designed to produce compliments instead of truth.
Rob Fitzpatrick identified this pattern in his book The Mom Test and offered a deceptively simple framework for fixing it. The core insight is that the quality of your user research depends almost entirely on the questions you ask, not who you ask them to. Even your mom can give you useful data if you ask the right questions.
The three rules of the mom test
Fitzpatrick distills good customer conversation technique into three rules. First, talk about their life instead of your idea. Second, ask about specifics in the past instead of generics or opinions about the future. Third, talk less and listen more.
The first rule is the most counterintuitive. Founders want to pitch. They want to describe their solution and gauge reactions. But reactions to ideas are almost worthless because people cannot accurately predict their own future behavior. Instead of asking whether someone would use your product, ask about the last time they experienced the problem you are trying to solve. Their actual past behavior tells you far more than their hypothetical future behavior ever could.
The second rule protects you from another common trap. When people speak in generalities, they are essentially making things up on the spot. Statements like "I usually try to eat healthy" or "I would probably check that app every day" are unreliable. But when you ask someone to walk you through what they did yesterday morning, or how they handled a specific situation last week, you get concrete data you can actually build on. The third rule follows from the first two: every minute you spend talking is a minute you are not learning, so let the other person carry the conversation.
Bad questions versus good questions
The difference between a useful interview and a waste of time often comes down to a few word choices. Consider how dramatically the information changes when you reframe common questions.
Bad: "Would you use a product that does X?" This invites a polite yes.
Good: "How do you currently handle X?" This reveals their actual workflow, existing tools, and whether the problem even matters enough for them to have a process.
Bad: "Do you think this is a good idea?" This is asking for a compliment.
Good: "Tell me about the last time you ran into this problem. What did you do?" This surfaces real behavior and real pain points.
Another classic mistake is asking "Would you pay for this?" Almost everyone says yes to avoid awkwardness. A better approach is to ask what they currently spend on solving this problem, or what they have tried in the past that did not work. If someone has never spent money or meaningful time trying to solve the problem, they are unlikely to pay for your solution no matter what they tell you in an interview.
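These red-flag phrasings are regular enough that you can screen a draft script for them before you ever sit down with a participant. As a toy illustration only (not anything from the book, and not Intervio's actual behavior), here is a sketch that flags hypothetical-future questions and suggests past-behavior reframes; the phrase list and suggested rewrites are my own assumptions:

```python
# Toy heuristic: flag draft interview questions that ask about
# hypothetical future behavior or fish for compliments, and suggest
# a reframe grounded in specific past behavior instead.
RED_FLAGS = {
    "would you use": "Ask instead: 'How do you currently handle this?'",
    "would you pay": "Ask instead: 'What do you spend on this today?'",
    "do you think": "Ask instead: 'Tell me about the last time this happened.'",
}

def review_question(question: str) -> list[str]:
    """Return reframing suggestions for any red-flag phrases found."""
    q = question.lower()
    return [hint for phrase, hint in RED_FLAGS.items() if phrase in q]

if __name__ == "__main__":
    draft = [
        "Would you use a product that automates your reports?",
        "Walk me through the last report you built by hand.",
    ]
    for question in draft:
        hints = review_question(question)
        print(question, "->", hints or ["grounded in past behavior"])
```

A check like this obviously cannot judge tone or context, but running it over a script is a cheap way to catch yourself before a leading question makes it into a live conversation.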
The deflection problem and how to spot it
Fitzpatrick warns about a particularly dangerous type of bad data: compliments disguised as validation. When someone says "That is really cool" or "I could definitely see myself using that," inexperienced interviewers record this as positive signal. Experienced ones recognize it as a deflection.
Real buying signals look different. They look like someone leaning forward and asking when it will be available. They look like someone offering to introduce you to others who have the same problem. They look like someone pulling out their laptop to show you the terrible spreadsheet they currently use as a workaround. These are commitments and advancement signals, and they matter far more than verbal enthusiasm.
The best interviewers also pay attention to what is not said. If you ask about a problem and the person gives a short, vague answer before changing the subject, that tells you the problem is not keeping them up at night. Contrast this with someone who launches into a five-minute story about how frustrating their current process is without you even having to probe further.
Running better interviews at scale
One practical challenge with the mom test approach is that it requires skilled interviewing. You need to resist the urge to pitch, catch yourself when you start asking leading questions, and stay genuinely curious instead of seeking confirmation. This is difficult even for experienced researchers, and it gets harder when you need to conduct dozens or hundreds of interviews.
This is where AI-assisted interviewing tools like Intervio are changing the game. An AI interviewer follows a structured question framework consistently across every single conversation. It does not get excited and accidentally pitch the product. It does not ask leading questions because it got nervous about an awkward silence. It follows the research protocol every time, asking about past behavior, probing for specifics, and letting the participant do most of the talking.
Intervio lets you design interview scripts built around mom test principles and then deploy them as conversations that participants can complete on their own time. The AI conducts the interview, captures the full transcript, and then synthesizes patterns across all your sessions. This means you can apply rigorous customer discovery methodology to fifty conversations with the same consistency you would bring to five.
Extracting signal from noise
Even with perfect questions, raw interview data can be overwhelming. Fitzpatrick recommends taking notes focused on specific facts, not interpretations. Write down that the person spends three hours every Monday manually updating their reports, not that they seemed frustrated with reporting.
When you are processing interviews at scale, this discipline becomes critical. Tools that automatically transcribe and analyze interviews can help separate factual statements from opinions and identify patterns across conversations. Intervio generates summaries that pull out concrete behavioral data, the kind of specific, past-tense facts that the mom test prioritizes, making it easier to spot genuine patterns without reading through hours of transcripts.
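The facts-versus-interpretations discipline can itself be encoded as a crude first-pass filter. The sketch below is purely illustrative (the marker words are my own assumptions, not Fitzpatrick's list or Intervio's implementation): it flags notes that contain hedging, interpretive language so the concrete behavioral statements stand out:

```python
# Toy filter: separate concrete, factual interview notes from
# interpretive ones, per the "facts, not interpretations" guideline.
# The marker words are illustrative assumptions, not an exhaustive list.
INTERPRETIVE_MARKERS = ("seemed", "probably", "i think", "felt like")

def is_factual(note: str) -> bool:
    """Treat a note as factual if it avoids interpretive hedging."""
    n = note.lower()
    return not any(marker in n for marker in INTERPRETIVE_MARKERS)

notes = [
    "Spends three hours every Monday manually updating reports",
    "Seemed frustrated with reporting",
]
facts = [n for n in notes if is_factual(n)]
print(facts)  # keeps only the concrete behavioral statement
```

A human (or a more capable analysis pipeline) still has to judge whether the surviving facts matter, but even this blunt split makes it obvious how much of a raw transcript is opinion rather than observed behavior.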
The ultimate goal of customer discovery is not to hear what you want to hear. It is to find the truth quickly enough that you can build something people actually need. The mom test gives you the conversational framework. Combining it with structured, scalable interview processes ensures you actually follow through on that framework when it matters most.
Putting it into practice today
Start your next round of user research with one simple commitment: do not describe your idea for the first ten minutes of the conversation. Instead, ask about the person's life, their current workflows, and the last time they dealt with the problem space you are exploring. You will be surprised how much more you learn when you stop seeking validation and start seeking truth.
The mom test is not a complicated methodology. It is a mindset shift. Once you internalize that every conversation should focus on their past behavior rather than your future product, the quality of your customer insights will improve dramatically. Whether you conduct five interviews yourself or run fifty through an AI-powered tool, the principles remain the same: ask about their life, demand specifics, and listen more than you talk.
Try it yourself
Start running AI-powered user interviews today with Intervio.