Why Talking to 5 Users Is Not Validation
You talked to 5 people. They all said your idea was interesting. Three of them said “I’d definitely use that.” You feel validated.
You are not.
Five conversations is a starting point, not an endpoint. And “I’d definitely use that” from a stranger in a 30-minute call is one of the least reliable predictors of actual customer behavior.
Here is why most founders confuse customer conversations with validation, and what real validation actually looks like.
The Problem with Small Sample Sizes
5 People Is Not Statistical Significance
With a sample of 5, your confidence interval is so wide it is meaningless. If 4 out of 5 people say they would use your product, your true market interest could be anywhere from 28% to 99% (at 95% confidence). That range tells you nothing useful.
At 15-20 interviews, patterns start to emerge. At 30+, you have enough data to identify meaningful segments and trends. Five is just noise.
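The interval quoted above is an exact (Clopper-Pearson) binomial confidence interval, and you can reproduce it — and see how much it narrows at 20 interviews — with nothing but the Python standard library. A minimal sketch (the bisection solver is an implementation choice, not the only way to compute this):

```python
import math

def tail_ge(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def tail_le(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

def _bisect(f, target: float, increasing: bool, iters: int = 60) -> float:
    """Solve f(p) = target for p in [0, 1], given f's direction of monotonicity."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        p_too_small = f(mid) < target if increasing else f(mid) > target
        if p_too_small:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    lower = 0.0 if k == 0 else _bisect(lambda p: tail_ge(k, n, p), alpha / 2, increasing=True)
    upper = 1.0 if k == n else _bisect(lambda p: tail_le(k, n, p), alpha / 2, increasing=False)
    return lower, upper

low, high = clopper_pearson(4, 5)        # 4 of 5 interviewees said yes
print(f"n=5:  {low:.0%} to {high:.0%}")  # matches the ~28%-99% range above
low20, high20 = clopper_pearson(16, 20)  # same 80% yes-rate, 20 interviews
print(f"n=20: {low20:.0%} to {high20:.0%}")  # noticeably narrower
```

The same 80% positive rate that spans roughly 28-99% at n=5 tightens to roughly 56-94% at n=20 — still wide, but wide enough to act on rather than pure noise.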
Selection Bias Is Invisible at Small Scale
Who were those 5 people? How did you find them? If they came from your network, a warm introduction, or a single community, they represent a slice of your market — not the whole thing.
At 5 interviews, you cannot see selection bias. At 15-20, you start to notice when different segments respond differently. “Enterprise users love this; SMBs do not care” is an insight that requires breadth.
Words vs. Actions: The Say-Do Gap
“I Would Definitely Use That” Means Almost Nothing
Research consistently shows that people’s stated intentions do not predict their actual behavior. In one well-known study, 68% of consumers said they would buy a new product concept. Actual purchase rate when the product launched: 14%.
The gap between what people say they will do and what they actually do is enormous. And it widens when:
- The person wants to be polite (everyone does in an interview)
- The product is hypothetical (no real cost to saying yes)
- The interviewer is enthusiastic (enthusiasm is contagious and misleading)
What Actually Predicts Behavior
Actions predict behavior. Specifically:
- Signing up for a waitlist with a real email (not “sure, put me on the list”)
- Clicking “buy now” on a landing page (even if the product does not exist yet)
- Putting down a deposit or pre-ordering
- Signing a letter of intent (for B2B)
- Spending 30+ minutes engaging with your content, prototype, or demo
These actions involve commitment — time, money, or reputation. They are harder to fake than a verbal “yes.”
What Real Validation Looks Like
Minimum Interview Count: 15-20
Fifteen to twenty interviews is the minimum for meaningful pattern recognition. At this scale:
- You hear the same pain points repeated across unrelated conversations
- You see consistent language patterns (how customers describe the problem)
- You identify segments that respond differently
- You have enough data to distinguish genuine enthusiasm from politeness
Interviews + Experiments = Validation
Interviews tell you about the problem. Experiments tell you about demand. You need both.
A solid validation process combines:
- 15-20 customer interviews to understand the problem, current solutions, and willingness to pay
- 2-3 demand experiments to measure actual behavior — landing page smoke tests, pre-sale campaigns, fake-door tests
- Predefined pass/fail criteria set before you see the results
This combination produces multi-signal evidence: qualitative depth from interviews plus quantitative rigor from experiments. Neither alone is sufficient.
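The "predefined pass/fail criteria" step can be made concrete. A hedged sketch, using illustrative numbers that are not from any real campaign (400 visitors, a preset 3% conversion bar, 25 signups): an exact one-sided binomial test tells you whether the observed smoke-test conversion clears the bar you committed to in advance.

```python
import math

def binom_pvalue_ge(k: int, n: int, p0: float) -> float:
    """One-sided p-value: P(X >= k) if the true conversion rate were p0."""
    return sum(math.comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Pass/fail criteria, fixed BEFORE the experiment runs (illustrative numbers):
BASELINE_RATE = 0.03   # "anything at or below 3% conversion is a fail"
ALPHA = 0.05           # how strong the evidence must be to call it a pass

# Hypothetical results: 400 landing-page visitors, 25 clicked "buy now"
visitors, signups = 400, 25
p_value = binom_pvalue_ge(signups, visitors, BASELINE_RATE)
verdict = "pass" if p_value < ALPHA else "fail"
print(f"{signups}/{visitors} = {signups / visitors:.1%} conversion, "
      f"p = {p_value:.4f} -> {verdict}")
```

The point is not the specific thresholds — it is that `BASELINE_RATE` and `ALPHA` are written down before you see a single visitor, so you cannot rationalize a weak result afterward.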
The Validation Hierarchy
From weakest to strongest signal:
1. “I like this idea” (worthless)
2. “I would use this” (weak)
3. “I would pay for this” (moderate — still just words)
4. Signs up with real email (meaningful)
5. Clicks “buy now” on a smoke test (strong)
6. Pre-orders or puts down a deposit (very strong)
7. Signs a letter of intent (strongest for B2B)
8. Pays for a concierge version (ultimate validation)
If your validation evidence sits in the 1-3 range, you have not validated. You have had some nice conversations.
The Most Common Interview Mistakes
Asking Leading Questions
“Don’t you think it would be great if there were a tool that did X?” Yes. Obviously. You just led the witness.
Instead: “Tell me about the last time you dealt with [problem area].” Open-ended. Let them lead.
Pitching Instead of Listening
If you spend more than 20% of the interview talking, you are pitching, not researching. The purpose of a validation interview is to listen, not to convince.
Interviewing the Wrong People
Friends, family, fellow founders, and people in your Twitter bubble are not your target customers (unless your product is literally for those groups). You need strangers who fit your target customer profile and have no social incentive to be nice to you.
Stopping After the First Positive Signal
“Person #3 said they’d pay $50/month. Validated!” No. One data point is an anecdote. Continue interviewing until patterns repeat across multiple conversations.
What to Do Instead
If you have done 5 interviews and they went well — great. You have a promising signal. Now do this:
- Interview 10-15 more people from different sources and segments
- Run at least one demand experiment — a landing page smoke test is the fastest to set up
- Set pass/fail criteria before you see results — what conversion rate would convince you? What would concern you?
- Score your idea using a structured validation framework instead of gut feeling
Want professional help? Our validation sprint runs 15-20 interviews and 3-5 demand experiments in 2 weeks for $4,500. You get a definitive answer, not a feeling.
Proof Engine Studio — real validation, not polite conversations.