Validation Kill Criteria — When to Pivot, Persist, or Stop


The hardest part of validation is not running the experiments. It is reading the results honestly.

Every founder wants their idea to work. That desire creates a gravitational pull toward interpreting ambiguous data as positive. “The conversion rate was only 2%, but we had a small sample.” “Nobody pre-ordered, but the interviews were positive.” “The numbers are not great, but if we just tweak the messaging…”

Kill criteria exist to counteract this pull. They are predefined thresholds you set before running experiments — clear lines that tell you whether the data says go, pivot, or stop. No reinterpretation. No goalpost-moving. Just honest decision-making.


Why You Need Kill Criteria (Before You See Results)

The Confirmation Bias Problem

You are emotionally invested in your idea. You have spent weeks or months thinking about it. You may have told friends, family, or investors. You want it to work.

This emotional investment creates confirmation bias: the tendency to interpret data in a way that supports your existing belief. Without predefined criteria, you will find a way to make almost any result look positive.

Kill criteria eliminate this problem by forcing you to commit to what “success” looks like before you see the data.

How to Set Kill Criteria

For each validation experiment, define three numbers before you launch:

  1. Pass threshold: The result that means “proceed with confidence”
  2. Conditional threshold: The result that means “investigate further”
  3. Fail threshold: The result that means “this assumption did not hold”

Write them down. Share them with a co-founder, advisor, or friend. Do not change them after the experiment runs.


Kill Criteria by Experiment Type

Landing Page Smoke Tests

| Metric | Pass | Conditional | Fail |
| --- | --- | --- | --- |
| Email signup rate | 5%+ | 3-5% | Below 3% |
| Waitlist conversion | 3%+ | 1-3% | Below 1% |
| “Buy now” clicks | 2%+ | 1-2% | Below 1% |
| Bounce rate | Below 60% | 60-75% | Above 75% |

Context: These benchmarks assume targeted traffic (paid ads or community posts to your specific audience). Untargeted traffic will always convert lower.
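
The thresholds in the table above amount to a simple lookup. A minimal sketch in Python (metric keys and the tuple layout are illustrative, not part of any framework):

```python
# Classify a landing-page metric as pass / conditional / fail.
# Thresholds mirror the table above; for bounce rate, lower is
# better, so the comparison direction is flipped via negation.

THRESHOLDS = {
    # metric: (pass_threshold, fail_threshold, higher_is_better)
    "email_signup_rate": (0.05, 0.03, True),
    "waitlist_conversion": (0.03, 0.01, True),
    "buy_now_clicks": (0.02, 0.01, True),
    "bounce_rate": (0.60, 0.75, False),  # pass below 60%, fail above 75%
}

def classify(metric: str, value: float) -> str:
    pass_at, fail_at, higher_is_better = THRESHOLDS[metric]
    if not higher_is_better:
        value, pass_at, fail_at = -value, -pass_at, -fail_at
    if value >= pass_at:
        return "pass"
    if value < fail_at:
        return "fail"
    return "conditional"
```

Note that a 4.5% signup rate comes back as "conditional", not "pass" — the code cannot round up for you, which is exactly the point of writing the thresholds down before the experiment runs.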

Pre-Sale / Pre-Order Campaigns

| Metric | Pass | Conditional | Fail |
| --- | --- | --- | --- |
| Purchases (B2C) | 10+ in first week | 3-9 in first week | 0-2 in first week |
| LOIs (B2B) | 3+ | 1-2 | 0 |
| Revenue collected | Any | | $0 |

Context: Even a single pre-order is more valuable than 100 survey responses. Money on the table is the strongest signal.

Customer Interviews

| Metric | Pass | Conditional | Fail |
| --- | --- | --- | --- |
| Problem confirmation | 70%+ confirm | 50-70% confirm | Below 50% |
| Pain intensity (1-10) | Average 7+ | Average 5-7 | Average below 5 |
| Current spending on alternatives | Most spend money/time | Some spend | Nobody spends |
| Willingness to pay (stated) | 60%+ say yes | 40-60% say yes | Below 40% |

Context: Interview data is qualitative. Use these as directional indicators, not absolute thresholds. Always pair interview findings with behavioral experiments.

Overall Validation Score

Using our 5-pillar framework:

| Total Score (out of 25) | Decision |
| --- | --- |
| 20-25 | Go — build with confidence |
| 15-19 | Conditional — address weak pillars |
| 10-14 | Pivot — strong elements exist but direction needs to change |
| Below 10 | Stop — evidence does not support this idea |
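
The score-to-decision mapping above is deliberately mechanical — it can be written as a four-branch function, which is a useful reminder that no judgment call is involved at this stage:

```python
def decision(score: int) -> str:
    """Map a 5-pillar validation score (0-25) to a decision,
    using the cutoffs from the table above."""
    if score >= 20:
        return "go"
    if score >= 15:
        return "conditional"
    if score >= 10:
        return "pivot"
    return "stop"
```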

The Three Decisions

Persist: When the Data Says Go

Signals:

  • 2+ experiments hit pass thresholds
  • Consistent problem confirmation across interviews (70%+)
  • At least one money-on-the-table signal (pre-order, LOI, or concierge payment)
  • Market size supports a viable business
  • Validation score 20+/25

What to do: Start building. Focus your MVP on the #1 feature that validation data identified as the primary value driver. Move fast — the data is fresh and the momentum is real.

Pivot: When the Data Points Somewhere Else

Signals:

  • Experiments missed pass thresholds but showed pockets of interest
  • Customers described a different problem than expected
  • A specific segment responded much more positively than others
  • Demand exists but for a different use case, price point, or customer profile

What to do: Identify the strongest signal in your data and follow it. A pivot does not mean starting over — it means adjusting your direction based on what the market told you. Common pivots:

  • Customer pivot: Same solution, different target segment
  • Problem pivot: Same customer, different pain point
  • Solution pivot: Same problem, different approach
  • Channel pivot: Same product, different go-to-market

You can re-validate a pivoted concept in another 2-week sprint.

Stop: When the Data Says No

Signals:

  • All or most experiments failed to hit pass thresholds
  • Nobody could clearly articulate the problem you are solving
  • Zero demand signals across all channels
  • Market is too small, declining, or fully saturated
  • Validation score below 10/25

What to do: Stop. This is not failure — this is the system working. You spent 2 weeks and $4,500 instead of 6 months and $50K+ learning that this idea does not have a market.

The founder from our case study on killing a bad idea told us that the kill decision was the most valuable outcome of the sprint. It freed them to pursue an idea that actually had demand.


How to Read Ambiguous Results

Not every experiment produces a clean pass or fail. Here is how to handle gray areas:

“The numbers are close to the threshold”

If your landing page conversion is 4.5% and your pass threshold is 5%, that is conditional — not a pass. Do not round up. Instead, ask what would improve the number: better messaging? Different audience? Lower price? Run a second experiment testing the most likely improvement.

“Interviews were positive but experiments were weak”

This is one of the most common patterns. People say they want it, but they do not take action. The most likely explanation: the value proposition is not compelling enough at the tested price point. The problem may be real, but your solution or pricing needs adjustment.

“One experiment passed, two failed”

Look at which experiment passed and which failed. If the pre-sale passed but the landing page failed, your value proposition resonates with people who hear the full pitch but not with casual visitors. That is a messaging problem, not a demand problem. If the landing page passed but the pre-sale failed, interest exists but willingness to pay does not. That is more concerning.

“The data is just not enough”

Sometimes sample sizes are too small to draw conclusions. If your landing page got 50 visitors, a 4% conversion rate means 2 people. That is not enough data. Before calling it a fail, ensure you had:

  • 200+ targeted visitors to landing pages
  • 15+ interview conversations
  • 48+ hours of live experiment runtime

If your sample sizes are below these minimums, the data is inconclusive — not negative. Run more traffic.
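
The 50-visitor example can be checked with basic statistics: a 95% confidence interval around 2 signups out of 50 visitors is so wide that the true rate could plausibly sit anywhere from a clear fail to well past the 5% pass threshold. A sketch using the Wilson score interval (the traffic figures are illustrative):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a conversion rate
    (z = 1.96 gives roughly 95% coverage)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 2 signups from 50 visitors: the interval spans roughly 1% to 13%,
# covering fail, conditional, and pass territory all at once.
lo, hi = wilson_interval(2, 50)
```

With 200+ visitors the same observed 4% rate produces a much tighter interval, which is why the minimums above exist.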


The Emotional Side of Kill Decisions

Killing an idea hurts. You believed in it. You told people about it. You imagined the future it would create.

A few things that help:

Reframe the outcome. You did not fail. You spent $4,500 and 2 weeks to avoid wasting $50K and 6 months. That is one of the smartest decisions a founder can make.

Capture what you learned. The market knowledge from a killed idea carries forward. Many founders discover their next (successful) idea from the data in a failed validation.

Move fast. The longer you sit with a kill decision, the more you second-guess it. Start exploring your next idea within a week.


Set Your Kill Criteria Today

If you are running validation experiments, stop and define your pass/fail thresholds right now. Write them down. Share them with someone who will hold you accountable.

Need a structured approach? Our product validation checklist includes clear criteria for all 30 validation items. Or let our team run the experiments and apply the criteria objectively in a 2-week sprint.


Proof Engine Studio — honest validation, honest answers.