7 Demand Validation Experiments for Startups


Most founders skip demand validation entirely. They validate the problem (“yes, this is annoying”), validate the solution (“yes, this feature set sounds useful”), and then assume demand will follow.

It does not.

Validating demand is a distinct step — and arguably the most important one. It answers the question no amount of customer interviews can answer on their own: “Will enough people actually pay for this to build a business?”

This guide gives you 7 demand validation experiments you can run in days, not months. Each one includes the exact setup, the tools you need, the benchmarks to hit, and real examples. Pick the experiment that matches your stage, or stack several together in a 2-week validation sprint for the strongest possible signal.

If you have ever typed “How to test market demand,” “market demand testing,” or the blunt phrase “Validate business idea with ads” into a search bar, these seven experiments are the practical answer.


What Is Demand Validation (and Why It Matters More Than You Think)

Demand Validation vs. Problem Validation vs. Solution Validation

These three get confused constantly, so let us clear it up.

Problem validation asks: “Does this problem exist, and is it painful enough to solve?” You run customer interviews, survey target users, and look for patterns. The output is confidence that a real problem exists.

Solution validation asks: “Does my proposed solution actually solve the problem?” You show prototypes, run usability tests, and get feedback on your approach. The output is confidence that your solution works.

Demand validation asks: “Will enough people pay for this solution to build a viable business?” You run experiments that require people to take real action — sign up, put money down, click a buy button. The output is evidence, not opinions.

Here is the critical difference: someone can acknowledge a problem exists, agree your solution looks good, and still never buy it. Demand validation closes that gap.

If you are still working on the problem and solution layers, start with our startup idea validation framework first. This guide assumes you have a clear problem and a proposed solution, and you need to test whether real market demand exists.

Why 42% of Startups Fail Due to No Market Need

CB Insights found that 42% of failed startups cited “no market need” as the primary reason they died. Not bad tech, not team issues, not running out of money. They built something nobody wanted badly enough to pay for.

The uncomfortable truth: most of those founders did some form of validation. They talked to potential customers. They got positive feedback. They saw head nods and “I would definitely use that.”

What they did not do was test demand with experiments that require real commitment. Saying “I would use that” costs nothing. Entering a credit card number costs something. The gap between those two actions is where startups go to die.

The Demand Evidence Hierarchy (Weakest to Strongest Signals)

Not all demand signals are created equal. Here is the hierarchy from weakest to strongest:

  1. Verbal interest — “That sounds cool” (nearly worthless)
  2. Email signup — Someone gives you their email address (weak signal)
  3. Waitlist join with details — They fill out a multi-field form (moderate signal)
  4. Time commitment — They schedule a demo call or complete a task (moderate-strong signal)
  5. Social commitment — They share or refer others publicly (strong signal)
  6. Financial commitment — small — They put down a refundable deposit (strong signal)
  7. Financial commitment — real — They pre-pay or sign a letter of intent (strongest signal)

The experiments below are ordered roughly by this hierarchy. Experiment 1 captures weak-to-moderate signals. Experiment 3 captures the strongest signal possible: money on the table.

Your goal is to collect evidence at multiple levels. A single experiment gives you a data point. Multiple experiments give you a demand picture.


Experiment 1 — The Landing Page Smoke Test

The smoke test is the workhorse of demand validation. You create a landing page for a product that does not exist yet, drive traffic to it, and measure how many people take a desired action (sign up, join a waitlist, click “buy”).

It is fast, cheap, and gives you quantitative data you can act on.

How to Set Up a Smoke Test Landing Page in 2 Hours

You do not need a designer. You do not need a developer. You need a clear value proposition and a landing page builder.

Here is the page structure that works:

  1. Headline: One sentence that describes the outcome your product delivers. Not what it does — what the customer gets.
  2. Subheadline: Who it is for and why it is different from alternatives.
  3. 3-4 benefit bullets: Specific outcomes, not features. Use numbers when possible (“Save 5 hours per week” beats “Save time”).
  4. Social proof element: Even pre-launch, you can use quotes from customer discovery interviews (with permission) or relevant industry stats.
  5. Primary CTA: Email capture, waitlist signup, or “Get Early Access” button.
  6. Secondary CTA (optional): “Learn more” or “See how it works” for people who are not ready to commit.

Keep it to one page. No navigation menu. No links that take people away from the CTA. Every element on the page exists to drive one action.

Time budget:

  • Write the copy: 45 minutes
  • Build the page: 45 minutes
  • Set up analytics: 15 minutes
  • Test and launch: 15 minutes

Driving Traffic — Paid Ads, Communities, Cold Outreach

A landing page without traffic is a tree falling in an empty forest. You need 200-500 visitors minimum to get statistically meaningful data. Here are three ways to get them:

Paid ads (fastest, most controllable)

  • Google Ads: Target high-intent keywords related to your problem. Budget $200-$500 for a meaningful test. Focus on exact-match and phrase-match keywords.
  • Facebook/Instagram Ads: Best for B2C. Use interest-based targeting. Budget $150-$300.
  • LinkedIn Ads: Best for B2B. More expensive per click ($5-$12), so budget $300-$500 and target tightly by job title and company size.

Community posting (free, slower)

  • Share your landing page in relevant Reddit communities, Slack groups, Discord servers, and Indie Hackers. Do not spam. Add genuine value and mention your project naturally.
  • This traffic is slower but often higher quality because the people are already engaged with the problem space.

Cold outreach (free, targeted)

  • Email 50-100 people in your target segment with a short, personal message. Not a pitch. A question: “I am building X to solve Y. Would you take a look and tell me if this resonates?”
  • Expect 10-20% to click through. That gives you 5-20 high-quality visitors who match your ICP exactly.

What Conversion Rates Actually Mean at This Stage

Conversion rate is the number of visitors who take the desired action divided by total visitors. But context matters enormously at this stage.

A 10% conversion rate from cold outreach visitors is less impressive than a 3% rate from paid ads. Why? Outreach visitors are pre-warmed — you hand-picked them and personally asked them to look. Ad visitors have zero relationship with you. Their conversion is a purer demand signal.

Track these separately. If your analytics tool groups all traffic together, use UTM parameters to segment by source.

Also track bounce rate and time on page. A high bounce rate (above 70%) with low time on page (under 15 seconds) means your headline is not resonating. A low bounce rate with low conversion means people are interested but your CTA is not compelling enough.
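The source-by-source tracking described above can be sketched in a few lines, assuming you can export visit records tagged with a UTM source and a converted flag (the field names here are hypothetical):

```python
from collections import defaultdict

# Hypothetical analytics export: one record per visitor, tagged by UTM source.
visits = [
    {"utm_source": "google_ads", "converted": False},
    {"utm_source": "google_ads", "converted": True},
    {"utm_source": "cold_outreach", "converted": True},
    {"utm_source": "cold_outreach", "converted": True},
    {"utm_source": "reddit", "converted": False},
]

def conversion_by_source(visits):
    """Return {source: (visitors, conversions, rate)} so warm and cold
    traffic are never averaged into one misleading number."""
    counts = defaultdict(lambda: [0, 0])
    for v in visits:
        counts[v["utm_source"]][0] += 1
        counts[v["utm_source"]][1] += v["converted"]
    return {
        src: (total, conv, conv / total)
        for src, (total, conv) in counts.items()
    }

for src, (total, conv, rate) in conversion_by_source(visits).items():
    print(f"{src}: {conv}/{total} = {rate:.0%}")
```

The point of the per-source split is exactly the warning above: a 100% rate from two hand-picked outreach visitors says far less than a 50% rate from cold ad traffic.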

Benchmarks — Aim for 5%+ Email Signup, 2%+ Waitlist

Here are the benchmarks we use at Proof Engine, based on data from hundreds of validation experiments:

Metric                     Weak Signal   Moderate Signal   Strong Signal
Email signup rate          Under 3%      3-5%              Above 5%
Waitlist signup rate       Under 1%      1-2%              Above 2%
“Buy” button click rate    Under 0.5%    0.5-1.5%          Above 1.5%
CTA click-through rate     Under 2%      2-5%              Above 5%

These benchmarks assume cold traffic (paid ads or organic). If your traffic comes from warm sources like personal outreach, adjust expectations upward by 2-3x.
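As a sketch, the benchmark table plus the warm-traffic adjustment can be encoded in a small helper. The thresholds come from the table above; the 2.5x warm multiplier is an assumption picked from within the stated 2-3x range:

```python
# Cold-traffic thresholds from the benchmark table: (moderate_floor, strong_floor).
BENCHMARKS = {
    "email_signup": (0.03, 0.05),
    "waitlist_signup": (0.01, 0.02),
    "buy_click": (0.005, 0.015),
    "cta_click": (0.02, 0.05),
}

def classify_signal(metric, rate, warm_traffic=False):
    """Classify a conversion rate as weak/moderate/strong.
    For warm traffic (personal outreach), expectations are adjusted
    upward; 2.5x is an assumed midpoint of the 2-3x guidance."""
    moderate, strong = BENCHMARKS[metric]
    if warm_traffic:
        moderate, strong = moderate * 2.5, strong * 2.5
    if rate >= strong:
        return "strong"
    if rate >= moderate:
        return "moderate"
    return "weak"

print(classify_signal("email_signup", 0.06))        # cold traffic -> strong
print(classify_signal("email_signup", 0.06, True))  # same rate, warm -> weak
```

The second call illustrates why traffic source matters: an identical 6% signup rate clears the cold-traffic bar but falls short once the warm-traffic adjustment is applied.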

A strong signal does not guarantee product-market fit. But it tells you demand exists at a level worth investigating further. A weak signal does not mean your idea is dead — but it means something is off. Either the positioning, the audience targeting, or the underlying demand.

Tools to Use (Carrd, Unbounce, Google Ads)

Landing page builders:

  • Carrd ($19/year) — Simplest option. Perfect for one-page smoke tests. Limited customization, but fast.
  • Unbounce (starts at $99/month) — More features, A/B testing built in, better analytics. Worth it if you plan to iterate.
  • Framer (free tier available) — Good middle ground. More design flexibility than Carrd, less complex than Unbounce.

Analytics:

  • Google Analytics 4 (free) — Set up conversion events for CTA clicks and form submissions.
  • Hotjar (free tier) — Heatmaps and session recordings show you exactly where visitors lose interest.

Ads:

  • Google Ads — Best for high-intent search traffic. Start with $20/day budget.
  • Meta Ads Manager — Best for awareness and B2C testing. Start with $15/day.

Experiment 2 — The Fake Door Test

What a Fake Door Test Is and When to Use It

A fake door test presents a feature or product as if it exists, measures how many people try to use it, and then reveals that it is not available yet.

Think of it like this: you add a “Premium Plan” button to your existing product, or a “Try Beta” button on your website. When someone clicks it, instead of getting the product, they get a message explaining it is coming soon and asking for their email.

The click is the signal. It tells you people wanted the thing badly enough to take action. That is worth more than a thousand survey responses.

Use fake door tests when:

  • You already have an existing product and want to test demand for a new feature
  • You want to measure demand for a specific price point (show pricing, measure clicks)
  • You want to test multiple concepts simultaneously (which door gets the most clicks?)

Ethical Considerations and How to Handle Them

Fake door tests sit in an ethical gray zone. You are showing people something that does not exist. Here is how to do it responsibly:

  1. Be honest immediately. The moment someone clicks, show a clear message: “This feature is not available yet. We are gauging interest. Want to be first to know when it launches?”
  2. Never collect payment. The line between a fake door test and fraud is money. Do not charge for something that does not exist. (Pre-sales, covered in Experiment 3, are different — the buyer knows they are pre-ordering.)
  3. Do not waste excessive time. If your fake door requires someone to fill out a long form before revealing the feature does not exist, you have crossed a line.
  4. Offer something in return. Give clickers early access, a discount code, or priority placement on the waitlist.

Done right, most people appreciate being part of the process. They feel included, not tricked.

Setup Walkthrough with Examples

Example 1: Testing demand for a new feature in a SaaS product

Add a menu item or button for the new feature inside your existing app. When users click, show a modal:

“We are building [Feature Name] right now. It will [one-sentence benefit]. Want early access? Drop your email and we will let you know the moment it is ready.”

Track: total users who saw the button vs. total who clicked it.

Example 2: Testing demand for a product that does not exist

Create a product listing page with a “Buy Now” or “Start Free Trial” button. When clicked, show:

“Thanks for your interest. [Product Name] is launching soon. Join the waitlist for early access and a launch discount.”

Track: page visitors vs. button clicks vs. email submissions.

Example 3: Testing pricing sensitivity

Show three pricing tiers on your landing page with “Choose Plan” buttons. Track which tier gets the most clicks. This tells you not just if people want the product, but what they expect to pay.

Measuring Intent Signals

The key metric is the click-through rate on the fake door itself. Here are benchmarks:

  • In-app fake door (existing users): 5-15% click rate is a strong signal. These people already use your product and actively wanted the feature.
  • Website fake door (new visitors): 1-3% is strong. These are cold visitors choosing to engage.
  • Email fake door (email list): 3-8% click rate on the CTA link is strong.

Also measure the depth of intent: of those who clicked, how many also submitted their email? A 50%+ email submission rate after clicking means the demand is genuine, not casual curiosity.
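Both fake door metrics reduce to simple ratios. A minimal sketch, using made-up numbers for an in-app fake door:

```python
def fake_door_metrics(impressions, clicks, emails):
    """Compute the two fake door signals: click-through rate on the door
    itself, and depth of intent (share of clickers who left an email)."""
    ctr = clicks / impressions if impressions else 0.0
    depth = emails / clicks if clicks else 0.0
    return ctr, depth

# Example: in-app fake door shown to 1,200 existing users.
ctr, depth = fake_door_metrics(impressions=1200, clicks=96, emails=54)
print(f"Click rate: {ctr:.1%}")  # 8.0% sits in the strong 5-15% in-app band
print(f"Depth: {depth:.1%}")     # 56.2% clears the 50% genuine-demand bar
```

Tracking the two numbers separately is the point: a high click rate with shallow depth suggests curiosity, while high depth on top of a healthy click rate suggests real demand.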


Experiment 3 — Pre-Sale / Letter of Intent Campaign

This is the experiment you should try today. It produces the strongest demand signal possible: someone giving you money (or a binding commitment) for something that does not exist yet.

If Experiment 1 is a thermometer, Experiment 3 is a lie detector.

Crafting a Compelling Pre-Sale Offer

A pre-sale offer needs three elements to work:

  1. A clear description of what the buyer will get. Not vague promises. Specific deliverables, features, or outcomes. “You will get X, Y, and Z when we launch in [timeframe].”

  2. A compelling reason to buy now instead of waiting. This is usually a discount (30-50% off launch price), a lifetime deal, exclusive features, or founder-level access.

  3. A risk reversal. Full refund if you do not launch by a specific date. No questions asked. This removes the buyer’s risk entirely and puts the pressure where it belongs — on you to deliver.

Pre-sale offer template:

Get [Product Name] at 40% off before we launch.

Here is what you get:

  • [Specific deliverable 1]
  • [Specific deliverable 2]
  • [Specific deliverable 3]

Launch price: $X/month. Pre-sale price: $Y/month — locked in for life.

Full refund guaranteed if we do not launch by [date].

Running a 48-Hour Pre-Sale Campaign

Urgency drives action. A 48-hour window forces people to decide instead of bookmarking your page and forgetting about it.

Hour 0-2: Launch

  • Send the offer to your email list, existing contacts, and customer interview participants
  • Post on relevant communities (with genuine context, not spam)
  • Turn on paid ads pointing to the pre-sale page

Hour 2-24: First push

  • Share on social channels
  • Send personal messages to your warmest leads
  • Monitor early results and adjust messaging if needed

Hour 24-36: Midpoint update

  • Share progress (“12 founders have already locked in early access”)
  • Address common objections you have heard

Hour 36-48: Final push

  • “Last chance” messaging to email list
  • Update social posts with final hours countdown
  • Close the offer at exactly 48 hours

What counts as success:

  • 10+ pre-sales from a small audience (under 500 contacts): strong signal
  • 25+ pre-sales from a medium audience (500-2,000 contacts): strong signal
  • 5% or higher conversion from total reach: strong signal
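The three success criteria above can be expressed as one small check. This is a sketch of the listed thresholds, not a formal model; anything that misses all three bands is simply "inconclusive":

```python
def presale_signal(audience_size, presales):
    """Apply the campaign success thresholds: 10+ sales from a small
    audience, 25+ from a medium one, or 5%+ conversion from total reach."""
    rate = presales / audience_size if audience_size else 0.0
    if rate >= 0.05:
        return "strong"
    if audience_size < 500 and presales >= 10:
        return "strong"
    if 500 <= audience_size <= 2000 and presales >= 25:
        return "strong"
    return "inconclusive"

print(presale_signal(400, 12))   # small audience, 12 sales
print(presale_signal(1500, 30))  # medium audience, 30 sales
print(presale_signal(3000, 60))  # large audience, only 2% conversion
```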

LOI Templates for B2B Validation

For B2B products, a Letter of Intent (LOI) is often more appropriate than a pre-sale. An LOI is a non-binding (or sometimes binding) document where a company states their intent to purchase your product at a specific price point.

LOIs are powerful because they involve a procurement decision. Someone at the company had to evaluate your offer and say “yes, we would buy this.” That signal is gold.

Simple LOI template:

Letter of Intent

[Company Name] confirms its intent to purchase [Product Name] at a price of $[X] per [month/year/seat] upon general availability.

This letter confirms serious interest and intent, though it does not constitute a binding purchase agreement.

We understand that [Product Name] is currently in development and expect availability by [date].

Signed: _______________
Name: _______________
Title: _______________
Company: _______________
Date: _______________

Send this to the companies you spoke with during customer discovery. If 3-5 companies sign, you have strong demand validation. If nobody signs, you have learned something equally valuable.

Why Money-on-the-Table Is the Strongest Demand Signal

Every other demand signal can be faked or misinterpreted. People sign up for email lists and never open a single email. People click “Buy Now” buttons out of curiosity. People say “I would definitely pay for that” in interviews and then do not.

But when someone enters their credit card number, they are telling you the truth. When a VP signs an LOI and puts it on company letterhead, they are telling you the truth.

Money removes the gap between stated preference and revealed preference. It is the closest you can get to real demand data without having a real product.

This is why we always recommend running Experiment 3 alongside Experiments 1 or 2. The landing page tells you if people are interested. The pre-sale tells you if they are committed. You need both signals.

For a deeper dive on sequencing these decisions, see our guide on whether to build an MVP or validate first.


Experiment 4 — Concierge MVP / Manual-First Delivery

Delivering the Value Manually Before Automating

A concierge MVP delivers your product’s value using manual processes instead of technology. You do the work by hand, for real customers, and charge real money.

Why does this count as a demand validation experiment? Because it tests two things simultaneously:

  1. Will people pay for this outcome? (demand validation)
  2. Can you actually deliver the value? (solution validation)

Here is how it works in practice. Say you want to build an AI tool that creates personalized meal plans based on dietary restrictions and fitness goals. Instead of building the app, you offer the service manually. Customers fill out a form. You create the meal plan yourself (or with basic tools like spreadsheets). You deliver it via email.

If 20 people pay you $30 each for a manually-created meal plan, you know two things: demand exists, and the value proposition works. Now you have revenue, customers to interview, and confidence to invest in building the automated version.

When Concierge Makes Sense vs. When It Does Not

Concierge works well when:

  • The value you deliver can be replicated manually (at least for a small number of customers)
  • You are testing a service or outcome-based product
  • You want to learn from direct customer interaction before automating
  • Your target customers are accessible and willing to try a “beta” experience

Concierge does not work when:

  • The product only works at scale (social networks, marketplaces)
  • The core value proposition is speed or automation itself (“instant” anything)
  • Manual delivery would be prohibitively expensive per customer
  • You are in a market where “manual” signals low quality to buyers

Examples from Successful Startups (Zappos, Food on the Table)

Zappos — Nick Swinmurn wanted to test if people would buy shoes online (a radical idea in 1999). Instead of building a full e-commerce platform, he went to local shoe stores, photographed the inventory, and posted the photos online. When someone ordered, he bought the shoes at full retail price from the store and shipped them. He lost money on every sale. But he proved demand existed. That “concierge MVP” became a company Amazon acquired for $1.2 billion.

Food on the Table — This startup helped families plan meals based on local grocery deals. The founder, Manuel Rosso, started by serving one customer. He personally went to her local grocery store, checked the sales, planned her meals, and created her shopping list. Then he found customer number two and did it again. He did not write a line of code until he had validated that people valued the outcome enough to pay for it.

Lessons from both: Neither founder built technology first. They tested demand by delivering value manually, proved people would pay, and then invested in building the product. The manual phase was not a compromise — it was the experiment.


Experiment 5 — Crowdfunding Validation

Using Kickstarter/Indiegogo as a Demand Test

Crowdfunding platforms are not just fundraising tools. They are demand validation machines.

A crowdfunding campaign forces you to articulate your value proposition, set a price, and ask people to commit real money. The platform provides the audience, the payment infrastructure, and the social proof mechanics. Your job is to present a compelling case for why your product should exist.

This works best for physical products, hardware, creative projects, and consumer apps with a tangible deliverable. It works less well for B2B SaaS or service businesses.

Minimum Viable Campaign Strategy

You do not need a Hollywood-quality video or a months-long marketing buildup. Here is the minimum viable campaign:

Pre-launch (1-2 weeks):

  • Create a landing page to capture emails of interested people. Aim for 200+ emails before launch.
  • Build a short list of press contacts and bloggers in your space.
  • Prepare a 60-90 second video. A founder talking to camera with a few product renders is enough. Authenticity beats production value.

Campaign page essentials:

  • Clear headline explaining what the product does and who it is for
  • 3-5 images or renders showing the product
  • Pricing tiers: early bird (limited), standard, and a premium tier
  • Timeline with realistic delivery dates
  • A funding goal that represents your minimum viable production run

During campaign (30 days):

  • Email your list on day 1. First 48 hours determine your campaign trajectory.
  • Post updates weekly showing progress.
  • Respond to every backer comment.

Budget: $0-$500 for video production and page design. The platform takes 5-8% of funds raised.

Interpreting Funding Results as Demand Data

Reaching your funding goal is the obvious success signal. But there is more nuance:

Funded in the first 48 hours: Extremely strong demand. Your existing audience alone covered the goal. Expand your target market — it is bigger than you thought.

Funded by day 15-20: Solid demand. You reached beyond your warm network. The product resonates with cold audiences.

Funded in the final days (or barely funded): Weak demand. You likely hit your goal through personal network pressure, not genuine market pull. Proceed with caution.

Not funded: This is not a death sentence for your idea. It might mean your campaign messaging was off, your pricing was wrong, or crowdfunding is not the right channel for your product. But it is a red flag worth investigating before investing further.

Key metrics beyond funding:

  • Backer count vs. dollar amount: 500 backers at $20 each is a different (and often stronger) signal than 10 backers at $1,000 each.
  • Referral rate: Platforms like Kickstarter show how many backers came from social sharing. A high referral rate means your product has built-in word-of-mouth potential.
  • Comment sentiment: Read every comment. Backers who ask detailed questions about features and timelines are showing deep engagement.

Experiment 6 — Competitor Audience Hijacking

If someone already pays for a solution to the problem you solve, they have pre-validated their own demand. Your job is to find those people and test whether your differentiated offer pulls them.

Finding and Surveying Competitor Customers

Start by identifying where your competitors’ customers congregate:

  • Review sites: G2, Capterra, Trustpilot, and the App Store are goldmines. Search for your competitors and read the 2-star and 3-star reviews. These are customers who wanted the solution but found it lacking. They are your targets.
  • Community forums: Search Reddit, Quora, and niche forums for “[competitor name] alternative” or “[competitor name] problems.” The people posting these are actively looking for something better.
  • Social media: Search Twitter/X for mentions of competitor names alongside words like “frustrated,” “switching,” “alternative,” or “hate.”

Once you find them, reach out directly:

“I noticed you mentioned [pain point] with [Competitor]. I am building something that specifically addresses that. Would you take 5 minutes to look at what we are working on and tell me if it hits the mark?”

Response rate on messages like this: 15-25%. These are people who already expressed dissatisfaction publicly. They want to be heard.

Running Ads Targeting Competitor Audiences

Most ad platforms let you target people who have shown interest in competitor brands:

Google Ads:

  • Bid on competitor brand name keywords (“[Competitor] alternative,” “[Competitor] vs”)
  • These searchers are actively evaluating options. Conversion rates on competitor keywords run 2-4x higher than generic keywords.

Facebook/Instagram:

  • Target interests related to competitor brands
  • Create ads that directly address known competitor weaknesses (without naming them — keep it classy)

LinkedIn (B2B):

  • Target employees of companies that use competitor products
  • Target people with skills or job titles that imply competitor tool usage

Budget: $300-$500 for a 7-day test. Send traffic to your smoke test landing page (Experiment 1) and measure conversion rates against the benchmarks above.

The insight here is powerful: if people who already pay for a competing solution sign up for yours, demand is validated at the strongest possible level. They are not just interested in the concept — they are actively dissatisfied with their current option and willing to explore yours.

Mining Review Sites for Unmet Needs

Review mining is underrated as a demand validation tool. Here is a systematic approach:

  1. Collect 50-100 reviews of your top 3 competitors (mix of positive and negative)
  2. Categorize complaints into themes: pricing, missing features, poor UX, bad support, reliability, etc.
  3. Rank themes by frequency. The complaint that appears most often is the unmet need with the most demand behind it.
  4. Test whether your solution addresses the top 2-3 complaints. If it does, you have a positioning angle and a demand signal in one.

This data also feeds directly into your landing page copy. When your headline addresses the exact frustration competitor customers have expressed publicly, conversion rates climb.


Experiment 7 — AI-Powered Search Demand Analysis

Using AI to Analyze Search Volume and Intent

Search data tells you what people are actively looking for right now. If thousands of people search for a solution to the problem you solve every month, demand exists. If nobody searches for it, you either have a positioning problem or a demand problem.

Here is how to run this analysis:

Step 1: Keyword research Use tools like Ahrefs, SEMrush, or the free Google Keyword Planner to find:

  • Monthly search volume for your core problem keywords
  • Search volume for “[competitor] alternative” queries
  • Search volume for “how to [solve the problem your product solves]”

Step 2: Intent classification Not all search volume is equal. Classify keywords by intent:

  • Informational: “What is [problem]” — Awareness exists, but no purchase intent
  • Investigational: “Best [solution type] for [use case]” — Active evaluation
  • Transactional: “[Product type] pricing,” “buy [solution]” — Ready to purchase

A high volume of transactional and investigational queries is a strong demand signal. High informational volume with low transactional volume means people know about the problem but are not actively seeking paid solutions.

Step 3: Trend analysis Use Google Trends to check if search interest is growing, stable, or declining. A growing trend means you are riding a wave. A declining trend means you are entering a contracting market.
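A rough first pass at the Step 2 intent classification can be done with simple phrase matching. The patterns below are illustrative assumptions, not an exhaustive taxonomy, and real keyword sets will need hand review:

```python
import re

# Illustrative patterns per intent bucket, checked in order of strength.
INTENT_PATTERNS = {
    "transactional": [r"\bpricing\b", r"\bbuy\b", r"\bcost\b", r"\bdiscount\b"],
    "investigational": [r"\bbest\b", r"\balternative(s)?\b", r"\bvs\b", r"\breview(s)?\b"],
    "informational": [r"^what is\b", r"^how to\b", r"^why\b"],
}

def classify_intent(keyword):
    kw = keyword.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, kw) for p in patterns):
            return intent
    return "informational"  # default: treat unmatched queries as awareness-level

keywords = ["meal planner pricing", "best meal planner for keto",
            "what is macro tracking", "buy meal plan app"]
for kw in keywords:
    print(kw, "->", classify_intent(kw))
```

Run over a full keyword export, the share of transactional plus investigational queries gives you the ratio the section describes: high informational volume with little transactional volume means awareness without active purchase intent.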

Social Listening at Scale with AI Tools

Search data shows explicit demand (people actively looking). Social listening reveals implicit demand (people talking about the problem without searching for a solution).

Use AI-powered tools to monitor:

  • Reddit and forum discussions: Tools like Gummy Search or SparkToro help you find communities where your target audience talks about the problem you solve.
  • Twitter/X conversations: Track keywords, hashtags, and sentiment around your problem space.
  • Industry publications and newsletters: Monitor mentions of the problem or related trends.

What to look for:

  • Frequency: How often does the problem come up in conversations? Daily mentions across multiple communities = strong demand signal.
  • Emotion: Are people mildly annoyed or genuinely frustrated? Strong negative emotion predicts willingness to pay for a solution.
  • Workarounds: Are people cobbling together manual solutions? Spreadsheet-based workarounds are a classic sign of latent demand for a dedicated tool.

Synthesizing Online Signals into Demand Scores

Raw data is noise. You need a structured way to synthesize it into a demand signal you can act on.

Here is a simple scoring framework:

Signal                         Low (1 pt)         Medium (3 pts)        High (5 pts)
Monthly search volume          Under 500          500-5,000             Above 5,000
Search trend direction         Declining          Stable                Growing
Competitor review complaints   Few, minor         Moderate, recurring   Frequent, intense
Social mention frequency       Rarely discussed   Weekly mentions       Daily mentions
Workaround prevalence          None observed      Some DIY solutions    Widespread hacks
Transactional search %         Under 10%          10-30%                Above 30%

Score interpretation:

  • 6-12 points: Weak demand signal. Dig deeper or reconsider the market.
  • 13-20 points: Moderate demand. Validate further with Experiments 1-3.
  • 21-30 points: Strong demand signal. Move forward with confidence.

This is not meant to be precise. It is meant to force you to evaluate multiple data points instead of cherry-picking the one that confirms your bias. Evidence, not opinions.
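The scoring framework above is mechanical enough to write down directly. A minimal sketch, assuming you rate each of the six signals as low, medium, or high:

```python
# Point values per signal level, per the scoring table.
LEVELS = {"low": 1, "medium": 3, "high": 5}

SIGNALS = ["search_volume", "trend", "review_complaints",
           "social_mentions", "workarounds", "transactional_pct"]

def demand_score(ratings):
    """Sum the six signals (each rated low/medium/high) and map the
    total to the interpretation bands above."""
    assert set(ratings) == set(SIGNALS), "rate all six signals"
    total = sum(LEVELS[r] for r in ratings.values())
    if total >= 21:
        verdict = "strong - move forward with confidence"
    elif total >= 13:
        verdict = "moderate - validate further with Experiments 1-3"
    else:
        verdict = "weak - dig deeper or reconsider the market"
    return total, verdict

score, verdict = demand_score({
    "search_volume": "medium", "trend": "high", "review_complaints": "high",
    "social_mentions": "medium", "workarounds": "high", "transactional_pct": "low",
})
print(score, verdict)
```

Forcing all six ratings before computing a total is the anti-cherry-picking mechanism: you cannot get a verdict without scoring the signals that cut against your idea too.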


How to Choose the Right Experiment for Your Stage

Not every experiment makes sense at every stage. Running a pre-sale campaign before you have a clear value proposition is premature. Running a search analysis when you already have paying customers is redundant.

Here is a decision framework:

Pre-Idea Stage — Start with Experiments 6 and 7

You know the space you want to work in, but you have not committed to a specific idea yet.

Run Experiment 6 (Competitor Audience Hijacking) to find unmet needs in the market. Mine competitor reviews. Identify the gaps. Let the market tell you what to build.

Run Experiment 7 (AI-Powered Search Demand Analysis) to quantify opportunity size. Which problems have the most search volume? Where is demand growing? Where are people desperate enough to cobble together workarounds?

Together, these two experiments take 3-5 days and give you a data-driven foundation for choosing which idea to pursue. Check our validation checklist to make sure you cover every base.

Have an Idea — Start with Experiments 1 and 2

You have a specific product concept and you need to know if people want it.

Run Experiment 1 (Landing Page Smoke Test) to measure broad interest. Drive traffic and see if your value proposition resonates with cold audiences.

Run Experiment 2 (Fake Door Test) to test specific features or price points. If you already have an audience or an existing product, fake doors give you more granular demand data than a landing page alone.

These experiments take 3-7 days and cost $200-$500 in ad spend. They answer the question: “Does this idea resonate enough to investigate further?”

For a deeper process on testing your idea, see our full guide on how to validate a startup idea.

Ready to Commit — Run Experiments 3 and 4

You have validated interest and you are ready to test whether people will put real commitment behind it.

Run Experiment 3 (Pre-Sale / LOI Campaign) to test financial commitment. This is the single highest-signal experiment on this list.

Run Experiment 4 (Concierge MVP) to test whether you can deliver value and retain customers. This doubles as your first revenue stream.

These experiments take 1-2 weeks and produce the strongest possible demand evidence. If people pre-pay and your concierge customers are happy, you have validated demand at the highest level of the evidence hierarchy.


Running Multiple Experiments in a 2-Week Sprint

Single experiments give you data points. Stacked experiments give you a demand picture.

The biggest mistake founders make with validation is running one experiment, seeing ambiguous results, and not knowing what to do next. Did the landing page convert at 3% because demand is moderate, or because the copy was bad? You cannot tell from one experiment.

That is why we stack experiments.

How Proof Engine Stacks Experiments for Maximum Signal

In a typical product validation sprint, we run 3-5 experiments simultaneously across a 2-week window:

Week 1:

  • Launch a smoke test landing page with paid traffic (Experiment 1)
  • Run AI-powered search demand analysis (Experiment 7)
  • Begin competitor audience research (Experiment 6)
  • Set up fake door tests for 2-3 positioning variants (Experiment 2)

Week 2:

  • Analyze Week 1 data and double down on winning positioning
  • Launch a 48-hour pre-sale or LOI campaign (Experiment 3)
  • Run a concierge pilot with 5-10 customers if applicable (Experiment 4)
  • Synthesize all data into a demand scorecard with go/no-go recommendation

By the end of 14 days, you have demand data from multiple independent sources. If all signals point the same direction, you know your answer. If signals conflict, you know exactly where to dig deeper.
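To make the synthesis step concrete, here is a minimal sketch of how multiple experiment signals could be rolled up into a single scorecard. This is an illustrative example only: the experiment names, observed values, pass thresholds, and weights are all hypothetical assumptions, not actual Proof Engine benchmarks.

```python
# Hypothetical demand scorecard: aggregate independent experiment
# signals into a weighted score and a go/no-go recommendation.
# Every number below is an illustrative assumption, not a real benchmark.

SIGNALS = {
    # experiment: (observed value, pass threshold, weight)
    "landing_page_conversion": (0.05, 0.03, 3),   # Experiment 1
    "fake_door_clickthrough":  (0.08, 0.05, 2),   # Experiment 2
    "presale_conversion":      (0.02, 0.01, 5),   # Experiment 3
    "monthly_search_volume":   (3000, 1000, 2),   # Experiment 7
}

def demand_scorecard(signals):
    """Return (score, verdict): score is the weighted share of
    experiments that cleared their pass threshold."""
    total = sum(weight for _, _, weight in signals.values())
    passed = sum(weight for value, threshold, weight in signals.values()
                 if value >= threshold)
    score = passed / total
    if score >= 0.75:
        verdict = "GO"
    elif score >= 0.5:
        verdict = "DIG DEEPER"
    else:
        verdict = "NO-GO"
    return score, verdict

score, verdict = demand_scorecard(SIGNALS)
print(f"Demand score: {score:.0%} -> {verdict}")
```

Weighting the pre-sale signal most heavily mirrors the evidence hierarchy above: financial commitment counts for more than clicks. Swap in your own thresholds per experiment before relying on the verdict.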

Why Parallel Experiments Beat Sequential Ones

Sequential experiments have a compounding delay problem. If each experiment takes a week and you need three to feel confident, that is three weeks minimum. And if the first experiment produces ambiguous results, you spend another week re-running it before moving to experiment two. Four weeks. Five weeks. Six weeks. Suddenly, validation has taken as long as building would have.

Parallel experiments compress the timeline and increase signal quality:

  • Cross-validation: When your landing page converts at 5% AND your search analysis shows 3,000 monthly searches AND competitor reviews reveal a glaring unmet need — that is three independent data points confirming the same thing. Your confidence is far higher than with any single experiment.
  • Faster iteration: If your landing page converts poorly but your pre-sale campaign succeeds, you know the demand exists but your positioning needs work. You learn this in days instead of weeks.
  • Reduced confirmation bias: It is easy to explain away one disappointing result. It is much harder to explain away three.

Traditional validation studios run 1-2 experiments over several weeks. Our AI-native approach lets us set up, run, and analyze 3-5 experiments in a single 2-week sprint. AI accelerates the analysis — processing competitor reviews, scoring search data, and synthesizing signals across experiments — while human judgment drives the strategy and interpretation.

The result: a multi-signal demand picture far more reliable than any single test.

We use the same structure across Fintech MVP development, AI sales assistant MVP concepts, and B2B workflow automation validation work. The channels and messaging change, but the evidence standard does not.


Let Us Run These Experiments for You

You now have the playbook. You can run every experiment in this guide yourself.

But if you want 3-5 demand experiments run simultaneously, analyzed by a team that has done this hundreds of times, and delivered as a clear go/no-go recommendation in 14 days — that is exactly what our product validation sprint delivers.

$4,500 flat rate. 2-week sprint. Real demand signals, not opinions.

You get:

  • 3-5 demand experiments designed for your specific market
  • A demand scorecard with data from every experiment
  • A clear go/no-go recommendation backed by evidence
  • All landing pages, ads, and campaign assets — yours to keep

Want the condensed version? This playbook distills well into a two-page internal reference with setup checklists and benchmark tables.


Related reading: