The “Minimum Viable Please” Episode: What Founders Still Get Wrong About MVPs

During a recent startup office hours session, a founder proudly described their “MVP.”

The slide included onboarding, billing, analytics, referrals, and a feature roadmap that looked suspiciously like a mature SaaS product.

The problem?

No one had paid. No one had switched. And the team had spent months building something designed more to look impressive than to test a real assumption.

That tension—between what founders think an MVP is and what it should be—is exactly what drove a recent episode of the Zero to Traction podcast.

In this episode, hosts Josh David Miller (JDM) and Cameron Law, joined by their AI co-host Cass, run a game called “Minimum Viable Please.”

The format is simple:
Cass presents a startup stage, a learning goal, and a proposed MVP. Josh and Cameron then evaluate it by asking three questions:

  1. Is it minimum?

  2. Is it actually viable for the learning goal?

  3. Does it generate real learning, or is it just over-engineered theater?

The exercise surfaces a pattern many founders fall into: building products when they should be designing experiments.

MVPs Are Not Small Products—They’re Small Experiments

Early in the discussion, Josh reminds listeners that the purpose of an MVP is not to ship software. It’s to test the next critical assumption.

He references the classic four-step startup progression:

  1. Customer Discovery – confirm the problem exists

  2. Customer Validation – confirm the solution works

  3. Customer Creation – create repeatable demand

  4. Company Building – scale the business

Many startups jump from step one straight to step four.

Instead of validating the solution, they build a product and hope the market figures out the rest.

The MVP scenarios in the episode illustrate exactly how that happens.

Scenario 1: Testing Payment by Giving It Away for Free

Startup Stage

The founders just completed customer discovery interviews. They do not yet have a working product.

Learning Goal

They want to know if customers will actually pay for the solution.

The “MVP”

The team built a fully functional freemium mobile app with:

  • onboarding

  • Stripe integration

  • a referral program

They plan to release it for free to “see what users do.”

The Problem

Cameron immediately points out the mismatch between the learning goal and the experiment design.

If the goal is to test whether customers will pay, giving the product away for free does not answer the question.

Josh summarizes the issue bluntly:

The experiment doesn’t measure the thing the founders say they want to learn.

In addition, the founders appear to be skipping the customer validation stage entirely. After discovery, they should still be exploring which solution actually works, not building a polished app.

A Better MVP

Josh and Cameron suggest simpler tests:

  • Create a landing page with pricing visible before sign-up.

  • Use clickable prototypes instead of a full application.

  • Ask early users directly whether they would pay a specific price.

These approaches test the core assumption without spending months building unnecessary software.

Key Lesson

If the learning goal is payment, the MVP must include an actual payment decision.

Scenario 2: The Switching Cost Problem

Startup Stage

The founders identified a narrow niche: independent gym owners struggling with membership and payment management.

Learning Goal

They want to know if gym owners will switch from their current software.

The “MVP”

The founders built a polished web application with:

  • calendar management

  • billing integration

  • a CRM for members

  • an admin dashboard

They plan to run Facebook ads and offer a 30-day free trial.

The Problem

Josh notes that switching behavior is about inertia, not just features.

Even if a new product is technically better, customers often stick with the existing solution because switching involves:

  • data migration

  • retraining staff

  • workflow disruption

  • time and risk

Instead of building a complete product, founders can test switching behavior earlier.

A Better MVP

Josh proposes a simple evaluation method:

  1. Ask customers to rate their current tool on a scale from 1–10.

  2. Walk them through a prototype of the new solution.

  3. Ask them to rate the new experience on the same scale.

If the new solution is not at least three points better, most customers will not switch.

Cameron adds another important observation: founders often try to improve every feature they hear complaints about. The result is a bloated MVP that does not solve any single pain point dramatically better.

Key Lesson

Switching happens when a product is meaningfully better at one critical task, not slightly better at everything.

Scenario 3: The Trust Problem in AI Products

Startup Stage

Three founders—none of them technical—are building an AI product for teachers.

Learning Goal

They want to understand whether teachers will trust AI to provide feedback on writing assignments.

The “MVP”

They built a GPT-powered grading assistant that:

  • integrates into Google Classroom

  • imports grading rubrics

  • generates comments

  • exports grades

The tool is packaged as a Chrome extension, and the founders are planning demos with school administrators.

The Problem

Josh highlights a critical mistake: the MVP requires maximum trust from day one.

Installing a Chrome extension that integrates with student records raises immediate concerns about:

  • data privacy

  • compliance with education regulations (such as FERPA)

  • accuracy of grading

  • workflow disruption

The founders want to test trust, but their MVP demands more trust than the market is ready to give.

Josh breaks the concept of trust into three distinct concerns:

  • Privacy – how student data is stored and accessed

  • Compliance – adherence to educational regulations

  • Accuracy – whether AI feedback is reliable

Each concern requires a different experiment.

A Better MVP

Instead of building a complex integration, the team could start with:

  • a Wizard-of-Oz prototype, where AI feedback is simulated manually

  • a narrow use case such as rubric-aligned comment suggestions

  • early teacher testing without full system integration

These approaches test trust gradually instead of demanding it immediately.

Key Lesson

When trust is the key assumption, the MVP should reduce risk, not amplify it.

What Founders Can Learn from “Minimum Viable Please”

Across the three scenarios, a consistent theme emerges:

Most startup MVPs fail because they are designed as products, not experiments.

A strong MVP does three things:

  1. Targets a single assumption

  2. Measures the outcome clearly

  3. Minimizes the effort required to learn

When founders skip these principles, they build software that looks impressive but generates very little insight.

Recommendations

Based on the episode’s discussion, founders should approach MVP design using a simple framework:

1. Define the learning question clearly.
Examples:

  • Will customers pay for this solution?

  • Will users switch from their current tool?

  • Will the market trust AI in this workflow?

2. Design the smallest experiment that answers that question.

3. Remove every feature that does not directly support the test.

Final Takeaway

An MVP is not a smaller version of the final product.

It is the fastest credible way to learn whether a critical assumption is true.

Founders who treat MVPs as experiments move faster, waste fewer resources, and uncover the truth about their market far earlier.

Those who treat MVPs as miniature products often spend months building something the market never needed.


About Josh David Miller

Over the past decade, Josh David Miller has empowered over 100 startup founders and innovators to launch and scale their ventures. As the driving force behind the Traction Lab Venture Accelerator, Josh specializes in guiding early-stage startups through the intricate journey from ideation to product-market fit. His expertise lies in transforming innovative concepts into viable, market-ready solutions, ensuring entrepreneurs navigate the challenges of the startup ecosystem with confidence and strategic insight.

About Cameron R. Law

Cameron R. Law is a Sacramento native dedicated to building community, growing ecosystems, and empowering entrepreneurs.

As the Executive Director of the Carlsen Center for Innovation & Entrepreneurship at California State University, Sacramento, he leverages his passion for the region to foster innovation and support emerging ventures. Through his leadership, Cameron plays a pivotal role in shaping Sacramento's entrepreneurial landscape, ensuring that innovators and builders have the resources and support they need to succeed.
