Pre-PMF: Your Mission is to Reduce Uncertainty
Pre-product-market fit, your single mission is to reduce uncertainty. Not to ship features. Not to perfect architecture. Not to run proper processes. The speed at which you can learn from your target market what works and what doesn't, and the speed at which you can adjust your direction based on that learning, will be one of the strongest determinants of your success.
Everything else you do should serve that purpose.
Researcher, Not Artist
In my experience, successful founders feel a lot more like researchers than creative artists.
As a Founder pre-PMF, your ultimate job is to validate the hypotheses that, if true, make your company worth building.
Two hypotheses underpin almost every business idea that I've worked with: the value hypothesis (does this solve a real problem worth paying for?) and the growth hypothesis (will it spread?). From these you derive everything else: who your user is, what they need, how many exist, what the solution looks like.
The best founders say this over and over: hypothesize, test, conclude, repeat. You are running Phase 1 trials for your idea. You're not fine-tuning packaging or optimizing manufacturing. You're testing whether the drug should exist at all.
Every decision you make should serve one of two purposes:
- Optimize how fast new learnings steer your company direction
- Keep you alive to keep learning
Optimize How Fast You Learn
Pre-PMF, your learning velocity is one of the strongest predictors of success. This is the speed at which new learnings can steer your next decision and overall business direction.
For many digital product teams, a common false friend is launch velocity. Launch velocity alone will oftentimes give you the illusion of dynamism while keeping you stuck in a frustrating loop of lackluster releases. There are many reasons why early-stage teams fall short of accelerating learning. Getting enamored with launch velocity is one of them.
Another common pitfall I've observed is confusing creative iteration with experimental iteration. Weekend redesigns because you had a stroke of inspiration. Competitor-driven changes because they did something cool. Teams that don't yet have a strong understanding of their uncertainty and their role in reducing it often adopt the aesthetics of early-stage experimentation without the substance.
Especially in regulated or compliance-heavy domains (where I most love to work), following best practices blindly without adjusting to the company's level of uncertainty has been a consistent pitfall. A few examples:
- Obtaining a license before PMF, instead of seeking borrowed-license or simpler pathways to validate PMF
- Building a complex architecture "that followed best practices" instead of optimizing for rapid iteration
Many discussions I had when addressing these issues inevitably steered towards size and resources: "We need processes for our team size" or "we can't afford such complexity". But the real driver is elsewhere: you should build processes, architecture, and team structure, and borrow best practices, to match your current level of uncertainty first (and your size and resources second).
A key example: at Financemate, our size and stage call for a simple stack that can be owned by two people. One key decision we made early was to split financial calculation logic into its own independent repo with a stable API and hard versioning, so we can iterate on the user experience without risking errors in financial projections (a highly certain domain, where errors erode customer trust faster than we can build it with a great product).
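To make that concrete, here is a minimal sketch of what such a split can look like. The package layout, function, and field names below are illustrative assumptions, not Financemate's actual code; the point is that the calculation layer stays pure, deterministic, and hard-versioned while the product code around it churns freely.

```ts
// calc/src/index.ts — sketch of a calculation package kept in its own repo.
// All names here (package, function, fields) are hypothetical, not Financemate's real API.

export const API_VERSION = "2.1.0"; // hard-versioned: breaking changes bump the major

export interface ProjectionInput {
  principal: number;      // starting amount, in cents to avoid float drift
  annualRatePct: number;  // nominal annual interest rate, e.g. 3.5
  months: number;         // projection horizon
}

export interface ProjectionPoint {
  month: number;
  balance: number; // in cents
}

// Pure, deterministic, and covered by exhaustive tests: the UX layer can churn
// weekly while this function only changes behind an explicit version bump.
export function projectBalance(input: ProjectionInput): ProjectionPoint[] {
  const monthlyRate = input.annualRatePct / 100 / 12;
  const points: ProjectionPoint[] = [];
  let balance = input.principal;
  for (let month = 1; month <= input.months; month++) {
    balance = Math.round(balance * (1 + monthlyRate));
    points.push({ month, balance });
  }
  return points;
}
```

The consuming app pins an exact version of this package, so a UX experiment can never silently change a projection.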
Move Fast, Don't Break Things
A larger issue that seems to steer companies in regulated domains away from iteration velocity is their understanding that "move fast and break things" just doesn't apply to them. And that is true.
Compliance, safety, scientific rigor, data security -- those are "right to play" requirements that, if you are building something that matters, you can rarely do without. Finding PMF is how you succeed. Playing by the rules is how you don't fail.
The key to attaining good learning speed in these domains is to develop a thorough understanding of which risks can kill your business in the cradle if they materialize, and to protect against them. Blindly adopting data protection rules and procedures from Roche will slow you down to a crawl, but storing patient data in your local /home directory will land you in jail (or at least it should).
Designing a PMF-seeking machine in a regulated domain is an art and science of its own. I will write a deeper piece on this topic in the future.
Charting Uncertainty
I like to build hypothesis trees—infinitely indented bullet lists that start with a product's purported strategy and branch into all supporting hypotheses. This helps me understand where uncertainty lies and in turn helps me identify what to build, explore, test or ask next.
Each node has four parts: the hypothesis itself, how you'll test it, what success and failure look like, and what you'll do in either case. Bonus points if the criteria are quantitative, but it's not always necessary or possible.
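A node can be as simple as a small record. The field names and the example hypothesis below are illustrative only, a sketch of the structure rather than a prescribed format:

```ts
// Illustrative shape of a hypothesis tree node — the field names are assumptions, not a standard.
interface HypothesisNode {
  hypothesis: string;          // what we believe to be true
  test: string;                // how we will find out (interviews, landing page, prototype...)
  successLooksLike: string;    // ideally quantitative, e.g. "5+ pre-orders at full price"
  failureLooksLike: string;
  ifValidated: string;         // what we do next in either case
  ifInvalidated: string;
  children: HypothesisNode[];  // supporting hypotheses, indented one level deeper
}

// Hypothetical top-level value hypothesis, with supporting hypotheses still to be added.
const tree: HypothesisNode = {
  hypothesis: "Freelancers will pay for automated tax projections",
  test: "20 problem interviews plus a pre-order page",
  successLooksLike: "5+ pre-orders at full price",
  failureLooksLike: "Interest in the problem but no willingness to pay",
  ifValidated: "Build the thinnest possible projection flow",
  ifInvalidated: "Revisit the target segment",
  children: [],
};
```

Keeping the success and failure criteria next to the hypothesis means that when a test concludes, there is no debate about what the result meant.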
To prevent this from becoming a fun little document that derails you from actually building your business, prune aggressively. Only the hypotheses that would change your direction if proven wrong deserve full development into the aforementioned four parts. If validating or invalidating something wouldn't shift what you're building, don't explore it too deeply.
A few techniques have helped me and my teams populate and prioritize the tree.
The pre-mortem is one I use often. You ask "It's 12 months from now and we've failed—what killed us?" and work backwards. This tends to surface assumptions so fundamental that they're terrifying to write down. Those are usually the ones that belong at the top of your tree.
The Riskiest Assumption Test is a cousin to this: identify the assumption that would kill you fastest if wrong, and test that first—before building anything. Your database structure is unlikely to be it.
Teresa Torres's Opportunity Solution Tree follows similar logic, mapping outcomes to opportunities to solutions to assumption tests. The key insight across all of these is making implicit assumptions explicit and testable.
One final check: is your tree informing what you build and in what order? Is it helping you react to feedback? If no to either, the tree has become overhead. Prune it or abandon it.