The Firehose of Code

In supply chains and manufacturing, removing a bottleneck usually reveals the next one. For the people optimizing these systems, the job is a never-ending game of whack-a-mole.

Coding Agents basically eliminated the bottleneck that most of our management systems (Agile and friends) were built to administer. And yet, product outcomes don't seem to have kept pace, have they?

That tells you the new bottleneck is somewhere else entirely, and that's what I want to find out.

Throughput and Aim

A Product Manager in the age before Claude Code could be okay at their job by just managing throughput. That is, ensuring the team was always busy, pushing work that mattered somewhat and showed some sort of user impact, and meeting target metrics without egregiously gaming them. This did not produce great product outcomes, but for many teams it was acceptable.

Now, throughput has gone from a stream to a firehose. Ensuring "the team is delivering new features" is as trivial as paying LLM subscriptions on time.

The firehose of code has many implications for cybersecurity and code maintainability, which I will cover in other articles. But, specifically for product peeps, the new goal is Aim.

Good aim (i.e. knowing what problem to solve and how) was always a core skill for PMs, so this is not exactly new. I would argue, however, that the aim needed when commanding a Firehose of Code is a different ballgame than when steering a stream of artisanal software.

Aiming the Firehose

When I talk about aim, I mean specifically coming up with answers to the classical product questions:

  • What problem are we solving? For whom?
  • How do we solve it?
  • How do we know we solved it?
  • What do we learn if we fail?
  • How does it integrate into a cohesive whole?

When building in the old days, you had a lot of time to think about these questions, to analyse results, and to come up with the next step. Moreover, scarce engineering capacity forced us to choose carefully what to build next.

If your team is building extensively with AI, your challenges run along two axes:

  • Speed: Pre-2025 pre-PMF startups needed to launch at the speed of their learning. Now, you need to learn at the speed of your launches.
  • Selectivity: Before Claude Code et al., engineering cost imposed a natural barrier to feature bloat and incoherent products. Now both are a serious risk.

Speed

The time between idea and live product can sometimes be hours. This makes your ability to correct product specifications based on user input (active, but especially passive) one of your team's new major bottlenecks.

In the past we could take a full sprint or two to figure out what happened to that feature we just launched, and then spend another (design) sprint thinking about what to do next with our learnings (or grabbing another idea from the pile and forgetting we ever built that thing). In the firehose age, the speed at which we can learn from users by validating our hypotheses is the speed at which we can analyse user behaviour, make sense of the results, and decide on our next action.

In my career, only the very best PMs I worked with were experimenters: building product specs around testable hypotheses and treating launch A/B tests as field experiments. Today, I argue, this is the baseline for being good.

Building your entire release around a set of hypotheses to test means that post-launch learnings will be near-instant. You already know, by design, what success and failure look like and what they'll teach you. Jumping from there to "what do we do next" is far simpler than doing it from an inspiration- or HiPPO-driven feature with no clear hypothesis (note that a hypothesis != defined success metrics; those are easy to define post hoc).
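
To make that concrete, here is a minimal sketch (in Python, with purely illustrative names and numbers of my own, not a prescription) of what "knowing success and failure by design" can look like: the hypothesis, the minimum lift worth acting on, the significance threshold, and the lesson-if-wrong are all written down before launch, and the post-launch A/B numbers are simply checked against them with a plain two-proportion z-test.

from dataclasses import dataclass
from math import sqrt, erf


@dataclass
class Hypothesis:
    problem: str          # what problem are we solving, and for whom?
    prediction: str       # the change in behaviour we expect to cause
    metric: str           # the single metric the prediction is about
    min_lift: float       # smallest absolute lift we would act on (0.02 = 2 p.p.)
    alpha: float          # significance threshold, fixed before launch
    if_wrong: str         # what a failed test teaches us


def one_sided_p_value(conv_a, n_a, conv_b, n_b):
    """Plain two-proportion z-test, one-sided: is variant B better than control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))  # chance of seeing this lift if B is no better


def evaluate(h, conv_a, n_a, conv_b, n_b):
    lift = conv_b / n_b - conv_a / n_a
    p = one_sided_p_value(conv_a, n_a, conv_b, n_b)
    if p < h.alpha and lift >= h.min_lift:
        return f"SUPPORTED: {h.prediction} (lift {lift:+.1%}, p={p:.3f})"
    return f"NOT SUPPORTED (lift {lift:+.1%}, p={p:.3f}). Learning: {h.if_wrong}"


# Hypothetical hypothesis and post-launch numbers, purely for illustration.
h = Hypothesis(
    problem="New users drop off before creating their first project",
    prediction="A guided first-project flow raises day-1 activation",
    metric="day-1 activation rate",
    min_lift=0.02,
    alpha=0.05,
    if_wrong="Onboarding friction is not the main cause of early drop-off",
)
print(evaluate(h, conv_a=410, n_a=5000, conv_b=520, n_b=5000))

Whether the test comes back supported or not, the next step is already decided on paper; the analysis is a lookup, not a debate.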

Selectivity & Coherence

Teams have always struggled with long backlogs of ideas to try or things to fix or new small feature requests to implement for that guy in Finance who keeps asking for that green button at the bottom right.

When engineering delivery was the primary bottleneck, we learned to say no because "we can only build so many features in a quarter / month / sprint". But that bottleneck also gave us the time (and sometimes the excuse) to think about coherence and consistency of the user experience.

As with my case for speed above, this is what many would consider a core skill for PMs, but, in my experience, true masters of product coherence and UX consistency are fairly lonely at the top of the PM skill scale. Once we are unshackled from the oppressive limitation of "engineers need days to write 20k+ lines of code", the temptation to just build everything and make everyone happy will strike more than one of us.

The true value-add of PMs in the age of the Firehose of Code is to build features into a cohesive whole, digging behind feature requests to understand true user problems rather than getting everyone's specs for their faster horse neatly written down.

Clarity Is the Bottleneck

Ultimately, our job has always been to deliver clarity. Clarity about who our users are, what they need, how they think about that need, how we can help them solve it, and how all those disparate needs integrate into a cohesive whole that said users can understand and enjoy. This has never changed.

But having been raised in the trade at a time when too much of our job was simply ensuring delivery, we may find the Firehose age a bit disorienting.

With code throughput no longer in question, our ability to form clarity at the speed of software development moves into the spotlight.

-- A