We’ve all been in a meeting and had this question asked over and over again:
“When will the experiment go live?” The dream answer: “It already did.”
The reality? A familiar list of reasons and delays that you awkwardly have to explain to a Zoom-ful of people:
- We’re waiting on development
- We still need someone else’s sign-off
- Brand isn’t happy with the design yet
When this happens once, it’s not the end of the world. But when moving slowly becomes the default, it compounds. Especially for startups, where every delay means less learning and a shorter runway.
There’s the ‘ideal world’ we’re often taught: all stakeholders aligned, experiments neatly queued, and decisions made only after lifetime value has fully played out.
And then there’s reality… every month costs money, most early ideas won’t work, and waiting for perfect data often means waiting too long.
At some point, you have to let go of rigid rules and choose fast feedback over perfect planning. That’s uncomfortable, but it doesn’t mean being reckless; it means being intentional about where you move fast, where you slow down, and how quickly you turn what you learn into decisions.
Here’s what you’re about to learn (and hopefully actually use):
- The hidden risks of waiting for certainty
- Why fast feedback beats false confidence
- When to kill experiments early
- How to ship faster without breaking trust
- And when slowing down is actually the right move
Certainty is what you get after shipping
I once worked with a client testing a landing page as part of a pre-launch experiment. The founder was a designer with an incredible eye for detail, and I joined her in double- and triple-checking every element. We’d done the work: months of research, competitor analysis, and even a painted-door test to validate interest before committing to a full build.
Then the page finally went live. Celebration time!
We waited for the pre-launch commitments to roll in. The painted-door test (which gauges interest by showing a feature or offer before it exists) had signaled demand, so expectations were high. But almost nothing happened. No meaningful subscription sign-ups.
What we did learn, quickly, was far more valuable:
- Meta ads were extremely expensive at that time of year, and we needed more video content to build trust and lower costs
- People hesitated at the subscription price, so we introduced an intermediate step first, and found that it converted better
We’d done everything ‘right’ to build confidence before launch. But certainty about what worked and what didn’t only came after shipping, once real people interacted with the page.
This is where many growth teams get stuck. Early on, most bets are wrong. You’re operating with limited data, few returning subscribers, and barely any meaningful lifetime value (LTV) signal. Monetization metrics at this stage are directional at best, and not something you can wait on with confidence.
Early monetization decisions aren’t about precision; they’re about momentum. You’re not trying to predict lifetime value; you’re trying to understand whether an offer is viable at all. Signals like trial-to-paid conversion, early churn, or price sensitivity tell you where to look next, not where you’ll end up. Waiting for perfect LTV before acting assumes a level of certainty that simply doesn’t exist yet.
Your simple rule for moving faster
Reid Hoffman describes blitzscaling as prioritizing speed over efficiency in the face of uncertainty. That’s exactly what early growth requires — not recklessness, but a willingness to accept that clarity comes from exposure, not preparation.
We don’t gain certainty by thinking harder or planning longer. We learn by putting things into the world and observing how they behave. I’d love to build a campaign or feature that’s guaranteed to work. I can’t. No one can.
So what’s the strategy?
Your simple rule for moving faster should be as follows:
If the cost of being wrong is reversible and contained, move fast.
If it’s irreversible or erodes trust, slow down.
Growth isn’t about building confidence before you ship, but about earning confidence after you do, which is why fast feedback is critical.
Fast feedback is a competitive advantage
You’ll hear all kinds of stats about early-stage growth: ‘only 20% of what you do drives impact’, ‘only 10% really works’, or simply that ‘most experiments fail’… I don’t know which number is accurate. What I do know, from leading growth at an early-stage startup and working with many others, is just how frustrating it is that so much of what you try doesn’t work.
The teams that win aren’t the ones with a higher success rate. They’re the ones who find out faster.
Fast feedback isn’t about shipping more features or running more experiments. It’s about constantly asking: what’s the smallest possible way we can test this and learn something meaningful? That might mean testing a value proposition through Meta ads before touching the app, or experimenting with App Store messaging to see which feature focus actually drives conversion.
That might mean questions like:
- Which engagement behaviors reliably predict retention?
- What early revenue signals indicate higher-value users?
- Where do users hesitate before committing?
Most subscription apps know that annual plans typically produce higher LTV than monthly ones. The default response is to push annual as hard as possible, or remove monthly entirely. That may optimize monetization in the short term, but it slows learning.
Streema deliberately did the opposite. As Martin Siniawski has shared, they kept monthly subscriptions prominent so they could see churn sooner, understand what drove real, repeatable value, and actually talk to users who left.
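To make that trade-off concrete, here’s a rough back-of-the-envelope sketch. Every number is invented (a simple geometric churn model, not Streema’s data or an industry benchmark), but it shows why annual plans can look better on LTV while hiding the churn signal for a year:

```python
# Back-of-the-envelope with invented numbers, not real benchmarks.
# Simple geometric churn model: LTV is roughly price per period / churn per period.

monthly_price, monthly_churn = 10, 0.20   # $10/month, 20% cancel each month
annual_price, annual_churn = 80, 0.50     # $80/year, 50% don't renew each year

monthly_ltv = monthly_price / monthly_churn   # about $50
annual_ltv = annual_price / annual_churn      # about $160

# Annual wins on LTV, but the first real churn signal arrives ~12 months later.
print(f"Monthly: LTV ~ ${monthly_ltv:.0f}, churn visible after ~1 month")
print(f"Annual:  LTV ~ ${annual_ltv:.0f}, churn visible after ~12 months")
```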
Rules for fast feedback in practice
That’s what fast feedback looks like in practice: prioritizing learning speed over delayed certainty. It means doing the following:
- Design tests to fail quickly, not to reach statistical significance
- Use proxy metrics intentionally, knowing they’re directional, not definitive (but good enough)
- Prefer reversible changes over big, one-way launches
- Optimize for insight velocity, not just conversion rate and monetization
- Create earlier moments of truth, even if they temporarily hurt top-line metrics
- Speak to your users rather than relying solely on quantitative data
- Act on what you learn, even if it isn’t the result you hoped for
Fast feedback isn’t just a product problem. It breaks down when teams can’t make decisions without multiple sign-offs, or when learning has to be approved before it’s acted on. Too many cooks spoil the broth and all that.
So the final rule is simple: build an organization designed for speed, creating autonomy and trust instead of relying on complex sign-offs.
I promise you: fast feedback compounds over time and helps you grow faster.
Kill your darlings… fast, but confidently
The hardest part of moving fast isn’t shipping. It’s the moment after, when you’ve moved quickly, intentionally built a real alternative, and it simply doesn’t work. Like the pre-launch landing page that just wasn’t performing. It’s heartbreaking, and that’s usually when the temptation to ‘wait it out’ creeps in.
Back to the pre-launch experiment I mentioned earlier. We were about a week post-launch. Long enough for people to convert, long enough to have some data, but not nearly enough to feel officially certain. Still, when I ran the numbers, it was hard to ignore what they were telling us. Even if we improved the creatives and lowered the cost-per-click (CPC), we would still land far above our target cost-per-acquisition (CPA) for driving subscriptions.
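The check itself is simple arithmetic: CPA is roughly CPC divided by the click-to-subscription conversion rate. The figures below are invented for illustration, not the client’s actual numbers, but the shape is the point: even an optimistic CPC leaves you well above target.

```python
# Hypothetical figures for illustration only, not the client's real numbers.
# CPA is roughly CPC / conversion rate (click -> paid subscription).

target_cpa = 40            # what a new subscriber is allowed to cost
conversion_rate = 0.02     # 2% of clicks become paying subscribers

current_cpc = 2.50
best_case_cpc = 1.50       # optimistic: better creatives cut CPC by 40%

current_cpa = current_cpc / conversion_rate      # about $125
best_case_cpa = best_case_cpc / conversion_rate  # about $75, still nearly double the target

print(f"CPA today ~ ${current_cpa:.0f}, best case ~ ${best_case_cpa:.0f}, target ${target_cpa}")
```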
At that point, waiting stops being patience and starts just being hope. When early data is that far off, you can be confident it won’t pivot into a winner. It might improve a bit, but not enough to justify the time, budget, and attention it would continue to absorb.
That’s why — thanks to a great suggestion from the founder — we didn’t try to patch it up. We killed it, and moved on to a two-step page instead, a fundamentally different setup.
But this is where it gets tricky.
With another client, where I was running an A/B test, I did the thing we’re not supposed to do: I peeked early. The results weren’t great. The new variant was tracking slightly worse than the original. I didn’t like what I saw, but I also didn’t kill it. With only a handful of conversions per variant, there simply wasn’t enough signal yet. In that case, giving it time proved to be the right call, and the variant ultimately won.
There are instances when you need to give experiments time. This is especially true for pricing experiments: the initial conversion rate may suggest a new pricing strategy or package offering is underperforming, but after one or two renewal cycles, you’ll see that overall revenue is up.
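A quick sketch with made-up numbers shows why those extra cycles matter. The helper below is purely illustrative (every value is hypothetical): a control plan that converts well but churns fast, against a pricier variant that converts worse up front but retains better.

```python
# Invented numbers to show why pricing tests need renewal cycles, not real data.

def revenue_per_100_visitors(conv_rate, price, renewal_rate, cycles):
    """Total revenue from 100 visitors over a number of billing cycles."""
    subscribers = 100 * conv_rate
    total = 0.0
    for _ in range(cycles):
        total += subscribers * price
        subscribers *= renewal_rate   # a share of subscribers churns at each renewal
    return total

control = dict(conv_rate=0.05, price=10, renewal_rate=0.60)    # converts well, churns fast
variant = dict(conv_rate=0.035, price=12, renewal_rate=0.85)   # converts worse, retains better

for cycles in (1, 3):
    a = revenue_per_100_visitors(cycles=cycles, **control)
    b = revenue_per_100_visitors(cycles=cycles, **variant)
    print(f"After {cycles} cycle(s): control ~ ${a:.0f}, variant ~ ${b:.0f}")
# After 1 cycle the variant looks like a loser; after 3 cycles it's ahead.
```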
So I’m not saying kill all your darlings, but I’m also not saying give everything endless time. Fast shipping and fast learning don’t mean a blanket kill-it-all approach. There are times when you should stop quickly, and times when you should deliberately wait.
Rules for killing your darlings
What helps is being explicit about why you’re continuing or stopping. In practice, this is how we kill our darlings:
- Peek strategically, but don’t broadcast early results in a way that invites emotion-driven reactions
- Ensure there’s enough data to see direction, even if you’re far from statistical certainty
- Run the maths on how far off you are from the target, not just whether something is ‘up’ or ‘down’
- See experiments as part of a bigger system, not one-off tests: you can still believe in the hypothesis, even if this execution isn’t it
- Decide in advance which darlings deserve more time, such as pricing experiments that need renewal cycles to show their impact
Killing something early isn’t pessimistic. It’s often the most direct way to protect focus and create space for the next, better bet. There’s a real opportunity cost to not killing: every experiment you keep alive is one you’re choosing instead of something else that might actually move the needle.
Think of it like pruning a plant. It feels awful to cut off healthy leaves, but you do it knowing the plant will be stronger and grow better because of it.
Speed doesn’t have to come at the cost of quality or user trust
One of the biggest fears teams have when they talk about moving faster is that quality will inevitably go down. And while that can happen, it’s not what moving fast actually has to mean. Speed doesn’t mean:
- Sloppy work
- Broken experiences
- Shipping things users weren’t ready for
I was advising a client with a community-based app who wanted to improve moderation to ensure the community remained a genuinely good place to be. It was a strong idea, but also a big one. There were multiple concepts on the table, each requiring coordination across backend (to build a scoring system), frontend, design, and product. Because of that complexity, the initiative kept getting pushed out, not because it wasn’t important, but because it felt too big to start.
When we zoomed out, one thing became very clear. The entire moderation system they wanted to build depended on a single assumption: that users would actually provide the input needed to make it work — likes, reactions, or signals about positive and negative interactions. If users weren’t willing to do that, or didn’t behave in the way the system expected, the whole model would either fail or require a fundamental rethink.
So instead of building the entire moderation system, we focused on testing that assumption first.
The first step wasn’t a scoring model or a full moderation flow. It was simply introducing — and observing — whether users would actively use like/reaction buttons when interacting with others. That single behavior would tell us whether the broader idea had a foundation to build on.
If usage was low, the priority wouldn’t be better moderation logic, but understanding how to encourage or redesign that input. If it worked, the team could move forward with much greater confidence.
What does Minimum Viable Product actually mean?
What’s important here is that this didn’t feel half-baked to users. They weren’t exposed to an unfinished system or asked to tolerate a worse experience. From their perspective, they simply had more ways to react and express how interactions felt. In fact, it already gave them a greater sense of control over what good and bad interactions looked like, without needing to know anything about the system being tested behind the scenes.
This is what speed looks like when it’s done well: not rushing to build everything, but narrowing down what you actually need to learn first. What is the smallest test that meaningfully de-risks the idea? Which assumption, if wrong, would make everything else irrelevant?
This is why I like how Ethan Gar reframes the idea of a Minimum Viable Product (MVP):
“People get the minimum viable product idea wrong. They focus on the minimum, but not the viable. If I give you an app whose core functionality is broken, you won’t get value, and it won’t perform. When we are focusing on speed, we are focusing on the simplest viable version that delivers value.”
That distinction matters. The moderation example wasn’t about speed for its own sake or cutting corners. It was about delivering value early while learning which assumptions actually mattered — without overcommitting to complexity too soon.
Of course, as you move faster, things will break more often. That’s part of the trade-off. But most of the time, users won’t even notice. And when you get this right, the value gained from faster learning usually outweighs the cost of the occasional misstep.
Which brings us to the harder question: when is speed no longer the right choice, and when does slowing down actually protect quality and trust?
When should perfection take priority over speed?
Speed isn’t universally correct. There are moments where slowing down isn’t cautious, it’s responsible. Moments when perfection is the right way to ship. These are often situations where a generalist mindset no longer helps, and moving too fast creates more risk than value.
The mistake teams often make is treating perfection as the standard mode of operation, when in reality it should be reserved for specific moments. To move past that tension, it helps to be explicit about which situations actually deserve slowness.
Not all of these will apply to every app or brand, but walking through them helps clarify where slowing down is intentional rather than accidental.
1. Core functionality of your app
When it comes to the core value your app provides, there’s a minimum bar that needs to be met. Not ‘perfect’ in the abstract sense, but good enough that users clearly experience the value early in their journey.
This aligns closely with Ethan Gar’s idea of minimum viable value. The goal isn’t to ship everything, but to make sure what you do ship genuinely works. If users are paying for the app, performance and reliability matter. You can still move fast in how you validate and iterate, but the underlying experience needs to meet a clear standard from the start.
2. Large, one-way technical decisions
Some technical choices are hard to undo. For example, when advising an app considering a transition to Flutter, it became clear just how much was at stake. Migrations like this touch almost everything and create long-term consequences.
This is where slowing down is justified. You want clarity on why you’re doing it, what problem it actually solves, and how success will be measured. A common trap is using a technical rewrite as an excuse to ‘fix everything’, which often leads to long delays.
Perfection here doesn’t mean dramatically improving everything. It often means ensuring the new version performs at least as well as the old one, without introducing instability. That bar is usually lower than teams expect, but still worth protecting.
3. Data privacy and security
Whenever you’re handling user data, speed takes a back seat. This includes privacy, consent, tracking, and compliance. Carelessness here erodes trust quickly, and it’s difficult to recover from.
This is not an area for rough experiments or shortcuts. It’s one of the few places where being overly cautious is usually the right call.
4. Vulnerable user groups
If your app serves vulnerable audiences, extra care is required. I once spoke to someone working on a mental health app for children, a topic that is sensitive on two fronts at once.
In cases like this, teams often invest heavily in research and exploration before shipping anything meaningful. That doesn’t mean nothing ever gets released, but it does mean the bar for thoughtfulness and validation is higher, and the cost of getting it wrong is taken very seriously.
5. Brand-defining moments
Some releases don’t just add functionality; they reshape how users perceive the brand. These moments deserve extra care.
A good example is Ladder, a fitness app that expanded into nutrition, repositioning itself from purely fitness-focused to a broader health platform. That’s a significant shift in both competitive space and user expectations.
The first version wasn’t perfect, but it felt deliberately more complete than it strictly needed to be. Features like nutrition input via voice, image, and text were available from the start, rather than offering just one method. That choice wasn’t about speed or scope; it was about ensuring users felt the new positioning immediately.
6. Irreversible decisions (one-way doors)
Jeff Bezos describes decisions as either one-way or two-way doors. Two-way doors can be reversed; one-way doors cannot.
Changing your target audience, fundamentally shifting your app’s purpose, or making promises you can’t easily walk back are all one-way doors. These are moments to slow down, pressure-test assumptions, and be honest about the long-term implications.
7. Proven functionality being rebuilt or scaled
Finally, when something has already been validated, rushing the implementation can be counterproductive. That’s how teams end up with fragile systems, spaghetti code, and a growing backlog of bugs.
I’m painfully aware of this one, partly because my husband works as a developer at an edtech startup and spends a large part of his time untangling exactly these kinds of rushed implementations. The time saved upfront is almost always paid back with interest later.
Speed is the default. Perfection is selective.
Most subscription teams don’t fail because they move too fast. They fail because they wait too long to learn. Certainty doesn’t come from better planning, more sign-offs, or cleaner spreadsheets; it comes from shipping, observing, and deciding.
It’s a balancing act between certainty and risk.

Fast feedback is a competitive advantage, not because it guarantees success, but because it helps you stop wasting time on the wrong things sooner. Speed doesn’t have to mean sloppy work or broken trust. When done well, it’s about narrowing the learning question, testing the smallest meaningful assumption, and moving on quickly when something isn’t working.
There are moments where slowing down is not only justified but necessary. When trust, safety, irreversibility, or core values are on the line. But these moments are the exception, not the rule. For most early-stage teams, speed should remain the default. Perfection is something you apply selectively, when the cost of getting it wrong outweighs the cost of waiting.

