
The creative volume trap in Meta ads

More winners, less waste: building a sustainable creative strategy

Nathan Hudson

Summary

High creative volume in Meta ads drives diminishing returns by reducing creative diversity, causing burnout, and weakening experimentation. Large volumes clutter accounts and yield little insight when most ads get no spend. Stronger performance comes from focusing on winning creatives and efficiency metrics like cost per winner, balancing iterations with new concepts, ensuring creative diversity, and maintaining structured testing to uncover sustainable long-term growth.

Perhaps slightly controversial: I’m about to go to war with the Meta ads creative volume advocates! And the irony is that I run an agency producing an ungodly number of ad creatives for our clients…

Just over six months ago I spoke to an app founder who pushes 500 new ad creatives live on Meta every single day. That’s ~15,000 ads tested per month!

Since that day, I’ve spoken to dozens upon dozens of app founders, UA Managers and Heads of Performance who have all taken the same stance, saying:

“In order to take our Meta account to the next level, we need to test more ad creatives”.

But this isn’t necessarily true. More volume isn’t always the answer.

Volume is overrated

In short, a lot of teams are putting creative volume above everything else when it comes to Meta; at times setting a metric like # of ads tested per month as the primary measure of input. But this is a slippery slope. Not only are there some harmful, unintended consequences to be aware of, but strategically, this can position the entire team to sprint off in the wrong direction.


The goal of creative testing is to find new winners. It’s not about hitting an arbitrary number of creatives tested. We want to find ad creatives that enable us to scale spend, improve performance metrics and unlock new audiences in our ad accounts. Pumping out as many creatives as humanly possible isn’t the best way to go about that.

Now I know what you’re thinking: 

“But Nathan! The more creatives we test, the higher the likelihood we’ll find new winners. And the faster we test, the faster we’ll find new winners.”

Hmm.

I get where that line of thinking comes from, and in theory…I completely agree. If all other variables remain constant this would be true 100% of the time. But in practice, these variables hardly ever remain constant, and when they do, there’s a ceiling. Followed by negative returns.

{Maybe a chart here to visualise diminishing returns, followed by negative growth}

Why is the volume game actually a trap?

Decreased creative diversity

When sheer volume becomes the headline KPI, the quality of creative tests tends to drop. Every creative team naturally bends toward churning out ever-smaller tweaks: five-pixel colour shifts, copy changes that barely register, trivial format flips, just to hit targets. At that point, you’re not testing hypotheses or uncovering genuinely fresh insights; you’re playing a numbers game that makes big swings a thing of the past, scatters your learnings across a flood of low-impact variants, and ultimately erodes your chances of finding new winners.

We recently onboarded a new client at Perceptycs who had run into this exact problem with a previous creative agency. The agency was commissioned to deliver a certain number of creatives each month. At first, things were great: new concepts, some nice iterations and a healthy win rate. But over time, the win rate started to decrease and the new concepts weren’t taking off like they used to. So what happened?

The creative agency started delivering more and more iterations of historical winners, and fewer new concepts. They started playing it safe. At first, performance improved. Happy days. The iterations extended the life of winning concepts, the win rate technically went up. So things looked healthy again. Until they didn’t. Inevitably the concepts fatigued and no amount of iterating could bolster performance. The client was forced to scale back and now we’re helping them rebuild the right way.

Honestly, it’s pretty easy to avoid something like that happening:

  • Capping the number of variants.
  • Adding a quota for iterations of winners.
  • Ensuring a high enough percentage of testing budget is pushed into new concepts.

But if volume is still the focus, these measures will just open the door to even bigger problems. Here’s why.

Creative Burnout

Creative teams, whether in-house or agency-side, don’t resort to banking on iterations because they’re lazy. At least not in most cases…I hope! Often it’s because of creative burnout. Over time, not only does it become harder to come up with fresh creative concepts, angles and formats, but teams have to do so at an increasing rate as volume targets rise. Sooner or later, the win rate will start to drop, performance will get shaky and that’s when the wheels come off. Finding genuine new winners becomes akin to drawing blood from a stone.

And we all know what happens next.

  • Teams get demotivated.
  • Downward pressure increases as performance drops.
  • Everyone is back to playing it safe.

Quotas or no quotas, when metrics are in the red month after month, most folk will take iterations over new concepts if it means stronger performance.

{Circular flow chart: Increase volume > Decreased creativity > Increased iterations > Fewer net new concepts > Plateau > Decreased performance}

Poorer experimentation rigour

Perhaps one of the most harmful side effects of a volume first approach to creative testing is the collapse of structured experimentation methodology.

By this I mean:

  1. Hypothesis development takes a back seat: “There’s no time to waste” becomes “How many corners can we cut and still push out enough ads?” Teams rush straight into creative production without first articulating clear, testable hypotheses and end up tinkering, not learning.
  2. Impact vs effort prioritisation starts to erode: It would be a flat-out lie to say that high effort always equates to high impact. So often we see simple, ugly creatives that took minutes to produce outperform Spielberg-esque creatives. But when that’s the case, there’s now a performance justification for quick and easy. All of a sudden we prioritise based on production speed rather than likelihood of success. Long term, this just doesn’t work. We need creative diversity, which means a mix of high-production and low-production creatives.
  3. Corners are cut in post-test analysis: When you’re staring at 200+ ad creatives, each with 20+ data points, and you have another 200 briefs to create this week… Trust me, you’re not feeling great about the task ahead! And that means more corners are cut. Placement analysis? Maybe next time. All of a sudden, you’re missing out on key insights, ignoring crucial learnings, and creative testing has become a matter of throwing doo doo at the wall and seeing what sticks.
  4. Experiment documentation gets overlooked: You can forget keeping logs or writing up experiment docs. “No one even looks at those anyway.”

Maybe these things seem small. But when you’re scaling an account from 5 to 6 to 7 figures in ad spend you need some degree of structure and systematic process to consistently see success.

Now. Am I saying that it’s impossible to run a high volume of creative tests and maintain a rigorous approach to experimentation?

Of course not! But it’s easy to let these things slip.

Ad account chaos

This is where the fun begins!

Have you ever pushed 100+ creatives live in a single day?

No, scratch that. Have you ever tried structuring 300+ creatives per week, consistently, across different formats, concepts, angles, creators and languages for an iOS 14+ app, where you’re limited to 18 campaigns, each with 5 ad sets?

Trust me. It’s uncomfortably frustrating.

Granted, if you’re testing on Android first or leaning into web2app, these limitations aren’t an issue. But as Uncle Ben says, with great volume comes great structural complexity. (Or something like that.)

Sure, you could just throw 50 creatives into an Advantage+ campaign and let the winners rise to the top. But you’re telling me that I overcame creative burnout, put together hundreds of briefs and forced myself to follow a rigorous experimentation process, only for the Meta gods to decide that 90% of ads shouldn’t get any spend?

Uh uh! Nope.

Assuming there’s a hypothesis behind your creative, or at least a reason you made that ad, you want to see it tested. When a creative gets spend and then fails, we can dig into why. We can look at on-platform metrics, dive into breakdowns, pay attention to placements, etc.

But when the creative doesn’t get spend and we just say – “oh Meta didn’t push this creative because it’s not a winner and usually they are right”, we now have 280+ losing ads and no indication of why they didn’t perform. Soon I’ll have thousands of losing ads, multiple failed concepts / formats, and zero data.

It’s almost as if I should have just tested fewer ads…


What to focus on instead?

1. Set the right North Star

Like I said before, the point of creative testing is to find new winners! So that’s what we should be tracking:

  • # of winning creatives in a given period
  • Win rate (winning creatives ÷ total creatives tested) in the same period

If you can increase the absolute number of winners over time without your win rate collapsing, you’re doing high-volume testing the right way. I recommend plotting win rate against volume week-over-week (or month-over-month): when win rate starts to dip as volume climbs, you may be pushing things too far.

The goal is to generate net-new winners with maximum efficiency.

A key efficiency metric I like to track is Cost Per Winner (CPW):

(Total testing spend + Total production costs) ÷ # winning creatives

This shows exactly how much you’re paying, on average, to uncover each new winner. If CPW drifts upward, you’re spending more to find less.
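To make that concrete, here’s an illustrative example with made-up numbers. Say that in a given month you spend $40,000 on testing and $10,000 on production, test 100 creatives and find 5 winners. Your win rate is 5% and your CPW is ($40,000 + $10,000) ÷ 5 = $10,000 per winner. If the next month you double volume to 200 creatives, spend $80,000 on testing plus $20,000 on production, but only find 6 winners, your win rate drops to 3% and your CPW climbs to roughly $16,700. Same strategy, twice the output, and every new winner now costs you ~67% more. That’s diminishing returns, in numbers.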

{Could plot an example chart for this}

All of these metrics share the same North Star: more real winners, less wasted spend. If you see cost per winner climbing or win rate falling, it’s a signal to:

  1. Dial back volume
  2. Revisit hypotheses & creative diversity
  3. Double-down on quality guardrails


2. Focus on diversity as much as volume

Not only does creative diversity prevent ad fatigue, but it unlocks new growth in your ad account too.

By creative diversity, I mean:

  • Testing statics, videos and carousels.
  • Mixing it up with high-production-value and low-production-value creatives.
  • Pushing out different creative concepts and trends (Us vs Them, Founder Story, etc.).
  • Working with different creators.
  • Trying out different editing styles.
  • Focusing on different angles and value propositions.
  • Experimenting with different AI creative formats.
  • Balancing new concepts with iterations.
  • Crafting scripts and briefs around different jobs to be done (JTBD).

I like to think of it like this:  

For every JTBD we’ve identified, we want multiple winners. For every placement we advertise in, we want multiple winners. For every demographic we deem as relevant, we want multiple winners.

The only way to do that is to focus on creative diversity and ensure we document our hypotheses and learnings to make these connections.

This also forces you to produce new concepts as opposed to iterating on historical winners. 

Tip: If you really want to be cautious, try adding quotas for iterations of historical winners and aim to push at least 60% of your creative testing budget towards new concepts.

3. Reward creativity and celebrate big swings (not just wins)

Earlier this year, Deeksha, our Growth Lead, came up with a killer creative concept. It was funny, engaging and all round a great ad. But the first variant didn’t do that well at all. In fact, it flopped. Yet the ad still made our creative hall of fame and got celebrated on Slack.

Now, on one hand, who cares, it flopped. After all, we want winners 😉. But the concept was super smart and it’s that type of thinking and creativity that enables us to find new winners. In fact, it was her creativity that eventually turned that concept into a winning creative.

We treasure that creativity and celebrate big swings. No, they won’t all pay off. But if we don’t foster a culture where creativity can breathe, we’ll just end up copying competitors off Ad Library 👀. That would make us a ‘not so Creative Agency’. And the same goes for you and your team!  

4. Don’t burnout your creative strategists

Finally, make an active effort not to burn out your creative strategists! In a world where creative is the biggest lever you can pull, creative strategists are your engine.

If you are expecting one person, or even a few individuals, to come up with dozens and dozens of completely new concepts each month on their own, at an ever-increasing rate and with ever-greater success…you need to rethink your expectations!

More and more frequently, we’ve started supporting teams who already have strong in-house creative teams but are looking to ensure diversity and increase volume without running into creative burnout. Bringing in an additional creative agency partner to buff up your creative efforts can enable you to reap all of the rewards of high-volume testing with very few of the drawbacks. (If that creative partner prioritises finding new winners and isn’t just playing a volume game!)

Conclusion

High volume creative testing can work. 

But increasing volume isn’t always the answer to your performance plateau. There are a million and one ways to tank your Meta ads performance by focusing too much on creative volume without the proper guardrails.

Instead, build a systematic creative testing strategy around a single North Star: new winners, delivered efficiently. Track your win rate alongside volume, keep an eye on cost per winner, and double down on creative diversity with iteration quotas and JTBD-led hypotheses.

By swapping volume-chasing for insight-chasing, you’ll preserve your team’s creativity, maintain rigour in your experimentation and unlock real, sustainable lift in your Meta accounts. 
