Advanced Meta Ads strategies for subscription apps

Master Meta Ads to drive growth for your subscription app with insights from Marcus Burke, Independent Consultant.

Tuesday, September 17, 2024

Advanced Meta Ads Strategies for Subscription Apps with Marcus Burke and David Barnard! In this in-depth webinar, you’ll discover expert tips on how to optimize, scale, and maximize your Meta Ads campaigns to drive growth and improve ROI for your subscription app.

What you’ll learn:

  • Smart campaign architecture & attribution: Learn how to structure campaigns to support algorithmic learning and scale with precision.
  • Creative testing techniques: Explore how to test, iterate, and optimize ad creatives to improve conversions.
  • Scaling with precision: Discover key tactics to scale Meta Ads campaigns effectively while maintaining strong ROI.

Key takeaways from the webinar:

🔹 Consolidate campaigns for better optimization: By consolidating campaigns, you send stronger signals to Meta, but be careful not to over-consolidate, as it can lead to Meta prioritizing only one creative, which may not yield the best results long-term.
🔹 Start broad, refine with data: Begin with broad targeting and optimize over time using audience exclusions (e.g., exclude younger audiences who are less likely to convert or purchase high-priced subscriptions).
🔹 Budget around $10k to give the channel a fair test: Meta’s algorithm needs enough conversion volume to optimize, so plan on roughly $10k in media spend (not necessarily every month) before judging the channel. Lower budgets may struggle to provide enough signal for effective scaling.
🔹 Test creatives regularly: Spend 20-30% of your budget on creative testing. Focus on different formats (UGC, static, video) and styles to find high-performing ads that resonate with various audiences.
🔹 Use lookalike audiences effectively: As soon as you have enough user data (ideally 10,000 entries), create lookalike audiences based on your highest value users (e.g., subscribers, paying users) for better ad performance.
🔹 UGC ads can stay effective for months: User-Generated Content (UGC) ads can perform well for several months, but monitor audience shifts and engagement to avoid creative fatigue. Refresh creatives when you notice performance drops.
🔹 Balance signal quality and quantity: Focus on deeper funnel events like trial starts or purchases instead of surface-level events like app installs to send higher-quality signals to Meta’s algorithm.
🔹 Optimize ROAS bidding for subscriptions: For subscription apps, it’s essential to feed Meta predicted lifetime value (LTV) data or qualified conversion events, rather than focusing solely on first-time purchases or trials (see the sketch right after this list).
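To make that last point concrete, here’s a minimal sketch (an editor’s illustration, not code from the webinar) of feeding predicted LTV to Meta as the purchase value, so that value-based ROAS bidding optimizes toward lifetime value rather than the first payment. It assumes the FBSDKCoreKit Swift API surface; `predictedLTV` is a placeholder for whatever LTV model you use:

```swift
import FBSDKCoreKit

/// Report a new subscriber to Meta using predicted lifetime value as the
/// purchase amount, so value optimization bids toward LTV instead of the
/// first charge. `predictedLTV` is a placeholder for your own model's output.
func reportSubscriber(predictedLTV: Double) {
    AppEvents.shared.logPurchase(amount: predictedLTV, currency: "USD")
}
```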

Further reading to help you level up your paid UA skills:

  1. Blog: How to use web-to-app to go from 0-1 with your paid UA
    Marcus explores how to use the web to accelerate and complement your existing UA strategy.
    👉 Read the full guide

  2. Podcast: Scaling Your Subscription App with Meta Ads with Marcus Burke
    Marcus talks about the past, present, and future of Meta ads.
    👉 Listen to the podcast

  3. Community: Ask Marcus Burke Anything
    Still our most popular AMA to date! Jam-packed with insights.
    👉 Read the AMA (if you’re not already a member, please register to join)

Introduction

[00:00:00] David Barnard: Hello everyone. Uh, thanks for joining us today to chat about Meta ads. And Marcus, thanks for joining us as well.

[00:00:07] Marcus Burke: Glad to be here. 

[00:00:09] David Barnard: So we had a podcast earlier this year. Um, and for those of you who haven’t listened to the podcast, we’re going to try and not double up too much on the content. So if you enjoy what you hear, go back and listen to that one for some topics that we’re probably not going to cover, at least not as in depth, today.

[00:00:25] That was more an overview, and we’re going to get into the weeds a little bit today. Um, but as we get started: most of you probably know I’m David Barnard. I work for RevenueCat. We’re a platform with a mission to help developers make more money. We build an SDK that goes in the app to handle monetization, paywalls, A/B testing; there’s a ton of stuff in the platform, and we’re actually working on a lot of other really cool stuff that we’re adding to it.

[00:00:51] Um, and Marcus, why don’t you give a quick intro of who you are? 

[00:00:55] Marcus Burke: Yeah, 

[00:00:55] David Barnard: sure. 

[00:00:55] Marcus Burke: Hi everyone. I’m Marcus. I work as an independent consultant, [00:01:00] mainly focused on Meta ads for subscription apps. I tend to work with smaller app developers that are feeling frustrated with the platform, want to scale, and

[00:01:10] become really big in the end. I’m also creating content around that on LinkedIn, so if you’re not following me yet, make sure to check out my profile. I share a lot of the stuff we’re going to talk about here on LinkedIn as well.

[00:01:24] David Barnard: Cool. And a couple of other things to talk about. I also do a podcast, as I mentioned, that

[00:01:30] Marcus was on earlier this year. It’s called the Sub Club podcast, so go check out subclub.com. A lot of the kind of stuff we talk about in these webinars we cover on a twice-a-month basis on the podcast, and there are a lot of really cool guests coming up that I’m excited about. A new podcast is dropping tomorrow with Christian Selig, the founder of Apollo.

[00:01:49] Uh, and that was a really fun conversation, so don’t miss that one. Today, we’re going to try and go fairly quick. I know a lot of you are going to have questions for [00:02:00] Marcus, so we’re going to try and leave extra time for Q&A today. If you do have questions for Marcus, we’re not going to answer them live.

[00:02:12] We’re going to go through the content that we prepared for today and then do a Q&A at the end. So put your questions in the Q&A, and then upvote ones that you want answered; and actually, if you would, downvote ones that were already answered. Sometimes we end up with questions early in the conversation that end up getting answered later in the conversation.

[00:02:36] So downvote the ones that you feel were adequately answered in the conversation. That way I can prioritize by the upvotes. And last thing: we are going to share this on YouTube, and if you’re registered here, it will be emailed to you as soon as it’s available for replaying.

[00:02:55] And then the very last thing: we are one week out from [00:03:00] me and Marcus meeting in person for the very first time. RevenueCat is hosting our first App Growth Annual, and Marcus is going to be there leading a workshop. The workshops unfortunately will not be streamed, so we’ll have to get Marcus back on, maybe early next year, to make sure we’ve covered all the content that people were excited about in the workshop.

[00:03:23] Um, 

[00:03:24] Marcus Burke: Even more advanced Meta. Yes,

[00:03:27] David Barnard: even, even more, super deep advanced. Um, but if you haven’t already registered, go register, because there will be six talks live streamed. That’s appgrowthannual.com. I think we have a few more in-person spots, not many though, so if you are in the San Francisco area or can book a flight real quick, you can still apply to attend in person. But at this point it’s probably best to just register for [00:04:00] the online version.

[00:04:01] All right. That was a long intro. Let’s dive into Meta.

Diving into Meta Ads: Campaign Architecture and Attribution

[00:04:05] David Barnard: Um, so the first thing I wanted to get into is how you think about campaign architecture and attribution when you’re just getting started with Meta ads. And then, since this is the more advanced webinar, we can start diving into how you then design for scaling.

[00:04:30] Marcus Burke: Yeah. Um, so anyone familiar with the channel has probably heard that you want to consolidate your account, because in the end Meta works on the conversion signal that you send them. The more signal it receives, the better it’s going to be at targeting the right audience for you, and at doing that efficiently on a daily basis.

[00:04:55] Um, so when structuring your account, you always want to think about: what [00:05:00] does that mean in terms of the signal volume I can still receive on each ad set? And what are the restrictions I want to put on the algorithm? Where do I want to handhold it so that I can, in the end, have control over where my money goes? Because

[00:05:17] anyone who has run campaigns before has probably seen that Meta tends to call the shots quite quickly: if you put everything in one campaign, it will decide quickly which ad is the best one and then allocate most of your budget towards it. So if you have very few campaigns and very few ad sets, then in the end your money is going to be very consolidated,

[00:05:38] usually on one creative that runs towards a certain audience, and that’s not always the best thing to do. So when I start an account, then usually, depending on my budget, I would start out with just one campaign that I’m going to optimize towards an app event, which is probably something we are going to talk about

[00:06:00] later. Um, and within that, I’m going to start with ad sets. I usually target broad in the beginning, because targeting is handled quite well by the algorithm these days, so I’m not using any interests. I do use lookalikes once I scale, but they’re often not available to new advertisers because you need a ton of data to create them.

[00:06:22] So I start broad, but, depending on the creatives that I have, I do split out my ad sets by creative type. The idea behind this is that I’ve seen over and over that different creatives will scale on different placements. If I use a static ad in 4:5, that’s mainly scaling on the Instagram feed or the Facebook feed, because that’s just where most static content sits.

[00:06:48] While if I run a 9:16 short-form video, UGC style, basically a creator talking to the camera, that’s a native creative type for Reels [00:07:00] or Stories, so that’s more likely to scale in those places. And in the beginning, I’m really trying to figure out what’s the cost per action I can drive from these different placements.

[00:07:11] And then I also cross-check that with my first-party data to understand which audience is behind each of these placements and what I’m expecting in terms of conversion rates. Because it’s a lot different if you’re buying traffic that is potentially a bit older from the Facebook feed, compared to buying traffic that is relatively young, lower intent, with lower purchasing power, from Instagram Reels, which is a placement used by younger audiences.

[00:07:40] So that’s usually the first split that I try to explore. And as I scale, creative type is definitely always leading my decisions there. As I find new creative themes, for example, or try different creative types like carousel ads, I would usually put them in a different ad set, try to [00:08:00] figure out if that’s running profitably, and therefore add more ad sets over time.

[00:08:06] David Barnard: Yeah. One of the things that gets asked a lot, and I wanted to re-ask this question: what do you see as the minimum budget for a specific campaign? You talked about splitting out the campaigns, and I want to dive into those budgets, because it’s something people ask all the time, but it seems like things are changing.

[00:08:34] Before the answer, though, I did forget to talk about the poll. So we did have a poll. I meant to ask folks to fill that out when we first started, but I do want to pause and just look at the poll real quick. So the highest percentage of folks on this call have little experience but are still learning on Meta ads.

[00:08:58] Um, [00:09:00] so it sounds like most people have actually done some Meta ads. 27 percent are new to Meta ads. And then we actually have some very experienced folks on here: 34 percent are quite comfortable with or very experienced with Meta ads. So kind of a good audience for digging deeper.

[00:09:21] Um, but let’s take it back up just one level and answer this question, because I think this applies even to the more advanced strategies. It’s 2024, you’re kicking off, you set up a new campaign. How should you be thinking about budget in order to get the signals you need to train the algorithm today?

[00:09:45] Marcus Burke: Yeah, so I usually quote around 10k for the pure media budget, because in the end you need sufficient signal for the platform. Also, in the beginning, everything is a test. Of course, you’re testing your creative, you’re testing what your architecture looks [00:10:00] like, you’re testing your conversion event.

[00:10:01] And of course you want to make sure that you’re placing a few strategic bets to figure out: what’s the best setup for me, and is there a way, at this point, where I can run at a return on ad spend that already looks interesting? That just needs a bit of volume, so that you send Meta enough signals on all of these tests to figure out

[00:10:21] whether that can happen. And on top of that comes creative production. Uh, I’m just seeing the question, I’m going to answer it now: it’s 10K a month. But basically, I would say that after 10K you can make a good decision on whether this is going to go somewhere, if you are following the best practices and you know what you’re doing.

[00:10:42] So it’s not necessarily 10K a month. After the 10K, if you see, okay, I’m far off in terms of return, even though I’ve optimized for the right event and I’ve had a creative that I feel strongly about and that was user-researched beforehand, and you’re at, I don’t know, [00:11:00] negative 80 percent return, then probably there is work to be done on your product, your funnel, and your pricing before you can turn this into a profitable channel for you.

[00:11:10] So then it wouldn’t have to be 10K per month, but 10K once, and then figure out what the next steps are.

[00:11:17] David Barnard: Yeah. And would that be per campaign? So as you spin up new campaigns with that, you would need to kind of think about each new campaign structure, having that kind of minimum investment as you like, start experimenting and scaling.

[00:11:31] Marcus Burke: Not necessarily, no, not per campaign. In the end, what matters is the amount of conversion events you can drive for that cost. If you’re optimizing for a trial start, for example, you might be getting these for as low as 10 bucks in the U.S., if you’re doing well and your pricing isn’t too high.

[00:11:47] And from that, the recommendation is to drive 50 events per week so the algorithm can do a good job, and that budget would already allow you to test multiple ad sets. So the 10K also [00:12:00] includes that you might want to try maybe two different conversion events in that test.

[00:12:05] And then you have a few different creatives, of course, that also need to gather signals to find their audience. So in the end, what you’re trying to test in the beginning will look different for every advertiser. But 10K, I think, is a good ballpark where you can give the channel a fair chance, which is of course a lot more expensive than Apple Search Ads, for example, where you can bid on your three most interesting keywords.

[00:12:29] And even for a few hundred bucks, you can already see first results of whether that’s going to work. And on top comes media production. Depending on the type of your app, there are going to be different styles of creatives that are relevant to it, and they can be more expensive or cheaper.

[00:12:47] UGC is definitely a style that’s working for most apps, but to produce that, you need a creator that you need to pay, then you need a video editor (or the creator can do that themselves), and you need scripting for [00:13:00] these ads based on user research. So either you’re good at all these things, or you might want to hire people who are good at them, which of course means a little bit of overhead.

[00:13:09] And then a creative could come at a cost of maybe 400, 500 bucks. And you definitely want to test more than just one.
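As a back-of-the-envelope illustration of that budget math, here’s a tiny sketch using the numbers Marcus quotes (roughly $10 per trial start in the US and Meta’s guidance of about 50 conversion events per week); the ad-set and event counts are assumptions for the example:

```swift
// Rough minimum media budget for giving Meta a fair test.
let costPerTrialUSD = 10.0          // optimistic US cost per trial start
let eventsPerWeek = 50.0            // Meta's rough guidance per ad set
let adSetsToTest = 3.0              // e.g. static, UGC video, carousel (assumed)
let conversionEventsToTry = 2.0     // e.g. start trial vs. subscribe (assumed)

let weeklySpend = costPerTrialUSD * eventsPerWeek * adSetsToTest
print(weeklySpend)                  // 1500.0 -> $1,500/week on one event
print(weeklySpend * 4)              // 6000.0 -> roughly $6,000/month
print(weeklySpend * 4 * conversionEventsToTry)
// 12000.0 -> testing two events lands in the ~$10k ballpark quoted above
```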

[00:13:18] David Barnard: Yeah. 

Optimizing Campaigns: Signal Quality vs. Quantity

[00:13:18] David Barnard: How do you think about that trade-off between signal quality and signal quantity? Because you can just throw money at it and get more quantity of signal. But how do you optimize for signal quality over just raw quantity of events?

[00:13:34] Marcus Burke: Yeah. One of the big questions there is usually: should I choose an event that’s more upper-funnel so I can collect more of them? Because Meta gives this recommendation of 50 events a week, and then everyone thinks, okay, it seems I can’t reach 50 trials a week yet, for example, because my conversion isn’t strong enough or I’m in a very competitive space, so I’m just going to move to installs, which I can get a lot of [00:14:00] because they’re only three, four bucks or something.

[00:14:02] But usually I would say: the way this is going to work is you’re training an algorithm, and you’re training it to do a good job for you. Therefore you want to feed it with data that has relevance for you. If you’re feeding it with a lot of install data, it’s going to get better at finding installs, but an install can be of very different quality.

[00:14:30] Some installs are going to be worth 50 bucks for you. Some are going to be worth 50 cents for you. So basically the modeling that needs to happen based on that and the qualification that you need to do is still going to be a whole lot of work and that’s going to be hard. So therefore I try to usually move as deep in the funnel as possible.

[00:14:49] So that Meta actually learns what my business goal is in the end. In a perfect world, I would feed Meta my return on ad spend, and they would know my renewal rates, they would know which [00:15:00] SKUs people are buying, and then they could do a perfect job. That’s not where we’re at. So a trial is usually a good place to start, but even there, over time, I would reconsider whether a trial is the right thing for you to optimize for, because many people start a trial, but the trial conversion often differs massively by age, gender, and user goal.

[00:15:21] For example, the usual case with broad targeting, optimizing for a trial, is that the algorithm buys a lot of young traffic because it’s relatively cheap. They are happy to start a trial, but they are also very happy to cancel it quickly. So if you’re seeing these patterns, then really think about how you can counteract these kinds of quirks in the algorithm, where

[00:15:46] Meta is blind and I need to inform the algorithm where to go. One way could be: I exclude younger age groups from targeting. I could try to only fire my trial start event for people that are older, if [00:16:00] they’ve given me that information in onboarding; I could say, anyone below 25, I just don’t fire the event.

[00:16:05] Hence the algorithm learns who the people are that are still starting a trial. Or I could think about building event combinations where I say: I’m only going to fire the trial event if people have started a trial and then did, for example, a meaningful action in my app where I know it’s going to increase the likelihood of them converting.

[00:16:23] So I think the conversion event is, in the end, something that you want to think about deeply, and it can definitely change over time and be upgraded. You definitely shouldn’t set and forget it and think, I’m just going to do trials, I’m going to send them to the standard trial start event, and I’m good forever. Because, in the end, the data that Meta sees, and how much of it it sees, is going to make a massive difference in terms of your campaign performance.
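To make that event-gating idea concrete, here is a minimal sketch using the Meta SDK (FBSDKCoreKit) on iOS. The 25-year age cutoff and the `completedCoreAction` flag are illustrative assumptions drawn from the examples above, not prescriptions, and the exact SDK surface should be checked against the current FBSDKCoreKit release:

```swift
import FBSDKCoreKit

/// Fire the trial-start signal to Meta only for users we expect to convert,
/// combining an age gate with a "meaningful action" requirement.
func logQualifiedTrialStart(userAge: Int?, completedCoreAction: Bool) {
    // Only gate on age if the user actually shared it during onboarding.
    if let age = userAge, age < 25 { return }

    // Require a meaningful in-app action that correlates with conversion.
    guard completedCoreAction else { return }

    // Send to the Start Trial standard event so it feeds Meta's
    // pre-trained models for that event type.
    AppEvents.shared.logEvent(.startTrial)
}
```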

[00:16:55] David Barnard: One of the patterns I see a lot: I actually just had an update to [00:17:00] my app yesterday that drove quite a bit of traffic, so I was looking through some of those transactions in the RevenueCat dashboard. It got press on MacRumors, and that’s what drove a lot of the traffic yesterday.

[00:17:14] That audience is exactly what you said: kind of a younger audience, probably more tech savvy, more kind of terminally online. And we don’t yet show when people turn off auto-renew; that’s something my colleagues are actually working on. So currently I was just random sampling, going through some transactions, and over and over, of the first 10 I sampled, six of them

[00:17:45] showed the exact same pattern: open the app, then within 60 seconds to two minutes, start a free trial, and then within another 60 seconds to two minutes, turn off auto-renew. Six of 10 turned off auto-[00:18:00]renew within those two minutes. But a lot of them would turn off auto-renew and then still come back to the app to finish setting it up or whatever.

[00:18:07] So would you suggest, if I were to start scaling up, not firing that trial start event until I know they haven’t turned off auto-renew? And how would you set that up within the app? You would detect that auto-renew had been turned off and then exclude that from triggering the event?

[00:18:33] Marcus Burke: Yeah, that was a pretty common tactic prior to ATT. For example, at Blinkist, I think we waited for four hours, and then said: anyone who’s still active on their auto-renew within their trial, that’s who we fire the event for, and hence Meta gets higher-quality signal.

[00:18:52] But at least at Blinkist, we didn’t see that this was a massive [00:19:00] upgrade in terms of ad performance just from sending that event. What it of course helped a lot with is finding the better creative based on these signals. Because if we see a cost per trial on a creative, and it still includes that cohort that is definitely going to cancel,

[00:19:17] then we’re going to be evaluating based on that cost per trial. Whereas if we evaluate based on a qualified cost per trial, as you call it, then there might be creatives which are driving lower-quality traffic and others driving higher-quality traffic, and you get a better read on that.

[00:19:36] So that’s what I feel it’s a bit more useful for. The tactic was a bit forgotten, or at least it wasn’t as prominent, when SKAN was the main attribution framework for most accounts, because your conversion event on SKAN needs to be triggered when the person is in the app, and

[00:19:56] they’re [00:20:00] not necessarily in the app after four hours. So it was a bit tricky to make sure that you’re sending the right data back. Now that AEM is what I think most accounts are running on, I would say it’s something that’s worth a try again, especially for that kind of creative evaluation piece.
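A rough sketch of that delayed, qualified trial event, combining the RevenueCat SDK’s `willRenew` flag with the Meta SDK. The four-hour delay is the Blinkist-era number mentioned above; the "pro" entitlement identifier and the naive in-memory scheduling are assumptions (a production version would persist the pending check, since this only fires if the app is still running):

```swift
import RevenueCat
import FBSDKCoreKit

/// Four hours after a trial starts, re-check the subscription and only
/// send Meta the trial event if auto-renew is still switched on.
func scheduleQualifiedTrialEvent() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 4 * 60 * 60) {
        Purchases.shared.getCustomerInfo { customerInfo, _ in
            guard let entitlement = customerInfo?.entitlements["pro"], // placeholder ID
                  entitlement.isActive,
                  entitlement.willRenew   // false once auto-renew is turned off
            else { return }
            AppEvents.shared.logEvent(.startTrial)
        }
    }
}
```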

[00:20:18] I wouldn’t expect performance to skyrocket just because you’re switching from a trial start to this. What I think makes a lot of difference, though, and I’ve been seeing this a lot recently, is the standard event that you send this to. Meta is bucketing your events into certain standard events that they define, because they want to make sure that they can collect data from all advertisers in a category so that they can pre-train their models.

[00:20:46] So if you’re sending your data to the start trial standard event and optimizing for that, that’s going to result in different audience targeting and performance compared to if you send it to a purchase or to a subscribe. So it’s definitely [00:21:00] worth playing around with that as well. There recently was a

[00:21:04] bug, or maybe an update, it’s a bit hard to say, from Meta, where they switched the auto-logging of events: any direct subscription without a trial would now go to subscribe (it was a purchase before), and any auto-logged trial subscription would now go to start trial (it was a purchase before).

[00:21:25] Of course they didn’t tell anyone beforehand, so when the switch happened, any campaign optimizing for the other event would just not get any conversion signal anymore. But for one of my accounts, it was actually a super interesting test, because we then had to switch to the other event, from start trial to subscribe.

[00:21:42] And first indications look like subscribe is actually delivering quite a bit better performance than start trial did. And that’s really just from the event that we’re optimizing for; other than that, nothing has changed.
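If you want to control which standard event Meta receives, rather than depending on auto-logging that can be remapped without notice, one option is to disable auto-logged app events and send your chosen event explicitly. A sketch, assuming the FBSDKCoreKit Settings and AppEvents APIs; note that disabling auto-logging also turns off other automatic events (installs, activations), so it’s a trade-off to weigh:

```swift
import FBSDKCoreKit

// At app startup: stop the SDK from auto-bucketing subscription events,
// so a silent server-side remapping can't change which standard event
// your campaigns are optimized against.
func configureEventLogging() {
    Settings.shared.isAutoLogAppEventsEnabled = false
}

// Explicitly log the standard event you are deliberately optimizing for.
// Swapping .subscribe for .startTrial here is effectively the test
// Marcus describes above.
func logSubscriptionStarted(priceUSD: Double) {
    AppEvents.shared.logEvent(
        .subscribe,
        valueToSum: priceUSD,
        parameters: [.currency: "USD"]
    )
}
```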

[00:21:54] David Barnard: Yeah, the ultimate quality signal is subscribe. [00:22:00] But have you seen any success with waiting and only sending the subscribe, like with a three-day trial? Or is that pretty much just not an option these days, to wait the full three days before giving the algorithm feedback?

[00:22:16] Marcus Burke: No. I mean, most of the apps I work with have a seven-day free trial, but usually I wouldn’t recommend going for an event that’s deeper in the funnel than a day. With SKAN it’s not even possible, but in the end Meta needs quicker signal. Otherwise, for three or even four days, they’re going to be flying blind.

[00:22:34] And only then do they start seeing who’s converting. So try to find something that happens early in the app, and if it’s not there, then try to design your app in a way that creates that event. An interesting tactic is also using a paid trial, which in the end is much higher intent than someone starting a free trial.

[00:22:56] You can leverage introductory offers for that [00:23:00] and basically make it so that the first seven days already cost three bucks or something, and then it renews into a 60, 70 bucks yearly, as you would usually do. That way you’re building in this initial hurdle, so that all of these people that are going to cancel right away just aren’t going to do it, because it’s not worth three bucks to them.

[00:23:19] And through that, you’ve created an event that’s of higher quality than just the free trial. So I think it’s also a product task in the end. It’s not just you, or your marketing team, choosing, okay, these are the events available, which one do we pick; really be intentional about how you can create an event that has, in the end, a high correlation with turning into a high-quality user for you.
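For reference, a StoreKit 2 sketch of recognizing a purchase that came through a paid introductory offer (pay-up-front or pay-as-you-go, as opposed to a free trial) and logging a higher-intent signal; the custom event name `paid_intro_start` is made up for illustration:

```swift
import StoreKit
import FBSDKCoreKit

/// After a verified purchase, log a high-intent event to Meta when the user
/// came in through a *paid* introductory offer, e.g. a $3 first week that
/// renews into a yearly plan.
func logPaidIntroIfApplicable(transaction: Transaction, product: Product) {
    guard transaction.offerType == .introductory,
          let intro = product.subscription?.introductoryOffer,
          intro.paymentMode != .freeTrial   // exclude free trials
    else { return }

    AppEvents.shared.logEvent(
        AppEvents.Name("paid_intro_start"), // hypothetical custom event
        valueToSum: NSDecimalNumber(decimal: intro.price).doubleValue
    )
}
```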

[00:23:42] David Barnard: Yeah, that’s such great advice. Um, I did want to move on. We’re 25 minutes in; I said we were going to save a lot of time for Q&A, but we’re only like two points into our notes. You’re just such a wealth of knowledge, Marcus. We’re going to have to do a six-hour webinar. I mean, [00:24:00] that’s what the workshop is for,

[00:24:01] right? So, yeah, we’ll have to split this into, like, a five-part webinar series. Let us know in the chat if you want to hear more from Marcus, and if this format is a good way to absorb some of what he’s learned. So, the next thing: you talked about creating different campaigns to test different things, but then sometimes you do want to consolidate. What is good about consolidation, and what can go wrong in consolidating some of these winning creatives or winning campaigns?

[00:24:38] Yeah, what’s good and bad about consolidation?

[00:24:42] Marcus Burke: Yeah. Um, so I think there are a lot of people out there that are a bit too heavy on consolidation. Meta pushes for it, and it tells you more signal is better on each campaign. But in the end, if your conversion event is, for example, a start trial, and Meta [00:25:00] doesn’t know who converts into a paying subscriber and into someone that renews,

[00:25:05] then again, there is modeling and decision-making to be done on your end based on the data you see in your product. And that’s where you then want to start pulling in different campaign splits to inform that. So, for example, if you see, as we said in the beginning already, older people converting better than younger people, which is the case in most apps, because they just have more money,

[00:25:27] then you want to pull in that campaign split: creatives that run on Facebook versus creatives that run on Instagram and Reels, which are most likely going to be driving younger and older people. And then you could have different cost-per-trial goals on them, because you know people that are older are going to convert

[00:25:46] 30 percent higher than someone in that younger age group. And that can happen along a lot of different dimensions. It might happen on a gender basis; it might happen on different user goals. So as you grow, you might have [00:26:00] different creative themes, maybe one more for lifestyle, the other more for career goals, and then often anything that is career-related is going to convert better, because people are connecting it to a monetary value.

[00:26:13] So that’s where you really want to figure out, based on the first-party data you have in your product: what do I know that Meta doesn’t, so that I can pull in campaign splits to then force budget onto these different segments in different ways.

[00:26:29] David Barnard: Um, how reliable, or how much

[00:26:37] meaningful signal, are you able to get from App Store Connect? So if I go into App Store Connect, they give me demographic information, and again, for my app specifically, a weather app, most of my subscribers skew male and skew older. And they have the little charts and stuff like that.

[00:26:54] Do you look at those things, especially early on, when defining some of these [00:27:00] campaign structures? If you were setting up for my app and you saw that in App Store Connect, would you use that to inform the campaign structure and how you do that early targeting? Or do you feel like that’s not high enough signal, and you really should be asking those kinds of demographic questions if you’re not sure?

[00:27:16] If relevant, that is. Maybe with my weather app, asking somebody their age is like, why the hell are you asking, and they’re just going to answer randomly. So do you get any signal from the App Store Connect data, or do you feel like it’s just not that reliable?

[00:27:30] Marcus Burke: I usually only use it to like, look at, for example, install rates of Facebook versus Instagram, where they have like a neat breakdown and yeah, but I haven’t used it for the demographic data yet.

[00:27:41] I’m wondering if they have a significant data set on that, or if that’s just from the people that share their data, and then it might be skewed. I haven’t used it for that, but I like the idea, especially for apps like yours, because of course you cannot have an extensive onboarding in every app and ask [00:28:00] users a ton of questions; for a weather app, it’s probably a bit weird.

[00:28:03] Yeah. 

[00:28:03] David Barnard: Yeah, once you start running the campaigns, does, does Facebook give you a pretty good breakdown of that? So like, it’s probably better to then do these demographic splits based on the Facebook data and kind of what you’re seeing there versus relying on something like the app store. 

[00:28:18] Marcus Burke: Yeah. I mean, Facebook is still great at reporting on a granular level.

[00:28:21] So it’s, it’s becoming more and more of a black box in terms of like how targeting is happening and you usually just run broad and then they do their magic. But in terms of reporting, you can always look into breakdown results on an age, gender level on every placement. So you can definitely decipher patterns where you’ll see like, Hey, whenever we spend more money on 25 to 34 on the Facebook placement, then we saw a higher performance in the product.

[00:28:48] So maybe that’s what we need to do more of. So I would definitely encourage everyone to look at breakdowns. Set up reporting so that you see: what are my top three placements? Where does my money go? Because in the [00:29:00] end, Meta is multiple channels; Instagram and Facebook are two apps that are gigantic.

[00:29:05] And within these are different placements that are, again, used by different users. One campaign can spend all your budget on Instagram Reels this week, and next week spend all your budget in a totally different place. And that’s going to do a lot of things to the traffic that you are driving.

[00:29:22] David Barnard: Yeah, fascinating. 

Creative Testing Strategies

[00:29:24] David Barnard: Um, the next thing I wanted to get into was creative testing and how to effectively test creatives. I’m a bit of a novice myself, like, I guess, the 26 percent of the folks here. But I hear from people running big accounts about testing hundreds of creatives and doing rapid iteration and all that kind of stuff. What are the different levels of that?

[00:29:52] And then how do you think about balancing the raw volume, you know, hundreds of [00:30:00] creatives, versus how you even hone in on which creatives you should start testing, especially early? I mean, it’s a big jumbled mess in my mind, and even for people running big accounts, I don’t know that they always know, strategically, which direction to head.

[00:30:19] So I think this is going to be a great topic for beginners all the way up through more advanced folks. So how do you think about kicking off creative testing?

[00:30:27] Marcus Burke: Yeah, I kind of touched a bit on it in the beginning with setting up different ad sets by creative type. When I look into a fresh account, or one that has only recently been scaling, I usually advise testing very big swings in the beginning.

[00:30:45] You want to look into different creative types, and into different creative styles within each type. A 9:16 video can be UGC filmed by a creator, or it can be an animation video; they can look [00:31:00] very, very different. And that’s what you want to explore in the beginning, especially to tap into these different placements and figure out: is there a certain placement-audience combination that is already relevant to me and can work?

[00:31:12] And if you test all your stuff as statics in the beginning, then even testing different styles or different messages just on static, you can learn from that. But if they all run on Facebook, and the people you’re looking for aren’t on Facebook, then the results are not going to be as significant for you.

[00:31:29] So definitely think very, very broad in the beginning. In terms of how to produce these and how to tackle production, I usually try to put a certain percentage of spend on testing. In an early account, we want to be aggressive on testing, because maybe we only have one creative working right now.

[00:31:51] So let’s make sure we’re spending 20, maybe even 30 percent of our budget on test campaigns, where we then iterate through a ton of creatives and really find [00:32:00] that one creative type, messaging, and style combination that might be working well already. And I also try to always

[00:32:10] put a percentage of my ad spend on production. That’s often a bit forgotten about. Big accounts are usually working by a 10/90 or 20/80 rule on testing versus a business-as-usual budget, but I feel it’s often forgotten that you need to invest in new assets, in creators producing video content.

[00:32:30] So I also like to put a percentage on that, so that as your account scales, you also make sure to keep producing new stuff. Because even if it’s working well right now, there are going to be times when it’s going to be rough, so you want to make sure you are still feeding new learnings into that process as you scale.

[00:32:49] Um, and then 

[00:32:51] David Barnard: Where do you draw the line? So when you set up these test campaigns, what’s the signal you’re looking for, or what’s the threshold you want to see, like,

[00:33:06] Like what are the kind of signals that you’re looking at? 

[00:33:11] Marcus Burke: And so I usually run this in a weekly or biweekly cycle where then each week or every other week, we’re getting new creatives, uh, testing them in the account. They would run in a separate campaign so that we’re not messing with the performance of what’s already working.

[00:33:26] And then within that, I have my different creative types, maybe even themes, by ad set. So I basically have a few hypotheses, based on the content, of where they’re going to go in terms of placement. And if I have two types that are completely different, then I want to give them a fair shot by having them run in separate ad sets, because otherwise Meta is going to put all the ad spend on one creative; they tend to favor UGC content, for example, because again, it’s driving a younger audience that is cheaper.

[00:33:59] So that’s why I want [00:34:00] to pull in these splits again. And then, depending on how expensive my cost per trial, or cost per action on my conversion event, is, I need a certain budget on that. If you’re optimizing for trial and it’s as low as 10 bucks, then you might just need 100, 150, 200 in terms of daily budget on these in the U.S.

[00:34:23] to get to a significant volume of conversion events within a week. I would say have it run for at least a week, because you want to see performance on workdays versus weekends, at all times of the day, just to see: is this actually working and going to be scalable, or did we just hit a good day and it’s never going to spend more money than that?

[00:34:47] Um, and I would say, 

[00:34:48] David Barnard: and then the number of creatives that you’re able to kind of insert into the testing phase that would depend heavily on, like, how much the spend is right. So, like, if you don’t have a lot of spend for [00:35:00] your creative testing, then you would test fewer creatives. Take bigger swings. And then as you scale your budget, then you can test more creatives still hopefully taking some big swings.

[00:35:12] Is that kind of how you think about like volume of creative? 

[00:35:15] Marcus Burke: Um, yeah, pretty much. It’s limited by budget. Of course, there are ways to optimize that. For example, a lot of people test in the Philippines or Indonesia, where prices are relatively low and there is good proficiency in English, and then

[00:35:29] something that wins there has maybe a 70, 80 percent likelihood of also winning in the U.S., and that could be a first step just to reduce the cost of testing. Other than that: in the beginning, as you start out, everything is a test, as I said, right? You don’t know what you’re doing

[00:35:46] and you don’t have any proven concepts, so you’re testing in all of your campaigns, and it’s limited by how much you can spend. Then as you scale, you still want to take big swings, but also get a [00:36:00] good ratio of big swings to iterations, which are basically more proven changes to your ads. Because you’re going to learn over time, and you’re going to find out what messaging works and which styles tend to perform.

[00:36:14] Do we have two or three creators in our roster that have done all our top performers? Then they should be the ones doing the ads. And then it’s really easy to cross-match these learnings: you have a new concept, you found out, hey, this new messaging is working well, so you’re going to apply the style to it

[00:36:30] that always works. Then you’re going to have the creator do it that also always works. And of course, those are lower risk than doing these big swings into totally new things. So as you scale, try to weigh these against each other and shift more and more towards iterations, because you learn what you’re doing.

[00:36:49] David Barnard: Yeah, fascinating. I hadn’t heard the Philippines thing specifically. Have you seen any kind of [00:37:00] red flags for when that’s not going to work? If you’re going to test in the Philippines, you need to make sure that you have priced it for that market. Are there specific things you want to set up in those geographies to make sure that you’re effectively testing and actually getting the 70 to 80 percent reliability? Versus, maybe if your price is way too high,

[00:37:22] your product is not attractive in that market. What other considerations would you take if you are going to test outside the U.S. and then try to apply those learnings to the U.S.?

[00:37:32] Marcus Burke: Yeah, definitely. Pricing is a big one. For most apps, pricing probably isn’t all that optimized in the Philippines, because it’s not a driver of their revenue.

[00:37:41] So make sure that the pricing you have in the U.S. is positioned in a similar manner in the Philippines. If you’re more towards the premium range, then make sure you’re converting that into a premium price in the Philippines. Of course it’s going to be 80 percent cheaper than what you’re charging in the U.S. and [00:38:00] still be premium in the Philippines. So get a good understanding of what you can charge in that market so that it applies. And other than that, of course, there are apps and creative types that lend themselves better to this than others.

[00:38:14] So for example, if you’re tapping into certain cultures with your app, basically things that only a US American would understand, then of course the correlation is going to be lower, because someone in the Philippines is not going to find that relevant.

[00:38:34] So if you’re talking about US-specific things, or you even have an app that is kind of localized to the US, then of course you will have to test in that market to make sense of it. So, I’m not testing in the Philippines in all of my accounts; I’m doing it in some, and I’ve seen good results with it, but often I actually start in the U.S. instead.

[00:38:54] And then as we scale testing spend, that’s one way to optimize it. Because [00:39:00] if you can spend 70 percent less budget on each test, that means you can test even more, and therefore get more creative throughput and find winners.

[00:39:09] David Barnard: Yeah, that makes a lot of sense. One thing, and I’ve said this a few times before, I don’t have data to back this up.

[00:39:16] But one consideration that I think is in play here is that when you’re testing in other markets, you’re hitting kind of the middle and upper-middle class of those markets, who can afford it. So keep in mind, depending on where you’re testing, what you’re testing, your app, and its relevance to the market, that even though it’s a market that generally has a lower propensity to spend, if it’s mostly folks with an iPhone 15 Pro or a high-end Samsung device or whatever, the actual demographics in those specific parts of the market [00:40:00] might actually look more like a US market than you would otherwise expect, given the overall demographics of the different countries that you can test in.

[00:40:08] So, yeah, a lot of considerations there.

[00:40:13] Marcus Burke: Yeah, also make sure that the market actually is relevant for the OS that you’re using. I know there are certain markets that are like 80 percent Android, and then if you try testing on iOS there, there might not be all that much volume for you to do that in the first place.

[00:40:26] Um, so definitely also plays a role. Yeah. 

[00:40:29] David Barnard: But then on the flip side, if the iOS market in that country is way more of that kind of upper-middle class or whatever, then it actually is a better testing environment on iOS, in that they’ll probably be closer to how the US performs broadly. But again, you still need to have enough signals.

[00:40:49] So if there are no high-end devices, or the iOS devices are all, you know, 10-year-old used devices, then it looks really different than if you really are [00:41:00] reaching a market, or a category of folks, that’s more like the U.S.

[00:41:07] Marcus Burke: Yeah. I’m sure the easiest way to do this is just run some simultaneous tests in the market you’re choosing and in the U.S., see if the results apply, and then, if so, move to the cheaper market.

Scaling with Precision: An Overview

[00:41:19] David Barnard: Um, so for the last topic, I mean, we can probably go over a little bit on the Q&A, but let’s do a speed run of this last topic and then we’ll jump into the Q&A. So, last chance to ask your questions; vote up, vote down, and we’ll move to questions after we’re done.

[00:41:38] Let’s shoot for like five minutes on this topic. I know, again, maybe we should just save this and do a whole other webinar, and if you’d find that interesting, let us know and we can go a lot deeper. But the last topic is scaling with precision, which is from your notes.

[00:41:55] I really like that framing. You’re not just scaling by [00:42:00] throwing more money; ideally you’re scaling with precision. So what did you mean by that? Give us an overview of how you think about scaling with precision.

[00:42:09] Marcus Burke: Yeah. I mean, Peter helped a lot with that framing, so thanks to him. But in the end, ad spend is a metric we look at a lot.

[00:42:20] And if we’re hitting our goals at, I don’t know, maybe our goal is a hundred percent or 200 percent return after 14 or 30 days, and we scale our budget and stick within that goal, everyone is happy. But maybe we’ve reduced return by more than we increased spend, and then that’s not a good idea.

[00:42:40] So definitely don’t just throw more money at Meta; try to be mindful about how you do it. A lot of it comes down, I think, to what we already discussed: you try to stay in control of where your money is going, and you have a very good view on where that’s happening. So really pull your

[00:43:00] breakdown reports. You can do that either through ads reporting or using tools like Supermetrics that basically plug into the API. Then you can look, on a daily basis, at what share of my budget is going to Facebook versus Instagram and the different placements, so that you definitely see when something is changing quickly.

[00:43:18] And I’ve seen cases where accounts flipped their targeting 180 within a day because of a new creative. And of course, that’s when you want to get skeptical, because your down-funnel fundamentals are probably not going to look the same. Maybe they’re better, but maybe not. And of course you want to stay in control of that.

[00:43:34] The two main ways I think about scaling are horizontally versus vertically: you can either add more and more budget to your existing campaigns and ad sets and try to scale them, or you can create new ad sets, which then have their own budget and are going to start spending.

[00:43:51] And I would definitely encourage anyone that is scaling to add ad sets as well. You don’t need to be constrained by that mantra [00:44:00] of account consolidation, “I only want to have one Advantage+ campaign and everything goes in there,” because what that means is Meta is calling all the shots and your structure is very risky: whenever that one campaign and ad set stops working, or they start, I don’t know, flipping your creative mix,

[00:44:17] then everything breaks. So the more ad sets you pull in as you scale, the more you can stay in control and make sure that your budget goes to the right places. And if one stops working, then you just replace it, basically. So yeah, more ad sets, I’m a big fan of that. And breaking down your metrics on placement and audience levels, so that you see whenever shifts are happening.
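For anyone who wants that breakdown report without a third-party tool, here is a minimal sketch of pulling placement-level spend from the Marketing API’s insights endpoint. The account ID and token are placeholders, and the API version should be checked against Meta’s current docs:

```swift
import Foundation

/// Fetch last-7-day spend broken down by platform (Facebook/Instagram) and
/// placement (feed, Reels, Stories, ...) for one ad account.
func fetchPlacementBreakdown(accessToken: String) async throws -> Data {
    var components = URLComponents(string:
        "https://graph.facebook.com/v19.0/act_1234567890/insights")! // placeholder account
    components.queryItems = [
        URLQueryItem(name: "fields", value: "spend,actions"),
        URLQueryItem(name: "breakdowns", value: "publisher_platform,platform_position"),
        URLQueryItem(name: "date_preset", value: "last_7d"),
        URLQueryItem(name: "access_token", value: accessToken),
    ]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return data // JSON: one row per platform/placement combination
}
```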

[00:44:43] David Barnard: Awesome. Great stuff. 

Q&A: Testing New Creatives

[00:44:45] David Barnard: So let’s dive into the Q&A. Looks like we got a ton of questions. Um,

[00:44:51] Marcus Burke: let’s go. 

[00:44:53] David Barnard: So I’m going to sort by upvotes. And I [00:45:00] apologize, I’m not going to call out names, because I’m terrible at pronunciation. But first question: what’s the best approach for a systematic workflow to test new creatives?

[00:45:13] Should I keep an always-on campaign with the best ads and test new ones? Should it be a separate testing campaign? And then how do you transfer the winning ads from the test campaign to always-on, in order to not crash into the learning phase?

[00:45:29] Marcus Burke: Um, so I definitely recommend testing in a separate campaign, and then separate ad sets, based on what we talked about.

[00:45:35] So statics in their own one, videos in another one, so you can evaluate them based on placement-level performance. Then you look at them in terms of performance: the main KPIs I usually look at for videos are hook rate, hold rate (how far people watch into the video), click rate, and then cost per result.

[00:45:56] In the end, once you feel like you found a [00:46:00] winner, you can duplicate it into your business-as-usual campaign, where it would then scale alongside your always-on winning ads. But

[00:46:17] Five, 10 percent better than what my current winning ads are doing. Maybe rather keep it in the backlog to not interfere with that algorithmic learning, because if you’re doing well at the moment, you’re hitting your goals. A 5 percent uplift is probably not going to change the game for you. And there is going to be a point when what’s working right now in your scaling headset won’t work anymore.

[00:46:38] And then that’s the time when you can pull in that kind of mid-sized winner. If you find something that is totally blowing you away, it’s 50 percent cheaper than everything else, and you feel like you cracked the code, then of course start scaling it: put it in the business-as-usual one, or even think about scaling that testing ad set where it’s running.

[00:46:57] If it’s a totally new creative type, sometimes [00:47:00] I would do that. If I’m testing a new format which is not live yet in any of the business-as-usual ones, then I would just start scaling that creative-testing one, because I want to keep it separate anyway, since it’s targeting a different placement and audience.

[00:47:15] David Barnard: Awesome. All right. Let’s go to the next question. Um, for indie developers on a shoestring budget, love the question. Um, I’m right there with you. 

Q&A: Budgeting for Meta Ads

[00:47:30] David Barnard: Is there anything useful we can do with Meta ads for less than $1k a month, or should we just stick with Apple Search Ads?

[00:47:39] Marcus Burke: For less than 1k a month, um, I think it’s going to be very tough.

[00:47:43] I mean, one thing I also wrote about on the RevenueCat blog is maybe using web-to-app. It takes some work in the beginning, because you need to set up a web funnel and that sort of thing, which in the end [00:48:00] comes on top of having an app. But I think, due to better conversion signal, these campaigns can be a bit easier to handle and need less ad spend to get to a point where they’ve learned whom to target. But below $1k,

[00:48:14] I think you’re going to have a really hard time on Meta, and I think the investment is not going to be the best. So as long as you’re not able to spend more, I would say Apple Search Ads is the more interesting place to be for you. Other interesting channels can be, for example, influencers, which can work on a low budget.

[00:48:35] So if you find someone creating content in your niche that has very good relevance for the audience that you’re buying, then try to cooperate with these people and have them post content about your app. That can work on a low budget as well. But these platforms that optimize for signals,

[00:48:53] you just need more input for them.

[00:48:57] David Barnard: Yeah, great answer. It’s [00:49:00] unfortunate, but it is what it is. And this kind of gets back to the changes in the industry: with algorithmic targeting, it just doesn’t work as well if you can’t train the algorithm. You know, 10 years ago, you could just ramp up based on interests and other things.

[00:49:26] And, um, yeah, just, just so much less effective these days. 

[00:49:31] Marcus Burke: Yeah. It’s just like, if you, if you restrict your targeting too much to these like specific audiences, it gets really expensive. And then again, it just won’t work. 

Q&A: Attribution and Accuracy Post iOS 14.5

[00:49:39] David Barnard: Um, next question: curious what methods you use for attribution. AppsFlyer, Adjust, regular Facebook attribution, et cetera?

[00:49:49] Also curious how accurate the attribution is: approximately what percentage of purchases are actually attributed correctly after iOS

[00:50:00] Marcus Burke: I usually recommend starting only on the Meta SDK if you're small and just starting out. In the end, it makes sure there is no middleman, which also introduces risk.

[00:50:12] If you're sending signals through six different platforms before they actually arrive at Meta, then there are breaking points where you can lose signal, and that's just going to make it harder. So I often just run Meta with their own SDK, which is free and easy to implement in a new app.

[00:50:28] In terms of what percentage of purchases get attributed, it depends on your attribution framework. The default now is AEM, Meta's Aggregated Event Measurement. Basically they're fingerprinting, mainly based on IPs. With that, I actually see pretty good match rates, like 80 percent plus; in some accounts I'm getting close

[00:50:48] to a hundred percent match rate. But it also depends on how many traffic sources you have. As your [00:51:00] media mix grows and you're sending more traffic to the app, their modeling might go a bit wrong. Otherwise, I think it's a pretty good solution. SKAN tends to under-track quite a bit more for me.

[00:51:09] So I often see 40 or 50 percent fewer conversions in the account than are actually happening. Of course, that's a bit easier to figure out if you're small and Meta is your one channel: you switch on campaigns, you see you're getting 50 trials a day and Meta reports 25. Okay, then I know what's happening.

[00:51:26] If you're big and you have a lot of organic, that's a lot harder to figure out. That's when you need to do incrementality testing, or try this in markets where you're not as active, so you can figure out what to expect from your tracking.
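That back-of-the-envelope math is easy to formalize. A minimal sketch, assuming Meta is your only paid channel and using the hypothetical numbers from the example above:

```python
# Back-of-the-envelope SKAN under-reporting check, usable when Meta is
# your only paid channel. All numbers below are hypothetical.
observed_trials_per_day = 50     # daily trials seen in your own analytics
skan_reported_trials = 25        # daily trials reported in Ads Manager

# Factor by which SKAN under-reports your conversions.
correction_factor = observed_trials_per_day / skan_reported_trials  # 2.0

# Model "true" efficiency from the reported numbers.
reported_cost_per_trial = 12.0   # USD, as shown in Ads Manager
modeled_cost_per_trial = reported_cost_per_trial / correction_factor

print(f"Correction factor: {correction_factor:.2f}x")
print(f"Modeled cost per trial: ${modeled_cost_per_trial:.2f}")
```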

[00:51:39] David Barnard: Yeah, great answer. I'm going to go off script a little bit and ask my own question.

[00:51:45] How is SKAN working these days, and what do you think Apple can do to improve it? Maybe go quick, since this [00:52:00] is my question and we have a lot more questions to get to. But it is something that's talked about a lot in the industry; SKAN just doesn't seem to be living up to what it could be.

[00:52:11] So do you have it turned on wherever you can? Can you turn it on and off? What's the state of SKAN today?

[00:52:19] Marcus Burke: Yeah, you can turn it on and off. You can have your SKAN schema set and then have one campaign optimizing for SKAN events and another campaign optimizing for Aggregated Event Measurement.

[00:52:31] I've been advocating for doing that a bit, because you don't want to forget about SKAN and how to work with it. And you want to have your analytics in place so you know, hey, for this campaign I'm missing 40 percent of my events, and then I can model out how performant it really is.

[00:52:48] But I'm definitely seeing the tendency that most accounts are running on AEM because it's just more convenient. There is no postback delay, and you don't have to worry about privacy thresholds, which are really [00:53:00] a killer if you're small, because oftentimes you're getting even fewer events since you're not hitting the minimum number of installs a day.

[00:53:06] So I would guess that most advertisers on Meta these days are on AEM if they can be. I know there are some restricted categories where you have to use SKAN, but Apple really will have to improve on that if they want people on that framework. I think with SKAN 4

[00:53:27] there were a few enhancements that they did introduce, but Meta still didn't roll them out. So if Apple isn't going to make it substantially better, then ad platforms don't even have an incentive to implement it. As long as Meta sees better performance on AEM, they're not going to move to SKAN, which puts us in a similar position to where we were before with the IDFA.

[00:53:54] Now everything is IP-powered, and we're waiting for the day that Apple takes the IP away, [00:54:00] which feels a bit weird. But that's just how performance marketing looks at the moment, and SKAN is a bit dead, unfortunately.

[00:54:10] David Barnard: Oh, Apple. All right, back to the regularly scheduled program. Is an MMP needed or recommended

[00:54:19] if you want to advertise on Meta and TikTok with ad spend of about $5-10k a month? What are the key advantages of running with an MMP versus not?

[00:54:33] Marcus Burke: For Meta, you don't need an MMP. You could spend $500k a month and still run on the Meta SDK. On TikTok,

[00:54:43] I think to this date you still need an MMP, because they don't have their own SDK yet. I think it's in alpha or something, so it's coming, but if you want to spend on TikTok these days, I think you still need one. At your ad [00:55:00] spend, you don't want to spend on two channels, and you don't want an MMP.

[00:55:03] I would rather do Meta, choose the SDK, and learn what you're doing in terms of creative style and messaging. In the end, what you can do on Instagram Reels is quite similar to what you can do on TikTok, so you can tap into a similar audience with similar creatives. If you figure this out on Meta and you're hitting a ceiling at some point, you can still move to TikTok.

[00:55:28] And at this ad spend, I mean, we said before we want to be at $10k to properly test the channel. So don't split it between two channels.

[00:55:36] David Barnard: Yeah. Unfortunate, but true. Next question: if I'm starting with many testing ad groups and don't yet have a scaling evergreen campaign, what's your best practice to scale the winners?

[00:55:53] Should they go together in a new campaign from scratch with an initial higher budget? [00:56:00]

[00:56:00] Marcus Burke: If everything is a test in the beginning, then the one that turns out to be the winner, you can basically convert into your evergreen campaign. You don't need to duplicate your winners and put them somewhere else; rather, keep that campaign live.

[00:56:16] In terms of combining the different creatives: let's say you have five ad sets and you tested three ads in each one. Now you have two where you feel like, I want to keep those running because they seem to be working well. Then the question to me would again be, should these be combined, because they're running toward a similar audience on similar placements?

[00:56:36] So definitely check the results down to that level. If all of this comes from Instagram Reels and from 25-to-34-year-old males, then they can be combined within the same ad set. But be aware that if you add additional ads to an ad set, you're going to relaunch the learning phase, so that also needs to be a consideration.

[00:56:55] If results look strong and you don't want to interfere with that, then [00:57:00] rather scale both ad sets until you feel like one of them is not working as well anymore and you need changes anyway. Then take those moments when you're making updates anyway to, for example, also consolidate creatives.

[00:57:12] That's usually how I change things in my account. Take targeting: say I want to exclude people. Currently it's running on 18 to 65+, and I want to trim that down to 25 to 65+ because I've seen the young age group not converting. Then I try to bundle that change with other stuff.

[00:57:31] If things are still working well, I'm not going to make that change, because it's basically going to restart the campaign, and then it might not work. So I collect other things while this is still running: maybe I find new creative winners, maybe I have other changes I want to make to the campaign.

[00:57:45] And then I choose one date where I say, now I'm going to apply all of this, to minimize how often I interfere with the learning.
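As a toy illustration of that batching discipline; the queue and the change descriptions below are purely hypothetical, and nothing here touches a real Meta API:

```python
# Toy illustration of the "bundle your changes" discipline: queue up edits
# while performance is healthy, then apply them all on one date so the
# learning phase resets once instead of once per change.
from dataclasses import dataclass, field

@dataclass
class AdSetChangeQueue:
    pending: list = field(default_factory=list)

    def queue(self, change: str) -> None:
        """Record a change without applying it yet."""
        self.pending.append(change)

    def apply_all(self) -> None:
        """Apply every queued change in one batch: one learning-phase reset."""
        for change in self.pending:
            print(f"Applying: {change}")
        self.pending.clear()

queue = AdSetChangeQueue()
queue.queue("Trim targeting from 18-65+ to 25-65+")
queue.queue("Swap fatigued creative for newly tested winner")
queue.apply_all()  # one consolidated update day
```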

[00:57:54] David Barnard: Great. Next question.

Q&A: Advantage Plus Campaigns

[00:57:58] David Barnard: Do you encourage using Advantage+ campaigns?

[00:58:05] Marcus Burke: My favorite ones. I'm also going to be at App Promotion Summit in San Francisco with a talk called "Disadvantage Plus: 10 Hotfixes for Meta's Broken Algorithm."

[00:58:15] David Barnard: That's a great title.

[00:58:18] Marcus Burke: I don't say don't run Advantage+, but for me it's an add-on for scaling. In the end, horizontal scaling to me also means running different campaign types, because they're going to go after different people. An Advantage+ campaign functions differently than a normal app-event-optimized campaign.

[00:58:36] To me it's just an additional layer I put on top, but I usually encourage starting on manual campaigns, because they allow for more ad sets and deeper granularity. For example, the creative-type split is easier if I do one campaign with a UGC ad set, an animation ad set, and maybe a [00:59:00] static ad set; with Advantage+, I would have to create three campaigns for that.

[00:59:03] And many people are not so happy to do that, because it goes against account consolidation in the end. So I would encourage not building an account entirely on Advantage+ and throwing everything in there, as Meta tends to recommend. These campaign types work with a lot of assets:

[00:59:22] you can have 50 creatives in there, have one Advantage+ campaign at $10,000 a day, and have everything in that. But again, Meta often functions on this winner-takes-all principle, and then they spend all your budget on one or two creatives. If that stops working, the whole account stops working.

[00:59:42] So that's why I usually build my structure on manual campaigns, maybe 70 percent of spend, and then I have an add-on, which is Advantage+ for scaling.
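To make the split concrete, a small sketch with hypothetical numbers; the ad set names and the 30/20/20/30 shares are illustrative, chosen only so the manual layer sums to the roughly 70 percent described above:

```python
# Hypothetical daily budget split: manual campaigns as the base (~70% of
# spend), Advantage+ as a scaling add-on layered on top.
daily_budget = 1000.0  # USD, hypothetical

allocation = {
    "manual_ugc_adset": 0.30,
    "manual_static_adset": 0.20,
    "manual_animation_adset": 0.20,
    "advantage_plus_scaling": 0.30,  # the add-on layer, not the whole account
}

assert abs(sum(allocation.values()) - 1.0) < 1e-9  # shares must sum to 100%

for name, share in allocation.items():
    print(f"{name}: ${daily_budget * share:,.0f}/day")
```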

[00:59:54] David Barnard: Excellent. I forgot to ask, Marcus: I can stay on a few more minutes, [01:00:00] and we kind of hit the top of the hour. Do you need to bounce, or do you want to answer a couple more questions?

[01:00:05] Marcus Burke: Let’s do a few more. All good. 

[01:00:06] David Barnard: Okay. I saw a few; some of these were kind of answered already.

[01:00:12] So I'm going to skip down. I thought this one was a really good one.

Q&A: Audience Restrictions and Lookalikes

[01:00:16] David Barnard: At what stage during testing or scaling do you add lookalikes or add restrictions on audience? 

[01:00:25] Marcus Burke: When it comes to restrictions on audience, I usually do that early on already, because oftentimes it works pretty similarly across all accounts.

[01:00:36] The 18-to-24 age group is very, very hard to monetize if you're not an app that is specifically made for that audience. If you're trying to build a social network, of course, you need more users and you need cheap traffic to get that network effect going, so then keep them included.

[01:00:52] But if you're a subscription app optimizing for trial starts and you're charging 60 bucks a year, then [01:01:00] it's going to be tough to convert those people, because the purchasing power just isn't there. So I'll make a few assumptions early on and restrict, to start the account on high return. The client is going to be happy if I spend $50k at a profitable return; it's not about scale yet.

[01:01:19] So I'd rather start more restricted. Then, as we scale, I might open up into younger audiences to see, okay, can we make that equation work somehow? Where's the tipping point? Can we still buy users 20+? Can we buy users 22+, or does it really need to be 28 or 30+? I'll play around with that a bit.

[01:01:38] When it comes to lookalikes, I use them as soon as they become available, basically, which means you need enough valuable data to feed into the custom audience that is then used to generate the lookalike. Meta usually recommended, and that was pre-ATT, something like 2,000 people that they need to match from what you've uploaded.

[01:01:59] But match [01:02:00] rates have been very, very low ever since IDFAs went away, because that had been the predominant match type to find people on mobile. The main identifier now is email addresses, and of course you're not necessarily using the same email address in your product these days as in the Facebook account you set up who knows when.

[01:02:21] David Barnard: Especially if you're using Sign in with Apple and it's the private relay email.

[01:02:26] Marcus Burke: So usually what I look for is 10,000 entries of whatever I'm selecting, and I try to use a valuable selection, which is usually payers at least. Over time I go even more granular: people who have bought a 12-month subscription, or even people who have renewed, which of course is an even higher-quality signal.

[01:02:46] In the end, you don't want to build lookalikes for just some mid-value event, because it's going to be hard for you to evaluate them; you don't have down-funnel conversions in [01:03:00] Meta. You're only going to see the cost per trial, maybe per qualified trial if you're using that strategy we talked about before.

[01:03:08] But then you need to figure out from your blended data: am I currently buying a more qualitative audience or not? And oftentimes the upper funnel is actually more expensive, because you're limiting reach massively. The reach for a lookalike might be like 4 million in the US if you go for a 2 percent lookalike.

[01:03:27] That is going to be more expensive, but oftentimes the down-funnel conversion is going to be a lot higher, making for a good return. But you need to figure that out, hence you're looking for a really big bump. If things are more or less the same as your blended data, then it's going to be really hard to figure out if you made a dent.
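For teams automating this, the payer-seed-plus-lookalike flow can be sketched against the Meta Marketing API roughly as follows. This is a sketch, not a drop-in integration: verify endpoint fields and the current API version against Meta's documentation, and note that the tokens, IDs, and emails below are placeholders.

```python
# Rough sketch: build a payer-based seed custom audience via the Meta
# Marketing API's customaudiences edge, then spawn a lookalike from it.
import hashlib
import json
import requests

ACCESS_TOKEN = "<ACCESS_TOKEN>"
AD_ACCOUNT = "act_<AD_ACCOUNT_ID>"
API = "https://graph.facebook.com/v19.0"

# 1) Hash payer emails: Meta matches on SHA-256 of normalized addresses.
payer_emails = ["jane@example.com", "joe@example.com"]  # aim for 10k+ entries
hashed = [
    hashlib.sha256(email.strip().lower().encode()).hexdigest()
    for email in payer_emails
]

# 2) Create the seed custom audience from your customer file.
seed = requests.post(
    f"{API}/{AD_ACCOUNT}/customaudiences",
    data={
        "name": "Payers - 12-month subscribers",
        "subtype": "CUSTOM",
        "customer_file_source": "USER_PROVIDED_ONLY",
        "access_token": ACCESS_TOKEN,
    },
).json()

# 3) Upload the hashed emails into the seed audience.
requests.post(
    f"{API}/{seed['id']}/users",
    data={
        "payload": json.dumps(
            {"schema": "EMAIL_SHA256", "data": [[h] for h in hashed]}
        ),
        "access_token": ACCESS_TOKEN,
    },
)

# 4) Spawn a 2% US lookalike from the seed.
lookalike = requests.post(
    f"{API}/{AD_ACCOUNT}/customaudiences",
    data={
        "name": "LAL 2% US - Payers",
        "subtype": "LOOKALIKE",
        "origin_audience_id": seed["id"],
        "lookalike_spec": json.dumps({"ratio": 0.02, "country": "US"}),
        "access_token": ACCESS_TOKEN,
    },
).json()
print(lookalike)
```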

[01:03:47] David Barnard: All right, last... well, let's do two more. There are just a lot of good questions in here. Like I said, we need to do a six-hour version; [01:04:00] I guess it just needs to be a series.

Q&A: UGC Ad Lifecycles and Performance

[01:04:00] David Barnard: What's the average life cycle for a UGC ad? And actually, I would expand this to say not just UGC: what's the average for UGC,

[01:04:09] and then what's the average for ads generally? Weeks? Months? How do you know if a UGC ad is a winner? I had some perform amazingly well for two to three weeks, and then performance deteriorated on a cost-per-result basis.

[01:04:23] Marcus Burke: I would say it can be pretty long on Meta. I have accounts where we still run profitably on ads that have been around for months, for sure.

[01:04:32] In the end, it depends on where performance started and what your goal is. If you want to achieve a certain return on ad spend, and when you launched that new ad it was performing two times better than that, then of course you can run it a lot longer while still hitting the goal before it goes below goal.

[01:04:54] Another interesting factor here is, of course, how much money you're trying to put behind it. The [01:05:00] more money you spend on it, the faster you're going to see fatigue on that creative, because Meta will basically go from ideal customers into less ideal customers, and into broader and broader audiences over time.

[01:05:18] With that, you're definitely going to see performance go lower and lower. So that's where you need to figure out how far you can take it. But I would say generally, if you found a real winner, then it's going to work for months rather than just weeks.

[01:05:37] If this is happening, I would say definitely check again: where is Meta spending your money? Has something changed there? Might they be taking this to another placement now? That might be a hint that in the beginning they found a better audience on another placement, but some signal told them not to do that anymore.

[01:05:57] Oftentimes this is even just engagement [01:06:00] signals. I'm seeing my targeting change based on the comments people are leaving. If I have people commenting that this sucks, or that they don't believe it works, that of course isn't a nice signal for targeting, but Meta does use it for that.

[01:06:14] So really dig deep into where that creative has been running and how people have engaged with it, to figure out if something has changed. That could then be a valuable learning: we need to go back to where we came from.
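A toy version of that monitoring, assuming you log daily cost per trial per creative; the numbers and the 40 percent drift threshold are hypothetical, so tune them against your own goals:

```python
# Toy fatigue check: flag a creative when its trailing cost per result
# drifts well above its launch baseline.
daily_cost_per_trial = [8.0, 8.5, 7.9, 8.2, 9.0, 10.5, 12.0, 13.5]  # USD

baseline = sum(daily_cost_per_trial[:3]) / 3   # early-life average
trailing = sum(daily_cost_per_trial[-3:]) / 3  # most recent 3 days

if trailing > baseline * 1.4:  # 40% drift threshold (arbitrary)
    print(f"Likely fatigue: ${trailing:.2f} vs ${baseline:.2f} baseline. "
          "Check placement and engagement breakdowns before killing it.")
```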

[01:06:28] David Barnard: All right. Last question. 

Q&A: Subscription Campaign Optimization

[01:06:30] David Barnard: How should campaigns for subscriptions be optimized with ROAS bidding in Meta?

[01:06:40] The first payment usually comes with negative unit economics; then, with LTV, it goes positive. What purchase revenue should be sent to Meta in this context? Are you sending purchase data to Meta, or are you focusing more on events?

[01:06:55] Marcus Burke: Yeah, I'm not using value bidding or [01:07:00] ROAS bidding in any of my accounts at the moment.

[01:07:00] You can either just tell them to maximize value, or you can say, I want to see a certain ROAS, which is basically a bid cap on your value-based bidding. If you're doing it, then you definitely want to consider LTV, or the expected value from these cohorts, and not just first revenue, because a monthly user will grow into higher LTV over time, while a yearly user will stay flat for a year.

[01:07:30] So that means you need to tell Meta what you're expecting to happen. This is also a good place to feed in your learnings from different cohorts. If you know that older users are going to grow in revenue much more than younger users, and you've asked in onboarding how old they are, then make sure you layer on these predictions to feed back a really smart value to them.

[01:07:53] So it's not really about sending back your revenue, but about sending a value that [01:08:00] correlates with higher value for you. That's usually what I would try to do here, but I haven't done it in a while. The last time we tried this was at Blinkist, where I worked, in like 2020.

[01:08:13] Back then we didn't see great results, because the algorithm in the end was more expensive; it's definitely a premium bidding type. But if you see really large discrepancies in how audiences perform in the long run, then it can definitely be interesting. You just need to basically hack it a bit, because the algorithm was made for gaming,

[01:08:38] where you have IAPs from $1 to $500 and the algorithm can really learn who is going to be a VIP user and who's just here to try the game. So you need to figure out how you can feed back these more diverse values based on what you learn in onboarding.
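A rough sketch of what that value feedback could look like via Meta's Conversions API. The LTV multipliers here are invented for illustration and would come from your own cohort analysis; the payload shape should be checked against Meta's current Conversions API docs, and the token and dataset ID are placeholders.

```python
# Rough sketch: send a predicted LTV back to Meta as the purchase value
# via the Conversions API, instead of the raw first payment.
import hashlib
import json
import time
import requests

ACCESS_TOKEN = "<ACCESS_TOKEN>"
DATASET_ID = "<PIXEL_OR_DATASET_ID>"

def predicted_ltv(first_payment: float, plan: str, age_bucket: str) -> float:
    """Hypothetical LTV model: monthly subscribers grow over time,
    yearly subscribers stay flat for a year, older users retain better."""
    plan_multiplier = {"monthly": 3.0, "yearly": 1.0}[plan]
    age_multiplier = {"18-24": 0.7, "25-44": 1.0, "45+": 1.3}[age_bucket]
    return first_payment * plan_multiplier * age_multiplier

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "app",
    "user_data": {
        # Hashed email; in practice, send every identifier you have.
        "em": [hashlib.sha256(b"jane@example.com").hexdigest()],
    },
    "custom_data": {
        "currency": "USD",
        # Send the predicted value, not the first charge.
        "value": round(predicted_ltv(9.99, plan="monthly", age_bucket="25-44"), 2),
    },
}

requests.post(
    f"https://graph.facebook.com/v19.0/{DATASET_ID}/events",
    data={"data": json.dumps([event]), "access_token": ACCESS_TOKEN},
)
```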

[01:08:57] David Barnard: Awesome. VIP.

[01:08:59] So [01:09:00] that's a nice way to say whale, right?

[01:09:05] Marcus Burke: That's a nice way to say whale, yeah. When I worked in the gaming industry, we had this internal note saying, please don't call them whales anymore.

[01:09:16] David Barnard: No, I think that’s good. All right, Marcus. 

Conclusion

[01:09:19] David Barnard: Thank you so much. There were a ton of questions in here, and I'll go through the questions afterward and use that to help inform future content. So sorry if I didn't get to your question, but

[01:09:33] Marcus Burke: I can also use them to inform future content.

[01:09:35] David Barnard: Yeah, absolutely. Thank you all for sticking around, those of you who did; for those of you watching this after the fact, the rest will be less relevant. Again, RevenueCat's App Growth Annual is happening next week, and, like Marcus said, the day after is App Promotion Summit San Francisco.

[01:09:57] So if you can't make it to [01:10:00] App Growth Annual, most of us from App Growth Annual will be at App Promotion Summit the next day: a great place to go learn from and meet Marcus if you aren't going to be at App Growth Annual. And again, check out the podcast. I think Peter may have already linked it in the chat: the previous podcast episode we had with Marcus, a good overview.

[01:10:24] We didn't go nearly as deep as we did today, but I think the two complement each other well, so go check that out. And then lastly, anything you want to share? We were just talking before, and it seems like you're pretty busy, but if people did want to work with you, or try to work with you, or get on your waiting list, what does that look like these days?

[01:10:43] Marcus Burke: I don't have an official waiting list; maybe I should have one. Oftentimes it's a bit about timing. When I feel like I can take on a client and someone interesting reaches out, I just make it happen. But right now I'm rather looking to reduce hours toward the end of the year, because I need some more family time.

[01:10:59] So really, [01:11:00] before Q2 2025 I don't have any open slots. But make sure to follow me on LinkedIn. I share everything I learn and have learned on the platform, and I answer questions and comments, so we can make the same format happen as we did here. And otherwise, maybe I'll see a few of you in San Francisco at App Growth Annual or App Promotion Summit.

[01:11:21] I'm really looking forward to that. I'm leaving on Sunday.

[01:11:25] David Barnard: Awesome, me too. Or, I'm leaving on Monday, but really looking forward to meeting you in person and hanging out; there will be a lot of cool folks in town for both these events. It's fun to get out of this little Zoom box and actually see some of the industry faces.

[01:11:42] But yeah, thanks again, Marcus; looking forward to seeing you next week. Thank you all for sticking around and asking so many good questions that we had a hard time getting through them all. We'll definitely have Marcus on again soon. Thank you.

[01:11:58] Marcus Burke: See you then.

[01:11:59] Bye everyone.
