AdAdvisor was built by people who had already spent years managing Meta campaigns at scale before writing a single line of code: over $60M in ad spend across e-commerce brands, lead generation businesses, and agencies.
You see patterns at that scale that aren't visible from a single account. What works across categories. What always fails. What looks good in Ads Manager but doesn't hold up to real business math.
Here's what we actually learned.
Creative is the targeting
Meta's algorithm has evolved to the point where your creative does more targeting work than your targeting settings. A strong creative that resonates with your ideal buyer causes Meta to find more people like them automatically through engagement signals.
This means the old approach of building precise audience layers and then running generic creative has it backwards. Strong, specific creative with broad targeting consistently outperforms narrow targeting with weak creative.
In practice: if your creative speaks directly to the problem your best customer has, in the language they use to describe it, the algorithm finds them. If it speaks to everyone, it finds no one.
ROAS in Ads Manager is not the same as profitability
Across hundreds of accounts, one of the most consistent patterns is advertisers scaling campaigns that are actually losing money, because they're measuring ROAS without comparing it to their break-even.
A 3x ROAS looks like a win. For a product with thin margins and high fulfilment costs, it can be a loss: a $100 product with $72 in product and fulfilment costs earns $28 per sale before ads, while a 3x ROAS means spending about $33 in ads per sale, so you lose roughly $5 on every order. For a digital product with 90% margins, 1.5x ROAS might be highly profitable.
The only ROAS number that matters is the one specific to your cost structure. Every other comparison is noise.
The one calculation worth doing before anything else
Break-even ROAS equals selling price divided by (selling price minus total costs per order), where total costs include everything except ad spend. Every campaign decision should be evaluated against that number. AdAdvisor's free Break-Even ROAS Calculator will work it out for you.
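As a minimal sketch of that calculation (the function name and the $100/$72 figures are illustrative, not drawn from any real account):

```python
def break_even_roas(selling_price: float, total_costs_per_order: float) -> float:
    """Break-even ROAS = selling price / profit per order before ad spend."""
    profit_before_ads = selling_price - total_costs_per_order
    if profit_before_ads <= 0:
        raise ValueError("costs exceed selling price; no ROAS is profitable")
    return selling_price / profit_before_ads

# Illustrative figures: a $100 product with $72 in non-ad costs per order.
target = break_even_roas(100.0, 72.0)  # ~3.57x
observed = 3.0  # the ROAS shown in Ads Manager
print(f"break-even ROAS: {target:.2f}x")
print("profitable" if observed > target else "losing money despite a good-looking ROAS")
```

Run against the $100/$72 example above, the break-even is about 3.57x, which is why a 3x ROAS on that product loses roughly $5 per order.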
The accounts that scale have more creatives, not bigger budgets
The accounts that grew fastest weren't the ones with the most budget. They were the ones with the most consistent creative testing operation. New creative every two to three weeks. Clear hypotheses being tested. A library of winning assets being built over time.
Creative fatigue is the most predictable thing in Meta advertising. Every creative has a shelf life. Accounts with a pipeline of new creative ready to rotate in before fatigue sets in compound their results. Accounts running the same creative for months consistently see declining performance.
Audience overlap kills more accounts than bad targeting does
Multiple ad sets competing for the same users is one of the most common and least-discussed problems we've seen. When two ad sets target overlapping audiences, they bid against each other in the same auction, driving up CPMs for both.
The result looks like rising costs and declining performance without any obvious cause. The fix is adding audience exclusions between ad sets so they don't compete with each other.
Before scaling any account, audit for audience overlap. It's often the reason a previously profitable structure stops working as more campaigns are added.
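If you can export the identifiers behind two custom audiences, a rough overlap check takes a few lines. This is an illustrative sketch (the lists and the overlap_ratio helper are hypothetical); Meta's Audiences screen also offers a built-in overlap comparison for saved audiences.

```python
def overlap_ratio(audience_a: set[str], audience_b: set[str]) -> float:
    """Share of the smaller audience that also appears in the other one."""
    if not audience_a or not audience_b:
        return 0.0
    shared = audience_a & audience_b
    return len(shared) / min(len(audience_a), len(audience_b))

# Hypothetical hashed identifiers exported from two custom audiences.
purchasers = {"h1", "h2", "h3", "h4"}
engaged_visitors = {"h3", "h4", "h5", "h6"}
print(f"overlap: {overlap_ratio(purchasers, engaged_visitors):.0%}")  # 50%
```

A high ratio means the two ad sets are bidding against each other in the same auctions; exclude one audience from the other before scaling.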
The learning phase is not a problem. Resetting it is.
Almost every advertiser we worked with had a habit of adjusting campaigns during the learning phase. Changing the creative because early results looked weak. Modifying targeting because day three CPAs seemed high. Adjusting bids because delivery felt slow.
Every one of those changes resets the learning phase. The campaign never stabilises. Performance stays volatile indefinitely and the advertiser concludes that Meta doesn't work for their business.
The discipline of not touching a campaign during the learning phase is one of the highest-value habits in Meta advertising. Set it up correctly, let it run, and evaluate after 7 days and 50 conversions.
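A minimal guard you could wire into your own reporting, assuming you track days since the last significant edit and a conversion count (the CampaignStats class is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    days_since_last_edit: int
    conversions: int

def ready_to_evaluate(stats: CampaignStats) -> bool:
    """Meta exits the learning phase after roughly 50 optimisation events
    within 7 days of the last significant edit; hold changes until then."""
    return stats.days_since_last_edit >= 7 and stats.conversions >= 50

print(ready_to_evaluate(CampaignStats(days_since_last_edit=3, conversions=18)))  # False
```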
Tracking quality determines campaign quality
Meta's algorithm is only as good as the conversion signals it receives. Accounts with high Event Match Quality scores consistently outperform accounts with weak tracking, even when the creative and targeting are comparable.
iOS privacy changes reduced browser-based pixel signal significantly. Accounts that implemented the Conversions API to send server-side conversion data recovered most of that signal. Accounts that didn't are still running on incomplete data and paying higher CPMs as a result.
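For a sense of what server-side tracking involves, here is a hedged sketch of a purchase event sent to the Conversions API. The pixel ID, access token, API version, email, and event_id below are placeholders; consult Meta's Conversions API documentation for the full payload options.

```python
import hashlib
import time

import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def sha256_normalised(value: str) -> str:
    """Meta expects identifiers to be trimmed, lower-cased, then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

payload = {
    "data": [{
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "event_id": "order-12345",  # share this with the browser pixel to deduplicate
        "user_data": {"em": [sha256_normalised("buyer@example.com")]},
        "custom_data": {"currency": "USD", "value": 100.0},
    }]
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

Sending the same event_id from both the pixel and the server lets Meta deduplicate the two signals rather than double-counting conversions.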
Speed of learning compounds over time
The accounts that grew fastest weren't necessarily the most sophisticated. They were the most systematic. They made decisions faster because they had clear metrics. They tested more because they had a process. They scaled confidently because they knew their break-even numbers.
The accounts that stalled were usually the ones making decisions based on feel, running the same creative for too long, and treating every bad day as a crisis rather than normal variance.
Meta advertising rewards advertisers who move deliberately and systematically, not ones who react to every data point.
| Accounts that stalled | Accounts that scaled |
|---|---|
| Measured ROAS without comparing to break-even | Always evaluated against break-even ROAS |
| Changed campaigns during learning phase | Let learning phase complete before adjusting |
| Ran same creative until performance collapsed | Rotated fresh creative every 2 to 3 weeks |
| Narrow targeting, weak creative | Broad targeting, strong creative |
| Browser pixel only, incomplete tracking | Pixel plus Conversions API, strong signals |