Most DTC ad accounts I audit at the $2-10M revenue range have both CBO and ABO campaigns running at the same time with no written rule for when to use which. The media buyer picks the mode that feels right per campaign, the founder never asks, and the result is that the budget model is basically random. That randomness costs money, because CBO and ABO compound differently at different stages of an audience's life, and getting the pairing wrong leaks spend.
This is a decision log. Not a theoretical argument. The rules below are what I actually apply in audits, and what I recommend operators write into their paid social playbook.
| Dimension | CBO | ABO |
|---|---|---|
| Budget control | Campaign pool, Meta allocates | Per ad set, operator allocates |
| Best for | Mature audiences, stable winners | Testing, uneven ad sets |
| Learning phase | Faster if pool is large enough | Slower, but more predictable |
| Risk | Meta over-spends on the wrong ad set | Operator over-spends on own hunch |
| Minimum spend | Roughly $500/day to be meaningful | $100/day per ad set floor |
| Best at $2-10M DTC | Evergreen prospecting + retargeting | Creative testing, geo splits |
What the two modes actually are
CBO is Campaign Budget Optimization. You set one budget at the campaign level and Meta allocates it across ad sets based on where the algorithm thinks the next conversion will come from. ABO is Ad Set Budget Optimization. You set a budget per ad set and the ad sets compete on their own budget without cross-pool reallocation.
Meta pushed CBO as the default a few years back and has been nudging accounts toward Advantage Plus Shopping ever since. The result is that a lot of operators assume CBO is always "better" and ABO is legacy. That is not right. Both are live, both work at scale, and the question is when to use each.
The decision log
I use three questions to decide on every campaign.
Question 1: How mature is the audience? If the ad sets inside the campaign have months of conversion history and stable performance, CBO works. Meta has enough signal to reallocate sensibly. If the ad sets are new, with unknown CTR and unknown CPA, ABO gives you operator control over the first week or two while you get baseline data. Starting a new audience under CBO means Meta will often over-spend the ad set with the highest early CTR even if that ad set has the wrong downstream conversion shape.
Question 2: Are the ad sets comparable in size and expected CPA? CBO assumes the ad sets inside the campaign are roughly fungible, that shifting spend from one to the other is a pure optimization question. If one ad set is a prospecting audience and another is a retargeting audience with a 3x higher expected CPA, CBO will often over-spend the retargeting audience because its CPA reports lower. That is not the win it looks like, because the retargeting pool is finite and the math only works because you paid for the prospecting at a different stage. When ad sets are non-comparable, ABO keeps each at its intended spend level.
Question 3: How much total spend? CBO needs roughly $500/day of campaign-level budget to be meaningful, in my experience. Below that, you are not giving Meta enough signal to make good reallocation decisions. ABO's $100/day per ad set floor is more forgiving at smaller budgets.
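The three questions reduce to a short decision function. This is an illustrative sketch, not anything in Meta's tooling: the helper name and input shape are made up, and the thresholds (two-week maturity cutoff, 3x expected-CPA spread, $500/day CBO floor) are the rules from the text.

```python
# Hypothetical sketch of the three-question decision log.
# Thresholds mirror the rules above; nothing here is a Meta API.

def choose_budget_mode(ad_sets, daily_budget):
    """ad_sets: list of {'weeks_of_history': int, 'expected_cpa': float}."""
    # Q1: maturity -- any audience still in its first two weeks gets ABO
    if any(a["weeks_of_history"] < 2 for a in ad_sets):
        return "ABO"
    # Q2: comparability -- a ~3x expected-CPA spread breaks CBO's
    # assumption that ad sets are roughly fungible
    cpas = [a["expected_cpa"] for a in ad_sets]
    if max(cpas) >= 3 * min(cpas):
        return "ABO"
    # Q3: total spend -- below ~$500/day CBO lacks reallocation signal
    if daily_budget < 500:
        return "ABO"
    return "CBO"

mature_pool = [{"weeks_of_history": 12, "expected_cpa": 45},
               {"weeks_of_history": 20, "expected_cpa": 55}]
print(choose_budget_mode(mature_pool, 800))  # CBO
```

The same mature pool drops to ABO the moment you add a retargeting ad set at 3x the expected CPA, which is exactly the mixed-pool failure described under question 2.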
Where CBO shines
Evergreen prospecting campaigns with three or more proven broad audiences, each with months of conversion history, each running the same creative set. CBO handles the slight day-to-day shifts in audience saturation better than a human can.
Retargeting campaigns with multiple audience tiers (site visitors, cart abandoners, past purchasers) where the CPA spreads are predictable and Meta's reallocation is more signal than noise.
Campaigns using Advantage Plus Shopping, which is essentially CBO plus automated audience expansion. *When Advantage Plus Shopping is worth trusting* goes deeper on the specific cases where the automation pays off.
Where ABO shines
Creative testing campaigns. You want each ad set to get equal budget so the test results are not contaminated by Meta's reallocation. Budget equality is the test control.
Geo-split campaigns where each geo has a different CAC ceiling based on gross margin differences. CBO will happily over-spend the highest-ROAS geo, which is often also the lowest-volume geo, and you end up starved in the geos that actually matter to growth.
New audience launches. The first two weeks of a new audience's life need operator-controlled budget to establish baseline CPA before CBO can do anything useful with the ad set.
The dual-structure pattern
Most mature DTC ad accounts at $2-10M end up with a dual structure. CBO handles the stable, proven prospecting and retargeting pools. ABO handles the testing layer, the geo splits, and any new audience launches. The two live as separate campaigns, reconciled at the MER level, not the platform ROAS level, and the rule is written down so the next operator does not have to reverse-engineer it.
What breaks most often
Three patterns I see over and over.
Pattern one: CBO with one dominant ad set. The campaign has four ad sets but Meta pushes 90 percent of the budget to one of them. The operator thinks this means the dominant ad set is the winner. Sometimes it is. Often it is the ad set with the lowest CPA in the first 72 hours, which was a function of audience saturation timing, not actual underlying performance. Split into ABO for a week, look at the individual ad set performance on equal budgets, and compare.
Pattern two: ABO with hand-tuned budget chasing. The operator manually shifts $50 a day between ad sets based on yesterday's CPA. This is CBO with a human substituting their own reaction time for Meta's algorithm. Meta is faster and the human is introducing noise.
Pattern three: mixing CBO and ABO inside the same audience pool. Two campaigns, one CBO, one ABO, both targeting the same prospecting audience. This creates overlap, inflates frequency, and makes attribution harder. Pick one per audience pool.
The learning phase wrinkle
Both modes depend on roughly 50 optimization events per ad set per week to exit learning phase. At $200K monthly ad spend with a $100 AOV, and a CPA landing near that AOV, you have roughly 2,000 purchases per month to allocate across the account. If you spread those across 15 ad sets, no ad set gets 50 events per week. The campaign structure stays in learning forever, regardless of which budget model you picked. *Why Meta learning phase never exits* walks through the consolidation math.
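The consolidation arithmetic is easy to check directly. This is a back-of-envelope sketch, not a Meta formula, and it assumes CPA lands roughly at the $100 AOV so that purchases per month come out near spend / 100.

```python
# Back-of-envelope learning-phase check. Assumes CPA ~= the $100 AOV,
# so purchases/month ~= monthly spend / 100. Not a Meta formula.

WEEKS_PER_MONTH = 4.33

def weekly_events_per_ad_set(monthly_spend, cpa, n_ad_sets):
    purchases_per_week = monthly_spend / cpa / WEEKS_PER_MONTH
    return purchases_per_week / n_ad_sets

for n in (15, 6):
    events = weekly_events_per_ad_set(200_000, 100, n)
    status = "exits learning" if events >= 50 else "stuck in learning"
    print(f"{n:>2} ad sets: ~{events:.0f} events/week each -> {status}")
# 15 ad sets: ~31 events/week each -> stuck in learning
#  6 ad sets: ~77 events/week each -> exits learning
```

Fifteen ad sets splits the same 2,000 monthly purchases too thin for any of them to clear the threshold; six ad sets clears it comfortably, which is the consolidation point the pull quote below the paragraph makes.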
> The budget model is a smaller decision than how many ad sets you run. Six consolidated ad sets under either model will almost always outperform twenty fragmented ad sets under either model.
What to write down
Every paid social playbook should have a one-paragraph rule for when each mode gets used. Mine looks like this: CBO for mature prospecting pools at $500/day or more, CBO for retargeting with multiple tiers, ABO for creative testing and geo splits, ABO for the first two weeks of any new audience launch. Advantage Plus Shopping gets treated as CBO plus audience expansion, allowed only when the catalog and feed are clean.
Write your version down. Stick to it for three months. Compare results. Revise if the data tells you to. The worst version of this is running a different model on every campaign based on what felt right that Monday.
Should I ever run CBO and ABO inside the same campaign?
No. The setting applies at the campaign level: a campaign is either CBO or ABO, so there is nothing to mix inside it. What you can do is run two campaigns side by side, one of each, serving different roles. Just do not point both at the same audience pool.
Does Advantage Plus Shopping make this decision obsolete?
Only for the specific campaigns where you are willing to give up control of audience, placement, and creative rotation to Meta's automation. For creative testing, geo splits, and anything where you need to isolate a variable, Advantage Plus Shopping is the wrong tool regardless of how much Meta pushes it.
What minimum spend makes CBO actually work?
In my experience, $500/day campaign-level budget is the floor where CBO reallocation starts producing meaningful signal. Below that, you are basically running a random budget distribution.
Where to go from here
This piece is part of the paid social for DTC operators hub. If campaign structure feels unstable, *Why Meta learning phase never exits* is the next read. If you want to know whether your server-side measurement is clean enough to trust CBO's allocation decisions, the DTC Stack Audit runs the CAPI tracking-audit module alongside the rest of the Shopify stack checks.
