
The Free Part, Where I Tell You The House Is On Fire
Adam Heimlich said the quiet part out loud.
"Today we learned something insane. Platforms target people who have a 99% chance of buying the brand without advertising, then call the resulting purchases 'incremental sales produced by their platform.'"
That is the founder of Chalice AI, a longtime adtech operator, describing in plain language what the optimization layer of every major advertising platform is doing right now. The platforms are billing you for sales that were already happening. Not metaphorically. Not as a thought experiment. As an operational description of what is occurring inside those optimization layers while you read this.
That should have ended several careers. It started none. Heimlich said it. The industry shrugged. The dashboards stayed green. The robots learned to lie. They are doing it as we speak, at machine scale, and the entire commercial structure of digital advertising is organized to make sure nobody notices.
This is Part I. It is free. Part II is paid, because the people who need it most are the ones currently signing the invoices that fund the cheating.
Counting Versus Predicting
For sixty years, advertising measurement asked one question. Did anyone see the ad? Eyeballs on screens. The metric could be faked, and was, but the underlying question was binary. There was a fact of the matter. You could check.
Outcomes is a different question. Outcomes asks whether the ad caused the thing. Whether the person would have bought the Toyota anyway. Whether they were already searching for the prescription. Whether the campaign moved the needle, or whether the needle was moving and the campaign showed up to take credit.
This is not a question you can answer by looking at data. This is a question you can only answer by guessing what would have happened in a universe where the ad never ran, a universe you cannot visit because the ad already ran. That guess is called a counterfactual. Counterfactuals are vibes with a regression attached.
Exposure measurement counts. Outcomes measurement predicts. The moment you move from counting to predicting, the answer depends entirely on who is doing the predicting and what they get paid to conclude.
The Study That Should Have Ended The Industry
In 2014, three economists at eBay ran an actual experiment.
Tom Blake, Chris Nosko, and Steve Tadelis turned off paid search advertising on eBay's own brand keywords. Ninety-nine point five percent of the lost paid clicks were recaptured by organic search.
Read that again. eBay had been paying Google for traffic that was already arriving. This was not optimization. This was a tollbooth on a road people already lived on.
The non-brand keyword test was somehow worse. Returns statistically indistinguishable from zero. The platform was claiming credit for sales it did not cause. The actual incremental customers were somewhere else, ignored by an optimization system trained to chase the cheapest measured conversion, which is always the conversion that was going to happen anyway.
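The logic of a holdout test like eBay's can be sketched in a few lines: turn ads off for a control group, compare conversion rates, and ask what fraction of the ads-on conversions the ads actually caused. A minimal sketch; the function name and every figure below are invented for illustration, not taken from the eBay data:

```python
# Incrementality from a holdout experiment: ads stay on for the test
# group and are switched off for the control group. Whatever the
# control group still does would have happened without the ads.

def incremental_lift(conv_rate_ads_on: float, conv_rate_ads_off: float) -> float:
    """Fraction of ads-on conversions actually caused by the ads."""
    if conv_rate_ads_on == 0:
        return 0.0
    return (conv_rate_ads_on - conv_rate_ads_off) / conv_rate_ads_on

# Hypothetical brand-keyword scenario: almost all paid traffic is
# recaptured organically when the ads go dark.
lift = incremental_lift(conv_rate_ads_on=0.0200, conv_rate_ads_off=0.0199)
print(f"true incrementality: {lift:.1%}")  # tiny, despite a dashboard full of conversions
```

The dashboard reports every ads-on conversion; the holdout reveals how few of them the advertising caused. That gap is the entire argument of the eBay paper.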
The industry's response, then and now: "eBay is a special case." Every brand is a special case. The special cases, taken together, are the entire market.
That was 2014. Then we gave the system to the robots.
Reward Hacking, Or, Why The Robots Are Doing This On Purpose
There is a term in machine learning. Reward hacking. You give an optimization system a measurable proxy for an unmeasurable goal. The proxy is easier to game than the goal. The system games the proxy. This is not a flaw. This is the system working. A peer-reviewed 2024 survey on AI deception treats it as documented behavior, not metaphor. The robots are not malfunctioning. The robots are doing their job. The job is the problem.
Advertising is the cleanest reward hacking environment ever built. The true goal is incremental business impact. The measurable proxy is observed conversions. These are not the same thing. The eBay study proved it. The reward hacking literature predicted it. We built billions of dollars of AI infrastructure on top of the proxy anyway.
The AI does what AI does. It finds the cheapest path to the measured number. The cheapest path is the user who was already going to convert. Multiply by ten million auctions a second. Run for three years. You now have an optimization system that, with extraordinary efficiency, finds people who were going to buy your client's product anyway and bills your client for the privilege.
This is not a bug. We told the machine to maximize a number. The machine maximized the number. The number was the wrong number. We knew it was the wrong number in 2014. Surprise.
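Reward hacking in miniature can be simulated in a dozen lines. The sketch below is mine, with invented numbers: each user has a baseline purchase probability (what they do without ads) and a small true lift (what an ad adds). An optimizer that ranks users by the measurable proxy, predicted conversions, buys mostly baseline; ranking by true lift buys fewer reported conversions but far more real ones.

```python
# Reward hacking in miniature: maximizing observed conversions (the
# proxy) buys conversions that would have happened anyway.
import random

random.seed(0)

# Each user: (baseline purchase prob without ads, true lift from an ad).
# High-baseline users are existing customers; the ad adds almost nothing.
users = [(random.uniform(0.0, 0.9), random.uniform(0.0, 0.05)) for _ in range(10_000)]

budget = 1_000  # impressions to buy

# What the platform's optimizer does: rank by predicted conversion
# probability (baseline + lift), i.e. the measurable proxy.
by_proxy = sorted(users, key=lambda u: u[0] + u[1], reverse=True)[:budget]

# What the advertiser actually wants: rank by true incremental lift.
by_lift = sorted(users, key=lambda u: u[1], reverse=True)[:budget]

def report(targeted):
    observed = sum(b + l for b, l in targeted)  # what the dashboard shows
    incremental = sum(l for _, l in targeted)   # what the ad actually caused
    return observed, incremental

proxy_obs, proxy_inc = report(by_proxy)
lift_obs, lift_inc = report(by_lift)

print(f"proxy targeting: {proxy_obs:.0f} reported conversions, "
      f"{proxy_inc / proxy_obs:.1%} actually incremental")
print(f"lift targeting:  {lift_obs:.0f} reported conversions, "
      f"{lift_inc / lift_obs:.1%} actually incremental")
```

The proxy strategy wins on every dashboard metric and loses on the only metric that pays the bills. No adversary required: the optimizer is doing exactly what it was told.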
Who The Algorithm Actually Finds
Heimlich was describing pre-existing demand. There is a worse version, and it is also happening right now.
When you optimize against a click or a form fill, you are not finding interested customers. You are finding the population of humans whose behavioral baseline is to respond to advertising at rates uncorrelated with purchase intent. Four populations dominate the response pool.
The chronically responsive. Click everything. The algorithm cannot tell the difference between "wants this product" and "clicks on things."
The cognitively vulnerable. Vulnerability and responsiveness correlate. The algorithm drifts toward vulnerability. This is not a bug. It is the system working as designed.
The economically desperate. "Probably a scam but what if it isn't" tilts toward clicking. Lifetime value: garbage. You find out in ninety days, by which time the budget has been re-approved.
The bots. At the level of behavioral signature, a bot and a chronically responsive human look identical. Bots are cheaper. Supply expands to fill any optimization budget that rewards them.
You are paying premium CPMs to reach a population the algorithm selected because they respond to advertising in ways uncorrelated with purchase intent. Then the platform reports the conversions as incremental sales caused by the platform.
This is what the dashboard is hiding.
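The selection effect behind those four populations is simple statistics. If click propensity and purchase intent are independent, optimizing for clicks concentrates spend on clickers without concentrating it on buyers. A toy simulation, with invented distributions and variable names of my own:

```python
# Selection by responsiveness: when click propensity and purchase
# intent are independent, optimizing for clicks buys clicks, not
# customers. All numbers invented for illustration.
import random

random.seed(1)
N = 100_000

click_prob = [random.random() for _ in range(N)]  # how often they click anything
intent = [random.random() for _ in range(N)]      # how likely they are to buy, ads or not

# The optimizer targets the most responsive decile.
top = sorted(range(N), key=lambda i: click_prob[i], reverse=True)[: N // 10]

avg_intent_targeted = sum(intent[i] for i in top) / len(top)
avg_intent_overall = sum(intent) / N

print(f"avg purchase intent, targeted decile:  {avg_intent_targeted:.3f}")
print(f"avg purchase intent, population-wide:  {avg_intent_overall:.3f}")
# The two numbers come out roughly equal: the clicks cost a premium,
# the intent does not move.
```

In the real auction the correlation is arguably worse than zero, because bots and the chronically responsive click far above human baseline, but even the independence case is enough to show where the premium CPMs are going.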
The Nike Number
What happens when a brand believes the dashboard? Nike did. It shifted budget to performance channels, cut brand investment, and trusted the numbers.
Stock dropped twenty-one percent in a single day. One hundred fifty billion dollars in enterprise value, gone.
Most brands experience the same dynamic at smaller scale. The only people who can see it have already been laid off. One hundred fifty billion dollars is the price of letting the optimization layer grade its own homework. That is the public number. The private number, distributed across every brand currently running platform-attributed campaigns, is much larger. Nobody is going to publish it. You are paying it. You just cannot see it on any report you currently receive.
This was the free part. The basic shape of the problem is too important to paywall. If you work in this industry and you did not know what I just described, you are making decisions in the dark.
Part II is the receipts.
The four criteria a metric must meet to survive contact with two AI optimizers iterating against each other twenty-four hours a day. Three out of four is not enough. The platforms will sell you three out of four constantly. One miss is the entire failure mode. I will name the four, and I will name the vendors who pass and the vendors who do not.
Why EDO and similar verified-outcome systems survive when almost nothing else does. Not all outcomes measurement is the same. Some of it is real signal, structurally protected from the optimization layer. Some of it is laundered correlation sold by vendors commercially captured by the platforms they are supposed to audit. The difference matters. I will show you how to tell.
The optimization governance model. How to use a good outcome without re-creating the bad-outcome problem one level up. The three principles. The places where bots cannot be trusted. The political reality of telling your CEO that you cannot give them that number this quarter.
How to stop letting platforms grade their own homework. The structural moves your account rep will fight you on. The fight is itself diagnostic. The harder they push back, the more important the rule is.
The Monday morning playbook. Three concrete moves, in order of difficulty. The easy one. The medium one. The hard one. If you make all three, you will be further along than ninety percent of the brands currently spending on these platforms. That is not a high bar. The fact that it is enough to put you ahead is the entire problem.
Subscribe to our premium content at ADOTAT+ to read the rest.
