
The Web Is Half Slop. Your Brand Safety Vendor Has No Idea Which Half.
Let's start with the number that should have broken the internet but instead got absorbed into a trade press trend round-up and died quietly on a Thursday afternoon: as of early 2026, more than half of all new English-language content published on the web is AI-generated. Not "some." Not "a growing share." Not "an emerging concern we should probably put on the roadmap." More than half. The majority. The web, which programmatic advertising has spent twenty years treating as a vast garden of authentic human expression worth monetizing, is now predominantly machine-produced.
Nobody sent a memo. Nobody called an emergency meeting. Nobody updated a single bidstream standard.
The Graphite study, which analyzed more than 65,000 articles published between January 2020 and May 2025, found that AI-generated content first surpassed human-authored content in November 2024, reaching approximately 52% of newly published articles by May 2025. In late 2022 that figure was 7.8%. That is nearly a sevenfold increase in three years, in case you were wondering whether this is a "wait and see" situation. It is not.

The programmatic supply chain absorbed every step of that transformation without adding a single quality standard specific to AI content. No new bidstream field. No updated GARM category. No SSP onboarding requirement. The same lightweight WHOIS check, ads.txt verification, and traffic analytics review that approved web publishers in 2018 is approving them right now — including the ones cranking out 10,000 synthetic recipe pages a month from a server farm in a jurisdiction you've never heard of.
Welcome to the AI content quality crisis. Nobody built a fire exit.
The Industry's Official Response Is About the Wrong Problem
To be fair to the IAB — and we try to be fair, even when it's painful — they did something. In January 2026, they released an AI Transparency and Disclosure Framework. A risk-based model. Thoughtfully constructed. Requires disclosure when AI materially affects authenticity in ways that could mislead consumers. Synthetic actors in commercials. AI-generated spokespeople. Deepfake testimonials. Chatbots cosplaying as humans in ad units.
Solid work. Necessary work. The wrong work for what we're talking about.
Here is what the IAB framework does not cover: the AI-generated page your ad is running against right now. The IAB framework protects consumers from AI in your ad creative. It says precisely nothing about AI in the publisher content surrounding it. The bidstream — that river of auction signals flowing through SSPs, ad exchanges, and DSPs at millions of auctions per second — contains zero standardized fields for "this page was produced by a generative AI system." None. Blank. The field does not exist.
"The industry built a fence around the ad creative. The publisher content that ad sits next to? Completely unwalled. Completely unmapped. And programmatic is buying all of it, every millisecond, at scale."
How AI Farms Get Into Your Media Plan Without Being Invited
The supply chain gap isn't an accident or an oversight. It's a structural feature of how programmatic inventory onboarding has always worked — and never been fixed.
When a new publisher applies to an SSP, the check is publisher-centric, not content-centric. Basic WHOIS verification. Domain age check. Ads.txt presence. High-level traffic analytics review. Content quality checks exist, but they're manual, sampled, and designed for a world where the bottleneck was finding enough publishers — not drowning in synthetic ones. Nobody updated the checklist when the game changed.
Once an AI farm passes one SSP's lightweight vetting — and it will, because AI bots can launch 50 fully active MFA websites in a single weekend — it appears as a legitimate reseller in other SSPs' seller.json chains. Buyers see a familiar SSP name. They don't see the domain behind it. Publishers plug multiple SSP endpoints into prebid wrappers with minimal human review. The AI farm gains multi-SSP access almost automatically, spreading across exchanges like a weed through a crack in the pavement — one that your brand safety vendor is charging you to fix while standing on the wrong side of.
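That propagation mechanic is easy to see in miniature. Below is a hypothetical sketch, in Python, of how one domain surfaces across multiple exchanges' sellers.json files once a single SSP has onboarded it. Every domain and seller ID here is invented for illustration; the only real concepts are the sellers.json `domain` and `seller_type` fields from the IAB Tech Lab spec.

```python
# Hypothetical sketch: tracing how one domain appears across multiple
# exchanges' sellers.json files once a single SSP lists it.
# All domains and seller IDs below are invented for illustration.

def exchanges_exposing(domain, sellers_json_by_exchange):
    """Return the exchanges whose sellers.json lists `domain`, split by
    whether they onboarded it directly (PUBLISHER) or merely expose
    another SSP's listing (INTERMEDIARY / BOTH)."""
    direct, resold = [], []
    for exchange, doc in sellers_json_by_exchange.items():
        for seller in doc.get("sellers", []):
            if seller.get("domain") == domain:
                if seller.get("seller_type", "").upper() == "PUBLISHER":
                    direct.append(exchange)
                else:
                    resold.append(exchange)
    return direct, resold

# One SSP vetted the farm directly; two others expose it as a reseller.
sellers = {
    "ssp-a.example": {"sellers": [
        {"seller_id": "1001", "domain": "ai-recipe-farm.example",
         "seller_type": "PUBLISHER"}]},
    "ssp-b.example": {"sellers": [
        {"seller_id": "2002", "domain": "ai-recipe-farm.example",
         "seller_type": "INTERMEDIARY"}]},
    "ssp-c.example": {"sellers": [
        {"seller_id": "3003", "domain": "ai-recipe-farm.example",
         "seller_type": "BOTH"}]},
}

direct, resold = exchanges_exposing("ai-recipe-farm.example", sellers)
print(direct)  # exchanges that vetted the domain themselves
print(resold)  # exchanges exposing it with no content review of their own
```

One lightweight approval, two free rides: a buyer looking at ssp-b or ssp-c sees a familiar exchange name in the chain, not the domain that slipped through ssp-a's checklist.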

Your Brand Safety Vendors Are Selling You Confidence They Cannot Quantify
Here is where we get uncomfortable. IAS has a product called "Low-Quality GenAI Avoidance." DoubleVerify has "AI SlopStopper for Open Web" — a name that is, genuinely, what they chose. HUMAN Security focuses on IVT and bot traffic. All three are actively marketing tools to solve the exact problem we're describing.
None of them have published a single independent precision/recall benchmark for AI content detection. Not one. Not a PDF. Not a footnote. Not a number someone accidentally left in a slide deck. The efficacy claims are entirely capabilities-based: "near real-time scoring," "pre- and post-bid controls," "real-time verification across open web inventory." What they are conspicuously not saying — because they cannot say it — is: here is our documented accuracy rate, audited by an independent third party, for detecting AI-generated content pages as distinct from general MFA or invalid traffic.
IAS's GenAI Avoidance tool entered beta only in April 2026. For a problem that has been compounding since late 2022. For a problem that, by the time the tool entered beta, already represented the majority of new web content. That is not a response. That is a participation ribbon handed out after the race ended.

And then there's Adalytics. In 2024, Adalytics published findings showing Fortune 500 brands — including Microsoft, Meta, and Disney — had ads served next to explicitly harmful content (pornography, racist material, the full inventory nightmare) while IAS and DoubleVerify brand safety code was active on those pages. IAS called the reports "inaccurately represented." DoubleVerify called the findings "entirely manufactured." Outside sources who spoke to trade press said the reports accurately reflected what the publisher and advertiser tags showed.
You can decide who you believe. The more relevant question: these are the vendors you're trusting to solve a harder, newer, more technically complex problem. The vendors who disputed whether their tools detected pornography accurately want you to trust them on AI-generated content at scale. As pitches go, that one needs some work.
"We can't detect pornography reliably but trust us on AI-generated content" is not a comforting pitch.
The Part That Should Make You Genuinely Uncomfortable
Here is the structural reason this problem persists, and it has nothing to do with technology.
The vendors selling AI avoidance tools profit from two things simultaneously. They profit from the existence of the problem — you need their tools. And they profit from the scale of open programmatic inventory they measure — a shrinking open exchange shrinks their addressable market. These are not aligned incentives. They are a vested interest in the problem remaining just unsolved enough that you keep paying for the solution.
This is not a conspiracy. It's an incentive structure. IAS and DoubleVerify are not lying to you. They're assembling valid evidence in ways that are consistent with their P&L. The risk camp emphasizes consumer trust erosion and adjacency damage. The opportunity camp — Zefr, Omnicom Media Group — produces research showing AI content can drive positive brand lift. Both camps are right. Neither has the full picture. And nobody has a financial incentive to produce the full picture, because the full picture is complicated, nuanced, and deeply inconvenient for everyone's positioning.
Here's what the data actually shows when you put it side by side:

| Response to AI-generated content | Media professionals |
|---|---|
| Alarmed by it | 56% |
| Excited to buy against it | 61% |

Same survey. Same year. 56% of media professionals alarmed. 61% excited. The industry is simultaneously afraid of AI content and eager to buy it, and has no tools to tell the difference between the inventory that will help your brand and the inventory that will humiliate it. That's not a trend. That's a product gap. A very large, very expensive, very industry-wide product gap.
The First Empirical Evidence — and What It Actually Shows
In March 2026, Zefr and Omnicom Media Group's research arm published the first empirical study of brand outcomes adjacent to AI-generated content. They tested nearly 5,000 US and Canadian consumers, exposing them to ads running next to eight distinct types of AI-generated video content. This is the study the whole industry should have done two years ago but didn't, because doing it would require acknowledging the question was worth asking.
The results were not what either camp wanted.
Ads adjacent to AI-generated satire, artistic content, and youth-focused entertainment generated positive brand perception — audiences described associated brands as "refreshing" and "innovative." Ads adjacent to AI-generated spam and misinformation produced sharply negative reactions, with financial services brands showing particular vulnerability. The decisive variable was not whether the content was AI-generated.
It was whether the content was any good.
"Treating all AI as a single risk category is both inaccurate and limiting." — Jon Morra, Chief AI Officer, Zefr, March 2026
This is genuinely good news, if you know how to act on it. Blanket AI avoidance isn't a brand safety strategy. It's a scale strategy, and a terrible one. When more than half the web's new content is AI-generated, blocking all of it means blocking the majority of available new inventory. 87% of programmatic spend is already in PMPs. The open exchange is shrinking. Blanket avoidance accelerates that flight, nukes your reach, and — per the Zefr/OM data — may actually deprive you of inventory that would have helped your brand.
The problem isn't AI content. The problem is that the programmatic supply chain has no reliable infrastructure to tell good AI content from bad. The vendors selling discrimination tools have published no evidence that their tools can do it accurately. And the industry standards bodies haven't defined the problem clearly enough to mandate a solution.
That is a lot of "nots" for a $200 billion industry.
What the Industry Actually Needs — and Doesn't Have Yet
The Zefr/OM research, combined with the DV consumer data and the IAS feature taxonomy, points toward a practical framework. Not AI versus human. Quality versus slop. The relevant classification isn't authorship — it's context, supply path integrity, and whether the inventory is designed to deceive or designed to serve an audience.

None of this framework exists in the bidstream today. There is no standardized quality signal field in the IAB Tech Lab Supply Chain API for AI content. GARM does not classify it. The IAB's AI framework covers creatives, not publisher pages. IAS, DV, and HUMAN are running real-time detection tools calibrated for a 2022 problem against a 2026 supply chain.
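To make concrete what "no standardized quality signal field" means: nothing like the object below exists in OpenRTB or any IAB Tech Lab spec today. This is a purely hypothetical sketch of what a standardized AI-content disclosure on a bid request's `site` object could look like; the `ext.aicontent` name, its fields, and the classifier string are all invented.

```python
# Hypothetical only: no such field exists in OpenRTB or the IAB Tech Lab
# specs today. This sketches what a standardized AI-content disclosure on
# a bid request's site object *could* look like.
import json

def with_ai_content_ext(bid_request, generation, quality_score, classifier):
    """Attach an invented `ext.aicontent` object to the site node."""
    assert generation in ("human", "ai_assisted", "ai_generated", "unknown")
    assert 0.0 <= quality_score <= 1.0
    site = bid_request.setdefault("site", {})
    site.setdefault("ext", {})["aicontent"] = {
        "generation": generation,        # who or what produced the page
        "quality_score": quality_score,  # slop vs. serves-an-audience
        "classifier": classifier,        # which detector produced the label
    }
    return bid_request

req = with_ai_content_ext(
    {"id": "abc123", "site": {"domain": "ai-recipe-farm.example"}},
    generation="ai_generated",
    quality_score=0.12,
    classifier="hypothetical-detector/0.1",
)
print(json.dumps(req["site"]["ext"]["aicontent"], sort_keys=True))
```

Three fields. That is the entire ask: authorship, a quality score a DSP could threshold pre-bid, and provenance for the label itself. The gap is not technical.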
And here's what makes it especially maddening: the data to build this framework exists. The bidstream signals are there. The SPO tools are there. The verification platforms are there. What's missing is the industry's will to standardize a solution that would reduce the ambiguity that currently funds several vendor business models.
That is the crisis. Not that AI content exists. Not that AI content is in your media plan. That the infrastructure to discriminate between good AI content and digital landfill at programmatic speed does not exist, the vendors claiming to provide it have published no evidence that it works, and the economic incentives of every major player in this ecosystem are oriented away from solving it cleanly.
Part II of this series builds the scoring framework you need to start solving it yourself. The bidstream signals. The DSP audit. The supply path forensics. The exclusion methodology that doesn't require you to trust a vendor who won't show you their accuracy numbers.
It's members-only. Obviously.
Frequently Asked Questions
What percentage of programmatic inventory is AI-generated content? No major forecaster has published a clean figure for "AI-generated content as a share of programmatic impressions" — and that absence is itself the story. What we know: 51.7% of newly published English-language web articles are AI-generated as of early 2026 (eMarketer/IAS/Graphite). Since programmatic buys against the open web in real time, that content is entering the supply chain continuously. The bidstream has no standardized signal to identify it. Nobody is counting.
How do AI content farms pass SSP onboarding? SSP onboarding remains publisher-centric, not content-centric. Standard checks: domain age, WHOIS verification, ads.txt presence, high-level traffic analytics. No automated detection for AI-generated text at onboarding. Once accepted by one SSP, a domain appears as a legitimate reseller in other SSPs' seller.json chains — gaining multi-exchange access with minimal additional scrutiny. One weak gate opens the whole supply chain.
Are brand safety tools accurate at detecting AI-generated content? None of the major brand safety vendors have published independent, audited precision/recall benchmarks for AI content detection specifically. IAS's GenAI Avoidance product entered beta in April 2026. Adalytics documented in 2024 that Fortune 500 brands had ads placed next to explicitly harmful content while both IAS and DV tags were active and reporting clean. Make of that what you will.
Is advertising next to AI-generated content always bad for brands? No — and this is the most important thing in this article. The Zefr/Omnicom Media Group study (March 2026, ~5,000 consumers) found ads adjacent to AI-generated satire, artistic content, and youth-focused material drove positive brand perception. Ads adjacent to AI spam drove sharply negative reactions. The decisive variable was content quality, not AI authorship. Blanket avoidance is not the answer. Discrimination is.
What is the financial cost of AI content farms to advertisers? ANA/TAG TrustNet Q2 2025 estimated $26.8 billion in annual programmatic value lost to redundant supply, measurement gaps, and low-quality inventory. Zero dollars of that total is publicly attributed to AI-generated content specifically. The industry is comfortable quoting a $26 billion waste figure but unwilling — or unable — to say how much of it flows to AI slop. The absence of that breakdown is a choice, not an accident.
What adtech trends in 2026 are most affected by AI content saturation? Supply path optimization, contextual targeting, brand safety technology, and programmatic media buying are all directly in the blast radius. The shift to PMPs (now 87.8% of programmatic spend) is partly a flight from open exchange quality uncertainty. Contextual targeting is experiencing a full renaissance as the cleanest available signal in an AI-saturated environment. The vendors who can credibly score AI content quality — not just flag it — are going to own the next cycle.

The Rabbi of ROAS