SPECIAL REPORT: Which Brands Bankrolled Child Exploitation

And Why They’d Like You to Forget About It

Two weeks ago, we told you this was coming. 

We laid it all out—the inevitable fallout, the industry-wide panic, the finger-pointing, the feigned shock. We even said that some of the biggest brands in the world were about to get caught funding the absolute worst kind of content imaginable.

And what did we get?

A chorus of denials. “That’s not possible.” “Adalytics is exaggerating.” “There’s no way brand safety tools would let this happen.”

And yet, here we are. Google, Amazon, Microsoft, and a lineup of Fortune 500 advertisers have been exposed—monetizing a website flagged by the National Center for Missing & Exploited Children (NCMEC) for hosting child sexual abuse material (CSAM).

And let me tell you something: This isn’t just another ad tech scandal to be swept under the rug. I’ve spent my career exposing the darkest corners of the internet, and I’ve worked these exact kinds of cases up close (more on that below). I can tell you this is not just some “unfortunate ad placement.” This is a systemic failure at every level of ad tech.

How Big Brands Ended Up Funding Child Exploitation—and Why They Can’t Pretend They Didn’t Know

Let’s be absolutely clear: this isn’t just some minor ad placement blunder, some errant banner ad that landed in an awkward place. This isn’t some mild embarrassment where a car insurance ad ends up next to a controversial news article.

No, this is something so grotesque, so absolutely unconscionable, that there are no PR-friendly words to soften it.

What did these companies—Sony, Pepsi, the NFL, Starbucks, Nestlé, Honda, Audible, Unilever, MasterCard, and even the U.S. government—actually fund?

Not just a sketchy website pushing pirated movies. Not just some digital Wild West filled with copyright infringement.

These brands, knowingly or not, were funneling ad dollars into ibb.co and imgbb.com, sites flagged by the National Center for Missing & Exploited Children (NCMEC) for hosting child sexual abuse material (CSAM).

Let that sink in. A website so toxic that law enforcement agencies actively monitor it for the most heinous crimes imaginable was being monetized by some of the biggest, supposedly most “responsible” companies on the planet.

It’s horrifying. It’s unforgivable. And it’s exactly what happens when an industry built on automation, obfuscation, and plausible deniability is allowed to operate unchecked.

Homeland Security’s Ads Ran on a CSAM Site. Read That Again.

If there’s a moment when the simulation breaks, when irony ceases to be a concept and reality just collapses in on itself, this is it.

The U.S. Department of Homeland Security—the very agency tasked with fighting human trafficking, online exploitation, and cybercrimes—had its own ads running on a site distributing CSAM.

I spent years working inside this world. Before I was in ad tech, before I was running media and calling out bad actors, I was a special investigator for the first-ever Electronic Crimes Task Force in the nation, in New York City.

The Secret Service brought me in. They didn’t bring in the bureaucrats. They didn’t bring in some clean-cut think tank analysts. They brought in hackers like me to track down the worst criminals on the internet.

And I can tell you, NCMEC wasn’t just some distant organization in our reports. They were in our office. We worked hand in hand with them. And the sites they flagged? Those weren’t gray areas. They were red alert, shut-it-down-immediately, send-in-the-feds, do-not-pass-go nightmares.

And yet, here we are. In 2025. With ads from Homeland Security running on a site flagged for CSAM.

If there is a more damning indictment of just how irreparably broken the programmatic ad supply chain is, I haven’t seen it.

The “We Didn’t Know” Defense Is an Absolute Joke

Cue the flood of carefully worded PR statements.

"We take this matter very seriously."
"We are investigating how this happened."
"We are working with partners to improve brand safety solutions."

Bull.

Because let’s get one thing straight: They didn’t know because they didn’t care to check.

This wasn’t a fluke. It wasn’t an anomaly. NCMEC flagged these sites back in 2021. That means there were years of warnings, years of reports, years of opportunities to pull the plug.

But did anyone in this trillion-dollar industry do a thing about it?

No.

Because ad tech thrives on not knowing.

If they actually tracked every ad placement, if they actually took accountability for every site they monetized, the entire system would collapse under the weight of its own negligence.

Instead, the system is designed for maximum plausible deniability (see the sketch in code after this list):

  • Brands throw money into the programmatic abyss, assuming their ads will magically end up in clean, reputable places.

  • Agencies trust whatever brand safety report lands on their desk, because actually vetting it would take time and effort.

  • Ad verification firms stamp “AI-powered” on their tools and call it a day.

  • Ad exchanges let it all happen, because whether an ad ends up on The New York Times or a CSAM-riddled cesspool, they still take their cut.
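
To make that hand-off concrete, here’s a minimal sketch in Python. Every name in it is hypothetical (this is not any vendor’s real API); it just shows how a “brand safe” label gets passed downstream while the one-line lookup that would have caught a flagged domain never runs:

```python
# Hypothetical sketch of the programmatic hand-off. No real vendor API is
# modeled here; the point is that each hop trusts the label it was handed,
# and nobody re-checks the domain where the ad actually renders.

FLAGGED_DOMAINS = {"flagged-image-host.example"}  # stand-in for an NCMEC-style flag list


def verification_vendor(domain: str) -> dict:
    # Scores the domain's *category*, never the page. "Image hosting" rates
    # as low-risk, so the placement comes back "brand safe."
    return {"domain": domain, "brand_safe": True, "method": "domain-category"}


def agency(report: dict) -> dict:
    # Trusts whatever report lands on its desk; vetting it would take effort.
    return report


def exchange(placement: dict) -> dict:
    # Clears the transaction and takes its cut, wherever the ad ends up.
    placement["exchange_fee_collected"] = True
    return placement


def brand_buys(domain: str) -> dict:
    # The brand only ever sees the final, laundered "brand safe" label.
    return exchange(agency(verification_vendor(domain)))


placement = brand_buys("flagged-image-host.example")
print(placement["brand_safe"])                 # True: every hop signed off
print(placement["domain"] in FLAGGED_DOMAINS)  # True: one set lookup would have caught it
```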

And when it all blows up?

  • The brands blame the agencies.

  • The agencies blame the verification firms.

  • The verification firms blame the AI.

  • The ad exchanges shrug and say “we’re just a platform.”

And then, of course, the same cycle repeats.

Follow the Money—Everyone Got Paid

Let’s not lose sight of what this means.

These sites—the ones hosting CSAM? They were making money.

That’s the real horror here.

Ad dollars keep these sites alive.

That money pays for their hosting. It pays for their bandwidth. It lets them persist and thrive in the shadows of the internet.

And thanks to ad tech’s broken system, Sony, Pepsi, Nestlé, and the U.S. government helped fund it.

Let’s be real—if this were any other industry, heads would roll.

  • If a major bank got caught funneling money to traffickers, there’d be Congressional hearings.

  • If a pharmaceutical company were found funding an illegal opioid ring, there’d be lawsuits by the morning.

  • If a defense contractor accidentally sent weapons to a terrorist group, executives would be in handcuffs.

But in ad tech? We’ll get zero accountability, a few vague “brand safety enhancements,” and a continued insistence that AI will somehow fix it next time.

“Brand Safe” – According to Whom?

Advertisers checked their records and found that their brand safety verification vendors had labeled 100% of these placements as "brand safe."

Yep. DoubleVerify and Integral Ad Science—two companies that charge brands millions for “protection” against this very thing—rubber-stamped these ads as perfectly fine.
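
And here is the mechanism in miniature. A toy illustration (the data and field names are hypothetical, not either vendor’s actual logic): if measurement aggregates at the domain level, every impression on a “safe” domain counts as safe, no matter what was actually on the page.

```python
# Toy illustration of a "100% brand safe" report. Hypothetical data; not
# DoubleVerify's or IAS's actual logic. The rating is keyed on the domain,
# while the abuse lives on individual pages the tool never fetched.

DOMAIN_RATINGS = {"imgbb.com": "brand_safe"}  # category "image sharing" => low risk

served_impressions = [
    {"domain": "imgbb.com", "page": "/abc123"},  # page content: never scanned
    {"domain": "imgbb.com", "page": "/xyz789"},  # page content: never scanned
]

safe = [i for i in served_impressions
        if DOMAIN_RATINGS.get(i["domain"]) == "brand_safe"]

print(f"{len(safe) / len(served_impressions):.0%} brand safe")  # prints "100% brand safe"
```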

Let’s hear from someone who trusted these tools:

“Our reports from our measurement providers showed these sites were 100% brand safe. That’s just horrifying. We rely on these systems, and they failed us completely.”

And here’s my friend Rob Leathern, former head of Google’s Privacy and Data Protection Office, with the understatement of the year:

“The degree to which publishers and advertisers can be anonymous on these websites is a problem … People can hide behind anonymous web domains, and I don’t think that’s something we should accept as society.”

Well, no kidding.

And yet, the ad tech industry has let this happen for years.

Congress Smells Blood

The usual playbook after an ad tech scandal?

  1. Act shocked.

  2. Issue a vague PR statement about “commitment to transparency.”

  3. Do nothing.

Except this time, Congress is watching. And they are pissed.

Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.) just fired off letters to the CEOs of Google, Amazon, DoubleVerify, IAS, the Media Rating Council (MRC), and the Trustworthy Accountability Group (TAG) demanding answers.

And they are not mincing words:

“Where digital advertiser networks like Google place advertisements on websites that are known to host such activity, they have in effect created a funding stream that perpetuates criminal operations and irreparable harm to our children.”

Translation: You knew. And if you didn’t, you were grossly incompetent.

Oh, and if any of these companies thought they could wave around their AI-powered brand safety tools as a defense? Think again:

“The ability of AI to catch harmful content has been vastly overstated. Your company continues to profit from a system that has failed to prevent ads from funding criminal operations.”

For the ad verification giants, the questions get even uglier:

“How much revenue has your company made from measuring ads on these offending websites?”

Translation: If you knew this was happening, you profited from it. If you didn’t, you failed at your core job. Pick one.

Amazon, Google, and the “We Had No Idea” Defense

Predictably, the companies caught in the scandal are scrambling for cover.

The usual PR jargon is rolling out:

  • “We take this very seriously.”

  • “We are reviewing our policies.”

  • “We will work closely with partners to ensure better outcomes.”

But the real question is: How many times are we going to see this play out before something actually changes?

Adalytics has been publishing reports for years exposing how programmatic ad placements fund fraud, scams, and worse. And yet, nothing changes.

A media buyer summed it up perfectly:

“Everyone’s been playing hot potato with accountability. It’s the DSP’s fault, it’s the SSP’s fault, it’s the verification vendor’s fault. At the end of the day, the whole system is a black box.”

And that’s by design.

The Ad Tech Industrial Complex: A System Designed to Fail (But Profit Anyway)

Ah yes, the age-old ad tech kabuki theater. A scandal breaks. Brands act shocked. Agencies claim they had no idea. Verification firms spout some AI-powered nonsense about “enhancing detection algorithms.” Ad exchanges pretend they’re just neutral middlemen.

And the money? Oh, the money keeps flowing.

Because here’s the dirty little secret: This isn’t a bug. It’s a feature.

Every time something like this happens, the industry follows the same script. Brands sigh deeply and insist they trusted the brand safety tools. Agencies throw up their hands and say, we rely on third parties for this stuff! Ad verification firms go full Professor Frink and mutter about "complex algorithmic challenges." And ad exchanges? They’ll tell you they don’t actually control where the money goes, they just move it around. Like a bank, but without the regulations or consequences.

And so it continues.

Because no one in this system actually wants to know where the money is going. The whole industry is built on deliberate opacity. The second you start pulling at the thread, you see how deep the rot goes. But no one pulls too hard because—well, why would they? The machine is running just fine, thank you very much.

For years, ad tech has functioned on an “ask no questions, cash the checks” model.

  • Brands pay for “protection” from the very system that keeps failing them.

  • Agencies trust the reports because scrutinizing them would mean admitting they don’t actually know where half their clients’ budgets go.

  • Verification firms slap an “AI-powered” label on the problem and move on.

  • Ad exchanges let it all happen because they skim a little off every transaction—whether it’s funding CSAM or cat videos, they get paid either way.

And when the inevitable disaster happens? When ads for Fortune 500 brands show up next to the absolute worst content imaginable? The system doesn’t change. It just… reacts. Temporary outrage. Vague commitments to “doing better.” Maybe a flashy new “brand safety solution” that conveniently requires another expensive contract. And then? Business as usual.

The most laughable part of all of this is the “We had no idea!” routine.

Except they did. They always do. They just didn’t care enough to check.

What Happens Next?

Congress is now considering multiple bills aimed at cracking down on online child exploitation and demanding more accountability from platforms and ad tech.

  • The Stop CSAM Act would require platforms to report CSAM, mandate annual transparency reports, and impose new liability on companies that fail to stop it.

  • The Kids Online Safety Act (KOSA) has 70 co-sponsors and would force platforms to protect minors from harmful content.

And now, with brands being publicly dragged into this mess, they’ll be forced to demand real transparency—or risk another scandal that could make this one look tame.

The Reckoning Is Here—Will Anything Change?

Adalytics’ report is just the latest in a string of industry-shaking exposés. More will come. And every time, the pattern will repeat.

Unless, of course, advertisers finally wake up and demand a real, traceable supply chain—one that doesn’t just rubber-stamp sites as “brand safe” and hope nobody notices when things go wrong.
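
What would “traceable” even look like? One building block already exists: ads.txt, the IAB Tech Lab standard under which a publisher lists, at a well-known URL, exactly which sellers are authorized to sell its inventory. Here’s a minimal sketch of checking a buy path against it (illustrative only; a real pipeline would also cross-check sellers.json and the OpenRTB SupplyChain object):

```python
# Minimal sketch of one traceability check: pull a publisher's ads.txt and
# confirm the exchange you bought through is actually authorized to sell
# that publisher's inventory. Illustrative only; production systems also
# cross-check sellers.json and the OpenRTB SupplyChain (schain) object.
import urllib.request


def authorized_sellers(domain: str) -> set[tuple[str, str]]:
    """Fetch https://<domain>/ads.txt and return (ad_system, account_id) pairs."""
    with urllib.request.urlopen(f"https://{domain}/ads.txt", timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    sellers = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:  # ad system domain, account ID, DIRECT/RESELLER
            sellers.add((fields[0].lower(), fields[1]))
    return sellers


# Hypothetical check against a placeholder publisher domain: was this
# impression sold through an authorized path?
sellers = authorized_sellers("example.com")
print(("some-exchange.com", "account-1234") in sellers)
```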

Until then? Don’t believe the “we take this seriously” PR spin.

Believe the receipts.

The Ad Verification Industry Just Had Its “Oh Sh*t” Moment—And Fumbled It

Sometimes, an industry gets a moment of reckoning so severe that anything short of absolute outrage and swift action is a moral failure. This is one of those moments.

Adalytics just exposed something horrifying: some of the biggest companies in the world—MasterCard, Nestlé, Starbucks, Unilever, even the U.S. Government—had their ads appearing alongside child sexual abuse material (CSAM) and hardcore pornography on ImgBB, a widely used image-hosting site.

This is the nightmare scenario for any brand, but more importantly, it's a failure that should send shockwaves through the entire ad tech industry. The very companies that claim to provide “brand safety” and “media quality verification” were asleep at the wheel while ad dollars were funneled into a platform hosting illegal content.

Subscribe to our premium content at ADOTAT+ to read the rest.
