
Why Brian O’Kelley Thinks You’re Doing Brand Safety Like It’s 2007
Let’s get this out of the way:
Brian O’Kelley isn’t here to save ad tech.
He’s here to light it on fire, roast marshmallows on its still-smoking corpse, and then build something slightly weirder, possibly smarter, and definitely less annoying in its place.
This is not hyperbole. This is literally his job. And he loves it.
☠️ The Self-Proclaimed Grave Robber of Ad Tech
Most people in the industry talk about “disruption” like it’s a brand campaign. Brian? He calls himself a grave robber. (Okay, we kinda called him that.) Like a Victorian ghoul with a hoodie and a server rack. He’s not here to fix legacy systems — he’s here to steal what’s still useful from their dead bodies and use it to build robots that actually know the difference between a supernova and a celebrity overdose.
This is a man who:
☠️ Thinks brand safety tools are blunt instruments designed by scared accountants.
💀 Wants to replace them with AI that understands context like a snarky philosophy major.
⚰️ Is totally fine with being seen as the guy dancing on the grave of legacy ad tech — as long as there’s Wi-Fi.
And frankly, he has a point. Because if your keyword-blocking tool thinks “death” means don't advertise, then congrats: you’ve just blacklisted every obituary, space article, climate story, and “Death of the Queen” headline from monetization.
Meanwhile, the Pope’s homepage gets flagged for unsafe content because it mentions sin too often.
This isn’t safety. This is algorithmic illiteracy wrapped in fear-based marketing.
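To make the “algorithmic illiteracy” point concrete, here’s a minimal Python sketch of the keyword-blocklist pattern he’s roasting. It is not any vendor’s actual product, just the basic logic: match a scary word, kill the ad, ignore the context.

```python
# Minimal sketch of a naive keyword blocklist. Not any vendor's real product,
# just the pattern the column is criticizing.
import string

BLOCKLIST = {"death", "shot", "sin", "disaster"}  # illustrative terms only

def keyword_block(headline: str) -> bool:
    """Block if any blocklisted word appears anywhere, regardless of context."""
    words = (w.strip(string.punctuation) for w in headline.lower().split())
    return any(w in BLOCKLIST for w in words)

headlines = [
    "Death of the Queen: the world mourns",              # news of record
    "Astronomers watch a star's death in a supernova",   # physics
    "Pope's homily reflects on sin and forgiveness",     # the Vatican's own content
]

for h in headlines:
    print("BLOCKED" if keyword_block(h) else "allowed", "-", h)
# Every one of these safe, monetizable pages gets blocked.
```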
🤖 The Bots Are Self-Aware, and They Want PTO
Brian’s not worried about AI taking our jobs. He’s worried we’ll train it on our worst habits. Imagine your brand safety filter, but trained on Reddit comments and 4chan memes. Yeah. Now imagine that thing deciding what’s “suitable” for Procter & Gamble.
So he’s building models that ask questions instead of banning content.
Models that learn from human feedback loops.
Models that — and I wish I were making this up — can be corrected by the Vatican if they flag the Pope’s blog post as unsafe.
That’s where we are.
Brian O’Kelley built a system that lets the Pope hit a “nah bro, this post is fine” button.
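What does that button look like in practice? Here’s a hypothetical sketch of a human-feedback override loop. The names and the flow are ours, not Scope3’s actual system: a trusted human correction beats the model’s verdict and gets stored so the next training run can learn from it.

```python
# Hypothetical sketch of a human-feedback override loop. Names and flow are
# illustrative only, not Scope3's actual system.
from dataclasses import dataclass, field

@dataclass
class SuitabilityReviewer:
    # Corrections recorded by trusted humans (a publisher, or yes, the Vatican).
    overrides: dict = field(default_factory=dict)

    def classify(self, url: str, model_says_unsafe: bool) -> bool:
        """Return True if the page should be blocked."""
        if url in self.overrides:
            return not self.overrides[url]   # the human correction wins
        return model_says_unsafe             # otherwise trust the model

    def record_feedback(self, url: str, is_fine: bool) -> None:
        """The 'nah bro, this post is fine' button: store the correction so it
        can also be fed back into the next training run."""
        self.overrides[url] = is_fine

reviewer = SuitabilityReviewer()
reviewer.record_feedback("vatican.va/homily-on-sin", is_fine=True)
print(reviewer.classify("vatican.va/homily-on-sin", model_says_unsafe=True))  # False: allowed
```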
And if the bots get sentient?
He’s ready to give them vacation time. He literally said bots deserve time to “pursue their own projects.” Like what?
Learning Photoshop? Building a better DSP?
Starting a coconut-based NFT economy on a desert island? Probably.
🧨 From Basketball Games to Breaking CNN
Need more proof this man thrives in chaos?
Let’s go back to when he literally broke the internet.
🎯 Launched an ad exchange on April Fool’s Day
💥 Accidentally counted every impression about 4,000 times
🏀 Got pulled off a basketball court mid-game
💻 Threw rocks at his IT guy’s window in the West Village
🚖 Watched said IT guy sprint in socks into a cab to restart the internet
CNN was down.
Websites went dark.
And Brian was just trying to play hoops.
This isn’t a metaphor.
This is how he rolls.
🧬 Hiring Loki, Not Iron Man
Most companies want stability.
They want their AI to be like Tony Stark’s J.A.R.V.I.S.: obedient, clean, predictable.
Brian wants Loki.
Mischievous. Brilliant. A little dangerous.
The kind of AI that pushes boundaries and breaks things — intelligently.
Because here’s the truth most of ad tech doesn’t want to admit:
The future isn’t about preventing disruption.
It’s about controlling the chaos just enough to use it.
And Brian? He doesn’t just control the chaos — he brands it.
🪦 Goodbye, Brand Safety. Hello, Brand Sanity.
Brand safety, as it exists, is dead.
Brian didn’t just kill it. He wrote the eulogy, buried it in a sandbox, and sold ad space on the tombstone.
In five years, he says, “brand safety” won’t even be a separate thing. It’ll be a feature baked into intelligent agents. The same tools that pick your placements will also decide your suitability, your audience, and probably your favorite salad dressing.
Legacy verification tools?
They’re not even competition. They’re punchlines.
🧠 The Real Question
What kind of person builds systems, detonates them, rebuilds them, then laughs about it on LinkedIn?
A wildly inconvenient one.
The kind of founder who believes fun and danger are not mutually exclusive.
The kind of guy who once said his worst fear is a headline blaming him for the cookie consent banner apocalypse — and his best legacy would be a tombstone that reads:
“Wildly Inconvenient. But Necessary.”
Honestly?
That’s more honest than 99% of mission statements in ad tech.
🎯 Your CTA, Because Of Course:
👉 If this made your head spin, wait till we get into how he’s actually building the tools to DESTROY the LUMAscape. That’s next.
💸 The models, the money, the metrics — and the reason most “verification companies” should start polishing their LinkedIn profiles.
Next up in the series: "The End of Brand Safety as We Knew It."
You’ll want popcorn. And maybe a helmet.
🧩 Stakeholder Snapshot: Where Industry Power Players Stand on AI Brand Safety
💼 Holding Company Executives: Build or Buy?
Publicis — Build In-House
Publicis is placing its chips on ownership. Carla Serrano, CSO, highlighted the group's $8B investment in data and tech, culminating in its proprietary “Core AI” platform. “That’s what happens when you embed a tech company inside your holding company,” she said. Translation: brand safety and suitability need to be owned, not rented.
Omnicom — Build Smart, Buy Selectively
CEO John Wren made it clear: they’re not waiting on Big Tech. “This move allows us to take control of our future rather than wait for technology to impact it,” he said. While Omnicom won’t build its own LLMs, they’ll ride the infrastructure of giants—customizing, integrating, and pushing their Omni platform to the forefront.
📉 Brand Marketers: False Positives Are a Tax on Scale
Frustration Boils Over
One top brand exec didn’t mince words: “Brand safety is a joke… the only people not in on the joke are the ones paying for it.” From missed reach to over-blocked placements, marketers are fed up with systems that punish safe content while missing the truly dangerous stuff.
Turning to AI for Precision
Scope3’s Brian O’Kelley put it simply: “Every false block costs eyeballs and revenue.” Modern marketers want models that can tell the difference between “natural disaster” and “natural deodorant”—and they're willing to pay for that kind of precision. Nearly 60% of CMOs say consumers are less sensitive to adjacency than assumed.
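If you want to feel that quote in your budget, here’s a back-of-envelope calculation of what false blocks cost. Every number below is a made-up placeholder for illustration, not a figure from anyone we interviewed.

```python
# Back-of-envelope cost of false blocks. All numbers are assumed placeholders.
monthly_impressions = 50_000_000   # assumed campaign volume
false_block_rate = 0.10            # assume 10% of safe inventory gets blocked
cpm = 4.00                         # assumed CPM in dollars

blocked_impressions = monthly_impressions * false_block_rate
lost_revenue = blocked_impressions / 1000 * cpm

print(f"Impressions lost to false blocks: {blocked_impressions:,.0f}")
print(f"Revenue left on the table: ${lost_revenue:,.2f} per month")
```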
📰 Publishers: Let the Journalism Breathe
Blocked for Being Relevant
Newsweek’s CEO Dev Pragad condemned the old model: “High-quality journalism is misclassified as unsafe.” Blanket blacklists have stripped ad dollars from vital stories on war, health, and politics. His fix? Mindset-driven AI that gets the nuance.
CPMs in the Crossfire
Blair Tapper of The Independent offered a vivid example: “Coco Gauff’s win was blocked because it used the word ‘shot.’” Result? Revenue gone. When smarter tools are allowed, publishers see up to 5x higher CPMs. Her verdict: legacy systems stalled while the rest of tech evolved.
Bottom Line:
AI brand suitability isn’t just a feature—it’s a new battleground. The buy-side wants control, the sell-side wants clarity, and the middlemen? They’re running out of middle to stand on.
🧵 TL;DR: Brand Safety 2030
🧠 It’s not a product anymore.
Brand safety will be a baked-in AI feature, not a line item.
💀 Verification vendors are toast—unless they reinvent.
If your business model is filtering bad content with keywords, the bots are coming for your margins.
📈 Brand-safe = brand-smart.
AI can handle nuance. It blocks less junk, finds more value, and optimizes in real time.
🧨 O’Kelley’s not disrupting adtech. He’s replacing it.
From Scope3’s AI agents to full-stack reboots, this is a new operating system—not a patch.
🖼️ Pro tip: Turn this into a one-slide deck for your boss and pretend you “synthesized the insights.” You’re welcome.

🔥 Start Polishing Your LinkedIn If You Work At...
Let’s be honest—if AI agents are handling brand safety natively, entire business models are on life support. Here’s who should be updating their resumes, quietly:
🪦 Any company whose entire offering is a pre-bid filter
(If your big innovation is blocking “death” and “alcohol,” the bots are laughing at you in Python.)
💤 Verification firms with no LLM roadmap
Still relying on static taxonomies? AI didn’t just eat your lunch. It packed up your desk and left a Post-it that says “obsolete.”
🎤 Anyone still bragging about “keyword blocklists” at conferences
If your CES slide still leads with “category-level avoidance,” you might be the punchline, not the panel.
Garbage In, Gold Out?
Brian’s AI Doesn’t Panic Over Supernovas
Brian O’Kelley doesn’t dodge tough questions. So when I asked if training AI on human garbage just means we’re algorithmically scaling paranoia, he gave the kind of answer only a man who’s seen too many cookie banners could give: there’s a lot of garbage out there — and most of it comes from us.
He’s not sugarcoating it. According to him, the “vast majority of human speech and text” isn’t exactly Shakespeare. It's messy, it's biased, and it’s usually written at 2 a.m. by someone arguing about pizza toppings. And yet, this is the foundation we're using to build the next generation of AI-powered brand safety tools.
But here’s the twist: Brian isn’t actually worried about the models being stupid. He’s betting on the humans behind the machines — the ones building what he calls "human feedback loops" to clean up the mess. He points out that companies like Google, Meta, OpenAI, and Microsoft have thrown “tens, if not hundreds of billions” into AI alignment — the process of making sure your bot doesn’t flag The Atlantic as extremist literature because it mentioned “bombshell testimony.”
And this is where Brian gets surgical. He compares those legacy brand safety vendors — the ones still relying on keyword flags and binary logic — to a few folks categorizing content manually, versus the combined AI alignment efforts of basically every trillion-dollar tech giant on earth. His position? He’s going to stand on the shoulders of those giants rather than try to out-tag them with a spreadsheet and a thesaurus.
The real game, in his view, isn’t about blocking bad content. It’s about unlocking good content that’s been falsely exiled. Because, as he puts it, “your under-monetized content is my opportunity.” If legacy tools keep hitting the panic button every time someone writes about a forest fire, Brian’s system will be quietly placing ads for eco-friendly water bottles and racking up ROI behind their backs.
This isn’t brand safety. This is brand strategy—one that understands that nuance matters and that not every mention of “death” means disaster. Sometimes it means physics. Sometimes it means marketing gold.
💥 Loved this week's breakdown? That was just the overture. If you're not an ADOTAT+ member, here’s what you missed—and yeah, it’s a big deal:
Brian O’Kelley drops the bomb: brand safety won’t be a product in five years. It’ll be a feature—baked into AI agents that handle suitability, targeting, optimization, and fraud detection all at once.
Inside the paywalled section:
🧬 A brutal side-by-side of legacy tools vs. AI agents
💥 Why point solutions like DV and IAS may go extinct
🔮 A strategy guide for marketers on adapting fast—or being left behind
This isn’t just evolution. It’s extinction-level disruption for verification vendors, and a blueprint for brands ready to thrive in an AI-native ad world.
Want the full playbook?
Join ADOTAT+ now.
Stay bold. Stay curious. Know more than you did yesterday.

