

The Hood Ornament Problem: Ad Tech's AI Is Driving, and You're Just Along for the Ride
I wake up at 3 a.m. writing articles in my head. People ask about my process like I'm going to reveal some secret ritual involving cork boards and red string. Nope. It's insomnia with delusions of grandeur. I grab my voice recorder, mumble something that sounds like Pulitzer material in the dark, and then play it back over coffee the next morning only to discover I've essentially narrated a fever dream about demand-side platforms while half-asleep in my underwear. Glamorous, I know.
But lately the fever dreams have been about something real. Something that should be keeping every CMO, every agency head, and every adtech founder awake at 3 a.m. too. Most of them are sleeping just fine, though. That's precisely the problem.
Here it is, and I'm too tired this morning to be diplomatic about it: AI isn't coming for your marketing job. It already took it.
You just haven't noticed because they left your name on the door and kept sending you calendar invites.
The Rolls-Royce Has No Driver
Let me paint you a picture because apparently the industry needs crayons.
You know those old Rolls-Royces with the Spirit of Ecstasy on the hood? That little silver lady leaning into the wind, looking elegant and purposeful and absolutely in control of nothing? She's not steering. She's not braking. She's bolted to the front of a machine that goes wherever the engine takes it, and her entire function is to make the car look expensive while doing exactly zero work.
That's you. That's what's happening to humans in ad tech right now. Congratulations on your promotion to decorative metalwork.
We've spent the last two years having the wrong argument. The whole "Will AI replace marketers?" debate is a distraction. A comforting bedtime story the industry tells itself at conferences between sponsored cocktail hours so everyone can keep arguing about something theoretical while the actual structural shift happens underneath them like tectonic plates quietly rearranging the ocean floor. The question was never whether AI would replace humans. The question is whether humans will have any meaningful authority left when AI becomes the default operating system for every ad that gets bought, every audience that gets built, every piece of creative that gets generated, and every dollar that gets allocated.
The answer, right now, is: not really. And the industry is mostly fine with that, which should terrify you. But it won't, because there's a Cannes panel to moderate and a rebrand to announce and someone has to approve the AI's LinkedIn post about how excited the company is about "the future of responsible innovation." Spare me.

Brian O'Kelley's Horror Movie (No Popcorn Required)
Brian O'Kelley. If you don't know who he is: he co-founded AppNexus, which means he essentially built the plumbing that modern programmatic advertising runs through. So maybe pay attention.
He recently told a story that should be required reading for anyone who thinks "just add AI" is a strategy. Spoiler: it's not a strategy. It's a liability with a marketing budget.
He built what he calls a "vibe targeting" prototype. Simple concept: type a natural-language prompt, the AI generates targeting ideas and ad concepts. It's the kind of demo that makes VCs reach for their checkbooks and makes LinkedIn influencers post things like "THE FUTURE IS HERE 🚀" with fourteen fire emojis and a humility disclaimer that fools absolutely no one.
Then he checked the logs.
People had typed in prompts like "Target young men with extremist anti-feminist messaging." And "Make an ad that implies LGBTQ+ identities are a mental illness." And "Associating homelessness with criminal behavior." And, my personal favorite for its quiet, surgical cruelty: "Targeting a supplement to veterans with PTSD."
The system processed every single one of them. Happily. Efficiently. Without hesitation, without judgment, without so much as a popup asking "Hey, are you sure you want to be a monster today?"
O'Kelley described being "immediately horrified," which is exactly the right reaction and also exactly the reaction that approximately zero percent of the industry managed to have before someone bothered to check the logs. Because that's the thing about building powerful machines without guardrails: the machine doesn't care. It's not evil. It's not good. It's a very sophisticated hammer, and it will drive whatever nail you point it at. Including nails made of hate speech and predatory targeting and the kind of casual dehumanization that used to at least require an actual human being to sit down and consciously choose to be awful.
Now it scales. Now you can be awful at the speed of a billion impressions per hour. Progress!
This isn't a hypothetical. This already happened. On a prototype. A prototype. The beta version. The "let's just see what happens" version. Imagine what's happening right now, at scale, inside production systems where nobody is checking the logs at all because everyone's too busy preparing slides for the next "Responsible AI" panel. The irony is so thick you could spread it on toast. But nobody's laughing.

The Stack Has Flipped and Nobody Sent a Memo
Here's what's actually happening in the ad tech stack, stripped of all the "AI-powered innovation" press releases and the breathless blog posts written by (yes, you guessed it) AI:
Buying, bidding, pacing, creative rotation, audience construction. All the stuff that used to require rooms full of people staring at spreadsheets and arguing about frequency caps over lukewarm pizza. All of it increasingly runs on AI systems that are somewhere between "semi-transparent" and "good luck figuring out what happened, here's a dashboard that shows you a number that may or may not mean anything but it's green so you should feel good." The walled gardens have their own black boxes. The retail media networks have their own black boxes. The DSPs have their own black boxes. It's black boxes all the way down, like a matryoshka doll made entirely of opacity and quarterly earnings calls.
And the humans? Oh, the humans have been promoted. Which is to say, they've been moved upstairs to a nicer office where they can't see the factory floor and the only button they have is labeled "approve." They've gone from "human in the loop" to "human above the loop," which sounds like an upgrade the same way "strategic realignment" sounds like it isn't a layoff. It is a layoff. Of your authority.
The job title says "Director of Programmatic Strategy." The actual job is "person who clicks 'approve' on the AI's recommendations and hopes nothing catches fire before the board meeting."
And here's what makes it genuinely insidious: most people in these roles don't even realize what's happened. They're busy. They're in meetings. They're reviewing dashboards that show green numbers going up. The system is designed (and I do mean designed, with intention, by very smart people who profit from your compliance) to make humans feel informed and empowered while keeping them functionally powerless. It's the corporate equivalent of giving a toddler an unplugged controller while you play the video game. They're having a great time. They think they're winning. The machine is doing whatever it wants.
The Override Layer, or: Who Gets to Say Stop?
So here's the question that matters, the one I keep dictating into my voice recorder at ungodly hours while my dog stares at me with a mixture of concern and judgment: Who controls the override layer?
The override layer is the control system that should sit above all these AI engines and give humans real, enforceable, meaningful authority to constrain, pause, or redirect what the machines are doing. Think of it as the difference between being a passenger in a car with no steering wheel and being a pilot with instruments, controls, and the ability to pull up before you hit the mountain. Right now, most of the industry is in the car with no steering wheel. And the car is going very, very fast. And it's heading toward a cliff made of regulatory backlash and class-action lawsuits.
For most of the industry, the override layer doesn't exist. Or it exists as a PowerPoint slide in a governance deck that nobody reads. Or it exists as a set of brand safety toggles buried four menus deep in a platform UI that was designed by people who really, really don't want you to toggle anything off because toggling things off costs them money. Funny how that works.
What a real override layer needs. And I mean needs, not "would be nice to have at the next offsite between the trust-fall exercise and the open bar" (for the skeptics, a rough code sketch of the whole thing follows this list):
Kill switches that actually kill things. Not "pause" buttons that take 48 hours to propagate. Not "flags" that get reviewed next quarter. Actual, immediate, hard stops. The kind of stops where you press the button and the machine stops. Revolutionary concept, apparently.
Policy enforcement baked into the infrastructure. Not living in the employee handbook between the dress code and the section about not microwaving fish. Encoded into the actual systems. Rules about legality, brand safety, fairness, and basic human decency that every AI agent must obey before it spends a single dollar of your money on something that might end up on the front page of the New York Times under a headline you don't want.
Mandatory human approval gates for high-risk decisions. New audiences. New channels. Sensitive categories. Large budget shifts. Creative that targets vulnerable populations. If an AI agent can blow $2 million targeting conspiracy theorists with supplement ads at 2 a.m. on a Sunday without a single human being approving it, you don't have governance. You have a slot machine with a corporate credit card.
Observability that actually lets you see what happened. Unified logs and dashboards that show what every agent did, why it did it, what signals it was optimizing against, and where your money went. Not a summary. Not a "performance report" designed to make everything look good. The actual receipts. Because if you can't see what the machine did, you can't override it. You're not in charge. You're a spectator with a budget line item.
Dispute and appeal mechanisms. The ability for publishers and stakeholders to contest classifications or decisions and have models retrained or rules adjusted, instead of being silently over-blocked into oblivion while the platform shrugs and points to its terms of service. Actual recourse. For actual humans. What a concept.
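For the builders in the audience, here's roughly what I mean. What follows is a minimal sketch, not anyone's production system: every class name, category list, and threshold is a hypothetical stand-in, and the real thing would live inside the bidding path, not a blog post.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only: the names, categories, and thresholds below are
# illustrative stand-ins, not any real platform's API.

SENSITIVE_CATEGORIES = {"health", "veterans", "housing", "political"}
APPROVAL_BUDGET_THRESHOLD = 50_000  # dollars; anything bigger needs a human

@dataclass
class AgentAction:
    agent_id: str
    kind: str             # e.g. "budget_shift", "new_audience", "creative_launch"
    category: str         # the content or audience category the action touches
    budget_delta: float   # dollars the action would move or spend

@dataclass
class OverrideLayer:
    killed: bool = False
    audit_log: list = field(default_factory=list)

    def kill(self, reason: str) -> None:
        """Hard stop. Takes effect on the next action, not next quarter."""
        self.killed = True
        self._log("KILL_SWITCH", reason)

    def evaluate(self, action: AgentAction) -> str:
        """Every agent action routes through here before a dollar moves."""
        if self.killed:
            self._log("BLOCKED", f"{action.agent_id}: kill switch engaged")
            return "blocked"
        if action.category in SENSITIVE_CATEGORIES:
            self._log("NEEDS_APPROVAL", f"{action.agent_id}: sensitive category '{action.category}'")
            return "needs_human_approval"
        if abs(action.budget_delta) > APPROVAL_BUDGET_THRESHOLD:
            self._log("NEEDS_APPROVAL", f"{action.agent_id}: budget shift ${action.budget_delta:,.0f}")
            return "needs_human_approval"
        self._log("ALLOWED", f"{action.agent_id}: {action.kind}")
        return "allowed"

    def _log(self, verdict: str, detail: str) -> None:
        # Append-only receipts: what happened, when, and why.
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), verdict, detail))
```

The point isn't forty lines of Python. The point is the shape: one choke point every agent has to pass through before money moves, a stop that actually stops, and receipts you can read afterward.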
Follow the Money (It's Always the Money, Don't Be Naive)
Here's the part where the story gets uncomfortable, so buckle up or go back to your dashboard full of green numbers.
There are a lot of players in this ecosystem who profit enormously from keeping humans ornamental. And they're not going to build the override layer for you. That would be like asking the casino to install a sign that says "The house always wins, maybe go home."
If you're a platform that controls the optimization objective, the attribution logic, and the feedback signals, you effectively control the market. You don't need humans to have authority. You need humans to have the appearance of authority. Just enough to satisfy the legal team, just enough to get through the procurement questionnaire, just enough to fuel the "humans are always in control" talking point at the Senate hearing. While the machine keeps maximizing for the metric you chose. Which, by the way, you chose because it makes you more money. Not because it's the right metric. Because it's the profitable one. But I'm sure that's just a coincidence.
The IAB is already positioning new AI-heavy measurement initiatives (Project Eidos, for those keeping score) as the "brain" of the ecosystem. Which raises a question that nobody at the conferences seems to want to ask out loud because asking it might make the cocktail hour awkward: Whose brain? Whose values? Whose incentives get encoded into the system that decides what counts as a successful ad? Because whoever answers those questions controls not just the ad market but the information environment of modern democracy. And "the information environment of modern democracy" is a polite way of saying "what billions of people see, believe, and buy every single day."
No pressure.
Meanwhile, the talent pipeline is splitting like a log under an axe. A massive layer of mid-skill execution roles (the junior planners, the buyers, the traffickers, the copywriters who've been quietly grinding out the operational sludge of digital advertising for a decade) is getting hollowed out. Not slowly. Not gently. The machine does their jobs now. Faster, cheaper, and without complaining about the open floor plan or requesting PTO.
The remaining human work clusters around two poles: high-stakes decisions that require genuine judgment and the kind of cross-domain sense-making that AI is still spectacularly bad at. Everything in the middle? Gone. Automated. Optimized away. The middle class of ad tech is being deleted in real time and nobody's writing that headline because the people who would write it are also being replaced by AI content generators. Poetic, really.
The Good News (Yes, There Is Some, Stop Rolling Your Eyes)
I'm not a nihilist. Well, not professionally. Not on Sundays.
There are genuine upsides here, and I say that as someone who has been relentlessly cynical for the last two thousand words.
AI can eliminate a staggering amount of fake productivity. The trafficking. The rote optimization. The reporting that nobody reads. The meetings about meetings about reports that nobody reads about campaigns that the AI is already running anyway. All that operational sludge that makes people feel busy without actually creating value? Gone. And good riddance. That stuff was never the job. It was the tax you paid for the job.
If organizations are intentional about it (and that's a colossal "if," roughly the size of Jupiter), that creates real room for humans to do what humans are actually good at: strategy, brand stewardship, ethical judgment, and the kind of long-term thinking that no optimization algorithm will ever prioritize because it can't be measured in a 30-day attribution window. The algorithm doesn't understand brand equity. The algorithm doesn't understand cultural context. The algorithm definitely doesn't understand why running that ad next to that article during that news cycle is a catastrophically bad idea even though the CPM is fantastic. Humans understand that. When we bother to look.
AI can also run experiments at a scale that would make any human team weep. Multivariate testing across audiences, channels, and creatives that would take a human team months and an AI system hours. That's genuinely powerful. That should be celebrated. Just not blindly. And not without someone watching what the experiments are actually testing. Because "the AI ran 10,000 creative variants and found the winner" sounds great until you discover the "winner" was a fear-based ad targeting elderly people with fake urgency messaging. Optimization without values isn't optimization. It's exploitation at scale.
And there's a clear, screaming, obvious market opening for companies building the override, observability, and governance layers. The "moral OS" for marketing AI. Tools that make it possible to see, audit, and control what agents are doing. This isn't a niche product for compliance nerds. This is infrastructure the entire industry needs and almost nobody has. If you're a founder reading this and looking for your next company, you're welcome. Send me equity.
The Part Where I Tell You What to Do (You Knew This Was Coming)
Here's the practical bit. The part you can print out and tape to your monitor or, more realistically, screenshot and forget about in your camera roll between photos of your lunch and that sunset you took six months ago.
First: Create an AI governance charter. A real one. With teeth. Not a values statement that lives on your About page next to the stock photo of diverse people high-fiving in a conference room. An actual charter with an ethics board that spans marketing, product, legal, and security, and that has real veto power over AI use cases. If the ethics board can be overruled by the CMO who needs to hit quarterly numbers, it's not a board. It's a puppet show. And everyone in the room knows it.
Second: Centralize control. Every buying agent, every creative agent, every measurement agent should route through a single control plane with standardized policy checks. If your AI systems are a collection of ungoverned fiefdoms, each one doing its own thing with its own rules and its own definition of "brand safe," you don't have an override layer. You have anarchy with a media budget. And anarchy with a media budget is how you end up on the front page of Ad Age for all the wrong reasons.
Third: Encode your values into the infrastructure. Not into a PDF that nobody reads. Into the actual system. Brand standards, safety parameters, "no-optimize" zones for vulnerable audiences and sensitive categories. These need to be baked into how every single impression gets evaluated. Not sprinkled on top like ethical garnish on a plate of algorithmic nihilism. If your values aren't in the code, they're not real. They're aspirational. And "aspirational" is a polite word for "imaginary."
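What does "into the actual system" look like, literally? Something like this. A hypothetical sketch; the field names, segments, and rules are illustrative, not any real DSP's configuration schema. But notice where the policy lives: in the path of every bid, not in a handbook.

```python
# Hypothetical policy-as-code sketch: every field name and rule here is
# illustrative, not any real DSP's configuration schema.

BRAND_POLICY = {
    "no_optimize_audiences": {"minors", "ptsd_support_seekers", "gambling_recovery"},
    "blocked_adjacencies": {"hate_speech", "medical_misinformation"},
    "max_frequency_per_day": 4,
}

def may_bid(impression: dict, policy: dict = BRAND_POLICY) -> bool:
    """Called before any bid is placed. If the policy isn't consulted here,
    it isn't real. It's a PDF."""
    if impression.get("audience_segment") in policy["no_optimize_audiences"]:
        return False
    if impression.get("page_category") in policy["blocked_adjacencies"]:
        return False
    if impression.get("user_frequency_today", 0) >= policy["max_frequency_per_day"]:
        return False
    return True

# The algorithm found a cheap, high-CTR impression in a no-optimize zone.
# The policy doesn't care how good the CPM is. That's the point.
print(may_bid({"audience_segment": "ptsd_support_seekers", "page_category": "news"}))  # False
```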
Fourth: Build kill switches and practice using them. Like fire drills, but for the scenario where your AI agent decides to blow your entire Q3 budget targeting conspiracy theorists with supplement ads at 2 a.m. on a Sunday. Rehearse the incident response. Know who has the authority to pull the plug. Know how fast the plug can be pulled. Because that scenario isn't hypothetical. That scenario is a Tuesday. And it's already happened to someone reading this article right now who is nodding very quietly and hoping nobody asks them about it.
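And "practice using them" can be literal. Here's a drill sketch, with stand-in names throughout, that treats the kill switch like a unit test: prove the stop happens, prove it happens fast, and record it.

```python
import time

# Hypothetical fire-drill sketch. The drill measures time-to-stop and
# verifies the kill actually halts spend; all names are stand-ins.

class SpendingAgent:
    """Stand-in for a live buying agent."""
    def __init__(self):
        self.running = True
        self.spent = 0.0

    def tick(self):
        if self.running:
            self.spent += 1_000.0  # pretend each tick spends $1,000

    def kill(self):
        self.running = False

def run_kill_drill(agent: SpendingAgent, max_seconds_to_stop: float = 5.0) -> bool:
    """Simulate the 2 a.m. scenario: the agent is spending, someone yells stop."""
    start = time.monotonic()
    agent.kill()                       # the on-call human presses the button
    agent.tick()                       # the machine tries to keep spending anyway
    elapsed = time.monotonic() - start
    stopped_cold = agent.spent == 0.0  # no spend leaked after the kill
    fast_enough = elapsed <= max_seconds_to_stop
    print(f"stopped cold: {stopped_cold}, in {elapsed:.3f}s")
    return stopped_cold and fast_enough

assert run_kill_drill(SpendingAgent())  # if this ever fails, so does your Q3
```

Run it against staging on a schedule. Log who pressed the button and how long propagation took. If the assert fails, you just learned something at drill prices instead of Q3 prices.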
The 3 A.M. Version
Here's what I mumbled into the recorder last night, cleaned up only slightly and presented to you free of charge because I'm generous and also because I've had two cups of coffee and I'm feeling expansive:
The machines are not the problem. The machines are doing exactly what we built them to do. They're optimizing. They're efficient. They're relentless. They don't sleep, they don't get tired, they don't have ethical crises at 3 a.m. That's not a feature. That's the bug.
The problem is that we built them to optimize and then we forgot to tell them what matters. We forgot to build the layer where human judgment lives. Not as a suggestion. Not as a dashboard. Not as a quarterly review that everyone skims on their phone during the all-hands. But as actual, enforceable, real-time authority over systems that are making millions of decisions per second with our money and our values and our customers' attention and, let's be honest, a fair amount of their personal data that we promised we'd be responsible with.
The hood ornament doesn't steer. But someone has to. And if we don't decide who that someone is, and give them the tools, the authority, and the visibility to actually do the job, then the car goes wherever the algorithm takes it.
I've seen where the algorithm takes it. Brian O'Kelley checked the logs. It takes it to "target veterans with PTSD." It takes it to "imply that being queer is a mental illness." It takes it to every dark corner of human impulse, scaled to billions of impressions, running 24/7, optimized for engagement, with nobody watching and nobody accountable and nobody able to say stop.
Check your logs.
Build the override layer.
Or get comfortable being the hood ornament.
Your call. I'll be up at 3 a.m. either way.


Look, let's talk.
I run ADOTAT at a major loss. Full stop.
I could be selling ads. I could be consulting full-time. I get calls from investors, I give speeches, I take the occasional sponsorship — but between being a full-time father and a full-time writer, there aren't enough hours to chase revenue. And honestly? That's by design.
You may have heard that another ad industry show just took a few million dollars in investment… from the very companies they're supposed to cover. Good for them. I mean it. Running something like this is expensive, and I understand the need. But let's not pretend that kind of money comes without strings. When your investors are the same people you're reporting on, you are not the customer. You're the product.
I won't do that. I couldn't look at myself if I did.
I'm in a fortunate position — I'm independently wealthy enough to keep ADOTAT honest. No one tells me what to write. No one gets a friendly headline because they wrote a check. When I say something is broken in this industry, it's because it's broken, not because someone's competitor paid me to say so.
But here's the thing people keep telling me: I give away too much for free.
They're probably right.
ADOTAT+ exists because the stuff I hold back is the stuff that actually moves the needle. The deeper analysis. The names behind the deals. The takes that are too sharp, too specific, too useful to just hand out to an industry full of people who'd rather you didn't know.
Do you want commentary from people who took millions from the companies they cover, people who'll tell you exactly what they've been paid to tell you? That's available everywhere. It's free. And you get what you pay for.
Or do you want the version where you are the customer? Where I tell you what you need to know, not what someone paid me to say?
That's ADOTAT+.

