The Override Layer: What Everyone in AdTech and AI Is Getting Catastrophically Wrong About What Comes Next

I'm about to leave for IAB ALM, the annual conference where everyone in charge of digital advertising comes together, navel-gazes intensely, and pretends we know what we're talking about. There will be panels. There will be networking. There will be a lot of very expensive drinks purchased by companies whose business models are about to be upended by forces they're not even discussing correctly.

And here's what's been gnawing at me: Everyone is asking the wrong question.

The VCs. The founders. The CMOs nervously refreshing LinkedIn to see if their job still exists. The trade press breathlessly covering every new model release like it's the Second Coming. The consultants charging six figures to tell you things you could learn from a Reddit thread. All of them.

They keep asking: Will AI replace humans?

Look, I get it. It's a great question for panel discussions. It fills seats. It generates clicks. Chrome skulls. Laser eyes. Robot apocalypse. Skynet jokes that stopped being funny in 2015 but somehow still get knowing chuckles from audiences who should know better. It makes for excellent conference keynotes and absolutely terrible strategy.

Here's what's actually happening: We're witnessing a role inversion. Not replacement. Not the dramatic extinction event that would at least give us something concrete to fight against. Just drift. A quiet, inexorable shift in the center of gravity that almost nobody in our industry has noticed because they're too busy arguing about whether ChatGPT can write ad copy.

Spoiler: It can. That ship has sailed, hit an iceberg, and sunk to the bottom of the Atlantic. The real question is where that leaves the rest of us, standing on the shore.

For most of advertising history, humans were the scarce, special ingredient. We were the magic. Judgment. Taste. The ability to look at a brief and see something nobody else saw. We built systems to support that magic. Every tool we invented, from the printing press to the DSP, the CDP, the DMP, and every other acronym cluttering up our bloated martech stack, existed to amplify what humans could do.

The human was the sun. Everything else orbited around us.

Then we built machines that do the scalable parts better than we do. Pattern recognition across billions of data points. Optimization that never sleeps, never gets distracted, never has a bad day because a kid is home sick. And yes, convincing copy written at 3 a.m. without caffeine, without existential dread, without a soul. Copy that is good enough for 90% of the use cases that currently employ human writers.

Good enough is a catastrophe disguised as a convenience.

So the stack flipped. And almost nobody noticed because they were arguing about attribution models.

AI is now the default layer. Not the assistant. Not the tool that humans use. The default. Always on. Cheap to the point of being essentially free. Fast enough to be functionally instantaneous.

And humans? We become the exception layer. Not the main character anymore. The override.

Remember when elevators had operators? Human beings whose entire job was to move a lever and announce floors? They weren't replaced because elevators became malicious. They were replaced because that particular human function wasn't where humans added unique value.

Now think about pilots. In early aviation, autopilot assisted pilots. Now pilots supervise autopilot and intervene only when things get weird. Nobody calls modern aviation "anti-human." It's just brutally honest about what scales.

Same thing is happening in adtech right now.

AI handles the normal. Humans handle the meaningful.

That sounds flattering if you don't think about it too hard. But here's the catch: Most of what we currently do isn't meaningful. Most of our jobs are composed of routines and processes that feel meaningful because we're the ones doing them. We've confused effort with importance. We've convinced ourselves that busyness equals value.

When AI takes over the normal, we're going to discover just how much of our professional identities were built on sand. How many "strategic" roles were actually just glorified pattern matching. How many "creative" positions were actually just remixing the same ideas according to formulas. How many media planners were just doing math that machines do better.

You're not obsolete. But you're non-default now.

Default systems optimize. Humans contextualize. Default systems produce. Humans decide what should exist. Default systems answer questions. Humans decide which questions are worth asking. Default systems hit KPIs. Humans decide if the KPIs were worth hitting.

That's why this feels existential. Because being optional is new.

The quiet danger is not extinction. It's complacency.

If humans accept being "the extra" without insisting on authority, we become decorative. A vibe. A mascot. The warm brand layer slapped on top of autonomous systems we no longer understand.

Every VP of Marketing becomes a hood ornament on a self-driving car. Nice to look at. Projects success. Absolutely not steering.

I've seen this already. I've sat in meetings where humans rubber-stamped AI outputs without understanding them. Where the recommendation from the machine was accepted not because it was examined and found to be good, but because examining it would have required expertise that had already been let go in the last round of layoffs.

That's where it gets bleak. Not because the machines are malicious. Because we're willingly handing over the steering wheel. And we're so burned out from years of performative productivity that we've forgotten we were supposed to know where we're going.

You cannot override a system if you don't understand what it's doing.

But here's the upside nobody wants to admit.

If we're disciplined, this is the first time in history where marketers might be freed from fake productivity. The performative busywork that fills calendars with meetings about meetings. The endless operational sludge of status updates and alignment sessions that exist primarily to justify the existence of the people attending them.

Freed to specialize in the things that actually require a human. Judgment. Not pattern matching, but genuine judgment about competing values. Ethics. Deciding what we should do, not just what we can do. Taste. That ineffable sense of what's good that no training data can replicate. And perhaps most importantly: saying "no" to perfectly optimized bad ideas.

AI as the workhorse. Humans as the conscience.

This is uncomfortable. This is necessary. And frankly, it's very Jewish. The tradition I come from says even God wants to be argued with. Abraham argued with God about Sodom. Moses argued about destroying the Israelites. The entire Talmud is structured as an argument. We don't accept "because the system said so" as valid justification. We treat "why?" and "should we?" as more important than "can we?" and "how fast?"

So yes, this all makes sense. Annoyingly, uncomfortably, it makes too much sense.

Not because humans are lesser. Because we stopped being scarce at the wrong layers. We were the special ingredient in execution, and now execution is cheap. We were the bottleneck in production, and now production is infinite.

What's still scarce? Wisdom. Judgment. The ability to look at a system optimizing perfectly for the wrong goal and say: Stop.

What matters now is who still gets to say stop.

The adtech industry is pouring billions into making AI better at optimization. Fine. That's table stakes. Programmatic? AI runs it. Creative testing? AI does it faster. Audience modeling? AI processes more signals than any human team ever could.

But almost nobody is investing in what comes after. Building the systems and structures that preserve meaningful human authority over machines that will soon be smarter than any individual human at any specific task. Training the next generation of marketers not to operate the machines, but to judge whether the machines should be operating at all.

We need to stop asking "how do we use AI?" and start asking "how do we stay in charge of AI while letting it do what it's good at?"

The companies that figure this out won't just survive the transition. They'll define it. They'll understand that the future isn't human versus machine. It's humans above machines. In the sense of oversight. In the sense of final authority. In the sense of maintaining the right to say no.

Everyone else? Hood ornaments. Decorative. Vestigial. The human-shaped logo on a fully automated brand.

I want to leave you with something my grandmother used to say. Okay, she didn't. But let's pretend she did, because it sounds better that way, and this is advertising; we make things up for a living: "The one who watches is more important than the one who works."

I used to think that was lazy. Now I realize it's the oldest wisdom about power: The one who decides whether the work is good is more important than the one who does the work. The one who can stop the process is more important than the one who runs the process.

We are transitioning from being the ones who work to being the ones who watch. The question is whether we'll watch with wisdom, judgment, and authority, or whether we'll watch passively while the machines optimize us into irrelevance.

I know which one I'm choosing.

See you at ALM. I'll be the one asking uncomfortable questions during the Q&A.