30,000 Agency, Adtech and Marketing Executives Read Adotat DAILY.

Advertise? Comments?
[email protected]
or 505-932-9060


How Adtech Convinced You a Tamagotchi Could Run Your Campaigns

Meet Your New Boss: A Script with a Fancy Hat

Let’s get something out of the way: your new AI “agent”? It’s not your CMO. It’s not even your junior analyst on a Red Bull bender. It’s a macro in a trench coat, performing parlor tricks and hoping you don’t ask to see the source code.

Right now, the adtech world is pushing out “AI Agents” faster than venture capitalists can Google what “agentic” actually means. Every pitch deck, demo, and overfunded SaaS founder with a microphone is shouting that this is the future: autonomous, self-improving, all-seeing campaign managers that never sleep and never make mistakes.

In reality? You’ve been introduced to a glorified rules engine with a voice modulator and a marketing budget.

It’s a Rube Goldberg machine built to change your bids by 4% and send you an email about it—if it doesn’t crash halfway through.

Welcome to the Age of Theater, Not Intelligence

We’re living through a Broadway revival of “AI: The Musical”, and the lead actor is wearing a fake mustache and reading cue cards.

Let’s just say it: most “AI agents” being pitched today are pure Potemkin AI—they look impressive on the surface, but underneath, it’s cardboard, duct tape, and the desperate hope that you won’t ask follow-up questions. Meta’s Yann LeCun didn’t mince words when he called this out for what it is: Cargo Cult AI, Wizard-of-Oz AI, prestidigitation pretending to be magic. In other words? Bullsh*t. And yes, pardon his French.

If it quacks like automation, operates like automation, and crashes like automation? It’s not an agent. It’s a glorified intern with no insurance plan.

The Lexicon of Deception: AI, Automation, and the Myth of the Agent

Automation is your 1997 coffee pot with a timer. It performs a task when told to. Set it, forget it, and pray it doesn’t burn the house down.

AI, if we’re being generous, is the new intern that’s watched every single one of your previous campaigns and now tries to guess what’s next. Sometimes they’re right, sometimes they hallucinate a solution involving Snapchat ads for your B2B client. Still, effort appreciated.

Agentic AI, though—that’s a different beast entirely. Think: a fully autonomous strategist. It doesn't just respond. It plans. It sets goals. It adapts without being told. According to Gartner’s Tom Coshow, these are systems that “can plan autonomously and take actions to meet goals.” That’s miles beyond a system that says “clicks are down” and just raises your bid by 11 cents.

Deloitte nailed it when they said “agency implies autonomy”—meaning the system isn’t just following instructions, it’s figuring out how to meet your objective and choosing its own path. It’s not a script. It’s strategy. It’s not following a recipe—it’s making one up based on what’s in the fridge.

The problem? Almost no one in adtech is actually offering this. They're just taking that same coffee pot and putting a barista apron on it.
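The gap is easy to show in code. Below is a minimal, purely illustrative sketch (every name, threshold, and tactic is invented) of the difference between the rules engine most vendors actually ship and the basic shape of a genuinely agentic loop:

```python
# Purely illustrative: a fixed rules engine vs. the shape of an agentic loop.
# Every name, threshold, and tactic here is invented for the example.

def rules_engine(metrics):
    """What most 'AI agents' actually are: a hard-coded if/else ladder."""
    action = {}
    if metrics["clicks"] < metrics["click_target"]:
        action["bid_change"] = 0.11  # "clicks are down, raise the bid 11 cents"
    if metrics["spend"] > metrics["budget"]:
        action["pause"] = True
    return action  # same response every time; it never asks why

class AgenticLoop:
    """The shape of real agency: goal -> plan -> act -> observe -> adapt."""

    def __init__(self, goal_roas):
        self.goal_roas = goal_roas
        self.history = []  # it remembers what it already tried

    def plan(self, state):
        # Picks its own path toward the goal instead of firing a preset rule.
        if state["roas"] >= self.goal_roas:
            return "scale_winning_creative"
        if self.history and self.history[-1] == "raise_bid":
            return "rotate_creative"  # last tactic didn't move ROAS; try another
        return "raise_bid"

    def act(self, state):
        move = self.plan(state)
        self.history.append(move)
        return move
```

The rules engine fires the same fixed response forever. The agentic loop holds a goal and a memory, and changes tactics when the last one failed. Most products on the market today are the first function wearing the second one's name.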

Your “AI Agent” Is Just a Coffee Pot Wearing a Headset

Let’s not kid ourselves. Calling a rules-based optimizer an “AI Agent” is like calling your microwave a chef, your playlist a DJ, or your Roomba an interior designer. It’s automation with a superiority complex.

It doesn’t understand nuance. It doesn’t adapt to context. And it sure as hell doesn’t have independent goals. If your so-called agent still needs Gary from ops to manually reset it after your campaign tanks, it’s not autonomous—it’s codependent.

Think of it this way: if your system can’t figure out that your campaign is bombing without a Slack message from your media team, it’s not an agent. It’s a toddler in a lab coat.

Who Wins From All This Theatrical Bullsh*t?

Here’s the kicker: this confusion? This carefully crafted haze of half-truths and technobabble? It’s not a bug—it’s a business model.

Vendors, platforms, and fund-hungry startups thrive in the fog. When everything is “AI-powered,” no one has to prove anything. You can’t audit a dream. And when CMOs buy in, they’re not just buying a tool—they’re buying the promise that they’re ahead of the curve. That they’re riding the AI wave while their competitors are still stuck fiddling with Excel.

But spoiler: most of those “agents” are still using Excel too. They’re just calling it something sexier and charging you five times more for the privilege.

You know who doesn’t benefit? The brands. The marketers. The media buyers stuck manually re-uploading creative because the “agent” couldn’t figure out the aspect ratio changed. The real victims are the people who thought they were finally getting help, and instead got a half-baked automation engine with commitment issues.

The “AI Agent” Bullsh*t Checklist

For those wondering if they’re getting sold digital snake oil.

Here’s a quick guide to know if you’re dealing with real AI—or just a marketing intern with a ChatGPT login:

  • Claims to be “autonomous,” but still needs a media planner to babysit it.

  • Promises “it learns on its own,” yet forgets everything the minute the budget resets.

  • Says “no human input required,” until it errors out halfway through your campaign launch.

  • Refuses to show how decisions are made, but proudly touts “trust the algorithm.”

And the most tired, overused, gaslighty line of all?
“Don’t worry—it’s learning.”

Yeah? So is my dog. Doesn’t mean I’m handing it my media budget.

The Bottom Line: It’s Not an Agent. It’s a Script in Gucci

What we’ve got is marketing theater, not machine intelligence. We're watching a bunch of overly confident platforms perform AI cosplay, hoping we won't peek behind the curtain. The promise is agency, autonomy, insight. The reality? A spreadsheet wearing a wig.

So before you buy into the dream of a self-driving media plan, ask yourself:
Is this really an agent? Or is it just a script with a fancy hat and a LinkedIn profile?

The Rabbi of ROAS

It’s Not Just Hype — It’s Fraud (Eventually)

How “AI-powered” became the new “Trust Me, Bro”

Welcome to the part of the AI conversation where the feds get involved.

We’re not just talking about hype anymore. We’ve moved into full-on regulatory whack-a-mole, where companies slap “AI” on their products like a “low-sodium” label, and consumers are supposed to believe they’re getting the future. Instead, what they’re often getting is manual labor, smoke, mirrors, and a dash of wire fraud.

This isn’t some slippery-slope philosophical debate about semantics and innovation. This is deceptive marketing that’s crossed into the land of government subpoenas, civil penalties, and very uncomfortable depositions.

Let’s break down exactly how we got here—and which companies are already learning the hard way that you can’t bluff your way into the future forever.

When “AI-Powered” Becomes “Legally Indicted”

Let’s talk receipts. Here’s your rogues' gallery of companies that played buzzword bingo a little too hard—and got called on it.

DoNotPay

The Claim: The “world’s first robot lawyer” that could draft your legal documents, fight traffic tickets, and apparently channel Clarence Darrow with just a Wi-Fi signal.
The Reality: No actual lawyers. No actual legal wins. Just a half-baked chatbot wrapped in a trench coat and legalese. The FTC said it best: this was “misleading, deceptive, and potentially harmful.”
The Consequence: $193,000 in fines, mandatory disclosures, and a well-earned spot on the “do not believe” list.

Ascend Ecom (and Friends)

The Claim: “AI-powered passive income streams.” You just invest in one of their e-commerce stores, and like magic, advanced AI will run it for you while you sip martinis and pretend you’re Jeff Bezos.
The Reality: No AI. Just deception, misrepresentation, and a lot of bank accounts bleeding out.
The Consequence: The FTC shut the whole operation down, seized assets, and banned them from ever selling business opportunities again. Which is a polite way of saying: get a new job.

Global Predictions

The Claim: The “first regulated AI financial advisor.” A bold claim that would’ve been impressive… if it were true.
The Reality: The SEC found the tech was mostly smoke, with no real AI backbone to justify the marketing.
The Consequence: Civil penalties, mandatory corrections, and a sharply worded press release from the SEC.

From Buzzwords to Bullsh*t to Fraud

There’s a point at which “inspired marketing” becomes felony territory.

That line gets crossed when:

  • You claim your AI will deliver guaranteed legal wins, investment returns, or business profits.

  • You fail to disclose that your “AI” is actually being run by a guy named Raj in a call center clicking buttons manually.

  • You knowingly let consumers or investors believe your tech does something it fundamentally cannot do.

That’s not hype. That’s fraud. And regulators are finally catching up.

The Case of Amazon’s “Just Walk Out” — Or, Just Don’t Ask Questions

Amazon hyped “Just Walk Out” like it was the moon landing of retail. The story: walk into a store, grab what you want, walk out—no lines, no checkout, all powered by seamless AI and magical cameras.

What they didn’t mention?
Behind the curtain was a small army—up to 1,000 workers in India—reviewing camera footage manually to determine what people actually bought.

In 2022, reports showed that 700 out of every 1,000 transactions required human review.

Amazon claimed those workers were just “training the model,” but let's be real: if you need 1,000 people to “train” your AI every day, it’s not AI—it’s outsourcing with a hoodie on.

The lesson? When your tech depends on hiding the humans behind it, you’re not selling AI. You’re selling a fantasy.

What Would Lina Khan Do?

FTC Chair Lina Khan has made it clear: there is no magical AI loophole. No exemption. If your tech misleads people and causes harm, you’re getting a visit from the nice folks with subpoenas and press releases.

Here’s what she’s watching for (and so should you):

Red Flags That Get You on the FTC’s Naughty List

  • “AI-powered” but no actual AI (translation: your spreadsheet learned how to lie).

  • Concealing human involvement and pretending it's machine intelligence.

  • Making promises you can’t deliver (looking at you, “guaranteed ROI”).

  • Burying user complaints or gagging reviews with shady NDAs.

  • Any form of consumer harm, from privacy invasions to financial loss.

Lina herself summed it up like this: “Using AI tools to trick, mislead, or defraud people is illegal… There is no AI exemption from the laws on the books.”

Read that again. Tattoo it on your pitch deck if you need to.

Why This Isn’t Slowing Down Anytime Soon

The incentives to fake it are huge. VCs are drunk on AI promises. Tech media still fawns over anything with the word “agent.” And most buyers can’t tell the difference between real machine learning and a cleverly labeled rules engine.

Add in the opacity—who can really peek behind the curtain of proprietary systems?—and you’ve got the perfect environment for performative innovation.

It’s easier than ever to convince people you’ve built a spaceship, when you’re really just pulling a wagon with a fan on it.

And Now the Setup for Part 3: What Real Agentic AI Actually Looks Like (And Why So Few Have It)

Next up, we’ll go deep into what actual agentic AI requires—why 99% of what you’re being sold isn’t even close, and the tiny handful of players actually doing the hard work to get there.

For now, remember:

  • Just because it calls itself “AI” doesn’t mean it is.

  • Just because it talks like ChatGPT doesn’t mean it thinks.

  • And just because it automates a task doesn’t make it an agent.

Summary Table: Enforcement Action Roundup

| Company | Regulator | Allegation | Consequence |
| --- | --- | --- | --- |
| DoNotPay | FTC | Lied about being a robot lawyer | $193k fine, customer notifications |
| Ascend Ecom | FTC | Fake “AI-powered” ecom biz opps | Ban, asset forfeiture, court injunction |
| Global Predictions | SEC | Misrepresented AI in investment tools | Civil penalties, mandatory settlement |
| Amazon | None (yet) | Sold AI magic, used humans in reality | PR backlash, industry skepticism |

Want to know what actual AI agents look like?
Ready for the truth about who’s faking and who’s funding the real deal?

Stay tuned for Part 3: “Real Agentic AI: Why It’s Rare, Expensive, and Worth Watching.”

Sidebar: Is Zeta’s “AI” Just a Fancy Mail Merge?

Zeta Global calls itself an “AI-Powered Marketing Cloud.”
They talk about agentic workflows, multi-agent systems, proprietary models—basically, a veritable AI zoo.

But critics?
They say it’s less Isaac Asimov, more Mad Libs with a data license.

1. “It’s Not AI—It’s a Spam Cannon with Fancy Shoes”

A short-seller report from Culper Research didn’t hold back.
They claimed Zeta’s platform is “little more than a spam machine,” allegedly juiced by scraped data from sketchy sources like Disqus comments and so-called “consent farms.”

Translation?
The “intelligence” might just be a bulk emailer running on rails—with barely enough brains to separate opt-ins from oops, we tricked them.

Critics say Zeta’s AI isn’t autonomous, self-learning, or adaptive—just a fancy rules engine with good PR.

2. Consent Farms, Not Deep Learning?

That same report threw serious shade at Zeta’s data strategy.
Allegedly, they built scale by harvesting emails from low-rent clickbait traps masquerading as legitimate content sites. These “consent farms” are accused of baiting users into agreeing to share data, all so Zeta’s system can target them in bulk email blasts.

Critics argue: If the data is junk, the AI is a fraud in a lab coat.
It’s not adaptive decision-making—it’s industrial-strength carpet bombing.

3. Automation Dressed Up As Intelligence

Much of what Zeta calls “AI” is, according to detractors, predefined workflows—things like segmentation, campaign launch triggers, and reporting automation. Smart? Sure. But intelligent?

Asking a rules engine to think is like asking your thermostat to write a novel.

They say these “AI agents” are basically macros: fast, efficient, and dumb as a brick in a blazer.

Zeta’s Clapback

Zeta, unsurprisingly, is calling BS.
They’ve denied everything, calling the short-seller report “objectively false” and claiming that independent audits have already verified both their data practices and their tech stack.

Zeta insists their agents aren’t just rule-followers—they’re adaptive, autonomous, and capable of real-time decision-making. They point to their proprietary multi-agent architecture, generative AI, and a client roster full of receipts.

Their stance?
“This isn’t hype—it’s the future of enterprise marketing.”

Point / Counterpoint

| Criticism | Zeta’s Response |
| --- | --- |
| AI is just automation | Our agents learn, adapt, and act autonomously |
| Relies on shady data sources | Data is compliant, audited, and verified |
| Consent farms fuel growth | We reject that claim—our growth is from scalable AI systems |
| Lacks transparency or explainability | Platform provides explainable, customizable agentic workflows |

Industry Vibe Check: Hype, Truth, or Something in Between?

Analysts say Zeta isn’t alone in walking the tightrope between buzzwords and breakthroughs. Plenty of adtech players play fast and loose with AI language—Zeta just happened to end up in the crosshairs.

Early users of Zeta’s Agentic AI suite report improved efficiency, engagement, and campaign performance. But even the cautiously optimistic admit that “agentic” might still be aspirational, not fully realized.

Bottom Line: Real AI or Really Slick Branding?

There’s a real, ongoing debate:
Is Zeta pioneering real AI, or are they just really good at branding what’s essentially automated plumbing?

The answer probably lives somewhere between a GitHub repo and a marketing deck.

Until regulators or auditors offer a final word, marketers should stay skeptical, ask tough questions, and demand demos that don’t hide behind the word “proprietary.”

Because in this industry, the difference between “cutting-edge tech” and “buzzword soup with a side of lawsuits” is often just a few unchecked PowerPoint slides away.

Judy Shapiro doesn’t just talk about AI agents — she built one.

And not the kind you slap in a press release to juice your valuation. The kind that took three years, was co-developed with the National Science Foundation, and has the peer-reviewed receipts to back it up. So when she says that most so-called “AI agents” in adtech are about as legitimate as a timeshare in the metaverse, you listen.

She puts it like this: AI is to agentic AI what a bicycle is to a motorcycle. The bicycle — your basic AI — is limited, manual, and gets you where you're going only if you keep pedaling. The motorcycle? That’s the agent. It has a motor, it goes farther, faster, and doesn’t need you to push it every five seconds. In marketing terms, your standard AI might help you build an email, but an agent can plan the entire campaign, adjust mid-flight, and report back on what worked — all without human babysitting.

Too often, vendors blur the lines between the two. Judy calls it out bluntly: if the system relies on human-led decisions at every turn, it’s not an agent. It’s just automation in a trench coat. A real agent? It initiates action, adapts, corrects itself, and executes across multiple platforms with no hand-holding. It doesn't pretend to do everything. It does one very hard thing well. In her case, the system interprets web content the way a human would — not by spitting out keyword matches, but by actually understanding the meaning and context of a page.

Her team built a three-part agentic system. One module creates data visualizations from structured data. A second reviews and improves them. The third interprets them into human language — and the magic happens when those three argue their way to consensus. That’s what agentic AI does: it collaborates internally, adapts, and gets better with time. And no, it can’t be repurposed to write tweets or buy media — because real agents aren’t generalists, they’re specialists built for deeply contextual tasks.
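As a loose illustration only (the actual system is proprietary, and every function and rule below is invented), a create/review/interpret pipeline that argues its way to consensus might look like this:

```python
# Loose, invented sketch of a three-module agentic pipeline like the one
# described above: a creator drafts, a critic reviews, an interpreter narrates,
# and the loop repeats until creator and critic reach consensus.

def creator(data, feedback):
    """Module 1: drafts a chart spec from structured data."""
    spec = {"type": "bar", "values": list(data)}
    if feedback == "too_noisy":
        # Adapt the draft in response to the critic's last objection.
        spec["values"] = [round(v) for v in data]
    return spec

def critic(spec):
    """Module 2: reviews the draft; returns an objection, or None to approve."""
    if any(isinstance(v, float) and v != int(v) for v in spec["values"]):
        return "too_noisy"
    return None

def interpreter(spec):
    """Module 3: turns the approved spec into plain language."""
    return f"Top category peaks at {max(spec['values'])}."

def run_pipeline(data, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):  # the modules "argue" toward consensus
        spec = creator(data, feedback)
        feedback = critic(spec)
        if feedback is None:  # consensus reached: critic has no objection
            return interpreter(spec)
    raise RuntimeError("no consensus")  # a real system would escalate here
```

The point of the sketch is the loop: no module's first answer is final, and nothing ships until the critic stops objecting.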

Shapiro warns marketers to be skeptical of sweeping AI claims — especially the kind that promise to cut labor costs, eliminate campaign delays, or replace your strategy team with an all-seeing bot. Those benefits might happen eventually, but if that’s the primary pitch, you’re not being sold innovation — you’re being sold wishful thinking.

What’s the one question she thinks every CMO or buyer should ask? “What’s the agent’s accuracy rate?” If the vendor replies with “It depends,” or throws in vague language about how results vary per client, congratulations — you’ve found a fake. Real agents have benchmarks. They’re built with measurable goals in mind. No accuracy metric? No agent.

As for vendors hiding behind the word “proprietary” to avoid showing how their systems actually work — she’s not having it. Building an agent is a repeatable process, not some secret recipe. You can explain how a soufflé rises without giving away the brand of butter. If a vendor can’t describe their agent’s architecture, its internal process, or how its different components reach consensus — they’re not protecting IP, they’re covering up theater.

And no, the FTC isn’t likely to swoop in to clean up the mess. In Judy’s view, the industry is on its own. If marketers want accountability, they’re going to have to ask better questions, demand transparency, and stop accepting “AI agent” as a magic spell that wards off scrutiny.

The bottom line? If someone pitches an all-knowing, fully autonomous AI agent that “does it all” — campaign strategy, content generation, optimization, targeting — just remember: the real Wizard of Oz was a guy behind a curtain.

Resonance Theater: Now Featuring Synthetic Empathy by Hannah Grey

The “Persona Resonance Architect” is the brainchild of Hannah Grey, a venture capital firm founded by Kate Beardsley and Jessica Peltz-Zatulove. They’re not your average spreadsheet-swinging VCs — they come with polished pedigrees, deep brand-side chops (Unilever, Verizon), and a keen eye for startup storytelling.

Which makes this all the more interesting.

They’re known for being sharp, early believers in founders solving real-world problems. But here, they’ve dipped a toe into the AI mystique economy, presenting what looks like a strategic positioning tool as if it were a semi-sentient machine whispering emotional truths into pitch decks.

In reality?

This “AI Operating System” — and the Persona Resonance Architect at its core — seems far more rooted in internal playbooks, investor heuristics, and structured qualitative data than any actual multi-agent intelligence. It likely runs on LLM prompting (probably GPT-class models), layered with carefully tuned inputs based on Hannah Grey’s brand-marketing knowledge bank.

Which is fine. Smart, even.

But let’s not confuse this with agentic AI.

There are no autonomous systems arguing behind the scenes. No dynamic adaptation. No feedback loops. No decisions happening without human intervention. It’s a well-architected advisory engine, not a self-directed AI strategist.

⚠️ Why This Matters

When respected investors with reach and influence start using the term “AI Agent” to describe what is functionally an interactive brand workbook with LLM scaffolding — it erodes clarity across the ecosystem.

Founders will repeat it.
Pitch decks will include it.
Vendors will latch on.
And real AI researchers? They’ll roll their eyes and get back to work building the real thing.

If Hannah Grey’s tool helps founders better align messaging to audiences? Great.
But let's call it what it is: a synthetic persona simulator with predefined logic and advisory overlays.
Not an agent. Not even close.

You’ve read the first few chapters.
You’ve laughed at the trench-coat macros.
You’ve nodded while I called out the platforms selling fairy dust as “agentic AI.”
And you’ve seen the lawsuits coming like slow-motion train wrecks.

But here’s the truth: you’re still standing outside the room.
And inside? The real stuff is happening.

ADOTAT+ isn’t about giving you more noise. It’s about giving you what everyone else pretends to know — the parts your competitors are already using to rewrite their decks, rethink their RFPs, and revise their careers.

What You’re Missing Right Now:

  • The Vendor Vetting Table — The real, plug-and-play matrix to call bullsh*t in meetings and pitch calls.

  • The MASQRAD Breakdown — A rare peek into a multi-agent AI system that actually works. Not the fantasy. The receipts.

  • The Legal Liability Playbook — What to do when your CMO says, “But the AI did it,” and the FTC is already circling.

  • The Frameworks — Transparent AI protocols, internal audit checklists, and the survival guides no one will email you unless you’ve already paid.

Let’s be clear:
This is the stuff people screenshot behind closed doors.
The stuff PR teams beg to unpublish.
The stuff agencies wish you wouldn’t read before the pitch.

And while you’re hovering on the fence, your competitors already subscribed.
They’re not sharing it.
They’re not tweeting it.
They’re quietly upgrading their pitches, vetting platforms, and avoiding the landmines you’re still stepping on.

The Industry Has Enough Tourists.

ADOTAT+ is for people who actually make decisions — and know that being three weeks ahead of the news is the difference between closing the deal and cleaning up the mess.

You’re here. You’re reading every word.
You know this is the part they don’t teach at Cannes.

Subscribe to ADOTAT+.
Because “wait and see” is exactly how you lose your seat at the table.
