Dear Agencies: Your Taxonomies Are Older Than My Kid’s Roblox Login

Let’s talk about the decomposing zombie still running laps in your media stack.

No, not the cookie—we’ve collectively cried, eulogized, and resurrected that poor pixel more times than a Marvel villain. I’m talking about your taxonomies. Yes, those. The pre-iPhone, pre-TikTok, GeoCities-era relics that still organize video content using logic so outdated it probably thinks Vine is a trending app.

You know the kind: broad-stroke labels like “sports,” “travel,” and “urban”—as if slapping those on a piece of video content gives you anything close to audience relevance. It doesn’t. It gives you garbage. And yet, somehow, these shallow descriptors are still dictating where millions in ad dollars go every quarter.

Zack Rosenberg, CEO of Qortex, has had enough.

He didn’t start the company because the world needed another AI startup. He started it because the industry is running on a broken promise: that video targeting is somehow working. According to Zack, the real wake-up call came when his team discovered that video content was being miscategorized 88.98% of the time. Not just occasionally. Not a rounding error. Nearly nine out of ten videos were wrongly labeled by existing “verification” systems.

That’s not just inefficiency. That’s malpractice with a spreadsheet.

And it leads to real-world disasters. Zack told me about a certain airline—he didn’t name names, because decency still exists somewhere—whose media agency targeted “travel” videos. What the targeting system found were airplane videos. Sounds logical, right? Except, as Zack explained, “those airplanes weren’t making it to their destinations.” The system thought it found a happy vacation clip. It found a crash.

And that’s just the most dramatic example. Every day, brands are placing ads using context engines that read transcripts like they’re flipping through a toddler’s sticker book. One video, Zack noted, contained just a single word: “Whoa.” That’s it. That was the entire transcript. Now, is that whoa from a touchdown, a car crash, a surprise proposal, or someone opening a glitter bomb? Impossible to know without actually watching the video. But that didn’t stop the machine from confidently tagging it “sports.”
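
To see how a machine gets that confident off a one-word transcript, here’s a toy sketch of transcript-only tagging. This is not Qortex’s code and not any real vendor’s; the keyword lists and the “confidence” math are invented purely to show the shape of the logic.

```python
# A minimal sketch of transcript-only "context engine" logic.
# Category keyword lists are hypothetical, for illustration only.

CATEGORY_KEYWORDS = {
    "sports":  {"touchdown", "goal", "score", "whoa"},   # excitement words lumped in with sports
    "travel":  {"flight", "beach", "hotel", "airplane"},
    "finance": {"stocks", "market", "earnings"},
}

def tag_from_transcript(transcript: str) -> tuple[str, float]:
    """Pick the category whose keyword list overlaps the transcript most."""
    words = set(transcript.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # With a one-word transcript, a single keyword hit is 100% of the
    # evidence, so the engine reports total confidence on almost no signal.
    confidence = scores[best] / max(len(words), 1)
    return best, confidence

print(tag_from_transcript("Whoa"))  # -> ('sports', 1.0)
```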

Let that sink in.

We’re building campaigns on machines that think “whoa” = sports.

This isn’t contextual targeting. This is algorithmic astrology—slapping a few broad strokes on chaotic reality and hoping your horoscope knows what audience segment to serve a pre-roll ad to. Meanwhile, Qortex is over here parsing sight, sound, motion, tone, and emotional cues in real-time—frame by frame—and building what Zack calls “a multimodal approach to actually understand the content.”

Let’s pause on that: actually understand. Not scrape a few keywords. Not infer meaning from the page URL. And certainly not rely on user-generated metadata—which, as Zack pointed out, is usually just a set of “three or four keywords the uploader threw in and called it a day.” Whether it’s a YouTuber uploading from their bedroom or a media conglomerate publishing syndicated content, the tags are usually an afterthought.
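
For contrast, here’s the rough shape of the multimodal idea Zack describes—score each modality on its own, then fuse them so no single weak signal dominates. Every field, label, and weight below is an assumption of mine; Qortex hasn’t published its pipeline, and this is a concept sketch, not their implementation.

```python
# Concept sketch of multimodal fusion: per-modality label scores are
# combined with weights, so visual evidence can override a misleading
# transcript. All values here are invented for illustration.

from dataclasses import dataclass

@dataclass
class ModalitySignals:
    visual: dict[str, float]      # e.g. labels detected in the frames
    audio: dict[str, float]       # e.g. labels detected in the soundtrack
    transcript: dict[str, float]  # e.g. labels pulled from speech-to-text

def fuse(signals: ModalitySignals, weights=(0.5, 0.3, 0.2)) -> dict[str, float]:
    """Weighted fusion: a label only rises to the top if modalities agree."""
    fused: dict[str, float] = {}
    for w, modality in zip(weights, (signals.visual, signals.audio, signals.transcript)):
        for label, score in modality.items():
            fused[label] = fused.get(label, 0.0) + w * score
    return dict(sorted(fused.items(), key=lambda kv: -kv[1]))

clip = ModalitySignals(
    visual={"airplane": 0.9, "smoke": 0.8},
    audio={"sirens": 0.7},
    transcript={"whoa": 1.0},
)
print(fuse(clip))  # the crash imagery outranks the upbeat one-word transcript
```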

Yet entire ad campaigns rely on those throwaway tags.

It gets worse. Zack explained that most contextual engines also look at the page where a video is hosted, not the video itself. “There’s typically no correlation at all between the page content and the video,” he said. Videos change constantly, sometimes every few hours. Just because a video used to match the page doesn’t mean it still does. And still, these systems act like that’s enough to categorize a viewer’s mood, mindset, and buying intention.

It’s lazy, outdated, and possibly dangerous.

Qortex built something different. It watches. It listens. It learns. It identifies the “micro-moments” in content that actually matter—not just for targeting, but for avoiding brand disasters. If you’ve ever worried about your ad running next to hate speech, violent content, or just totally irrelevant trash, you’re not paranoid. You’re probably right. And the old taxonomies won’t save you.

Zack’s frustration was palpable: “Most of the industry is still using IAB taxonomy 1.0… from 2006.” That’s right. Two-thousand. Six. You know, when The Hills was considered high drama and MySpace was cutting-edge social. And yet, this is still what SSPs and DSPs are using to map video content in 2025.

If that doesn’t terrify you, it should.

And the industry’s excuse? As Zack put it bluntly, it’s just easier. “Nobody’s challenged it in a way where people understood what the difference could be,” he said. Buyers default to simplicity. Click sports. Click travel. It’s fast. It’s tidy. It’s useless.

So here we are, pretending our campaigns are “data-driven” while our data sources are more primitive than a 2007 Facebook status update.

If you’re still running media plans based on these fossilized taxonomies, you’re not optimizing. You’re LARPing as a media strategist.

It’s time to get real. Start watching the content. Start demanding AI that actually understands what’s happening on screen. Because Zack and Qortex are already doing it—and the rest of the industry is looking like the Nokia of adtech.

This isn’t contextual targeting 2.0. This is video intelligence with a pulse. If you’re still buying “sports” and “travel” like it’s Mad Men meets AltaVista, I’ve got a newsflash: you’re the problem.

And Qortex just became the answer.

The Rabbi of ROAS

You Are What You Watch: Real-Time Content > Two-Week-Old Cookies

Let’s put it plainly: demographics are sometimes nothing more than the astrology of adtech—oversimplified, outdated, and clung to out of habit rather than actual utility.

The idea that someone’s age, gender, and ZIP code can predict their purchase intent in 2025 is the kind of thinking that leads to ads for minivans targeting 20-year-old city dwellers who searched “cheap tequila” last week.

Zack Rosenberg, CEO of Qortex, is making the case for something more precise: content behavior as a real-time signal of intent. As he put it, “The content people are watching right now tells you a hell of a lot more than what they Googled two weeks ago.” It’s not just snappy—it’s strategic.

Take his own family as an example. “My wife and I are in the same age bracket, same income group, same household,” he said. “But we don’t buy the same products. We don’t even watch the same content.” It’s a perfect breakdown of the fallacy that demographics equal intent. If shared life stages and shared Netflix accounts can’t explain consumer behavior, then gender + age + ZIP definitely isn’t cutting it.

The Case Against Demographic Modeling

Here’s where the old system breaks down: traditional targeting requires a user to be this age and this gender and this income level and this location… stacking up so many conditions that the overlap becomes statistically useless. “The more ‘ands’ you use, the less accurate you become,” Zack explained. “And if you’re using all of that to drive media buying? You’re flying blind.”
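
The arithmetic backs him up. Assume, purely for illustration, that each attribute in a third-party profile is right 70% of the time and that the errors are independent—both assumptions are mine, not Zack’s numbers:

```python
# Back-of-the-envelope version of "the more 'ands', the less accurate":
# stacking conditions multiplies the per-attribute accuracy.

per_attribute_accuracy = 0.70  # assumed figure, not a measured industry number

for n_conditions in range(1, 6):
    joint = per_attribute_accuracy ** n_conditions
    print(f"{n_conditions} AND'ed condition(s): {joint:.0%} chance all are right")

# 1 -> 70%, 2 -> 49%, 3 -> 34%, 4 -> 24%, 5 -> 17%
```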

Instead, Qortex focuses on what someone is consuming right now. It’s real-time, it’s contextual, and—critically—it’s tied to actual emotional and behavioral cues. Watching Bob the Builder at 2AM? That’s a far better indicator of household status than any cookie trail from last month. Binging DIY deck repair videos? You're probably about to spend money at Home Depot.

“Content is the only thing that’s real-time,” Zack emphasized. “Cookies and past behavior tell you what someone might’ve been into last week, not what they’re thinking about today.”

From Micro-Moments to Predictive Models

The Qortex platform takes this a step further by breaking video into micro-moments—distinct segments of attention that can shift tone, topic, or intent within seconds. A news broadcast, for example, might open with a tragic headline but spend most of its runtime profiling a feel-good local story or a community initiative. “We see this all the time,” Zack said. “But because brands can’t differentiate between the segments, they block all news by default. That’s not just bad targeting—it’s a missed opportunity.”
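
Conceptually, segment-level analysis looks something like the sketch below. The timestamps, topics, and suitability labels are invented to mirror the news-broadcast example; no real system’s output is shown here.

```python
# Toy micro-moment timeline for a 12-minute news broadcast.
segments = [
    {"start": 0,   "end": 45,  "topic": "tragic headline",         "suitability": "block"},
    {"start": 45,  "end": 600, "topic": "feel-good local profile", "suitability": "safe"},
    {"start": 600, "end": 720, "topic": "community initiative",    "suitability": "safe"},
]

def safe_ad_slots(segments, min_seconds=30):
    """Return segments that are brand-safe and long enough to carry an ad."""
    return [
        s for s in segments
        if s["suitability"] == "safe" and (s["end"] - s["start"]) >= min_seconds
    ]

# A whole-video (or whole-category) block throws away all 12 minutes;
# segment-level analysis recovers 11+ minutes of brand-safe inventory.
print(safe_ad_slots(segments))
```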

Qortex’s inferred audiences engine uses five content-based factors—setting, language, pacing, representation, and visual cues—to determine who a piece of video is actually speaking to. Rather than relying on stale metadata, the system analyzes the content itself to understand its likely appeal.
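
As a sketch of what those five factors might look like as data—field names borrowed from the article, the toy inference rule invented by me and in no way Qortex’s actual model:

```python
from dataclasses import dataclass

@dataclass
class ContentFactors:
    setting: str            # e.g. "suburban living room", "stadium"
    language: str           # register of the dialogue, e.g. "casual family talk"
    pacing: str             # e.g. "slow tutorial", "rapid highlight reel"
    representation: str     # who is on screen, e.g. "parents with young kids"
    visual_cues: list[str]  # e.g. ["toys on the floor", "daylight interior"]

def infer_audience(f: ContentFactors) -> str:
    """Toy rule: enough family signals across factors -> family audience."""
    family_markers = {"suburban living room", "casual family talk", "parents with young kids"}
    hits = sum(v in family_markers for v in (f.setting, f.language, f.representation))
    hits += sum("toys" in cue for cue in f.visual_cues)
    return "family / parenting" if hits >= 2 else "general"

clip = ContentFactors(
    setting="suburban living room",
    language="casual family talk",
    pacing="slow tutorial",
    representation="parents with young kids",
    visual_cues=["toys on the floor"],
)
print(infer_audience(clip))  # -> family / parenting
```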

A campaign for a fitness brand, for example, might assume that yoga and Pilates content are its best media bets. And sure, those might hit the expected demo. But Zack shared a surprising result from a real-world test: a 30% lift in brand metrics occurred not in wellness content, but in family and parenting videos. “That tells us the target audience is more behaviorally complex than a traditional plan would suggest,” he said. “And it also shows how expanding into content-aligned placements uncovers entirely new audiences.”

Goodbye Page Context, Hello Content Alignment

One of the biggest industry blind spots Qortex is addressing? The assumption that page context = video context. “Most of the time, there’s no connection at all between the page and the video,” Zack noted. “We’ve seen videos rotate every six hours on the same page—completely unrelated to the article headline or site section.”

That disconnect becomes even more damaging in environments like YouTube, where autoplay, recommendation engines, and fragmented watch sessions destroy any coherent narrative from the page level. It’s a chaotic, fast-moving stream—and one that can’t be controlled by static, page-based context.

Instead of asking “Where is this video playing?”, Qortex asks “What is this video saying?” That subtle shift rewrites the playbook for real-time audience modeling. And it’s why, increasingly, content alignment is being seen not just as a creative tool—but as the new data layer for precise audience curation.

Because in the end, your audience isn’t defined by a profile.
They’re defined by what they choose to watch.

And if you’re still planning media around cookies, cohorts, or last month’s intent signals?
You’re already late to the moment.

🧠 Why You Need to Join ADOTAT

Because you’re tired of press release parrots masquerading as journalism.

At ADOTAT, we don’t do fluff. We don’t cozy up to PR firms, and we’re not here to regurgitate whatever jargon-filled nonsense just dropped on the wire. We dig deeper—past the spin, beneath the buzzwords, and right into the gears of what’s actually happening in advertising, media, and tech.

Every week, we break stories no one else dares to touch. Like how 80% of ad platforms are still running on a taxonomy that predates the iPhone, or how “premium video” might just be autoplay trash served from a server farm in the middle of nowhere.

We call BS when platforms claim they’re using AI but are really duct-taping keyword logic to a metadata schema built by interns in 2004. We tell you how multimodal engines like Qortex are actually fixing the mess, not just rebranding it.

Our deep dives connect the dots across CTV, programmatic, attention, AI, and whatever alphabet soup the industry pretends to understand this week. If something’s broken (and trust us, it usually is), we’ll find it, name names, and explain what it means for your budget.

We’re fiercely independent—no holding company strings, no sugarcoated partnerships, and definitely no "sponsored content" dressed up as analysis.

If you’re in this industry and want to stop being lied to?

Subscribe to ADOTAT.
Stay Bold. Stay Curious. Know More than You Did Yesterday.
