
Congrats! You’re Getting ADOTAT’s Free Edition—Welcome to the Kiddie Pool.
You’ll get the headlines, the snark, and just enough industry gossip to hold your own at a marketing happy hour. But let’s be real—you’re only skimming the surface.
Want the insight behind the numbers? The details behind the hype? The perspectives that actually matter? That’s ADOTAT+. The real deal—where the research is deeper, the analysis is sharper, and the secrets the industry doesn’t want you to know... well, we spill them.

One Year Later: Where’s the Beef?
The 5 Biggest TV Measurement Myths That Are Robbing You Blind
Let’s kick this off with the most depressing fun fact in advertising: as much as 10% of streaming ads are served while the TV is literally turned off.
That’s right—your brand spot might be running to an empty screen humming in the dark like a busted fridge.
And yet the platforms still bill you, the agencies still dress it up in glossy decks, and CMOs still wave the numbers around like they just cracked the Manhattan Project.
This is the hall of mirrors that is TV measurement. Everything looks sleek and scientific until you realize half of it is duct tape and fairy dust. But myths have staying power because they make everyone’s lives easier: agencies get to declare victory, platforms keep the cash printer humming, and CMOs get a chart that looks good on a boardroom slide. Advertisers, meanwhile, are left funding campaigns that deliver phantom lift—success stories that evaporate the second you look under the hood.
Let’s pull these myths apart, one by one.
Myth #1: An “Impression” Equals a Human Being Actually Watching
This one is the industry’s favorite bedtime story. An impression, we’re told, means your ad was “seen.” The reality? An impression means a slot was filled, a pixel fired, and some ad server marked the job as “done.” It says absolutely nothing about whether a human being’s eyeballs were in the vicinity.
Impressions are receipts, not results. Treating them as gospel is like counting unopened junk mail as “household engagement.” That kind of magical thinking is why campaigns “over-deliver” on paper while under-delivering in real life.
Myth #2: Reach and Frequency Equal Persuasion
Marketers cling to reach and frequency like toddlers to a blankie. “We hit 80% of households five times each!” Terrific. You’ve successfully wallpapered the country.
But repetition is not persuasion. Playing a mediocre ad on loop doesn’t make it compelling—it just makes it annoying. Reach and frequency measure exposure, not impact. You can buy all the “opportunity to see” you want, but if the creative flops or the placement is irrelevant, you’ve purchased nothing more than a very expensive screensaver.
Myth #3: Brand Lift Studies Are Apples-to-Apples
This is where things get downright comical. “Brand lift” sounds like a neat, tidy measure of effectiveness. In reality, it’s whatever the vendor says it is.
Some vendors run exposed vs. control surveys inside their own walled gardens.
Others cobble together panels stitched with device graphs and modeling.
A few use incrementality testing—geo holdouts or randomized control groups—when they’re feeling brave.
Each approach has a bias baked in: survey fatigue, panel skew, attribution noise. Put two vendors on the same campaign and you’ll get two different results. Both will claim victory. Both will send you a shiny deck. Only one might be telling the truth—and good luck figuring out which without a PhD in econometrics.
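To make the apples-to-oranges problem concrete, here is a minimal sketch of the exposed-vs-control arithmetic behind a headline lift number. Every input is hypothetical, and real vendors layer weighting, audience matching, and proprietary adjustments on top of this:

```python
# A minimal sketch of scoring an exposed-vs-control brand lift study.
# All numbers are hypothetical; real studies add weighting and matching.
from math import sqrt

def brand_lift(exposed_yes, exposed_n, control_yes, control_n):
    """Absolute lift in points plus a two-proportion z-score."""
    p_e = exposed_yes / exposed_n      # share of exposed saying "I'd consider it"
    p_c = control_yes / control_n      # same share in the unexposed control
    lift = p_e - p_c                   # this is the "+5" headline number
    pooled = (exposed_yes + control_yes) / (exposed_n + control_n)
    se = sqrt(pooled * (1 - pooled) * (1 / exposed_n + 1 / control_n))
    return lift * 100, lift / se

pts, z = brand_lift(exposed_yes=420, exposed_n=1000,
                    control_yes=370, control_n=1000)
print(f"lift: {pts:.1f} pts, z = {z:.2f}")   # lift: 5.0 pts, z = 2.29
```

The math is the easy part. Two vendors can run this same arithmetic on different panels, different survey wording, and different control groups and hand you two different "truths."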
Myth #4: Linear TV and CTV Are Interchangeable
Here’s where CFOs get fleeced. Linear TV is panel-based, rooted in the gospel of Nielsen diaries and statistical weighting. CTV, by contrast, struts around with deterministic IDs and promises of household-level precision. They are not the same thing, not even close.
Yet dashboards keep mashing them together like they’re different flavors of yogurt. The result? Numbers that look scientific but are actually closer to spreadsheet astrology. You get reach curves that make no sense, attribution models that double-count, and presentations that tell you your campaign was “optimized” when in reality it was just duct-taped across incompatible measurement systems.
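A back-of-the-envelope example of the double-count, under the (big, purely illustrative) assumption that the two systems' audiences overlap independently:

```python
# Toy example of the double-count: linear reach comes from a panel, CTV reach
# from device IDs, and households reached by both get counted twice. The
# independence assumption for the overlap is illustrative, not how any
# vendor actually models it.
linear_reach = 0.60   # share of households reached, per the linear panel
ctv_reach    = 0.40   # share of households reached, per CTV device IDs

naive_total = linear_reach + ctv_reach              # 1.00 -> "100% reach!"
overlap     = linear_reach * ctv_reach              # 0.24, if independent
deduped     = linear_reach + ctv_reach - overlap    # 0.76, inclusion-exclusion

print(f"naive: {naive_total:.0%}  deduplicated: {deduped:.0%}")
```

In practice the overlap is a modeled guess stitched across incompatible ID systems, which is exactly why those reach curves make no sense.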
Myth #5: Attention Guarantees Effectiveness
Attention is the new sugar high of advertising. Everyone wants to mainline it—minutes of gaze, predictive attention units, eye-tracking heat maps. And yes, attention matters. But attention is not magic.
A viewer can stare at your ad like it’s a traffic accident and still not remember your brand name thirty seconds later. Attention is a leading indicator, not a guarantee. Think of it as caffeine: it gives your campaign a jolt, but the substance still has to deliver. If the creative stinks, no amount of “attention time” will turn it into persuasion.
Why This Matters
These myths endure because they’re convenient. Agencies love them because they create easy wins. Platforms love them because they keep CPMs inflated. CMOs love them because they produce clean, defensible slides for the board. But advertisers who actually pay the bills? They’re the ones left holding the bag.
Every dollar spent believing these myths is a dollar flushed down the drain of false confidence. And here’s the part no one says out loud: the vendors know it.
They know impressions aren’t people, that reach isn’t persuasion, and that attention isn’t salvation. But illusion sells. And until advertisers start demanding better, they’ll keep cashing the checks.
What’s Next
This is just the opening act. The myths are the teaser; the real action is in the subscriber-only breakdowns:
How Nielsen, EDO, Lumen, and Adelaide spin their measurement stories.
Why attribution models inflate lift like a tech stock bubble.
When attention actually matters—and when it’s just snake oil.
Why CTV is both the most measurable and the most chaotic channel in media.
If you care about not lighting your ad budget on fire—or you enjoy watching sacred cows get slaughtered—stick around. The real fun is just beginning.

The Rabbi of ROAS
Learning Section: What Anders Lithner Really Teaches Us About Lift and CTV
This isn’t a puff piece; it’s a learning piece. Think of it as sitting in on a seminar where the professor occasionally swears at bad metrics.
Meet Anders Lithner, CEO and co-founder of Brand Metrics. He’s the Swedish data whisperer who has spent his career dismantling how we think about advertising effectiveness. Brand Metrics is the platform built on his obsession: turning “brand lift” into something measurable, comparable, and actually useful.
Here’s what he said on The ADOTAT Show, and what you should take away if you care about how campaigns are really judged.
Brand Lift: Not the Fairy Dust You Think
You can’t buy behavior. Lithner insists this is the biggest misconception in advertising. Behavior doesn’t happen because you shoved an ad into someone’s face. First you have to change a mind or a heart—only then will behavior follow. Skip that step, and you’re wasting spend.
Lift has to be comparable. If a study says your campaign scored “+5,” the next question is obvious: is that good or bad? Brand Metrics enforces a standardized core survey so every result sits in a database of 60,000+ campaigns. The number means something because it can be compared (a quick sketch below shows the idea). Without that, every lift study is a snowflake—pretty, but useless.
Panels aren’t the enemy—vanity metrics are. He doesn’t wage war on Nielsen-style panels. The real enemy? Metrics that don’t matter, like CTR or “half of a banner visible for one second.” Those may look neat on a chart, but they tell you nothing about business outcomes.
Perception drives purchase. His ketchup test: most fridges have Heinz, not because it’s objectively the best, but because people believe it’s the best. That’s brand building. Performance marketing can trigger a trial, but long-term revenue belongs to the brands that occupy permanent space in your head.
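As promised above, here is a tiny sketch of what comparability buys: a lift number scored against a benchmark distribution of past results. The benchmark values are invented; the real 60,000-campaign database and its norms are proprietary.

```python
# Sketch of comparability: score a campaign's lift against a benchmark of
# past results. Benchmark numbers here are invented for illustration.
from bisect import bisect_left

benchmark_lifts = sorted([1.2, 2.0, 2.8, 3.5, 4.1, 4.9, 5.6, 6.8, 8.0, 10.5])

def percentile_rank(lift_pts, benchmarks):
    """Share of comparable past campaigns this lift beats."""
    return bisect_left(benchmarks, lift_pts) / len(benchmarks)

print(f"+5.0 beats {percentile_rank(5.0, benchmark_lifts):.0%} of the benchmark")
# -> +5.0 beats 60% of the benchmark. Now the number means something.
```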
Attention and CTV: The Promises and the Pitfalls
Attention is not viewability 2.0. Viewability asks, “Was the ad there?” Attention asks, “Did anyone actually look?” They’re not the same. But even attention, Lithner argues, is incomplete without outcomes. You can’t optimize campaigns on gaze time alone.
CTV can be powerful—but don’t get lazy. The old assumption—“TV on = attention”—dates back to the 1950s. It doesn’t hold up. People mute, chat, or glance at their phones. In CTV, full-screen with sound on is strong, but you still need verification. Don’t treat attention as a given.
Placement matters, but context rules. Data shows inside-content placement generally outperforms outside-content, but there’s no one-size-fits-all. A burger ad and an energy company ad require different strategies. Objective, creative complexity, and category all shape where lift happens.
Frequency, Time, and the Cold Truth
Low frequency is the silent killer. One ad, one time? Forget it. It won’t move brand consideration. Lithner argues more money is wasted on too-low frequency than on oversaturation. Think of it as skiing: one run on the kiddie slope doesn’t prove skiing doesn’t work.
Reach × Frequency × Time. He says the classic planning model is still valid, but adds a third axis: time. Real lift requires repeated exposure over meaningful time horizons—not just one-off bursts. The toy model below makes the point.
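That toy model, an adstock-style decay with the rate as an illustrative assumption rather than anything Lithner specified: the same twelve exposures, delivered as a one-week burst versus spread over twelve weeks.

```python
# Toy adstock model: 'brand memory' decays each week, new exposures add to it.
def adstock_at_end(weekly_exposures, decay=0.7):   # decay rate is an assumption
    stock = 0.0
    for x in weekly_exposures:
        stock = stock * decay + x                  # carry over, then add this week
    return stock

burst     = [12] + [0] * 11    # all twelve exposures in week 1
sustained = [1] * 12           # one exposure a week for twelve weeks

print(f"burst: {adstock_at_end(burst):.2f}")          # burst: 0.24
print(f"sustained: {adstock_at_end(sustained):.2f}")  # sustained: 3.29
```

Same total weight, wildly different amount of memory left standing at the end of the flight. That is the whole argument for the time axis.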
Why This Matters
Lithner’s framework challenges the industry’s addiction to short-term metrics: clicks, CTR, single exposures, vanity dashboards. His view is blunt—the best ROI may not show up this quarter. That’s not failure; it’s how branding works.
For CTV, his lesson is equally sharp: don’t confuse assumptions for measurement. Attention can be powerful, but it must be validated and tied back to outcomes. Otherwise, it’s just another shiny metric vendors push to justify inventory.
The Path Ahead
By 2030, Lithner believes brand outcomes themselves will become predictive signals, not just backward-looking reports. With millions of survey datapoints, Brand Metrics is already training models to forecast expected lift. That means campaigns won’t just be measured—they’ll be optimized mid-flight against predicted brand impact.
It’s a future where lift is no longer a lagging indicator but a lever.
Learning Takeaway
If you remember one thing from Anders Lithner, make it this:
Perception precedes behavior.
Frequency needs time.
Attention must be tied to outcomes.
Comparability is everything.
And if you forget all that, at least remember the ketchup: Heinz wins not by being the cheapest or the cleverest, but by owning a spot in your head long before you ever stand in the grocery aisle.
👉 This section is designed to grow your knowledge, not flatter a vendor. The lesson? Don’t buy the myth that impressions or clicks equal effectiveness. Learn how lift actually works, and you’ll spend smarter. You just saved yourself a $5,000 seminar.

Transparency, Neutrality, Efficiency
The Vendor Divide: Why TV Measurement Still Lacks a Single Truth
Brand lift was supposed to be the universal yardstick. The magic ruler that proves whether millions in ad spend actually pushed anyone closer to buying.
Reality check: there is no universal yardstick.
Instead, the industry is stuck with rival vendors, each hawking their own gospel:
Nielsen clings to panels like scripture.
EDO preaches behavioral intent.
Lumen worships attention as a diagnostic truth.
Adelaide insists they bottled alchemy with the AU score.
👉 They’re not Coke vs. Pepsi competitors. They’re rival priests in the same cathedral, all chasing the same scarce prize: the KPI that decides where billions in ad dollars flow.
Nielsen: The Currency King on Life Support
For decades, Nielsen has been the Vatican of measurement. Its 42,000-household panel and survey machine defined the “currency” of TV trading. Agencies still kneel, because those ratings anchor billions in deals.
But in a streaming-first world? Panels feel like rotary phones.
Nielsen’s Hybrid Gospel: Big Data + Panel
Cross-Platform Reach: Measures across digital, CTV, social, audio, radio, and linear. Tagged exposures are matched to surveys, with probability models filling in when tags fail.
Outcome Linkage: Nielsen One pairs brand KPIs with sales/conversions, even linking exposures to anonymized credit card data. Their Outcomes Marketplace pipes in partners like Realeyes.
Survey Fatigue Fixes: Capped frequency, quality checks, and 72–96-hour windows.
Defensive Line: “People panels measure people. Big Data gives us scale. Together, it’s the most accurate way to measure TV.”
Translation: Panels aren’t dead — they’re the glue keeping Nielsen’s cathedral from collapsing. Critics call it inertia. Nielsen calls it gospel.
EDO: The Outcomes Evangelist
EDO doesn’t bother with recall surveys. Their creed: watch what consumers do, not what they say.
What They Measure
Predictive Behaviors: Branded searches, site visits, and app activity triggered by ad exposures.
Engagement Volume & Rate: Direct ties to market share growth — with an 83% correlation to sales across industries.
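For flavor, here is a stripped-down version of what action-based lift looks like: branded actions in a short window after an airing versus a same-length baseline window. The window and the counts are invented, and EDO’s actual models are proprietary.

```python
# Stripped-down action-based lift: branded searches in a fixed window after
# an airing vs. a same-length baseline window. All counts are invented.
def behavioral_lift(actions_post_air, actions_baseline):
    """Incremental branded actions attributable to the airing."""
    incremental = actions_post_air - actions_baseline
    return incremental, incremental / actions_baseline

inc, pct = behavioral_lift(actions_post_air=260, actions_baseline=200)
print(f"+{inc} branded searches (+{pct:.0%}) in the post-air window")
# -> +60 branded searches (+30%): a receipt, delivered while the campaign is live.
```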
Why It Matters
Faster Signals: Surveys lag; EDO delivers insights while campaigns are still live.
Proven Lift: Clients average 15%+ increases in efficacy within weeks of optimizing to EDO outcomes.
Brand Equity Defense: Kevin Krim argues repeated consumer actions are brand equity in motion — far more meaningful than survey recall inflated by annoying jingles.
Where EDO Meets Surveys
When paired with survey providers, EDO generally aligns — but offers more granularity and speed. Surveys explain sentiment weeks later; EDO shows which audiences, placements, and creatives are driving action now.
Translation: If nobody Googled you after the ad, did it really matter?
Lumen: The Attention Diagnostics Vendor
Lumen doesn’t try to be a currency. They’re not selling CTV ratings. Their sermon is simpler: attention is the missing diagnostic layer.
What They Provide
Large-Scale Eye-Tracking Panels: Attention scores across digital, social, display, and video.
Modeled Attention Data: Impression-level insights fused with campaign delivery data.
Boardroom Appeal: CMOs love dropping “attention” into decks like it’s the new ROI shorthand.
Where They Fit
Planning & Creative Optimization: Lumen’s data helps identify which placements and creatives actually get noticed.
Not Currency, But Influential: Their metrics bleed into CTV/streaming conversations, not as a deal currency, but as a planning and diagnostic layer CMOs can’t resist.
Translation: Lumen isn’t fighting for the altar. They’re the diagnostics vendor whispering in the marketer’s ear, “Here’s what actually got noticed.”
Adelaide: The AU Apostles
Adelaide argues attention seconds are meaningless without context. Their creed: the AU (Attention Unit) predicts real outcomes, not just stares.
Three Big Differences
Safe for Optimization: Lower CPAU means less waste; chasing lower CPM invites chaos (see the sketch after this list).
Input vs. Output: AU takes attention as an input but predicts outputs: awareness, intent, sales.
Currency-Ready: Premium publishers already guarantee deals on AU. Nobody’s doing that with raw attention seconds.
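Here is that sketch: the same budget on cheap, low-attention inventory versus pricier, high-attention inventory. The CPAU formula below is one plausible formulation chosen for illustration, not Adelaide’s actual math.

```python
# Why "lower CPM" and "lower CPAU" point in opposite directions: a toy
# comparison. The CPAU formula and all numbers are illustrative assumptions.
def cpm(spend, impressions):
    return spend / impressions * 1000

def cpau(spend, impressions, au):
    """Cost per 1,000 attention-weighted impressions (AU on a 0-100 scale)."""
    return spend / (impressions * au / 100) * 1000

cheap   = dict(spend=5_000, impressions=2_000_000, au=20)  # bargain-bin inventory
premium = dict(spend=5_000, impressions=500_000, au=85)    # high-attention placement

for name, buy in [("cheap", cheap), ("premium", premium)]:
    print(f"{name}: CPM ${cpm(buy['spend'], buy['impressions']):.2f}, "
          f"CPAU ${cpau(**buy):.2f}")
# cheap:   CPM $2.50,  CPAU $12.50  <- looks efficient, mostly waste
# premium: CPM $10.00, CPAU $11.76  <- pricier per impression, cheaper per AU
```

Through the CPM lens, the cheap buy wins. Through the cost-per-attention lens, it is mostly waste. That is the whole pitch for optimizing to a metric that is safe to minimize.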
Why AU Works
Outcome Data: Trained on awareness, consideration, conversions, sales, sourced from Lucid, Kantar, Upwave, Dynata, Attain, Circana, DISQO, Nielsen, Foursquare, plus Adelaide’s own panel.
Proven Lift: Campaigns optimized to AU see 41% higher awareness, 43% higher intent, 56% higher sales.
Validation: MediaSense confirmed AU is a statistically reliable predictor across KPIs and verticals.
Transparency vs. Black Box
Adelaide shares its inputs (eye-tracking, placement data, outcome benchmarks) but not the weights. Why? To avoid the viewability trap, where everyone games the metric. Instead, they commission audits and publish case studies.
Translation: AU isn’t a black box — it’s a guarded formula. You’ll know the ingredients, just not the recipe.
Are They Really Competitors?
Not really. This isn’t Coke vs. Pepsi.
Nielsen is still the currency incumbent.
EDO owns the mid-funnel intent lane.
Lumen is diagnostics, not currency.
Adelaide is the one leaning hardest into currency claims.
But it feels like a fight because:
Budgets are finite. Optimizing to AU means less for Nielsen lift. Choosing EDO outcomes means fewer survey buys.
Everyone pitches as the truth. Panels, behavior, attention, outcomes — each insists they’re the real gospel.
Holding companies weaponize them. WPP, Omnicom, and Publicis push their chosen “truth partner” in pitches, turning measurement into a proxy war.
Bottom line: They’re not competitors in method. But they’re all fighting to be the KPI that directs billions in ad dollars.
Why Fragmentation Matters
This isn’t just nerdy vendor drama. Fragmentation creates real fallout:
Cherry-Picked Metrics: Everyone wins in their own deck. Reality? Not so much.
No Comparability: Frankenstein dashboards stitched together in panic.
Legacy Inertia: Panels still dominate because they prop up the system.
Bias Everywhere: When the client pays the vendor, neutrality is a fairy tale.
The Big Question
So who becomes the next currency?
Nielsen’s Big Data + Panel hybrid?
EDO’s behavioral intent?
Lumen’s diagnostic attention models?
Adelaide’s outcome-trained AU?
👉 Welcome to the arms race of 2025. It’s messy, political, and only getting uglier.
What’s Next (For ADOTAT+ Readers)
The free sermon ends here. Behind the ADOTAT+ curtain, we’ll reveal:
The hidden conflicts of interest baked into each methodology.
Why survey-based lift and behavioral lift almost never align — and who profits from the gaps.
The money trail: which holding companies and platforms push which “truth,” and why it’s more about power than accuracy.
👉 Read the next part in ADOTAT+ to see how brand lift became the biggest confidence game in advertising.
The Cheat Sheet: Pick Your Measurement Religion
| Vendor | Methodology | What They’re Really Selling | Best-Fit Objective |
|---|---|---|---|
| Nielsen | Panels + surveys + Big Data | Inertia as a business model — the system nobody loves but everyone still pays | Reach & frequency currency |
| EDO | Behavioral signals (search, site, app activity) | Receipts, not feelings — intent as the mid-funnel bridge to sales | Outcome-driven engagement & predictive lift |
| Lumen | Eye-tracking + contextual analytics | Eyeballs or bust — proving attention is the new currency (even if it feels like a lab experiment) | Brand lift & upper/mid funnel |
| Adelaide | Machine-learned AU score | Astrology for CMOs — one shiny number to make efficiency-obsessed marketers sleep at night | Efficiency & attention-based ROAS |
What You’re Missing in ADOTAT+
The free section shows you the cracks in attribution and brand lift. But ADOTAT+ takes you inside the room where those cracks are quietly engineered — and who pays the price when they’re ignored.
Here’s what subscribers are already getting:
The Money Trail
Which agencies and platforms benefit the most from inflated lift metrics — and how those incentives shape the dashboards you see.
The Vendor Playbook
How Nielsen, EDO, Lumen, and Adelaide frame their metrics as the industry’s “truth” — and the real costs marketers face when they take some claims at face value.
Case Studies With Teeth
Behind-the-scenes examples of campaigns that looked like wins in decks but collapsed under incrementality testing — and the budgets that vanished in the process.
Behind-the-Deck Analysis
How lift gets tripled across platforms, why walled gardens resist deduplication, and what that means for your next media plan.
Survival Frameworks
The experiments, heuristics, and counterfactuals top CMOs demand before signing off on inflated ROI slides.
👉 If you’re reading only the free version, you’ve seen the preview. In ADOTAT+, you get the uncut version — the politics, the players, and the playbook.