
Adalytics Didn’t Lie…But They Didn’t Tell the Truth Either.
It began, as these things always do—not with evidence, not with receipts, but with a whisper.
The kind of whisper you only get after midnight, usually from someone holding a whiskey in one hand and a grudge in the other.
Into my inbox floats a note from “David” (I call him David, because that’s his name)—ex-agency guy, ad world veteran, now apparently playing the role of handler for the enfant terrible of ad fraud, Adalytics.
He didn’t knock.
He slithered.
“Hey, Pesach,” the message kinda read as I remember it, “I’m advising Adalytics. We’ve got something big. Explosive. And whatever you do, don’t tell Patience at the Wall Street Journal you’ve seen this. If they find out, they won’t run it.”
I should have known right then.
Right there—like a director whispering “Cut” but still letting the actor cry.
This wasn’t journalism. It was theater.
No—it was community theater.
The kind where half the cast is overacting, the other half forgot their lines, and no one realizes the audience left during intermission.
But it felt like a story.
And stories? That’s our drug.
The Seduction of a Scandal
Let me be honest.
We all wanted to believe.
We wanted a villain.
A monster.
A system so broken it practically begged us to be the hero.
Because the idea that one of the industry’s gatekeepers—DoubleVerify, of all companies—was asleep at the fraud switch while advertisers bled budget?
That’s the kind of story that makes LinkedIn influencers wet themselves.
And Adalytics gave them just that:
A perfect narrative.
Simple enough to retweet.
Dramatic enough to podcast.
Technical enough to sound smart, but vague enough no one had to open DevTools.
It lit up the feeds.
The Brand Safety Elders came out of their retirement caves, robes flowing, swinging their sacred scrolls of MRC guidelines.
You know the type…
Guys who haven’t bought an ad since the Bush Administration but still think they’re the conscience of the industry.
They started chanting in unison:
“This is the moment. This is the reckoning. This is what we’ve been warning about.”
Meanwhile, no one actually read the logs.
No one asked hard questions.
No one checked the timestamps or looked at the call chains.
They just believed.
Because believing feels easier than knowing.
The Problem With the Report?
It Wasn’t Fraud.
It Was Fragile.
See, this wasn’t a grift in the usual sense.
It was the f’ing Dunning-Kruger Olympics.
A one-man band with a data science degree and a search bar, wandering into a combat zone with a butter knife.
Krzysztof Franaszek—the founder of Adalytics—isn’t a criminal.
He’s not malicious.
He’s not some cigar-smoking villain laughing as brands burn.
He’s a smart guy in the wrong room.
And somewhere along the way, he got it into his head that adtech—this beautiful, broken mess of pipes and pixels—was something he could just “figure out.”
Newsflash: You can’t A/B test your way through a protocol stack.
You can’t simulate header bidding with a bot you pulled off a GitHub repo and pretend it’s human behavior.
But that’s what he did.
He used URLScan.io, a generic headless browser crawler, passed it off as “proprietary tech,” and started scanning sites like it was a treasure hunt for fraud.
Problem is, URLScan isn’t declared. It’s not on the IAB bot list. It’s not even remotely representative of human traffic.
Even URLScan’s own CEO was like:
“Uh… this isn’t what we’re built for.”
And yet, somehow, this became the heart of the report.
The crumbling foundation beneath the screaming headline.
Let’s Break Down the Mistakes—Because There Were Plenty
🚨 Mistake #1: A Tag Fired. So What?
Adalytics confused tag activity with ad delivery.
Which is like assuming the oven’s hot because the light’s on.
Tags fire all the time.
They fire in test environments.
They fire on broken pages.
They fire when a developer sneezes near a misconfigured SDK.
But firing ≠ billing.
Firing ≠ viewability.
Firing ≠ fraud.
And anyone who has spent ten minutes in this business knows that.
Yet somehow, Adalytics built an entire case around the notion that a tag firing meant a fraudulent impression.
It’s like writing a crime novel where every creaky floorboard is treated as a murder weapon.
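If you want to see just how thin that logic is, here's a minimal sketch in Python. The log fields and IDs are hypothetical, invented for illustration, not DV's actual schema: a tag fire only becomes a billable impression if it also survives IVT filtering and actually shows up on an invoice.

```python
# Minimal sketch: a tag fire is only the top of the funnel.
# Field names ("event_id", "ivt_flag") are hypothetical, not DV's actual schema.

def classify_event(fire, billed_ids):
    """Classify one tag-fire record against the billing log."""
    if fire["ivt_flag"]:                     # flagged as invalid traffic post-bid
        return "filtered, not billed"
    if fire["event_id"] not in billed_ids:   # fired, but never invoiced
        return "fired, not billed"
    return "billable impression"

tag_fires = [
    {"event_id": "a1", "ivt_flag": True},    # crawler hit: fires, never billed
    {"event_id": "b2", "ivt_flag": False},   # fired on a broken/test page, no invoice line
    {"event_id": "c3", "ivt_flag": False},   # the only one anyone pays for
]
billed_ids = {"c3"}

for fire in tag_fires:
    print(fire["event_id"], "->", classify_event(fire, billed_ids))
```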
🧩 Mistake #2: The Tags Weren’t DV’s
DV reviewed all 115 examples cited in the report.
And over 50% of the tags?
Not even theirs.
Let me say that again.
More than half of the tags Adalytics blamed DoubleVerify for… weren’t DoubleVerify’s.
That’s not an “oops.”
That’s not a footnote.
That’s a false positive rate that would make your dentist cry.
Adalytics was accusing DV of letting a 0.01% GIVT rate slip through post-bid, a level that is fully MRC-compliant and wasn't billed anyway.
Meanwhile, their own error rate is… checks notes… 50%.
If you’re going to throw stones, maybe don’t build your house out of hallucinated impressions.
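The attribution mistake is trivially avoidable. Here's a minimal sketch, assuming hypothetical serving domains (the real vendor domain lists and the report's 115 URLs aren't reproduced here): check whose hostname a tag actually resolves to before you blame anyone, then do the division.

```python
from urllib.parse import urlparse

# Hypothetical ownership map; real vendor serving domains are not listed here.
DV_DOMAINS = {"tps.example-dv.com", "cdn.example-dv.com"}

def owned_by_dv(tag_url: str) -> bool:
    """True only if the tag's hostname is actually one of DV's serving domains."""
    return urlparse(tag_url).hostname in DV_DOMAINS

cited_tags = [
    "https://tps.example-dv.com/visit.jpg",    # genuinely DV
    "https://pixels.other-vendor.net/beacon",  # someone else's tag
    "https://ads.random-cdn.io/imp.gif",       # also not DV
]

misattributed = [t for t in cited_tags if not owned_by_dv(t)]
print(f"misattributed: {len(misattributed)}/{len(cited_tags)} "
      f"({len(misattributed) / len(cited_tags):.0%})")
```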
🔧 Mistake #3: Proprietary Bot? Nah. Rented.
There was nothing proprietary about their methods.
The “bot” wasn’t even theirs.
It was URLScan, running a basic script that crawled pages without cookies, context, or even the faintest whiff of reality.
Trying to analyze adtech fraud using URLScan is like trying to diagnose cancer with an Etch-A-Sketch.
It doesn’t work.
It was never meant to work.
And pretending otherwise isn’t brave—it’s careless.
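For the curious, the intuition is simple: headless scanner traffic is easy to spot, which is exactly why it gets excluded from analysis rather than treated as evidence. A minimal sketch follows; the strings and thresholds are illustrative assumptions, not URLScan's actual fingerprint or DV's detection logic.

```python
# Illustrative only: these strings are common headless-browser hints, not a claim
# about how URLScan identifies itself or how DV's detection actually works.
HEADLESS_HINTS = ("HeadlessChrome", "PhantomJS", "python-requests")

def looks_like_crawler(user_agent: str, has_cookies: bool, mouse_events: int) -> bool:
    """Crude heuristic: UA hints, no cookies, no interaction => not a human visit."""
    ua_hit = any(hint in user_agent for hint in HEADLESS_HINTS)
    return ua_hit or (not has_cookies and mouse_events == 0)

print(looks_like_crawler(
    user_agent="Mozilla/5.0 ... HeadlessChrome/120.0",
    has_cookies=False,
    mouse_events=0,
))  # True: this visit belongs in the excluded bucket, not in a fraud tally
```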
What DoubleVerify Actually Did (Hint: Their Job)
Here’s the boring, unsexy, but critical reality:
DV filtered everything properly.
Every example cited was reviewed.
Post-bid filtering caught all known GIVT.
Nothing was billed.
URLScan was blocked daily, long before Adalytics showed up.
You may not like verification vendors.
You may not trust them.
But in this case? They did the work.
So… What’s the Real Story Here?
The story isn’t that Adalytics caught anyone.
The story is that they didn’t understand what they were looking at—
and the industry still rewarded them for it.
Because we love a takedown.
Because every burned-out media buyer and jaded analyst wants to believe someone finally cracked the code.
Because every washed-up “brand safety expert” is desperate to matter again.
I see them out there.
Bathrobed prophets of doom, shouting from their LinkedIn rooftops, convinced they’re fighting fraud with their Substacks and SoundClouds.
They don’t want nuance.
They want revenge.
But let me be very clear:
I’ve reported real fraud.
I’ve taken down real networks.
I know the difference between what’s broken and what’s misunderstood.
And this?
This wasn’t malicious. It was misguided.
I was fooled.
You were fooled.
We all were.
Because it felt right.
But feelings don’t make fraud.
Facts do.
The industry deserves better than vibes.
So the next time someone hands you a “secret report,”
asks for an embargo, and whispers not to tell the Journal—
Ask to see the logs.
Check the code.
Follow the headers.
And if it doesn’t add up?
Walk away.
Stay bold. Stay curious.
And next time?
Let the Brand Safety Elders yell into the void.
We’ve got work to do.

Glossary of the Adalytics Mess (For those who don’t live and breathe log files and packet traces)
Tag Firing
What It Means:
When a tiny snippet of code (“tag”) loads on a webpage or app, it sends a signal (“fired”) to an ad server or tracking platform.
Why It’s Important:
A tag firing does not mean an ad was seen, billed, or even fully loaded. Think of it like the “ding” of a microwave — it tells you something happened, but not whether your food is cooked, edible, or even in the microwave in the first place.
What Adalytics Got Wrong:
They treated any tag firing as proof of an ad being delivered and billed. That’s like saying every time you get a spam call, you must have bought a timeshare in Boca.
Ad Delivery
What It Means:
An ad is delivered when it’s actually served to a user in a way that counts toward billing. This includes a whole sequence of events — bid request, auction, render, and verification.
Why It’s Important:
Delivery is what advertisers pay for. If an ad doesn’t meet all the conditions, it’s not billable.
What Adalytics Got Wrong:
They assumed that if a tag was seen, an ad was “delivered” — skipping every technical step in between. Imagine blaming FedEx for “delivering” a package just because a truck drove by your house.
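If it helps, think of delivery as a checklist, not a single event. A minimal sketch with made-up stage names that mirror the sequence above; nothing here is any vendor's real API.

```python
# Hypothetical delivery funnel; the stage names mirror the sequence described
# above (bid request, auction, render, verification), not any vendor's real API.
REQUIRED_STAGES = ("bid_request", "auction_won", "rendered", "verified")

def is_delivered(events: set) -> bool:
    """Only an impression that clears every stage counts toward billing."""
    return all(stage in events for stage in REQUIRED_STAGES)

print(is_delivered({"bid_request", "auction_won", "rendered", "verified"}))  # True
print(is_delivered({"tag_fired"}))  # False: a lone tag fire skips the whole funnel
```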
Pre-Bid Filtering
What It Means:
The process of detecting and removing bad inventory before the ad auction happens.
Why It’s Important:
Stops fraud at the door, so advertisers don’t even bid on garbage.
What Adalytics Got Wrong:
They didn’t distinguish between pre-bid and post-bid filtering, so they assumed any bot signal seen after the fact meant pre-bid failed. That’s like criticizing a bouncer for letting someone in when the person was actually removed after they got past the door.
Post-Bid Filtering
What It Means:
The process of detecting and removing invalid traffic after the ad has technically been served, but before it’s billed.
Why It’s Important:
It’s the safety net — ensuring advertisers don’t pay for garbage impressions.
What Adalytics Got Wrong:
They treated any presence of bot traffic post-bid as a “gotcha,” ignoring that post-bid filtering is exactly what’s supposed to happen.
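To tie the pre-bid and post-bid entries together, here's a minimal sketch of the two-stage idea, with made-up function names standing in for whatever a verification vendor actually runs: pre-bid keeps you from bidding on junk, and post-bid makes sure anything that slips through never reaches an invoice.

```python
# Hypothetical two-stage filter; not a description of any vendor's real pipeline.

def pre_bid_filter(bid_request: dict, blocklist: set) -> bool:
    """Stage 1: refuse to bid on inventory already known to be bad."""
    return bid_request["domain"] not in blocklist

def post_bid_filter(impression: dict) -> bool:
    """Stage 2: an impression that serves but gets flagged as IVT is never billed."""
    return not impression["ivt_flag"]

blocklist = {"known-bad-site.example"}
request = {"domain": "normal-site.example"}

if pre_bid_filter(request, blocklist):
    impression = {"domain": request["domain"], "ivt_flag": True}  # a bot slipped past pre-bid
    print("billable:", post_bid_filter(impression))  # False: caught post-bid, nobody pays
```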
GIVT (General Invalid Traffic)
What It Means:
Non-human traffic that can be detected using known patterns — like declared bots, data center IPs, or spiders.
Why It’s Important:
MRC (Media Rating Council) standards allow for some GIVT to show up in raw logs as long as it’s removed before billing.
What Adalytics Got Wrong:
They saw 0.01% of GIVT in DV’s raw logs and treated it as billable fraud — even though it was already filtered out of invoices. That’s like finding dust in a vacuum cleaner and accusing Dyson of selling dirty floors.
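The arithmetic is worth spelling out. The 0.01% rate echoes the report's framing; the absolute volumes below are invented purely to show where the GIVT goes.

```python
# Invented volumes; only the 0.01% rate echoes the report's framing.
raw_impressions = 1_000_000_000
givt_rate = 0.0001                        # 0.01% flagged as GIVT in raw logs

givt_flagged = int(raw_impressions * givt_rate)
billed = raw_impressions - givt_flagged   # GIVT is stripped out before invoicing

print(f"flagged as GIVT: {givt_flagged:,}")  # 100,000
print(f"billed:          {billed:,}")        # 999,900,000
print("billed GIVT:     0")                  # the number that actually matters
```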
False Positive
What It Means:
When a detection system flags something as bad that isn’t actually bad.
Why It’s Important:
Too many false positives and you stop detecting real issues because everything looks like an issue.
What Adalytics Got Wrong:
Over 50% of the tags they said belonged to DoubleVerify didn’t belong to DV at all. That’s a false positive rate so bad it’s practically performance art.
URLScan.io
What It Means:
A tool that scans web pages for security research — not for ad fraud detection. It uses automated crawlers that don’t behave like humans and aren’t declared as bots.
Why It’s Important:
In adtech, crawlers like this are typically excluded from analysis because they distort data.
What Adalytics Got Wrong:
They used URLScan as their primary “proprietary bot detection system” and treated its activity as proof of fraud. That’s like using a Roomba to storm Normandy and declaring it a war hero.
MRC Standards
What It Means:
The Media Rating Council sets the rules for how ad measurement, viewability, and fraud detection should be done.
Why It’s Important:
Following MRC standards is the difference between credible measurement and backyard science.
What Adalytics Got Wrong:
They implied DV was non-compliant without showing any evidence — while their own methodology would never pass an MRC audit.