
Six Companies, Twelve Failure Modes, And The Questions Your CFO Should Be Asking Before The Next Renewal
One Basket, Many Eggs
A “kinda important person” at one of the largest media companies told us recently:
"I'm worried about the sustainability of Nielsen as a company if we put all of our eggs in that basket."
She uses many measurement companies, not just one. And she has a point. Nielsen was taken private in 2022 by a consortium led by Elliott Investment Management and Brookfield. Elliott is an activist fund. Activist funds cut, restructure, and eventually look for the door. That's the business model, not a flaw.
Which is why this conversation made us want to keep writing about it. In our opinion, Nielsen survives for quite a while. Just not forever.
The diagnosis
Most of what your agency is buying right now as "outcomes measurement" does not survive contact with a CFO who can read a spreadsheet.
That is the actual story coming out of iSpot's Disrupt event, NBCU's upfront, and the dozens of conversations we have had with operators, advisors, and former insiders over the last three weeks. The trade press will not write it that way, because the trade press is funded by the people selling the outcomes. So we will. Somebody has to, and the interns at the PR firms are busy.
Here is what is actually happening. Sales lifts that mysteriously appear when the measurement vendor is also being paid to do the optimizing. Closed-loop attribution that closes the loop precisely where the vendor's invoice gets approved. Conversion windows that stretch like a yoga instructor until the numbers "work." Lifts that do not replicate across vendors, across panels, or across quarters. The word "currency," which iSpot spent three years and an unknowable amount of legal fees fighting Nielsen and VideoAmp over, did not come up once at Disrupt. It vanished.
It did not vanish because the war ended in a draw. It vanished because the challengers lost and walked away. That is the read from a measurement-industry veteran with more than a decade inside several of the companies in this story, speaking on background.
"The currency measurement war is over, if there ever actually was one," the veteran told us. "Nielsen held onto its crown, but continues to face pressure as legacy sell-side clients' measurement budgets shrink. Meanwhile, the challengers have largely shifted their focus to outcomes and stopped treating currency as the main battleground."
So why is every panel at every conference still about currency?
"Yet, we'll continue to see panel after panel, trade show after trade show, with the same talking heads talking the same trash about currency. It's exhausting."
This is somebody who has sat in those rooms for ten years. They are not exhausted because the conversation is hard. They are exhausted because it is fake. Currency was conceded. Outcomes is the new costume. Outcomes is a softer category, which is the point. Softer categories are harder to audit. Harder to audit is the entire business model.
NBCU kicked off upfront week with a Performance Insights Hub featuring six different measurement vendors. Six. The reason is not that NBCU loves choice. The reason is that no single vendor's numbers are trusted enough to bet a campaign on, the buyers know it, the sellers know it, and the six-vendor format means no single vendor can be blamed when the numbers do not reconcile. Six vendors is not a feature. Six vendors is a hostage negotiation where everybody in the room agreed in advance that nobody is going home with the body.
And it is worse than that, because the reconciliation is not actually reconciliation. Zachary Rozga of Thece.co, on the record:
"Senior buyers are privately discounting these numbers because the ROI on 'blind' impressions has essentially hit zero. They aren't genuinely buying what is being sold; they are participating in a legacy system that lacks a verification signal. Holding companies tune their models because they are trying to find signals in a synthetic environment. The category reconciles across multiple vendors not to find the truth, but to find the most plausible guess."
Read that twice. The most plausible guess. That is not a description of measurement. That is a description of a séance with better software. Six vendors in a room at NBCU is not six chances at the truth. It is six guesses being averaged into a number that looks defensible enough to put on a slide that the intern made.
If you are a CMO, a procurement lead, or a finance partner approving a measurement contract this quarter, you have a problem. You are about to sign a renewal for something you cannot independently audit, from a vendor whose methodology is proprietary, whose coverage gaps are not disclosed, whose conflicts of interest are not in the MSA, and whose CEO may or may not still be at the company when the contract renews next year. We are looking at one vendor whose own LinkedIn and own homepage cannot agree on who the CEO is, four months after the change. A toddler with a Squarespace login could fix it in an afternoon. Nobody has.
The veteran explained why:
"A lot of the challenger measurement providers are PE-backed, have lots of debt, and will have pressure on them to optimize their businesses, shrink their workforces, and show better margin (or some margin!). It's going to be a tough slog for everyone in the space and consolidation is inevitable. Beyond all the hype, who has real enterprise scale and deep brand relationships? That's who wins."
"Or some margin." The parenthetical is doing real work there. The contract you are signing today may not be with the same company in 18 months. It may not be with a company at all. It may be a line item in somebody else's acquisition deck, slotted in next to a logo and a synergy bullet that the intern also made. The intern has had a busy quarter.
That is the near-term risk. The medium-term risk is bigger. The medium-term risk is that the data underneath the entire category, including the proprietary datasets the incumbents are quietly being praised for sitting on, is about to lose the one property that made it worth measuring in the first place: the assumption that a human was there.
The framework: the nine axes a measurement vendor scorecard actually needs
Every measurement vendor in this category will hand you a deck full of logos, partnership announcements, and a slide showing how many billions of impressions they process per month. None of that tells you whether their numbers are real. The impression count is the part of the deck the intern made. (See? Busy quarter.) Here are the nine axes that actually matter. The first eight are for the contract you are signing this quarter. The ninth is for the one after that, and if you are not already asking about it, you are late.
1. Coverage gaps
What does the vendor literally not see? Smart TV ACR has streaming-sized holes in it because Vizio, Samsung, and LG are required by contract with certain streaming apps to shut detection off when those apps are playing. That is not a footnote. That is half of premium video, and your vendor is selling you the other half as if it were the whole. If your measurement vendor is pitching ACR-derived viewing data, the first question is which apps are dark to them and how big those apps are in your category. If they cannot answer the question on the call, they cannot answer the question.
2. Methodology disclosure
Will the vendor put their panel construction, weighting, and modeling assumptions in writing? Not in a marketing deck. In the MSA. If the answer is no, the answer is no, and the rest of the conversation is theater performed by people who get paid whether or not you clap.
3. Conflicts of interest
Is the same vendor measuring the campaign and optimizing it? Is the same vendor scoring the lift and getting paid a percentage of media when the lift is high? If yes, you do not have measurement. You have a vendor grading their own homework with a calculator they built, in a classroom they rented, using a rubric they wrote on the way to the meeting.
4. Leadership stability
How many CEOs has the vendor had in the last 24 months? How many Chief Commercial Officers? How long has the head of methodology been in the chair? A measurement vendor with three CEOs in two years is not selling stability. They are selling whatever the new CEO's deck says they are selling this quarter, and the deck will be different next quarter, and you will still be on the hook for the contract. The CEO will be on LinkedIn calling it "the next chapter." There will be a sunset emoji.
5. Customer concentration and capital structure
What percentage of revenue comes from the top five customers? From the top ten? Measurement vendors with concentrated revenue answer to their three biggest clients, not to the market. If two of those clients are holding companies, the vendor's incentives are aligned with the holding companies, and the holding companies' incentives are aligned with their own P&L, and somewhere at the bottom of that org chart is you, paying the bill.
There is a second layer to this axis, which our background source put on the table. The challenger cohort is not just concentrated on the customer side. It is leveraged on the capital side. The veteran's read, again: "A lot of the challenger measurement providers are PE-backed, have lots of debt, and will have pressure on them to optimize their businesses, shrink their workforces, and show better margin (or some margin!)." Vendors under capital pressure ship the features that defend renewals, not the features that improve accuracy. Ask who is on the cap table. Ask what the debt covenants look like. If the vendor will not tell you, the vendor is telling you.
6. Litigation exposure
Has the vendor been sued by a competitor, by a customer, or by a former employee in the last three years? What was alleged? What was the outcome? Civil verdicts in this category are public. PACER costs ten cents a page. Most buyers never look. Look.
7. Data ownership and audit rights
Does the contract give you the right to audit the vendor's underlying data? Does it give you the right to take the raw data with you when you leave? If the answer is no, you are renting numbers you cannot verify from a landlord who keeps the keys, changes the locks when you complain, and raises the rent at renewal.
8. Independence from the seller
Is the vendor paid by the buyer, the seller, or both? Vendors paid by sellers report numbers sellers like. Vendors paid by both have a structural problem they cannot solve no matter how many times they say "neutral third party" in the press release. You can say "neutral third party" into a mirror for an hour. The invoices still come from somewhere.
9. Verified human presence
This is the axis nobody is asking about yet, and the one Rozga argues will define the winners and casualties of the next 24 months. The first eight axes assume that whatever the vendor is measuring, a human was on the other end of it. That assumption is failing.
Here is Rozga, on the record, doing the framing the rest of the category is avoiding:
"Metadata and attention curves are only valuable if you can prove they weren't generated by a machine. If a brand values a connection to real people, the only metric that matters is verified human presence. What I see currently are solutions accelerating a Hall of Mirrors scenario where AI is used to fight AI fraud, creating endless layers of probabilistic guesswork. Bot versus bot, where the black hat is always one step ahead of the white hat."
And the load-bearing sentence, the one worth printing out and taping to a CFO's monitor:
"In 24 months, a $20-a-month account will be able to fake 'metadata' perfectly."
That is the entire pitch deck of half this category, defeated by a ChatGPT Plus subscription and a long weekend. Rozga's argument from there is that the survivors will not be the companies with the biggest panels or the slickest dashboards. They will be the ones who can answer one question deterministically rather than probabilistically:
"The real commercial question isn't what percentage of tasks require proprietary data. It's what percentage requires certainty that there was a human there at all."
He calls the winners Human Truth Engines and the framework Estimation vs. Refereeing. The vocabulary is a little overheated, which is what happens when a founder is trying to name a category before anyone else does. The underlying point is not overheated. It is that probabilistic models of human behavior are going to get overwhelmed by AI bot farms that mimic human patterns at zero marginal cost, and the moat is going to move from "we have more data" to "we can prove the data came from a person."
The question to ask the vendor is whether any part of their measurement stack rests on a deterministic signal that a verified human was present, rather than on a probabilistic inference that one probably was. Most vendors will not have a clean answer. Some will pretend the question does not apply. A few will tell you the truth, which is that the category has not solved this and most of the proposed solutions are in the R&D pilot phase. Rozga has a thesis and he has a horse in it. The thesis is still worth taking seriously, because the alternative is to keep buying impressions in a market where the impression itself is no longer evidence of anything.
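The nine axes are, at bottom, a scoring structure, and procurement teams can operationalize them before the printable version arrives. A minimal sketch follows. Everything in it is hypothetical illustration: the axis weights, the 0-to-5 scale, the red-flag threshold, and the example vendor are ours, not a scoring model from this report.

```python
from dataclasses import dataclass, field

# The nine axes from this report, in order. Scale and weighting are
# hypothetical; adjust to your own procurement standards.
AXES = [
    "coverage_gaps",
    "methodology_disclosure",
    "conflicts_of_interest",
    "leadership_stability",
    "concentration_and_capital",
    "litigation_exposure",
    "data_ownership_audit_rights",
    "independence_from_seller",
    "verified_human_presence",
]

@dataclass
class VendorScore:
    name: str
    # Each axis scored 0 (red flag) to 5 (clean written answer, in the MSA).
    scores: dict = field(default_factory=dict)

    def total(self) -> int:
        # Unanswered axes score zero, by design: "we'll get back to you"
        # is not an answer.
        return sum(self.scores.get(a, 0) for a in AXES)

    def red_flags(self) -> list:
        # An axis the vendor would not or could not answer counts the
        # same as a failing one.
        return [a for a in AXES if self.scores.get(a, 0) <= 1]

# Hypothetical vendor, for illustration only.
v = VendorScore("Example Vendor", {
    "coverage_gaps": 2,
    "methodology_disclosure": 1,   # will not put methodology in the MSA
    "conflicts_of_interest": 0,    # measures and optimizes the same campaign
    "verified_human_presence": 0,  # no deterministic human signal
})

print(v.total())       # 3 out of a possible 45
print(v.red_flags())   # 8 of 9 axes flagged
```

The design choice that matters is the unanswered-axis rule: a blank cell scores zero, so a vendor that dodges a question loses exactly as many points as a vendor that fails it.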
The honest part nobody wants to say
The measurement category is worth several billion dollars a year and almost none of the buyers spending that money have the internal capability to audit what they are buying. They know it. The vendors know it. The agencies in the middle, who in many cases are paid by the same vendors they are recommending to the brand, know it best of all. That is the actual market. Not a market for truth. A market for plausible deniability when the campaign underperforms. The deck exists so that when the numbers do not work, somebody can hold it up in a meeting and say "well, the vendor said."
Rozga's version of this is sharper than the polite one and worth quoting at length:
"The cynicism we're seeing isn't just representative; it's a survival mechanism. When over half of all web traffic is non-human, current KPI metrics like impressions and clicks become financial liabilities rather than assets. And those whose livelihood is tied up in it are just trying to ensure they have a job, they are not looking to 'solve the problem.'"
That is the sentence the trade press will not print. The people inside the system are not trying to fix it. They are trying to keep their jobs inside it. The system continues to operate as if the signal were there because the KPIs, the renewal cycles, and the comp plans all depend on the assumption that it is. The gap between "the system is broken" and "the system continues to be funded" is the gap where the next two years of consolidation, repricing, and category redefinition are going to happen.
The reason the trade press will not write this is the same reason your agency will not say it on the QBR call. Everyone in the room is being paid by someone in the room. The only people who can speak plainly are the buyers, and the buyers have not yet realized they are allowed to ask the questions in the scorecard out loud. Nothing happens if you ask. Nobody gets fired. The vendor might get squirrelly, which tells you everything you need to know about the vendor. That is what this document is for.
The texture
One more thing about the texture of this category, because it shows up in the scorecard whether anybody wants to admit it or not. The people running these companies do not like each other. That is not gossip. That is a feature of the market. Our inbox over the last three weeks has been a parade of executives, advisors, and ex-operators trying very hard to shape how we feel about other executives, advisors, and ex-operators. Some of it is on background. Some of it is over a second martini. None of it is in a press release. We are not going to detail any of it here because it is not our job to detail it, and frankly the personal stuff is the least interesting part of the story.
It matters only where it touches the company. A measurement vendor that is spending meaningful executive time running opposition research on a competitor's CEO is a measurement vendor whose executive time is not being spent on the product. A vendor whose comms shop is harder at work spinning a civil verdict than the vendor's methodology team is at work disclosing a panel weight is a vendor telling you what they actually prioritize. A vendor whose own customers describe the product as good and the founder as unpleasant in the same sentence is a vendor with a key-man problem hiding inside a customer-retention number.
Note the patterns. Score against the axes. Move on.
Coming in Part Two and Part Three of this report: SAGE versus ChatGPT, the outcomes-measurement scam in detail, why NBCU lists six vendors and what the Nielsen exclusion means, the named Seattle company quietly selling occurrence data to Comscore and others, iSpot's MRC accreditation moat, Samba's ACR streaming advantage and the contractual carve-outs, the Semasio crosswalk, VideoAmp's three-CEO tape with the revenue numbers we think are overstated, and the full M&A matrix.
Plus the downloadable scorecard. Twenty-three pages, print-ready, the document procurement actually brings to the meeting:
The Master Scorecard. Six vendors, eight axes, scored with inline citations.
The Blank Scorecard. For your team to fill in based on vendor written responses.
The Risk Matrix. Privacy, financial concentration, leadership, methodology. For the General Counsel.
The M&A Catalyst Map. Who gets acquired in the next 18 months. For the CFO.
Six Vendor Tear Sheets. Ownership, leadership, litigation, methodology, our take.
The Twelve-Question RFP Checklist. With good-answer and red-flag markers.
Twelve Renewal Questions. Hard for an incumbent to dodge.
The Outcomes Field Guide. Six tricks that turn outcomes math into vendor math.
Subscribe to our premium content at ADOTAT+ to read the rest.

