ADOTAT Newsletter - Adtech, Marketing and Media
Marketing Mix Modeling: The Ghost of Measurement Past Returns—Again
Why MMM Won’t Stay Dead

You know it, I know it, and anyone who has spent more than five minutes in marketing knows it—measurement is a mess. It's always been a mess. And the only thing more persistent than bad attribution models is the industry's desperate need to convince itself that this time, we’ve finally figured it out.
And so, once again, like some washed-up rock band clinging to a nostalgia tour, Marketing Mix Modeling (MMM) is back. And this time, it’s bringing AI, open-source tools, and promises that sound suspiciously like the last three times it was hot. Only now, it has some big tech overlords slapping their names on it, so it must be legit, right?
MMM: The Lazarus of Measurement
For the uninitiated (or those who haven’t been forced to sit through a three-hour attribution workshop while silently Googling “how to fake enthusiasm in meetings”), Marketing Mix Modeling (MMM) is an old-school statistical approach that, in theory, tells brands where their ad dollars are actually making an impact. In reality? It’s the corporate version of a fortune teller reading tea leaves—except the tea leaves are ad spend reports, and the fortune teller is a consultant billing you by the hour.
Let’s break it down. MMM works by taking a pile of historical data—sales, media spend, weather patterns, inflation, whether a Kardashian launched a new skincare line that week, and whether Mercury was in retrograde—and attempting to connect those dots into something resembling logic. The goal? To tell you which parts of your marketing budget actually drive revenue and which ones are just paying for your agency’s fancy dinners.
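Under the hood, that "connecting the dots" is usually just a regression: sales on one side, channel spend on the other. Here's a toy sketch with simulated data and invented channel names and coefficients—not anyone's production model—just to show the mechanics:

```python
# Toy MMM: regress weekly sales on channel spend.
# All data, channels, and coefficients are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly history

# Simulated spend for two hypothetical channels
tv = rng.uniform(50, 150, weeks)
search = rng.uniform(20, 80, weeks)

# "True" world: baseline sales + channel effects + noise
sales = 500 + 2.0 * tv + 3.5 * search + rng.normal(0, 25, weeks)

# Fit via ordinary least squares: sales ~ intercept + tv + search
X = np.column_stack([np.ones(weeks), tv, search])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, beta_tv, beta_search = coef
print(f"baseline={baseline:.0f}, tv={beta_tv:.2f}, search={beta_search:.2f}")
```

With clean simulated data the model recovers the true coefficients nicely. The catch, as the rest of this piece argues, is that real data is never this clean—and the model only tells you what happened in the rearview mirror.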
Sounds useful, right? Except there’s a small problem: it’s all in the past. MMM is essentially the rearview mirror of marketing—it tells you what might have worked months ago but gives you absolutely no idea what you should do right now. It’s like driving a car using only your rearview mirror while flooring the gas pedal, hoping you don’t hit a tree. It doesn’t update in real time. It doesn’t help you optimize spend on the fly. And it definitely doesn’t tell you if that “game-changing” TikTok campaign your CMO keeps hyping up actually did anything beyond making an intern go viral.
So why, exactly, is MMM suddenly making yet another comeback in 2025, as if it’s the marketing version of bell-bottom jeans? Two words: Google and Meta. And let’s be real—when the two biggest ad platforms on the planet start aggressively pushing a new measurement framework, you can bet it’s not out of the goodness of their data-hoarding hearts. No, this is about making sure that when you run your next MMM report, it magically shows that you should be spending even more on—you guessed it—Google and Meta.
MMM’s comeback isn’t some organic return to data-driven wisdom. It’s a panic-fueled response to the slow collapse of cookie-based tracking and the increasing difficulty of measuring digital ads in a privacy-first world. With Apple nuking IDFA, Google dragging its feet on third-party cookie deprecation, and regulators circling the ad industry like sharks sensing blood, Big Tech needs a new way to justify ad spend. And what better way than a complex, opaque statistical model that conveniently reinforces the importance of their own platforms?
But sure, let’s all pretend this is about “advertiser empowerment” and not just another chapter in the long, ridiculous saga of ad platforms making the rules while marketers scramble to keep up.
The Truth About Media Mix Modeling (MMM) – What You’re Getting Wrong
Patrick Cronin at Butler/Till laid it out: Media Mix Modeling (MMM) is the gold standard for understanding marketing effectiveness. It’s not just another fancy dashboard—it’s actionable, insightful, and gives marketers a clear view of how different media investments drive business KPIs like sales.
But let’s be real—just because MMM is powerful doesn’t mean it’s foolproof. If your setup is a mess, your results will be, too. Here’s what you need to know to avoid turning your MMM into a glorified guessing game.
DO:
✔ Model for reality, not theory. Your MMM needs to reflect real-life ad dynamics—carry-over effects, ad fatigue, and saturation all influence how media works. Ignore them, and your model is fantasy, not strategy.
✔ Take advantage of MMM’s flexibility. Want a more nuanced view? Break out video ads into CTV and OLV channels for more clarity. Need broader strokes? Aggregate multiple media types into one category to simplify optimization. MMM isn’t one-size-fits-all—customize it.
✔ Use it as a starting point, not gospel. MMM can guide budget allocation, but it won’t factor in things like publisher contracts, media availability, or overall business strategy. It’s a roadmap, not a magic bullet.
DON’T:
✖ Cram everything into the same model. Throwing every marketing channel into the mix just because you can will dilute results and create misleading conclusions. MMM should be targeted and specific.
✖ Get lost in the weeds. This isn’t the tool for placement-level or ultra-granular optimizations. MMM works best when looking at channel mix performance, not individual ad spots.
✖ Let your model go stale. The world moves fast, and so should your MMM. Trends shift, behaviors evolve—if you’re not refreshing quarterly, you’re working with outdated insights.
The takeaway? MMM is a must-have for proving marketing’s impact, but only if you use it wisely. The difference between success and failure is in how well you tailor and maintain it.
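The carry-over and saturation effects from the DO list above are typically handled with transforms applied to spend before the regression. A minimal sketch, with assumed decay and half-saturation values (real models fit these from data):

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric carry-over: this week's effect includes a decayed
    share of prior weeks' spend (decay=0.5 is an assumed value)."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

def saturation(x, half_sat=100.0):
    """Diminishing returns: response flattens as spend grows
    (simple Hill-style curve, half_sat is an assumed parameter)."""
    return x / (x + half_sat)

# One week of spend, then nothing: the effect decays instead of vanishing
spend = np.array([100.0, 0.0, 0.0, 0.0])
effect = adstock(spend, decay=0.5)
print(effect)              # carry-over keeps working after the flight ends
print(saturation(effect))  # but each extra dollar buys less
```

Skip these transforms and you get the "fantasy, not strategy" model the DO list warns about: one that thinks doubling spend doubles sales forever.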
Big Tech Wants You to Love MMM (For a Reason)
If you think Google and Meta are pushing MMM out of some deep commitment to helping brands make better decisions, let me introduce you to a bridge I’d like to sell you.
Google has open-sourced Meridian, its MMM tool, while Meta has Robyn—because nothing says transparency like tech giants offering free tools that just so happen to justify more ad spend on their own platforms. But why are they so into MMM all of a sudden?
A few theories:
They need an alternative to Multi-Touch Attribution (MTA), which has been on life support ever since Apple nuked IDFA and Chrome keeps waffling on cookies. If advertisers can’t rely on last-click data anymore, then MMM, which conveniently favors upper-funnel channels like YouTube and Instagram, starts looking a whole lot more appealing.
It helps them “prove” they deserve more of your ad budget. Traditional digital attribution often undercounts view-based platforms like YouTube or Facebook’s feed ads. MMM, on the other hand, is perfectly fine making assumptions based on high-level trends rather than actual user behavior.
They need a hedge. With cookies and user-level tracking getting more complicated than a congressional hearing on TikTok, Google and Meta need to show regulators and advertisers that there are still ways to measure performance—even if those ways are conveniently skewed in their favor.
The Problem with MMM: It’s a Great Theory, But a Nightmare in Practice
Here’s the thing—MMM sounds great in PowerPoint decks, but it falls apart when you actually try to use it for, you know, real marketing decisions.
For starters, it’s slow. Really slow. Like “waiting-for-your-government-issued-rebate-check” slow. Unlike MTA, which at least gave you a semi-real-time look at performance, MMM is built on historical data, meaning you get answers after the fact. And if something changes (spoiler: something always changes), your model is already outdated by the time you actually try to use it.
Then there’s the issue of triangulation. You’ll hear this word thrown around a lot by consultants and execs trying to sound smart, but here’s what it actually means: “We have no idea what’s right, so we’re just going to take a bunch of different models and average them together until they tell us something we like.”
It’s like trying to guess the weight of a cow by asking 30 random people at a state fair. Someone’s gonna be wildly off, someone else might be close, but ultimately, all you’re doing is coming up with a number that feels right.
And don’t even get me started on the operational nightmare of actually implementing MMM. A lot of brands sign up for it because, well, it sounds like something serious companies should do. Then they get their first report and realize they have no idea how to actually act on the results.
Oliver Gwynne doesn’t trust media mix modeling (MMM) because he sees it as an overinflated, outdated tool built more for justifying ad spend than for accurately measuring impact. His skepticism boils down to a few key points:
Simplistic and Over-Optimistic Assumptions – MMM relies on regression models that assume media spending directly correlates with sales, ignoring external factors like seasonality, viral trends, or competitor activity. It takes correlation and tries to pass it off as causation.
Agency-Driven Bias – Since MMM is often built and used by advertising agencies, there’s an incentive to make results look better than they actually are. Agencies need to show uplift and reach to justify ad spend, which means tweaking parameters in their favor.
Inflated Reach and Engagement Numbers – Ad sellers claim massive exposure, but real-world impact is far lower. Oliver breaks down tube advertising on London’s Central Line to show how stated reach figures (millions per day) shrink to a fraction when factoring in real viewing conditions, attention spans, and actual purchase intent.
Real-World Disprovals – He points to cases like eBay and Airbnb, which slashed hundreds of millions from their ad budgets after discovering MMM had drastically overstated the impact of digital ads. When they turned off campaigns, conversions barely changed—proving MMM had been inflating its effectiveness.
Google's Self-Serving Data – The introduction of Google Meridian, an open-source MMM tool, raises further doubts. Google, whose primary business is selling ads, has every incentive to skew models to make its own platform look like the highest-performing option. Given its history of quietly tweaking attribution models, there’s good reason to be skeptical.
What’s the Alternative?
Instead of relying on MMM, Oliver suggests a far simpler (and cheaper) approach: just stop running an ad and see if sales drop. Testing ad effectiveness through controlled pauses and basic tracking in a spreadsheet often yields more reliable insights than complex MMM models designed to keep advertisers spending.
His bottom line? MMM is a black box built to keep ad dollars flowing rather than to provide actionable insights. If even industry giants like eBay and Airbnb got duped by it, what chance does the average marketer have?
Advertisers Are Running Blind, But MMM Won’t Save Them
If there’s one thing marketers love more than a shiny new attribution model, it’s completely ignoring how to use it. We’re talking about an industry that jumps from one buzzword to the next like an influencer chasing the algorithm—first it was last-click, then multi-touch, then incrementality, now it’s MMM—and at no point does anyone stop to ask, Wait, do we even know what we’re trying to solve?
Because let’s be clear: most brands don’t have a measurement problem. They have an “actually making decisions” problem.
That’s why every quarter, CMOs walk into executive meetings armed with deck after deck of MMM reports, regression analyses, and charts with more lines than a Wall Street cocaine party—only to turn around and dump another million into the same media mix because, well, that’s what they did last year. If MMM really worked the way these brands hope, the first thing it would tell them is to stop lighting money on fire.
But they don’t. Instead, they commission yet another study, set up another framework, and add more columns to their already bloated spreadsheet hellscape, all while their media buyers sit there thinking, Cool, but what do we actually change?
How to Stop the Madness
If you actually want to measure things in a way that helps you make decisions—real, tangible, move-the-budget decisions—you need to do something radical: use common sense. And that means building a measurement strategy that does not rely on just one method, or worse, whatever Google is pushing this quarter. Here’s what that looks like:
A Day-to-Day Attribution System:
No, it’s not perfect. No, it won’t capture everything. But it will give you a consistent read on performance so that when your CMO asks what happened to sales yesterday, you don’t have to say, Give me three months and a Python script to run an MMM model. Whether it’s Google Analytics, platform-reported data, or something like Rockerbox, you need a daily pulse.
Incrementality Testing (aka, Grow Up and Run an Experiment):
Want to know if your ads actually work? Turn them off. If you run a $10 million campaign and sales don’t budge, guess what? That channel is a big fat waste of money—and no amount of MMM hand-waving is going to change that. The best brands test constantly, whether it’s holdouts, geo-based experiments, or even ghost bidding (RIP, cookie tracking).
MMM (If You Must, But Set Expectations Lower Than Your Intern’s Salary):
MMM is not a Monday-morning decision-making tool. It’s not going to tell you whether to spend more on TikTok this week or if that influencer campaign actually moved the needle. What it can do is help validate long-term trends—if you use it right. The problem is, most brands use it as a CYA strategy instead of an actual planning tool.
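The holdout-style geo experiment mentioned above is simple enough to sketch on a napkin: run ads in some markets, pause them in comparable ones, and compare growth against each market's own baseline. All figures here are invented for illustration:

```python
# Geo holdout sketch: ads run in "test" markets, paused in "control".
# Every number below is made up purely to show the arithmetic.
test_sales = {"Austin": 120_000, "Denver": 95_000, "Tampa": 88_000}
control_sales = {"Raleigh": 110_000, "Omaha": 91_000, "Tucson": 86_000}

# Sales for the same markets in the pre-test period
test_baseline = {"Austin": 100_000, "Denver": 90_000, "Tampa": 85_000}
control_baseline = {"Raleigh": 108_000, "Omaha": 90_000, "Tucson": 85_500}

def growth(actual, baseline):
    """Aggregate period-over-period growth across a group of markets."""
    return sum(actual.values()) / sum(baseline.values()) - 1

# Incremental lift = growth where ads ran, minus growth where they didn't
lift = growth(test_sales, test_baseline) - growth(control_sales, control_baseline)
print(f"incremental lift: {lift:.1%}")
```

If that lift number is indistinguishable from zero, no amount of MMM hand-waving changes the verdict: the channel isn't doing anything.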
The Bottom Line: MMM Isn’t the Future—It’s Just Another Tool
Look, MMM isn’t bad. It’s just not the magic bullet that Google, Meta, and breathless LinkedIn thought leaders want you to believe it is. You know the ones—the self-appointed measurement gurus who write 20-tweet threads about how MMM is “revolutionizing marketing,” complete with diagrams that look like they were stolen from a failed PhD dissertation. These people will tell you that MMM is the single source of truth and that if you’re not using it, you’re basically lighting your media budget on fire.
Want to impress the boardroom? Sure, run an MMM model and show them a bunch of beautifully color-coded charts proving that, yes, marketing does something. Want to actually know if your CTV campaign is driving incremental sales? Good luck.
And that’s the thing: MMM isn’t new. It’s been around since the Mad Men era, back when marketers had to manually enter ad spend data into punch cards and hope the mainframe didn’t crash. It’s not some radical innovation—it’s just the latest in a long line of measurement tactics that the industry rediscovers every few years whenever it runs out of better ideas.
So yes, MMM is back. Again. And like an aging rock band on their third farewell tour, it will keep coming back every few years with a new gimmick. One day it’s “AI-powered,” the next it’s “privacy-safe,” and before you know it, it’ll be “blockchain-enabled” because someone at a conference said that would make it sound more futuristic.
But don’t fool yourself: MMM is not here to save advertising. It’s just another tool, another framework, another excuse for consultants to bill you for endless “strategic workshops” where they spend half the meeting drawing arrows on a whiteboard while saying words like “triangulation” and “statistical significance.”
Because let’s be honest: if measurement were really solved, what would all the scammy consultants with their podcasts do for a living?

ALT OPINION: 🧠 Why Your Marketing ROI is Built on Self-Graded Homework (And How to Fix It) 🎯
Welcome to this week's dose of marketing truth bombs. If you're still trusting Google and Meta to tell you how well your ads are performing, buckle up—because it’s time for a reality check. 🚀
💡 The Problem: Your Ad Data is Rigged
Platforms like Meta, Google, and TikTok love showing you impressive ROAS (Return on Ad Spend) numbers. But let’s be honest—they’re grading their own homework. Of course, they’ll tell you their platform is driving all your conversions.
That’s where Media Mix Modeling (MMM) and Correlation Modeling come in. Instead of relying on platform-biased reports, these advanced models analyze all your marketing efforts holistically—TV, CTV, search, social, PR, offline sales, and even macroeconomic trends.
📊 What the Experts Say:
🎙️ Contrary to Popular Opinion podcast, hosted by Kelly Maguire & Todd Juneau of Vuja Dé Digital, recently tackled this head-on with creative strategist Bijan Malaklou. The key takeaways?
🚨 MMM isn’t a magic bullet, but it’s necessary. If you’re running ads without it, you’re basically throwing darts in the dark.
🧐 Correlation ≠ Causation. Just because your sales spiked after a Facebook campaign doesn’t mean that ad was the reason. MMM helps you separate actual impact from happy coincidences.
📅 Historical data is king. You need at least two years of clean data to make reliable marketing predictions. No shortcuts. No guessing.
🔍 Privacy changes make this even more urgent. With third-party tracking crumbling, MMM is one of the only ways to measure marketing effectiveness without depending on pixel-based tracking.
⚡ Why You Need to Care
👉 If your brand is spending big on marketing but doesn’t understand why things are working (or not), you’re burning money.
👉 If you’re only looking at last-click attribution, you’re ignoring 90% of the buyer’s journey.
👉 If you don’t have a strategy for marketing measurement in a cookie-less world, your competition is going to eat your lunch.
So yeah, it’s expensive. It’s complex. But flying blind is worse.

Rembrand’s AI: Product Placement Without the Headache
Why reshoot when you can remaster? Rembrand’s AI Studio lets you drop products into your videos like they were always meant to be there—seamlessly, effortlessly, and without the post-production migraines.
🎬 Creators: Monetize on your terms—no awkward ad breaks, just seamless integrations.
📢 Advertisers: Be part of the story, not an annoying pop-up.
Try it now and you could snag $10,000 (because who doesn’t love free money?).