An honest exploration, written for B2B marketers who don't have a data science team, don't run experiments at geo-level scale, and have read enough hot takes to wonder if they're missing something. We'll walk through what MMM actually is, how it differs from what most marketers do today, and how to figure out whether it belongs in your work right now.
We make software in the marketing performance space. Today, our product does touch-level analysis with statistical pattern matching — which is a different exercise from MMM, as we'll get into. We're actively exploring what role MMM should play in our product and in the broader B2B marketing world, and we're using this document partly to gather feedback and ideas from the people who'd actually use it. We have early hypotheses. We're holding them loosely on purpose.
So consider this a thinking-out-loud document, not a verdict. We'll lay out what we know, where we're uncertain, and how you might figure out where you stand. If you have a perspective — whether you've run MMM, considered it, or written it off — we'd like to hear it.
Marketing mix modeling is a statistical technique. At its core, it's a regression model — the same kind of math that asks "if I change this, how much does that change?" Specifically, MMM takes your historical marketing spend across channels, your historical revenue or sales, and a bunch of other variables that influence outcomes (seasonality, pricing, the economy, competitor activity), and tries to estimate how much each channel contributed to results.
The output looks something like this: "Across the past two years, every dollar you spent on paid social produced roughly $1.40 in attributable revenue, with high confidence. Every dollar on display produced about $0.60, with much lower confidence." Modern Bayesian MMM dresses this up with probability distributions and prior beliefs about how channels behave — saturation curves, decay rates, that kind of thing — but the underlying exercise is the same.
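The regression at the heart of this can be sketched in a few lines. Everything below is synthetic and illustrative: the channel names, spend ranges, decay rates, and saturation parameters are invented, and a real MMM would also have to estimate the adstock and saturation parameters rather than reuse the true ones as we do here to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

def adstock(spend, decay):
    """Carry a fraction of each week's effect over into later weeks."""
    out = np.zeros_like(spend)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

def saturate(x, half_sat):
    """Diminishing returns: a simple Hill-style saturation curve."""
    return x / (x + half_sat)

# Two years of weekly data: spend on two channels plus seasonality.
weeks = 104
search = rng.uniform(20, 60, weeks)   # weekly spend in $k (invented)
social = rng.uniform(10, 40, weeks)
season = 10 * np.sin(2 * np.pi * np.arange(weeks) / 52)

# The "true" revenue process the model will try to recover
# (channel effects of 80 and 50, plus a base and noise).
revenue = (
    100
    + 80 * saturate(adstock(search, 0.3), 40)
    + 50 * saturate(adstock(social, 0.5), 30)
    + season
    + rng.normal(0, 3, weeks)
)

# Ordinary least squares on the transformed inputs.
X = np.column_stack([
    saturate(adstock(search, 0.3), 40),
    saturate(adstock(social, 0.5), 30),
    season,
    np.ones(weeks),
])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(coefs[:2])  # estimates should land near the true 80 and 50
```

Bayesian tools like Meridian and Robyn replace the least-squares step with priors and posterior distributions, but the shape of the exercise is the same: transformed spend in, estimated channel contributions out.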
The technique was developed for consumer packaged goods, where companies spend across TV, radio, print, and digital, and want to know how to allocate the next quarter's budget across that portfolio. It's been around since the 1960s. The Bayesian variants and the open-source tools (Google's Meridian, Meta's Robyn) are recent, but the core idea is decades old.
MMM is what you'd build if you were trying to answer a CFO question: "At a portfolio level, where is our marketing budget paying off, controlling for everything else that's happening?"
It's not built to answer "which campaign should I run next week?" or "is this specific account engaging?" Those are different questions, with different tools.
Most discourse about MMM versus attribution treats them as competing answers to the same question. They're not. They're different exercises looking at different units of analysis, answering different questions. Neither is better. Knowing which one fits your work is the whole game.
One thing worth being honest about: MMM is sometimes presented as the rigorous, causal approach to marketing measurement, in contrast to "unscientific" touch-based work. That's a bit overstated.
MMM is also correlational. It's a regression model. It estimates statistical relationships, controlling for what you put into the model. It's more rigorous than naive last-touch attribution, certainly. But to prove that a marketing input caused a revenue outcome, you need actual experiments — geo holdouts, audience holdouts, incrementality tests. That's a separate methodology, and it sits above MMM on the causal hierarchy, not below it.
The honest framing: touch-based analysis is correlation at the entity level. MMM is correlation at the aggregate level, with statistical controls. Incrementality testing is causation, for the specific bet you tested. Three tools, three jobs. None of them is the apex of measurement. They're complementary.
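To make the top of that hierarchy concrete, here's a minimal sketch of the readout a geo holdout produces. Every figure is invented, and a real test also needs matched regions, a pre-period baseline, and a significance check; the point is only that the comparison is experimental, not modeled.

```python
# Hypothetical geo holdout: the channel keeps running in "test" regions
# and is paused in matched "control" regions. All figures are invented.
test_revenue = [112.0, 98.6, 105.2, 120.2]    # $k, regions with ads on
control_revenue = [101.0, 95.0, 99.8, 108.2]  # $k, matched holdout regions
spend_per_test_region = 4.0                   # $k during the test window

test_mean = sum(test_revenue) / len(test_revenue)           # ~109.0
control_mean = sum(control_revenue) / len(control_revenue)  # ~101.0
lift = test_mean - control_mean        # incremental revenue per region
iroas = lift / spend_per_test_region   # incremental return on ad spend

print(f"lift: {lift:.1f}k per region, iROAS: {iroas:.2f}")
```

The answer is causal but narrow: it tells you what pausing that channel in those regions for that window did, and nothing about the rest of the portfolio.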
MMM isn't a more rigorous version of touch-based analysis. It's a different exercise, looking at a different unit, answering a different question. The choice between them isn't about sophistication. It's about fit.
MMM is a real tool with real strengths. We're not arguing against it. We're trying to be honest about the conditions under which it produces trustworthy, actionable answers — and the conditions under which it doesn't.
MMM needs enough observations to estimate channel effects with reasonable confidence. The conventional minimum is 2-3 years of weekly data, which gets you 100-150 observations.
For B2B with smaller volumes and noisier signals, you typically need more, not less. A B2B company generating 10,000 leads per month has data, but its signal-to-noise ratio is much worse than that of a consumer brand with 10,000 daily sales.
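A back-of-the-envelope simulation (ours, not drawn from any MMM library) shows why observation count matters: the spread of the estimated channel effect shrinks roughly with the square root of the number of observations, so weekly B2B data buys you far less precision than daily consumer data at the same noise level.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_spread(n_obs, noise_sd, n_sims=2000):
    """Fit a one-channel model many times on simulated data and return
    the standard deviation of the estimated channel effect."""
    true_effect = 2.0
    estimates = []
    for _ in range(n_sims):
        spend = rng.uniform(0, 10, n_obs)
        outcome = true_effect * spend + rng.normal(0, noise_sd, n_obs)
        slope = np.polyfit(spend, outcome, 1)[0]  # fitted channel effect
        estimates.append(slope)
    return float(np.std(estimates))

# Same noise level, different counts: ~2 years of weekly observations
# vs. ~2 years of daily observations. The weekly estimate is much wider.
print(estimate_spread(104, noise_sd=8.0))
print(estimate_spread(730, noise_sd=8.0))
```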
MMM is most valuable when you're spending meaningfully across multiple channels and trying to optimize the mix between them.
A B2B company with 70% of spend in paid search and LinkedIn doesn't have much of a portfolio to optimize. MMM applied there produces wide confidence intervals on the small channels and tells you what you already knew about the big ones.
MMM works best when the lag between marketing input and revenue outcome is short and consistent. Consumer purchases happen in days or weeks.
B2B sales cycles often run 3 to 18 months, and they're variable. The longer and lumpier the cycle, the harder it is for MMM to associate spend with outcomes confidently. This is structurally why MMM came from CPG, not B2B.
MMM produces portfolio-level recommendations: "shift 25% from display to programmatic audio." Acting on that requires an organization that can pull those levers at that resolution.
Most B2B marketing teams can't reallocate quickly across channels at that scale: in-flight campaigns, agency relationships, sales-and-marketing alignment, and brand considerations are all binding constraints. The recommendation arrives as an abstraction the org can't operationalize.
MMM produces quarterly or semi-annual outputs. If your team is making big portfolio decisions on that cadence, the timing fits.
If you're optimizing at campaign-level cadence — adjusting weekly, reading dashboards daily, iterating on creative — MMM isn't speaking your operating language. Different tool for different cadence.
Historically, MMM has required data scientists or external consultants to handle data prep, model specification, and interpretation. The math itself is software; the judgment around the math has been human.
This is the condition AI may genuinely be changing — and the place we're holding our view most loosely. More on that below.
This is the part of the conversation where our working hypothesis is genuinely open. We have a view. We could be wrong.
For most of MMM's history, the binding constraint hasn't been the math. The math has been productizable for years — Google and Meta have open-sourced their tools, and any technical team can pick them up. The binding constraint has been the judgment around the math: choosing variables, setting priors, transforming data, interpreting outputs, translating recommendations into operational action. That's where the data scientists earn their fees, and that's why MMM has been a Fortune 500 capability rather than a mid-market one.
AI plausibly changes this. Modern models are quite good at the kinds of judgment tasks MMM has needed humans for — picking sensible priors from industry benchmarks, flagging when a model is overfitting, translating coefficient outputs into plain-language recommendations. A product that wraps MMM in AI assistance could, in principle, bring it down to teams that couldn't have run it before. They might not even need to know what's happening under the hood.
That's the optimistic case, and we don't dismiss it. If MMM becomes a button rather than a project, the calculus genuinely changes — and probably faster than most of us expect.
But — and this is the part of our view we hold more firmly — the structural conditions don't go away because the software gets easier. A B2B company with 8,000 leads a month, two main channels, a nine-month sales cycle, and a marketing team of four people won't get meaningfully more value from MMM no matter how seamless the tool gets. The data volume wasn't the binding issue. The fit wasn't there to begin with. Easier MMM helps the teams who already had the structural fit and were blocked by complexity. It doesn't manufacture fit where none existed.
AI will productize the math layer of MMM over the next few years. More teams will have access to it. We'll likely add it to our own product when our customers want it.
But MMM is still a portfolio-question tool, and most B2B marketing isn't a portfolio question. It's an operational one.
We could be wrong about the pace. We don't think we're wrong about the fit.
Six questions about how your marketing actually operates. The verdict at the end is our honest read based on your answers — not a sales pitch, just a working assessment.
We think MMM is a real, useful tool for the specific class of organizations that have the data volume, the channel mix, the portfolio-level decision cadence, and the operational capacity to act on its outputs. Historically, that's been a Fortune-500-sized club. AI is plausibly opening the door to more teams, and that's good — more tools accessible to more marketers is unambiguously progress.
But we don't think MMM is the future for everyone, and we're skeptical of the discourse that pitches it that way. Most B2B marketing teams operate on smaller data, longer cycles, and campaign-level cadences that MMM isn't built for. For those teams, the touch-based, entity-level, operationally paced work is where the actual value lives — and it's where the trust and clarity gaps in this category most need to be closed.
We'd rather help most B2B marketers get really good at the work they're actually doing, with data they can trust, than encourage them to chase a tool that was built for a different shape of company. If MMM becomes genuinely useful for our customers — productized, embedded, useful without requiring a PhD — we'll add it. Until then, we'd rather be honest about what fits where.
MMM is a real tool for a real class of company. Most B2B marketing teams aren't that class of company. AI may change that. We're watching.
This document is a companion to The Manifesto — our working framework for how teams of every size can get better at marketing measurement without waiting for a future that may not be theirs to inhabit. If MMM is for you, great. If it isn't, the work in front of you is still worth doing well.