We make a number of historical claims across this publication — about when regression was developed, when marketing mix modeling emerged, which companies pioneered which methods, when open-source tools were released. This page collects the sources behind those claims. We've tried to use primary sources where possible (company websites, peer-reviewed papers, official releases) and to flag the places where the historical record is approximate or contested. If you find an error, tell us.
For the statistical history, we leaned on academic sources — the published papers themselves where accessible, peer-reviewed historical surveys, and the encyclopedia entries that summarize them. For the marketing mix modeling history, we used a combination of academic surveys, vendor sources (verified against the companies' own materials where possible), industry publications (AdExchanger, Search Engine Land, Forrester), and the technical documentation of the open-source projects themselves.
A few caveats worth naming up front. The exact origin date of MMM is genuinely contested in the literature — some sources point to the 1960s, others to the 1970s or 1980s — because the technique evolved gradually from econometrics rather than being invented in one moment. We've used hedged language ("emerged in the 1960s and 1970s") where the literature is mixed. Similarly, the precise inflection points in the digital attribution era (when it started, when it began breaking down) are approximate by nature.
Where a claim is well-documented across multiple sources, we cite the most authoritative one. Where a claim relies on a single source, we say so. If you find a better or contradicting source, we'd genuinely like to hear about it — this is a living document.
Beyond the historical record, this publication makes a number of framing and analytical claims. These are our own arguments, not facts to be verified — but where they rest on factual context, here are the sources we drew on.
The claim that the digital attribution era was a detour now ending is our framing, but it draws on widely documented industry developments: third-party cookie deprecation (Chrome's announced phase-out, delayed multiple times and ultimately walked back), Apple's App Tracking Transparency (introduced 2021), GDPR (2018), and the broader privacy regulation environment. The Marketing Science Institute's 2023 panel discussion cited in the Domaleski article above identifies these as the drivers of renewed MMM interest, which supports the "detour ending" reading.
The claim that MMM was built for consumer packaged goods and fits B2B poorly is widely acknowledged in the practitioner literature, including the Hungry Robot Medium piece (which describes MMM's CPG origins explicitly) and the Wikipedia entry on MMM (which notes the limitations for new products and unstable launch periods). The B2B challenge — longer sales cycles, smaller data volumes, a narrower channel mix — follows directly from the technical conditions MMM needs, which we discuss in the MMM Companion document.
The claim that AI assistance is lowering the operational barriers to open-source MMM is our current best read, held loosely. The Funnel.io and MASS Analytics pieces both describe the ongoing operational challenges of open-source MMM (data preparation, model specification, interpretation) that AI assistance plausibly addresses. We've avoided strong claims about how fast this will progress because the evidence is still emerging.
The observation about sales-marketing alignment around measurement is our own, drawn from working with B2B marketing teams. We don't cite an external source for it because we haven't found one that articulates this dynamic — though we'd welcome pointers if you know of relevant research in the area. The closest adjacent work is the SiriusDecisions / Forrester sales-marketing alignment literature, which we've drawn on conceptually.
The category-name argument is ours, presented in the main hub. We don't claim it as historical fact. The supporting evidence is the recurring confusion we observe in industry discourse — the "attribution is dead" content cycle, the inconsistent definitions of attribution across vendors and analysts, and the procurement difficulties that follow from buying software described by a misleading category name.
This publication is a working document. If you find a factual error — a date that's off, a source we've miscited, a claim that's more contested than we've represented — we want to know. The voice of the publication is "we're trying to get this right and we're holding our views loosely," and that has to apply to the factual record too, not just to the argumentative claims.
The reverse is also true: if you have a better source for something we've cited weakly, or a primary source for something we've cited secondarily, we'd like to upgrade. The goal is to be the publication that does the work other vendor content doesn't bother to do.