The loudest voices in marketing measurement are right that something is dead. They're pointing at the wrong corpse. The work isn't dead. The need isn't dead. The category isn't dead. Every marketer still needs to know whether their efforts are paying off and what to do more of. That need is more urgent than ever. What's failing — and what's been failing all along — is the word we gave the discipline that's supposed to serve it. And the failure of the word is causing real harm to the work underneath.
Before we defend anything, we want to concede everything that deserves conceding. The skeptics aren't crazy. Several specific things in this category really are finished. Naming them honestly is how the rest of the argument earns its standing.
First-touch. Last-touch. U-shaped. W-shaped. Linear. Time-decay. Data-driven. Multi-touch. Statistical fingerprinting. They are all valid for different situations. None of them is universally right. The thing that's actually dead isn't any of these models. It's the meeting that derails into arguing about which one to use and walks out without a decision. It's the idea that there's one correct methodology and we just have to find it.
What should replace the debate: pick the model that fits the use case, attach a trust rating to whatever number it produces, and move on. Different questions deserve different models. A team trying to understand campaign-level operational performance picks differently than a team trying to credit revenue at the company level. Both are right.
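To make that concrete, here's a toy sketch of our own, purely illustrative: the journey, the channel names, and the 40/20/40 U-shape weights are all invented. It runs the same four-touch closed-won journey through four of the models above and gets four different answers. None of the outputs is wrong; they answer different questions.

```python
# Four credit models applied to one (hypothetical) closed-won journey.
journey = ["paid_search", "webinar", "email_nurture", "sales_demo"]

def first_touch(touches):
    return {t: 1.0 if i == 0 else 0.0 for i, t in enumerate(touches)}

def last_touch(touches):
    return {t: 1.0 if i == len(touches) - 1 else 0.0 for i, t in enumerate(touches)}

def linear(touches):
    return {t: 1.0 / len(touches) for t in touches}

def u_shaped(touches, end_weight=0.4):
    # 40% to the first touch, 40% to the last, the remaining 20% split
    # across the middle. Assumes at least three touches.
    n = len(touches)
    middle = (1.0 - 2 * end_weight) / (n - 2)
    return {t: end_weight if i in (0, n - 1) else middle
            for i, t in enumerate(touches)}

for name, model in [("first-touch", first_touch), ("last-touch", last_touch),
                    ("linear", linear), ("u-shaped", u_shaped)]:
    print(f"{name:>11}: {model(journey)}")
```

Run it and the four dictionaries disagree with each other, which is the whole point: the model is a lens, not a verdict.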
The reason this debate persists isn't intellectual confusion — it's that sales is compensated on credit assignment and reads 'let's improve our measurement' as 'let's redistribute the credit.' Marketing usually cares about influence, not sourcing, but the conversation gets dragged toward sourcing because that's where the political weight lives. Until marketing and sales agree, in advance, that measurement isn't credit, the debate has no clean exit. Which is why it never ends. Bury it.
A measurement strategy that depended on third-party cookies as its primary signal was always going to break, and it did. Products built on that foundation are finished. The replacement isn't 'no measurement' — it's measurement that doesn't depend on a single fragile signal.
The early-2010s tools that treated attribution as a credit-allocation game — pick a model, divide the pie, argue about the splits — solved the wrong problem. The work they did is no longer where the value lives, and the products that haven't evolved past it are appropriately on the way out.
Sourced pipeline vs. influenced revenue vs. attributable bookings — pick the one that fits the decision you're making. Insisting on a single number that does all jobs equally well is how we ended up with numbers that do no jobs well.
The need underneath all the noise doesn't go away when a post goes viral. It doesn't go away when a methodology gets attacked. It's structural to the work of doing marketing in a company that cares whether the work is paying off. Operationalized, it's three questions: a scorecard, a compass, and a trust rating.
The scorecard question: is the marketing engine actually improving quarter over quarter, or are we just running it harder? A real answer requires comparable periods, complete data, and the discipline to define 'better' before measuring it.
Most teams cannot honestly answer this. The CMO asks it weekly. The CFO asks it quarterly. The answer is usually a confident shrug.
The compass question: where should the next dollar go, what should the next campaign aim at, what should next quarter's plan focus on? The narrow version, 'more of what we've already done,' is the honest one, because it's the only version your data can actually answer.
Most teams answer this by intuition, vendor recommendation, or whoever's loudest in the planning meeting. None of those are the data.
The trust rating: every number this software produces should come with a confidence signal. Data completeness, sample size, period comparability, signal-to-noise. The user should know which answers to bet a quarter on and which to treat as directional.
Almost no product in this category does this. Every chart looks equally authoritative. That's the actual trust problem.
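Here's a minimal sketch of what a trust rating could look like in practice. Everything in it is our assumption: the Evidence fields, the 0.7 completeness floor, the 30-deal minimum, the three labels. The point is the shape, a number accompanied by the evidence behind it, not the specific thresholds.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    completeness: float      # fraction of deals with usable touch data, 0-1
    sample_size: int         # deals in the period being measured
    comparable_period: bool  # same length, same seasonality window
    signal_to_noise: float   # e.g., effect size / stddev of the baseline

def trust_rating(e: Evidence) -> str:
    """Collapse the evidence into a label a reader can act on.
    Thresholds are illustrative, not calibrated."""
    if e.completeness < 0.7 or e.sample_size < 30:
        return "directional"   # look, don't bet
    if not e.comparable_period or e.signal_to_noise < 1.0:
        return "moderate"      # plan against it, verify next period
    return "high"              # bet a quarter on it

print(trust_rating(Evidence(0.9, 240, True, 1.8)))   # high
print(trust_rating(Evidence(0.6, 500, True, 2.0)))   # directional
```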
Three stages of operating maturity, plus a parallel discipline. Each stage answers all three questions, at increasing fidelity. Together they form the map of where teams are now and where they're going, the map the discourse has been refusing to draw.
Read carefully — The Lab is not Stage 4. It's a different discipline that runs alongside — different inputs, different questions, different cadence. Treating it as the destination is the discourse's biggest distortion.
Stage 1. You stop defaulting to first-touch or last-touch alone. You see which campaigns touched the deals that closed, across channels, in one place. The basics, done well.
Scorecard: touched-pipeline counts, period over period. A real baseline for the first time.
Compass: which channels and campaigns are showing up on closed-won. Coarse, but directional.
Trust: mostly a data-completeness check. Are pipelines clean, are touches landing, is anything missing.
You know you're here when: your team can answer 'which campaigns touched this closed deal' in under five minutes, in one place, without exporting to a spreadsheet.
Pitfall: treating Stage 1 visibility as the answer instead of the floor. Teams stall here for years and call it 'attribution.'
Stage 2. You move from 'what happened' to 'what's changing.' Deals moving faster when certain campaigns touch them. Segments accelerating or stalling. The conversation shifts from touch counts to motion.
Scorecard: velocity, conversion rate by stage, and segment-level trend lines that survive period comparison.
Compass: which segments are accelerating, which are stalling, and which campaigns correlate with the change.
Trust: sample-size and period-comparability checks. 'Is this trend real or is it three deals?' A sketch of that check follows below.
You know you're here when: planning meetings reference movement, not just totals. People say 'enterprise is accelerating' before they say 'enterprise has more pipeline.'
Pitfall: confusing correlation with causation. 'Velocity went up when we ran the webinar series' is a story, not a cause.
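The sketch promised above: a crude two-proportion z-test for the 'three deals' question. The function name, the 30-deal floor, and the 1.96 cutoff are our illustrative choices, not a standard; a real implementation would also check period comparability before trusting the comparison.

```python
import math

def trend_is_real(wins_a, n_a, wins_b, n_b, z_crit=1.96, min_n=30):
    """Two-proportion z-test for 'did stage conversion really move
    between periods, or is this a handful of deals?'"""
    if min(n_a, n_b) < min_n:
        return False, "sample too small; treat as directional"
    p_a, p_b = wins_a / n_a, wins_b / n_b
    pooled = (wins_a + wins_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se > 0 else 0.0
    verdict = abs(z) >= z_crit
    return verdict, f"z = {z:.2f} ({'real' if verdict else 'noise-sized'})"

print(trend_is_real(12, 80, 25, 90))   # e.g., enterprise conversion, Q1 vs Q2
print(trend_is_real(2, 5, 3, 4))       # 'three deals' territory
```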
Stage 3. You compare current campaign activity against fingerprints of what's worked before. The system tells a story about what to do next, grounded in your actual historical data. Spend decisions reference recommendations, not just opinions.
Scorecard: engine-level health. Are campaigns matching the patterns of past winners, or drifting toward past losers?
Compass: concrete recommendations. 'This campaign matches the fingerprint of the Q2 enterprise winner; double the budget.'
Trust: confidence scores per recommendation, based on similarity to past patterns and signal-to-noise in current data.
You know you're here when: spend decisions explicitly reference recommendations. The QBR slide shows 'system says X, we did Y, here's the gap.'
Pitfall: treating recommendations as decisions. The system suggests; humans still pick. The team that forgets that gets bitten.
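For the curious, here's a toy sketch of the Stage 3 mechanic: score a live campaign's feature vector against fingerprints of past winners, and let similarity stand in for confidence. The features, campaign names, and the 0.9 threshold are all invented for illustration; a production system would be far richer, and would also discount for noisy or incomplete current data, but the shape is the same.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Feature order (illustrative): [velocity_lift, mqls_per_dollar,
#   enterprise_share, multi_channel_touch_rate]
past_winners = {
    "q2_enterprise_webinar": [0.9, 0.4, 0.8, 0.7],
    "q3_smb_paid_search":    [0.2, 0.9, 0.1, 0.3],
}

def recommend(current, fingerprints, threshold=0.9):
    """Score a live campaign against historical winner fingerprints."""
    scored = {name: cosine(current, fp) for name, fp in fingerprints.items()}
    best = max(scored, key=scored.get)
    action = "scale it" if scored[best] >= threshold else "watch it"
    return best, round(scored[best], 2), action

print(recommend([0.85, 0.35, 0.75, 0.8], past_winners))
# -> ('q2_enterprise_webinar', 0.99, 'scale it')
```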
The Lab. Bayesian marketing mix modeling. Geo holdouts. Incrementality testing at scale. Not a higher rung: a different discipline entirely. It answers a different question, on a different cadence, with different inputs.
Inputs: aggregated spend by channel and geo, macro indicators, controlled holdouts. Not the touch-level data of Stages 1–3.
People: Bayesian statisticians, econometricians, experimental designers. Not the same skill set as the operating team.
Cadence: quarterly or annual portfolio recalibration. Not the weekly operating loop where Stages 1–3 live.
Some teams running well at Stage 3 will commission Lab work occasionally. Some will never need to. Treating the Lab as the destination everyone should reach is the discourse's biggest distortion — and it's what makes the 'attribution is dead, MMM is the future' argument feel revolutionary when it's actually skipping the entire middle.
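For readers wondering what Lab work looks like mechanically, here's a deliberately tiny Bayesian MMM sketch in PyMC, run on synthetic data. The priors, the fixed adstock decays, and the absence of saturation curves or seasonality are all simplifications we've made for illustration. Notice the inputs: aggregate weekly spend, not touches, which is exactly why this isn't Stage 4.

```python
# Toy Bayesian MMM: two channels, 104 weeks of synthetic aggregate spend.
# Requires numpy and pymc (pip install pymc).
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
weeks, channels = 104, 2
spend = rng.gamma(2.0, 50.0, size=(weeks, channels))  # synthetic weekly spend

def adstock(x, decay):
    """Geometric carryover: this week's effect includes decayed past spend."""
    out, carry = np.zeros_like(x), 0.0
    for t in range(len(x)):
        carry = x[t] + decay * carry
        out[t] = carry
    return out

# Synthetic ground truth so the sketch runs end to end.
x = np.column_stack([adstock(spend[:, 0], 0.5), adstock(spend[:, 1], 0.2)])
revenue = 200 + x @ np.array([0.8, 0.3]) + rng.normal(0, 25, weeks)

with pm.Model():
    base = pm.Normal("base", mu=0.0, sigma=500.0)            # baseline revenue
    beta = pm.HalfNormal("beta", sigma=1.0, shape=channels)  # channel effects >= 0
    sigma = pm.HalfNormal("sigma", sigma=50.0)               # observation noise
    # Decays are fixed here for simplicity; real models put priors on them.
    pm.Normal("obs", mu=base + pm.math.dot(x, beta), sigma=sigma, observed=revenue)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)
```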
Before agreeing or disagreeing, the honest thing is to ask what 'dead' actually means. The answers are revealing, and so is the fact that there are so many of them.
If 'dead' means the data layer is broken: yes, we agree. That's been a real problem for years. Incomplete data, broken pipelines, dashboards that can't be inspected. If that's what's dead, we're nodding along.
If 'dead' means the practice is dysfunctional: also true. Most companies don't have a working definition of what 'good' looks like. Marketing and sales argue about the model rather than the work. That's exhausting, and worth calling out.
If 'dead' means the tools are obsolete: the first-generation credit-assignment platforms? Sure, those are exhausted. The modern multi-touch-with-pattern-recognition stack? Very much alive. Important difference.
If 'dead' means the models are finished: last-touch is. Linear attribution was never great. Multi-touch with success-pattern fingerprinting is the most useful it's ever been. These are very different things, and the word 'attribution' doesn't help us tell them apart.
If 'dead' means credit assignment itself is the problem: that's a more interesting argument, but credit assignment is one mechanic inside the discipline, not the discipline itself. The discipline survives without it. The mechanic is what needs to evolve.
Which is why renaming this category isn't a vanity project — it's a precondition for having any of these conversations productively. We're going to propose some candidates later, and we want your help picking one.
Most misnamed things still work. The acronym CRM is clumsy in plenty of ways, and customer relationship management happens fine despite the name. Attribution is one of the rare cases where the bad name is actively damaging the work. Three specific harms, observable in the wild.
Harm one: the equivocation. The word does double duty as both a mechanic (credit assignment) and a category (the cross-channel revenue measurement discipline). That ambiguity is what lets a respected voice declare the category dead while recommending its actual work. Fix the name and the argument collapses, because the speaker has to pick which thing they mean. Almost every 'attribution is dead' essay survives only on this equivocation.
Harm two: the purchase framing. A marketer who buys 'attribution software' is signaling, through the word itself, that they want credit assignment. The RFP gets written around credit-assignment criteria. The procurement conversation happens in credit-assignment language. The success metrics get set against credit-assignment outputs. Then the software gets used wrong, fails to deliver operational improvement, and the team blames the category. The name shaped the failure before the contract was signed.
Harm three: the redirected attention. 'Attribution' focuses attention on credit: who deserves it, how to assign it, what the right model is. The right questions, as the framework shows, are whether the engine is getting better and what to do more of. Every time a marketer reaches for 'attribution data,' the word is subtly redirecting them from operational improvement toward credit allocation. The vocabulary is shaping the work, and the work it's shaping isn't the work that matters.
Sourced pipeline. Influenced revenue. Marketing-attributable bookings. CAC. Pipeline-to-spend ratio. The list grows every quarter, and most of it is either a scorecard metric (telling you whether you're getting better) or a compass metric (telling you what to do next). The harm comes from using one for the other.
Sourced pipeline is a fine scorecard at the engine level and a terrible compass at the campaign level. Influenced revenue is a useful compass for which segments to lean into and an awful scorecard for whether sales is improving. CAC is a portfolio metric, not a campaign metric, and most teams apply it at the wrong level.
The framework forces the question: what stage are you operating from, and is this metric a scorecard or a compass for that stage? The metrics page works through every common one and labels it.
By this point in the page, the case is made: the work is alive, the need is permanent, the framework exists, the discourse is muddled, and the name is causing real harm. Time to replace it. Revenue intelligence beat 'sales analytics' because it described the outcome. This category needs the same move. Below are the candidates we keep coming back to. Vote. Argue. Suggest a better one.
Candidate one: honest. Boring. Describes the work. The performance frame matches what teams actually need: a scorecard for whether the engine is improving and a compass for where to point next.
Candidate two: borrows the move that worked for 'revenue intelligence.' Carries the right connotation: pattern recognition, signal extraction, recommendation. Risk: too generic without a qualifier.
Candidate three: the most precise of the bunch. Names the input (marketing), the outcome (revenue), and the discipline (intelligence). A mouthful. Probably too long for everyday use.
Candidate four: narrower than the others. Focuses on the unit of work most teams actually plan around. Strong inside operational conversations, weaker at the executive layer.
Candidate five: the bigger tent. Includes the sales motion, not just marketing. Useful for orgs where the line between marketing and sales has blurred. Risk: too broad to mean anything specific.
Disagree with a stage definition. Argue the Lab really is a fourth rung. Defend last-touch. Tell us we missed a harm. This page improves when you push.
The three-questions framing is what I've been trying to articulate for two years. Scorecard, compass, trust rating. Every vendor pitch I sit through pretends to do all three and actually does maybe one and a half. The trust rating piece is the missing one.
I buy the relocation of the Matt argument, but be careful: not all of the 'attribution is dead' crowd is making the same argument. Some really do mean 'measurement is impossible now,' which is a different and worse claim. Worth distinguishing.
Where does small-scale incrementality fit? Lift studies on individual channels, not full geo holdouts? Feels like it bridges Stage 3 and the Lab. Probably worth a footnote on the framework.
The work is alive. The need is permanent. The category is progressing. The framework exists. The metrics matter. The trust problem is solvable. And the word — the single five-syllable noun at the center of all of it — is the part that needs to go.
The only thing dead about attribution is the name.