THE WORK IS ALIVE·THE NEED IS PERMANENT·THE CATEGORY IS PROGRESSING·THE WORD IS DEAD·
A thesis

The only thing dead about attribution is the name.

The loudest voices in marketing measurement are right that something is dead. They're pointing at the wrong corpse. The work isn't dead. The need isn't dead. The category isn't dead. Every marketer still needs to know whether their efforts are paying off and what to do more of. That need is more urgent than ever. What's failing — and what's been failing all along — is the word we gave the discipline that's supposed to serve it. And the failure of the word is causing real harm to the work underneath.

01
What's actually dead

Let's bury
what should be
buried.

Before we defend anything, we want to concede everything that deserves conceding. The skeptics aren't crazy. Several specific things in this category really are finished. Naming them honestly is how the rest of the argument earns its standing.

THE BIG ONE

The endless debate about which attribution model is 'right.'

First-touch. Last-touch. U-shaped. W-shaped. Linear. Time-decay. Data-driven. Multi-touch. Statistical fingerprinting. They are all valid for different situations. None of them is universally right. The thing that's actually dead isn't any of these models. It's the meeting that derails into arguing about which one to use and walks out without a decision. It's the idea that there's one correct methodology and we just have to find it.

What should replace the debate: pick the model that fits the use case, attach a trust rating to whatever number it produces, and move on. Different questions deserve different models. A team trying to understand campaign-level operational performance picks differently than a team trying to credit revenue at the company level. Both are right.

The reason this debate persists isn't intellectual confusion — it's that sales is compensated on credit assignment and reads 'let's improve our measurement' as 'let's redistribute the credit.' Marketing usually cares about influence, not sourcing, but the conversation gets dragged toward sourcing because that's where the political weight lives. Until marketing and sales agree, in advance, that measurement isn't credit, the debate has no clean exit. Which is why it never ends. Bury it.

ALSO DEAD / Cookie-only data sources

A measurement strategy that depended on third-party cookies as its primary signal was always going to break, and it did. Products built on that foundation are finished. The replacement isn't 'no measurement' — it's measurement that doesn't depend on a single fragile signal.

ALSO DEAD / First-generation credit-assignment products

The early-2010s tools that treated attribution as a credit-allocation game — pick a model, divide the pie, argue about the splits — solved the wrong problem. The work they did is no longer where the value lives, and the products that haven't evolved past it are appropriately on the way out.

ALSO DEAD / 'There's one right metric'

Sourced pipeline vs. influenced revenue vs. attributable bookings — pick the one that fits the decision you're making. Insisting on a single number that does all jobs equally well is how we ended up with numbers that do no jobs well.

Four real deaths. None of them are attribution. They're things that lived inside attribution and are appropriately ending. The discipline they served is more important than ever.
02
What's permanent

Every marketer needs
data to get better.

This is the need underneath all the noise. It doesn't go away when a post goes viral. It doesn't go away when a methodology gets attacked. It's structural to the work of doing marketing in a company that cares whether the work is paying off. Operationalized, it's three questions.

QUESTION 01 / THE SCORECARD

Are we getting better?

Is the marketing engine actually improving quarter over quarter, or are we just running it harder? A real answer requires comparable periods, complete data, and the discipline to define 'better' before measuring it.

Most teams cannot honestly answer this. The CMO asks it weekly. The CFO asks it quarterly. The answer is usually a confident shrug.

QUESTION 02 / THE COMPASS

What should we do more of?

Where should the next dollar go, the next campaign aim, the next quarter's plan focus? The narrow version of the question — 'more of what we've already done' — is the honest one, because it's the only version your data can actually answer.

Most teams answer this by intuition, vendor recommendation, or whoever's loudest in the planning meeting. None of those are the data.

QUESTION 03 / THE TRUST RATING

How much should we trust either answer?

Every number this software produces should come with a confidence signal. Data completeness, sample size, period comparability, signal-to-noise. The user should know which answers to bet a quarter on and which to treat as directional.

Almost no product in this category does this. Every chart looks equally authoritative. That's the actual trust problem.
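The trust-rating inputs named above (data completeness, sample size, period comparability) can be sketched as a small scoring function. Every weight, threshold, and label here is a hypothetical illustration, not any shipping product's logic:

```python
# A minimal sketch of a composite trust rating. The weights,
# thresholds, and labels are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class TrustInputs:
    completeness: float       # fraction of expected touch data present, 0-1
    sample_size: int          # number of deals behind the metric
    comparable_periods: bool  # same definition of 'better' across periods


def trust_rating(t: TrustInputs, min_sample: int = 30) -> str:
    """Collapse the signals into a label a reader can act on."""
    score = t.completeness
    score *= min(t.sample_size / min_sample, 1.0)  # penalize thin samples
    if not t.comparable_periods:
        score *= 0.5  # period mismatch halves confidence (arbitrary factor)
    if score >= 0.8:
        return "bet-a-quarter"
    if score >= 0.5:
        return "directional"
    return "anecdote"
```

The point isn't the arithmetic; it's that every chart could carry a label like this instead of looking equally authoritative.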

Scorecard. Compass. Trust rating. Three questions, in a loop, forever. Everything in this category is either in service of one of them or it's furniture.
03
The framework

The Progression.

Three stages of operating maturity, plus a parallel discipline. Each stage answers all three questions, at increasing fidelity. The map of where teams are now and where they're going — which is the thing the discourse has been refusing to draw.

1
WALK
STAGE 01
2
JOG
STAGE 02
3
RUN
STAGE 03
L
THE LAB
PARALLEL

Read carefully — The Lab is not Stage 4. It's a different discipline that runs alongside — different inputs, different questions, different cadence. Treating it as the destination is the discourse's biggest distortion.

01
WALK

Cross-channel campaign visibility.

You stop defaulting to first-touch or last-touch alone. You see which campaigns touched the deals that closed, across channels, in one place. The basics — done well.

SCORECARD

Touched-pipeline counts, period over period. A real baseline for the first time.

COMPASS

Which channels and campaigns are showing up on closed-won. Coarse, but directional.

TRUST RATING

Mostly a data-completeness check. Are pipelines clean, are touches landing, is anything missing.

HOW TO TELL YOU'RE HERE

Your team can answer 'which campaigns touched this closed deal' in under five minutes, in one place, without exporting to a spreadsheet.

COMMON FAILURE

Treating Stage 1 visibility as the answer instead of the floor. Teams stall here for years and call it 'attribution.'

02
JOG

Velocity, trend, and segment analysis.

You move from 'what happened' to 'what's changing.' Deals moving faster when certain campaigns touch them. Segments accelerating or stalling. The conversation shifts from touch counts to motion.

SCORECARD

Velocity, conversion rate by stage, and segment-level trend lines that survive period comparison.

COMPASS

Which segments are accelerating, which are stalling, and which campaigns correlate with the change.

TRUST RATING

Sample-size and period-comparability checks. 'Is this trend real or is it three deals?'
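The "is this trend real or is it three deals" check can be sketched as a simple gate: require a minimum sample in both periods and require the shift to clear typical noise. The minimum-sample value and noise rule are illustrative assumptions, not a substitute for a proper significance test:

```python
# Hedged sketch of a 'is this trend real' gate for a velocity comparison.
# min_n and the spread rule are illustrative, not statistically rigorous.
from statistics import mean, stdev


def trend_is_real(prev: list[float], curr: list[float], min_n: int = 10) -> bool:
    """prev/curr: days-to-close for deals in each period."""
    if len(prev) < min_n or len(curr) < min_n:
        return False  # three deals is an anecdote, not a trend
    diff = mean(prev) - mean(curr)
    spread = (stdev(prev) + stdev(curr)) / 2
    return abs(diff) > spread  # the shift must clear typical noise
```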

HOW TO TELL YOU'RE HERE

Planning meetings reference movement, not just totals. People say 'enterprise is accelerating' before they say 'enterprise has more pipeline.'

COMMON FAILURE

Confusing correlation with causation. 'Velocity went up when we ran the webinar series' is a story, not a cause.

03
RUN

Pattern-matched recommendations.

You compare current campaign activity against fingerprints of what's worked before. The system tells a story about what to do next, grounded in your actual historical data. Spend decisions reference recommendations, not just opinions.

SCORECARD

Engine-level health: are campaigns matching the patterns of past winners, or drifting toward past losers.

COMPASS

Concrete recommendations: 'this campaign matches the fingerprint of the Q2 enterprise winner; double the budget.'

TRUST RATING

Confidence scores per recommendation, based on similarity to past patterns and signal-to-noise in current data.
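"Similarity to past patterns" can be sketched with something as plain as cosine similarity over a small campaign feature vector. The feature choices, the winner fingerprints, and the 0.9 threshold below are all hypothetical assumptions for illustration:

```python
# Illustrative fingerprint matching: score a live campaign against past
# winners by cosine similarity. Feature vectors and threshold are assumed.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def recommend(current: list[float], past_winners: dict[str, list[float]],
              threshold: float = 0.9):
    """Return (best_match_name, similarity), or (None, similarity)
    if no past winner resembles the current campaign closely enough."""
    name, fp = max(past_winners.items(), key=lambda kv: cosine(current, kv[1]))
    sim = cosine(current, fp)
    return (name, sim) if sim >= threshold else (None, sim)
```

A vector might hold touches-per-deal, velocity delta, and enterprise share; the returned similarity is the raw material for a per-recommendation confidence score.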

HOW TO TELL YOU'RE HERE

Spend decisions explicitly reference recommendations. The QBR slide shows 'system says X, we did Y, here's the gap.'

COMMON FAILURE

Treating recommendations as decisions. The system suggests; humans still pick. The team that forgets that gets bitten.

L
THE LAB

Causal inference and portfolio modeling.

Bayesian marketing mix modeling. Geo holdouts. Incrementality testing at scale. Not a higher rung — a different discipline entirely. It answers a different question, on a different cadence, with different inputs.

DIFFERENT INPUTS

Aggregated spend by channel and geo, macro indicators, controlled holdouts. Not the touch-level data of Stages 1–3.

DIFFERENT EXPERTISE

Bayesian statisticians, econometricians, experimental designers. Not the same skill set as the operating team.

DIFFERENT CADENCE

Quarterly or annual portfolio recalibration. Not the weekly operating loop where Stages 1–3 live.

Some teams running well at Stage 3 will commission Lab work occasionally. Some will never need to. Treating the Lab as the destination everyone should reach is the discourse's biggest distortion — and it's what makes the 'attribution is dead, MMM is the future' argument feel revolutionary when it's actually skipping the entire middle.

Most working marketers belong at Stage 2 or Stage 3. The discourse is either pretending they're at Stage 1 (and broken) or pretending they should be in the Lab (and aren't). Both are wrong.
04
An honest engagement

When someone says
'attribution is dead,'
what do they mean?

Before agreeing or disagreeing, the honest thing is to ask. The answers are revealing — and so is the fact that there are so many of them.

Do you mean people don't trust the data?

Yes — we agree. That's been a real problem for years. Incomplete data, broken pipelines, dashboards that can't be inspected. If that's what's dead, we're nodding along.

Do you mean teams aren't aligned on how to measure?

Also true. Most companies don't have a working definition of what 'good' looks like. Marketing and sales argue about the model rather than the work. That's exhausting, and worth calling out.

Do you mean the tooling? Which tools?

The first-generation credit-assignment platforms? Sure, those are exhausted. The modern multi-touch-with-pattern-recognition stack? Very much alive. Important difference.

Do you mean a methodology? Which one?

Last-touch is finished. Linear attribution was never great. Multi-touch with success-pattern fingerprinting is the most useful it's ever been. These are very different things, and the word 'attribution' doesn't help us tell them apart.

Do you mean credit assignment as a concept?

That's a more interesting argument — but credit assignment is one mechanic inside the discipline, not the discipline itself. The discipline survives without it. The mechanic is what needs to evolve.

WHAT WE KEEP FINDING
We've asked these clarifying questions a lot. The word 'attribution' means fifteen different things to fifteen different people. That's the problem. Before anyone can productively debate whether attribution is dead, we have to agree on what we're debating. The current vocabulary doesn't let us do that.

Which is why renaming this category isn't a vanity project — it's a precondition for having any of these conversations productively. We're going to propose some candidates later, and we want your help picking one.

05
Why this matters

A bad name
is usually harmless.
Not this one.

Most misnamed things still work. The acronym CRM is bad in many ways, and customer relationship management happens fine despite the name. Attribution is one of the rare cases where the bad name is actively damaging the work. Three specific harms, observable in the wild.

HARM 01

It enables the circular argument.

The word does double duty as both a mechanic (credit assignment) and a category (the cross-channel revenue measurement discipline). That ambiguity is what lets a respected voice declare the category dead while recommending its actual work. Fix the name and the argument collapses, because the speaker has to pick which thing they mean. Almost every 'attribution is dead' essay survives only on this equivocation.

HARM 02

It miscalibrates what teams ask for.

A marketer who buys 'attribution software' is signaling, through the word itself, that they want credit assignment. The RFP gets written around credit-assignment criteria. The procurement conversation happens in credit-assignment language. The success metrics get set against credit-assignment outputs. Then the software gets used wrong, fails to deliver operational improvement, and the team blames the category. The name shaped the failure before the contract was signed.

HARM 03

It points at the wrong question.

'Attribution' focuses attention on credit — who deserves it, how to assign it, what the right model is. The right questions, as the framework shows, are whether the engine is getting better and what to do more of. Every time a marketer reaches for 'attribution data,' the word is subtly redirecting them from operational improvement toward credit allocation. The vocabulary is shaping the work, and the work it's shaping isn't the work that matters.

This isn't a marketing inconvenience. It's a vocabulary actively producing worse decisions across the industry. Which is why renaming it is not a vanity exercise.
06
What numbers actually matter

Every metric is
either a scorecard
or a compass.

Sourced pipeline. Influenced revenue. Marketing-attributable bookings. CAC. Pipeline-to-spend ratio. The list grows every quarter, and most of it is either a scorecard metric (telling you whether you're getting better) or a compass metric (telling you what to do next). The harm comes from using one for the other.

Sourced pipeline is a fine scorecard at the engine level. It's a terrible compass at the campaign level. Influenced revenue is a useful compass for which segments to lean into. It's an awful scorecard for whether the marketing engine is improving. CAC is a portfolio metric, not a campaign metric, and most teams use it the wrong way.

The framework forces the question: what stage are you operating from, and is this metric a scorecard or a compass for that stage. The metrics page works through every common one and labels it.

07
The funeral, and the replacement

So what do we
actually call it?

By this point in the page, the case is made: the work is alive, the need is permanent, the framework exists, the discourse is muddled, and the name is causing real harm. Time to replace it. Revenue intelligence beat 'sales analytics' because it described the outcome. This category needs the same move. Below are the candidates we keep coming back to. Vote. Argue. Suggest a better one.

01
Marketing Performance Analytics

Honest. Boring. Describes the work. The performance frame matches what teams actually need: a scorecard for whether the engine is improving and a compass for where to point next.

02
Marketing Intelligence

Borrows the move that worked for "revenue intelligence." Carries the right connotation: pattern recognition, signal extraction, recommendation. Risk: too generic without a qualifier.

03
Revenue Marketing Intelligence

Most precise of the bunch. Names the input (marketing), the outcome (revenue), and the discipline (intelligence). Mouthful. Probably too long for everyday use.

04
Campaign Intelligence

Narrower than the others — focuses on the unit of work most teams actually plan around. Strong inside operational conversations, weaker at the executive layer.

05
Go-to-Market Analytics

Bigger tent — includes sales motion, not just marketing. Useful for orgs where the line between marketing and sales has blurred. Risk: too broad to mean anything specific.


08
Push back

Write in the
margin.

Disagree with a stage definition. Argue the Lab really is a fourth rung. Defend last-touch. Tell us we missed a harm. This page improves when you push.


LENA R., DEMAND GEN LEAD

The three-questions framing is what I've been trying to articulate for two years. Scorecard, compass, trust rating. Every vendor pitch I sit through pretends to do all three and actually does maybe one and a half. The trust rating piece is the missing one.

ANONYMOUS, B2B SAAS

I buy the relocation of Matt's argument, but be careful: not all of the 'attribution is dead' crowd is making the same argument. Some really do mean 'measurement is impossible now,' which is a different and worse claim. Worth distinguishing.

MARCUS T., AGENCY

Where does small-scale incrementality fit? Lift studies on individual channels, not full geo holdouts? Feels like it bridges Stage 3 and the Lab. Probably worth a footnote on the framework.

Returning to where we started

The only thing dead about attribution is the name.

The work is alive. The need is permanent. The category is progressing. The framework exists. The metrics matter. The trust problem is solvable. And the word — the single four-syllable noun at the center of all of it — is the part that needs to go.

— THE RAMPMETRICS TEAM · UPDATED IN PUBLIC

Field Notes — a working publication.