Flow Architecture
A letter from the MQL

Dear MQL haters,

“I was just trying to help you prioritize your leads. Why does everyone keep saying I'm dead?”

Everyone's bashing the marketing qualified lead. But the concept was never the problem — the implementation got gutted by an organizational dynamic that nobody talks about.

The popular take

MQLs are dead. Or so they say.

Open LinkedIn on any given day and you'll find someone declaring that MQLs are a relic. A gumball machine. A vanity metric that floods sales with junk and gives marketing something to celebrate that doesn't actually matter. The criticism has become so mainstream that bashing MQLs is practically a content strategy.

And some of that criticism has a legitimate kernel. If you're chasing leads that aren't a good fit — people at the wrong companies, with no buying authority, who downloaded a white paper on a whim — then yes, your MQL process is generating noise, not signal. Nobody wants that.

But here's what gets lost in the bashing: the people who built lead scoring cared. They had meetings. They debated which behaviors should carry more weight. They thought carefully about target personas, account tiers, and threshold criteria. Building a good scoring model is detailed, thoughtful work — and the teams that did it well created something genuinely useful. Reducing all of that to “the gumball machine” is dismissive of real effort and real expertise.

It's also worth asking who benefits from declaring MQLs dead. In many cases, the loudest voices are people selling an alternative — an account-based platform, a new methodology, a different software category. They need MQLs to be the villain so their product can be the hero. That's not objective analysis. That's competitive positioning dressed up as thought leadership.

So if the concept of scoring and prioritizing inbound leads isn't inherently broken — and it isn't — then what actually went wrong?

The original design

What an MQL was supposed to be

The marketing qualified lead was designed to do something simple and useful: put a post-it note on an inbound lead that says “this one is worth your attention.” That's it. It's a prioritization signal. A way for marketing to say to sales: out of all the leads that came in this week, these are the ones you should look at first.

When done right, that prioritization is built on three dimensions:

Behavior Fit

What the person did. A demo request carries more weight than a white paper download. A pricing page visit means something different than a blog read.

Person Fit

Who the person is. Their job title, their role, their seniority. A VP of Marketing and an intern both fill out forms — but they're not the same signal.

Account Fit

Where the person works. Do they work at a company that fits your ICP? A lead from a target account is fundamentally different from a lead at a company you'd never sell to.

When all three dimensions are in play, an MQL is a genuinely useful signal. It means: this person did something meaningful, they're the right kind of person, and they work at a company we care about. That's worth acting on. That's what scoring was designed to produce.
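As a rough illustration, the three-dimensional model can be sketched as a simple additive score. Everything here is hypothetical — the point values, the field names, and the threshold are illustrative stand-ins, not a recommended configuration:

```python
# Hypothetical three-dimensional MQL scoring sketch.
# All point values and the threshold are illustrative, not a recommendation.

BEHAVIOR_POINTS = {
    "demo_request": 40,          # high-intent action
    "pricing_page_visit": 25,
    "white_paper_download": 10,
    "blog_read": 2,              # weak signal on its own
}
PERSON_POINTS = {"vp": 30, "director": 20, "manager": 10, "intern": 0}
ACCOUNT_POINTS = {"tier_a": 30, "tier_b": 15, "tier_c": 5, "no_fit": 0}

MQL_THRESHOLD = 70  # illustrative cutoff

def score_lead(behaviors, seniority, account_tier):
    """Combine behavior, person, and account fit into a single score."""
    behavior = sum(BEHAVIOR_POINTS.get(b, 0) for b in behaviors)
    person = PERSON_POINTS.get(seniority, 0)
    account = ACCOUNT_POINTS.get(account_tier, 0)
    return behavior + person + account

def is_mql(behaviors, seniority, account_tier):
    """A lead qualifies only when all three dimensions add up past the bar."""
    return score_lead(behaviors, seniority, account_tier) >= MQL_THRESHOLD
```

A VP at a tier-A account who requested a demo and visited pricing clears the bar easily; an intern at a no-fit company who read a blog post does not — which is exactly the prioritization the model was built to produce.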

The intent
The MQL was never meant to be a declaration that someone is ready to buy. It was meant to be a prioritization layer — a way to surface the leads most likely to be worth a conversation, so sales doesn't have to manually evaluate every single inbound form fill. It's a filter, not a verdict. And the teams that built these models — who spent hours in rooms debating thresholds, testing criteria, refining the logic — were doing real, valuable work. The concept deserves more respect than it gets.
The power dynamic

How sales killed the scoring model

Here's what nobody talks about when they bash MQLs. The reason the scoring model got gutted isn't a marketing problem. It's an organizational power dynamic.

Sales teams are — understandably — terrified of missing opportunities. They don't want any system sitting between them and a potential deal. Even if they helped design the scoring model. Even if they agreed to the thresholds. The anxiety is always the same: what if the scoring is wrong and I miss a deal because marketing held it back?

That fear is real and it's legitimate. Nobody wants to find out after the fact that a hot lead got buried because the scoring model underweighted it. And in most organizations, sales has more political power than marketing. So when sales says “don't hold anything back” — the scoring model bends.

What happens next
First, person fit gets stripped out. “Don't filter on job title — just send me everything, I'll decide if they're the right person.” Then account fit goes. “Don't filter on company — what if there's an opportunity at a company we hadn't considered?” Now you're left with behavior only. Someone downloaded something? Send it over. Someone visited a page? Send it over. The scoring model has been reduced to a single dimension — and the bar is on the floor.
The result

A one-dimensional MQL that everyone hates

Once the scoring is stripped down to behavior only, the MQL becomes exactly what its critics describe: a firehose of undifferentiated leads. Marketing sends everything over because they've been told to. Sales gets buried in noise because there's no filtering. And then everyone complains that MQLs don't work.

How MQL scoring was designed: three-dimensional scoring — behavior, person, account — high signal.
What it actually became: behavior only, one dimension — low signal.

But here's the part that doesn't get said out loud: sales is now doing the scoring manually. Every lead that comes over, the AE or SDR opens it up and makes a judgment call — is this the right person? Is this a good company? Is this worth my time? They're doing exactly what the scoring model was designed to do, except now they're doing it one lead at a time, in their heads, with no consistency and no scale.

The scoring didn't disappear. It just moved from a system to a person. And it got worse in the process.

What was removed

Automated, consistent, three-dimensional scoring that could process every lead at scale and surface the ones most likely to convert — before a human ever touched it.

What replaced it

Manual, inconsistent, one-at-a-time evaluation by individual reps — each applying their own judgment, with their own biases, and no shared framework.

The irony

The people who broke it are the ones complaining about it

This is the part that's hard to say diplomatically, but it's true: MQLs got a bad reputation largely because the people with the most power to influence the scoring model are the same people who demanded it be gutted. Sales pushed the bar to the floor, and then complained that what came over wasn't good enough. The concept gets blamed for a failure that was imposed on it.

Meanwhile, the people writing LinkedIn posts about “the MQL gumball machine” are rarely acknowledging this dynamic. It's easier — and gets more engagement — to declare a concept dead than to explain the organizational politics that broke it. The nuance doesn't fit in a headline.

And the marketers caught in the middle? They know the scoring should be better. They know three-dimensional scoring would produce higher-quality leads. But they also know that the last time someone tried to hold leads back for better scoring, sales escalated it, and the scoring model got stripped down again. So they send everything over, take the criticism, and move on.

The cycle
Sales demands a low bar. Marketing lowers the bar. Leads get noisy. Sales complains about lead quality. Industry thought leaders declare MQLs dead. Nobody addresses the power dynamic that caused the problem in the first place. Repeat.
Part 1 takeaway

The scoring model is sound. It just needs to be reclaimed.

Scoring and prioritizing inbound leads is not a broken idea. It's a necessary one. The alternative — sending every lead to sales unscored and letting individual reps make quality judgments one at a time — is more expensive, less consistent, and harder to optimize. The path forward isn't to abandon MQLs. It's to reclaim the three-dimensional model and give it the organizational backing it needs to work.

But the MQL debate doesn't stop at scoring. There's a broader industry conversation happening — and it's worth understanding what's actually useful in it and what's just noise.

The bigger debate

MQL vs. ICP: a false choice

The industry has framed this as an either/or. You're either an MQL-driven organization (focused on individual leads) or an ICP-driven organization (focused on target accounts). Pick a side.

The ICP approach — ideal customer profile — says: define your target accounts first. Use data to identify the 200, 500, or 1,000 companies that are the best fit for your product. Tier them into enterprise, commercial, and SMB. Filter by company size, geography, tech stack, industry. Then focus your marketing and sales effort on those accounts, not on random inbound leads from companies you'd never sell to.

It's a good idea. It's also not new. This is database marketing — a practice that's been around for decades. The only thing that's changed is the branding. “ICP” sounds more modern than “target account list,” but the underlying concept is the same: know who you're going after and focus your resources there.

The critics position ICP as the antidote to MQLs. Stop chasing individual leads, they say. Focus on accounts. And they're not wrong about the direction — account-level thinking is important. But framing it as a replacement for lead scoring is where the argument falls apart. You still need to know which individuals at those accounts are engaging, what they're doing, and how to prioritize them. That's scoring. That's what MQLs were designed to do.

The false choice
You don't have to choose between leads and accounts. You need both. ICP tells you which companies to focus on. MQL scoring tells you which people at those companies are showing intent. One without the other is incomplete — accounts without lead-level signals are just a list, and leads without account context are the gumball machine everyone's complaining about.
The bridge

An MQA is just a collection of MQLs

When you shift to an account-based approach, the terminology changes. Instead of a marketing qualified lead, you start talking about a marketing qualified account — an MQA. It sounds like a different concept. But when you look at what's actually underneath, it's the same building blocks.

Each MQL has three dimensions: behavior fit, person fit, and account fit. When you have multiple MQLs at the same company, the account fit circle is shared — they all work at the same place. What varies is the behavior (what each person did) and the person fit (what role each person holds). Stack those individual MQLs together and you get an MQA — a view of collective engagement at a single account.

Individual MQL — one person, three dimensions

One person scored across behavior (what they did), person (who they are), and account (where they work). Each dimension contributes to the overall score.

MQA — multiple people, shared account

Three people at the same account. Each has their own behavior and person fit. But the account fit is shared — they all work at the same company. The MQA is the sum of these individual signals, rolled up to the account level.
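The roll-up is mechanically simple. A sketch, with hypothetical lead records that are assumed to carry a pre-computed three-dimensional score:

```python
from collections import defaultdict

# Hypothetical MQL records: each lead has already been scored
# across behavior, person, and account fit.
mqls = [
    {"email": "vp@acme.example",  "company": "Acme Corp", "score": 95},
    {"email": "dg@acme.example",  "company": "Acme Corp", "score": 70},
    {"email": "ops@acme.example", "company": "Acme Corp", "score": 60},
    {"email": "eng@other.example", "company": "Other Inc", "score": 80},
]

def roll_up_to_mqas(leads):
    """An MQA is just the collection of MQLs at one account:
    group individual leads by company and sum their signals."""
    accounts = defaultdict(list)
    for lead in leads:
        accounts[lead["company"]].append(lead)
    return {
        company: {"people": len(group),
                  "total_score": sum(l["score"] for l in group)}
        for company, group in accounts.items()
    }
```

No new framework is needed: the same scored records, grouped by a key that was already part of the model (account fit), produce the account-level view.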

Same concept, higher altitude

This is why the MQL-vs-ICP debate is so frustrating. If you do MQLs right — with all three dimensions — you're already capturing the data you need to build an account-level view. The MQA isn't a competing framework. It's what happens when you aggregate well-scored MQLs at the same company. The individual scoring feeds the account-level picture.

And when you see an account where multiple people have strong behavior fit, strong person fit, and the account itself is a tier-A target — that's your qualified buying group forming. You didn't need to abandon MQLs to get there. You needed to do them well and then look at them from a higher altitude.

A legitimate gotcha: funnel math
There is one real problem with counting MQLs at the lead level: it can inflate your funnel math. If you have 10 MQLs but 3 of them are from the same company, those 3 collapse into one potential deal — you really have 8, not 10. Run that through a pipeline model — 20% hit pipeline, 20% close — and you're overestimating by a meaningful margin. This doesn't mean MQLs are broken. It means when you're doing funnel planning, you need to deduplicate at the account level. Count the MQLs for prioritization. Count the accounts for forecasting. Those are two different uses of the same data.
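The deduplication step can be sketched in a few lines. The 20% pipeline and 20% close rates follow the illustrative figures above; the company list is hypothetical:

```python
# Sketch: deduplicate MQLs at the account level before running funnel math.
# Conversion rates are the illustrative 20% / 20% from the text.

PIPELINE_RATE = 0.20
CLOSE_RATE = 0.20

def funnel_estimates(mql_companies):
    """mql_companies: one company name per MQL.
    Returns (naive, deduped): expected deals from MQL-count math
    vs. unique-account math."""
    naive = len(mql_companies) * PIPELINE_RATE * CLOSE_RATE
    deduped = len(set(mql_companies)) * PIPELINE_RATE * CLOSE_RATE
    return naive, deduped

# Five MQLs, two of them at the same company -> four real accounts.
naive, deduped = funnel_estimates(["Acme", "Acme", "Beta", "Gamma", "Delta"])
```

The same list serves both uses: its length drives prioritization workload, its set of unique companies drives the forecast.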
The insight
If MQL scoring had been implemented properly from the start — with behavior, person, and account dimensions intact — the gap between “MQL-driven” and “account-driven” would barely exist. An MQA is just MQLs viewed at the account level. A qualified buying group is just MQLs clustered by role at a target account. The building blocks were always the same. The industry just got distracted by the debate and forgot to look at how the pieces fit together.
The real evolution

From leads to accounts to buying groups

Here's where the conversation gets genuinely useful. The best idea to come out of this whole debate isn't ICP itself — that's table stakes. It's the concept of the qualified buying group.

The insight is simple but powerful: B2B decisions aren't made by individuals. They're made by groups of 3 to 5 people — sometimes more — who collectively evaluate, champion, and approve a purchase. If you're only tracking individual leads, you're seeing fragments. If you're only tracking accounts, you're seeing the container but not what's inside it. The buying group is the unit that actually matters.

The old model
Individual MQLs

Score and prioritize individual leads. Each person is evaluated in isolation. No account context, no buying group awareness. This is where the gumball machine criticism comes from.

The ICP layer
Target Accounts

Define your ideal customer profile. Tier your accounts. Focus resources on companies that fit. This adds the account dimension — but still doesn't tell you what's happening inside the account.

The real unlock
Qualified Buying Groups

Track and prioritize the buying group as a unit. See which people at a target account are engaging, what roles they represent, and whether the group collectively shows enough signal to warrant sales attention. This is where individual behavior, person fit, and account fit come together.

The qualified buying group takes the best of both worlds. It uses ICP to define which accounts matter. It uses behavior and person-fit scoring to track what's happening at the individual level. And then it rolls it up to the buying group — asking not “did one person do something?” but “is a group of decision-makers at a target account collectively showing intent?”

That's a fundamentally better question. And it's the one the industry should be organizing around — instead of arguing about whether MQLs are dead.

Qualified buying group — example
Acme Corp

Tier A · Enterprise · ICP match · 4 of 5 buying group roles engaged

VP Marketing — demo request, 3 webinars, pricing page (Active)
Dir. Demand Gen — case study download, 2 emails opened (Engaged)
Marketing Ops — product page visit, white paper (Engaged)
CFO — pricing page visit (Aware)
CTO — no engagement yet (Not yet)

This is what it looks like when you bring it all together. You're not chasing a single lead. You're not just looking at an account name on a list. You're seeing a buying group forming — who's engaged, what they've done, which roles are still missing — and making decisions based on the collective signal, not a single form fill.
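A qualification check over a buying group can be sketched as a set operation. The role names, status labels, and the "4 of 5 roles" bar below are hypothetical, loosely mirroring the Acme Corp example:

```python
# Hypothetical buying-group qualification check.
# Role names, statuses, and the min_roles bar are illustrative.

REQUIRED_ROLES = {"economic_buyer", "champion", "technical", "ops", "exec_sponsor"}
ENGAGED_STATUSES = {"active", "engaged", "aware"}

def buying_group_engaged(contacts, min_roles=4):
    """contacts: list of (role, status) pairs. The group qualifies when
    enough distinct required roles show any engagement signal."""
    engaged_roles = {
        role for role, status in contacts
        if role in REQUIRED_ROLES and status in ENGAGED_STATUSES
    }
    return len(engaged_roles) >= min_roles

# Mirroring the Acme example: four of five roles showing signal.
acme = [
    ("champion", "active"),        # VP Marketing
    ("exec_sponsor", "engaged"),   # Dir. Demand Gen
    ("ops", "engaged"),            # Marketing Ops
    ("economic_buyer", "aware"),   # CFO
    ("technical", "not_yet"),      # CTO
]
```

The question the function asks — are enough distinct decision-making roles showing signal? — is exactly the shift from "did one person do something?" to collective intent.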

Bringing it together

Stop debating the concept. Fix the implementation.

The MQL isn't dead. The ICP isn't a revolution. And the LinkedIn hot takes aren't helping. What's actually useful is much simpler than the debate makes it seem:

First, score your leads properly — with all three dimensions (behavior, person, account), not just behavior. That's the MQL done right. Second, define your target accounts and tier them. That's ICP — good practice, not a paradigm shift. Third, track and prioritize buying groups, not just individuals. That's the qualified buying group — the real evolution that bridges the gap between lead-level signals and account-level strategy.

None of these are competing ideas. They're layers of the same system. And the teams that treat them as complementary — instead of picking sides in an industry debate — are the ones that end up with a demand engine that actually works.

Modernizing the concept

The MQL wasn't broken. The tooling just caught up.

Whether you call it MQL, MQA, qualified buying group — the label doesn't matter. The underlying principles have always been sound: score behavior, evaluate the person, consider the account. What's changed is that you're no longer limited to what you can manually build inside a marketing automation tool.

And let's be fair — marketing automation platforms (MAPs) were actually pretty good at giving you the scaffolding for scoring. That was one of their genuine strengths. But if you've set up scoring a few times, you know the ceiling. The rules are static. The data sources are limited to what the MAP can see. The output is a number — 25, 50, 75 — that means different things to different people. It works, but it's basic.

What's happening now is that the same core principles are being executed at a level that wasn't possible before, because three things have converged:

More Data Sources

The proliferation of APIs means you can pull in signals that were never available before — product usage data, conversation intelligence from tools like Gong, website engagement, event activity, and more. The scoring inputs are richer and wider than anything a MAP could capture on its own.

AI-Powered Analysis

Instead of static scoring rules that someone manually configured, AI can analyze patterns across all of those data sources in real time — identifying which combinations of signals actually correlate with closed deals, and adjusting dynamically as patterns change.

Narrative Intelligence

Instead of a score of 25 — which means nothing to a sales rep — the system produces a human-readable explanation of why this account looks promising. A story, not a number. "Three people in the buying group are actively engaging, the VP just attended a webinar, and product usage spiked this week."
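A toy illustration of that idea — structured signals in, a sentence out. The field names are hypothetical; real systems would use richer inputs and generative models, but the contract is the same:

```python
def explain_account(signals):
    """Turn structured account signals into a plain-language summary
    instead of an opaque numeric score. All fields are hypothetical."""
    parts = []
    if signals.get("engaged_people", 0) >= 3:
        parts.append(f"{signals['engaged_people']} people in the buying group "
                     "are actively engaging")
    if signals.get("recent_exec_activity"):
        parts.append(signals["recent_exec_activity"])
    if signals.get("usage_spike"):
        parts.append("product usage spiked this week")
    return "; ".join(parts) + "." if parts else "No notable signals yet."

summary = explain_account({
    "engaged_people": 3,
    "recent_exec_activity": "the VP just attended a webinar",
    "usage_spike": True,
})
# -> "3 people in the buying group are actively engaging; the VP just
#     attended a webinar; product usage spiked this week."
```

A rep can act on that sentence without a decoder ring — which is the whole argument for narrative output over a bare number.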

This isn't a new framework replacing an old one. It's the same framework — behavior, person, account — with the execution constraints removed. You're not throwing out MQL scoring. You're doing it in a way that's no longer limited by what one person can manually configure in a MAP. More inputs, smarter analysis, and output that a sales rep can actually act on without needing to interpret a number.

The predictive scoring companies were the first wave of this — applying statistical models to lead prioritization. Good idea, early execution. What's happening now takes that further, with AI that can process a much wider set of signals in real time and explain its reasoning in plain language.

Companies doing this now
There are innovative companies already building on this premise. TrailSpark, for example, is taking the multi-dimensional scoring model, connecting it to a much wider set of data sources — product engagement, conversation data, website activity, CRM signals — and using AI to produce real-time, narrative-driven prioritization. It's the same core idea. It's just being executed at a level that wasn't possible five years ago.

The real question

The question was never “are MQLs dead?” The question is: are you scoring leads with the right dimensions, looking at them at the account and buying group level, and using the best available tools to do the analysis? If your scoring is one-dimensional, your data sources are limited, and the output is a number nobody trusts — the problem isn't the concept. It's the implementation. The principles are sound. The execution is ready for an upgrade.
