Lead Scoring That Doesn't Lie
Most lead scoring systems produce confident numbers that mean nothing. A lead scores 85 out of 100, gets routed to sales, and the rep discovers it's a student researching a term paper. Meanwhile, a CFO who visited your pricing page twice and downloaded a case study sits at 32 because she never opened a marketing email.
The problem isn't lead scoring as a concept. It's that most implementations score activity instead of intent. Here's how to build a model that actually predicts who will buy.
Why Traditional Lead Scoring Fails
Traditional point-based scoring assigns values to activities: opened email (+5), visited website (+3), downloaded whitepaper (+10), attended webinar (+15). The theory is simple — more engagement equals higher purchase intent.
The reality is different. The most engaged leads are often:
- Researchers, not buyers. Students, journalists, and competitors visit frequently and download everything. They score high and never convert.
- Low-level employees. Individual contributors consume content voraciously but have no budget authority. Your sales team wastes time qualifying them out.
- Existing customers. Current users who browse your blog and open your emails inflate lead scores without representing new revenue potential.
Meanwhile, actual decision-makers — the VP who Googled your product, visited your pricing page, and left — barely register in your scoring model because they don't engage with your nurture sequences. They're in buying mode, not learning mode.
The fundamental error: traditional scoring treats all activity as equal and assumes more activity means more intent. It doesn't.
Behavioral Scoring That Works
Not all actions are equal. Some behaviors strongly predict purchase intent. Others are noise. The art of lead scoring is separating the two.
High-Intent Behaviors (20-40 points each)
| Behavior | Why It Matters | Points |
|---|---|---|
| Pricing page visit | Actively evaluating cost — buying signal | 40 |
| Demo/contact form start | Direct purchase intent, even if not submitted | 35 |
| Case study download | Evaluating social proof — late-stage consideration | 30 |
| Comparison page visit | Actively comparing vendors — decision phase | 30 |
| Return visit within 48 hours | Urgency signal — they're actively researching | 25 |
| Integration/API docs visit | Technical evaluation — involves buying committee | 20 |
Medium-Intent Behaviors (5-15 points each)
| Behavior | Why It Matters | Points |
|---|---|---|
| Blog article read (2+ articles) | Topic interest, not necessarily buying | 10 |
| Email click (non-nurture) | Active engagement with specific content | 10 |
| Social media engagement | Brand awareness, rarely buying signal | 5 |
Low-Intent or Misleading Behaviors (0-3 points)
| Behavior | Why It's Misleading | Points |
|---|---|---|
| Email open | Automated preview panes inflate this metric | 2 |
| Homepage visit | Could be anyone for any reason | 1 |
| Generic whitepaper download | Often gated content farmers, not buyers | 3 |
The key insight: weight the scoring toward bottom-of-funnel actions. A single pricing page visit is worth more than 20 email opens. Most scoring models get this backward because marketing teams build them, and marketing teams optimize for engagement metrics, not revenue.
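The three tiers above can be sketched as a simple scoring function. This is a minimal illustration, not a production model: the event names are an invented schema, and the cap on repeated low-intent actions is an assumption added so that a pile of email opens can never outweigh a single bottom-of-funnel action.

```python
from collections import Counter

# Point values from the three tables above; event names are illustrative.
BEHAVIOR_POINTS = {
    "pricing_page_visit": 40,
    "demo_form_start": 35,
    "case_study_download": 30,
    "comparison_page_visit": 30,
    "return_visit_48h": 25,
    "api_docs_visit": 20,
    "blog_read": 10,
    "email_click": 10,
    "social_engagement": 5,
    "whitepaper_download": 3,
    "email_open": 2,
    "homepage_visit": 1,
}

LOW_INTENT_CAP = 5  # assumption: low-intent actions stop scoring after 5 repeats

def behavioral_score(events):
    """Sum tiered points for a lead's events; unknown events score zero."""
    total = 0
    for event, n in Counter(events).items():
        points = BEHAVIOR_POINTS.get(event, 0)
        if points <= 3:  # low-intent tier from the third table
            n = min(n, LOW_INTENT_CAP)
        total += points * n
    return total

print(behavioral_score(["pricing_page_visit"]))  # 40
print(behavioral_score(["email_open"] * 20))     # 10 — capped, stays below one pricing visit
```

The cap is the design choice that enforces the bottom-of-funnel weighting: without it, twenty email opens would equal one pricing page visit.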
Firmographic Scoring
Behavioral scoring tells you how interested a lead is. Firmographic scoring tells you whether they can actually buy. Both matter.
Company-Level Scoring
| Signal | Good Fit | Points | Poor Fit | Points |
|---|---|---|---|---|
| Company size | 50-500 employees | +20 | < 10 or > 10,000 | -10 |
| Industry | Your target verticals | +15 | Outside your ICP | -10 |
| Annual revenue | $5M-$100M | +15 | Unknown or < $1M | 0 |
| Technology stack | Uses complementary tools | +10 | Uses competitor | -5 |
| Funding stage | Series A-C | +10 | N/A | 0 |
Contact-Level Scoring
| Signal | Good Fit | Points | Poor Fit | Points |
|---|---|---|---|---|
| Title seniority | VP, Director, Head, C-level (decision maker) | +25 | Intern, Student, Analyst | -15 |
| Title contains Manager | Influencer | +15 | — | — |
| Department | Matches your buyer | +10 | Unrelated | -5 |
| Email domain | Business domain | +5 | Gmail/Yahoo | -10 |
Firmographic fit should gate behavioral scoring, not merely add to it: a poor-fit contact stays out of sales routing no matter how active they are. A director at a target-size company who visits your pricing page (40 behavioral + 25 title + 20 company) is an 85-point lead worth immediate outreach. A student at a university who downloads every whitepaper (3 + 3 + 3 + 3 behavioral, -15 title, -10 domain) nets out negative and should be excluded from sales routing entirely.
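A sketch of how the gate and the combined score might fit together. The threshold of 70 is an assumption for illustration; the point values reproduce the two worked examples above.

```python
def should_route_to_sales(behavioral, title_points, company_points, threshold=70):
    """Gate on firmographic fit first, then compare the combined score."""
    firmographic = title_points + company_points
    if firmographic < 0:  # poor-fit contacts are excluded no matter how active
        return False
    return behavioral + firmographic >= threshold

# The director from the example: 40 + 25 + 20 = 85, routed to sales.
print(should_route_to_sales(40, 25, 20))    # True
# The student: 12 behavioral, -15 title, -10 domain — gated out.
print(should_route_to_sales(12, -15, -10))  # False
```

The early return is what makes firmographics a gate rather than just another addend: a high enough pile of behavioral points can never rescue a negative-fit contact.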
Building and Calibrating Your Model
Step 1: Analyze Closed-Won Deals
Pull every closed-won deal from the last 12 months. For each, map:
- The first touchpoint
- The actions the contact took before becoming an opportunity
- The firmographic profile of the company and contact
- Time from first touch to closed-won
Look for patterns. If 70% of your closed deals involved a pricing page visit, that behavior deserves heavy weighting. If zero closed deals came from webinar attendees, stop weighting webinar attendance.
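The pattern analysis can be as simple as counting behavior prevalence across closed-won deals. The export format below is hypothetical; the point is the per-behavior share, which tells you what deserves heavy weighting.

```python
from collections import Counter

# Hypothetical closed-won export: each deal lists the behaviors the
# contact took before becoming an opportunity.
closed_won = [
    {"deal": "A", "behaviors": {"pricing_page_visit", "case_study_download"}},
    {"deal": "B", "behaviors": {"pricing_page_visit", "blog_read"}},
    {"deal": "C", "behaviors": {"pricing_page_visit", "webinar_attended"}},
    {"deal": "D", "behaviors": {"blog_read"}},
]

def behavior_prevalence(deals):
    """Share of closed-won deals in which each behavior appeared."""
    counts = Counter()
    for deal in deals:
        counts.update(deal["behaviors"])
    return {b: n / len(deals) for b, n in counts.items()}

prevalence = behavior_prevalence(closed_won)
print(prevalence["pricing_page_visit"])  # 0.75 — weight it heavily
print(prevalence["webinar_attended"])    # 0.25 — weak evidence
```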
Step 2: Start with Manual Scoring
Don't automate immediately. Have your sales team manually score the next 50 inbound leads on a 1-10 scale based on their gut feeling. Compare those scores against outcomes 90 days later. The reps who close deals have calibrated intuition — capture it before you automate.
Step 3: Build, Test, and Iterate
Implement the scoring model in your CRM or marketing automation tool. Run it in parallel with your existing lead routing for 30 days. Compare:
- Do high-scoring leads convert at a higher rate?
- What percentage of converted leads were scored above threshold?
- What percentage of high-scoring leads were qualified by sales?
If your model scores 80% of converted leads above threshold and fewer than 30% of high-scoring leads are disqualified by sales, the model is working. If not, adjust weights.
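The two checks can be computed directly from the 30-day parallel run. The tuple layout below is an assumption about how you export the data; the 80% / 30% cutoffs are the ones stated above.

```python
def parallel_run_report(leads, threshold):
    """leads: (score, converted, disqualified_by_sales) tuples from the parallel run."""
    high = [lead for lead in leads if lead[0] >= threshold]
    converted = [lead for lead in leads if lead[1]]
    coverage = sum(1 for s, _, _ in converted if s >= threshold) / len(converted)
    disqualified = sum(1 for _, _, d in high if d) / len(high)
    return coverage, disqualified

# Hypothetical sample from a 30-day run:
leads = [
    (90, True, False), (85, True, False), (80, False, True),
    (75, True, False), (72, True, False), (40, False, False),
    (35, False, False),
]
coverage, disqualified = parallel_run_report(leads, threshold=70)
# Working model: >= 80% of conversions above threshold, < 30% disqualified.
print(coverage >= 0.80 and disqualified < 0.30)  # True
```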
Step 4: Implement Score Decay
Lead scores should decrease over time. A pricing page visit from yesterday is a strong buying signal. The same visit from six months ago means nothing. Implement decay:
- Behavioral scores decay by 50% after 30 days of inactivity
- Behavioral scores reset after 90 days of inactivity
- Firmographic scores don't decay (company attributes change slowly)
Without decay, your CRM accumulates zombie leads — contacts who scored high months ago and now sit in your pipeline inflating metrics without representing real opportunity.
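The decay rules above translate into a few lines. This is a sketch with step decay at the stated cutoffs; a real implementation might decay continuously, but the shape is the same.

```python
def current_score(behavioral, firmographic, days_inactive):
    """Apply decay to behavioral points only; firmographic points never decay."""
    if days_inactive >= 90:
        behavioral = 0          # full reset after 90 days of inactivity
    elif days_inactive >= 30:
        behavioral *= 0.5       # half value after 30 days of inactivity
    return behavioral + firmographic

print(current_score(40, 25, days_inactive=0))    # 65 — fresh pricing page visit
print(current_score(40, 25, days_inactive=45))   # 45.0 — signal fading
print(current_score(40, 25, days_inactive=120))  # 25 — only firmographics remain
```

Run on a schedule, this is what clears zombie leads: six months of silence leaves only the firmographic floor, not an inflated behavioral score.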
MQL vs SQL Thresholds
The threshold where a lead becomes "Marketing Qualified" (routed to sales) should be calibrated against your sales team's capacity and feedback.
Too low: Sales reps drown in unqualified leads and stop trusting the scores. They go back to cherry-picking from the CRM manually.
Too high: Good leads languish in marketing nurture when they should be talking to a human. By the time they cross the threshold, they've already chosen a competitor.
The calibration process: start with the threshold that routes roughly 20% of leads to sales. Track acceptance rate (percentage sales accepts as qualified). Target an 80% acceptance rate. If it's lower, raise the threshold. If it's higher, lower it.
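Both halves of the calibration process can be sketched numerically. The step size of 5 points is an assumption; the 20% initial routing share and 80% acceptance target are the figures above.

```python
def initial_threshold(scores, route_fraction=0.20):
    """Pick the score that routes roughly the top 20% of leads to sales."""
    ranked = sorted(scores, reverse=True)
    k = max(1, int(len(ranked) * route_fraction))
    return ranked[k - 1]

def adjust_threshold(threshold, acceptance_rate, target=0.80, step=5):
    """Nudge the threshold toward the sales-acceptance target."""
    if acceptance_rate < target:
        return threshold + step  # too many rejects: route fewer, better leads
    if acceptance_rate > target:
        return threshold - step  # sales accepts nearly everything: open the gate
    return threshold

print(initial_threshold(list(range(1, 101))))      # 81 — routes the top 20 of 100
print(adjust_threshold(81, acceptance_rate=0.60))  # 86 — raise until sales trusts it
```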
FAQ
Should we use predictive lead scoring (AI/ML) instead of rule-based? Only if you have 1,000+ historical closed deals to train on. Below that, your dataset is too small for ML models to beat well-calibrated rules. Most B2B companies don't have enough data for predictive scoring to outperform a thoughtfully built rule-based model.
How do we handle anonymous website visitors? You can't score them by firmographic data, but you can score their behavior. Use reverse IP lookup tools (Clearbit Reveal, Leadfeeder) to identify companies. Score the company even if you don't know the individual. When they eventually identify themselves through a form fill, merge the anonymous behavioral score.
What's the relationship between lead scoring and sales automation? Lead scoring should trigger automation, not replace human judgment. High-scoring leads get routed to reps for personalized outreach. Medium-scoring leads enter automated nurture sequences. Low-scoring leads get deprioritized. The score determines the speed and type of response, not whether to respond.
How often should we recalibrate the model? Quarterly. Pull the last quarter's conversion data, check whether high scores predicted conversions, and adjust weights. Also recalibrate whenever your product, pricing, or target audience changes significantly.
Lead scoring that works requires ongoing attention, not just initial setup. Build it, calibrate it against real outcomes, and iterate. The payoff is a sales team that spends time on prospects who will actually buy. Empirium can help you design and implement scoring models integrated with your CRM and marketing stack.