A ready-to-use lead scoring model template with demographic scoring, behavioral scoring, negative scoring, MQL thresholds, and score decay rules. Built for B2B marketing teams running HubSpot, Salesforce, Marketo, or any CRM with scoring capability.
Last updated: March 2026 · Reading time: 11 min
Lead scoring assigns numerical points to each lead based on who they are (demographic fit) and what they do (behavioral signals). When a lead’s score crosses a threshold, they’re flagged as marketing-qualified (MQL) and routed to sales. Without scoring, sales teams waste time on leads that will never buy, and hot prospects sit in a nurture queue too long.
Formally, lead scoring is a methodology that ranks prospects on a numerical scale representing each lead's perceived value to the organization, combining demographic fit with behavioral engagement data.
The numbers back this up. According to MarketingSherpa (2024), companies using lead scoring see a 77% increase in lead generation ROI compared to those that don’t. Forrester Research found that organizations with mature lead scoring programs generate 50% more sales-ready leads at 33% lower cost per lead. Yet only 21% of B2B companies have implemented lead scoring (Demand Gen Report, 2024).
The template is split into three scoring categories: demographic fit, behavioral engagement, and negative signals. Each attribute or action gets a point value. Demographic scoring comes first:
| Attribute | Criteria | Points |
|---|---|---|
| Job Title | C-level (CEO, CMO, CTO) | +15 |
| Job Title | VP / Director | +10 |
| Job Title | Manager | +5 |
| Job Title | Individual contributor | +2 |
| Company Size | 500+ employees | +10 |
| Company Size | 50-499 employees | +5 |
| Company Size | 10-49 employees | +3 |
| Industry | Target industry match | +8 |
| Industry | Adjacent industry | +3 |
| Location | Primary service region | +5 |
| Location | Secondary region | +3 |
| Annual Revenue | $10M+ ARR | +8 |
| Annual Revenue | $1M-$10M ARR | +5 |

Behavioral scoring covers what a lead does:

| Action | Points | Rationale |
|---|---|---|
| Requested a demo | +30 | Strongest buying intent signal |
| Visited pricing page | +15 | Evaluating cost = active consideration |
| Attended webinar | +20 | Invested 30-60 minutes of attention |
| Downloaded whitepaper or ebook | +10 | Willing to exchange contact info for content |
| Visited case study page | +10 | Looking for proof and validation |
| Opened 3+ emails in 7 days | +5 | Consistent engagement pattern |
| Clicked email CTA | +5 | Moving beyond passive reading |
| Visited 5+ pages in one session | +8 | Active research behavior |
| Returned to site 3+ times in 14 days | +10 | Repeat visits signal ongoing evaluation |
| Submitted contact form | +25 | Direct outreach intent |
| Watched product video (75%+) | +8 | Engaged with product-level content |
| Shared content on social | +3 | Mild advocacy signal |

Negative scoring subtracts points for disqualifying or disengagement signals:

| Signal | Points | Rationale |
|---|---|---|
| Competitor domain email | -50 | Likely researching, not buying |
| Student email (.edu) | -30 | Low purchase authority |
| Free email domain (gmail, yahoo) for B2B | -10 | May not represent a company |
| Unsubscribed from emails | -20 | Actively disengaging |
| Bounced email address | -25 | Invalid contact |
| No activity for 30 days | -5 | Cooling interest (applied via decay rules) |
| No activity for 60 days | -15 | Significant disengagement |
| Job title: intern or student | -20 | No purchasing authority |
| Country outside service area | -15 | Can’t serve this market |
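The three tables above combine into a single running total per lead. Here's a minimal sketch of that arithmetic; the point values come from the tables, while the `Lead` fields and rule keys are hypothetical names, not tied to any particular CRM:

```python
# Illustrative combination of the three scoring tables into one lead score.
# Point values mirror the tables above; rule names are made up for this sketch.
from dataclasses import dataclass, field

DEMOGRAPHIC_POINTS = {
    "title_vp_director": 10,   # Job Title: VP / Director
    "company_50_499": 5,       # Company Size: 50-499 employees
    "target_industry": 8,      # Industry: target industry match
}

BEHAVIORAL_POINTS = {
    "visited_pricing_page": 15,
    "attended_webinar": 20,
    "downloaded_whitepaper": 10,
}

NEGATIVE_POINTS = {
    "competitor_domain": -50,
    "unsubscribed": -20,
}

@dataclass
class Lead:
    attributes: list[str] = field(default_factory=list)  # demographic matches
    actions: list[str] = field(default_factory=list)     # behavioral events
    flags: list[str] = field(default_factory=list)       # negative signals

def score(lead: Lead) -> int:
    demographic = sum(DEMOGRAPHIC_POINTS.get(a, 0) for a in lead.attributes)
    behavioral = sum(BEHAVIORAL_POINTS.get(a, 0) for a in lead.actions)
    negative = sum(NEGATIVE_POINTS.get(f, 0) for f in lead.flags)
    return demographic + behavioral + negative

# A VP at a 200-person target-industry company who visited pricing and
# attended a webinar: 10 + 5 + 8 + 15 + 20 = 58 points.
lead = Lead(
    attributes=["title_vp_director", "company_50_499", "target_industry"],
    actions=["visited_pricing_page", "attended_webinar"],
)
print(score(lead))  # 58
```

In practice your CRM does this addition for you; the sketch just shows that the model is nothing more exotic than a weighted sum across the three tables.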
The template goes beyond a simple point table: it's a complete scoring system you can implement in your CRM within a day. The sections below walk through each component in turn.
Demographic scoring measures how closely a lead matches your ideal customer profile (ICP). The goal is to answer one question: does this person work at the kind of company that buys from us, in a role that has purchasing authority?
Start by analyzing your last 50 closed-won deals and look for patterns: which job titles, company sizes, industries, and regions show up again and again? Those recurring traits become your demographic criteria and point weights.
A critical principle: demographic scores should represent a maximum of about 40% of your total MQL threshold. If your MQL threshold is 100 points, a perfect demographic score should max out around 40. Why? Because a perfect-fit company that never engages with your content isn’t ready to buy. Behavioral signals are what indicate timing.
HubSpot’s 2024 Sales Report found that 67% of sales reps say lead quality is their biggest challenge. Proper demographic scoring directly addresses this by filtering out leads that look active but don’t fit your ICP.
Behavioral scoring measures what a lead does, which indicates how ready they are to buy. The highest-value actions are the ones closest to a purchase decision. Visiting a pricing page is worth far more than opening an email.
Organize behavioral signals into three tiers:
Tier 1: High-intent actions (15-30 points each). Demo requests, pricing page visits, contact form submissions, trial signups. These signals mean the lead is actively evaluating your product. A single Tier 1 action should represent 15-30% of your MQL threshold on its own.
Tier 2: Engagement actions (5-20 points each). Webinar attendance, whitepaper downloads, case study visits, repeated site visits. These show a lead is researching and educating themselves. Multiple Tier 2 actions accumulate to push leads toward MQL.
Tier 3: Awareness actions (1-5 points each). Email opens, blog visits, social follows. These are early signals. They shouldn’t move the needle significantly on their own, but they add context to the overall score.
One important rule: behavioral scores should have recency weighting. A pricing page visit yesterday is worth more than one 45 days ago. This is where score decay comes in (covered in the next section). Marketo’s benchmark data (2023) shows that leads who take a high-intent action within the last 7 days convert at 3.2x the rate of those whose last action was 30+ days ago.
Negative scoring is the most under-used feature in lead scoring models. Without it, you’ll route competitor researchers, students, and dead leads to your sales team. Your sales reps will lose trust in the scoring system fast.
Apply negative scores in three scenarios:
Disqualification signals. Competitor email domains (-50), student emails (-30), job titles with zero purchasing authority (-20). These leads may engage heavily with your content (raising behavioral scores) but will never buy. Negative scores correct for this.
Active disengagement. Email unsubscribes (-20), spam complaints (-50), explicit “not interested” replies (-40). These are clear signals that a lead is opting out. Respect that signal in your scoring.
Data quality issues. Bounced emails (-25), invalid phone numbers (-10), incomplete form submissions (-5). These aren’t behavioral signals but practical issues that reduce a lead’s actionability.
A tip from our implementations: build a "do not route" floor. If a lead's total score drops below -20, automatically suppress them from sales notifications regardless of future positive actions. This prevents edge cases in which a competitor who signs up for a webinar (+20) and visits your pricing page (+15) gets flagged as an MQL.
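The floor logic can be sketched in a few lines. This is an illustrative model, not any CRM's API: it assumes you keep the lead's running total after each scoring event, and it treats the floor as sticky once breached:

```python
# Sketch of a sticky "do not route" floor: once a lead's running total has
# ever dropped below -20, suppress sales routing even if later positive
# actions push the score back over the MQL threshold. Names are illustrative.
DO_NOT_ROUTE_FLOOR = -20
MQL_THRESHOLD = 100

def should_route(score_history: list[int]) -> bool:
    """score_history is the lead's running total after each scoring event."""
    if min(score_history) < DO_NOT_ROUTE_FLOOR:
        return False  # floor breached at some point: permanently suppressed
    return score_history[-1] >= MQL_THRESHOLD

# Competitor: -50 on signup, then webinar (+20) and pricing visit (+15).
print(should_route([-50, -30, -15]))  # False: floor was breached
print(should_route([40, 80, 105]))    # True: clean history, over threshold
```

The key design choice is checking the historical minimum rather than only the current total, so a burst of engagement can't wash out a disqualification signal.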
The MQL threshold is the score at which a lead gets flagged for sales outreach. Set it too low and sales drowns in unqualified leads. Set it too high and hot prospects wait too long.
The recommended starting points by business type:
| Business Type | Suggested MQL Threshold | Rationale |
|---|---|---|
| B2B SaaS (self-serve, <$500/mo) | 50-65 points | Lower threshold, faster routing, product does the selling |
| B2B SaaS (sales-assisted, $500-$5K/mo) | 70-85 points | Balance between speed and qualification |
| B2B Enterprise ($5K+/mo) | 85-100 points | Higher bar, sales time is expensive |
| Professional services | 60-80 points | Relationship-driven, moderate threshold |
| Ecommerce B2B (wholesale) | 45-60 points | Transaction-focused, speed matters |
The key to getting this right: start conservative (higher threshold), then lower it based on sales feedback. It’s easier to send more leads to sales than to rebuild trust after flooding them with junk. SiriusDecisions (now Forrester) found that 90% of first-time lead scoring implementations set the threshold too low and require recalibration within 90 days.
Score decay automatically reduces a lead’s behavioral score over time if they stop engaging. Without decay, a lead who was active 6 months ago but hasn’t visited since still shows up as “hot” in your CRM. That’s a false signal.
Here’s the decay schedule we recommend:
| Inactivity Period | Decay Action | Applied To |
|---|---|---|
| 14 days inactive | No decay yet | Normal engagement gap |
| 30 days inactive | -5 points from behavioral score | Early cooling signal |
| 60 days inactive | -15 points from behavioral score | Significant disengagement |
| 90 days inactive | -25 points from behavioral score | Likely lost interest |
| 180 days inactive | Reset behavioral score to 0 | Too stale to action |
Important: decay only applies to behavioral scores, never demographic scores. A VP at a 500-person company in your target industry is still a good fit whether they engaged yesterday or three months ago. Their demographic score stays. But their urgency (behavioral score) fades with time.
In HubSpot, you can automate this with workflows that check “days since last activity” and adjust properties accordingly. In Salesforce, Pardot’s automation rules handle decay natively. The template includes implementation steps for both platforms.
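If your platform lacks native decay, the schedule above is easy to express in a scheduled job. This sketch applies the table's penalties to the behavioral score only and leaves the demographic score untouched; the thresholds mirror the table, and the function names are illustrative:

```python
# Decay applies only to the behavioral score; demographic fit never decays.
DECAY_SCHEDULE = [  # (min days inactive, penalty); None = reset to 0
    (180, None),
    (90, 25),
    (60, 15),
    (30, 5),
]

def decayed_behavioral(behavioral: int, days_inactive: int) -> int:
    for threshold, penalty in DECAY_SCHEDULE:
        if days_inactive >= threshold:
            if penalty is None:
                return 0  # 180+ days inactive: too stale to action
            return max(behavioral - penalty, 0)
    return behavioral  # under 30 days: no decay yet

def total_score(demographic: int, behavioral: int, days_inactive: int) -> int:
    return demographic + decayed_behavioral(behavioral, days_inactive)

print(total_score(demographic=35, behavioral=60, days_inactive=10))   # 95
print(total_score(demographic=35, behavioral=60, days_inactive=65))   # 80
print(total_score(demographic=35, behavioral=60, days_inactive=200))  # 35
```

Note how the 200-day case keeps its 35 demographic points: the lead remains a fit, but the urgency signal is gone.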
Lead scoring isn’t a set-it-and-forget-it system. The initial model is your hypothesis. Sales feedback is the data that proves or disproves it. Plan to recalibrate every 90 days for the first year, then quarterly after that.
Here’s the calibration process we use at ScaleGrowth.Digital:
Step 1: Pull a 90-day MQL report. List every lead that crossed the MQL threshold. Include their score at time of routing, the scoring breakdown (demographic vs. behavioral), and the outcome (closed-won, closed-lost, or still open).
Step 2: Calculate conversion rates by score band. Group MQLs into bands: 70-79, 80-89, 90-99, 100+. Compare conversion rates across bands. If 80-89 converts at 12% but 90-99 converts at 28%, your threshold might be too low.
Step 3: Interview sales reps. Ask two questions: “Which MQLs were genuinely qualified?” and “Which were a waste of time?” Pattern match the waste-of-time leads. If they all have high behavioral scores but poor demographic fit, increase the demographic weight.
Step 4: Adjust weights and thresholds. Based on the data, modify point values. If “attended webinar” leads close at a high rate, increase its score from +20 to +25. If “downloaded whitepaper” leads rarely convert, drop it from +10 to +5.
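Step 2 is the only step with real arithmetic, and it's simple enough to run outside the CRM. A minimal sketch, assuming you export each MQL's score at routing and its outcome (the sample data here is invented for illustration):

```python
# Group MQLs into score bands and compute a conversion rate per band,
# as in Step 2 of the calibration process. Sample data is made up.
from collections import defaultdict

def band(score: int) -> str:
    if score >= 100:
        return "100+"
    low = (score // 10) * 10
    return f"{low}-{low + 9}"

def conversion_by_band(mqls: list[tuple[int, bool]]) -> dict[str, float]:
    """mqls: (score at routing, closed_won?) pairs from the 90-day report."""
    totals = defaultdict(lambda: [0, 0])  # band -> [won, total]
    for score, won in mqls:
        totals[band(score)][0] += int(won)
        totals[band(score)][1] += 1
    return {b: won / total for b, (won, total) in totals.items()}

mqls = [(72, False), (75, False), (84, True), (88, False),
        (92, True), (95, True), (104, True), (110, True)]
for b, rate in conversion_by_band(mqls).items():
    print(b, f"{rate:.0%}")
```

With this invented sample, the 70-79 band converts at 0% while 90-99 converts at 100%, which is exactly the shape of evidence that would justify raising the threshold.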
Gartner’s 2024 report on lead management found that companies that calibrate their scoring models quarterly generate 28% more revenue from marketing-qualified leads than those using static models.
Most lead scoring fails not because the model is wrong but because nobody maintains it. The initial setup takes a few hours. The ongoing calibration that makes it accurate takes 2 hours per quarter. Teams skip the second part.
“The lead scoring models that actually improve sales productivity have one thing in common: a feedback loop. We build every scoring model with a quarterly calibration meeting baked into the process. Sales reviews 20-30 recent MQLs, tells us which ones were real and which were noise, and we adjust the weights. Without that loop, scoring models degrade within 6 months. The market changes, buyer behavior shifts, and static point values stop reflecting reality.”
Hardik Shah, Founder of ScaleGrowth.Digital
Three mistakes that kill lead scoring effectiveness:
Mistake 1: Only scoring behavior, ignoring fit. A student who downloads every whitepaper you publish isn’t a lead. Without demographic scoring, they look identical to a VP doing research before a purchase.
Mistake 2: No negative scoring. Competitor employees and job seekers will engage with your content. If you can’t subtract points, they’ll pollute your MQL pool and erode sales trust in the entire system.
Mistake 3: Using the same model for 12+ months. Buyer behavior shifts. New content assets change engagement patterns. A scoring model from January 2025 doesn’t reflect your March 2026 funnel. Schedule recalibration or the model will silently become inaccurate.
Get the complete scoring model in Google Sheets with demographic, behavioral, and negative scoring matrices, MQL threshold calculator, decay schedule, and sales calibration worksheet.
You need at minimum 200-300 leads per month to get meaningful data from a scoring model. Below that volume, manual qualification by sales reps is usually more effective. Once you cross 300 leads per month, scoring becomes essential because manual review doesn’t scale.
Lead scoring assigns a numerical score based on engagement (behavioral). Lead grading assigns a letter grade (A-D) based on fit (demographic). The most effective systems use both: a lead can be A1 (great fit, highly engaged), C3 (poor fit, moderately engaged), or any combination. This template includes a combined score-grade matrix.
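One way to picture the combined system is a small function that produces labels like A1 or C3. The grade thresholds below are illustrative assumptions, not values from the template itself:

```python
# Combine a demographic fit grade (A-D) with an engagement level (1-4)
# into labels like "A1" or "C3". Thresholds are illustrative assumptions.
def fit_grade(demographic: int) -> str:
    if demographic >= 30: return "A"
    if demographic >= 20: return "B"
    if demographic >= 10: return "C"
    return "D"

def engagement_level(behavioral: int) -> int:
    if behavioral >= 60: return 1  # highly engaged
    if behavioral >= 25: return 2
    if behavioral >= 5:  return 3
    return 4                       # barely engaged

def score_grade(demographic: int, behavioral: int) -> str:
    return f"{fit_grade(demographic)}{engagement_level(behavioral)}"

print(score_grade(35, 70))  # A1: great fit, highly engaged
print(score_grade(12, 30))  # C2: poor fit, moderately engaged
```

Keeping fit and engagement as separate axes is the point: an A4 lead is worth nurturing, while a D1 lead is probably noise no matter how active they are.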
Lead scoring works for ecommerce too, but the model is different. Ecommerce lead scoring focuses on purchase-intent behaviors: cart additions (+15), wishlist adds (+5), product page views (+3), and cart abandonment (+10 for follow-up targeting). Demographic data matters less because ecommerce purchases are typically self-serve. Platforms like Klaviyo and Drip have ecommerce-specific scoring built in.
Initial setup takes 4-8 hours in most CRMs. HubSpot has native scoring properties you can configure in 2-3 hours. Salesforce requires custom fields and Pardot/Marketing Cloud configuration, usually 6-8 hours. The first calibration review should happen at 30 days, then every 90 days after that.
Scoring should be automated. Manual scoring doesn't scale and introduces inconsistency. Set up scoring rules in your CRM so points are assigned automatically when triggers fire (page visit, form submission, email engagement). The only manual element should be the quarterly calibration review where you adjust point values based on sales outcomes.
ScaleGrowth.Digital builds lead scoring systems calibrated to your actual sales data. We set up the model, connect it to your CRM, and run the quarterly calibration so your scores stay accurate.