Every PI marketing director has felt it: the moment you realize you're staring at six vendor dashboards, three spreadsheets, and a pile of conflicting numbers — and you still can't answer the simple question of which marketing sources are actually working.
Performance scoring solves that problem by collapsing multiple data points into a single, comparable number. Instead of juggling cost per lead, conversion rate, case severity, and settlement value independently, a performance score weights and combines them into one composite grade that tells you: how well is this lead, campaign, or channel actually performing?
What a Performance Score Actually Is
A performance score is a weighted composite metric — typically on a 0–100 scale — that aggregates the factors that matter most for PI marketing ROI. Think of it like a credit score for your marketing sources. No single data point tells the full story, but when you combine them intelligently, the result is a reliable signal you can act on.
The key word is “weighted.” Not every factor contributes equally. Cost per signed case matters more than contact rate. Settlement value matters more than raw lead volume. A well-built scoring model reflects those priorities, so a source that delivers fewer but higher-value cases scores higher than one that floods your intake team with low-quality contacts.
The Six Factors That Feed a Performance Score
At the foundation, six measurable inputs determine how a lead, campaign, or channel gets graded. Here's what each one captures and why it matters.
1. Cost Per Signed Case
The single most important factor. This measures what you actually paid to acquire a case that your firm signed — not just a lead, not just a contact, but a real client. A source producing signed cases at $1,800 each scores significantly higher than one at $4,200, all else being equal.
2. Conversion Rate (Lead to Signed Case)
What percentage of leads from this source become signed cases? A 6% conversion rate means your intake team spends less time chasing dead ends. A 1.5% rate means 98.5% of leads go nowhere — and your intake staff feels it every day.
3. Case Severity Distribution
Not all PI cases are equal. A source that consistently delivers moderate-to-severe injury cases is more valuable than one skewed toward soft tissue claims with $8,000 average settlements. Severity distribution directly affects your revenue per case.
4. Contact Rate
Can your intake team actually reach these leads? A vendor might deliver 200 leads per month, but if your team can only make contact with 40% of them, the effective lead volume is 80. Contact rate is often the first signal of lead quality — or lack of it.
5. Settlement Value Per Case
The downstream revenue metric. Two sources might both deliver cases at $2,500 cost per case, but if one averages $45,000 settlements and the other averages $120,000, they're not remotely equivalent. Settlement value closes the loop on true ROI.
6. Speed to Sign
How quickly do leads from this source convert? Faster speed to sign means shorter cash flow cycles and less intake labor per case. A source where leads sign within 3 days scores higher than one where the average is 14 days.
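The six factors above can be sketched as a weighted composite. This is a minimal illustration, not a production model: the weights, normalization ranges, and field names are all assumptions a real firm would calibrate against its own data.

```python
def normalize(value, worst, best):
    """Map a raw metric onto 0-1, where `best` scores 1.0 and `worst` scores 0.0.

    Works whether higher is better (conversion rate) or lower is better
    (cost per signed case, days to sign) -- just swap worst/best.
    """
    lo, hi = min(worst, best), max(worst, best)
    clamped = max(lo, min(hi, value))
    return (clamped - worst) / (best - worst)

# Hypothetical weights reflecting the priorities above: cost per signed
# case and settlement value dominate; contact rate matters least.
WEIGHTS = {
    "cost_per_signed_case": 0.30,
    "conversion_rate": 0.20,
    "severity": 0.15,
    "contact_rate": 0.05,
    "settlement_value": 0.20,
    "speed_to_sign": 0.10,
}

def performance_score(metrics):
    """Collapse the six factors into a single 0-100 grade."""
    normalized = {
        # Illustrative ranges: $5,000/case scores 0, $1,000/case scores 1, etc.
        "cost_per_signed_case": normalize(metrics["cost_per_signed_case"], worst=5000, best=1000),
        "conversion_rate": normalize(metrics["conversion_rate"], worst=0.0, best=0.08),
        "severity": metrics["severity"],  # already 0-1: share of moderate-to-severe cases
        "contact_rate": normalize(metrics["contact_rate"], worst=0.3, best=0.9),
        "settlement_value": normalize(metrics["settlement_value"], worst=10_000, best=120_000),
        "speed_to_sign": normalize(metrics["speed_to_sign_days"], worst=14, best=3),
    }
    return round(100 * sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS))

# A source resembling the examples above: $1,800 per signed case,
# 6% conversion, signs in about 5 days.
source = {
    "cost_per_signed_case": 1800,
    "conversion_rate": 0.06,
    "severity": 0.55,
    "contact_rate": 0.72,
    "settlement_value": 45_000,
    "speed_to_sign_days": 5,
}
print(performance_score(source))
```

The design choice that matters is the normalization step: every factor is forced onto the same 0–1 scale before weighting, so a dollar figure and a percentage can be combined without one silently dominating the other.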
How Scores Differ: Leads vs. Campaigns vs. Channels
The same scoring framework applies at three distinct levels, but each level answers a different question.
Lead-Level Scores
A lead score grades an individual lead based on characteristics known at intake: source quality history, case type indicators, contact responsiveness, and geographic signals. Lead scores answer: should my intake team prioritize this lead right now?
A lead scoring 82 from a source with strong historical conversion gets called first. A lead scoring 34 from a source with 1.2% conversion history gets queued — not ignored, but triaged appropriately.
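The triage logic described here reduces to a sort. A minimal sketch, with hypothetical lead records (the field names and scores are illustrative, not from any specific intake system):

```python
# Score-driven call prioritization at intake: highest composite score first.
leads = [
    {"id": "lead-1042", "score": 82, "source": "Google Ads | MVA"},
    {"id": "lead-1043", "score": 34, "source": "Vendor X | TV"},
    {"id": "lead-1044", "score": 61, "source": "LSA | Car Accident"},
]

# Low scorers stay in the queue -- triaged, not ignored.
call_queue = sorted(leads, key=lambda lead: lead["score"], reverse=True)
print([lead["id"] for lead in call_queue])
```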
Campaign-Level Scores
Campaign scores aggregate performance across all leads from a specific campaign — such as “Google Ads | Car Accident | Dallas” or “Vendor X | TV Leads | Q1.” Campaign scores answer: is this specific campaign worth continuing, scaling, or cutting?
A campaign scoring 71 is performing well but may have room for optimization. A campaign scoring 28 is actively losing money and needs immediate review.
Channel-Level Scores
Channel scores roll up all campaigns within a category — Google Ads as a whole, all TV vendors combined, or your entire LSA portfolio. Channel scores answer: where should I allocate my next dollar of marketing budget?
When your Google Ads channel scores 78 and your mass tort vendor channel scores 41, the budget conversation becomes straightforward.
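One way to roll campaign scores up into channel scores is a spend-weighted average, so a $40,000/month campaign moves the channel grade more than a $15,000 one. A sketch under that assumption; the campaign names, scores, and spends are illustrative:

```python
from collections import defaultdict

campaigns = [
    # (channel, campaign name, campaign score, monthly spend)
    ("Google Ads", "Google Ads | Car Accident | Dallas", 78, 40_000),
    ("Google Ads", "Google Ads | MVA | Houston", 82, 25_000),
    ("Mass Tort Vendors", "Vendor X | TV Leads | Q1", 38, 30_000),
    ("Mass Tort Vendors", "Vendor Y | Radio | Q1", 45, 15_000),
]

def channel_scores(rows):
    """Spend-weighted average of campaign scores within each channel."""
    totals = defaultdict(lambda: [0.0, 0.0])  # channel -> [score * spend, spend]
    for channel, _name, score, spend in rows:
        totals[channel][0] += score * spend
        totals[channel][1] += spend
    return {ch: round(weighted / spend) for ch, (weighted, spend) in totals.items()}

print(channel_scores(campaigns))
```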
Example scores at each level: a top lead score of 87 (Google Ads | MVA | Houston), a campaign score of 71 (Vendor A | TV Leads | Q1), and a channel score of 44 (Mass Tort Vendors, all combined).
Why a Single Score Beats Multiple Metrics
The objection we hear most often: “I already track these metrics individually. Why do I need a composite score?”
The answer is decision speed. When you're managing $300,000 per month across eight vendors and four channels, you don't have time to cross-reference six metrics for each source every week. A composite score gives you the headline. If the headline looks wrong, you dig into the components. But 80% of the time, the score tells you what you need to know.
The second reason is objectivity. Without a score, budget decisions default to whoever argues loudest or has the best vendor relationship. With a score, the data speaks first — and the conversation starts from a shared set of facts.
| | Lead Score | Campaign Score | Channel Score |
|---|---|---|---|
| What It Grades | Individual lead | Specific campaign | Entire channel |
| Primary User | Intake team | Marketing director | CMO / partners |
| Decision It Drives | Call priority | Scale or cut campaign | Budget allocation |
| Update Frequency | Real-time at intake | Weekly | Monthly / quarterly |
| Key Input | Source history + lead signals | Aggregate conversion + cost | All campaigns combined |
What Changes When You Have Scores
Firms that adopt performance scoring report three consistent shifts in how they operate:
- Vendor conversations become data-driven. Instead of “we feel like your leads aren't as good,” it becomes “your campaign scored 38 last quarter — here's why, and here's what needs to change.”
- Budget reallocation happens faster. When scores drop below threshold, the conversation about moving budget starts immediately — not three months later when someone finally reviews the spreadsheet.
- Intake teams work smarter. Lead-level scores mean your best intake staff spend their time on the leads most likely to sign. That alone can drive a 15–25% improvement in conversion rate.
Performance scoring doesn't replace judgment. It gives your judgment better inputs. When you can see at a glance that one channel scores 81 and another scores 33, you're not guessing where to put your next dollar — you're deciding with data that actually reflects outcomes.
Getting Started
If you're currently managing PI marketing spend without composite scores, the gap between where you are and where you need to be is smaller than you think. The data already exists — it's just scattered across your CRM, your case management system, and your vendor invoices.
The first step is connecting those data sources so scores can be calculated automatically. The second step is establishing thresholds your team agrees on: what score means “scale,” what means “maintain,” and what means “cut.” From there, the scores do the heavy lifting — and your weekly vendor reviews go from two hours of spreadsheet archaeology to fifteen minutes of score-informed decisions.
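The threshold agreement can be as simple as a three-band policy. A sketch, assuming placeholder cutoffs of 60 and 40 (your team would set its own):

```python
def recommendation(score):
    """Translate a composite score into an agreed budget action."""
    if score >= 60:
        return "scale"     # performing: move more budget here
    if score >= 40:
        return "maintain"  # acceptable: hold and optimize
    return "cut"           # losing money: review or drop

for s in (81, 44, 33):
    print(s, recommendation(s))
```

The value isn't in the code; it's that the bands are agreed on in advance, so a weekly review is a lookup rather than a debate.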
Related guide: See our complete guide to AI for personal injury law firms — what works now, what's hype, the data foundation you need, and the 4-phase adoption roadmap.
