Cost per lead is the most commonly used metric in PI vendor management. It's on every invoice, easy to compare, and gives you a clean number to put in a budget presentation. It also tells you almost nothing about whether your marketing spend is actually producing revenue.
Vendor grading is built around a different premise: that a lead vendor should be evaluated on what you get out of the funnel, not just what goes in. Here's what vendor grading actually measures — and why the gap between what CPL shows and what grading reveals is where most firms are losing the most money.
Related guide: See our definitive guide to cost per case for PI firms — calculation formula, benchmarks by firm size and lead source, and step-by-step tracking methodology.
Related guide: See our complete guide to evaluating PI lead vendors — the 7 metrics that define vendor quality and how to build a vendor scorecard.
The Problem With Cost Per Lead as a Primary Metric
Cost per lead measures the price of a transaction at the top of the funnel. It tells you: for every dollar you spent, how many leads did this vendor deliver? Nothing more.
That would be a useful number if every lead had an equal probability of becoming a signed case. But they don't — and the variation in conversion rates across vendors is often more significant than the variation in their cost per lead.
Consider two vendors:
- Vendor A: $75 CPL, 12% conversion rate → $625 cost per case
- Vendor B: $45 CPL, 3% conversion rate → $1,500 cost per case
At a glance, Vendor B looks like the better deal. Their invoice is smaller. Every individual lead costs less. But Vendor A is producing signed cases at less than half the cost of Vendor B.
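The arithmetic behind those two bullet points is a single division: cost per lead over lead-to-signed-case conversion rate. A minimal sketch, using the hypothetical vendor figures from the example:

```python
def cost_per_case(cpl: float, conversion_rate: float) -> float:
    """Cost per signed case = cost per lead / lead-to-signed-case rate."""
    return cpl / conversion_rate

# Hypothetical vendors from the example above
vendor_a = cost_per_case(cpl=75, conversion_rate=0.12)  # ≈ $625 per signed case
vendor_b = cost_per_case(cpl=45, conversion_rate=0.03)  # ≈ $1,500 per signed case
```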
This scenario isn't a theoretical edge case — it's one of the most common patterns in PI vendor portfolios. Firms that manage on CPL systematically underfund their best vendors and overfund their worst.
What Vendor Grading Actually Measures
A well-designed vendor grading system measures performance at every stage of the funnel from first contact to signed case — and, where data is available, through to settlement. Here's what each layer of the grade reflects.
Funnel Efficiency
The most fundamental measure in vendor grading is how efficiently a vendor converts marketing spend into signed cases. This requires connecting three data points that typically live in different systems: the marketing spend (in your invoices), the lead log (in your intake CRM), and the signed case count (in your case management system).
Cost per signed case is the output of that connection. It is the single most important number in vendor grading because it reflects both price and conversion performance simultaneously.
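A minimal sketch of that three-system join, assuming each system can export one per-vendor figure for the period (all vendor names and numbers here are illustrative):

```python
# Illustrative monthly exports from the three systems
spend  = {"Vendor A": 9000, "Vendor B": 9000}   # invoices
leads  = {"Vendor A": 120,  "Vendor B": 200}    # intake CRM lead log
signed = {"Vendor A": 14,   "Vendor B": 6}      # case management system

metrics = {
    vendor: {
        "cpl": spend[vendor] / leads[vendor],
        "conversion": signed[vendor] / leads[vendor],
        "cost_per_case": spend[vendor] / signed[vendor],
    }
    for vendor in spend
}
# Same invoice total, very different output:
# Vendor A ≈ $643 per signed case, Vendor B = $1,500 per signed case
```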
Lead Quality at Intake
Vendor grading also measures what happens to leads when they arrive at intake. Two metrics matter here: the intake contact rate (how many leads were actually reachable and engaged) and the rejection rate (how many leads were disqualified because they didn't meet your case criteria).
A vendor with a high rejection rate is delivering leads that don't fit your firm — wrong geography, wrong case type, disputed liability, or prior representation. These leads consume intake capacity without producing revenue. Grading surfaces this cost, which an invoice never would.
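Both intake metrics are simple ratios over the delivered-lead count. A sketch, assuming your intake CRM logs both outcomes per lead (expressing rejection rate as a share of delivered leads is one common convention; some firms compute it against contacted leads instead):

```python
def intake_quality(delivered: int, contacted: int, rejected: int) -> dict:
    """Intake-stage quality ratios, each as a share of delivered leads."""
    return {
        "contact_rate": contacted / delivered,
        "rejection_rate": rejected / delivered,
    }

# Hypothetical month: 200 leads delivered, 120 reached, 60 disqualified
intake_quality(delivered=200, contacted=120, rejected=60)
# → {'contact_rate': 0.6, 'rejection_rate': 0.3}
```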
Case Quality by Source
The grade goes deeper than whether leads become cases. It also asks: what kind of cases? A vendor that consistently produces soft-tissue cases while another produces surgical and catastrophic injury cases represents a materially different revenue contribution — even if their cost per signed case is identical.
Case severity distribution by vendor is the metric that captures this. If your case management system categorizes cases by type, injury severity, or anticipated settlement range, that data belongs in your vendor grade. It's the difference between knowing what a vendor costs and knowing what a vendor is worth.
Trend Direction
Vendor grading isn't just a point-in-time snapshot — it captures directional momentum. A vendor whose conversion rate has improved for three consecutive months is fundamentally different from a vendor with the same current conversion rate whose performance is declining. Trend data is what makes grading predictive rather than just descriptive.
This is where vendor grading earns its keep most clearly: catching deterioration early, before it becomes a budget problem. A vendor whose conversion rate drops from 10% to 8% to 6% over three months will not trigger any alarm on a single-month review. A grading system that tracks trend direction catches it in month two.
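One simple way to operationalize that early warning is a consecutive-decline counter on each vendor's monthly conversion rate. A minimal sketch (the alert threshold and history length are up to the firm):

```python
def declining_months(conversion_history: list[float]) -> int:
    """Count consecutive month-over-month declines ending at the latest month."""
    streak = 0
    for prev, curr in zip(conversion_history, conversion_history[1:]):
        streak = streak + 1 if curr < prev else 0
    return streak

# Hypothetical vendor sliding 10% → 8% → 6%
declining_months([0.10, 0.08, 0.06])  # → 2 (flagged after the second drop)
```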
Consistency
Consistency is a dimension of quality that single-period metrics miss. A vendor that delivers 10 signed cases one month and zero the next has a very different operational value than a vendor that delivers seven cases every month. The total may be similar, but the predictability is not.
For firms trying to manage intake staffing, cash flow, and growth planning, vendor consistency matters. It belongs in a complete grading model — often reflected as variance in monthly case output relative to the vendor's average.
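One common way to score that variance is the coefficient of variation: standard deviation of monthly case output divided by the vendor's average. A sketch with hypothetical monthly counts:

```python
import statistics

def consistency_score(monthly_cases: list[int]) -> float:
    """Coefficient of variation of monthly signed cases; lower is steadier."""
    return statistics.pstdev(monthly_cases) / statistics.mean(monthly_cases)

steady  = [7, 7, 7, 7]     # predictable output every month
erratic = [10, 0, 12, 2]   # similar total, unpredictable
consistency_score(steady)   # → 0.0
consistency_score(erratic)  # ≈ 0.85
```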
The Downstream Dimension: Settlement Performance
The most complete form of vendor grading connects source performance all the way to settlement. Which vendors produce cases that settle at above-average values? Which produce cases that drag through litigation and settle low?
This is the hardest dimension to measure because PI settlements arrive 6 to 18 months after the marketing spend. By the time you have settlement data for a cohort of cases, the vendors that produced them may have changed significantly.
But for firms with mature data and a full-cycle attribution system, the settlement dimension is where the most important vendor quality signals live. A vendor with an average cost per case of $900 but cases that average $12,000 at settlement is delivering dramatically better ROI than a vendor with an $800 cost per case and settlements averaging $7,000. Neither of those numbers appears anywhere near a CPL metric.
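That comparison reduces to settlement value produced per acquisition dollar. A sketch using the hypothetical figures above (gross average settlement is a proxy here; weighting by the firm's contingency fee share would refine it):

```python
def settlement_per_dollar(avg_settlement: float, cost_per_case: float) -> float:
    """Average settlement value generated per dollar of acquisition cost."""
    return avg_settlement / cost_per_case

settlement_per_dollar(12_000, 900)  # ≈ 13.3x for the "pricier" vendor
settlement_per_dollar(7_000, 800)   # = 8.75x for the "cheaper" one
```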
Why Grading Is More Useful Than Ranking
Some firms try to manage vendors by ranking them — first, second, third in order of performance — and allocating budget proportionally. The problem with ranking is that it treats vendor selection as zero-sum. If you have five vendors and all five are performing above your firm average, a ranking approach suggests you should cut the fifth-place vendor. A grading approach recognizes that a vendor performing 5% above your average is a B+ and deserves to keep its budget.
Grading also makes performance expectations clear to vendors themselves. A vendor who knows they're graded against an objective scorecard has a different relationship with your performance conversations than a vendor who's told they're “ranked fourth out of five.” One is a standard to meet. The other is a competition — and vendors don't respond well to being ranked against each other in your portfolio.
What Grading Looks Like in Practice
Across a typical PI firm portfolio of five to seven vendors, vendor grading usually reveals a distribution that looks something like this:
- One or two A vendors significantly outperforming the firm average
- Two or three B vendors performing near the average
- One or two C or D vendors consuming budget at above-average cost per case
The firms that act on this distribution — moving budget from the C and D vendors to the A vendors — typically see cost per case improvements of 15–25% without increasing total spend. That's the financial case for grading over CPL-based management.
The firms that don't act on it usually have the same reason: the C and D vendors have long-standing relationships, or they're the cheapest on CPL, or someone on the leadership team has a strong preference that doesn't map to the data. Those are human reasons. They're not performance reasons. And over 12 months, the difference between acting on the data and defaulting to relationships compounds into a substantial cost-per-case gap.
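The 15–25% figure above can be sanity-checked with a blended cost-per-case calculation. A sketch with two hypothetical vendors and a fixed $20,000 monthly budget:

```python
def blended_cost_per_case(budget_by_vendor: dict[str, float],
                          cost_per_case: dict[str, float]) -> float:
    """Total spend divided by total signed cases across the portfolio."""
    cases = sum(spend / cost_per_case[v] for v, spend in budget_by_vendor.items())
    return sum(budget_by_vendor.values()) / cases

cpc = {"A-grade": 625, "D-grade": 1500}   # hypothetical per-vendor figures
before = blended_cost_per_case({"A-grade": 10_000, "D-grade": 10_000}, cpc)
after  = blended_cost_per_case({"A-grade": 16_000, "D-grade": 4_000}, cpc)
improvement = 1 - after / before          # ≈ 0.20, i.e. ~20% on the same total spend
```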
