Source Intelligence · 5 min read · 2026-02-16

How to Build a Lead Vendor Scorecard for Your Personal Injury Firm

A lead vendor scorecard grades every PI vendor on cost per case, conversion rate, rejection rate, and case quality. Learn how to build one and use it in monthly reviews.


A lead vendor scorecard is a structured tool for evaluating every vendor in your marketing portfolio against a consistent set of performance metrics — and translating that evaluation into a grade you can act on. When built correctly, it replaces gut instinct with data, makes budget conversations objective, and gives you a repeatable system for monthly vendor reviews.

This guide walks through the complete process: which metrics to include, how to weight them, and how to use the scorecard to drive actual decisions.

Related guide: See our complete guide to evaluating PI lead vendors — the 7 metrics that define vendor quality and how to build a vendor scorecard.

Why Most Vendor Evaluations Fail Before They Start

The typical PI firm vendor review is informal. Someone pulls invoices, glances at lead volume, and makes a judgment call based on whether the vendor “feels” like it's working. That approach has two problems.

First, it evaluates vendors in isolation. Vendor A's performance this month is compared to Vendor A's performance last month — not to Vendor B, C, and D on the same terms. You can't optimize a portfolio by looking at one position at a time.

Second, it evaluates on the wrong metric. Cost per lead is easy to find on an invoice. Cost per case — the only number that tells you whether the spend produced actual value — requires connecting your invoice data to your intake and case management data. Most informal reviews never make that connection.

A scorecard fixes both problems by design.

Step 1: Choose Your Scorecard Metrics

A good scorecard includes five to seven metrics. More than that creates noise. Fewer than five misses important signal. Here are the metrics that matter most for PI vendor evaluation, and what each one reveals:

Cost Per Signed Case

Your primary metric. Divide total spend by signed cases attributed to that vendor in the measurement window. This is the number that collapses lead volume, CPL, and conversion rate into a single figure you can compare across vendors. Every other metric on the scorecard adds nuance to this one.

Lead-to-Case Conversion Rate

Signed cases divided by total leads received, expressed as a percentage. A vendor sending 100 leads that produce 12 signed cases (12% conversion) is fundamentally different from a vendor sending 100 leads that produce 4 signed cases (4% conversion), even if their cost per lead is identical. This metric reveals lead quality at the most actionable level.

Rejection Rate

Rejected or declined leads divided by total leads received. A rejection rate above 20–25% is a yellow flag. Above 35% is a serious problem that usually indicates the vendor is sourcing leads outside your agreed case criteria, outside your geographic focus, or from channels that attract low-quality claimants.

Conversion Trend (3-Month Direction)

Is this vendor's conversion rate moving up, staying flat, or declining? A vendor with a 9% conversion rate and a declining trend needs a different response than a vendor with the same rate and an improving trend. Trend data is what separates a short-term dip from a structural problem.

Case Severity Distribution

What percentage of signed cases from this vendor are high-severity (catastrophic, surgical, significant soft tissue) versus low-severity (minor soft tissue, disputed liability)? If you can pull case type or severity data from your case management system, include it. Vendors that consistently deliver low-severity cases will look better on a cost-per-case metric than they should — because the cases they produce settle at much lower values. Case-level analytics make this severity data easy to track by vendor without manual lookups.

Cost Per Lead (Contextual Only)

Include CPL as reference data, not as a scored metric. It matters for understanding the economics of a vendor's model, but it shouldn't drive the grade. CPL is an input. Cost per case is the output.
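As a concrete illustration, the counted metrics above reduce to four raw numbers per vendor. This is a minimal sketch, not any particular platform's API; the `vendor_metrics` function and its field names are illustrative assumptions.

```python
def vendor_metrics(spend, leads, rejected, signed):
    """Compute the core scorecard metrics for one vendor over one window.

    spend    -- total dollars paid to the vendor in the window
    leads    -- total leads received
    rejected -- leads rejected or declined at intake
    signed   -- signed cases attributed to the vendor
    """
    return {
        # Cost per signed case: the primary metric (None if no cases signed)
        "cost_per_case": spend / signed if signed else None,
        # Signed cases divided by total leads received
        "conversion_rate": signed / leads,
        # Rejected leads divided by total leads received
        "rejection_rate": rejected / leads,
        # CPL is reference data only -- never a scored metric
        "cost_per_lead": spend / leads,
    }

# Example: $15,000 spend, 100 leads, 18 rejected, 12 signed
m = vendor_metrics(15000, 100, 18, 12)
# cost per case $1,250; conversion 12%; rejection 18% (inside the yellow-flag band)
```

Note that the 18% rejection rate sits just under the 20–25% yellow-flag threshold, so this hypothetical vendor would pass that check while still warranting a look at the trend.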

Step 2: Assign Weights to Each Metric

Not all metrics deserve equal influence on the final grade. A simple weighting model for PI vendor scorecards:

  • Cost per signed case: 35%
  • Lead-to-case conversion rate: 25%
  • Rejection rate: 20%
  • Conversion trend: 15%
  • Case severity distribution: 5% (increase this if you have reliable severity data)

These weights assume that financial efficiency (cost per case) is the primary performance indicator, with conversion quality and funnel health as secondary indicators.

If your firm places a higher premium on case quality — for example, you only take catastrophic cases — you should shift weight toward case severity distribution and reduce the weight on raw cost per case.

Step 3: Build the Scoring Scale

Each metric needs a 1–5 scoring scale. Anchor the scale to your firm's data so that a 3 represents “at the firm average” — not an arbitrary industry benchmark.

For cost per signed case, define your firm's blended average first. Then score each vendor:

  • 5: More than 25% below the firm-average cost per case
  • 4: 10–25% below firm average
  • 3: Within 10% of firm average (either direction)
  • 2: 10–35% above firm average
  • 1: More than 35% above firm average

Apply the same relative-to-average logic to conversion rate. For rejection rate, since lower is better, a score of 5 means a rejection rate below 10%, while a score of 1 means above 35%.
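That relative-to-average scale can be sketched in code. The band boundaries come straight from the list above; the helper name `score_vs_average` and the `lower_is_better` flag are illustrative assumptions, and exact boundary values (e.g. exactly 10% below average) are assigned to the adjacent band of your choice.

```python
def score_vs_average(value, firm_avg, lower_is_better=True):
    """Map a metric to the 1-5 scale anchored at the firm average.

    Bands: >25% better = 5, 10-25% better = 4, within 10% = 3,
    10-35% worse = 2, >35% worse = 1.
    """
    # Fractional deviation from the firm average (positive = above average)
    delta = (value - firm_avg) / firm_avg
    if not lower_is_better:
        # For conversion rate, above average is good, so flip the sign
        delta = -delta
    if delta < -0.25:
        return 5
    if delta < -0.10:
        return 4
    if delta <= 0.10:
        return 3
    if delta <= 0.35:
        return 2
    return 1

# Firm-average cost per case of $1,250; a vendor at $900 is 28% below average
# so it scores 5, while a vendor at $1,800 is 44% above average and scores 1.
```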

Step 4: Calculate the Weighted Score

Multiply each metric score by its weight, then sum the results. A vendor that scores 4 on cost per case (35% weight), 3 on conversion (25% weight), 4 on rejection rate (20%), 3 on trend (15%), and 3 on severity (5%) earns a weighted score of 3.55 out of 5, which rounds to 3.6.

Convert that weighted score to a letter grade for easy communication:

  • A: 4.0–5.0
  • B: 3.0–3.9
  • C: 2.0–2.9
  • D: 1.0–1.9
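The weighted sum and the grade brackets above can be sketched together. The weights are the ones from Step 2; the dictionary keys and function names are illustrative assumptions.

```python
# Weights from Step 2 (must sum to 1.0)
WEIGHTS = {
    "cost_per_case": 0.35,
    "conversion_rate": 0.25,
    "rejection_rate": 0.20,
    "trend": 0.15,
    "severity": 0.05,
}

def weighted_score(scores):
    """Sum of (metric score x metric weight) across all scorecard metrics."""
    return sum(scores[metric] * weight for metric, weight in WEIGHTS.items())

def letter_grade(score):
    """Convert a 1-5 weighted score to the A-D letter grade."""
    if score >= 4.0:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    return "D"

# The worked example: 4 on cost per case, 3 on conversion, 4 on rejection
# rate, 3 on trend, 3 on severity -> weighted score 3.55, grade B
s = weighted_score({"cost_per_case": 4, "conversion_rate": 3,
                    "rejection_rate": 4, "trend": 3, "severity": 3})
```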

Step 5: Attach a Budget Decision Rule to Each Grade

A scorecard without a decision protocol is just a report. The final step is defining what each grade requires you to do at the next budget cycle.

  • A vendors: Eligible for budget increase of up to 20%. No additional review required until next monthly cycle.
  • B vendors: Maintain current budget. Monitor for movement and re-score next month.
  • C vendors: Budget freeze. Schedule a vendor conversation within two weeks to discuss performance data. Re-score after 60 days on flat budget.
  • D vendors: Budget reduction of 25–50%. Vendor given a defined 60-day improvement window. If the score doesn't improve to C or above, end the contract.

The decision rule removes the politics from vendor management. You're not rewarding or punishing based on the length of a relationship or how likable a vendor rep is. You're following a protocol grounded in performance data.

How Often to Run the Scorecard

Monthly is the right cadence for most firms. This gives you enough data volume to make statistically meaningful comparisons while catching trend changes quickly enough to act on them.

Use a rolling 90-day window rather than a calendar-month snapshot. A 90-day window smooths out short-term noise — an unusually slow intake week, a holiday-related lead dip — without hiding genuine performance changes. Update the window forward by one month each time you run the scorecard.
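One way to implement the rolling window, sketched with Python's standard datetime tools (the `in_window` helper name and the sample dates are illustrative):

```python
from datetime import date, timedelta

def in_window(lead_date, run_date, days=90):
    """True if a lead falls inside the rolling window ending on run_date."""
    return run_date - timedelta(days=days) <= lead_date <= run_date

# Running the scorecard on 2026-02-01 covers leads back to 2025-11-03
leads = [date(2025, 10, 15), date(2025, 12, 20), date(2026, 1, 28)]
scored = [d for d in leads if in_window(d, date(2026, 2, 1))]
# keeps the December and January leads; the October lead falls outside
```

Each monthly run simply passes a new `run_date`, which advances the window forward by one month as described above.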

Vendor Scorecard Grade Decision Rules

  • A — Outperforming (4.0–5.0): Eligible for 20% budget increase
  • B — On Track (3.0–3.9): Maintain current budget
  • C — Below Threshold (2.0–2.9): Budget freeze, vendor conversation in 2 weeks
  • D — Underperforming (1.0–1.9): 25–50% budget reduction, 60-day window

Common Mistakes to Avoid

A few patterns show up consistently in firms that build scorecards but don't see the results they expected:

  • Using vendor-reported data instead of your own: Your intake system and case management platform are the authoritative sources. Vendor-provided “quality scores” or “verified lead” counts belong in the reference section, not the graded metrics.
  • Grading on too short a window: A 30-day window produces too much noise. You'll be making budget decisions based on a few weeks of data that may not reflect the vendor's actual performance profile.
  • Adding too many metrics: More than seven metrics dilutes the scoring and makes it harder to identify the root cause when a vendor grades poorly. Keep it focused.
  • Not following the decision rules: The most common failure mode is building a scorecard, using it for a few months, and then reverting to gut instinct when the data says something you don't want to hear about a preferred vendor. Trust the system.

Building Toward Automation

A manually maintained scorecard is a significant improvement over informal vendor evaluation. It's also work — probably two to three hours per month to pull and reconcile the data. That's a reasonable investment for a firm managing four to six vendors.

As your vendor portfolio grows, the manual maintenance cost grows with it. A revenue intelligence platform automates the data collection and calculates the scorecard metrics in real time. The scorecard logic stays the same — the platform just eliminates the hours of data assembly that the manual approach requires.

Either way, the most important investment is the habit: reviewing every vendor on the same terms, every month, and making budget decisions based on what the data says. That discipline is worth building regardless of what tool you use to maintain it.

Related guide: See our complete guide to lead source tracking for law firms — the 4-level attribution chain, 8 data points, and 5-step tracking system every PI firm needs.

Related guide: For the complete category guide, see our definitive guide to Revenue Intelligence for Personal Injury Law Firms — the four intelligence layers, the maturity model, and the 90-day path from spreadsheets to a connected revenue engine.

See it in action

Discover how RevenueScale tracks cost per case from click to settlement.

Book a Demo
