You're managing $250,000 per month across twelve campaigns. Three are clearly working. Two are clearly not. The other seven? You're not sure — and that uncertainty is where budget gets wasted.
Campaign performance scores give you a systematic way to evaluate every active campaign on the same scale, using the same criteria, so you can make scale-or-cut decisions with confidence instead of instinct. This guide walks through exactly how to use those scores in practice.
What a Campaign Performance Score Captures
A campaign score is a composite number — typically 0 to 100 — that aggregates the metrics that actually determine whether a campaign is generating profitable cases. The inputs include cost per signed case, conversion rate, contact rate, case severity mix, and settlement value data when available.
The critical difference between a campaign score and any single metric: a score accounts for trade-offs. A campaign might have a high conversion rate but terrible cost per case (because the leads are expensive). Or low cost per lead but abysmal contact rates (because the leads are junk). The composite score weighs these factors together so you see the full picture in one number.
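To make the trade-off weighting concrete, here is a minimal sketch of a composite score. The metric bounds, weights, and normalization ranges are illustrative assumptions, not a published formula; your platform or analyst will calibrate them to your own economics.

```python
# Sketch of a 0-100 composite campaign score. All weights and
# normalization bounds below are assumptions for illustration.

def normalize(value, worst, best):
    """Map a raw metric onto 0-1, where `best` earns 1.0.
    Works whether higher or lower raw values are better."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

# Assumed weights: cost per signed case matters most.
WEIGHTS = {
    "cost_per_case": 0.35,
    "conversion_rate": 0.25,
    "contact_rate": 0.20,
    "severity_mix": 0.10,
    "settlement_value": 0.10,
}

def campaign_score(metrics):
    normalized = {
        # Assumed bounds: $12,000/case scores 0, $3,000/case scores 1
        "cost_per_case": normalize(metrics["cost_per_case"], 12_000, 3_000),
        "conversion_rate": normalize(metrics["conversion_rate"], 0.0, 0.06),
        "contact_rate": normalize(metrics["contact_rate"], 0.30, 0.80),
        "severity_mix": metrics["severity_mix"],          # already 0-1
        "settlement_value": metrics["settlement_value"],  # already 0-1
    }
    return round(100 * sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS))

print(campaign_score({
    "cost_per_case": 5_500, "conversion_rate": 0.042,
    "contact_rate": 0.68, "severity_mix": 0.6, "settlement_value": 0.5,
}))
```

Note how a campaign with an expensive cost per case can still score well if conversion and contact rates are strong, and vice versa; that offsetting is the whole point of a composite.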
The Four-Zone Framework
Not every score requires the same response. We use a four-zone framework that maps score ranges to specific actions. These thresholds aren't arbitrary — they're calibrated to PI marketing economics where a 15–20% swing in cost per case can mean the difference between profitable growth and burning cash.
Zone 1: Scale (Score 75+)
Campaigns scoring 75 or above are producing signed cases efficiently, from quality leads, with acceptable (or better) case values. These are your winners. The question isn't whether to keep them — it's how much more budget they can absorb before efficiency degrades.
Action: Increase budget in 15–20% increments monthly. Monitor score after each increase. If the score drops below 70 after scaling, you've likely hit the efficiency ceiling for that campaign and should hold at the previous budget level.
Zone 2: Maintain (Score 50–74)
These campaigns are profitable but not exceptional. They're covering their cost and contributing cases, but they're not the ones you should be pouring incremental budget into. Hold steady and look for optimization opportunities.
Action: Keep current budget. Review targeting, creative, or vendor terms quarterly. If the score trends upward over two consecutive months, consider moving to Scale zone. If it trends downward, move to the Investigate process.
Zone 3: Investigate (Score 30–49)
Campaigns in this range are underperforming but not yet at the point where cutting is the obvious move. Maybe the campaign is new and still building data. Maybe there was a seasonal dip. Maybe the vendor made targeting changes that haven't fully played out.
Action: Set a 60-day review window. Identify the specific factors dragging the score down — is it cost, conversion, lead quality, or case value? Work with the vendor on a concrete improvement plan with measurable targets. If the score doesn't improve to 50+ within 60 days, move to Cut.
Zone 4: Cut (Score Below 30)
Campaigns scoring below 30 are actively losing money. The leads don't convert, the cases aren't valuable, or the cost is too high relative to results. Every month this campaign runs, it consumes budget that could be deployed to a 75+ campaign.
Action: Reduce budget by 50% immediately and reallocate to your highest-scoring campaigns. Give the vendor 30 days at reduced spend to demonstrate improvement. If the score remains below 30, terminate the campaign entirely.
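The four zones reduce to a simple threshold lookup. A minimal sketch, using the score cutoffs and headline actions defined above:

```python
# Map a campaign score to its zone and headline action, using the
# thresholds from the four-zone framework above.

def classify(score):
    if score >= 75:
        return ("Scale", "Increase budget 15-20% monthly")
    if score >= 50:
        return ("Maintain", "Hold budget, optimize quarterly")
    if score >= 30:
        return ("Investigate", "60-day improvement plan")
    return ("Cut", "Reduce budget 50% now, 30-day exit plan")

print(classify(82))  # ('Scale', 'Increase budget 15-20% monthly')
print(classify(42))  # ('Investigate', '60-day improvement plan')
```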
(Zone color key: green = Scale, blue = Maintain, orange = Investigate or Cut)
The Decision Process: Step by Step
Knowing the zones is the foundation. But applying them well requires a disciplined process — especially when vendor relationships, historical commitments, and internal politics are involved.
Pull monthly scores for all active campaigns
Generate or review the composite performance score for every campaign running in the current period. Don't cherry-pick — evaluate the full portfolio.
Compare to 3-month trailing average
A single month's score can be noisy. Compare current scores to the 3-month average. A campaign scoring 42 this month but averaging 61 over three months is different from one that's been at 42 for three consecutive months.
Classify each campaign into its zone
Map every campaign to Scale (75+), Maintain (50–74), Investigate (30–49), or Cut (<30) based on the trailing average. Flag any campaign that moved zones since last review.
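Steps 2 and 3 can be combined in a few lines: classify on the 3-month trailing average, and flag any campaign whose latest month lands in a different zone than its average. The campaign names and score histories below are made up for illustration.

```python
# Classify campaigns on the 3-month trailing average and flag cases
# where the latest month disagrees with the average. Data is invented.

def zone(score):
    if score >= 75: return "Scale"
    if score >= 50: return "Maintain"
    if score >= 30: return "Investigate"
    return "Cut"

history = {                      # oldest -> newest monthly scores
    "google_lsa": [78, 81, 84],
    "tv_spots":   [61, 55, 42],  # noisy month or real decline?
    "directory":  [44, 43, 42],
}

for name, scores in history.items():
    trailing = sum(scores[-3:]) / 3
    z = zone(trailing)
    flag = "  <- latest month in a different zone" if zone(scores[-1]) != z else ""
    print(f"{name}: trailing avg {trailing:.0f} -> {z}{flag}")
```

Note that `tv_spots` classifies as Maintain on its trailing average even though the latest month alone would put it in Investigate; the flag tells you to watch it without overreacting.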
Draft budget reallocation proposal
Calculate how much budget to move from Cut and Investigate campaigns to Scale campaigns. Rule of thumb: reallocate at least 80% of Cut budget to your top 3 performers.
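A quick sketch of that rule of thumb with invented figures. The even three-way split of the freed budget is an assumption for simplicity; in practice you would split in proportion to each winner's headroom and the 15–20% increment guidance.

```python
# Reallocation sketch: move at least 80% of Cut-zone budget to the
# top 3 scorers. All campaign names and dollar figures are invented.

campaigns = [  # (name, score, monthly budget in $)
    ("lsa", 84, 40_000), ("seo", 77, 35_000), ("radio", 68, 20_000),
    ("tv", 52, 30_000), ("banner", 24, 25_000), ("directory", 18, 15_000),
]

cut_budget = sum(b for _, s, b in campaigns if s < 30)
pool = 0.80 * cut_budget                  # reallocate at least 80% of it
top3 = sorted(campaigns, key=lambda c: c[1], reverse=True)[:3]
per_campaign = pool / 3                   # simple even split (assumption)

print(f"Freed from Cut zone: ${cut_budget:,.0f}; reallocating ${pool:,.0f}")
for name, score, budget in top3:
    print(f"  {name} (score {score}): ${budget:,.0f} -> ${budget + per_campaign:,.0f}")
```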
Set review dates for Investigate campaigns
Every campaign in the Investigate zone gets a 60-day clock with specific improvement targets. Document the targets so the next review has clear pass/fail criteria.
Execute and monitor weekly
Implement budget changes, communicate with vendors, and check score trends weekly. If a Scale campaign's score drops 10+ points after a budget increase, pause the increase and investigate.
Why Trends Matter More Than Snapshots
One of the most common mistakes in score-based management is overreacting to a single month. PI marketing has inherent variability — settlement cycles, seasonal demand shifts, vendor inventory fluctuations, and intake team capacity all affect month-to-month performance.
A campaign that scored 71 last month and 58 this month isn't necessarily declining — it might be normal variance. But a campaign that's gone from 71 to 64 to 58 to 52 over four months is showing a clear downward trend that demands attention.
The practical rule: use 3-month trailing averages for zone classification, but watch monthly scores for early warning signals. If a campaign drops more than 15 points in a single month, investigate immediately regardless of the trailing average.
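That early-warning rule is easy to automate. A minimal sketch:

```python
# Early-warning check: flag any campaign that dropped more than
# 15 points in a single month, regardless of its trailing average.

def early_warning(scores, drop_threshold=15):
    """scores: monthly scores, oldest -> newest."""
    if len(scores) < 2:
        return False
    return scores[-2] - scores[-1] > drop_threshold

print(early_warning([71, 64, 58, 52]))  # False: steady decline, no single-month spike
print(early_warning([71, 68, 70, 48]))  # True: 22-point drop -> investigate now
```

The first series is the gradual-decline case from above, which the trailing average already catches; the second is the sudden-drop case the monthly check exists for.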
Having the Vendor Conversation
Scores make vendor conversations dramatically more productive. Instead of vague complaints (“your leads don't seem as good”), you can point to specific, objective data:
- “Your campaign scored 38 this quarter, down from 62 last quarter. The primary driver was a drop in conversion rate from 4.2% to 1.8%. What changed in your lead generation process?”
- “Your contact rate dropped from 68% to 41% over the last 90 days. That's the single biggest factor pulling your score down. We need that back above 60% within 60 days or we're reallocating the budget.”
- “Your campaign scores 82 and we want to scale it. Can you handle a 20% budget increase without degrading lead quality?”
These conversations are specific, measurable, and time-bound. They replace the guesswork that lets underperforming vendors survive quarter after quarter.
| Score Range | Zone | Action |
|---|---|---|
| 75–100 | Scale | Increase budget 15–20% monthly |
| 50–74 | Maintain | Hold budget, optimize quarterly |
| 30–49 | Investigate | 60-day improvement plan |
| 0–29 | Cut | Reduce 50% immediately, 30-day exit plan |
Putting It Into Practice
The first time you run this process, expect surprises. Campaigns you assumed were working may score lower than expected. Campaigns you'd been considering cutting may turn out to be solid performers when you account for case value and settlement data.
That's the point. Without a systematic scoring framework, budget decisions are shaped by recency bias, vendor confidence, and the loudest voice in the room. With scores, they're shaped by outcomes.
A revenue intelligence platform calculates these scores automatically and updates them as new data flows in. But even a manual monthly scoring exercise — pulling the six key metrics per campaign and weighting them — puts you ahead of the 80% of PI firms still making these decisions by spreadsheet and gut feel.
Start with your top five campaigns by spend. Score them this month. Classify them into zones. Make one budget reallocation based on what the scores tell you. Then measure the result. That first cycle is usually all it takes to see why score-based management produces better outcomes than the alternative.
Related guide: See our complete guide to AI for personal injury law firms — what works now, what's hype, the data foundation you need, and the 4-phase adoption roadmap.
Related guide: For the full Revenue Intelligence framework behind this piece, read our pillar: Revenue Intelligence for PI Firms — covering Performance, Intake, Source, and Financial Intelligence, plus the maturity assessment every firm should run.
