Performance Intelligence · 7 min read · 2026-05-15

What 91% Prediction Accuracy Actually Means for a PI Firm's Budget Decisions

91% prediction accuracy sounds impressive — but what does it actually measure, and what does it change about how your firm makes budget decisions?


When a platform claims 91% prediction accuracy, the natural question is: what does that actually mean? Not in marketing terms. In practical terms — what changes about how a managing partner allocates a $300K monthly marketing budget when the forecasts behind those decisions are right 91% of the time?

This post breaks down the number: how it is measured, what it does and does not guarantee, and why it matters more for budget decisions than most partners realize.

What 91% Accuracy Measures

The 91% figure means that the predictive model's monthly signed-case forecast lands within 10% of the actual outcome 91% of the time. If the model projects 40 signed cases for the month, the actual result falls between 36 and 44 in roughly 9 out of every 10 months.

This is not a vague claim. It is calculated against a specific standard: predictions made at the midpoint of each month (day 15), measured against the final count of signed cases at month-end, across a rolling 12-month validation window.
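The hit-rate calculation behind that standard is simple arithmetic. Here is a minimal sketch using hypothetical forecast/actual pairs (illustrative numbers, not real platform data):

```python
# Hypothetical day-15 forecasts and month-end actuals for 12 months.
forecasts = [40, 38, 45, 42, 50, 36, 44, 41, 39, 47, 43, 40]
actuals   = [41, 36, 48, 43, 49, 40, 45, 42, 38, 46, 44, 39]

def within_band(forecast: int, actual: int, tolerance: float = 0.10) -> bool:
    """A forecast 'hits' if the actual lands within ±tolerance of the forecast."""
    return abs(actual - forecast) <= tolerance * forecast

hits = sum(within_band(f, a) for f, a in zip(forecasts, actuals))
hit_rate = hits / len(forecasts)
print(f"Hit rate over {len(forecasts)} months: {hit_rate:.0%}")
```

In this made-up sample, 11 of 12 months land inside the ±10% band; the reported 91% figure is the same calculation run over a rolling 12-month validation window.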

Breaking Down 91% Accuracy

| Component | Value | Meaning |
| --- | --- | --- |
| Prediction window | Day 15 | Forecast made at month midpoint |
| Accuracy threshold | ±10% | Actual falls within this range of the forecast |
| Hit rate | 91% | Meets the threshold in 91 of 100 months |

What It Does Not Mean

To be clear about what 91% accuracy is not:

  • It does not mean the model is exactly right 91% of the time. Exact match is nearly impossible in case forecasting. The 10% tolerance band is what makes the metric practical.
  • It does not mean every vendor-level prediction hits 91%. Aggregate monthly forecasts are more accurate than individual vendor projections, because vendor-level variance cancels out at the portfolio level.
  • It does not mean the model cannot be wrong. The remaining 9% of months will see forecasts outside the 10% band — typically during unusual events like a major weather event or a sudden vendor capacity change.

Why Confidence Level Changes Budget Behavior

Here is the part that matters for managing partners: the confidence level of a forecast directly determines how aggressively a firm can act on it.

Consider two scenarios. Both involve the same firm, the same $300K monthly budget, and the same mid-month data showing Vendor C is underperforming.

Budget Decisions at Different Confidence Levels

| Factor | 60% Confidence | 91% Confidence |
| --- | --- | --- |
| Mid-month reallocation | Too risky — forecast may be wrong 4 in 10 times | Reallocate $15K-$25K from underperformer with high confidence |
| Vendor contract decisions | Wait for 2-3 months of data to confirm trend | Act on single-month forecast backed by historical pattern |
| Partner budget conversations | Present ranges so wide they are not useful | Present specific projections partners can plan around |
| New vendor trial evaluation | Need 4-6 months to judge performance | Reliable read within 60-90 days |
| Annual budget planning | Based on averages and assumptions | Based on forward-looking projections with known accuracy |

The Real Cost of Low Confidence

When forecasts are unreliable, the rational response is inaction. A marketing director who has been burned by bad predictions learns to wait — for more data, for more months, for more confirmation. That waiting has a cost.

A vendor underperforming by $15K per month that takes three months to confirm instead of one month costs the firm $30K in preventable waste. Multiply that across five or six vendors over a year, and the cost of low-confidence forecasting is $100K to $200K in delayed decisions.

$30K: the cost of waiting two extra months to confirm vendor underperformance
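The waiting cost is a straight multiplication: the monthly underperformance times the extra months spent waiting for confirmation. A quick sketch using the figures from the example above:

```python
# A vendor underperforming by a fixed monthly amount keeps burning budget
# for every extra month spent waiting on confirmation. Figures are from
# the example in the text, not live data.
monthly_waste = 15_000        # underperformance per month
months_to_confirm_low = 3     # low confidence: wait for the trend to repeat
months_to_confirm_high = 1    # high confidence: act on a single forecast

extra_waste = monthly_waste * (months_to_confirm_low - months_to_confirm_high)
print(f"Preventable waste per vendor: ${extra_waste:,}")  # $30,000
```

Scaled across five or six vendors over a year, the same arithmetic produces the $100K-$200K range cited above.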

A Practical Scenario: Acting on a 91% Forecast

A PI firm spends $300K per month across six vendors. On day 15, the predictive model shows:

Day 15 Forecast — $300K Monthly Budget

| Metric | Value | Detail |
| --- | --- | --- |
| Projected signed cases | 42 | vs. target of 48 (12.5% below target) |
| Vendor C projection | 3 cases | vs. 8 expected, on $45K spend (62.5% below expected) |
| Model confidence | 91% | Within ±10% of actual 91% of the time |

Vendor C is projected to deliver 3 signed cases against an expectation of 8, at a cost of $45K. That is a cost per signed case of $15,000 — three times the portfolio average of $5,000.
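The cost-per-case comparison is the same division a marketing director would do on a napkin; here it is as a sketch, using the scenario's figures:

```python
# Cost per signed case for Vendor C vs. the portfolio average,
# using the figures from the day-15 scenario above.
vendor_c_spend = 45_000       # monthly spend on Vendor C
vendor_c_cases = 3            # projected signed cases
portfolio_avg_cost = 5_000    # portfolio-average cost per signed case

cost_per_case = vendor_c_spend / vendor_c_cases
multiple = cost_per_case / portfolio_avg_cost
print(f"Vendor C cost per case: ${cost_per_case:,.0f} ({multiple:.0f}x portfolio average)")
```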

With 91% confidence in the forecast, the marketing director can make a defensible recommendation: shift $20K from Vendor C to Vendors A and B, which are tracking at or above expected conversion rates. The remaining $25K stays with Vendor C to maintain the relationship while performance is investigated.

What Happens Without Confidence in the Forecast

At 60% confidence, the same marketing director hesitates. What if the model is wrong? What if Vendor C's cases close in the last two weeks? The rational move is to wait — and by month-end, the $45K is spent. If Vendor C delivers 3 cases instead of 8, the firm absorbed $25K in preventable waste.

This is not a one-time occurrence. It happens every month, across multiple vendors. The firms that cannot trust their forecasts consistently leave money on the table.

How Accuracy Compounds Over Time

The value of 91% accuracy is not just about any single month. It compounds. When a firm makes confident budget decisions month after month, the cumulative effect is significant:

  • Month 1: Reallocate $20K from an underperformer. Save $10K in wasted spend.
  • Month 3: Identify a vendor trending downward before the contract renewal. Negotiate better terms or replace. Save $15K per month going forward.
  • Month 6: Annual budget planning uses six months of accurate forecasts. Partners approve a budget based on projected cost per case by vendor, not last year's averages.
  • Month 12: The firm has made 12 months of data-driven decisions. The 15-20% ROI improvement is not from one big move — it is from dozens of small, confident adjustments.

What Makes 91% Possible

Prediction accuracy at this level requires three things that most PI firms already have but are not using systematically:

  • 12+ months of historical data: Lead volume, conversion rates, and spend by vendor. The model learns patterns from your firm's actual history, not industry averages.
  • Daily data updates: Weekly snapshots reduce accuracy because mid-week shifts are invisible. Daily data from connected integrations keeps the model current.
  • Source-level tracking: Aggregate data washes out the vendor-level patterns the model needs. Tracking leads, conversions, and spend by individual source is what makes vendor-specific forecasts possible.

The Bottom Line for Managing Partners

A 91% accuracy rate is not a marketing claim. It is a performance standard that determines whether your marketing director can act on mid-month data or has to wait for month-end confirmation. The difference between acting on day 15 and waiting until day 30 is typically $15K to $25K per month in recoverable spend for a firm at the $300K budget level.

Over 12 months, that is $180K to $300K in budget optimization that is only possible when the underlying forecasts are reliable enough to act on.

Want to see the accuracy against your own data? Our AI Insights module runs a backtesting analysis using your historical lead and conversion data to show what prediction accuracy looks like for your specific vendor portfolio. Book a demo to see the numbers.

Related guide: See our complete guide to AI for personal injury law firms — what works now, what's hype, the data foundation you need, and the 4-phase adoption roadmap.

See it in action

Discover how RevenueScale tracks cost per case from click to settlement.

Book a Demo
