Intake Intelligence · 5 min read · 2026-02-28

How to Build an Intake Performance Review for a Personal Injury Firm

Most PI firms review marketing performance monthly but never review intake performance. Build an intake review that connects conversion data to cost per case by vendor.

Most PI firms have some version of a marketing performance review — a monthly or quarterly meeting where spend, lead volume, and conversion metrics are discussed. Very few have an intake performance review that is distinct from the marketing review, structured around intake-specific metrics, and designed to produce decisions about both intake operations and vendor performance.

An intake performance review is not a redundant meeting. It serves a different purpose: it examines the conversion layer between your marketing spend and your signed cases, identifies where leads are being lost or underconverted, and connects intake operational data to the vendor attribution data that drives marketing decisions.

This post outlines exactly how to structure one.

Who Should Be in the Room

The intake performance review should include, at minimum:

  • Marketing director or equivalent: Brings vendor spend data, lead volume by source, and cost per lead. Connects intake performance data back to marketing budget decisions.
  • Intake manager: Brings conversion rate data, rejection data, non-contact rates, and speed-to-lead metrics. Can explain patterns in the data at the specialist level.
  • Operations or firm administrator: Brings a pipeline view — how cases are progressing post-signing, where stalls are occurring, and withdrawal rates.
  • Attorney (optional but valuable): Can provide case quality perspective — are the signed cases meeting the firm's criteria? Is the severity mix consistent with expectations?

The managing partner does not need to attend every review, but should receive a summary. They are the audience for the budget decisions that the review generates.

Cadence and Format

Monthly is the right cadence for most PI firms. Weekly is too frequent — the data is too noisy and the patterns need time to emerge. Quarterly is too infrequent — problems compound for too long before anyone addresses them.

The review should work from a prepared data package, not from a live dashboard exploration. Someone needs to own building the metrics summary each month — usually the intake manager or a marketing analyst — so that the meeting is spent discussing the data, not assembling it.

A typical intake performance review runs 45–60 minutes. If it is regularly going longer than 90 minutes, the scope has expanded beyond what a monthly review can sustain.

Intake Performance Review Agenda

  • Volume & Velocity (10 min) — lead volume, speed-to-lead
  • Conversion & Rejection (15 min) — rates by source, outliers
  • Case Quality (10 min) — severity distribution by source
  • Retention & Withdrawals (10 min) — withdrawal rates by cohort
  • Decisions & Actions (10 min) — action items with owners

The Five-Section Agenda

Section 1 — Volume and Velocity (10 minutes)

Open with the lead volume numbers: total leads received this month by source, compared to the prior month and to the same month last year if the data is available. Note any significant volume changes — a vendor who delivered 30% fewer leads than contracted is worth flagging immediately, before discussing quality.

Also include speed-to-lead averages by source. If any source is averaging over 30 minutes to first contact, that should be noted before reviewing the conversion data for that source, since slow response times inflate apparent non-conversion rates.

Section 2 — Conversion and Rejection (15 minutes)

This is the core of the review. Walk through:

  • Conversion rate by source (signed cases ÷ total leads received)
  • Rejection rate by source
  • Top 2–3 rejection reason codes by source
  • Non-contact rate by source
  • Any sources with significant month-over-month changes in conversion or rejection rate

The goal of this section is not to review every vendor in equal depth. Focus on outliers — sources that are performing significantly above or below the portfolio average, and sources where the trend has shifted materially since the prior review.

A simple visual aid helps: a vendor scorecard showing each source's conversion rate, rejection rate, and non-contact rate side by side, color coded against a threshold (green/yellow/red). This lets the room immediately identify which vendors deserve discussion and which are performing within normal range. RevenueScale's intake performance dashboard generates this view automatically every month.
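For firms assembling this scorecard themselves, the threshold logic is straightforward to script. Here is a minimal Python sketch; the `SourceStats` fields and the yellow/red cutoffs are illustrative placeholders (tune them to your own portfolio baselines), not any particular platform's actual model:

```python
from dataclasses import dataclass

# Hypothetical (yellow, red) cutoffs for illustration only.
THRESHOLDS = {
    "rejection_rate": (0.20, 0.30),
    "non_contact_rate": (0.15, 0.25),
}

@dataclass
class SourceStats:
    name: str
    leads: int
    signed: int
    rejected: int
    non_contact: int

def scorecard_row(s: SourceStats) -> dict:
    """Compute per-source rates and a green/yellow/red status."""
    conversion = s.signed / s.leads
    rejection = s.rejected / s.leads
    non_contact = s.non_contact / s.leads
    status = "green"
    for metric, value in (("rejection_rate", rejection),
                          ("non_contact_rate", non_contact)):
        yellow, red = THRESHOLDS[metric]
        if value >= red:
            status = "red"
        elif value >= yellow and status != "red":
            status = "yellow"
    return {
        "source": s.name,
        "conversion": round(conversion, 3),
        "rejection": round(rejection, 3),
        "non_contact": round(non_contact, 3),
        "status": status,
    }

# Example: a source with a 38% rejection rate lands "red".
print(scorecard_row(SourceStats("Vendor C", leads=210, signed=19,
                                rejected=80, non_contact=65)))
```

The point of scripting it is consistency: the same cutoffs applied the same way every month, so a "red" in the scorecard means the same thing in every review.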

Section 3 — Case Quality (10 minutes)

For firms that have implemented case severity tracking, this section reviews the severity distribution of signed cases by source. The question is simple: is the case mix from each source consistent with prior periods and with the firm's target case profile?

If your firm has not yet implemented severity classification, this section can instead cover a qualitative case quality update from the intake manager or reviewing attorney — any sources where the case profile has felt different in the past 30 days. This is less precise but still valuable as a directional signal.

Section 4 — Retention and Withdrawals (10 minutes)

This section reviews withdrawal activity in the period. The relevant data:

  • Total withdrawals in the period (by case opening cohort, not just current period)
  • Withdrawals by lead source (attributed to the source that generated the original lead)
  • Withdrawal reasons (client initiated, case quality issue, non-responsive)
  • Any sources with withdrawal rates significantly above the portfolio average

Withdrawal data has a lag — most of the withdrawals from cases signed this month won't occur until 60–90 days later. This section should track the 60-day and 90-day withdrawal rates for prior cohorts, not just count withdrawals that happened to occur in the current month.
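The cohort calculation itself is simple once your CRM export includes a sign date and a withdrawal date per case. A minimal sketch, assuming those two fields are available (the sample records below are invented for illustration):

```python
from datetime import date

# Hypothetical signed-case records: (sign_date, withdrawal_date or None).
cases = [
    (date(2025, 10, 3), date(2025, 12, 20)),   # withdrew ~78 days in
    (date(2025, 10, 9), None),
    (date(2025, 10, 15), date(2025, 11, 30)),  # withdrew ~46 days in
    (date(2025, 10, 22), None),
]

def withdrawal_rate(cohort, window_days):
    """Share of a signing cohort withdrawn within window_days of signing."""
    withdrawn = sum(
        1 for signed, wd in cohort
        if wd is not None and (wd - signed).days <= window_days
    )
    return withdrawn / len(cohort)

# October signing cohort, measured at two windows.
oct_cohort = [c for c in cases if c[0].month == 10]
print(f"60-day: {withdrawal_rate(oct_cohort, 60):.0%}")
print(f"90-day: {withdrawal_rate(oct_cohort, 90):.0%}")
```

Keying the rate to the signing cohort rather than to the calendar month is what makes the number comparable across vendors and across reviews.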

Section 5 — Decisions and Actions (10 minutes)

Every intake performance review should end with a short decision log. What actions are being taken based on what the data showed? The format should be simple: the decision, the owner, and the timeline.

Examples of decisions that should come out of these reviews:

  • Vendor A's rejection rate has exceeded 40% for two consecutive months. Marketing director to schedule a vendor call with rejection reason data prepared. Decision point at next review.
  • Non-contact rate for Vendor C is 31% — double the portfolio average. Intake manager to audit the lead entry timing and response workflow for that source before next review.
  • Conversion rate for organic search leads has increased from 19% to 27% over the last three months. Marketing director to review Google Ads allocation for potential increase.
  • Withdrawal rate for Vendor B's Q4 cohort is 26% at 90 days. Elevated enough to discuss volume reduction. Decision deferred pending one more month of data.

The decision log becomes the opening agenda item for the next review. Accountability is built into the cadence.

Building the Data Package

The quality of an intake performance review depends almost entirely on the quality of the data package that precedes it. A few practical guidelines for the person who builds it:

Use a consistent format month over month. Changing the layout or metrics between reviews destroys the trend comparison that makes the data meaningful. The first two or three months of reviews are mostly establishing a baseline. The analysis gets richer as the trend data accumulates.

Automate wherever possible. If building the data package requires 8 hours of manual data collection each month, it will eventually be deprioritized or delegated to someone without the context to do it well. The goal is a package that can be assembled in under 2 hours, ideally triggered automatically from your intake platform and CRM. Native CRM integrations make this automated assembly possible without custom engineering work.

Flag anomalies before the meeting. If someone has looked at the data in advance and highlighted the three most notable patterns, the meeting is more productive. The group should be discussing implications and decisions, not discovering the data for the first time.

Sample Vendor Scorecard — Monthly Intake Review

| Metric | Vendor A | Vendor B | Vendor C | Vendor D |
| --- | --- | --- | --- | --- |
| Leads Received | 142 | 98 | 210 | 75 |
| Conversion Rate | 18% | 14% | 9% | 22% |
| Rejection Rate | 15% | 22% | 38% | 12% |
| Non-Contact Rate | 11% | 16% | 31% | 9% |
| Avg Speed-to-Lead | 8 min | 12 min | 22 min | 6 min |
| Status | On Track | Watch | Over Threshold | On Track |

What Changes After 6 Months of Consistent Reviews

Six months of consistent intake performance reviews produce several compounding benefits that individual reviews cannot.

Vendor patterns become legible. You can see which vendors have been consistently above or below threshold across multiple metrics, not just in a single month. This makes vendor decisions significantly more defensible — to partners, to vendors, and to yourself.

Intake execution improvements become measurable. If you made a process change in Month 3 — implementing a new response protocol for a specific source, or retraining specialists on rejection reason codes — you can see its effect in the data from Month 4 onward.

Marketing and intake develop a shared vocabulary. The persistent source of friction between marketing and intake — “the leads are bad” versus “the team isn't closing them” — resolves into a more productive conversation. Both teams are looking at the same data, measured the same way, on the same cadence. That shared visibility changes the quality of the collaboration.

Related guide: See our complete guide to PI intake performance — the 8 metrics every PI firm should track, benchmarks, and how to connect intake data to marketing attribution.

Want to see Revenue Intelligence in action?

See how RevenueScale connects your marketing spend to case outcomes — so you can cut waste, scale winners, and prove ROI to partners.