Intake is where marketing spend either turns into revenue or disappears. Every dollar you spend acquiring a lead is either recouped through a signed case or lost at the intake desk. Yet in most PI firms, intake is measured as an operational function — call handling time, staff utilization, volume throughput — rather than as a revenue function.
Shifting intake measurement toward revenue metrics doesn't require rebuilding your intake process. It requires tracking different numbers from the same process you already have. This article covers the intake metrics that connect most directly to revenue performance — what they are, how to calculate them, and what to do when they move.
Why Intake Metrics Are Marketing Metrics
The marketing director and the intake manager are managing the same pipeline from different ends. Marketing drives leads into the top of the funnel. Intake converts those leads into signed cases at the bottom of the funnel. When intake conversion is poor, marketing ROI is poor — regardless of how well the ad campaigns are performing.
The most sophisticated PI marketing operations treat intake metrics as a shared responsibility: marketing owns lead quality into intake, intake owns conversion from contact to signed case, and both teams review the numbers together. Firms that silo these functions consistently blame each other for performance problems that better data would resolve.
Tier 1: The Conversion Metrics
Overall Lead-to-Case Conversion Rate
This is the baseline metric: of all leads delivered to your intake team in a given period, what percentage became signed cases? Calculate it by dividing signed cases by total leads received.
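The calculation itself is trivial, but it's worth pinning down so everyone computes it the same way. A minimal sketch in Python (the function name and the example figures are illustrative, not from the article):

```python
def lead_to_case_conversion(signed_cases: int, total_leads: int) -> float:
    """Lead-to-case conversion rate as a percentage of all leads received."""
    if total_leads == 0:
        return 0.0
    return 100.0 * signed_cases / total_leads

# Example: 42 signed cases from 380 leads delivered in a month
rate = lead_to_case_conversion(42, 380)
print(f"{rate:.1f}%")  # 11.1%
```

The denominator is all leads received, not leads contacted — counting only contacted leads quietly hides the contact-rate problem discussed below.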
Industry range for PI firms with disciplined intake: 8 to 15% for aggregate lead portfolios. Higher rates often reflect more selective lead acquisition (not necessarily better intake). Lower rates can reflect intake execution issues or poor lead quality from specific vendors.
Track this monthly and watch the trend. A declining conversion rate is one of the earliest signals that something is wrong — either lead quality is dropping, intake execution is degrading, or your case acceptance criteria have tightened. The metric tells you something changed. The diagnostic work tells you what.
Conversion Rate by Lead Source
Aggregate conversion rate hides what matters most: how does conversion vary by vendor? A portfolio average of 11% can consist of one vendor at 18% and another at 4%. Those two vendors require completely different responses.
Track conversion rate by vendor on a 90-day rolling basis. This smooths out month-to-month variance and gives you enough data to distinguish a real quality problem from a statistical blip. When a vendor's conversion rate drops below your firm-wide average for two consecutive 90-day periods, that is a performance conversation with the vendor.
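The "two consecutive 90-day periods below the firm-wide average" trigger is easy to automate. A minimal sketch, assuming you already have per-vendor conversion rates for each rolling period (all names and figures here are hypothetical):

```python
def flag_underperformers(vendor_rates: dict, firm_avg: float) -> list:
    """vendor_rates maps vendor name -> list of conversion rates (%) for
    consecutive 90-day periods, oldest first. Flags vendors whose two most
    recent periods are both below the firm-wide average."""
    flagged = []
    for vendor, rates in vendor_rates.items():
        if len(rates) >= 2 and all(r < firm_avg for r in rates[-2:]):
            flagged.append(vendor)
    return flagged

rates = {"Vendor A": [12.0, 13.5, 14.0], "Vendor B": [9.0, 7.5, 6.8]}
flag_underperformers(rates, firm_avg=11.0)  # ["Vendor B"]
```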
Contact Rate by Lead Source
What percentage of leads does your intake team successfully reach? This is the often-invisible first filter in the conversion funnel. A lead you can't reach is effectively a lead you didn't receive — but you still paid for it.
Track contact rate by vendor to distinguish intake execution issues from lead quality issues. If your firm-wide contact rate is 72% but contact rate from Vendor X is 45%, that vendor may be selling stale leads, recycled leads, or leads with inaccurate contact information. That is a vendor quality issue, not an intake issue.
If contact rate is low across all vendors simultaneously, suspect an intake execution problem: response time, call volume capacity, or hours of coverage.
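The diagnosis above depends on seeing per-vendor and firm-wide contact rates side by side. A rough sketch, assuming each lead record carries a vendor name and a contacted flag (the data shape is an assumption for illustration):

```python
from collections import defaultdict

def contact_rates(leads):
    """leads: iterable of (vendor, contacted: bool) tuples.
    Returns (per_vendor_rates, firm_wide_rate), all as percentages."""
    totals = defaultdict(int)
    reached = defaultdict(int)
    for vendor, contacted in leads:
        totals[vendor] += 1
        reached[vendor] += int(contacted)
    per_vendor = {v: 100.0 * reached[v] / totals[v] for v in totals}
    firm_wide = 100.0 * sum(reached.values()) / sum(totals.values())
    return per_vendor, firm_wide
```

A single vendor far below the firm-wide number points at lead quality; every vendor depressed at once points at intake execution.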
At a glance:

- Overall conversion: 8-15% (aggregate lead-to-case)
- By-vendor variance: 4-18% (source matters more than the average)
- Contact rate: 72% (firm-wide benchmark)
Tier 2: The Quality Metrics
Rejection Rate by Reason and Source
When intake rejects a lead, why? The reasons tell you far more than the rate. Common PI rejection categories include:
- Liability doesn't support a case
- Statute of limitations issue
- Existing representation
- Injuries insufficient for case economics
- Prospect not interested in pursuing
- Unable to contact
Track rejection reasons by vendor. If 30% of leads from a specific vendor are rejected for “liability doesn't support a case,” that vendor is sourcing leads that don't meet your case quality bar. If 25% are rejected because the prospect already has representation, that vendor is selling leads to multiple firms.
This data is one of the most powerful tools in a vendor performance conversation. “In Q1, 34% of your leads were rejected for existing representation — that's double your rejection rate in Q4” is a specific, credible, data-based concern that vendors have to respond to.
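Producing that kind of vendor-level breakdown is a simple cross-tabulation. A sketch, assuming rejection records are (vendor, reason) pairs and you know total leads per vendor (both assumptions for illustration):

```python
from collections import Counter, defaultdict

def rejection_breakdown(rejections, leads_per_vendor):
    """rejections: iterable of (vendor, reason) tuples.
    leads_per_vendor: {vendor: total leads received from that vendor}.
    Returns {vendor: {reason: % of that vendor's leads rejected for it}}."""
    counts = defaultdict(Counter)
    for vendor, reason in rejections:
        counts[vendor][reason] += 1
    return {
        vendor: {
            reason: 100.0 * n / leads_per_vendor[vendor]
            for reason, n in reasons.items()
        }
        for vendor, reasons in counts.items()
    }
```

Expressing each reason as a share of that vendor's total leads (not of its rejections) keeps the numbers directly comparable across vendors of different sizes.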
Case Severity at Sign
At the moment a retainer is signed, record the injury severity of the case. Even a three-level scale — minor, moderate, severe — tracked consistently by intake over 12 months gives you actionable data about lead quality by vendor and a predictor of downstream settlement value.
A vendor consistently producing minor-severity cases at a low cost per case is not necessarily more valuable than a vendor producing moderate-to-severe cases at a higher cost per case — because the economics of what settles at the end of the pipeline are different. Severity at sign is the earliest data point connecting intake performance to financial outcomes.
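The point can be made concrete with back-of-the-envelope math. All figures below are hypothetical, purely to illustrate the comparison; substitute your own cost and fee data:

```python
def fee_to_cost_ratio(cost_per_case: float, avg_fee: float) -> float:
    """Gross return per signed case: average fee divided by acquisition cost."""
    return avg_fee / cost_per_case

# Hypothetical vendors, for illustration only:
vendors = {
    "Vendor A": {"cost_per_case": 900, "avg_fee": 4_000},     # mostly minor cases
    "Vendor B": {"cost_per_case": 2_500, "avg_fee": 15_000},  # moderate-to-severe mix
}
for name, v in vendors.items():
    ratio = fee_to_cost_ratio(v["cost_per_case"], v["avg_fee"])
    print(f"{name}: {ratio:.1f}x per signed case")
```

On these hypothetical numbers, the "expensive" vendor generates more revenue per acquisition dollar — which is exactly what severity-at-sign data lets you see before settlements land.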
Withdrawal Rate After Sign
What percentage of signed cases withdraw within the first 90 days? This metric is often overlooked because it happens after the conversion event that most intake teams celebrate. But a case that signs and then withdraws has a real cost — the marketing acquisition cost, the intake team time, and the attorney case setup time — with zero revenue.
Track withdrawal rate by vendor source. High early withdrawal rates from a specific vendor often signal a mismatch between how that vendor is acquiring and representing the case to prospects, and what the actual case experience involves. It can also signal a prospect quality issue — leads who were never committed to pursuing a case in the first place.
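Computing the 90-day window just requires sign and withdrawal dates per case. A minimal sketch, assuming each case record is a (sign_date, withdrawal_date) pair with None for cases still active:

```python
from datetime import date

def early_withdrawal_rate(cases, window_days: int = 90) -> float:
    """cases: iterable of (sign_date, withdrawal_date_or_None) tuples.
    Percentage of signed cases withdrawn within window_days of signing."""
    cases = list(cases)
    if not cases:
        return 0.0
    early = sum(
        1 for signed, withdrawn in cases
        if withdrawn is not None and (withdrawn - signed).days <= window_days
    )
    return 100.0 * early / len(cases)
```

Run it separately per vendor source to surface the mismatch pattern described above.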
Tier 3: The Speed Metrics
Speed to First Contact
How many minutes or hours pass between lead delivery and first contact attempt? Research in legal intake consistently shows that contact rates and conversion rates drop significantly when response time exceeds 5 to 10 minutes for high-intent leads.
Track this by lead source and by time of day. Some vendors deliver leads during business hours when your intake team has full coverage. Others deliver leads at 9pm when your coverage may be limited. Speed to first contact metrics will reveal whether your intake infrastructure matches your lead delivery patterns.
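Breaking response time out by delivery hour is what exposes the coverage gap. A sketch, assuming each lead record has a delivery timestamp and a first-attempt timestamp (the data shape is an assumption):

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

def median_response_by_hour(leads):
    """leads: iterable of (delivered_dt, first_attempt_dt) datetime pairs.
    Returns {delivery hour (0-23): median minutes to first contact attempt}."""
    by_hour = defaultdict(list)
    for delivered, attempt in leads:
        minutes = (attempt - delivered).total_seconds() / 60.0
        by_hour[delivered.hour].append(minutes)
    return {hour: median(vals) for hour, vals in by_hour.items()}
```

A median of 5 minutes at 10am and 40 minutes at 9pm is the signature of a lead delivery pattern your coverage hours don't match.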
Speed to Sign
Once contact is made, how long does it take to get to a signed retainer? This measures intake process efficiency — not just whether leads are being worked, but whether they're being moved to close. A prospect who has been in the intake process for 14 days without a resolution is a lead that's likely to go elsewhere.
Establish a target: most PI firms can sign a motivated, qualified prospect within 48 hours of first contact. Track the percentage of signed cases that closed within that window. Cases that took longer represent either prospect hesitation (a sales process question) or intake process delay (an operations question).
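Tracking the share of signs that land inside the 48-hour window is a one-liner once you have first-contact and sign timestamps. A sketch (data shape assumed for illustration):

```python
from datetime import datetime, timedelta

def pct_signed_within(cases, hours: int = 48) -> float:
    """cases: iterable of (first_contact_dt, sign_dt) datetime pairs for
    signed cases. Percentage signed within `hours` of first contact."""
    cases = list(cases)
    if not cases:
        return 0.0
    hits = sum(1 for contact, sign in cases if sign - contact <= timedelta(hours=hours))
    return 100.0 * hits / len(cases)
```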
Building the Intake Dashboard
The best intake performance dashboards show three views simultaneously:
- Today's operational view: Leads in queue, contact attempts, contacts made, cases pending decision. This is the intake manager's daily operations tool.
- This week's conversion view: Conversion rate and contact rate vs. prior 4-week average, broken down by vendor. This is the weekly accountability view for intake leaders and marketing directors together.
- This month's quality view: Rejection reasons by source, severity distribution of signed cases, withdrawal rate. This is the monthly vendor management and business development view.
Firms that give both their intake managers and their marketing directors access to the same intake metrics consistently report faster problem identification and fewer “whose fault is it?” conversations when performance dips. The data makes the question answerable — and answerable questions tend to get solved rather than argued about.