Thought Leadership · 7 min read · 2026-03-27

Why Intake Metrics Don't Belong to the Intake Team

Rejection rate, conversion rate, and withdrawal rate are almost always owned by intake. That's the wrong ownership model. These are marketing performance signals.


Here is the organizational chart at most personal injury firms when it comes to metrics: marketing owns cost per lead, click-through rates, and ad spend. Intake owns rejection rate, conversion rate, and withdrawal rate. Each team reports to its own manager. Each manager reports to the managing partner. And nobody connects the two data sets.

This is the wrong model. Not because either team is doing poor work, but because the ownership boundaries are drawn in the wrong place. Rejection rate, conversion rate, and withdrawal rate are not intake performance metrics. They are marketing performance signals — they tell you which vendors produce qualified prospects and which produce noise. When intake owns these numbers in isolation, the firm loses its most powerful lever for optimizing cost per case.

Cross-functional metric ownership is the organizational principle that separates firms with real attribution from firms with spreadsheets and finger-pointing. This post explains why, and what to do about it.

The Current Model: Two Teams, Two Dashboards, Zero Overlap

Walk into any PI firm spending $200K or more per month on lead generation, and you will find a version of the same structure. The marketing director — let's call her Dan — manages vendor relationships, negotiates rates, and tracks cost per lead by source. She reports monthly on spend, lead volume, and sometimes cost per signed case.

Down the hall, the intake manager — let's call her Olivia — manages a team of intake specialists who answer calls, qualify prospects, and sign cases. Olivia tracks rejection rate, conversion rate, speed to contact, and withdrawal rate. She reports monthly on intake efficiency.

Dan's dashboard lives in a marketing spreadsheet or a vendor portal. Olivia's dashboard lives in the case management system. The two systems do not talk to each other. And the two people rarely compare notes in any structured way.

This is not a technology problem. It is an ownership problem. Firms have decided that intake metrics belong to intake and marketing metrics belong to marketing, and the result is a gap exactly where the most important data lives: the point where a marketing-generated lead becomes (or fails to become) a signed case.

Vendor Quality Hidden in Intake Data

Why This Creates Blind Spots

When rejection rate is owned exclusively by the intake team, a vendor quality problem looks like an intake efficiency problem. Consider a real scenario. Vendor A sends 300 leads per month at $85 per lead. Vendor B sends 280 leads per month at $92 per lead. From marketing's dashboard, both vendors look roughly comparable.

But intake is rejecting 42% of Vendor B's leads versus 18% of Vendor A's leads. That gap represents roughly 67 wasted leads per month from Vendor B — leads that consumed intake time, clogged the phone queue, and produced zero signed cases. At $92 per lead, that is over $6,100 per month in spend that generated nothing but work for Olivia's team.
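The waste figure above follows directly from the rejection-rate gap. A minimal sketch using the scenario's numbers (the variable names are illustrative):

```python
# Hypothetical vendor figures from the scenario above.
vendor_b_leads = 280
vendor_b_cost_per_lead = 92
vendor_b_rejection = 0.42
baseline_rejection = 0.18  # Vendor A's rate, used as the benchmark

# Excess rejections attributable to Vendor B's quality gap
excess_rate = vendor_b_rejection - baseline_rejection
wasted_leads = vendor_b_leads * excess_rate            # ~67 leads per month
wasted_spend = wasted_leads * vendor_b_cost_per_lead   # ~$6,182 per month

print(f"Wasted leads/month: {wasted_leads:.0f}")
print(f"Wasted spend/month: ${wasted_spend:,.0f}")
```

The benchmark choice matters: measuring Vendor B against Vendor A's 18% (rather than against zero rejections) isolates the spend attributable to the quality gap, not to qualification screening in general.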

Here is where the blind spot lives: Olivia sees the rejection rate. She knows Vendor B leads are lower quality. She may even complain about it in a staff meeting. But the number lives in her report, not Dan's. Dan sees 280 leads delivered at $92 each and has no structured visibility into what happens after the lead arrives.

The vendor quality problem hides in intake data. It does not surface in marketing data. And because the two teams own separate numbers, the firm keeps writing checks to a vendor that is burning cash.

Multiply this across five, eight, or twelve vendors, and the blind spot becomes a budget hole. Firms routinely waste 15–25% of their lead generation spend on vendors whose quality issues are visible to intake but invisible to marketing.

The Specific Metrics That Should Be Shared

Not every metric needs cross-functional ownership. Speed to contact is legitimately an intake operations metric. Ad creative performance is legitimately a marketing metric. But three numbers sit directly on the boundary, and all three should be visible to — and owned by — both teams.

Rejection Rate by Source

This is the percentage of leads from each vendor that intake declines. When this number varies significantly across vendors, it is a vendor quality signal, not an intake performance signal. A 38% rejection rate from one source while others run at 15–20% tells you the source is sending unqualified prospects. That is a marketing problem that requires a marketing decision: renegotiate, adjust targeting, or cut the vendor.

Olivia can identify the pattern. But Dan needs to act on it. If only Olivia sees the number, the action stalls.

Conversion Rate by Source

Conversion rate — the percentage of leads that become signed cases — is the single most important bridge metric between marketing and intake. When it is tracked only in aggregate, it tells you how well your intake team performs overall. When it is tracked by source, it tells you which vendors are producing prospects who are ready, willing, and qualified to sign.

A vendor delivering leads at $75 each with a 12% conversion rate produces signed cases at $625 each. A vendor delivering leads at $110 each with a 28% conversion rate produces signed cases at $393 each. The second vendor looks more expensive on Dan's dashboard and more productive on Olivia's dashboard. Only when both teams look at the combined number does the firm make the right budget decision.
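The combined number is simply cost per lead divided by conversion rate. A quick sketch with the two vendors above (function name is illustrative):

```python
def cost_per_signed_case(cost_per_lead: float, conversion_rate: float) -> float:
    """Effective acquisition cost of one signed case from a source."""
    return cost_per_lead / conversion_rate

cheap_vendor = cost_per_signed_case(75, 0.12)    # $625.00 per signed case
pricey_vendor = cost_per_signed_case(110, 0.28)  # ~$392.86 per signed case
```

The "expensive" vendor produces signed cases at roughly 63% of the cheap vendor's cost, which is exactly the inversion neither dashboard shows on its own.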

Withdrawal Rate by Source

Withdrawal rate — the percentage of signed cases that terminate before settlement — is the metric that most firms ignore entirely when evaluating vendor performance. But it matters enormously for cost per case. A vendor whose signed cases withdraw at 25% is effectively a third more expensive per fee-generating case than the numbers suggest, because one in four of those “signed cases” will never generate a fee.

This data lives deep in case management, months after the lead arrived. Olivia's team tracks it as a case outcome metric. But it is, at its core, a marketing attribution metric. The vendor who generated the lead bears responsibility for the quality of the prospect who eventually withdrew. Dan needs this number to calculate true cost per case by vendor.
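The withdrawal adjustment is a simple multiplier on cost per signed case. A sketch with illustrative numbers (the $500 figure is hypothetical, not from the scenario above):

```python
# Withdrawal-adjusted cost multiplier.
withdrawal_rate = 0.25
multiplier = 1 / (1 - withdrawal_rate)  # ~1.33x

# A hypothetical $500 signed case from this vendor really costs
# ~$667 per case that survives to generate a fee.
effective_cost = 500 * multiplier
```

This is why 25% withdrawal means a third more expensive, not a quarter: the surviving 75% of cases must absorb 100% of the spend.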

What Cross-Functional Ownership Looks Like in Practice

Cross-functional metric ownership does not mean eliminating roles or blurring accountability. It means three things.

Shared visibility. Both Dan and Olivia see rejection rate by source, conversion rate by source, and withdrawal rate by source on the same dashboard. Not in separate spreadsheets emailed once a month. On the same screen, updated at the same frequency, with the same definitions.

Joint accountability. When rejection rate from a vendor spikes, it is not Olivia's problem to manage and not Dan's problem to ignore. Both teams own the outcome. Dan is accountable for vendor selection and budget allocation. Olivia is accountable for consistent qualification standards. Neither can optimize without the other.

Structured review cadence. The two teams meet — weekly or biweekly — to review the shared metrics together. Not an ad hoc hallway conversation. A standing meeting with a standing agenda: which sources are trending up, which are trending down, and what actions follow. Fifteen minutes is enough if both people are looking at the same data before the meeting starts.

The mechanics are straightforward. The cultural shift is harder. It requires both teams to accept that their metrics are not just theirs. It requires intake to share data that might feel like a judgment on their performance. It requires marketing to accept feedback loops that slow down the comfortable narrative of “we delivered X leads this month.”

The Conversation Between Olivia and Dan That Most Firms Never Have

At firms without cross-functional metric ownership, here is what happens. Olivia notices that Vendor C's leads have a 35% rejection rate, mostly for “no valid case” and “statute expired.” She mentions it in a monthly report. Dan reads the report but has no direct visibility into the underlying data. The vendor keeps running. Three months and $45,000 later, someone escalates it to the managing partner, who asks why nobody flagged this sooner.

At firms with cross-functional metric ownership, here is what happens instead. In their biweekly review, Olivia and Dan both see that Vendor C's rejection rate has climbed from 22% to 35% over six weeks. They look at the rejection reason breakdown together. They see 61% of rejections are for statute of limitations issues, which points to a targeting problem on the vendor's side. Dan calls the vendor that afternoon with specific data. The vendor adjusts its geographic targeting. Rejection rate drops to 24% within three weeks.

The difference is not that the second firm has smarter people. The difference is that the second firm structured its data ownership so the right person gets the right signal at the right time. Olivia had the data. Dan had the vendor relationship. The meeting created the context for action.

This is the conversation most firms never have — not because the people are unwilling, but because the organizational structure never creates the moment for it. The data exists in two silos. The meeting is not on the calendar. And the shared dashboard does not exist.

Vendor C Performance Issue: Siloed vs. Shared

Siloed Ownership

  • Olivia notices 35% rejection rate
  • Mentions it in monthly report
  • Dan has no direct visibility
  • Vendor runs for 3 more months ($45K wasted)

Cross-Functional Ownership

  • Both see rejection climb from 22% to 35%
  • Review rejection reasons together
  • Dan calls vendor same day with data
  • Vendor adjusts targeting within 3 weeks

What Changes When Both Teams Look at the Same Numbers

When firms move to cross-functional metric ownership, three things change in the first 90 days.

Vendor accountability accelerates. Quality issues that used to take months to surface now surface in weeks. When rejection rate by source is visible to both teams, underperforming vendors get flagged faster. Firms that implement shared dashboards typically identify 15–20% in wasted vendor spend within the first quarter.

Intake stops getting blamed for marketing problems. When conversion rate varies dramatically across vendors, it becomes clear that intake's “low conversion” is actually a lead quality problem from specific sources. Olivia's team stops hearing “why aren't you signing more cases” when the real answer is “Vendor D is sending unqualified leads.” This is not a small thing. Intake teams that feel unfairly blamed disengage. Intake teams that feel their data is heard perform better.

Cost per case becomes a real number, not an estimate. When you combine marketing spend data with intake conversion data and case outcome data, you get true cost per case by vendor. Not cost per lead. Not cost per signed case. Cost per case that survives to settlement. That number is the only metric that matters for budget allocation, and it is impossible to calculate without cross-functional data ownership.
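The full calculation chains all three data sets together: marketing's cost per lead, intake's conversion rate, and case management's withdrawal rate. A minimal sketch, assuming per-source figures like those used earlier in the post (function name is illustrative):

```python
def true_cost_per_case(cost_per_lead: float,
                       conversion_rate: float,
                       withdrawal_rate: float) -> float:
    """Cost per case that survives to settlement, by source.

    Only the fraction (conversion_rate * (1 - withdrawal_rate)) of
    leads becomes a fee-generating case, so all spend is divided
    across that surviving fraction.
    """
    surviving_rate = conversion_rate * (1 - withdrawal_rate)
    return cost_per_lead / surviving_rate

# Illustrative: $110 leads, 28% conversion, 25% withdrawal
cost = true_cost_per_case(110, 0.28, 0.25)  # ~$523.81 per surviving case
```

No single team holds all three inputs, which is the arithmetic reason the metric is impossible to calculate under siloed ownership.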

The operational change is modest — a shared dashboard, a biweekly meeting, a common set of definitions. The strategic impact is significant. Firms that connect intake data to marketing decisions consistently report a 15–20% improvement in marketing ROI within 90 days. Not because they found some clever optimization trick, but because they stopped making budget decisions with half the data.

Moving From Silos to Shared Ownership

If you are an intake manager reading this, the ask is not that you give up your metrics. It is that you share them with the person who can act on the marketing implications. Your data is more powerful than you realize — it is the missing feedback loop that marketing has been operating without.

If you are a marketing director reading this, the ask is not that you start micromanaging intake. It is that you start treating rejection rate, conversion rate, and withdrawal rate as vendor performance signals, not intake performance signals. When those numbers move, your first question should not be “what is wrong with intake?” It should be “which vendor changed?”

The firms that get attribution right are not the ones with the best technology or the biggest budgets. They are the ones that drew the ownership lines in the right places. Cost per case is a cross-functional metric. The data that feeds it should be cross-functional too.

Related guide: See our complete guide to PI intake performance — the 8 metrics every PI firm should track, benchmarks, and how to connect intake data to marketing attribution.

See it in action

Discover how RevenueScale tracks cost per case from click to settlement.

Book a Demo
