If your PI firm has three offices, you are not running one intake operation. You are running three. Three teams with three different conversion rates, three different rejection standards, and three different definitions of what “qualified” actually means. The leads may come from the same vendors. The case criteria may be written on the same page of the same handbook. But the way those leads are handled — the speed, the rigor, the follow-up — varies more than most multi-location firms realize.
This is not a people problem. It is a systems problem. And it is one that gets worse as you grow. Every new office you add multiplies the inconsistency unless you build a framework that gives each location the same standards, the same data, and the same accountability structure — without removing the local judgment that makes intake effective in the first place.
The Multi-Location Intake Problem
Most PI firms open a second or third office and replicate the intake process they already have. They hire a local intake team, give them access to the CRM, and assume the process will carry over. For the first few months, it usually does. Then the drift starts.
Office A develops a habit of being aggressive on soft-tissue cases because their local market demands volume. Office B becomes more selective because a managing attorney there prefers higher-value cases. Office C, the newest location, is still figuring out its rhythm and rejecting leads that either of the other offices would sign.
None of this shows up in a firm-wide conversion rate. The blended number looks fine — 14%. But underneath that number, three different intake cultures are producing three very different outcomes from the same lead flow.
- Office A conversion: 21% — aggressive on volume, signs more soft-tissue
- Office B conversion: 14% — selective criteria, fewer but higher-value cases
- Office C conversion: 7% — new team, still calibrating rejection standards
The firm-wide blended rate of 14% hides three very different stories. And without location-level visibility, you cannot diagnose which differences are strategic choices and which are performance gaps.
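As a quick sanity check on the arithmetic, here is a minimal sketch of how a healthy-looking blended rate masks the spread. The lead volumes are illustrative, assuming equal lead flow to each office:

```python
# Hypothetical monthly lead volumes and signed cases per office.
# These numbers are illustrative, chosen to match the 21% / 14% / 7% example.
offices = {
    "Office A": {"leads": 300, "signed": 63},   # 21%
    "Office B": {"leads": 300, "signed": 42},   # 14%
    "Office C": {"leads": 300, "signed": 21},   # 7%
}

total_leads = sum(o["leads"] for o in offices.values())
total_signed = sum(o["signed"] for o in offices.values())
blended_rate = total_signed / total_leads

for name, o in offices.items():
    print(f"{name}: {o['signed'] / o['leads']:.0%}")
print(f"Firm-wide blended: {blended_rate:.0%}")  # 14% — the 21%/7% spread vanishes
```

The blended 14% is arithmetically correct and diagnostically useless: it tells you nothing about which office is carrying the number and which is dragging it down.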
Problem 1: Inconsistent Qualification Criteria Across Offices
The most common multi-location intake problem is not speed or staffing. It is that each office develops its own informal definition of what counts as a qualified lead. The firm may have a written set of case criteria — minimum treatment thresholds, excluded case types, liability standards — but the application of those criteria varies by location.
One office marks a lead as “rejected — does not meet criteria” while another office marks the same type of lead as “pending — needs more information.” One office counts a callback attempt as a contact; another requires a live conversation. One office rejects leads after two attempts; another makes five.
The result is that your disposition data — the foundation of every intake performance metric — means different things depending on which office produced it. When you pull a report showing rejection rates by location, you are not comparing apples to apples. You are comparing three different grading systems.
This is where most multi-location intake analysis breaks down. The data exists, but it is not standardized. And without standardization, you cannot benchmark. Without benchmarking, you cannot identify which offices need coaching and which are genuinely performing differently because of local market conditions.
| Scenario | Office A | Office B | Office C |
|---|---|---|---|
| Lead doesn't answer first call | Attempt 1 logged, retry queued | Marked 'no contact' | Rejected after 24 hours |
| Soft-tissue, minimal treatment | Signed — meets minimum | Pending — needs medical update | Rejected — below threshold |
| Lead requests callback tomorrow | Scheduled, tracked in CRM | Noted in comments, no task | Forgotten — no follow-up system |
| Out-of-jurisdiction lead | Transferred to correct office | Rejected — wrong location | Signed locally, flagged later |
Problem 2: No Cross-Location Benchmarking
When each office reports its own numbers in its own format — or worse, when the firm only looks at aggregate numbers — you lose the ability to ask the most important performance question: why is Office A converting at 21% while Office B converts at 14%?
There are only a few possible explanations. The lead mix is different. The intake team is more skilled. The local attorneys accept a broader range of cases. The follow-up process is faster. Or the qualification criteria are applied differently. Each of these explanations requires a different response. But without cross-location benchmarking on the same metrics using the same definitions, you cannot isolate which factor is driving the gap.
This is especially damaging when it comes to cost per case by location. If Office A converts at a higher rate, their cost per signed case from every vendor is lower — even if they receive the same leads at the same price. A vendor that looks expensive at one office may look profitable at another, and the difference is entirely about intake execution.
The same vendor, sending the same quality of leads at $250 per lead, produces three wildly different cost-per-case outcomes depending on which office handles the intake. Without location-level benchmarking, you might blame the vendor. The real issue is the 14-point conversion gap between your best and worst offices.
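The cost-per-case math is worth seeing explicitly. A back-of-the-envelope sketch, using the $250-per-lead price and the conversion rates from the example above:

```python
COST_PER_LEAD = 250  # same vendor, same price per lead, per the example above

# Conversion rates by office, from the earlier comparison.
conversion_by_office = {"Office A": 0.21, "Office B": 0.14, "Office C": 0.07}

for office, rate in conversion_by_office.items():
    # Cost per signed case = cost per lead / conversion rate.
    cost_per_case = COST_PER_LEAD / rate
    print(f"{office}: ${cost_per_case:,.0f} per signed case")
# Office A ≈ $1,190, Office B ≈ $1,786, Office C ≈ $3,571 per signed case
```

Identical leads, identical price, and a threefold difference in cost per signed case, driven entirely by intake execution.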
Problem 3: Lead Routing That Ignores Capacity Differences
Multi-location firms typically route leads based on geography. A lead in the Phoenix market goes to the Phoenix office. A lead in the Dallas market goes to Dallas. This makes sense as a starting point, but it ignores a critical variable: capacity.
When one office is running at 90% of its intake capacity and another is at 50%, geography-only routing creates two problems. The overloaded office starts triaging — prioritizing leads that look like easy signs and letting marginal leads fall through the cracks. Speed-to-contact drops. Follow-up attempts decrease. Conversion rate declines not because the leads are worse, but because the team does not have bandwidth to work them properly.
Meanwhile, the underutilized office has intake specialists sitting with open capacity, unable to help because the leads are routed elsewhere. The firm is paying for intake capacity it is not using while simultaneously losing leads at the office that is over capacity.
Smarter routing requires two things: real-time visibility into each office's current workload, and a set of rules that allow overflow routing when one location hits its capacity threshold. This does not mean abandoning geographic routing — it means supplementing it with capacity awareness so that leads are never sitting unworked while intake specialists elsewhere have open slots.
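One way to sketch geography-first, capacity-aware routing is below. The `Office` fields, the 90% threshold, and the least-loaded fallback are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Office:
    name: str
    market: str
    active_leads: int   # leads the team is currently working
    capacity: int       # max leads the team can work well at once

    @property
    def utilization(self) -> float:
        return self.active_leads / self.capacity

OVERFLOW_THRESHOLD = 0.90  # assumed capacity threshold; tune to your team

def route_lead(lead_market: str, offices: list[Office]) -> Office:
    """Geographic routing first; overflow to the least-loaded office second."""
    home = next(o for o in offices if o.market == lead_market)
    if home.utilization < OVERFLOW_THRESHOLD:
        return home
    # Home office is at or over threshold: send to the least-loaded office.
    return min(offices, key=lambda o: o.utilization)

offices = [
    Office("Office A", "phoenix", active_leads=45, capacity=50),  # 90% — full
    Office("Office B", "dallas", active_leads=25, capacity=50),   # 50% — open
]
print(route_lead("phoenix", offices).name)  # overflow sends this lead to Office B
```

The design choice worth noting: geography still wins whenever the home office has bandwidth. Overflow routing only activates at the threshold, so local relationships and market knowledge stay the default.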
The Centralized-Standards Model
The solution is not to centralize intake into a single location. Most PI firms benefit from local intake teams who understand their market, their attorneys, and their referral relationships. The solution is to centralize the standards while keeping execution local.
This model has three components.
Component 1: Common Disposition Codes
Every office uses the same set of disposition codes with the same definitions. Not “similar” definitions — identical ones. A “rejected — does not meet criteria” disposition means the same thing in Phoenix as it does in Dallas. A “pending — awaiting medical records” status has the same escalation timeline everywhere.
This is the foundation. Without common codes, nothing else works. You cannot benchmark what you cannot compare. Build a disposition code dictionary — a one-page document that defines every code, gives examples, and specifies when each should be used. Train every intake specialist on it. Audit it quarterly.
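A disposition code dictionary can live in code as well as on paper. The sketch below is illustrative only; the codes, definitions, and escalation timelines are hypothetical placeholders for a firm's actual set:

```python
# Hypothetical shared disposition dictionary — one definition per code,
# enforced everywhere. The specific codes and wording here are assumptions.
DISPOSITION_CODES = {
    "REJ_CRITERIA": {
        "label": "Rejected — does not meet criteria",
        "definition": "Lead was reached and screened; fails a written case criterion.",
        "example": "Soft-tissue claim below the firm's minimum treatment threshold.",
    },
    "PEND_MEDICAL": {
        "label": "Pending — awaiting medical records",
        "definition": "Qualified pending documentation; escalate if idle 5 business days.",
        "example": "Claimant is treating but records have not been received.",
    },
    "NO_CONTACT": {
        "label": "Unreachable",
        "definition": "Used only after the firm-standard attempt count is logged.",
        "example": "Five calls and two texts over 72 hours, no response.",
    },
}

def validate_disposition(code: str) -> None:
    """Reject any disposition not in the shared dictionary — no local variants."""
    if code not in DISPOSITION_CODES:
        raise ValueError(f"Unknown disposition code: {code!r}")
```

Enforcing the dictionary at data-entry time, rather than auditing after the fact, is what keeps Office B from quietly inventing a "pending — needs more information" variant that means something different.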
Component 2: Shared Performance Benchmarks
Once your disposition codes are standardized, establish firm-wide benchmarks for the metrics that matter. These are not targets imposed from above — they are reference points that allow each office to see where they stand relative to the firm average and relative to each other.
The benchmarks that drive the most useful cross-location conversations:
- Conversion rate by lead source — are all offices converting the same vendor's leads at similar rates?
- Speed to first contact — measured in minutes, not hours. The goal is under five minutes for every office.
- Contact attempt count before disposition — how many attempts does each office make before marking a lead as unreachable?
- Rejection rate by reason code — are certain offices rejecting more leads for specific reasons?
- Time from first contact to signed retainer — how long does the full intake cycle take at each location?
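Once dispositions are standardized, these benchmarks reduce to simple aggregations over the same records. A minimal sketch, assuming hypothetical lead records (the field names are illustrative, not a real CRM schema):

```python
from statistics import mean

# Hypothetical lead records with standardized fields — illustrative data only.
leads = [
    {"office": "A", "source": "VendorX", "signed": True,
     "minutes_to_first_contact": 3.1, "attempts": 2, "days_to_retainer": 4},
    {"office": "A", "source": "VendorX", "signed": False,
     "minutes_to_first_contact": 4.8, "attempts": 5, "days_to_retainer": None},
    {"office": "B", "source": "VendorX", "signed": True,
     "minutes_to_first_contact": 9.2, "attempts": 3, "days_to_retainer": 7},
]

def office_benchmarks(office: str) -> dict:
    """Compute per-office benchmark metrics from standardized lead records."""
    rows = [l for l in leads if l["office"] == office]
    signed = [l for l in rows if l["signed"]]
    return {
        "conversion_rate": len(signed) / len(rows),
        "avg_minutes_to_first_contact": mean(l["minutes_to_first_contact"] for l in rows),
        "avg_attempts_before_disposition": mean(l["attempts"] for l in rows),
        "avg_days_to_retainer": mean(l["days_to_retainer"] for l in signed),
    }

print(office_benchmarks("A"))
```

The point of the sketch is that none of this is hard analytics. The hard part is upstream: getting every office to log the same fields the same way so the aggregation is trustworthy.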
Component 3: Unified Weekly Review
The benchmarks only matter if someone is reviewing them regularly. A weekly cross-location intake review — 30 minutes, focused on outliers — is what turns data into action.
The format is simple. Pull the five benchmark metrics for each office. Identify any metric where a location is more than two standard deviations from the firm average. Discuss those outliers. That is the meeting. No lengthy presentations, no reviewing every number. Just the outliers.
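The two-standard-deviation flag is straightforward to automate. A minimal sketch, comparing each office's current week against a trailing firm-wide history for one metric (all numbers are illustrative):

```python
from statistics import mean, stdev

# Hypothetical trailing history: speed-to-contact (minutes) across all
# offices over recent weeks. Illustrative values only.
history = [3.2, 3.5, 2.9, 3.8, 3.1, 3.4, 3.0, 3.6, 3.3, 3.2, 3.7, 2.8]

# Current week's value per office (hypothetical).
current = {"Office A": 3.4, "Office B": 3.1, "Office C": 8.7}

def flag_outliers(current: dict, history: list, n_sigma: float = 2.0) -> list:
    """Flag offices whose current value is > n_sigma SDs from the trailing mean."""
    mu, sigma = mean(history), stdev(history)
    return [name for name, v in current.items() if abs(v - mu) > n_sigma * sigma]

print(flag_outliers(current, history))  # ['Office C']
```

One design note: the comparison uses a trailing history rather than just this week's three office values, because with only three data points nothing can ever sit more than about 1.4 standard deviations from their own mean, so a same-week-only comparison would never flag anything.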
The Weekly Review That Surfaces Outliers Without Micromanaging
The biggest risk with cross-location oversight is that it becomes micromanagement. Local intake managers need autonomy to handle their market, their team, and their day-to-day judgment calls. The weekly review should not second-guess every disposition or challenge every rejection. It should surface patterns that warrant a closer look.
Here is what a productive weekly review looks like in practice:
Week 1: The data shows Office C's speed-to-contact has slipped from 3.2 minutes to 8.7 minutes over the past two weeks. The intake manager explains that they lost a team member and are covering with one fewer specialist. The action item is a temporary routing adjustment to send overflow leads to Office A during peak hours until the position is filled.
Week 2: Office B's rejection rate for "insufficient treatment" is 34%, compared to 18% and 21% at the other offices. The team reviews a sample of rejected leads and finds that Office B is applying a stricter treatment threshold than the firm standard specifies. The action item is a calibration session with the intake team and the managing attorney to align on criteria.
Week 3: All offices are within normal ranges. The meeting takes eight minutes. Everyone moves on.
That is the rhythm. Most weeks, the meeting is short because nothing is flagged. When something is flagged, it is specific, data-backed, and actionable. The local intake manager is not being told how to do their job — they are being shown a data point and asked to explain it. Sometimes the explanation is perfectly valid. Sometimes it reveals a gap that needs attention.
Without Centralized Standards
- Each office defines 'qualified' differently
- Firm-wide conversion rate hides location-level gaps
- Vendor performance varies by office with no explanation
- Lead routing ignores capacity — some offices overloaded, others idle
- Problems surface months later when case pipeline thins
With Centralized Standards + Local Execution
- Common disposition codes make benchmarking possible
- Location-level metrics surface gaps within one week
- Vendor performance differences traced to intake execution vs. lead quality
- Capacity-aware routing keeps speed-to-contact under 5 minutes everywhere
- Weekly outlier review catches issues before they affect case volume
How Connected Data Makes Multi-Location Intake Manageable
Everything described above depends on one thing: connected data. When each office's intake data lives in a separate spreadsheet, a separate tab, or a separate CRM instance, cross-location benchmarking is a manual exercise that takes hours every week. Most firms that attempt it eventually give up — not because the analysis is not valuable, but because the data preparation is too time-consuming.
A revenue intelligence platform that ingests intake data from all locations into a single view changes the economics of this work. Instead of spending 10 hours a week pulling data from three systems and normalizing it into a comparison format, you open a dashboard that already shows conversion rate, speed to contact, rejection rate, and cost per case by location — updated in real time.
The weekly review becomes a 15-minute meeting because the data is already prepared. The outliers are already flagged. The trends are already visible. The conversation shifts from “let me pull the numbers” to “here is what the numbers are telling us this week.”
- Manual multi-location reporting: 10+ hrs/week — pulling, normalizing, and comparing data from each office
- Connected platform reporting: 15 min/week — pre-built location benchmarks with automated outlier flags
Multi-location intake management is not about controlling every office from headquarters. It is about making sure every office is playing the same game with the same rules and the same scoreboard. When the standards are shared and the data is connected, local teams can execute with autonomy while the firm maintains visibility into what is working, where, and why.
The firms that figure this out do not just improve intake performance at their weakest office. They improve it everywhere — because every location can see what good looks like, measured the same way, every week.
Related guide: See our complete guide to multi-location PI firm marketing — attribution challenges, vendor management across markets, and building a multi-location dashboard.
Related guide: See our complete guide to PI intake performance — the 8 metrics every PI firm should track, benchmarks, and how to connect intake data to marketing attribution.
Related guide: This post is part of our pillar on Revenue Intelligence for Personal Injury Law Firms — start there for the full framework, including the Three Enemies of Revenue Intelligence and the full enrichment stack.
