Performance Intelligence · 8 min read · 2026-02-09

How to Compare Marketing Performance Across Office Locations Without Distorting the Data

Comparing marketing performance across office locations seems straightforward — until you actually try to do it. Most PI firms that attempt a fair location comparison quickly discover that the numbers are not telling them what they think. One location looks dramatically better or worse than the others, but the reasons are buried in data quality issues, attribution gaps, and inconsistent definitions rather than actual performance differences.

This article covers how to run a location performance comparison that is actually fair — meaning the differences it surfaces reflect real performance variation, not measurement artifacts.

The Most Common Ways Location Comparisons Get Distorted

Different vendor mixes

If Location A sources 60% of its leads from high-performing exclusive vendors and Location B sources 60% from shared lead aggregators, the cost-per-case difference between the two locations reflects vendor portfolio decisions — not intake performance or market characteristics. Comparing the two without controlling for vendor mix is like comparing two sales reps when one is working warm referrals and the other is cold calling.

Different market competitive environments

A location in a high-competition urban market will naturally have higher cost per lead from most vendors than a location in a less saturated suburban market. This is structural, not operational. When Location 2 shows a 40% higher cost per case than Location 1, the first question is whether the vendor prices are simply higher in that market before assuming intake or conversion is the issue.

Inconsistent lead definitions

If Location A counts all inquiries as leads and Location B only counts inquiries that pass initial qualification, Location B will appear to have a dramatically higher conversion rate even if the two offices are converting at identical rates among qualified prospects. This definitional mismatch is one of the most common distortions in multi-location reporting — and it is invisible unless you specifically check for it.
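The arithmetic behind this distortion is worth seeing directly. A minimal sketch with made-up numbers, assuming both offices sign the same number of cases from the same underlying funnel:

```python
# Hypothetical toy numbers: both offices convert qualified prospects at the
# same underlying rate, but count "leads" differently at intake.
inquiries = 200   # total inquiries per office
qualified = 120   # inquiries that pass initial qualification
signed = 30       # signed cases (identical at both offices)

# Location A counts every inquiry as a lead.
conv_a = signed / inquiries   # 0.15, reported as 15%

# Location B counts only qualified inquiries as leads.
conv_b = signed / qualified   # 0.25, reported as 25%

print(f"Location A reported conversion: {conv_a:.0%}")
print(f"Location B reported conversion: {conv_b:.0%}")
```

Identical performance, a ten-point gap on the dashboard. The only fix is a single firm-wide definition of "lead," enforced at the point of data entry.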

Shared spend coded to one location

A vendor invoice booked entirely to Location 1 because that is where the marketing director sits — even though the vendor delivers leads to all three offices — produces a wildly distorted cost per case for Location 1 and zero cost for the others. This is surprisingly common in firms that never built location-specific spend allocation into their financial systems.

How to Build a Fair Comparison Framework

A fair multi-location comparison requires four things to be true simultaneously. Check all four before drawing any conclusions from the numbers.

1. Consistent lead definitions firm-wide

Document and enforce a single definition of what counts as a lead, a qualified lead, a signed case, and a rejection across every location. Build these definitions into required CRM fields with validation rules so intake specialists cannot skip them or enter inconsistent values. Audit the definitions quarterly — interpretations drift over time, especially as intake teams turn over.

2. Location-tagged lead records from the moment of intake

Every lead needs to carry a location tag that persists through the entire lifecycle — from initial contact through signed case and eventually to settlement. This tag is the anchor for all location-level reporting. If your CRM does not currently capture this, adding it as a required field at intake is the first infrastructure project.

3. Vendor spend allocated by market

Allocate every vendor invoice to the location or locations it serves. If a vendor serves multiple locations under one contract, use lead delivery volume as the allocation key and document the methodology. Apply it consistently each month so the cost per case calculation for each location reflects actual market-level spend, not accounting convenience.
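The allocation itself is simple once the methodology is documented. A minimal sketch of volume-weighted allocation, using hypothetical invoice and lead-count figures:

```python
# Illustrative only: a single $9,000 invoice for a vendor that serves three
# offices, allocated by each office's share of leads delivered that month.
invoice_total = 9000.00
leads_delivered = {"Location 1": 50, "Location 2": 30, "Location 3": 20}

total_leads = sum(leads_delivered.values())
allocated = {
    office: round(invoice_total * n / total_leads, 2)
    for office, n in leads_delivered.items()
}
# allocated -> {"Location 1": 4500.0, "Location 2": 2700.0, "Location 3": 1800.0}
```

Whatever the allocation key, the point is consistency: the same formula, applied every month, documented where the finance team can see it.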

4. Vendor-mix-adjusted benchmarks

When comparing cost per case across locations, note the vendor portfolio composition for each. If the vendor mixes differ significantly, run the comparison within vendor categories — compare cost per case from shared leads across all locations, and compare cost per case from exclusive leads across all locations. Apples-to-apples comparisons require controlling for the inputs, not just measuring the outputs.
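In practice this means computing cost per case within each (location, vendor category) cell rather than per location alone. A rough sketch with hypothetical records:

```python
from collections import defaultdict

# Hypothetical monthly rollup: spend and signed cases tagged with both
# location and vendor category.
cases = [
    {"location": "Location 1", "category": "exclusive", "spend": 12000, "signed": 10},
    {"location": "Location 1", "category": "shared",    "spend": 6000,  "signed": 4},
    {"location": "Location 2", "category": "exclusive", "spend": 13000, "signed": 10},
    {"location": "Location 2", "category": "shared",    "spend": 9000,  "signed": 4},
]

spend = defaultdict(float)
signed = defaultdict(int)
for row in cases:
    key = (row["location"], row["category"])
    spend[key] += row["spend"]
    signed[key] += row["signed"]

# Cost per case within each (location, vendor category) cell.
cost_per_case = {k: spend[k] / signed[k] for k in spend}
```

Comparing Location 1's exclusive-lead cost per case against Location 2's exclusive-lead cost per case is a fair fight; comparing blended totals across different vendor mixes is not.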

[Chart: Cost Per Case by Location — Before Vendor Mix Adjustment]

The Metrics That Are Fair to Compare Across Locations

Once your data quality is consistent, these metrics make for reliable location comparisons:

  • Intake conversion rate by lead source category. Compare conversion rates within the same vendor type — not across different vendor mixes. This measures intake team performance independent of lead quality inputs.
  • Rejection rate by lead source category. High rejection rates at one location from a vendor with low rejection rates at another may indicate geographic mismatch — the vendor's coverage in that market produces lower-quality leads than in others.
  • Time from lead to signed case. Speed to sign is an intake performance metric that is more comparable across locations than cost per case because it is less affected by vendor pricing differences between markets.
  • Cost per signed case from exclusive lead sources. Exclusive leads tend to have more consistent pricing across markets. Comparing cost per case from exclusive sources gives you the cleanest view of market-level performance differences.

What to Do When One Location Looks Worse

When a location shows materially worse performance than your others — and you have confirmed the comparison methodology is fair — there are three explanations worth investigating before drawing conclusions.

  1. Vendor portfolio mismatch. The location may be relying on vendors who underperform in that specific market. Adjust the vendor mix before attributing the problem to intake or management.
  2. Intake process gap. If the vendor mix is comparable and the market is not demonstrably more competitive, look at intake conversion rates by time of day, day of week, and intake specialist. Process gaps show up in the data before they surface in reviews.
  3. Market-level competitive dynamics. Some markets are simply more expensive to acquire cases in. If cost per lead from identical vendors is 30% higher in Location 3 than in other markets, that may be the primary driver. The right response is adjusting cost-per-case targets for that market — not treating the location as underperforming against an inapplicable benchmark.
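For the third case, the target adjustment is mechanical once the market cost difference is measured. A hedged sketch, with an assumed firm-wide target and cost-per-lead figures chosen for illustration:

```python
# Hypothetical: scale the firm-wide cost-per-case target by the ratio of
# local to baseline cost per lead, measured from identical vendors.
firmwide_target = 2000.00   # assumed firm-wide cost-per-case target
baseline_cpl = 100.00       # cost per lead in the reference market
local_cpl = 130.00          # same vendors, 30% higher in Location 3

adjusted_target = firmwide_target * (local_cpl / baseline_cpl)
# Location 3 is now judged against a $2,600 target, not $2,000.
```

The location is held to the same efficiency standard; only the price of the inputs changes.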

Fair Multi-Location Comparison Checklist

  • Standardize definitions: same lead/case criteria firm-wide
  • Tag by location: location field from intake to settlement
  • Allocate spend: vendor invoices by market served
  • Control for vendor mix: compare within vendor categories

Building the Habit of Fair Comparison

The firms that run the most productive location reviews are the ones that have internalized the methodology questions as part of the process — not as a one-time calibration. Before every monthly location performance review, confirm that your vendor mix assumptions are documented, your definitions are current, and your spend allocation methodology has been applied consistently.

That discipline separates firms that make good multi-location decisions from firms that chase measurement artifacts while the real performance gaps go unaddressed.


RevenueScale gives multi-location PI firms the infrastructure to run fair, consistent performance comparisons across offices — with vendor spend allocation, standardized metrics, and location-level reporting built in. Book a demo to see it in action.

Related guide: See our complete guide to multi-location PI firm marketing — attribution challenges, vendor management across markets, and building a multi-location dashboard.
