Revenue Intelligence · 5 min read · 2026-01-03

How to Evaluate Whether Your PI Marketing Agency Is Actually Performing

PI marketing agencies are good at showing you data. The question worth asking is whether the data they're showing you is the data that matters. Impressions, click-through rates, cost per click, lead volume — all of that is real. None of it tells you whether your agency is producing signed cases at a cost your firm can sustain.

Evaluating agency performance is one of the most important and least disciplined activities in PI marketing. This article gives you a framework — specific, data-driven, and grounded in business outcomes — for determining whether your agency is performing or merely presenting.

The Core Problem With Agency Reporting

Most agencies report on the metrics they control. They control ad spend, creative, targeting, and media placement. They measure impressions, reach, clicks, and cost per lead. These are legitimate inputs — but they are inputs, not outcomes.

Your agency has no visibility into what happens after the lead arrives at your intake desk unless you give it to them. They don't know your conversion rates, your rejection reasons, or which of their leads signed retainers. So their reports, by default, optimize for the metrics they can measure — not the metrics that determine whether the relationship is profitable for your firm.

The evaluation framework that follows changes that dynamic. You bring the downstream data. Combined with what your agency provides, you get a complete picture.

The Four Questions That Matter

Question 1: What Is My Cost Per Signed Case From This Agency?

This is the first and most important question. Take every dollar you paid the agency — management fees, ad spend, retainer, production costs — and divide it by the number of cases you signed that originated from their channels. That number is your cost per signed case.

Compare it to your benchmark. If your firm's target cost per case for auto accident leads is $1,800 and your agency is producing cases at $3,200, that gap needs an explanation — and a plan.
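The all-in arithmetic above can be sketched in a few lines of Python. The fee and case figures below are illustrative, chosen to reproduce the $3,200-vs-$1,800 example:

```python
def cost_per_signed_case(management_fees, ad_spend, retainer,
                         production_costs, signed_cases):
    """Every dollar paid to the agency divided by signed cases
    that originated from their channels. Figures are illustrative."""
    total_cost = management_fees + ad_spend + retainer + production_costs
    if signed_cases == 0:
        return float("inf")  # spend with no signed cases to show for it
    return total_cost / signed_cases

cpc = cost_per_signed_case(
    management_fees=6_000, ad_spend=40_000, retainer=0,
    production_costs=2_000, signed_cases=15,
)
target = 1_800  # firm's target cost per case for auto accident leads
print(f"cost per signed case: ${cpc:,.0f}")           # $3,200
print(f"gap vs. target: {(cpc - target) / target:+.0%}")  # +78%
```

The point of writing it down this explicitly is that every input is something you already have: the fees are on your invoices, and the signed-case count comes from your own intake records.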

If your agency can't help you calculate this number, that is itself diagnostic. Either they don't have the downstream data (which is a process problem you can fix by sharing intake data with them) or they don't use cost per signed case as a success metric (which tells you something important about how they're evaluating their own work).

Question 2: Is Cost Per Case Trending Up, Down, or Flat?

A single month of cost per case data is a snapshot. Six months of monthly data is a trend. Trends tell you whether performance is improving, deteriorating, or holding steady.

An agency producing cases at $2,100 this month but trending up from $1,600 six months ago has a performance problem that the current month alone doesn't reveal. An agency producing cases at $2,100 this month but trending down from $2,800 six months ago is improving — even if the current number is higher than a competitor.

Require your agency to show you 6-month trending data on cost per case in every monthly review. If they can't produce it, build it yourself from your intake records.
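One way to operationalize the trend check from your own intake records is to compare the most recent three months against the first three. A minimal sketch, with an assumed 10% band for calling a trend flat (the band is illustrative, not a standard):

```python
def cpc_trend(monthly_cpc, band=0.10):
    """Classify a six-month cost-per-case series. Compares the average
    of the last 3 months to the first 3; `band` is the illustrative
    threshold inside which the trend counts as flat."""
    early = sum(monthly_cpc[:3]) / 3
    late = sum(monthly_cpc[-3:]) / 3
    change = (late - early) / early
    if change > band:
        return "deteriorating"
    if change < -band:
        return "improving"
    return "flat"

print(cpc_trend([1600, 1700, 1850, 1900, 2000, 2100]))  # deteriorating
print(cpc_trend([2800, 2600, 2450, 2300, 2200, 2100]))  # improving
```

The second series ends at a higher absolute number than many firms would like, but the function correctly labels it improving — which is exactly the snapshot-versus-trend distinction above.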

Question 3: How Does Lead Quality Compare Across Channels?

Most agencies manage multiple channels for you — paid search, social, LSA, display. Each channel produces different lead quality. An agency that reports only on aggregate metrics may be averaging strong search performance against weak social performance in a way that obscures both.

Ask your agency to break down cost per lead and — with your intake data — conversion rate by channel. This tells you where their performance is actually coming from. It also reveals whether they're allocating spend toward your strongest-performing channels or spreading budget in ways that maximize their management efficiency rather than your case volume.
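The averaging problem is easy to see with numbers. A sketch that combines agency-reported spend with your own signed-case counts — channel names and figures are illustrative:

```python
# Per-channel cost per signed case, combining agency-reported spend
# with the firm's own intake data. All figures are illustrative.
spend = {"paid_search": 24_000, "social": 10_000, "lsa": 8_000}
signed_cases = {"paid_search": 10, "social": 2, "lsa": 5}

for channel in spend:
    cpc = spend[channel] / signed_cases[channel]
    print(f"{channel:12s} ${cpc:,.0f} per signed case")

aggregate = sum(spend.values()) / sum(signed_cases.values())
print(f"{'aggregate':12s} ${aggregate:,.0f} per signed case")
```

In this illustration the aggregate number looks tolerable while social is producing cases at more than twice the cost of search — the exact situation that aggregate-only reporting hides.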

Question 4: What Changed and Why?

When performance shifts — either up or down — your agency should be able to explain it. A 20% drop in case volume is meaningful. What your agency says when you ask about it is equally meaningful.

A strong agency has a hypothesis: “We saw a 22% drop in lead volume in March. Search impression share declined because two new competitors entered your target keywords. We recommend increasing bid caps on your top-converting terms and running a creative refresh on social. Here's what we expect to happen in 30 days.”

A weak agency deflects: “March was tough for the whole PI market. We're monitoring it.”

Attribution of change — with a specific action plan and a timeframe — is the difference between a vendor and a partner.

Agency Scorecard Dimensions

| Dimension | Weight | Score 5 (Best) | Score 1 (Worst) |
| --- | --- | --- | --- |
| Cost Per Case vs. Target | 30% | >10% below target | >20% above target |
| Conversion Rate Trend | 25% | Improving 90-day trend | Declining 2+ months |
| Reporting Transparency | 20% | Full channel-level, proactive | Aggregate only, reactive |
| Strategic Value | 15% | Consistent new ideas | Execution only |
| Communication Quality | 10% | Fast, substantive | Slow, generic |

Building Your Agency Scorecard

Evaluate your agency on five dimensions monthly. Score each one on a 1-5 scale and track the composite score over time.

1. Cost Per Signed Case vs. Target

Score 5 if cost per case is more than 10% below your target. Score 3 if it's within 10% of target. Score 1 if it's more than 20% above target. This dimension carries the most weight: a suggested 30% of the composite.

2. Conversion Rate Trend

Score based on the 90-day trend in lead-to-case conversion from agency channels. Improving trend scores 5. Flat trend scores 3. Declining for two or more consecutive months scores 1. Weight this 25%.

3. Reporting Transparency

Does the agency provide channel-level breakdowns? Do they proactively share performance problems or wait for you to find them? Can they produce data in a format that integrates with your intake tracking? Score 5 for full transparency and proactive communication, 1 for aggregate-only reporting and reactive communication. Weight 20%.

4. Strategic Value

Is your agency bringing you new ideas? Are they testing new channels, new creative approaches, new targeting strategies? Are their recommendations connected to your business goals or just to platform metrics? Score 5 for consistent proactive strategy input, 1 for execution-only with no strategic contribution. Weight 15%.

5. Response and Communication Quality

When you have a question or a problem, how fast does the agency respond? Are the answers substantive or generic? Does the account manager understand your firm's specific situation? Score 5 for consistent high-quality communication, 1 for slow responses and generic answers. Weight 10%.
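The composite is then a weighted average of the five scores. A sketch using the weights above, with an illustrative month of scores:

```python
# Weighted composite of the five scorecard dimensions (each scored 1-5).
# Weights follow the article: 30/25/20/15/10.
WEIGHTS = {
    "cost_per_case_vs_target": 0.30,
    "conversion_rate_trend": 0.25,
    "reporting_transparency": 0.20,
    "strategic_value": 0.15,
    "communication_quality": 0.10,
}

def composite_score(scores):
    """Weighted average of dimension scores; expects one score per dimension."""
    assert scores.keys() == WEIGHTS.keys(), "score every dimension"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

march = {  # illustrative monthly scores
    "cost_per_case_vs_target": 2,
    "conversion_rate_trend": 3,
    "reporting_transparency": 4,
    "strategic_value": 3,
    "communication_quality": 5,
}
print(round(composite_score(march), 2))  # 3.1
```

Logging this one number monthly gives you the trend line the later thresholds in this article depend on.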

Agency Evaluation Process

1. Calculate cost per case: all-in agency cost ÷ signed cases
2. Track the 6-month trend: is cost per case improving or deteriorating?
3. Break down by channel: separate search, social, and LSA performance
4. Score and decide: the monthly scorecard drives budget decisions

The Benchmark Conversation

Before firing or replacing an underperforming agency, have the benchmark conversation. Share your cost-per-case data and your targets. Confirm that your agency understands what you're optimizing for and has the downstream data they need to adjust strategy.

Many agency relationships underperform because the agency is optimizing for the wrong metrics — not because the agency is incapable. If you've never told your agency what your target cost per signed case is, and you've never shared your intake conversion data, you haven't given them what they need to hit your targets.

Give the agency 60 days with your actual performance targets and your downstream data. If cost per case doesn't improve meaningfully after that, the scorecard gives you a defensible case for making a change.

When the Data Says to Make a Change

Some specific thresholds worth paying attention to:

  • Cost per signed case more than 40% above target for three consecutive months, despite defined performance targets and access to intake data.
  • Conversion rate declining for four or more consecutive months without a credible explanation and action plan from the agency.
  • Inability to produce channel-level performance data after being asked for it explicitly.
  • Scorecard composite below 2.5 for three consecutive months.

None of these are automatic triggers for termination — context always matters. A major algorithm change or a new competitor entering your market creates headwinds that no agency can fully mitigate in 90 days. But these thresholds should prompt a structured review and a specific improvement plan with defined milestones.

The firms that get the most from their agency relationships are the ones that bring clear data, clear expectations, and regular structured reviews. Agencies perform better when they know exactly what you're measuring. That's true whether the agency is outstanding or underperforming.

Related guide: See our complete guide to evaluating a PI marketing agency — 7 evaluation criteria, red flags to watch for, and how to hold agencies accountable with data.

Related guide: See our complete guide to revenue intelligence for PI firms — the four layers, the maturity model, and what RI replaces in your current stack.

See it in action

Discover how RevenueScale tracks cost per case from click to settlement.

Book a Demo
