Source Intelligence · 4 min read · 2026-04-05


How to Evaluate a Lead Vendor's AI Claims: Most Are Overstated

Your lead vendors have discovered a new favorite word: AI. It is in their pitch decks. It is on their websites. It is in every quarterly review where they explain why their costs went up but your results stayed flat. “We've invested in AI-powered lead quality optimization,” they say, as if that sentence explains anything.

Some of these claims are real. A few vendors are genuinely using machine learning to improve targeting, filter out low-quality leads, or optimize delivery timing. But most of what you are hearing is marketing language wrapped around basic automation, rule-based filters, or nothing at all.

The problem is that you, as a PI marketing director managing $100,000 to $750,000 per month across five or more vendors, do not have time to earn a computer science degree to sort the real from the fake. What you need is a practical evaluation framework — a set of questions that separate vendors who are actually using AI to deliver better results from vendors who added “AI-powered” to their homepage and called it innovation.

Here are five questions that will tell you everything you need to know.

Question 1: What Data Trains the Model?

Every legitimate AI system learns from data. The quality of the AI is directly limited by the quality and relevance of the data it trains on. When a vendor claims their AI improves lead quality, the first question is: what data is the model learning from?

There are several possibilities, and they are not equally valuable:

  • Their own delivery data — click rates, form completion rates, phone connection rates. This is the most common and least useful for your purposes. It tells the AI which leads are likely to connect, not which leads are likely to become signed cases.
  • Aggregated client outcome data — conversion rates, signed case rates, or quality scores reported back by multiple clients. This is better, but raises questions about how representative the aggregate is for your firm specifically.
  • Your firm's specific outcome data — the AI learns from what happens to leads after your intake team processes them. This is the gold standard, but very few vendors actually do this because it requires a data feedback loop most vendors do not have.

If a vendor cannot clearly explain what data their AI trains on, that is your first red flag. “Proprietary algorithms” is not an answer. “Our AI learns from billions of data points” is not an answer. The volume of data is irrelevant if the data does not connect to the outcome you care about — signed cases and settlements.

What to listen for: A credible vendor will be able to describe specifically what signals the model uses, how often it retrains, and what outcome variable it optimizes toward. If the optimization target is clicks or form fills rather than downstream case outcomes, the AI is optimizing for the wrong thing.

Question 2: How Is “Quality” Defined?

When a vendor says their AI improves lead quality, the next question is deceptively simple: what do you mean by quality?

This matters because “quality” in lead generation is not a universal standard. Different definitions produce very different outcomes:

  • A lead that answers the phone — this is the lowest bar, and some vendors define quality this way
  • A lead that matches basic criteria — right geography, right case type, valid contact information
  • A lead that your intake team qualifies — the person has a legitimate injury, was not at fault, and is interested in representation
  • A lead that becomes a signed case — the only definition that connects to revenue
  • A lead that becomes a case that settles — the definition that connects to actual money in your account

A vendor whose AI optimizes for “leads that answer the phone” is solving a different problem than a vendor whose AI optimizes for “leads that become signed cases.” Both might use the word “quality.” They are not talking about the same thing.
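One way to make the gap concrete is to treat the ladder above as an ordered scale. The sketch below is a hypothetical illustration, not any vendor's actual schema; the names and the gap function are invented for this post. The useful property is simply that the definitions are ordered, and a vendor optimizing two or three rungs below yours is solving a cheaper problem.

```python
# The "quality" ladder above, expressed as an ordered scale. Hypothetical
# illustration only: the point is that the definitions are ranked.
from enum import IntEnum

class LeadQuality(IntEnum):
    ANSWERED_PHONE = 1    # the lowest bar
    MATCHED_CRITERIA = 2  # right geography, right case type, valid contact info
    INTAKE_QUALIFIED = 3  # legitimate injury, not at fault, wants representation
    SIGNED_CASE = 4       # the only definition that connects to revenue
    SETTLED_CASE = 5      # the definition that connects to money in your account

def optimization_gap(vendor_target: LeadQuality, firm_target: LeadQuality) -> int:
    """How many rungs below your definition the vendor's AI is optimizing."""
    return int(firm_target) - int(vendor_target)

# A vendor optimizing for answered phones is three rungs short of signed cases.
print(optimization_gap(LeadQuality.ANSWERED_PHONE, LeadQuality.SIGNED_CASE))  # 3
```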

What to listen for: Ask the vendor to define quality in specific, measurable terms. Then ask how their AI's definition of quality maps to your cost per signed case. If there is a gap between what their AI optimizes for and what your firm measures as success, the AI is not solving your problem.

Question 3: What Is the Feedback Loop?

AI without a feedback loop is just a static filter. The value of machine learning is that it gets better over time as it receives new data about what worked and what did not. The question is whether the vendor's system actually has a mechanism to learn and improve.

A real feedback loop looks like this: the vendor sends leads, your firm processes them, outcome data flows back to the vendor (either automatically or through regular reporting), and the model adjusts its targeting or filtering based on that outcome data. Over time, the leads get better because the model is learning what a good lead looks like for your firm specifically.
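If it helps to picture the mechanics, here is a minimal sketch of that loop. Everything in it is hypothetical: the class names, the campaign labels, and the toy retraining logic are stand-ins, not any vendor's real system. What matters is the shape: outcome data flows in, and the targeting weights change.

```python
# Minimal sketch of an operational feedback loop. All names here are
# hypothetical illustrations, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class LeadOutcome:
    lead_id: str
    source_campaign: str
    signed: bool  # did the lead become a signed case?

class VendorModel:
    """Toy stand-in for a vendor-side targeting model."""
    def __init__(self):
        self.sign_rate_by_campaign: dict[str, float] = {}

    def retrain(self, outcomes: list[LeadOutcome]) -> None:
        # Aggregate firm-reported outcomes into per-campaign sign rates,
        # then shift delivery toward campaigns that actually sign cases.
        totals: dict[str, list[int]] = {}
        for o in outcomes:
            signed, count = totals.setdefault(o.source_campaign, [0, 0])
            totals[o.source_campaign] = [signed + o.signed, count + 1]
        self.sign_rate_by_campaign = {
            c: signed / count for c, (signed, count) in totals.items()
        }

# Each reporting period, outcomes flow back and the weights update.
# Without this step, the "AI" is static.
model = VendorModel()
weekly_outcomes = [
    LeadOutcome("L-101", "search-brand", signed=True),
    LeadOutcome("L-102", "social-broad", signed=False),
    LeadOutcome("L-103", "search-brand", signed=True),
]
model.retrain(weekly_outcomes)
print(model.sign_rate_by_campaign)  # {'search-brand': 1.0, 'social-broad': 0.0}
```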

What most vendors actually have is no feedback loop at all. They send leads. You process them. They have no idea what happened. Their “AI” operates on the same parameters it launched with six months ago because it has no new information to learn from.

Some vendors have a partial feedback loop — they ask you to report back on lead quality through a portal or a monthly spreadsheet. This is better than nothing, but it is manual, inconsistent, and typically limited to binary outcomes (good lead or bad lead) rather than the nuanced data that makes machine learning genuinely useful.

What to listen for: Ask the vendor how outcome data flows back into their model. Ask how often the model updates. Ask for an example of a specific change the model made based on feedback from a client like you. If they cannot provide a concrete example, the feedback loop is theoretical, not operational.

Question 4: Can You Verify Results Independently?

This is the question that separates vendors who believe in their AI from vendors who are using “AI” as a reason you should stop asking hard questions about performance.

If a vendor's AI is genuinely improving lead quality, that improvement should be verifiable in your own data. Your conversion rates from that vendor should be trending upward. Your cost per signed case from that vendor should be trending downward. Your rejection rates should be declining. These are all metrics you can measure independently, without relying on the vendor's self-reported numbers.
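This verification does not require anything fancy. A few lines against your own exported spend and intake data will do it; the vendors, numbers, and layout below are invented for illustration.

```python
# Independent verification, sketched. Assumes you can export spend and
# signed-case counts per vendor from your own CRM or intake system.
monthly = [
    # (vendor, spend in dollars, leads delivered, signed cases)
    ("Vendor A", 40_000, 500, 16),
    ("Vendor B", 25_000, 900, 5),
]

for vendor, spend, leads, signed in monthly:
    conversion = signed / leads
    cost_per_case = spend / signed if signed else float("inf")
    print(f"{vendor}: {conversion:.1%} lead-to-signed-case, "
          f"${cost_per_case:,.0f} per signed case")

# Vendor B delivers more leads; Vendor A costs half as much per signed case.
# That is the comparison a dashboard built on click metrics will not show you.
```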

A vendor with a genuinely effective AI will welcome independent verification. They will encourage you to track outcomes by source. They will want to see your data because it feeds their model.

A vendor whose “AI” is mostly marketing will resist independent verification. They will point to their own dashboards and reports. They will show you metrics that look positive but do not connect to your cost per case. They will explain away discrepancies between their reported improvements and your actual results.

What to listen for: Tell the vendor you plan to track their lead quality independently using cost per signed case as the primary metric. Watch their reaction. A confident vendor says “great, let's compare notes.” A vendor selling AI-as-marketing says “our internal metrics show a different picture.”

Question 5: What Happens When the AI Is Wrong?

Every AI system makes mistakes. Models degrade over time as market conditions change. Training data becomes stale. Edge cases produce unexpected outputs. The question is not whether the vendor's AI will ever underperform. The question is what happens when it does.

A vendor with a mature AI system will have monitoring in place to detect when the model's performance degrades. They will have a process for retraining or adjusting. They will be able to describe a specific instance where their model underperformed and what they did about it.

A vendor whose AI claims are overstated will not have answers to these questions, because there is no real model to monitor. The “AI” is a set of static rules branded as intelligence. When those rules stop working, the vendor does not even know it until clients start complaining about lead quality.

What to listen for: Ask for a specific example of when the AI did not perform as expected and what the vendor did about it. A genuine answer sounds like: “Last quarter, our model started over-indexing on a particular demographic segment, and conversion rates dropped for clients in your market. We identified it within two weeks, retrained on updated data, and performance recovered within a month.” An evasive answer sounds like: “Our AI is continuously optimizing.”

The Core Insight: If You Cannot Validate It, It Is Marketing

Here is the principle that ties all five questions together: if a vendor's AI claims cannot be validated against your own cost per case data, those claims are marketing, not technology.

This is not cynicism. It is pragmatism. The entire point of using AI in lead generation is to produce better outcomes — more signed cases, lower cost per case, higher case quality. If those outcomes are not showing up in your independent tracking, the AI is not working for you, regardless of how sophisticated the vendor's explanation sounds.

The vendors who are genuinely using AI well will be the easiest to evaluate, because their results will show up in your numbers. The vendors who are using AI as a marketing buzzword will be the hardest to pin down, because they will always have an explanation for why their metrics look different from yours.

A Practical Evaluation Scorecard

When you sit down with a vendor who claims AI capabilities, score them across these five dimensions:

  • Training data clarity — can they explain exactly what data the model learns from? (Yes = 2 points, Vague = 1, No = 0)
  • Quality definition alignment — does their definition of “quality” map to your cost per case? (Yes = 2, Partial = 1, No = 0)
  • Feedback loop existence — is there a real, operational mechanism for outcome data to flow back into the model? (Yes = 2, Manual/Partial = 1, No = 0)
  • Independent verifiability — are improvements visible in your own tracking data? (Yes = 2, Unclear = 1, No = 0)
  • Failure transparency — can they describe a specific instance of model underperformance and their response? (Yes = 2, Generic = 1, No = 0)

A vendor scoring 8 to 10 is genuinely leveraging AI in a way that benefits your firm. A vendor scoring 4 to 7 has some real capability mixed with overstatement. A vendor scoring 0 to 3 is selling you the word “AI,” not the technology.
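If you want to apply the scorecard consistently across vendors and quarters, it fits in a few lines. The dimension keys below are shorthand for the five questions; the score bands match the ones just described.

```python
# The five-dimension scorecard as a function. Each dimension scores
# 0, 1, or 2, exactly as in the list above.
def score_vendor(scores: dict[str, int]) -> str:
    expected = {
        "training_data_clarity", "quality_definition_alignment",
        "feedback_loop", "independent_verifiability", "failure_transparency",
    }
    assert set(scores) == expected, "score all five dimensions"
    assert all(s in (0, 1, 2) for s in scores.values()), "each score is 0, 1, or 2"
    total = sum(scores.values())
    if total >= 8:
        return f"{total}/10: genuinely leveraging AI"
    if total >= 4:
        return f"{total}/10: real capability mixed with overstatement"
    return f"{total}/10: selling the word 'AI', not the technology"

print(score_vendor({
    "training_data_clarity": 2, "quality_definition_alignment": 1,
    "feedback_loop": 1, "independent_verifiability": 2,
    "failure_transparency": 0,
}))  # 6/10: real capability mixed with overstatement
```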

What This Means for Your Vendor Management

You do not need to become an AI expert to manage vendors effectively. You need to be an outcomes expert. Track cost per case by vendor. Track conversion rates by source. Track rejection rates and withdrawal rates over time. When a vendor claims their AI is improving, check whether that improvement shows up in the metrics that actually connect to revenue.

The firms that do this well will make better allocation decisions, pay less per signed case, and avoid being persuaded by polished presentations that substitute buzzwords for results. The firms that do not will continue to pay premium prices for leads that come with an AI sticker and the same conversion rates they had last year.

The technology is evolving quickly. The evaluation framework is not. Good data, clear definitions, verifiable results, and transparency about limitations — these have always been the markers of a vendor worth paying. AI does not change that. It just makes the question more urgent.

Related guide: See our complete guide to AI for personal injury law firms — what works now, what's hype, the data foundation you need, and the 4-phase adoption roadmap.

See it in action

Discover how RevenueScale tracks cost per case from click to settlement.

Book a Demo
