When the “Expert” Is an Algorithm
When AI Provides Courtroom Expertise
Artificial Intelligence is a hot topic in every field, but when it comes to the legal system, it is too often lawyers’ and judges’ struggles with AI that make the news. Yet while everyone is talking about “hallucinated” case citations, some are focusing on the other ways AI is entering the courtroom. For decades, expert evidence has required a human witness: a physician, engineer, accountant, or other specialist applying expertise to the facts of a case. That framework is not prepared for an “analysis” generated by AI.
Courts are increasingly encountering situations where an AI algorithm identifies a suspect, reconstructs an accident, or flags conduct as suspicious. As those technologies become more common, courts are forced to confront an important evidentiary question: when a machine produces an analytical conclusion, how should that evidence be evaluated before it is presented to a jury?
Proposed Rule 707: A New Framework for Machine-Generated Evidence
The federal judiciary is poised to address this issue directly. The Advisory Committee on Evidence Rules is considering a proposed new Federal Rule of Evidence 707, which would govern the admissibility of certain forms of machine-generated conclusions. The goal is to prevent litigants from bypassing the reliability safeguards of Federal Rule of Evidence 702 by presenting algorithmic outputs without expert testimony.
Under the proposal, courts would examine whether a machine-generated conclusion functions similarly to expert testimony and, if so, whether the underlying methodology satisfies reliability standards comparable to those applied under Rule 702. Commentators describe the proposal as an attempt to prevent parties from using automated analysis as a backdoor around traditional expert-witness gatekeeping.
The rule remains under consideration, and debate continues over whether a new rule is necessary or whether existing evidentiary rules already provide sufficient tools.
From Ring Cameras to Tesla Data: Technology Producing Machine Conclusions
One reason the debate is gaining momentum is that algorithmic analysis now appears in technologies used every day.
A Ring doorbell camera provides a simple example. If the device records video of someone approaching a home, the footage is ordinary evidence. But many modern systems do more than record. They can detect motion, classify activity, identify faces, or flag behavior that software determines is unusual.
At that point, the system is not simply recording events—it is interpreting them.
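The distinction can be made concrete. The following simplified sketch, written in Python purely for illustration, shows where raw data ends and an algorithmic conclusion begins. Every name in it is hypothetical; it does not describe any vendor’s actual software.

```python
# Hypothetical sketch: the difference between recording and interpreting.
# None of these names reflect any real vendor's software; they simply
# illustrate where raw data ends and an algorithmic conclusion begins.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    pixels: bytes  # raw sensor output: ordinary recorded evidence

def classify_activity(frames: list[Frame]) -> str:
    """A stand-in for a trained model that labels what the frames show."""
    # A real system would run a machine-learning model here. The key
    # point is that the return value is an inference, not a recording.
    return "person_loitering"  # an analytical conclusion, not raw data

frames = [Frame(timestamp=0.0, pixels=b"\x00" * 10)]
label = classify_activity(frames)
print(label)  # the "suspicious activity" flag a court must evaluate
```

The recorded frames are what a camera has always produced; the returned label is the kind of interpretive output that raises the new evidentiary question.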
Similar issues arise in the context of vehicle data. Tesla vehicles collect large volumes of sensor information through onboard cameras and other sensors. When accidents occur, parties increasingly rely on that data to reconstruct the events leading up to a crash. But the system’s conclusions, such as object detection, lane tracking, or automated braking, are generated by software interpreting environmental inputs.
Consumer electronics present another example. Apple’s Face ID system uses biometric facial mapping to verify identity. In other contexts, similar technologies are used to identify individuals in photographs or surveillance footage.
As these technologies become more common, courts will increasingly face the question of whether algorithmic outputs can properly be treated as raw data or whether they are analytical conclusions requiring reliability review.
Facial Recognition and Government Databases Raise New Evidentiary Questions
The issue becomes even more significant when facial recognition technology is used by government agencies.
Recent reporting indicates that U.S. Immigration and Customs Enforcement (ICE) has deployed a mobile facial-recognition tool known as Mobile Fortify, which allows agents to capture biometric data in the field and compare it against government databases. These systems rely on algorithms trained on large datasets to generate identification matches.
If such results are introduced in court, litigants may challenge the reliability of the algorithm, the training data used to develop it, or the system’s error rate. Those types of challenges closely resemble the reliability inquiries courts already conduct when evaluating expert testimony.
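To see why those challenges resemble expert-reliability inquiries, it helps to understand the basic mechanics of a match. In broad strokes, a facial-recognition system reduces each face to a numeric feature vector, computes a similarity score between two vectors, and reports a “match” when the score clears a designer-chosen threshold. The sketch below is a simplified, hypothetical illustration of that logic; the numbers and names are invented and do not describe Mobile Fortify or any real system.

```python
# Hypothetical sketch of how a face-matching system turns measurements
# into an "identification." Illustrative only; not any real system.

def similarity(probe: list[float], candidate: list[float]) -> float:
    """Cosine similarity between two biometric feature vectors."""
    dot = sum(p * c for p, c in zip(probe, candidate))
    norm_probe = sum(p * p for p in probe) ** 0.5
    norm_candidate = sum(c * c for c in candidate) ** 0.5
    return dot / (norm_probe * norm_candidate)

MATCH_THRESHOLD = 0.85  # chosen by the system's designers, not by the data

probe = [0.1, 0.9, 0.3]              # features extracted from a field photo
database_entry = [0.12, 0.88, 0.35]  # features from a database record

score = similarity(probe, database_entry)
# The reported "match" is a conclusion that depends on the threshold:
# lower it and false matches rise; raise it and true matches are missed.
print(f"score={score:.3f}, match={score >= MATCH_THRESHOLD}")
```

In litigation terms, the reported match is not a raw measurement but the product of a methodology, including a threshold whose placement determines the system’s error rates.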
Why the Proposed Rule 707 Is Generating Debate
Not surprisingly, the proposed rule has generated significant debate.
Supporters argue that explicit guidance is needed because AI-driven tools are increasingly embedded in investigations, forensic analysis, and litigation. Without clear standards, courts and parties will struggle with how to evaluate the reliability of algorithmic conclusions.
Critics argue the proposal may be too broad. Many technologies—from breathalyzers to medical imaging devices—produce automated outputs. Some commentators worry that requiring reliability hearings for every technological output could unnecessarily complicate routine litigation.
Others believe the rule may be premature, noting that existing evidentiary rules such as Rules 702, 901, and 403 already allow courts to address reliability concerns.
A Familiar Issue for Wisconsin Courts
For Wisconsin lawyers, the debate may sound familiar.
In 2011, Wisconsin amended Wis. Stat. § 907.02 to align the state’s expert testimony rule with the federal Daubert reliability framework. Under the statute, expert testimony is admissible only if it is based on sufficient facts or data, is the product of reliable principles and methods, and reflects a reliable application of those methods to the facts of the case.
The Wisconsin Supreme Court confirmed the application of this standard in Seifert v. Balink, emphasizing the court’s role as a gatekeeper in evaluating expert methodology before evidence is presented to a jury.
As machine-generated analysis appears more frequently in Wisconsin litigation, state courts applying Section 907.02 may face many of the same reliability questions federal rulemakers are now attempting to address.
The Bottom Line for Wisconsin Litigators
Artificial Intelligence is increasingly producing the types of analytical conclusions that courts historically relied on human experts to provide. Whether those conclusions come from facial-recognition software, vehicle sensor systems, biometric identification tools, or consumer technologies, the core evidentiary question is the same: when a machine generates an analytical result, how should courts evaluate its reliability before allowing a jury to rely on it?
That question is at the heart of the proposed Federal Rule of Evidence 707, which would extend reliability scrutiny to certain forms of machine-generated analysis. Even if the rule is never adopted in its current form, the underlying problem will not disappear. Courts will increasingly face disputes over how algorithmic systems operate, what data they rely on, and whether their outputs function more like raw data or expert-style conclusions.
For Wisconsin litigators, unless and until Wisconsin adopts a rule similar to the proposed federal rule, those questions will likely arise within the existing framework of Wis. Stat. § 907.02 and the Daubert reliability standard, which already requires courts to evaluate the reliability of expert methodology.
The practical takeaway is straightforward: algorithmic evidence is already entering courtrooms, and courts will increasingly be asked to determine whether the technologies generating that evidence satisfy the same reliability principles that govern traditional expert testimony. Whether litigators and courts adapt the current rules to that task, or state and federal judiciaries adopt specific rules regulating machine-generated conclusions, this issue is not going away.
Stafford Rosenbaum LLP is a full-service law firm with two convenient office locations in Madison and Milwaukee, Wisconsin. Over 145 years of dedication to businesses, governments, nonprofits, and individuals has proven that effective client communication continues to be the heart of our practice.