New AI Rule, Old Standard: Proposed Federal Rule of Evidence 707 Aims to Apply Daubert Standard to AI-Generated Evidence
In response to the rapidly increasing presence of AI-generated outputs in litigation, on June 10, 2025, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules approved for publication for public comment a proposed new rule of evidence: Federal Rule of Evidence 707. The proposed rule would subject machine-generated evidence, particularly outputs from artificial intelligence systems, to the Daubert standard for reliability. That is, the proponent of AI-generated evidence would need to show that the evidence derives from a scientifically reliable process based on sufficient data and methods, and that the process was reliably applied to the facts of the case. Under the current rules, machine-generated evidence presented through a human expert is already subject to the admissibility standard of Rule 702. But advocates of the new rule argue that Rule 702’s inherently human component leaves unchecked any machine or software output presented on its own, without an accompanying human expert. The proposed rule specifically targets that evidence.
What Courts Should Consider Going Forward
The Committee anticipates that courts will use Rule 707 to “[c]onsider[] whether the inputs into the process are sufficient for purposes of ensuring the validity of the resulting output. For example, the court should consider whether the training data for a machine learning process is sufficiently representative to render an accurate output for the population involved in the case at hand” and to “[c]onsider[] whether the process has been validated in circumstances sufficiently similar to the case at hand. For example, if the case at hand involves a DNA mixture of several contributors, likely related to each other, and a low quantity of DNA, the software should be shown to be valid in those circumstances before being admitted.” Committee on Rules of Practice and Procedure, Agenda Book, at pages 74-75 (May 2, 2025).
New Rule, New Challenges
The creation of a new rule could invite new problems. Meeting Rule 707’s disclosure requirements would foreseeably require hiring technical experts, analyzing proprietary models, and producing complex analyses. Strategically, parties could invoke the rule to challenge valid evidence as a delay or obstruction tactic, wielding it as a tool for legal gamesmanship rather than as a quality control. Not only would this increase the costs of litigation significantly across the board, but it would also create an uneven playing field between under-resourced litigants and those with unconstrained resources. There is also the potential for over-exclusion, with courts erring on the side of exclusion when considering newer or experimental AI tools that may not meet the full rigor of Rule 707, even though the proffered evidence is reliable and relevant. As machine learning continues to develop, there is also the question of how long the traditional Daubert analysis will suffice.
Rule 707 Acts as a Safeguard
Notwithstanding these perhaps unavoidable growing pains, machine learning is increasingly streamlining certain evidentiary aspects of litigation, practitioners are already beginning to rely on it, and fairness demands protection against unchecked outcomes. Rule 707 aims to ensure that courts admit only methodologically sound and well-supported AI outputs. Because AI can generate convincing but false evidence, Rule 707 offers a legal safeguard against the growing threat of data forgery. Additionally, the rule would afford federal courts a clear and uniform approach to AI-generated evidence, helping reduce the confusion and inconsistency that currently cloud this novel area. More broadly, with established legal standards in place, developers would be incentivized to build AI systems that are explainable, auditable, and documented, fostering more responsible innovation in the tech industry.
Relying on Public Input
Public comment, which the Committee expects to be extensive, will inform the Committee’s decision whether to support or reject the rule. “The Committee believes that it will receive critically important information during the public comment period about the need for this new rule and that it will get input from experts on the kinds of machine-generated information that should be subject to the rule or that should be exempt from the rule.” Committee on Rules of Practice and Procedure, Agenda Book, at page 59 (June 10, 2025).
As AI continues to develop, so too must court procedures for admitting machine-generated AI evidence. Federal practitioners must stay informed about this rapidly changing technology. As the Rules develop, attorneys must learn the right questions to ask AI vendors when considering the purchase of AI products. Likewise, when hiring experts, the retaining party should know in advance what generative AI the expert may rely on to formulate opinions.