Artificial intelligence has the potential to reshape nearly every corner of the legal profession — from contract review and e-discovery to motion drafting and risk assessment. Generative AI, which creates new content from existing data sources, has drawn recent attention among litigators for both its promise and its drawbacks. Its rapid adoption has not been limited to law firms and corporate legal departments. As with any emerging research tool, the judiciary is beginning to incorporate AI into its work. Recent incidents demonstrate, however, that no one is immune from the issues that make generative AI an imperfect resource at this point.

A Modern Cautionary Tale

In October 2025, two federal judges acknowledged that AI tools had been used, without authorization, in the drafting of court orders that were later found to contain factual inaccuracies. In one instance, Reuters reported that a court order named “incorrect plaintiffs and defendants” and recited allegations that did not appear in the complaint. Reports indicated that the AI-generated research was included in a document that was then inadvertently placed on the public docket before it went through a review process.

Research from IBM confirms that generative AI hallucinations can and do occur even under the best of circumstances, and it recommends that AI output always be reviewed and validated by a human being, who can make corrections as needed.

According to letters disclosed by U.S. Senate Judiciary Committee Chair Chuck Grassley, both judges confirmed that members of their staff relied on generative AI programs, including ChatGPT and Perplexity AI, to assist with opinion drafting.

In one judge’s chambers, a law school intern reportedly used ChatGPT to conduct legal research in a securities case, leading to the publication of an opinion that cited nonexistent statements and case law. In the other instance, the judge stated that a law clerk employed Perplexity AI “as a foundational drafting assistant” to synthesize information from the docket, producing a draft opinion that was prematurely uploaded to the court’s electronic system. Both rulings were later withdrawn and replaced, and both judges have since instituted written policies to regulate the use of AI within their chambers.

Senator Grassley praised the judges for acknowledging the errors but urged the federal judiciary to establish stronger AI oversight protocols, emphasizing that “the judiciary has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law.”

AI in the Court: Opportunities and Risks

The idea of AI-assisted judicial drafting is not inherently alarming. Courts have long relied on clerks and software tools to manage overwhelming workloads. In fact, AI has the potential to streamline legal research and accelerate opinion writing. But the recent orders demonstrate a fundamental truth: without human verification, even the most advanced AI models are prone to “hallucinations” — plausible but entirely fabricated statements presented as fact.

Unlike ordinary clerical errors, however, AI hallucinations are particularly dangerous because they are often very convincing. That makes careful review of even the most persuasively phrased AI-generated content essential. For litigants and the judiciary, these events are a reminder of the tension innovation can create, and that the power of AI demands not only technical literacy but procedural safeguards.

Best Practices for Attorneys When Using AI

Many innovative tools are imperfect and require a break-in period, and generative AI, with all its power and peril, is one of them. Litigants and the judiciary are working to incorporate generative AI responsibly into their practices, but as these incidents show, there have been speed bumps along the path. In this new environment, vigilance has become a necessity.

When reviewing judicial opinions, lawyers should be alert for anomalies that could signal AI involvement. Misquoted sources, misplaced parties, or citations that cannot be verified through Westlaw, LexisNexis, or other trusted databases may warrant further scrutiny. Counsel should raise these issues professionally and on the record, as the attorneys in both of the recent cases did.

More broadly, lawyers and clients must recognize that the same technology that improves efficiency can also introduce systemic risks if used without guardrails. Firms and in-house teams should implement written AI-use policies that define approved tools, mandate human verification of outputs, and prohibit uploading confidential materials into public AI models.

Training programs for associates and paralegals should emphasize both the capabilities and limitations of generative AI, including its tendency to produce authoritative-sounding but unfounded statements. And when working with courts or agencies that disclose AI usage, practitioners should consider adapting review processes accordingly, verifying citations, factual assertions, and quotations before relying on them in subsequent filings.

The Broader Lesson for the Legal System

The federal judiciary’s encounter with AI hallucinations is another cautionary tale as the legal industry embraces the latest technology. These incidents highlight the growing need for institutional guidance. Proposed Federal Rule of Evidence 707, which would subject machine-generated evidence to the same reliability standards as human expert testimony, is a step toward addressing these challenges. We expect court-specific policies on AI use to surface with increasing frequency. Clear judicial policies, disclosure requirements for AI-assisted rulings, and transparent review protocols could all help restore trust in an era when technology increasingly mediates the law’s most fundamental functions.

Still, the lesson is not to retreat from AI. Used responsibly, generative tools can enhance fairness, improve access to justice, and reduce costs. The challenge lies in balancing innovation with verification — ensuring that human judgment remains at the center of legal decision-making.

The twin episodes outlined above serve as a warning to the entire legal community: artificial intelligence, for all its capability, is only as trustworthy as the humans who supervise it.

At RumbergerKirk, we are actively exploring the safe, responsible integration of AI into legal workflows and are focused on ensuring that innovation does not come at the expense of accuracy. Our goal is to deliver high-quality legal services to our clients, enhanced by technology but always accompanied by the professional judgment, review, and analysis that our clients deserve.