Artificial intelligence has reshaped nearly every corner of the legal profession — from contract review and e-discovery to motion drafting and risk assessment. But the rapid adoption of generative AI has not been limited to law firms and corporate legal departments. In an unprecedented twist, it has now reached the judiciary itself — with consequences that underscore both the power and peril of this technology.

A Modern Cautionary Tale

In October 2025, two federal judges, U.S. District Judge Henry Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey, acknowledged that AI tools had been used, without authorization, in the drafting of court orders that were later found to contain serious factual inaccuracies.

According to letters disclosed by U.S. Senate Judiciary Committee Chair Chuck Grassley, both judges confirmed that members of their staff had relied on generative AI programs, including ChatGPT and Perplexity AI, to assist with opinion drafting. The resulting orders, described by Grassley as “error-ridden,” included misidentified parties, nonexistent quotations, and factual assertions unsupported by the record.

In Judge Neals’s chambers, a law school intern reportedly used ChatGPT to conduct legal research in a securities case, leading to the publication of an opinion that cited nonexistent statements and case law. In Mississippi, Judge Wingate stated that a law clerk employed Perplexity AI “as a foundational drafting assistant” to synthesize information from the docket, producing a draft opinion that was prematurely uploaded to the court’s electronic system. Both rulings were later withdrawn and replaced, and both judges have since instituted written policies to regulate the use of AI within their chambers.

Senator Grassley praised the judges for acknowledging the errors but urged the federal judiciary to establish stronger AI oversight protocols, emphasizing that “the judiciary has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law.”

AI in the Judiciary: Promise and Peril

The idea of AI-assisted judicial drafting is not inherently alarming. Courts have long relied on clerks and software tools to manage overwhelming workloads. In fact, AI can streamline legal research, accelerate opinion writing, and promote consistency across cases. But the recent missteps demonstrate a fundamental truth: without human verification, even the most advanced AI models are prone to “hallucinations” — plausible but entirely fabricated statements presented as fact.

Unlike ordinary clerical errors, AI hallucinations pose a particular danger: they are often convincing. A well-phrased but false citation can slip past even a diligent reviewer, and the authority of a judicial order gives such errors a veneer of legitimacy that can ripple through future cases, briefs, and public trust.

For litigants, the implications are sobering. A single erroneous factual finding or fabricated precedent can change the outcome of a motion, alter the course of settlement negotiations, or taint the perception of judicial impartiality. For the judiciary, these events expose the tension between innovation and integrity — a reminder that the power of AI demands not only technical literacy but procedural safeguards.

Why Lawyers Must Stay Vigilant

Attorneys can no longer assume that judicial orders are untouched by machine assistance. In this new environment, vigilance has become an ethical necessity.

When reviewing judicial opinions — especially those issued quickly or in high-volume dockets — lawyers should be alert for anomalies that could signal AI involvement. Misquoted sources, misidentified parties, or citations that cannot be verified through Westlaw, LexisNexis, or other trusted databases may warrant further scrutiny. Counsel should feel empowered to raise these issues professionally and on the record, as the attorneys in both the Mississippi and New Jersey cases did.

More broadly, lawyers and clients must recognize that the same technology that improves efficiency can also introduce systemic risks if used without guardrails. Firms and in-house teams should implement written AI-use policies that define approved tools, mandate human verification of outputs, and prohibit uploading confidential materials into public AI models.

Training programs for associates and paralegals should emphasize both the capabilities and limitations of generative AI — including its tendency to produce authoritative-sounding but unfounded statements. And when working with courts or agencies that disclose AI usage, practitioners should consider adapting review processes accordingly, verifying citations, factual assertions, and quotations before relying on them in subsequent filings.
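As a concrete illustration of what that kind of review workflow might look like, the sketch below is a minimal, hypothetical example using only the Python standard library: it pulls reporter-style citations out of an order's text and prints a checklist for a human reviewer. The citation pattern is deliberately simplified and does not capture the full range of real citation formats; it is an aid to, not a substitute for, checking each authority in Westlaw, LexisNexis, or PACER.

```python
import re

# Simplified, illustrative patterns for common federal reporter citations
# (e.g., "567 U.S. 709", "931 F.3d 1206", "456 F. Supp. 3d 789").
# Real-world citation formats vary far more widely than this.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                    # volume number
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,4}\b"                                    # first page number
)

def citation_checklist(order_text: str) -> list[str]:
    """Return a de-duplicated list of citation strings found in an order,
    each of which should be verified manually in a trusted database."""
    seen: list[str] = []
    for match in CITATION_PATTERN.findall(order_text):
        if match not in seen:
            seen.append(match)
    return seen

if __name__ == "__main__":
    sample = (
        "Plaintiff relies on Smith v. Jones, 931 F.3d 1206 (5th Cir. 2019), "
        "and Doe v. Roe, 456 F. Supp. 3d 789 (D.N.J. 2020)."
    )
    for cite in citation_checklist(sample):
        print("Verify manually:", cite)
```

A script like this only surfaces what to check; the verification itself, confirming that each cited case exists, says what the order claims, and supports the quoted language, remains a human task.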

The Broader Lesson for the Legal System

The federal judiciary’s encounter with AI hallucinations is not merely a cautionary tale about technology; it is a case study in accountability. Just as lawyers have an ethical duty to ensure competence and candor, the courts bear a corresponding responsibility to maintain public confidence in the accuracy of their decisions.

These incidents also highlight the growing need for institutional guidance. Proposed Federal Rule of Evidence 707 — which would subject machine-generated evidence to the same reliability standards as human expert testimony — is a step toward addressing these challenges. But broader reforms are likely needed. Clear judicial policies on AI use, disclosure requirements for AI-assisted rulings, and transparent review protocols could all help restore trust in an era when technology increasingly mediates the law’s most fundamental functions.

Still, the lesson is not to retreat from AI. Used responsibly, generative tools can enhance fairness, improve access to justice, and reduce costs. The challenge lies in balancing innovation with verification — ensuring that human judgment remains at the center of legal decision-making.

Conclusion: Balancing Innovation with Integrity

The twin episodes in Mississippi and New Jersey will likely be remembered as the judiciary’s brush with generative AI error — a moment that exposed both the promise and the peril of this technology. They serve as a warning not only to judges but to lawyers, clients, and litigants: artificial intelligence, for all its capability, is only as trustworthy as the humans who supervise it.

At RumbergerKirk, we view this not as a deterrent but as a call to action. Our attorneys and technology team are actively exploring the safe, responsible integration of AI into legal workflows — ensuring that innovation serves accuracy, not the other way around. The goal is simple: to deliver faster, smarter, and more reliable legal service while maintaining the professional judgment and human oversight that our clients deserve.