Judges Across the Country Make Clear: AI Hallucinations in Court Filings Will Not Be Tolerated

The rising popularity of generative AI technology is frequently in the news these days, with coverage ranging from the enormous potential of these tools to very real concerns about how they work and whether their output infringes the intellectual property rights of others. Another narrative gaining increasing public attention is the use of generative AI software within the legal industry—particularly when the technology produces fake caselaw, false quotations, and inaccurate legal analysis, collectively known as “hallucinations.”

Judge Fines Attorney $2,000

For a time, judges and attorneys appeared aligned in trying to assess the capabilities, and the drawbacks, of generative AI software. But as AI hallucinations began appearing in court filings, judges soon recognized the need for greater scrutiny. In February 2024, the Superior Court of Massachusetts fined an attorney $2,000 for citing fictitious cases in a court brief that had been created using AI technology. In doing so, the court noted “two disturbing developments that are adversely affecting the practice of law,” the first being “the emerging tendency of increasingly popular generative artificial intelligence (“AI”) systems, such as ChatGPT and Google Bard, to fabricate and supply false or misleading information.” The second development noted by the court was “the tendency of some attorneys and law firms to utilize AI in the preparation of motions, pleadings, memoranda, and other court papers” only to “blindly file their resulting work product in court without first checking to see if it incorporates false or misleading information.” Smith v. Farwell, et al., (Lawyers Weekly No. 12-007-24) (Davis, J.) (Suffolk Superior Court) Civil Action No. 2282cv01197 (Feb. 12, 2024).

Disclosure

There, after three separate filings were found to include fictitious or non-existent caselaw, the offending law firm disclosed that two recent law school graduates and a young associate had prepared the materials using AI. And while the supervising attorney had reviewed the work product for grammar and style, lead counsel had not confirmed the accuracy of the briefing. The court expressed appreciation for lead counsel’s honesty in disclosing these facts, but held that the disclosure did not exonerate him and imposed a $2,000 sanction.

At the time, the $2,000 sanction was considered a hefty fine. But in more recent months, courts across the country have indicated that enough is enough: attorneys are now on more than sufficient notice of the dangers inherent in the use of generative AI technology, and a failure to properly supervise such use—and to ensure the accuracy of all court filings—will now come with much stricter penalties.

Fines Related to AI Hallucinations Increase in 2025

In February 2025, U.S. District Judge Kelly H. Rankin of the District of Wyoming ordered $5,000 in fines against three attorneys from Morgan & Morgan, the largest personal injury firm in the country, after the judge identified eight non-existent cases cited in briefing submitted to the court. One attorney was personally sanctioned $3,000 and removed from the case after he admitted the brief was created using AI. The two other attorneys on the pleadings were allowed to remain in the case but fined $1,000 each.

Lacey v. State Farm

In May, Special Master Hon. Michael R. Wilner of the Central District of California sanctioned attorneys for a former L.A. district attorney who had brought suit against State Farm on claims regarding a professional liability policy. During the discovery phase of the case, plaintiff’s counsel filed a motion to compel that cited purported opinions that were non-existent, were inaccurately cited, or contained fabricated language. When the Special Master initially raised his concerns, counsel amended the brief but failed to remove the AI hallucinations. The court determined that the content was initially prepared by an attorney using generative AI software and then filed without proper review or verification by the supervising attorney, which to the court was “tantamount to bad faith” and revealed a persistent failure in oversight. As a result, the court struck the faulty briefing and denied the motion to compel. In addition, the two plaintiff’s firms were held jointly and severally liable for $31,100, representing $26,100 in fees incurred by the Special Master and $5,000 in fees incurred by defense counsel. The court also made clear that any further violations would be reported for disciplinary action. Lacey v. State Farm (C.D. Cal. May 22, 2025).

Damage to Attorney Reputation

The same month, Judge James Patrick Hanlon of the Southern District of Indiana handed down a $6,000 sanction for hallucinations detected in court filings. Judge Hanlon’s fine was significantly lower than the $15,000 recommended by the magistrate judge, who reasoned that prior monetary sanctions had “evidently failed to act as a deterrent here.” And while the magistrate judge noted that “confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney,” Judge Hanlon cited the need to balance deterrence against damage to the attorney’s professional reputation. Mid Cent. Op. Eng’rs Health & Welfare Fund v. HoosierVac, LLC, No. 2:24-cv-00326 (S.D. Ind. May 28, 2025). Regardless, the $6,000 fine was one of the highest assessed to date.

Johnson v. Dunn

Another large and well-respected firm, Butler Snow, was sanctioned months later in the Northern District of Alabama. There, upon discovering hallucinated caselaw and legal authority in a motion for leave, the judge noted that monetary sanctions were apparently proving ineffective as a deterrent and, instead, disqualified three Butler Snow partners from the case. The judge also ordered publication of his opinion in the Federal Supplement and reported the attorneys to the state bar in every state where they were licensed. Notably, the judge declined to sanction the firm itself because the firm had an internal policy regarding the use of generative AI technology that forbade the use of AI-generated content without practice group leader approval (one of the offenders here was a practice group co-leader). All attorneys listed in the pleadings were included in the sanction, and the judge instructed all three to provide a copy of his order to their clients, opposing counsel, and the judges in every pending state and federal lawsuit in which they were involved, as well as to every attorney at Butler Snow. Johnson v. Dunn, 2:21-cv-1701 (N.D. Ala. July 23, 2025).

AI Use Policy

Then in September 2025, the California Second District Court of Appeal handed down a whopping $10,000 individual fine after uncovering that 21 of 23 quotes in a brief were AI hallucinations. In a scathing order, the court admonished: “[s]imply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citation—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified.” In the wake of this case, the California Judicial Council issued guidelines requiring courts to either ban the use of generative AI by judges and staff or adopt an AI use policy by December 15, 2025. Likewise, at the request of the California Supreme Court, the California Bar is considering strengthening its code of conduct to address the issue directly. (All this as OpenAI markets ChatGPT as capable of passing the bar exam.)

Further demonstrating that judges view monetary sanctions alone as an insufficient deterrent in these situations, two attorneys at Cozen O’Connor ultimately were forced to admit to filing an AI-generated document in Nevada state court. Judge David Hardy offered each of them two options: (1) pay $2,500, be removed from the case, and be reported to the state bar; or (2) write to their former law school deans and bar officials explaining their lapse in judgment and volunteer to speak on topics such as AI and professional conduct. The firm claimed that its attorney Daniel Mann had filed an early, uncorrected version of an AI-generated document, and it promptly fired Mann. The other attorney on the filing, Jan Tomasik, appears to still be with the firm.

Conclusion and Key Takeaways

Overall, if the past few months are any indication, judges across the country have made clear that attorneys nationwide are on more than sufficient notice of the dangers of using generative AI software to prepare court filings, and that a failure to properly supervise and vet such use can and will result in stiff (and embarrassing) penalties.

The takeaways here are clear:

  • All attorneys listed on a court-filed document are responsible for the content and accuracy of that document.

  • Lead attorneys, in particular, must personally inspect and confirm the accuracy of all filed documents.

  • Law firms are more likely to avoid sanctions in these situations if they have held internal discussions regarding the prevalence and dangers of AI hallucinations and maintain an internal policy governing the use of generative AI software, designed to prevent the filing of content containing hallucinations.


For more information about AI law, see our technology law services practice page.

Klemchuk PLLC is a leading IP law firm based in Dallas, Texas, focusing on litigation, anti-counterfeiting, trademarks, patents, and business law. Our experienced attorneys assist clients in safeguarding innovation and expanding market share through strategic investments in intellectual property.

This article is provided for informational purposes only and does not constitute legal advice. For guidance on specific legal matters under federal, state, or local laws, please consult with our IP Lawyers.

© 2026 Klemchuk PLLC | Explore our services