At first I thought the story of an attorney being sanctioned by a court for submitting a legal brief with AI fabrications had to be an urban legend, but I have recently learned that there is a surprisingly large number of cases with the same theme.
The case of Mata v. Avianca, Inc., No. 1:22-cv-01461-PKC, Document 54 (U.S. District Court, Southern District of New York), shows that AI-generated briefs are not an urban legend. The Court held, in part:
In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. Rule 11, Fed. R. Civ. P. Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the “Levidow Firm”) (collectively, “Respondents”) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.
In my extensive court experience, it was not uncommon to find an attorney citing a case that either did not apply to the issues before the court or did not exist at all, and this was long before generative AI existed. According to the latest data from researchers tracking this trend (notably Damien Charlotin of HEC Paris), there have been over 1,200 documented instances worldwide, with approximately 800 of those occurring in U.S. courts. See Kaste, Martin. "Penalties Stack up as AI Spreads through the Legal System." WGCU News | PBS & NPR for Southwest Florida, April 3, 2026. https://www.wgcu.org/2026-04-03/penalties-stack-up-as-ai-spreads-through-the-legal-system.
The irony of all these cases is that there are AI solutions that could be used to avoid these issues and still give substantial support to the judicial process. If the attorneys, or whoever else was involved, knew more about how to use AI and relied on their own legal skills, all of these cases could have been avoided. In any event, I do have several suggested lessons that we should all learn.
#1 AI is not a toy; it is a highly sophisticated tool, and it takes substantial time and effort to learn how it can and should be used.
#2 Read the fine print. This is actually Rule Ten of The Rules of Genealogy. See https://genealogysstar.blogspot.com/2025/10/another-new-rule-of-genealogy-for-2025.html Not only do you need to check your sources when doing genealogical research, you also need to read the case law in the legal world.
#3 String cites invite trouble. If you don't understand this lesson, then you don't know much about the legal profession.
#4 Ultimately, you have to know how to ask AI questions and how to give it instructions, and, more important than both of these, you should not trust anything you haven't verified for yourself.
That's probably enough suggested lessons for today.