Fake News In Court:  Attorney Sanctioned for Citing Fictitious Case Law Generated by AI

Michael A. Delaney
Director and Chair, Litigation Department
Katherine B. Fiallo
Associate, Litigation Department
Published: New Hampshire Bar News
April 17, 2024

What’s in your pleading?  Do you know if your colleagues are using artificial intelligence?

In Smith v. Farwell, the trial judge had a problem.  When reviewing the plaintiff’s opposition to a motion to dismiss, the judge could not locate the cases cited by plaintiff’s counsel addressing the elements of a wrongful death action.  The judge scheduled a hearing seeking an explanation, but plaintiff’s counsel had none to give.  So the judge sent him off to figure it out.  Plaintiff’s counsel then penned a letter to the court acknowledging that his opposition “inadvertently” included citations to multiple cases that “do not exist in reality.”  Smith v. Farwell, No. 2282CV01197 (Norfolk, ss., Mass. Superior Court, February 12, 2024) at 5.  Not surprisingly, the judge convened another hearing.  It turned out that three opposition pleadings filed by the plaintiff included fictitious citations to federal and state case law.  Id. at 6.  Plaintiff’s counsel explained that one associate attorney in his office, as well as two recent law school graduates who had not yet passed the bar, had relied on an unidentified AI system “to locat[e] relevant legal authorities to support our argument[s].”  Id.  Even among the real cases cited, the judge found other mistakes, including typographical errors and quotations that did not stand for the propositions stated, and could not determine whether they were caused by AI or human error.  Either way, the judge found that those mistakes weakened the plaintiff’s legal arguments and undermined counsel’s credibility.  Id. at 6, fn. 10.

At the sanctions hearing, plaintiff’s counsel acknowledged that he was unfamiliar with AI systems and did not know that they can generate false or misleading information.  But he did acknowledge signing the pleading, which he reviewed only for “style, grammar and flow, but not for accuracy of the case citations.”  Id. at 6.  The court emphasized the seriousness of submitting incorrect and false statements to the court, and imposed a fine as a sanction.  While the court treated counsel’s unfamiliarity with AI systems as a mitigating factor in setting the sanction, that defense may become less tenable as AI adoption spreads across the legal profession.  The court cautioned the bar about the serious ethical risks posed by the use of AI, and it found that each practitioner has an ethical duty “to know whether AI technology is being used in the preparation of court papers that they plan to file in their cases, and if it is, to ensure that appropriate steps are being taken to verify the truthfulness and accuracy of any AI-generated content before papers are submitted.”  Id. at 15.

While AI technology is new, our professional conduct obligations are not.  Attorneys are required to show candor to the tribunal and to provide adequate supervision to their subordinates, including junior attorneys, law students, and support and administrative staff.  N.H. R. Prof. Conduct 3.3; N.H. R. Prof. Conduct 5.1, 5.2, 5.3.  One of the key points in Farwell is that the oppositions in question were drafted by “two recent law school graduates who had not yet passed the bar and one associate attorney.”  Farwell at 6.  Plaintiff’s counsel had not reviewed the oppositions for the accuracy of the case citations; in fact, he did not know that the associate had used an AI system and could not identify which system had been used.  Although plaintiff’s counsel said he trusted the associate’s work product, he did not supervise either the associate or the law school graduates.

The fact that AI can generate false information, including inaccurate case summaries and nonexistent cases, is well publicized.  These erroneous outputs are called “hallucinations” or, in a term coined by Kate Crawford, a professor at the University of Southern California at Annenberg and senior principal researcher at Microsoft Research, “hallucitations.”  Pranshu Verma and Will Oremus, ChatGPT invented a sexual harassment scandal and named a real law prof as the accused (April 5, 2023), https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/.  Hallucinations can have profound consequences beyond the mere dissemination of false information.  Because ChatGPT and other generative AI technologies draw on existing information, their outputs may attach the names of real people or businesses to fabricated cases and scenarios.  The court in Farwell highlighted this risk by pointing to two instances in which AI falsely accused real individuals of misconduct: the law professor named in the fabricated harassment scandal above, and a June 2023 incident in which an individual falsely accused of embezzlement in an erroneous case summary generated by ChatGPT sued OpenAI, the creator of ChatGPT, for libel.  Mack DeGeurin, OpenAI Sued for Libel After ChatGPT Accuses Man of Embezzlement (June 7, 2023), https://gizmodo.com/chatgpt-openai-libel-suit-hallucinate-mark-walters-ai-1850512647.  Attorneys and their clients who rely on unverified AI output may open themselves up to legal repercussions well beyond court sanctions.

Generative AI undoubtedly has a lot of value to offer, both within the legal profession and outside of it, but only if it is used responsibly.  One key issue is that there are few consistent norms governing whether, and how, to cite information generated by AI chatbots.  The Bluebook has not yet issued guidance on citing outputs from generative AI sources, leaving many law schools, law reviews, and journals to develop their own approaches and norms.

As technological developments progress at a rate that often outpaces the legal profession’s capacity to generate and disseminate guidance, attorneys should take care to familiarize themselves with new advances and weigh their use of new technologies against standing ethical and professional responsibility norms.  States, however, are slowly catching up to the AI boom.  The state bar associations in Florida and California have already issued guidance on the use of AI, and other states, like New York, have created task forces to assess the role of AI in the legal profession.  While that guidance develops, legal practitioners should remember that the first word in AI is artificial.