Judge withdraws legal decision after lawyers flag fake quotes and citation errors resembling AI mistakes
A U.S. district court judge has withdrawn a decision in a biopharmaceutical securities case after lawyers pointed out that the ruling contained fabricated quotes and incorrect references to other cases, errors that closely resemble those seen in legal documents produced with artificial intelligence tools.

In a letter to Judge Julien Xavier Neals of the District of New Jersey, attorney Andrew Lichtman highlighted a "series of errors" in the judge's opinion denying a motion to dismiss a securities lawsuit against pharmaceutical company CorMedix. The errors included misstating the outcomes of three other cases and "numerous instances" of quotes falsely attributed to previous rulings.

As reported by Bloomberg Law, a notice added to the court's docket on Wednesday stated that the original opinion and order were "entered in error" and that a revised decision would follow. While minor corrections to court rulings, such as fixing grammar, spelling, or formatting, are common, substantial changes like removing entire sections or redacting content are rare.

Although there is no confirmed evidence that the judge used AI tools to draft the decision, the nature of the errors has raised concerns. Similar mistakes have appeared in legal filings where attorneys used AI for research, producing false citations and fabricated case details. In recent weeks, attorneys representing MyPillow founder Mike Lindell were fined for submitting AI-generated citations, and Anthropic faced criticism after its Claude chatbot introduced an "embarrassing" citation error during the company's legal dispute with music publishers.

These incidents underscore the risks of relying on AI for legal work, as large language models can produce inaccurate or entirely fabricated information. The situation highlights the growing challenges of integrating AI into legal processes: while tools like ChatGPT are becoming more common for research and drafting, the potential for such errors remains a significant concern.
Legal professionals are increasingly aware of these risks, and courts are beginning to scrutinize AI-generated content more carefully. The CorMedix case remains under review, and the court is expected to issue a corrected ruling soon. The incident has sparked broader discussion about the reliability of AI in legal settings and the need for human oversight in critical decision-making.