ChatGPT Generated Fake Legal Citations in Multiple Court Cases Worldwide

Severity
High

Multiple lawyers across the US, Canada, and UK submitted ChatGPT-generated legal briefs containing fabricated case citations to courts in 2023-2024, leading to sanctions, fines, and new bar association guidelines on AI use in legal practice.

Category
Hallucination
Industry
Legal
Status
Resolved
Date Occurred
Jan 1, 2023
Date Reported
May 25, 2023
Jurisdiction
International
AI Provider
OpenAI
Model
ChatGPT
Application Type
Chatbot
Harm Type
Legal
Estimated Cost
$500,000
People Affected
50
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Settled
Regulatory Body
Multiple bar associations and courts
Fine Amount
$25,000
legal-hallucination · court-sanctions · attorney-ethics · AI-verification · judicial-integrity · bar-association-response

Full Description

Beginning in early 2023, a pattern emerged of lawyers submitting legal briefs containing fabricated case citations generated by ChatGPT to courts across multiple jurisdictions. The most prominent case involved attorneys Steven Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, who submitted a brief in Mata v. Avianca citing six non-existent cases, complete with realistic citations, judicial opinions, and legal reasoning that ChatGPT had hallucinated.

Following the Avianca revelation, similar incidents surfaced internationally. In Canada, lawyers were found to have submitted AI-generated briefs with fabricated citations in family court proceedings, and in the UK, barristers faced disciplinary action for similar conduct in both civil and criminal cases. Park v. Kim in New York involved another instance of ChatGPT-generated fake precedents submitted to court, while the Kruse v. Kruse divorce proceedings in Michigan revealed extensive use of fabricated case law.

Judges across jurisdictions expressed serious concerns about the integrity of the legal system. Federal Judge P. Kevin Castel, presiding in the Avianca case, wrote that the fabricated cases contained 'bogus judicial decisions with bogus quotes and bogus internal citations.' Courts began requiring lawyers to certify the authenticity of their citations and to disclose AI assistance in brief preparation.

Bar associations responded with emergency guidance and new ethical rules. The American Bar Association issued Model Rule 5.5 amendments addressing AI use, while state bars implemented continuing legal education requirements on AI technology. The Law Society of England and Wales published comprehensive guidance on AI use in legal practice, emphasizing the lawyer's duty to verify all AI-generated content.

The incidents revealed systemic gaps in AI literacy across the legal profession and highlighted the need for technological competence standards. Many lawyers admitted they were unaware that ChatGPT could generate false information that appeared authentic. The breadth of these incidents across multiple jurisdictions showed that this was not isolated misconduct but a profession-wide knowledge gap about AI capabilities and limitations.

Root Cause

ChatGPT hallucinated non-existent legal cases with realistic-looking citations when asked to provide legal research, and lawyers failed to verify the authenticity of the generated cases before submitting them to courts.

Mitigation Analysis

Mandatory verification protocols requiring lawyers to independently confirm all AI-generated citations through legal databases like Westlaw or LexisNexis could have prevented these incidents. Implementing institutional policies requiring disclosure of AI use in legal research, combined with supervised review processes for AI-generated content, would significantly reduce similar risks.
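The verification step described above can be partially automated. Below is a minimal Python sketch of the idea: extract candidate citations from a brief and flag any that cannot be confirmed against a citation database. The `KNOWN_CITATIONS` set is a hypothetical stand-in for a real lookup service (e.g. Westlaw, LexisNexis, or CourtListener); a production tool would query such a service rather than an in-memory set, and a human would still review every flagged item.

```python
import re

# Hypothetical stand-in for a real legal citation database
# (Westlaw, LexisNexis, CourtListener, etc.).
KNOWN_CITATIONS = {
    "575 U.S. 320",
    "925 F.3d 1291",
}

# Matches common federal reporter citations: volume, reporter, page.
# Real citation parsing is far more involved; this covers U.S.,
# F.2d/F.3d, and F. Supp. style cites only.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|F\.\d?d|F\. Supp\.(?: \d?d)?)\s+(\d{1,4})\b"
)

def extract_citations(brief_text: str) -> list[str]:
    """Pull candidate case citations out of a brief."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(brief_text)]

def unverified_citations(brief_text: str, database=KNOWN_CITATIONS) -> list[str]:
    """Return citations that could not be confirmed in the database.

    Anything returned here must be independently verified by a human
    before the brief is filed.
    """
    return [c for c in extract_citations(brief_text) if c not in database]
```

A check like this catches only citations that fail lookup; it cannot detect a real citation attached to a fabricated quote or holding, which is why the mitigation above pairs automated lookup with supervised human review.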

Litigation Outcome

Lawyers faced sanctions, fines, and disciplinary proceedings. Steven Schwartz and Peter LoDuca were fined $5,000 in the Avianca case.

Lessons Learned

The incidents revealed critical gaps in AI literacy among legal professionals and demonstrated the need for mandatory verification protocols when using AI tools for legal research. Bar associations worldwide have since implemented new ethical guidelines and continuing education requirements specifically addressing AI use in legal practice.

Sources

Lawyers sanctioned for submitting fake ChatGPT cases to federal court
American Bar Association · Jul 1, 2023 · news
Artificial Intelligence Guidance
Law Society of England and Wales · Nov 15, 2023 · regulatory action