ChatGPT Fabricated Legal Citations in Avianca Federal Court Brief
Major
Attorney Steven Schwartz used ChatGPT to research legal precedents for a personal injury case against Avianca Airlines. ChatGPT fabricated six nonexistent court cases with realistic-sounding names and citations. The fictitious cases were submitted to federal court, where the judge discovered that none of them existed.
Category
Hallucination
Industry
Legal
Status
Resolved
Date Occurred
May 1, 2023
Date Reported
May 27, 2023
Jurisdiction
US
AI Provider
OpenAI
Model
ChatGPT (GPT-3.5)
Application Type
chatbot
Harm Type
legal
Estimated Cost
$5,000
People Affected
3
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
resolved
legal_research · hallucinated_citations · court_sanctions
Full Description
Attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman used ChatGPT to research legal precedents for a personal injury case, Mata v. Avianca, Inc., filed in the Southern District of New York in early 2023. On May 1, 2023, Schwartz submitted a motion opposing Avianca's motion to dismiss, which included citations to six federal court cases that ChatGPT had provided as supporting precedent. The opposing counsel for Avianca immediately flagged that they could not locate any of the cited cases in legal databases, prompting judicial scrutiny that would expose a significant AI hallucination incident.
ChatGPT (running GPT-3.5) had fabricated six entirely fictitious legal cases when Schwartz asked it to find precedents supporting his client's position. The AI generated realistic-sounding case names such as "Varghese v. China Southern Airlines Co. Ltd." and "Shaboon v. Egyptian General Petroleum Corp.," complete with plausible docket numbers, dates, and detailed descriptions of judicial holdings. When Schwartz returned to ChatGPT to verify the cases after opposing counsel raised concerns, the AI doubled down on its fabrications, producing what appeared to be excerpts from the nonexistent judicial opinions and confirming that the cases were real and accessible through legal databases.
Judge P. Kevin Castel ordered Schwartz to produce copies of the cited decisions, leading to the discovery that none of the six cases existed in any legal database. The court's independent verification confirmed the fabrications, resulting in sanctions against Schwartz and his law firm for submitting false information to the federal court. Schwartz faced potential disciplinary action from the New York State Bar, while his firm suffered significant reputational damage within the legal community. The incident compromised the underlying personal injury case and raised serious questions about attorney competence and the duty of care owed to clients.
Schwartz publicly testified that he was unaware ChatGPT could generate false information, stating he believed it functioned like a sophisticated search engine rather than a generative AI system. The law firm issued public apologies and implemented new protocols requiring verification of all AI-generated research through traditional legal databases. Judge Castel's written opinion became a seminal document in legal AI ethics, explicitly warning the legal profession about the risks of unverified AI research and establishing precedent for sanctions related to AI-generated content.
The Mata v. Avianca case triggered immediate policy responses across the legal industry and federal court system. Multiple federal judges began requiring attorneys to disclose AI tool usage in legal filings, while major law firms implemented mandatory training on AI limitations and verification protocols. The American Bar Association accelerated development of ethical guidelines for AI use in legal practice, and several state bar associations launched investigations into similar incidents. The case became a cautionary tale cited in legal technology conferences and continuing legal education programs, fundamentally altering how the legal profession approached AI integration and establishing verification requirements that became industry standard by late 2023.
Root Cause
ChatGPT generated plausible-sounding but entirely fictitious legal citations when asked to find cases supporting an argument. The model hallucinated case names, docket numbers, and even judicial opinions. The attorney did not verify the citations against any legal database before filing.
Mitigation Analysis
A cryptographic provenance trail would have flagged that these citations were AI-generated outputs rather than retrieved legal documents. More critically, an output verification step comparing AI-generated citations against legal databases (Westlaw, LexisNexis) would have caught the fabrications. This incident demonstrates the need for both provenance tracking and domain-specific validation of AI outputs in high-stakes professional contexts.
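The verification step described above can be sketched in a few lines. This is a minimal illustration, not a production legal tool: the regex, the `verify_citation` helper, and the `KNOWN_CASES` set are all hypothetical stand-ins for a real lookup against an authoritative service such as Westlaw, LexisNexis, or CourtListener.

```python
import re

# Hypothetical stand-in for an authoritative legal database.
# In practice this would be a query against Westlaw, LexisNexis,
# or a citation service, never a local allowlist.
KNOWN_CASES = {
    "air france v. saks",
    "zicherman v. korean air lines co.",
}

# Loose pattern for "Case Name, <volume> <reporter> <page> ..." citations.
CITATION_RE = re.compile(r"^(?P<case>.+?),\s*\d+ [A-Za-z0-9.\s]+ \d+")


def verify_citation(citation: str) -> bool:
    """Return True only if the cited case is found in the database.

    Anything unparseable or unknown is treated as unverified, so
    fabricated citations fail closed rather than slipping through.
    """
    match = CITATION_RE.match(citation)
    if not match:
        return False
    return match.group("case").strip().lower() in KNOWN_CASES


# One real citation and one fabricated citation from the incident.
filing_citations = [
    "Air France v. Saks, 470 U.S. 392 (1985)",
    "Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019)",
]
flagged = [c for c in filing_citations if not verify_citation(c)]
# `flagged` now holds the fabricated Varghese citation for human review.
```

The key design choice is failing closed: an AI-generated citation is presumed fabricated until it is located in an authoritative source, which inverts the trust assumption Schwartz made when he asked ChatGPT to verify its own output.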
Litigation Outcome
Judge P. Kevin Castel sanctioned attorneys Steven Schwartz and Peter LoDuca $5,000 for submitting fabricated case citations generated by ChatGPT. The court found the attorneys acted in bad faith by failing to verify the AI-generated citations.
Lessons Learned
AI-generated legal research must be independently verified against authoritative legal databases. Courts increasingly require disclosure of AI tool usage in filings. The incident established that attorney responsibility for accuracy extends to AI-generated content.
Sources
Here is What Happens When Your Lawyer Uses ChatGPT
The New York Times · May 27, 2023 · news