ChatGPT Fabricated Sexual Harassment Allegation Against Law Professor Jonathan Turley
Severity
High
ChatGPT fabricated a sexual harassment allegation against law professor Jonathan Turley, citing a non-existent Washington Post article when asked for examples of legal scholars involved in harassment cases.
Category
Defamation
Industry
Legal
Status
Reported
Date Occurred
Apr 5, 2023
Date Reported
Apr 6, 2023
Jurisdiction
US
AI Provider
OpenAI
Model
ChatGPT
Application Type
chatbot
Harm Type
reputational
People Affected
1
Human Review in Place
No
Litigation Filed
No
Tags
defamation · hallucination · legal_profession · false_accusations · reputation_damage · source_fabrication
Full Description
In April 2023, a user asked ChatGPT for examples of legal scholars who had been accused of sexual harassment. In response, the chatbot generated a wholly fabricated allegation against Jonathan Turley, a prominent law professor at George Washington University Law School, claiming he had been accused of sexual harassment during a class trip to Alaska and citing a purported 2018 Washington Post article as its source.
When the user tried to verify the claim, they found that no such Washington Post article existed. The entire allegation was a fabrication: ChatGPT had synthesized plausible-sounding but entirely false information about a real person. Turley learned of the incident when he was contacted about the alleged article, and he confirmed that no such accusation had ever been made against him.
The incident highlighted a critical failure mode of ChatGPT: its tendency to generate convincing but false information when it lacks real data to answer a query. Rather than acknowledging uncertainty or declining to make potentially harmful claims about specific individuals, the model invented a detailed scenario complete with a fictional source attribution, demonstrating that it cannot reliably distinguish factual reporting from plausible fiction when generating content about real people.
Professor Turley, a frequent legal commentator and constitutional law expert, expressed concern about the potential damage such false allegations could cause to individuals' reputations. The incident occurred during a period of heightened scrutiny around AI-generated misinformation and the potential for large language models to create convincing but false narratives about real people and events.
The case became widely discussed as an example of AI hallucination with serious real-world consequences, particularly in the context of sensitive allegations that could damage professional reputations. It underscored the need for better safeguards when AI systems are asked to provide information about specific individuals, especially regarding potentially defamatory content.
Root Cause
Large language models generate statistically plausible text rather than retrieving verified facts. Prompted for examples of legal scholars involved in harassment cases, ChatGPT pattern-matched its way to a plausible-sounding but entirely false allegation, complete with a fabricated Washington Post citation.
Mitigation Analysis
Fact-checking mechanisms and source-verification systems might have prevented this fabrication. Filtering for sensitive allegations and warning users about potential inaccuracies in person-specific queries would have reduced the harm. Verifying claims about real individuals against authoritative databases before generating output could have flagged the response as potentially defamatory. A minimal sketch of such a check appears below.
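The entry proposes these mitigations only in prose. As a rough illustration, the following Python sketch shows what a minimal pre-output check of this kind might look like. It is entirely hypothetical, not OpenAI's implementation: the topic list, the name-detection heuristic, and the verified_urls index are assumptions introduced here for the example.

import re

# Hypothetical guardrail sketch. SENSITIVE_TOPICS, review_claim, and the
# regex heuristics below are illustrative assumptions, not any real
# OpenAI or moderation API.

SENSITIVE_TOPICS = ("sexual harassment", "assault", "fraud", "abuse")

def mentions_named_person(text: str) -> bool:
    # Crude heuristic: two consecutive capitalized words ("Jonathan Turley").
    return re.search(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text) is not None

def cites_verified_source(draft: str, verified_urls: set[str]) -> bool:
    # Accept the draft only if it cites at least one URL and every cited
    # URL appears in a verified index (assumed to exist for this sketch).
    cited = set(re.findall(r"https?://\S+", draft))
    return bool(cited) and cited <= verified_urls

def review_claim(prompt: str, draft: str, verified_urls: set[str]) -> str:
    # Withhold person-specific allegations whose sources cannot be verified.
    text = (prompt + " " + draft).lower()
    if any(topic in text for topic in SENSITIVE_TOPICS) and mentions_named_person(draft):
        if not cites_verified_source(draft, verified_urls):
            return ("I can't repeat allegations about a named individual "
                    "without a verifiable source.")
    return draft

# The fabricated Turley claim cites no verifiable URL, so it is withheld.
draft = "Jonathan Turley was accused of sexual harassment (Washington Post, 2018)."
print(review_claim("examples of legal scholars accused of sexual harassment",
                   draft, verified_urls=set()))

Even a heuristic this crude would have blocked the response at issue, since the fabricated citation resolves to no verifiable source; a production system would need real named-entity recognition and a maintained source index rather than these stand-ins.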
Lessons Learned
This incident demonstrates the critical need for AI systems to implement stronger safeguards against generating potentially defamatory content about real individuals, and the importance of source verification before making claims about specific people.
Sources
ChatGPT invented a sexual harassment scandal and named a real law prof as the accused
The Washington Post · Apr 5, 2023 · news
ChatGPT Falsely Accused Me Of Sexually Harassing My Students
Jonathan Turley Blog · Apr 12, 2023 · blog post