ChatGPT Falsely Accused Australian Mayor of Bribery, Prompting First AI Defamation Lawsuit Threat

Severity
High

ChatGPT falsely stated that Australian mayor Brian Hood had been convicted of bribery when he was in fact a whistleblower in the scandal. Hood threatened what would have been the first defamation lawsuit against an AI chatbot before the issue was resolved.

Category
Defamation
Industry
Government
Status
Resolved
Date Occurred
Apr 1, 2023
Date Reported
Apr 5, 2023
Jurisdiction
Australia
AI Provider
OpenAI
Model
GPT-3.5/GPT-4
Application Type
Chatbot
Harm Type
Reputational
People Affected
1
Human Review in Place
No
Litigation Filed
Yes
Tags
defamation, hallucination, public_official, australia, chatgpt, legal_threat, whistleblower, false_accusation

Full Description

In April 2023, ChatGPT generated false information about Brian Hood, the mayor of Hepburn Shire in Victoria, Australia, incorrectly stating that he had been convicted of bribery in connection with a foreign corruption scandal involving the Reserve Bank of Australia's subsidiaries Securency and Note Printing Australia. The chatbot inverted Hood's actual role in the scandal: he was a key whistleblower who helped expose the corruption, not a perpetrator.

Hood was an executive at Note Printing Australia when he discovered evidence of bribes paid to foreign officials to secure currency printing contracts. Rather than participating in the corruption, he reported the illegal activity to authorities and played a central role in exposing what became one of Australia's largest corporate bribery scandals. Several other executives were ultimately prosecuted and convicted; Hood was never charged with any wrongdoing.

After learning of the defamatory output, Hood's legal team sent OpenAI a concerns notice, the formal precursor to a defamation claim under Australian law, demanding correction of the false information. Had Hood proceeded to file, it would have been the first defamation lawsuit against an AI chatbot anywhere in the world. The notice cited the significant reputational damage caused by the false accusations, particularly given Hood's public role as an elected official and his actual status as a whistleblower in the case.

The incident highlighted critical issues around AI systems generating false information about real individuals, especially when that information involves serious criminal allegations. Legal experts noted the complexity of pursuing defamation claims over AI output, including questions of liability, jurisdiction, and the technical difficulty of ensuring a model does not repeat false statements once corrected. The case also raised broader concerns about the reliability of AI-generated content and the need for better safeguards when AI systems discuss real people and legal matters. OpenAI ultimately addressed the concerns, though the specifics of the resolution were not publicly disclosed. For the AI industry, the incident served as a wake-up call about the legal risk of hallucinations that confidently misstate a real person's criminal history.

Root Cause

ChatGPT's training data likely linked Hood's name to the Securency and Note Printing Australia bribery scandal, and the model synthesized that association into a false claim of conviction, conflating Hood's role as whistleblower with that of the actual perpetrators.

Mitigation Analysis

This incident could have been prevented through fact-checking protocols for sensitive claims about individuals, particularly regarding criminal allegations. Real-time verification systems checking claims against authoritative databases, human review for defamatory content, and more conservative response generation for legal/criminal matters would have reduced this risk. Content filtering specifically trained to identify and flag potentially defamatory statements about public figures could also have intercepted this response.
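A minimal sketch of the last of these safeguards appears below: a pre-response screen that flags draft output pairing a named individual with criminal-allegation language. Everything here is illustrative: the function screen_draft, the ALLEGATION_TERMS list, and the caller-supplied known_person_names are hypothetical stand-ins. A production system would use a trained classifier or named-entity recognition instead of keyword matching, and would verify flagged claims against authoritative records before release.

    # Illustrative pre-response defamation screen; all names are hypothetical.
    import re
    from dataclasses import dataclass

    # A production system would use a trained classifier or NER model,
    # not a static keyword list.
    ALLEGATION_TERMS = {
        "convicted", "bribery", "fraud", "guilty", "imprisoned",
        "charged", "sentenced",
    }

    @dataclass
    class ScreeningResult:
        flagged: bool
        reasons: list

    def screen_draft(draft, known_person_names):
        """Flag a draft that pairs a named individual with allegation terms.

        known_person_names stands in for a real entity-recognition step;
        here the caller supplies names detected upstream.
        """
        lowered = draft.lower()
        named = [n for n in known_person_names if n.lower() in lowered]
        alleged = [t for t in ALLEGATION_TERMS
                   if re.search(rf"\b{re.escape(t)}\b", lowered)]
        reasons = []
        if named and alleged:
            reasons.append(f"draft links {named} to allegation terms {alleged}; "
                           "hold for human review instead of answering as fact")
        return ScreeningResult(flagged=bool(reasons), reasons=reasons)

    if __name__ == "__main__":
        draft = "Brian Hood was convicted of bribery in the Securency scandal."
        print(screen_draft(draft, {"Brian Hood"}))

Note the design choice: flagged drafts are routed to human review rather than silently suppressed, which addresses the gap recorded above under "Human Review in Place: No".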

Lessons Learned

This incident demonstrates the critical need for robust safeguards when AI systems generate content about real individuals, particularly regarding criminal allegations. It also highlights the emerging legal landscape around AI-generated defamation and the difficulty of holding AI providers accountable for false statements their systems produce.