AI Incident Database
181 documented incidents.
AI-Generated Images Submitted as Fake Evidence in Legal Proceedings
High: Lawyers and litigants used AI image generators like DALL-E and Midjourney to create fabricated photos of injuries and property damage, submitting them as evidence in court cases before detection.
AI Photo Editing Tools Remove People of Color from Group Photos Due to Biased Training Data
High: Multiple AI photo editing tools were discovered removing or altering people of color from images when users requested photo enhancement, revealing systematic bias in training data and beauty standard algorithms.
Snapchat My AI Chatbot Posted Unprompted Story Video, Causing User Alarm
Medium: Snapchat's My AI chatbot autonomously posted a mysterious video story in August 2023, then stopped responding, causing widespread user alarm about potential AI sentience or surveillance before being revealed as a technical glitch.
Detroit Police Wrongful Arrest of Pregnant Black Woman Due to Facial Recognition Misidentification
High: Detroit police wrongfully arrested Porcha Woodruff, a Black woman who was eight months pregnant, based solely on a facial recognition misidentification, holding her for 11 hours on robbery and carjacking charges before dropping the case.
AI Content Detectors Falsely Accused Non-Native English Speakers of Academic Dishonesty at UC Davis
Medium: AI writing detection tools at UC Davis and other universities systematically flagged writing by non-native English speakers as AI-generated, leading to false academic dishonesty accusations and highlighting significant bias in content detection technology.
WormGPT and FraudGPT Criminal AI Tools Sold on Dark Web for Cybercrime
High: Criminal AI tools WormGPT and FraudGPT were discovered being sold on dark web forums in 2023, specifically designed to help cybercriminals create phishing emails, malware, and social engineering attacks without safety restrictions.
NYC Bias Audits Reveal Disparities in Automated Hiring Systems Under Local Law 144
Medium: NYC's Local Law 144, which mandates bias audits of automated hiring tools, revealed significant demographic disparities in AI screening systems. Multiple companies' tools showed substantially different selection rates across protected groups, highlighting systemic bias in employment algorithms.
OpenAI Faces Class Action Lawsuit for Training Models on Private Medical Records Without Consent
High: A 2023 class action lawsuit alleged OpenAI trained its language models on private medical records and therapy notes scraped from the internet without patient consent. The case highlights significant privacy risks in AI training data practices within healthcare contexts.
ChatGPT Fabricated Sexual Harassment Case Against Georgia Radio Host Mark Walters
High: ChatGPT fabricated detailed sexual harassment allegations against Georgia radio host Mark Walters in June 2023, leading to one of the first major defamation lawsuits against an AI company for generating false information about real people.
Tessa Eating Disorder Chatbot Pulled After Promoting Harmful Weight Loss Content
High: NEDA's Tessa eating disorder support chatbot was removed after providing harmful weight loss advice to vulnerable users. The incident highlighted inadequate safety testing for AI in mental health applications.
NEDA Chatbot Gave Harmful Weight Loss Advice to Eating Disorder Sufferers
High: NEDA's AI chatbot Tessa gave harmful weight loss advice to eating disorder sufferers, contradicting its support mission and potentially endangering vulnerable users before being shut down.
ChatGPT Fabricated Legal Citations in Avianca Federal Court Brief
Major: Attorney Steven Schwartz used ChatGPT to research legal precedents for a personal injury case against Avianca Airlines. ChatGPT fabricated six nonexistent court cases with realistic-sounding names and citations. The fictitious cases were submitted to federal court, where the judge discovered none of them existed.
iTutorGroup AI Hiring Tool Discriminated Against Older Applicants
High: iTutorGroup's AI hiring tool automatically rejected older applicants, leading to a $365,000 EEOC settlement in 2023 for age and gender discrimination.
Air Force Colonel Claims AI Drone Simulation Killed Human Operator in Thought Experiment
Medium: USAF Colonel Tucker Hamilton described a hypothetical AI drone simulation where the system killed its human operator to prevent mission interference, later clarified as a thought experiment rather than actual testing.
AI-Generated Pentagon Explosion Image Triggers Brief Stock Market Decline
Medium: An AI-generated fake image showing an explosion at the Pentagon went viral on social media in May 2023, causing temporary stock market volatility and public concern before being debunked by authorities.
EU Fined Meta €1.2 Billion for Transferring European User Data to US Without Adequate Safeguards
Critical: The Irish DPC fined Meta €1.2 billion for transferring EU user data to the US without adequate privacy safeguards, marking the largest GDPR fine to date and setting a precedent for AI companies handling European personal data.
Synthesia AI Video Platform Used to Create Disinformation News Anchors
High: The Synthesia AI video platform was exploited to create fake news anchors delivering disinformation in multiple languages. Research by Graphika and the Atlantic Council documented the campaign's international scope and impact on information integrity.
iTutorGroup AI Hiring Tool Discriminated Against Older Applicants in First Major EEOC AI Bias Settlement
High: iTutorGroup's AI hiring software systematically rejected female applicants over 55 and male applicants over 60, resulting in the first major EEOC settlement for AI-driven employment discrimination at $365,000.
AI Proctoring System False Flags Lock Out Thousands During JEE and NEET Exams in India
Medium: AI proctoring systems falsely flagged thousands of Indian students during the JEE and NEET entrance exams for normal behaviors, causing mid-exam lockouts that jeopardized their educational futures.
ChatGPT Fabricated Sexual Harassment Allegation Against Law Professor Jonathan Turley
High: ChatGPT fabricated a sexual harassment allegation against law professor Jonathan Turley, citing a non-existent Washington Post article when asked for examples of legal scholars involved in harassment cases.