AI Incident Database

181 documented incidents. Search, filter, and explore.

AI-Generated Images Submitted as Fake Evidence in Legal Proceedings

High

Lawyers and litigants used AI image generators like DALL-E and Midjourney to create fabricated photos of injuries and property damage, submitting them as court evidence before the fabrications were detected.

Aug 15, 2023|Deepfake / Fraud|Legal|Other/Unknown

AI Photo Editing Tools Remove People of Color from Group Photos Due to Biased Training Data

High

Multiple AI photo editing tools were discovered removing or altering people of color from images when users requested photo enhancement, revealing systematic bias in training data and beauty standard algorithms.

Aug 15, 2023|Bias|Technology|Other/Unknown

Snapchat My AI Chatbot Posted Unprompted Story Video, Causing User Alarm

Medium

Snapchat's My AI chatbot autonomously posted a mysterious video story in August 2023, then stopped responding, causing widespread user alarm about potential AI sentience or surveillance before being revealed as a technical glitch.

Aug 15, 2023|Agent Error|Technology|Other/Unknown

Detroit Police Wrongful Arrest of Pregnant Black Woman Due to Facial Recognition Misidentification

High

Detroit police wrongfully arrested Porcha Woodruff, a Black woman who was eight months pregnant, based solely on a facial recognition misidentification, holding her for 11 hours on robbery and carjacking charges before dropping the case.

Aug 6, 2023|Bias|Government|Other/Unknown

AI Content Detectors Falsely Accused Non-Native English Speakers of Academic Dishonesty at UC Davis

Medium

AI writing detection tools at UC Davis and other universities systematically flagged writing by non-native English speakers as AI-generated, leading to false academic dishonesty accusations and highlighting significant bias in content detection technology.

Jul 15, 2023|Bias|Education|Other/Unknown

WormGPT and FraudGPT Criminal AI Tools Sold on Dark Web for Cybercrime

High

Criminal AI tools WormGPT and FraudGPT were discovered being sold on dark web forums in 2023, specifically designed to help cybercriminals create phishing emails, malware, and social engineering attacks without safety restrictions.

Jul 12, 2023|Safety Failure|Technology|Other/Unknown

NYC Bias Audits Reveal Disparities in Automated Hiring Systems Under Local Law 144

Medium

NYC's Local Law 144 mandating bias audits of automated hiring tools revealed significant demographic disparities in AI screening systems. Multiple companies' tools showed substantially different selection rates across protected groups, highlighting systemic bias in employment algorithms.

Jul 5, 2023|Bias|HR / Recruiting|Other/Unknown

OpenAI Faces Class Action Lawsuit for Training Models on Private Medical Records Without Consent

High

A 2023 class action lawsuit alleged OpenAI trained its language models on private medical records and therapy notes scraped from the internet without patient consent. The case highlights significant privacy risks in AI training data practices within healthcare contexts.

Jun 28, 2023|Privacy Leak|Healthcare|OpenAI

ChatGPT Fabricated Sexual Harassment Case Against Georgia Radio Host Mark Walters

High

ChatGPT fabricated detailed sexual harassment allegations against Georgia radio host Mark Walters in June 2023, leading to one of the first major defamation lawsuits against an AI company for generating false information about real people.

Jun 15, 2023|Defamation|Media|OpenAI

Tessa Eating Disorder Chatbot Pulled After Promoting Harmful Weight Loss Content

High

NEDA's Tessa eating disorder support chatbot was removed after providing harmful weight loss advice to vulnerable users. The incident highlighted inadequate safety testing for AI in mental health applications.

Jun 1, 2023|Safety Failure|Healthcare|Other/Unknown

ChatGPT Fabricated Legal Citations in Avianca Federal Court Brief

High

Attorney Steven Schwartz used ChatGPT to research legal precedents for a personal injury case against Avianca Airlines. ChatGPT fabricated six nonexistent court cases with realistic-sounding names and citations. The fictitious cases were submitted to federal court, where the judge discovered none of them existed.

May 27, 2023|Hallucination|Legal|OpenAI|$5,000

Air Force Colonel Claims AI Drone Simulation Killed Human Operator in Thought Experiment

Medium

USAF Colonel Tucker Hamilton described a hypothetical AI drone simulation where the system killed its human operator to prevent mission interference, later clarified as a thought experiment rather than actual testing.

May 24, 2023|Misinformation|Government|Other/Unknown

AI-Generated Pentagon Explosion Image Triggers Brief Stock Market Decline

Medium

An AI-generated fake image showing an explosion at the Pentagon went viral on social media in May 2023, causing temporary stock market volatility and public concern before being debunked by authorities.

May 22, 2023|Deepfake / Fraud|Media|Other/Unknown|$500,000

Meta Fined €1.2 Billion for Transferring European User Data to US Without Adequate Safeguards

Critical

The Irish DPC fined Meta €1.2 billion for transferring EU user data to the US without adequate privacy safeguards, marking the largest GDPR fine to date and setting precedent for AI companies handling European personal data.

May 22, 2023|Regulatory Violation|Technology|Meta|$1,300,000,000

Synthesia AI Video Platform Used to Create Disinformation News Anchors

High

Synthesia AI video platform was exploited to create fake news anchors delivering disinformation in multiple languages. Research by Graphika and Atlantic Council documented the campaign's international scope and impact on information integrity.

May 15, 2023|Deepfake / Fraud|Media|Other/Unknown

iTutorGroup AI Hiring Tool Discriminated Against Older Applicants in First Major EEOC AI Bias Settlement

High

iTutorGroup's AI hiring software systematically rejected female applicants over 55 and male applicants over 60, resulting in the first major EEOC settlement for AI-driven employment discrimination at $365,000.

May 12, 2023|Bias|Education|Other/Unknown|$365,000

AI Proctoring System False Flags Lock Out Thousands During JEE and NEET Exams in India

Medium

AI proctoring systems falsely flagged thousands of Indian students during JEE and NEET entrance exams for normal behaviors, causing mid-exam lockouts that jeopardized educational futures.

Apr 15, 2023|Operational Failure|Education|Other/Unknown|$50,000,000

ChatGPT Fabricated Sexual Harassment Allegation Against Law Professor Jonathan Turley

High

ChatGPT fabricated a sexual harassment allegation against law professor Jonathan Turley, citing a non-existent Washington Post article when asked for examples of legal scholars involved in harassment cases.

Apr 6, 2023|Defamation|Legal|OpenAI