AI Incident Database

472 documented incidents. Search, filter, and explore.

IBM Watson for Oncology Recommended Unsafe Cancer Treatments

Critical

IBM Watson for Oncology recommended dangerous cancer treatments, including chemotherapy for patients with severe bleeding, due to flawed training on hypothetical cases rather than real outcomes data.

Jul 25, 2018|Medical Error|Healthcare|Other/Unknown|$62,000,000

South Wales Police Facial Recognition System Had 92% False Positive Rate at Champions League Final

High

South Wales Police's facial recognition deployment at the 2017 Champions League final in Cardiff flagged 2,470 potential matches, roughly 92% of which were false positives identifying innocent people. A court later ruled the force's use of the technology unlawful on human rights grounds.

Jul 5, 2018|Safety Failure|Government|Other/Unknown

Amazon Alexa Recorded and Sent Private Conversation to Random Contact

High

Amazon Alexa recorded a Portland couple's private conversation about hardwood floors and sent the audio file to a random business contact due to a series of voice recognition errors and lack of user confirmation for sensitive actions.

May 24, 2018|Privacy Leak|Technology|Other/Unknown

Google Duplex AI Made Restaurant Reservations Without Disclosing AI Identity

Medium

Google's May 2018 Duplex demo showed the AI making restaurant reservations over the phone in a realistic human voice without disclosing that the caller was automated, sparking widespread ethical concerns about AI deception and transparency. Google subsequently committed to having Duplex identify itself as an automated system.

May 8, 2018|Ethics|Technology|Google

Tesla Autopilot Failed to Detect Barrier, Killing Apple Engineer Walter Huang

Critical

Tesla Autopilot failed to detect a damaged crash attenuator at a barrier on US-101 in Mountain View, killing Apple engineer Walter Huang in March 2018. The NTSB found the system steered into the gore area and accelerated toward the barrier in the seconds before impact.

Mar 23, 2018|Safety Failure|Technology|Other/Unknown|$15,000,000

Uber Self-Driving Car Kills Pedestrian in Tempe, Arizona

Critical

An Uber self-driving test vehicle struck and killed 49-year-old pedestrian Elaine Herzberg in Tempe, Arizona, the first pedestrian fatality involving a self-driving car. The vehicle's perception system detected Herzberg but repeatedly misclassified her and failed to predict her path; the NTSB investigation found that Uber had disabled the vehicle's built-in emergency braking and that its safety protocols were inadequate.

Mar 19, 2018|Safety Failure|Technology|Other/Unknown

Facebook AI-Powered Ad Targeting Enabled Cambridge Analytica Political Manipulation

Critical

Cambridge Analytica used AI to analyze Facebook data from 87 million users, building psychographic profiles for targeted political manipulation in the 2016 US election and the Brexit campaign. The scandal resulted in a $5 billion FTC fine against Facebook and raised critical questions about AI's role in democratic processes.

Mar 17, 2018|Bias|Media|Other/Unknown|$5,000,000,000

Grammarly Browser Extension Vulnerability Exposed All User Documents to Websites

Critical

Grammarly's browser extension contained a critical vulnerability that exposed all 22 million users' documents to any website they visited, discovered by Google Project Zero researcher Tavis Ormandy.

Feb 2, 2018|Privacy Leak|Technology|Other/Unknown

COMPAS Criminal Recidivism AI Performs No Better Than Random Untrained People

High

Dartmouth researchers found that the COMPAS recidivism prediction system used in criminal sentencing was no more accurate than untrained people recruited online, despite being widely deployed across US courts.

Jan 17, 2018|Bias|Legal|Other/Unknown

Navya Autonomous Shuttle Froze During Truck Collision on Las Vegas Launch Day

Medium

A Navya autonomous shuttle in Las Vegas froze when a truck backed into it on the first day of public trials, highlighting limitations in autonomous vehicle evasive action programming despite correct threat detection.

Nov 8, 2017|Safety Failure|Technology|Other/Unknown|$5,000

AI Weather Routing System Directed Cargo Ship El Faro Into Hurricane Joaquin

Critical

The cargo ship El Faro sank in Hurricane Joaquin on October 1, 2015, killing all 33 crew members. The NTSB investigation found that the captain relied on outdated forecast data from the ship's automated weather routing software when steering toward the storm's path.

Oct 24, 2017|Safety Failure|Other|Other/Unknown|$500,000,000

Facebook AI Translation Error Led to Arrest of Palestinian Man for 'Good Morning' Post

High

Facebook's AI mistranslated a Palestinian man's Arabic 'Good morning' post as 'Attack them' in Hebrew, leading to his arrest by Israeli police before the error was discovered.

Oct 22, 2017|Bias|Technology|Other/Unknown

Australian Robodebt Automated Welfare Fraud Detection System Generated 400,000+ False Debt Notices

Critical

Australia's Robodebt scheme used flawed automated income averaging to generate over 400,000 false welfare debt notices between 2016 and 2019. Courts found the income-averaging method unlawful, leading to a settlement package worth about $1.8 billion, and a subsequent Royal Commission prompted major government accountability reforms.

Sep 1, 2017|Agent Error|Government|Other/Unknown|$1,800,000,000

Microsoft Zo Chatbot Produced Offensive Content Despite Improved Safety Measures

Medium

Microsoft's Zo chatbot, launched in late 2016 as a safer successor to Tay, still produced offensive content, including religious bias, when users found ways to bypass its safety filters.

Aug 3, 2017|Bias|Technology|Other/Unknown

YouTube Kids Algorithm Promoted Disturbing Content to Children (Elsagate)

Critical

YouTube Kids' recommendation algorithm promoted disturbing videos disguised as children's programming to millions of children. In 2019 the FTC fined YouTube $170 million for COPPA violations involving the collection of children's data on the platform.

Jul 1, 2017|Safety Failure|Media|Google|$170,000,000

Palantir's AI System Used by ICE for Immigration Enforcement Targeting

High

Palantir's AI system enabled ICE to analyze data from schools and social services to identify and target undocumented immigrants for deportation, raising significant civil liberties concerns.

May 2, 2017|Bias|Government|Other/Unknown

State Farm and Allstate AI Insurance Pricing Accused of Racial Discrimination

High

Consumer Reports and ProPublica investigations revealed that State Farm, Allstate, and other major insurers used AI pricing models that charged higher premiums in minority neighborhoods than in white neighborhoods with similar risk; the disparity persisted after controlling for legitimate risk factors and affected millions of consumers.

Apr 5, 2017|Bias|Insurance|Other/Unknown

Waze AI Navigation Directed Heavy Traffic Through Residential Neighborhoods

High

Waze's AI routing algorithm directed heavy traffic through residential neighborhoods and school zones, creating safety hazards and prompting cities to implement countermeasures against algorithmic routing decisions.

Mar 15, 2017|Safety Failure|Technology|Other/Unknown