AI Incident Database
472 documented incidents. Search, filter, and explore.
Australia's Robodebt AI Scheme Wrongly Demanded $1.7 Billion from Citizens
Critical: Australia's automated Robodebt system used flawed income-averaging algorithms to wrongly demand $1.7 billion from over 500,000 welfare recipients between 2016 and 2019, causing severe financial distress and contributing to suicides before being declared illegal by a Royal Commission.
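The averaging flaw at the heart of the scheme can be sketched in a few lines of Python. The threshold and dollar figures below are illustrative, not the actual policy parameters or system code:

```python
FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 300  # hypothetical fortnightly earnings threshold

# The flawed step: smear annual income from tax records evenly across
# every fortnight, ignoring when the money was actually earned.
def averaged_fortnightly_income(annual_income):
    return annual_income / FORTNIGHTS_PER_YEAR

# A worker earns $15,600 during 4 fortnights of employment, then claims
# benefits for the rest of the year while genuinely earning nothing.
smeared = averaged_fortnightly_income(15_600)  # $600 per fortnight
actual_while_on_benefits = 0

# Averaging makes it appear that income exceeded the threshold in every
# fortnight, manufacturing an "overpayment" that never existed.
false_debt_raised = smeared > INCOME_FREE_AREA > actual_while_on_benefits
print(smeared, false_debt_raised)  # 600.0 True
```

The Royal Commission found that debts computed this way were unlawful precisely because fortnightly entitlement depends on actual fortnightly income, which averaging discards.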
Facebook Housing Ads Enabled Racial Discrimination Through Targeting System
High: Facebook's ad targeting system allowed advertisers to exclude users by race and ethnicity from housing ads, violating civil rights laws and resulting in a $115 million DOJ settlement.
Beauty.AI Contest Algorithm Exhibited Severe Racial Bias in Winner Selection
High: Beauty.AI's 2016 contest used AI to judge beauty from 6,000+ global submissions but selected almost exclusively white winners, revealing severe training data bias and algorithmic discrimination.
Wells Fargo AI-Automated Systems Enabled Creation of 3.5 Million Unauthorized Accounts
Critical: Wells Fargo used automated systems to facilitate the creation of 3.5 million unauthorized customer accounts between 2009 and 2016, resulting in $3 billion in regulatory settlements and widespread customer harm.
Chicago Strategic Subject List Algorithm Disproportionately Targeted Black Neighborhoods
High: Chicago's predictive policing algorithm disproportionately targeted Black and Latino residents for police attention, increasing their likelihood of being shot by police while failing to reduce overall violence rates.
Knightscope K5 Security Robot Runs Over Toddler at Stanford Shopping Center
High: A 300-pound Knightscope K5 autonomous security robot at Stanford Shopping Center knocked down and ran over a 16-month-old toddler in July 2016, causing minor injuries when its sensors failed to detect the child.
Tesla Autopilot Fatal Crashes and NHTSA Investigation
Critical: Tesla's Autopilot system has been involved in hundreds of crashes investigated by NHTSA, including multiple fatalities where the AI failed to detect emergency vehicles, cross traffic, and road barriers, leading to ongoing regulatory scrutiny and litigation.
Ethereum DAO Smart Contract Vulnerability Exploited for $60 Million
Critical: The DAO, a decentralized autonomous organization on Ethereum, was hacked for $60 million in June 2016 due to a reentrancy vulnerability in its smart contract code. The incident led to a controversial hard fork of the Ethereum blockchain to recover the stolen funds.
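The class of bug involved can be modeled outside Solidity. A minimal Python sketch (class and method names are illustrative, not the actual contract code) shows why making an external call before updating internal balances lets a caller withdraw the same funds repeatedly:

```python
class VulnerableVault:
    """Models a contract that pays out BEFORE zeroing the balance."""
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, callback):
        amount = self.balances.get(user, 0)
        if amount > 0:
            callback(amount)          # external call happens first...
            self.balances[user] = 0   # ...state is updated only afterwards

class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0
        self.depth = 0

    def receive(self, amount):
        self.stolen += amount
        self.depth += 1
        if self.depth < 3:  # re-enter withdraw before the balance is zeroed
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault()
vault.deposit("attacker", 100)
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)  # 300: a single 100 deposit drained three times
```

The standard fix is the checks-effects-interactions pattern: update the balance before making the external call.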
AI Risk Assessment Tools Exhibited Racial Bias in Prison Parole Decisions
High: AI risk assessment tools like COMPAS and PATTERN used across US prison systems exhibited racial bias, incorrectly flagging Black defendants as high-risk at nearly twice the rate of white defendants and keeping low-risk prisoners incarcerated longer.
COMPAS Recidivism Algorithm Showed Racial Bias in Criminal Sentencing
Critical: Northpointe's COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used in courts across the United States to assess the likelihood of criminal defendants reoffending, was found to exhibit significant racial bias. A ProPublica investigation revealed that Black defendants were nearly twice as likely to be falsely flagged as future criminals compared to white defendants.
Amazon Same-Day Delivery Algorithm Systematically Excluded Predominantly Black Neighborhoods
High: A Bloomberg investigation revealed that Amazon's same-day delivery algorithm systematically excluded predominantly Black neighborhoods in major US cities, creating coverage patterns nearly identical to historical redlining maps.
Microsoft Tay AI Chatbot Posted Racist and Nazi Content After Coordinated Manipulation
High: Microsoft's Tay chatbot began posting racist and Nazi content within 16 hours of launch after coordinated manipulation by users who exploited its learning mechanisms. The incident forced immediate shutdown and highlighted critical gaps in adversarial AI safety.
Theranos AI Blood Testing Algorithm Produced Dangerous False Results
Critical: Theranos used proprietary algorithms to manipulate and 'correct' inaccurate blood test results from faulty devices, leading to dangerous false medical data affecting over 175,000 patients and resulting in criminal fraud convictions.
Michigan MiDAS Algorithm Falsely Accused 40,000 of Unemployment Fraud
Critical: Michigan's automated unemployment fraud detection system falsely accused 40,000 people of fraud between 2013 and 2015, with a 93% error rate, leading to wrongful debt collection and a $20 million settlement.
Google Ad Algorithm Showed Gender Bias in High-Paying Job Advertisement Display
High: Carnegie Mellon researchers discovered that Google's ad algorithm showed high-paying job ads to men significantly more often than to women. The controlled study used simulated profiles to demonstrate systematic gender bias in employment advertising, raising concerns about algorithmic discrimination in hiring practices.
Google Photos AI Labeled Black People as 'Gorillas'
Critical: Google Photos' AI image recognition system labeled photos of Black people as 'gorillas' in 2015. Google's response was to remove the 'gorilla' category entirely rather than fix the underlying algorithmic bias, which reportedly remained unresolved as of 2023.
Boeing 787 Dreamliner Integer Overflow Bug Could Cause Total Electrical Failure After 248 Days Continuous Operation
Critical: The FAA discovered a Boeing 787 software bug in which an integer overflow after 248 days of continuous generator operation could cause total electrical failure. A regulatory directive required power cycling every 120 days as an interim fix.
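The reported figure is consistent with a signed 32-bit counter ticking in hundredths of a second, a common pattern in embedded software. Under that assumption (the tick rate is inferred from the 248-day figure, not confirmed by Boeing), the arithmetic works out as follows:

```python
import ctypes

TICK_HZ = 100  # assumed: counter increments every 10 ms (hundredths of a second)

def days_until_overflow(bits=32, hz=TICK_HZ):
    # A signed counter overflows once it exceeds 2**(bits-1) - 1 ticks.
    return 2**(bits - 1) / hz / 86400

print(round(days_until_overflow(), 2))  # 248.55 days

# Simulating the wraparound with a C-style signed 32-bit integer:
uptime_ticks = 2**31 - 1                          # maximum positive value
wrapped = ctypes.c_int32(uptime_ticks + 1).value  # one more tick...
print(wrapped)  # -2147483648: the counter jumps sharply negative
```

A suddenly negative uptime value can trip sanity checks or arithmetic elsewhere in the control software, which is why the interim directive simply reset the counter by power cycling well before the wraparound point.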
Google Flu Trends AI Overestimated Flu Prevalence by 140% During 2012-2013 Season
Medium: Google's system for predicting flu outbreaks from search data overestimated flu prevalence by 140% during the 2012-2013 season, demonstrating how model drift and overfitting can compromise public health prediction systems.