AI Incident Database
472 documented incidents.
Zillow's AI Home-Buying Algorithm Lost $881 Million
Critical: Zillow's AI-powered home-buying program lost $881 million after its Zestimate algorithm systematically overpaid for properties, forcing the company to shut down Zillow Offers and lay off 2,000 employees in November 2021.
Clearview AI Fined €60M by EU Data Protection Authorities for Facial Recognition Database GDPR Violations
Critical: Multiple EU data protection authorities fined Clearview AI a combined total of approximately €60M for collecting billions of facial images without consent, violating GDPR biometric data protections across Italy, France, Greece, and the UK.
AI Real Estate Valuation Tools Systematically Undervalued Black-Owned Homes
High: Major AI-powered real estate platforms including Zillow, Redfin, and Realtor.com systematically undervalued homes in Black neighborhoods. Brookings research documented billions in lost equity, prompting regulatory action and ongoing litigation.
ATS AI Resume Parsing Systems Filter Out 27 Million Qualified Workers Due to Technical Errors
Critical: A Harvard Business School study revealed that AI-powered resume parsing in major ATS systems systematically filtered out 27 million qualified workers due to technical parsing errors, rejecting candidates based on formatting rather than qualifications.
Tesla Autopilot Emergency Vehicle Collision Pattern - NHTSA Investigation
Critical: Between 2018 and 2022, Tesla vehicles operating on Autopilot crashed into at least 16 stationary emergency vehicles, prompting an NHTSA investigation and highlighting the vision system's limitations with stationary objects.
ShotSpotter AI Gunshot Detection System Linked to Wrongful Police Raids and Racial Disparities
High: ShotSpotter's AI gunshot detection system generated high false-positive rates, leading to aggressive police responses in predominantly Black neighborhoods. Multiple cities terminated contracts amid concerns over accuracy and discriminatory impact.
EleutherAI's GPT-Neo Generated Extremist Content When Prompted
Medium: EleutherAI's open-source GPT-Neo models generated extremist content and propaganda when prompted, highlighting safety risks in unfiltered language models released without built-in guardrails.
Epic Deterioration Index AI Failed to Predict COVID-19 Patient Deaths
High: Epic's widely deployed AI early-warning system failed to accurately predict COVID-19 patient deterioration, missing the majority of cases according to University of Michigan research.
AI-Powered Retail Surveillance Systems Disproportionately Tracked Black Shoppers at Major Retailers
High: AI surveillance systems deployed by major retailers including Walmart, Target, and CVS were documented by the ACLU to disproportionately flag and track Black shoppers, embedding racial profiling into automated retail security.
TikTok Algorithm Amplified Dangerous Challenges Contributing to Child Deaths
Critical: TikTok's recommendation algorithm promoted dangerous viral challenges such as the 'blackout challenge' to children, contributing to at least 15 documented deaths. Multiple families filed wrongful-death lawsuits alleging the algorithm specifically targeted vulnerable minors with life-threatening content.
AI Supply Chain Forecasting Failures Amplified 2021 Global Chip Shortage
Critical: AI-powered supply chain forecasting systems used by major manufacturers failed to predict the 2021 chip shortage and then amplified it through algorithmic panic ordering. The failures contributed significantly to the shortage's $500B global economic impact.
Autonomous Ship Yara Birkeland Navigation AI Failures Delay Commercial Deployment
High: The world's first autonomous electric container ship experienced repeated AI navigation failures during testing, forcing a four-year delay in commercial deployment, from 2020 to 2024.
Lemonade Insurance Used AI to Analyze Customer Facial Expressions During Claims Process
Medium: Lemonade Insurance used AI to analyze customer facial expressions and speech patterns during video claims without proper disclosure. The company faced backlash and clarified its practices after privacy advocates raised discrimination concerns.
Lemonade Insurance AI Used Facial Recognition in Claims Processing
High: Lemonade Insurance faced regulatory scrutiny after tweeting about using AI and facial recognition to analyze video claims, raising concerns about algorithmic bias in insurance decisions.
AI Content Moderation Systems Systematically Removed Palestinian Content During Gaza Conflicts
High: AI content moderation systems on major social media platforms systematically removed Palestinian content during the 2021 Sheikh Jarrah protests and the 2023 Gaza conflict. Human Rights Watch documented widespread censorship affecting millions of users.
Facebook AI Content Moderation Systematically Censored Palestinian News During Gaza Conflicts
High: Meta's AI content moderation systems systematically censored Palestinian news and voices during the 2021 and 2023 Gaza conflicts, with Human Rights Watch documenting widespread suppression of legitimate content.
Google AI Dermatology Tool Shows Racial Bias in Skin Condition Detection
Medium: Google's AI dermatology tool demonstrated significant accuracy disparities across skin tones due to biased training data, highlighting the critical need for diverse datasets in medical AI applications.
AI Video Interview Platforms Discriminated Against Candidates with Disabilities Through Biased Scoring Algorithms
High: AI video interview platforms from HireVue, myInterview, and Pymetrics systematically discriminated against candidates with disabilities by penalizing atypical speech, facial expressions, and eye movements. The EEOC issued guidance and multiple lawsuits were filed.
Colonial Pipeline AI Security Monitoring Failed to Prevent $4.4M Ransomware Attack
Critical: Colonial Pipeline's AI security monitoring failed to detect a ransomware attack that shut down critical US fuel infrastructure for six days. The company paid a $4.4M ransom to the DarkSide group after automated systems missed breach indicators.
ShotSpotter AI Gunshot Detection System Led to Wrongful Police Raids and Community Harm
High: ShotSpotter's AI gunshot detection system exhibited false-positive rates of 86-95%, leading to wrongful police raids and discriminatory enforcement in predominantly Black neighborhoods across multiple US cities.