AI Incident Database

472 documented incidents.

Zillow's AI Home-Buying Algorithm Lost $881 Million

Critical

Zillow's AI-powered home buying program lost $881 million after its Zestimate algorithm systematically overpaid for properties, forcing the company to shut down Zillow Offers and lay off 2,000 employees in November 2021.

Nov 2, 2021|Financial Error|Finance|Other/Unknown|$881,000,000

Clearview AI Fined €60M by EU Data Protection Authorities for Facial Recognition Database GDPR Violations

Critical

Multiple EU data protection authorities fined Clearview AI a combined total of approximately €60M for collecting billions of facial images without consent, violating GDPR biometric data protections across Italy, France, the UK, and Greece.

Oct 19, 2021|Privacy Leak|Technology|Other/Unknown|$70,000,000

AI Real Estate Valuation Tools Systematically Undervalued Black-Owned Homes

High

Major AI-powered real estate platforms including Zillow, Redfin, and Realtor.com systematically undervalued homes in Black neighborhoods. Brookings research documented billions in lost equity, prompting regulatory action and ongoing litigation.

Sep 1, 2021|Bias|Finance|Other/Unknown

ATS AI Resume Parsing Systems Filter Out 27 Million Qualified Workers Due to Technical Errors

Critical

A Harvard Business School study revealed that AI-powered resume parsing in major ATS systems systematically filtered out 27 million qualified workers due to technical parsing errors, rejecting candidates based on formatting rather than qualifications.

Sep 1, 2021|Bias|HR / Recruiting|Other/Unknown

Tesla Autopilot Emergency Vehicle Collision Pattern - NHTSA Investigation

Critical

Between 2018 and 2022, Tesla Autopilot vehicles crashed into at least 16 stationary emergency vehicles, prompting an NHTSA investigation and highlighting vision-system limitations with stationary objects.

Aug 13, 2021|Safety Failure|Technology|Other/Unknown|$50,000,000

ShotSpotter AI Gunshot Detection System Linked to Wrongful Police Raids and Racial Disparities

High

ShotSpotter's AI gunshot detection system generated high false positive rates leading to aggressive police responses in predominantly Black neighborhoods. Multiple cities terminated contracts amid concerns over accuracy and discriminatory impact.

Aug 1, 2021|Bias|Government|Other/Unknown|$50,000,000

EleutherAI's GPT-Neo Generated Extremist Content When Prompted

Medium

EleutherAI's open-source GPT-Neo models generated extremist content and propaganda when prompted, highlighting safety risks in unfiltered language models without built-in guardrails.

Jul 15, 2021|Safety Failure|Technology|Other/Unknown

Epic Deterioration Index AI Failed to Predict COVID-19 Patient Deaths

High

Epic's widely deployed AI early warning system failed to accurately predict COVID-19 patient deterioration, missing the majority of cases, according to University of Michigan research.

Jul 15, 2021|Medical Error|Healthcare|Other/Unknown

AI-Powered Retail Surveillance Systems Disproportionately Tracked Black Shoppers at Major Retailers

High

ACLU investigations documented that AI surveillance systems deployed by major retailers including Walmart, Target, and CVS disproportionately flagged and tracked Black shoppers, embedding racial profiling into automated retail security.

Jul 13, 2021|Bias|Other|Other/Unknown

TikTok Algorithm Amplified Dangerous Challenges Contributing to Child Deaths

Critical

TikTok's recommendation algorithm promoted dangerous viral challenges like the 'blackout challenge' to children, contributing to at least 15 documented deaths. Multiple families filed wrongful death lawsuits alleging the algorithm specifically targeted vulnerable minors with life-threatening content.

Jul 1, 2021|Safety Failure|Technology|Other/Unknown

AI Supply Chain Forecasting Failures Amplified 2021 Global Chip Shortage

Critical

AI-powered supply chain forecasting systems used by major manufacturers failed to predict the 2021 chip shortage and amplified it through algorithmic panic ordering. The failures contributed significantly to the $500B global economic impact.

Jun 15, 2021|Agent Error|Technology|Other/Unknown

Autonomous Ship Yara Birkeland Navigation AI Failures Delay Commercial Deployment

High

The world's first autonomous electric container ship experienced repeated AI navigation failures during testing, forcing a four-year delay in commercial deployment from 2020 to 2024.

Jun 15, 2021|Safety Failure|Other|Other/Unknown|$25,000,000

Lemonade Insurance Used AI to Analyze Customer Facial Expressions During Claims Process

Medium

Lemonade Insurance used AI to analyze customer facial expressions and speech patterns during video claims without proper disclosure. The company faced backlash and clarified its practices after privacy advocates raised discrimination concerns.

May 25, 2021|Privacy Leak|Insurance|Other/Unknown

Lemonade Insurance AI Used Facial Recognition in Claims Processing

High

Lemonade Insurance faced regulatory scrutiny after tweeting about using AI and facial recognition to analyze video claims, raising concerns about algorithmic bias in insurance decisions.

May 25, 2021|Bias|Insurance|Other/Unknown

AI Content Moderation Systems Systematically Removed Palestinian Content During Gaza Conflicts

High

AI content moderation systems on major social media platforms systematically removed Palestinian content during the 2021 Sheikh Jarrah protests and the 2023 Gaza conflict. Human Rights Watch documented widespread censorship affecting millions of users.

May 21, 2021|Bias|Media|Other/Unknown

Facebook AI Content Moderation Systematically Censored Palestinian News During Gaza Conflicts

High

Meta's AI content moderation systems systematically censored Palestinian news and voices during the 2021 and 2023 Gaza conflicts, with Human Rights Watch documenting widespread suppression of legitimate content.

May 21, 2021|Bias|Media|Other/Unknown

Google AI Dermatology Tool Shows Racial Bias in Skin Condition Detection

Medium

Google's AI dermatology tool demonstrated significant accuracy disparities across skin tones due to biased training data, highlighting the critical need for diverse datasets in medical AI applications.

May 18, 2021|Bias|Healthcare|Google

AI Video Interview Platforms Discriminated Against Candidates with Disabilities Through Biased Scoring Algorithms

High

AI video interview platforms by HireVue, myInterview, and Pymetrics systematically discriminated against candidates with disabilities by penalizing atypical speech, facial expressions, and eye movements. The EEOC issued guidance and multiple lawsuits were filed.

May 12, 2021|Bias|HR / Recruiting|Other/Unknown

Colonial Pipeline AI Security Monitoring Failed to Prevent $4.4M Ransomware Attack

Critical

Colonial Pipeline's AI security monitoring failed to detect a ransomware attack that shut down critical US fuel infrastructure for six days. The company paid a $4.4M ransom to the DarkSide group after automated systems missed breach indicators.

May 8, 2021|Safety Failure|Other|Other/Unknown|$4,400,000

ShotSpotter AI Gunshot Detection System Led to Wrongful Police Raids and Community Harm

High

ShotSpotter's AI gunshot detection system exhibited false positive rates of 86-95%, leading to wrongful police raids and discriminatory enforcement in predominantly Black neighborhoods across multiple US cities.

May 1, 2021|Bias|Government|Other/Unknown|$15,000,000