AI Incident Database

472 documented incidents.

EU iBorderCtrl AI Lie Detector Deployed at Borders Despite Accuracy Concerns

High

The EU-funded iBorderCtrl AI lie detector was deployed at borders in Hungary, Latvia, and Greece despite lacking scientific validation for micro-expression deception detection.

Oct 31, 2019|Algorithmic Bias|Government|Other/Unknown|$4,500,000

Optum AI Algorithm Shows Racial Bias in Healthcare Risk Predictions

Critical

Optum's widely-used healthcare risk prediction algorithm showed severe racial bias, requiring Black patients to be significantly sicker than white patients to receive the same care recommendations, affecting an estimated 200 million patients nationwide.

Oct 25, 2019|Bias|Healthcare|Other/Unknown

Optum Healthcare Algorithm Showed Racial Bias Against Black Patients

Critical

Optum's widely-used healthcare risk algorithm systematically underestimated care needs for Black patients by using healthcare spending as a proxy for health status, affecting over 10 million patients across major health systems.

Oct 25, 2019|Bias|Healthcare|Other/Unknown
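The cost-as-proxy mechanism described in this entry can be illustrated with a few lines of synthetic data. The sketch below is illustrative only: the 30% spending gap at equal illness, the gamma-distributed illness burden, and the top-3% enrollment cutoff are invented assumptions, not figures from the study.

```python
# Minimal sketch (synthetic data, illustrative numbers only) of how a
# cost-as-proxy label understates need for a group with less access to care.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True illness burden: identical distribution in both groups by construction.
illness = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)  # 0 and 1 are illustrative group labels

# Observed spending: proportional to illness, but group 1 spends less at the
# same illness level (hypothetical 30% access gap).
access = np.where(group == 1, 0.7, 1.0)
spending = illness * access * rng.lognormal(0.0, 0.25, size=n)

# A risk score trained on cost effectively ranks patients by spending.
# Flag the top 3% for extra care (hypothetical enrollment cutoff).
threshold = np.quantile(spending, 0.97)
flagged = spending >= threshold

for g in (0, 1):
    mask = flagged & (group == g)
    print(f"group {g}: mean illness among flagged = {illness[mask].mean():.2f}")
# Group 1's flagged patients are sicker on average: at a fixed spending
# threshold, they needed a higher illness level to generate the same cost,
# which is the "sicker to receive the same recommendation" pattern reported.
```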

Babylon Health AI Misdiagnosed Medical Conditions as Non-Urgent in NHS Service

High

Babylon Health's AI triage system in the NHS GP at Hand service incorrectly classified serious medical conditions as non-urgent. A BBC investigation revealed systematic failures that could have delayed critical care for patients.

Oct 14, 2019|Medical Error|Healthcare|Other/Unknown

PredPol Predictive Policing Algorithm Reinforced Racial Bias in LAPD Deployment

High

LAPD's use of PredPol predictive policing software from 2011 to 2019 created feedback loops that disproportionately targeted Black and Latino neighborhoods, with multiple academic studies documenting systematic bias before the department ended the program.

Oct 1, 2019|Bias|Government|Other/Unknown
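A minimal simulation makes the feedback loop concrete. Everything here is hypothetical: two areas with identical true crime rates, a small initial imbalance in recorded crime, and a hotspot rule that patrols wherever records are highest.

```python
# Minimal sketch (synthetic, illustrative) of a predictive-policing feedback
# loop: patrols go where recorded crime is highest, and only patrolled areas
# generate new records.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([10.0, 10.0])  # identical underlying crime in both areas
records = np.array([12.0, 8.0])     # small initial imbalance in recorded crime
patrol_log = []

for day in range(30):
    hot = int(np.argmax(records))          # hotspot rule: patrol the top area
    patrol_log.append(hot)
    records[hot] += rng.poisson(true_rate[hot])  # only what is observed is logged

print("share of patrols sent to area 0:", patrol_log.count(0) / len(patrol_log))
print("recorded crime by area:", records)
# With equal true rates, the initially over-recorded area is patrolled every
# day and its record grows without bound, while the other area's record never
# updates. The data confirms the prediction the data itself produced.
```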

AI Voice Deepfake Defrauds UK Energy Company of $243,000

Medium

Criminals used AI voice cloning technology to impersonate a parent company CEO, successfully deceiving a UK energy firm executive into authorizing a $243,000 wire transfer in one of the first documented deepfake fraud cases.

Sep 5, 2019|Deepfake / Fraud|Energy|Other/Unknown|$243,000

Uber's Surge Pricing Algorithm Shows Disparate Impact on Minority Neighborhoods

Medium

Academic research revealed Uber's AI surge pricing algorithm consistently charged higher prices in minority and lower-income neighborhoods due to supply-demand patterns that correlated with demographics.

Sep 1, 2019|Bias|Technology|Other/Unknown
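A sketch of how such an audit can surface disparate impact even though the pricing rule never sees demographics. The data is synthetic, and the strength of the link between neighborhood composition and the supply-demand gap is an invented assumption for illustration.

```python
# Minimal sketch (synthetic data) of a pricing audit: correlate neighborhood
# demographics with mean quoted prices from a facially neutral surge rule.
import numpy as np

rng = np.random.default_rng(2)
n_hoods = 200
minority_share = rng.uniform(0, 1, n_hoods)  # hypothetical demographics

# Assumed relationship: the supply-demand gap is demographically patterned
# (e.g., fewer available drivers in some areas), plus noise.
gap = 0.5 * minority_share + rng.normal(0, 0.2, n_hoods)
surge = 1.0 + np.clip(gap, 0, None)          # surge multiplier follows the gap

r = np.corrcoef(minority_share, surge)[0, 1]
print(f"correlation(minority share, surge multiplier): {r:.2f}")
# The rule conditions only on supply and demand, yet prices still track
# demographics because the inputs themselves are demographically patterned.
```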

AI Facial Recognition Used to Suppress Hong Kong Protesters

Critical

Hong Kong authorities deployed AI facial recognition through smart lampposts and CCTV networks to identify pro-democracy protesters in 2019-2020, leading to arrests and systematic suppression of assembly rights.

Aug 24, 2019|Safety Failure|Government|Other/Unknown

Facial Recognition at London King's Cross Station Operated Without Public Knowledge

High

Facial recognition cameras at London's King's Cross development operated without public knowledge for 18 months, processing millions of people's biometric data in violation of GDPR before being discovered and shut down.

Aug 11, 2019|Surveillance|Other|Other/Unknown

CBP Facial Recognition Systems Show Racial and Demographic Bias in Border Screening

High

CBP's facial recognition systems at US border crossings demonstrated significant bias, with higher error rates for people of color, women, and elderly travelers. GAO investigations revealed systematic disparities affecting millions of border crossers annually.

Aug 1, 2019|Bias|Government|Other/Unknown

Siri AI Assistant Recorded Private Conversations and Sent to Apple Contractors

High

Apple's Siri assistant inadvertently recorded private conversations due to false wake word triggers, with contractors regularly hearing confidential medical information and intimate moments, leading to a $95M settlement.

Jul 26, 2019|Privacy Leak|Technology|Other/Unknown|$95,000,000

DoorDash AI Payment Algorithm Used Tips to Subsidize Base Pay Instead of Supplementing Driver Earnings

High

DoorDash's pay algorithm systematically counted customer tips toward drivers' guaranteed minimum rather than paying them on top, affecting approximately 250,000 drivers over two years before being exposed in 2019 and resulting in a $2.5M settlement with the District of Columbia attorney general.

Jul 22, 2019|Financial Error|Technology|Other/Unknown|$2,500,000
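The reported pay model is easy to state as arithmetic. A minimal sketch, assuming the widely reported $1 base contribution and a hypothetical $10 guaranteed amount:

```python
# Minimal sketch of the pay model as reported in 2019 (illustrative numbers):
# the driver is promised a guaranteed amount per delivery, DoorDash pays a
# $1 base, and the customer tip fills the gap before DoorDash adds anything.

BASE_PAY = 1.00  # reported flat base contribution

def driver_payout(guarantee: float, tip: float) -> tuple[float, float]:
    """Return (doordash_contribution, total_driver_pay)."""
    # DoorDash tops up only what the tip leaves uncovered, never below $1.
    contribution = max(BASE_PAY, guarantee - tip)
    return contribution, contribution + tip

for tip in (0.00, 3.00, 9.00):
    dd, total = driver_payout(guarantee=10.00, tip=tip)
    print(f"tip ${tip:.2f} -> DoorDash pays ${dd:.2f}, driver gets ${total:.2f}")
# tip $0.00 -> DoorDash pays $10.00, driver gets $10.00
# tip $3.00 -> DoorDash pays $7.00,  driver gets $10.00
# tip $9.00 -> DoorDash pays $1.00,  driver gets $10.00
# Until the tip exceeds guarantee minus $1, tipping changes the driver's
# total by nothing; it only reduces DoorDash's own contribution.
```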

Epic Systems Sepsis Prediction AI Tool Missed 67% of Sepsis Cases in University of Michigan Study

High

Epic Systems' sepsis prediction tool, evaluated at the University of Michigan, failed to identify 67% of patients who developed sepsis, while only about 12% of its alerts were confirmed as sepsis, causing alert fatigue among clinicians.

Jul 17, 2019|Medical Error|Healthcare|Other/Unknown
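The alert-fatigue arithmetic follows directly from the study's headline figures. A worked sketch using approximate published values (33% sensitivity, 83% specificity, roughly 7% prevalence; the cohort size is rounded):

```python
# Worked arithmetic (figures approximate those reported for the study) showing
# why most alerts were false: at ~7% sepsis prevalence, even 83% specificity
# produces far more false than true alerts.
n = 38_000           # hospitalizations (order of magnitude of the study)
prevalence = 0.07    # ~7% of patients developed sepsis
sensitivity = 0.33   # model caught 33%, i.e. missed 67%
specificity = 0.83

sepsis = n * prevalence
healthy = n - sepsis
true_alerts = sensitivity * sepsis          # sepsis patients correctly flagged
false_alerts = (1 - specificity) * healthy  # non-sepsis patients flagged

ppv = true_alerts / (true_alerts + false_alerts)
print(f"true alerts:  {true_alerts:,.0f}")
print(f"false alerts: {false_alerts:,.0f}")
print(f"PPV: {ppv:.0%}  (close to the ~12% the study reported)")
```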

LinkedIn AI Profile Suggestions Showed Gender Bias in Career Recommendations

Medium

LinkedIn's AI-powered job and skill recommendations showed systematic gender bias, suggesting administrative roles to women and executive positions to men. The company acknowledged the issue and implemented changes to reduce bias in its algorithms.

Jul 17, 2019|Bias|HR / Recruiting|Other/Unknown

Autonomous Bus Strikes Pedestrian During Vienna Trial Due to Sensor Failure

High

An autonomous electric bus struck a pedestrian in Vienna during 2019 trial operations when its AI sensor systems failed to detect the person crossing the street.

Jun 25, 2019|Safety Failure|Technology|Other/Unknown

Chinese AI Surveillance Systems Enable Mass Detention of Uyghurs in Xinjiang

Critical

China deployed comprehensive AI surveillance systems including facial recognition, predictive policing, and data integration platforms to systematically identify and detain over one million Uyghur Muslims and other ethnic minorities in Xinjiang since 2017.

May 1, 2019|Bias|Government|Other/Unknown

AI-Powered Surveillance System Used for Uyghur Persecution in Xinjiang

Critical

Chinese authorities deployed AI-powered surveillance systems from companies including Huawei and Hikvision to systematically track and profile Uyghur Muslims in Xinjiang, contributing to mass detention and persecution. The technology used facial recognition and behavioral analysis to automatically target individuals based on ethnicity.

May 1, 2019|Bias|Government|Other/Unknown

Amazon Warehouse AI Productivity Tracking Led to Unsafe Working Conditions and Automated Terminations

High

Amazon's AI-powered warehouse productivity tracking system automatically terminated workers who couldn't meet algorithm-set quotas, leading to unsafe working speeds and injury rates significantly higher than industry averages.

Apr 29, 2019|Safety Failure|Technology|Other/Unknown|$50,000,000

AI Essay Grading Systems Systematically Penalize Non-Native English Speakers

High

Research revealed AI essay grading systems like e-rater systematically gave lower scores to non-native English speakers despite equivalent content quality, affecting standardized test outcomes for international students.

Apr 15, 2019|Bias|Education|Other/Unknown

Amazon Alexa Contractors Listened to Private User Conversations Without Consent

High

Bloomberg revealed Amazon employed thousands of contractors worldwide to listen to Alexa recordings from users' homes to train its speech recognition systems, exposing private conversations without adequate user consent and leading to privacy lawsuits.

Apr 10, 2019|Privacy Leak|Technology|Other/Unknown|$50,000,000