AI Incident Database
472 documented incidents.
EU iBorderCtrl AI Lie Detector Deployed at Borders Despite Accuracy Concerns
High: The EU-funded iBorderCtrl AI lie detector was deployed at borders in Hungary, Latvia, and Greece despite lacking scientific validation for micro-expression deception detection.
Optum AI Algorithm Shows Racial Bias in Healthcare Risk Predictions
Critical: Optum's widely-used healthcare risk prediction algorithm showed severe racial bias, requiring Black patients to be significantly sicker than white patients to receive the same care recommendations, affecting an estimated 200 million patients nationwide.
Optum Healthcare Algorithm Showed Racial Bias Against Black Patients
Critical: Optum's widely-used healthcare risk algorithm systematically underestimated care needs for Black patients by using healthcare spending as a proxy for health status, affecting over 10 million patients across major health systems.
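The failure mode in these Optum entries is a label/proxy mismatch: a model trained to predict healthcare *spending* inherits any group difference in access to care, even when true health need is identical. A minimal synthetic sketch (all numbers made up, not Optum's model or data) showing how a spending-based risk score under-selects the lower-access group:

```python
import random

random.seed(0)

def simulate(n=10_000):
    """Synthetic illustration: two groups with identical distributions of
    true health need, but group B generates less spending per unit of need
    (e.g. due to unequal access to care). Not real data."""
    rows = []
    for _ in range(n):
        group = random.choice("AB")
        need = random.gauss(5.0, 1.0)          # true health need, same for both groups
        access = 1.0 if group == "A" else 0.7  # group B spends less per unit of need
        spending = need * access + random.gauss(0, 0.2)
        rows.append((group, need, spending))
    return rows

rows = simulate()
# A risk score trained on spending effectively *is* predicted spending,
# so ranking patients by spending shows what such an algorithm would do.
threshold = sorted(s for _, _, s in rows)[int(0.97 * len(rows))]  # top 3% get extra care
flagged = [(g, need) for g, need, s in rows if s >= threshold]
share_b = sum(1 for g, _ in flagged if g == "B") / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%}")  # far below the 50% equal need would imply
```

Despite equal need, almost everyone flagged for extra care comes from group A, which is the shape of the disparity the entries above describe.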
Babylon Health AI Misdiagnosed Medical Conditions as Non-Urgent in NHS Service
High: Babylon Health's AI triage system in the NHS GP at Hand service incorrectly classified serious medical conditions as non-urgent. A BBC investigation revealed systematic failures that could have delayed critical care for patients.
PredPol Predictive Policing Algorithm Reinforced Racial Bias in LAPD Deployment
High: LAPD's use of PredPol predictive policing software from 2011 to 2019 created feedback loops that disproportionately targeted Black and Latino neighborhoods, with multiple academic studies documenting systematic bias before the department ended the program.
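The feedback loop those studies document can be reproduced with a toy model (a hypothetical sketch, not PredPol's actual algorithm): patrols go where recorded incidents are highest, patrolled areas generate far more records, and an initial disparity locks in even when true crime rates are identical:

```python
import random

random.seed(1)

# Two districts with IDENTICAL true crime rates; district 0 merely starts
# with more *recorded* incidents (e.g. from historically heavier policing).
true_rate = 0.10                 # daily incident chance per resident (made-up)
recorded = [12, 8]               # historical recorded counts: the initial bias

for day in range(365):
    # "Predictive" allocation: patrol the district with the most records.
    target = 0 if recorded[0] >= recorded[1] else 1
    for d in range(2):
        incidents = sum(random.random() < true_rate for _ in range(100))
        # Patrolled district: most incidents get recorded; the other: far fewer.
        recorded[d] += incidents if d == target else incidents // 4

print(f"recorded after a year: {recorded}  (true rates were equal)")
```

The recorded-incident gap grows all year even though both districts behave identically, which is why the academic critiques focus on recorded-crime training data rather than the prediction model itself.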
AI Voice Deepfake Defrauds UK Energy Company of $243,000
Medium: Criminals used AI voice cloning technology to impersonate a parent company CEO, successfully deceiving a UK energy firm executive into authorizing a $243,000 wire transfer in one of the first documented deepfake fraud cases.
Uber's Surge Pricing Algorithm Shows Disparate Impact on Minority Neighborhoods
Medium: Academic research revealed Uber's AI surge pricing algorithm consistently charged higher prices in minority and lower-income neighborhoods due to supply-demand patterns that correlated with demographics.
AI Facial Recognition Used to Suppress Hong Kong Protesters
Critical: Hong Kong authorities deployed AI facial recognition through smart lampposts and CCTV networks to identify pro-democracy protesters in 2019-2020, leading to arrests and systematic suppression of assembly rights.
Facial Recognition at London King's Cross Station Operated Without Public Knowledge
High: Facial recognition cameras at London's King's Cross development operated without public knowledge for 18 months, processing millions of people's biometric data in violation of GDPR before being discovered and shut down.
CBP Facial Recognition Systems Show Racial and Demographic Bias in Border Screening
High: CBP's facial recognition systems at US border crossings demonstrated significant bias, with higher error rates for people of color, women, and elderly travelers. GAO investigations revealed systematic disparities affecting millions of border crossers annually.
Siri AI Assistant Recorded Private Conversations and Sent to Apple Contractors
High: Apple's Siri assistant inadvertently recorded private conversations due to false wake word triggers, with contractors regularly hearing confidential medical information and intimate moments, leading to a $95M settlement.
DoorDash AI Payment Algorithm Used Tips to Subsidize Base Pay Instead of Supplementing Driver Earnings
High: DoorDash's AI payment algorithm systematically used customer tips to reduce driver base pay rather than supplement it, affecting approximately 250,000 drivers over two years before being exposed in 2019, resulting in a $2.5M settlement with the District of Columbia Attorney General.
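The pay model at issue, as described in the 2019 reporting, guaranteed a per-delivery payout and counted the tip toward that guarantee, so the platform's own contribution shrank as the tip grew. A minimal sketch with a made-up $10 guarantee and $1 minimum base (not DoorDash's actual parameters):

```python
def driver_pay(tip, guarantee=10.00, min_base=1.00):
    """Guaranteed-payout model: the tip offsets the platform's base pay.
    All dollar amounts are illustrative, not DoorDash's real figures."""
    base = max(min_base, guarantee - tip)   # platform pays less as the tip rises
    return base + tip, base

# A $0 tip and an $8 tip yield the same total for the driver:
for tip in (0.00, 4.00, 8.00):
    total, base = driver_pay(tip)
    print(f"tip ${tip:.2f} -> base ${base:.2f}, driver total ${total:.2f}")
```

Under this scheme the tip mostly subsidizes the platform rather than the driver, which is exactly the behavior the investigation exposed.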
Epic Systems Sepsis Prediction AI Missed Two-Thirds of Sepsis Cases in University of Michigan Study
High: An external validation of Epic Systems' sepsis prediction model at Michigan Medicine found that it failed to identify 67% of sepsis cases, while only 12% of its alerts were confirmed as sepsis, causing alert fatigue among clinicians.
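Alert-quality figures like these come straight from the alert confusion matrix; a quick sketch with illustrative, made-up counts (not the Michigan study's data) shows how positive predictive value and miss rate are computed:

```python
def alert_metrics(tp, fp, fn):
    """Confusion-matrix metrics for a clinical alerting tool.
    tp: true alerts, fp: false alarms, fn: missed cases (all hypothetical)."""
    ppv = tp / (tp + fp)            # share of alerts that were real sepsis
    sensitivity = tp / (tp + fn)    # share of sepsis cases the tool caught
    return ppv, sensitivity

# Hypothetical counts: 100 true alerts, 900 false alarms, 200 missed cases.
ppv, sens = alert_metrics(tp=100, fp=900, fn=200)
print(f"PPV {ppv:.0%}, sensitivity {sens:.0%}, miss rate {1 - sens:.0%}")
# -> PPV 10%, sensitivity 33%, miss rate 67%
```

A low PPV means clinicians see many alarms per real case, and a low sensitivity means most real cases never trigger an alert: the two failure modes can coexist, as this entry describes.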
LinkedIn AI Profile Suggestions Showed Gender Bias in Career Recommendations
Medium: LinkedIn's AI-powered job and skill recommendations showed systematic gender bias, suggesting administrative roles to women and executive positions to men. The company acknowledged the issue and implemented changes to reduce bias in their algorithms.
Autonomous Bus Strikes Pedestrian During Vienna Trial Due to Sensor Failure
High: An autonomous electric bus struck a pedestrian in Vienna during 2019 trial operations when its AI sensor systems failed to detect the person crossing the street.
Chinese AI Surveillance Systems Enable Mass Detention of Uyghurs in Xinjiang
Critical: China deployed comprehensive AI surveillance systems including facial recognition, predictive policing, and data integration platforms to systematically identify and detain over one million Uyghur Muslims and other ethnic minorities in Xinjiang since 2017.
AI-Powered Surveillance System Used for Uyghur Persecution in Xinjiang
Critical: Chinese authorities deployed AI-powered surveillance systems from companies including Huawei and Hikvision to systematically track and profile Uyghur Muslims in Xinjiang, contributing to mass detention and persecution. The technology used facial recognition and behavioral analysis to automatically target individuals based on ethnicity.
Amazon Warehouse AI Productivity Tracking Led to Unsafe Working Conditions and Automated Terminations
High: Amazon's AI-powered warehouse productivity tracking system automatically terminated workers who couldn't meet algorithm-set quotas, leading to unsafe working speeds and injury rates significantly higher than industry averages.
AI Essay Grading Systems Systematically Penalize Non-Native English Speakers
High: Research revealed AI essay grading systems like e-rater systematically gave lower scores to non-native English speakers despite equivalent content quality, affecting standardized test outcomes for international students.
Amazon Alexa Contractors Listened to Private User Conversations Without Consent
High: Bloomberg revealed Amazon employed thousands of contractors worldwide to listen to Alexa recordings from users' homes for speech training, exposing private conversations without adequate user consent and leading to privacy lawsuits.