AI Incident Database
472 documented incidents. Search, filter, and explore.
UK Ofqual A-Level Grading Algorithm Downgraded 40% of Students, Disproportionately Harming Disadvantaged Students
Critical: Ofqual's 2020 A-level grading algorithm downgraded nearly 40% of students' teacher-predicted grades, systematically discriminating against disadvantaged students. Mass protests forced the government to abandon the algorithm and revert to teacher assessments within days.
AI Proctoring Software Disproportionately Flagged Black Students as Cheating
High: AI proctoring software from companies like Proctorio flagged Black students as cheating at disproportionate rates due to facial recognition bias. Thousands of students faced false accusations during the expansion of remote learning in 2020.
Wirecard AI Risk Systems Failed to Detect $2 Billion Fraud Leading to Company Collapse
Critical: Wirecard's AI-powered risk management systems failed to detect $2.1 billion in fictitious transactions over several years, contributing to one of Europe's largest corporate fraud scandals and the company's collapse in 2020.
Detroit Police Wrongfully Arrest Robert Williams Due to Facial Recognition Misidentification
High: Detroit Police wrongfully arrested Robert Williams in 2020 after facial recognition technology falsely matched him to a shoplifting suspect. He was detained for 30 hours before being released when the error was discovered.
Robinhood Algorithmic Trading Interface Linked to Young Trader's Suicide
Critical: A 20-year-old trader died by suicide after Robinhood's algorithmic interface showed a misleading $730,000 negative balance from options trading. The incident led to a $70 million FINRA fine and major platform redesigns.
Babylon Health AI Symptom Checker Misdiagnosed Patients in UK NHS Trials
High: Babylon Health's AI symptom checker, used in the NHS GP at Hand service, provided incorrect medical diagnoses in documented cases, prompting safety concerns from doctors before the company's eventual bankruptcy.
Racial Bias in eGFR Kidney Function Algorithm Delays Treatment for Black Patients
Critical: A widely used kidney function algorithm systematically overestimated kidney health in Black patients for over two decades, delaying critical transplant referrals and treatment for millions.
Palantir AI Platform Deployed Across UK NHS Without Adequate Public Consultation
High: Palantir's AI platform was deployed across the UK NHS to process millions of patient records during COVID-19 without adequate public consultation or GDPR compliance, raising significant privacy concerns about a surveillance company handling sensitive health data.
Google Search AI Promoted Dangerous COVID-19 Misinformation Including Bleach Injection
Critical: Google's AI-powered search features promoted dangerous COVID-19 misinformation, including bleach-injection "cures", during the pandemic. The WHO declared an "infodemic" partly due to AI amplification of health misinformation, prompting Google to implement emergency content policies.
Amazon's AI Dynamic Pricing Algorithm Enabled COVID-19 Price Gouging on Essential Goods
High: Amazon's AI-powered dynamic pricing system enabled massive price gouging on essential COVID-19 supplies in early 2020, leading to FTC investigations and consumer refunds. The incident highlighted the need for algorithmic accountability in emergency pricing.
ZestFinance AI Credit Scoring Under CFPB Scrutiny for Potential Fair Lending Violations
High: ZestFinance's AI credit scoring platform faced CFPB examination over potential fair lending violations, highlighting the difficulty of ensuring that complex machine learning models do not embed discriminatory patterns that traditional auditing methods cannot detect.
Root Insurance AI Pricing Algorithm Investigated for Discriminatory Bias
High: Root Insurance's smartphone-based AI pricing algorithm was investigated by Colorado regulators for potential discrimination against drivers in lower-income and minority neighborhoods through biased telematics data analysis.
Chinese AI Content Moderation Systems Censored Critical COVID-19 Health Information
Critical: Chinese social media platforms' AI content moderation systems automatically censored early COVID-19 warnings from doctors and citizens, delaying the public health response and contributing to the pandemic's spread.
Dutch Court Bans SyRI Welfare Fraud Detection System for Discriminating Against Immigrants
High: The Netherlands' SyRI welfare fraud detection system was banned by a Dutch court in 2020 for systematically discriminating against immigrant communities through algorithmic profiling.
Clearview AI Builds 40+ Billion Image Facial Recognition Database Without Consent
Critical: Clearview AI scraped more than 40 billion facial images without consent to build a comprehensive surveillance database, resulting in over $50 million in fines and settlements across multiple jurisdictions for privacy violations.
Clearview AI Scraped Billions of Photos for Facial Recognition Without Consent
Critical: Clearview AI scraped over 3 billion facial images from social media without consent to build a surveillance database sold to law enforcement. The company faced over $21 million in regulatory fines and a $50 million class action settlement.
Spain's VioGén Algorithm Criticized After Fatal Domestic Violence Cases Misclassified as Low Risk
Critical: Spain's VioGén algorithm for assessing domestic violence risk was criticized after multiple fatal incidents in which victims were classified as low risk and received inadequate protection.
AI Emotion Detection Technology Used in Hiring Despite Lack of Scientific Validity
High: The AI Now Institute found that AI emotion detection technology lacked scientific validity yet was being used by companies for hiring decisions, potentially creating unfair bias against job candidates.
Apple Card Algorithm Accused of Gender Discrimination in Credit Limits
High: Apple's credit card, managed by Goldman Sachs, was accused of algorithmically discriminating against women by offering them significantly lower credit limits than men with equivalent or stronger financial profiles. The controversy began when tech entrepreneur David Heinemeier Hansson posted that his wife received a credit limit 20 times lower than his despite her higher credit score.
HireVue AI Video Interview Tool Faced FTC Complaint Over Facial Analysis Bias
High: HireVue's AI video interview platform faced an FTC complaint over biased facial analysis technology that potentially discriminated against job candidates. The company discontinued its facial analysis features in 2021.