AI Incident Database

472 documented incidents.

UK Ofqual A-Level Grading Algorithm Downgraded 40% of Students, Disproportionately Harming Disadvantaged Students

Critical

Ofqual's A-level grading algorithm in 2020 downgraded nearly 40% of students' predicted grades, systematically discriminating against disadvantaged students. Mass protests forced the government to abandon the algorithm and revert to teacher assessments within days.

Aug 13, 2020|Bias|Education|Other/Unknown

AI Proctoring Software Disproportionately Flagged Black Students as Cheating

High

AI proctoring software from companies like Proctorio flagged Black students as cheating at disproportionate rates due to facial recognition bias. Thousands of students faced false accusations during remote learning expansion in 2020.

Aug 1, 2020|Bias|Education|Other/Unknown

Wirecard AI Risk Systems Failed to Detect $2 Billion Fraud Leading to Company Collapse

Critical

Wirecard's AI-powered risk management systems failed to detect $2.1 billion in fictitious transactions over several years, contributing to one of Europe's largest corporate fraud scandals and the company's collapse in 2020.

Jun 25, 2020|Financial Error|Finance|Other/Unknown|$2,000,000,000

Detroit Police Wrongfully Arrest Robert Williams Due to Facial Recognition Misidentification

High

Detroit Police wrongfully arrested Robert Williams in 2020 after facial recognition technology falsely matched him to a shoplifting suspect. He was detained for 30 hours before being released when the error was discovered.

Jun 24, 2020|Bias|Government|Other/Unknown

Robinhood Algorithmic Trading Interface Linked to Young Trader's Suicide

Critical

A 20-year-old trader died by suicide after Robinhood's algorithmic interface showed a misleading $730,000 negative balance from options trading. The incident led to a $70M FINRA fine and major platform redesigns.

Jun 19, 2020|Safety Failure|Finance|Other/Unknown

Babylon Health AI Symptom Checker Misdiagnosed Patients in UK NHS Trials

High

Babylon Health's AI symptom checker used in NHS GP at Hand service provided incorrect medical diagnoses in documented cases, raising safety concerns from doctors before the company's eventual bankruptcy.

Jun 15, 2020|Medical Error|Healthcare|Other/Unknown|$25,000,000

Racial Bias in eGFR Kidney Function Algorithm Delays Treatment for Black Patients

Critical

A widely used kidney function algorithm systematically overestimated kidney health in Black patients for over two decades, delaying critical transplant referrals and treatment for millions of patients.

Jun 1, 2020|Bias|Healthcare|Other/Unknown

Palantir AI Platform Deployed Across UK NHS Without Adequate Public Consultation

High

Palantir's AI platform was deployed across the UK NHS to process millions of patient records during COVID-19 without adequate public consultation or clear GDPR compliance, raising significant privacy concerns about a surveillance-focused company handling sensitive health data.

Apr 21, 2020|Data Governance|Healthcare|Other/Unknown

Google Search AI Promoted Dangerous COVID-19 Misinformation Including Bleach Injection

Critical

Google's AI-powered search features surfaced dangerous COVID-19 misinformation, including purported bleach-injection cures, during the pandemic. The WHO declared an 'infodemic' driven partly by AI amplification of health misinformation, prompting Google to implement emergency content policies.

Apr 1, 2020|Medical Error|Healthcare|Google

Amazon's AI Dynamic Pricing Algorithm Enabled COVID-19 Price Gouging on Essential Goods

High

Amazon's AI-powered dynamic pricing system enabled massive price gouging on essential COVID-19 supplies in early 2020, leading to FTC investigations and consumer refunds. The incident highlighted the need for algorithmic accountability in emergency pricing.

Mar 15, 2020|Other|Technology|Other/Unknown|$50,000,000

ZestFinance AI Credit Scoring Under CFPB Scrutiny for Potential Fair Lending Violations

High

ZestFinance's AI credit scoring platform faced CFPB examination over potential fair lending violations, highlighting challenges in ensuring complex machine learning models don't embed discriminatory patterns that traditional auditing methods cannot detect.

Mar 15, 2020|Bias|Finance|Other/Unknown|$5,000,000

Root Insurance AI Pricing Algorithm Investigated for Discriminatory Bias

High

Root Insurance's smartphone-based AI pricing algorithm was investigated by Colorado regulators for potential discrimination against drivers in lower-income and minority neighborhoods through biased telematics data analysis.

Mar 15, 2020|Bias|Insurance|Other/Unknown

Chinese AI Content Moderation Systems Censored Critical COVID-19 Health Information

Critical

Chinese social media platforms' AI content moderation systems automatically censored early COVID-19 warnings from doctors and citizens, delaying public health response and contributing to pandemic spread.

Feb 7, 2020|Safety Failure|Technology|Other/Unknown

Dutch Court Bans SyRI Welfare Fraud Detection System for Discriminating Against Immigrants

High

The Netherlands' SyRI welfare fraud detection system was banned by a Dutch court in 2020 for systematically discriminating against immigrant communities through algorithmic profiling.

Feb 5, 2020|Bias|Government|Other/Unknown

Clearview AI Scraped Billions of Photos for Facial Recognition Without Consent

Critical

Clearview AI scraped over 3 billion facial images from social media without consent to build a surveillance database sold to law enforcement, a collection the company later claimed had grown past 40 billion images. It faced more than $50 million in fines and settlements across multiple jurisdictions, including a $50 million class action settlement.

Jan 18, 2020|Privacy Leak|Technology|Other/Unknown|$50,000,000

Spain's VioGén Algorithm Criticized After Fatal Domestic Violence Cases Misclassified as Low Risk

Critical

Spain's VioGén algorithm for assessing domestic violence risk was criticized after multiple fatal incidents where victims were classified as low risk and received inadequate protection.

Dec 15, 2019|Bias|Government|Other/Unknown

AI Emotion Detection Technology Used in Hiring Despite Lack of Scientific Validity

High

The AI Now Institute found that AI emotion detection technology lacked scientific validity yet was being used by companies for hiring decisions, potentially creating unfair bias against job candidates.

Dec 11, 2019|Bias|HR / Recruiting|Other/Unknown

Apple Card Algorithm Accused of Gender Discrimination in Credit Limits

High

Apple's credit card, managed by Goldman Sachs, was accused of algorithmically discriminating against women by offering them significantly lower credit limits than men with equivalent or superior financial profiles. The controversy began when tech entrepreneur David Heinemeier Hansson posted that his wife received a credit limit 20x lower than his despite a higher credit score.

Nov 9, 2019|Bias|Finance|Other/Unknown

HireVue AI Video Interview Tool Faced FTC Complaint Over Facial Analysis Bias

High

HireVue's AI video interview platform faced an FTC complaint alleging that its facial analysis technology discriminated against job candidates. The company discontinued its facial analysis features in 2021.

Nov 6, 2019|Bias|HR / Recruiting|Other/Unknown