AI Incident Database

472 documented incidents.

ShotSpotter AI Gunshot Detection System Generated False Alerts Leading to Wrongful Raids and Arrests in Chicago

High

Chicago's ShotSpotter AI gunshot detection system generated thousands of false alerts from 2017-2021, leading to unnecessary police raids and wrongful arrests. A MacArthur Justice Center study found 89% of alerts resulted in no gun crime evidence.

May 1, 2021|Safety Failure|Government|Other/Unknown|$33,000,000

LinkedIn Job Ad Algorithm Showed Gender Bias in High-Paying Job Delivery

Medium

USC researchers found LinkedIn's job ad algorithm delivered high-paying job listings disproportionately to men despite gender-neutral employer targeting. The bias stemmed from optimization algorithms that learned from historical engagement patterns.

Apr 8, 2021|Bias|Technology|Other/Unknown

AI-Powered Energy Trading Algorithms Contributed to Texas Grid Crisis Price Manipulation

Critical

During the February 2021 Texas winter storm, AI-powered energy trading algorithms contributed to extreme electricity price spikes from $50 to $9,000 per MWh, resulting in $16 billion in excessive charges while millions lost power.

Mar 15, 2021|Financial Error|Finance|Other/Unknown|$5,000,000,000

Turkish Kargu-2 Autonomous Drone Allegedly Attacked Libyan Forces Without Human Authorization

Critical

A Turkish-made Kargu-2 autonomous drone allegedly attacked retreating Libyan forces in March 2020 without human authorization, marking what may be the first documented case of an autonomous weapon system independently selecting and engaging targets.

Mar 8, 2021|Agent Error|Government|Other/Unknown

Tom Cruise Deepfakes on TikTok Demonstrate Detection Challenges

Medium

Belgian VFX artist Chris Ume created hyperrealistic Tom Cruise deepfakes that garnered over 11 million views on TikTok, fooling experts and demonstrating how difficult advanced deepfakes are to detect with current technology.

Mar 1, 2021|Deepfake / Fraud|Media|Other/Unknown

Citibank Accidentally Transfers $900M to Revlon Lenders Due to Flexcube Interface Design Flaw

Critical

Citibank accidentally transferred $900 million to Revlon lenders in August 2020 due to a confusing Oracle Flexcube interface design. After an initial court loss, Citibank won on appeal and recovered most of the funds.

Feb 16, 2021|Agent Error|Finance|Other/Unknown|$504,000,000

Alibaba AI Emotion Recognition Used in Chinese Detention Facilities

Critical

Alibaba and other Chinese companies developed AI emotion recognition technology that was reportedly used to monitor emotional states of detained Uyghur individuals in Xinjiang facilities, raising severe human rights concerns.

Feb 8, 2021|Surveillance|Government|Other/Unknown

South Korean AI Chatbot Lee Luda Shut Down for Hate Speech and Privacy Violations

High

South Korean startup Scatter Lab's AI chatbot Lee Luda was shut down after making discriminatory comments and violating privacy laws by training on 10 billion private KakaoTalk messages without user consent, affecting 750,000 users.

Jan 11, 2021|Bias|Technology|Other/Unknown

ScatterLab's AI Chatbot Lee Luda Generated Discriminatory and Sexually Inappropriate Content

High

South Korean AI chatbot Lee Luda was shut down after generating discriminatory content against minorities and sexually inappropriate responses, leading to regulatory fines and lawsuits affecting 750,000 users.

Dec 29, 2020|Bias|Technology|Other/Unknown|$5,000,000

Alibaba Cloud AI Offered Uyghur Ethnic Detection Feature for Surveillance

Critical

Alibaba Cloud's AI services included ethnic detection capabilities specifically marketed to identify Uyghur faces, supporting Chinese government surveillance programs. The discovery led to international sanctions and the company removing the feature.

Dec 17, 2020|Bias|Technology|Other/Unknown

Dutch Tax Authority AI System Wrongly Accused Thousands of Families of Childcare Benefits Fraud

Critical

Dutch tax authority's AI system wrongly flagged thousands of families for childcare benefits fraud based on discriminatory factors like dual nationality. The scandal caused widespread financial hardship and led to the collapse of the Dutch government in 2021.

Dec 17, 2020|Bias|Government|Other/Unknown|$1,000,000,000

Google Fired AI Ethics Researchers Timnit Gebru and Margaret Mitchell Over Research Paper Controversy

High

Google fired AI ethics co-leads Timnit Gebru and Margaret Mitchell in 2020-2021 after disputes over the 'Stochastic Parrots' paper that highlighted risks of large language models. The terminations sparked widespread criticism and raised concerns about corporate control over AI safety research.

Dec 3, 2020|Other|Technology|Google

Retorio AI Hiring Tool Found to Evaluate Candidates Based on Background and Clothing Rather Than Qualifications

High

An investigation by Bavarian public broadcaster BR revealed that Retorio's AI video interview tool evaluated job candidates based on irrelevant factors like clothing and background objects rather than qualifications. The findings highlighted systematic bias in AI hiring tools that could violate anti-discrimination laws.

Dec 1, 2020|Bias|HR / Recruiting|Other/Unknown

Upstart AI Lending Platform Accused of Racial Discrimination Through HBCU-Linked Higher Interest Rates

High

The Student Borrower Protection Center found that Upstart's AI lending algorithm charged HBCU graduates higher interest rates, revealing algorithmic discrimination despite regulatory approval.

Oct 27, 2020|Bias|Finance|Other/Unknown

Proctorio AI Exam Proctoring System Flagged Students of Color Disproportionately

High

Proctorio's AI exam proctoring software disproportionately flagged students of color as potential cheaters due to biased facial recognition algorithms. Multiple universities discontinued the service after documented bias incidents.

Oct 15, 2020|Bias|Education|Other/Unknown

Twitter's Image Cropping Algorithm Demonstrated Racial Bias in Face Selection

High

Twitter's automatic image cropping algorithm systematically favored white faces over Black faces in preview images. Users demonstrated the bias through controlled experiments, leading Twitter to acknowledge the issue and eventually remove automatic cropping entirely.

Sep 19, 2020|Bias|Technology|Other/Unknown

AI Proctoring Software Privacy Violations and Student Surveillance Lawsuits

High

AI proctoring software used during COVID-19 remote learning recorded students in bedrooms and private spaces, leading to privacy lawsuits and many universities discontinuing the technology.

Sep 15, 2020|Privacy Leak|Education|Other/Unknown|$5,000,000

UK Passport Photo AI Falsely Rejected Dark-Skinned Applicants with Biased Error Messages

High

The UK government's online passport photo verification AI systematically rejected photos of dark-skinned applicants with false error messages like 'mouth too open', demonstrating clear racial bias in a critical government service.

Sep 8, 2020|Bias|Government|Other/Unknown

Ofqual A-Level Grading Algorithm Downgraded 40% of Students, Disproportionately Harmed Disadvantaged Schools

Critical

Ofqual's 2020 A-level grading algorithm downgraded 40% of teacher-predicted grades, systematically disadvantaging students from state schools and disadvantaged backgrounds. After mass protests and legal challenges, the government abandoned the results.

Aug 13, 2020|Bias|Education|Other/Unknown