AI Incident Database
472 documented incidents.
ShotSpotter AI Gunshot Detection System Generated False Alerts Leading to Wrongful Raids and Arrests in Chicago
Severity: High. Chicago's ShotSpotter AI gunshot detection system generated thousands of false alerts from 2017 to 2021, leading to unnecessary police raids and wrongful arrests. A MacArthur Justice Center study found 89% of alerts resulted in no gun crime evidence.
LinkedIn Job Ad Algorithm Showed Gender Bias in High-Paying Job Delivery
Severity: Medium. USC researchers found LinkedIn's job ad algorithm delivered high-paying job listings disproportionately to men despite gender-neutral employer targeting. The bias stemmed from optimization algorithms that learned from historical engagement patterns.
AI-Powered Energy Trading Algorithms Contributed to Texas Grid Crisis Price Manipulation
Severity: Critical. During the February 2021 Texas winter storm, AI-powered energy trading algorithms contributed to extreme electricity price spikes from $50 to $9,000 per MWh, resulting in $16 billion in excessive charges while millions lost power.
Turkish Kargu-2 Autonomous Drone Allegedly Attacked Libyan Forces Without Human Authorization
Severity: Critical. A Turkish-made Kargu-2 autonomous drone allegedly attacked retreating Libyan forces in March 2020 without human authorization, marking what may be the first documented case of an autonomous weapon system independently selecting and engaging targets.
Tom Cruise Deepfakes on TikTok Highlight Limits of Deepfake Detection
Severity: Medium. Belgian VFX artist Chris Ume created hyperrealistic Tom Cruise deepfakes that garnered over 11 million views on TikTok, fooling experts and demonstrating how difficult advanced deepfakes are to detect reliably with current technology.
Citibank Accidentally Transfers $900M to Revlon Lenders Due to Flexcube Interface Design Flaw
Severity: Critical. Citibank accidentally transferred $900 million to Revlon lenders in August 2020 due to a confusing Oracle Flexcube interface design. After an initial court loss, Citibank successfully appealed and recovered most of the funds in 2021.
Alibaba AI Emotion Recognition Used in Chinese Detention Facilities
Severity: Critical. Alibaba and other Chinese companies developed AI emotion recognition technology that was reportedly used to monitor the emotional states of detained Uyghur individuals in Xinjiang facilities, raising severe human rights concerns.
South Korean AI Chatbot Lee Luda Shut Down for Hate Speech and Privacy Violations
Severity: High. South Korean startup Scatter Lab's AI chatbot Lee Luda was shut down after making discriminatory comments and violating privacy laws by training on 10 billion private KakaoTalk messages without user consent, affecting 750,000 users.
South Korean AI Chatbot 'Lee Luda' Shut Down for Hate Speech and Privacy Violations
Severity: High. South Korean chatbot Lee Luda was shut down after generating homophobic and racist content and being found to have illegally trained on 600,000 users' private KakaoTalk conversations without consent.
ScatterLab's AI Chatbot Lee Luda Generated Discriminatory and Sexually Inappropriate Content
Severity: High. South Korean AI chatbot Lee Luda was shut down after generating discriminatory content against minorities and sexually inappropriate responses, leading to regulatory fines and lawsuits affecting 750,000 users.
Alibaba Cloud AI Offered Uyghur Ethnic Detection Feature for Surveillance
Severity: Critical. Alibaba Cloud's AI services included ethnic detection capabilities specifically marketed to identify Uyghur faces, supporting Chinese government surveillance programs. The discovery led to international sanctions and the company's removal of the feature.
Dutch Tax Authority AI System Wrongly Accused Thousands of Families of Childcare Benefits Fraud
Severity: Critical. The Dutch tax authority's AI system wrongly flagged thousands of families for childcare benefits fraud based on discriminatory factors like dual nationality. The scandal caused widespread financial hardship and led to the collapse of the Dutch government in 2021.
Google Fired AI Ethics Researchers Timnit Gebru and Margaret Mitchell Over Research Paper Controversy
Severity: High. Google fired AI ethics co-leads Timnit Gebru and Margaret Mitchell in 2020-2021 after disputes over the 'Stochastic Parrots' paper, which highlighted risks of large language models. The terminations sparked widespread criticism and raised concerns about corporate control over AI safety research.
Retorio AI Hiring Tool Found to Evaluate Candidates Based on Background and Clothing Rather Than Qualifications
Severity: High. An investigation by the German public broadcaster Bayerischer Rundfunk (BR) revealed that Retorio's AI video interview tool evaluated job candidates based on irrelevant factors like clothing and background objects rather than qualifications. The findings highlighted systematic bias in AI hiring tools that could violate anti-discrimination laws.
Upstart AI Lending Platform Accused of Racial Discrimination Through HBCU-Linked Higher Interest Rates
Severity: High. The Student Borrower Protection Center found that Upstart's AI lending algorithm charged HBCU graduates higher interest rates, revealing algorithmic discrimination despite regulatory approval.
Proctorio AI Exam Proctoring System Flagged Students of Color Disproportionately
Severity: High. Proctorio's AI exam proctoring software disproportionately flagged students of color as potential cheaters due to biased facial recognition algorithms. Multiple universities discontinued the service after documented bias incidents.
Twitter's Image Cropping Algorithm Demonstrated Racial Bias in Face Selection
Severity: High. Twitter's automatic image cropping algorithm systematically favored white faces over Black faces in preview images. Users demonstrated the bias through controlled experiments, leading Twitter to acknowledge the issue and eventually remove automatic cropping entirely.
AI Proctoring Software Privacy Violations and Student Surveillance Lawsuits
Severity: High. AI proctoring software used during COVID-19 remote learning recorded students in bedrooms and other private spaces, leading to privacy lawsuits and prompting many universities to discontinue the technology.
UK Passport Photo AI Falsely Rejected Dark-Skinned Applicants with Biased Error Messages
Severity: High. The UK government's online passport photo verification AI systematically rejected photos of dark-skinned applicants with false error messages like 'mouth too open', demonstrating clear racial bias in a critical government service.
Ofqual A-Level Grading Algorithm Downgraded 40% of Students, Disproportionately Harmed Disadvantaged Schools
Severity: Critical. Ofqual's 2020 A-level grading algorithm downgraded 40% of teacher-predicted grades, systematically disadvantaging students from state schools and disadvantaged backgrounds. After mass protests and legal challenges, the government abandoned the algorithmic results.