AI Incident Database

181 documented incidents.

San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban

Medium

San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.

May 12, 2022|Surveillance|Government|Other/Unknown

AI Smart Contract Audit Tools Failed to Detect Ronin Bridge Vulnerabilities Before $600M Hack

Critical

AI-powered smart contract audit tools failed to detect critical vulnerabilities in the Ronin Network bridge, missing centralization risks in the multi-signature validator system. This oversight enabled hackers to exploit compromised validator keys and steal $600 million in March 2022.

Mar 29, 2022|Security Failure|Finance|Other/Unknown|$600,000,000

Starship Delivery Robot Disrupts Police Crime Scene Investigation

Medium

A Starship delivery robot crossed police crime scene tape during an active investigation, requiring officers to manually remove it and potentially compromising the secured area.

Mar 16, 2022|Agent Error|Technology|Other/Unknown|$15,000

Deepfake Video of Ukraine President Zelensky Calling for Surrender

High

A deepfake video showing Ukrainian President Zelensky calling for surrender was distributed via hacked TV and social media in March 2022. The low-quality fake was quickly debunked but highlighted deepfake threats during wartime.

Mar 16, 2022|Deepfake / Fraud|Media|Other/Unknown

AI Drug Discovery Tool Generated 40,000 Potential Chemical Weapons in 6 Hours

High

Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool MegaSyn could generate 40,000 potential chemical weapon compounds in 6 hours by simply inverting its toxicity filter, highlighting serious dual-use risks in AI-assisted molecular design.

Mar 8, 2022|Safety Failure|Healthcare|Other/Unknown

Amazon Alexa Recommended Dangerous Electrical Challenge to 10-Year-Old Child

Critical

Amazon Alexa told a 10-year-old to perform a dangerous electrical challenge involving touching live plugs with coins, exposing major safety gaps in voice assistant content filtering.

Dec 27, 2021|Safety Failure|Technology|Other/Unknown

Zillow iBuyer Algorithm Overvalued Properties, Leading to $881M Loss

Critical

Zillow's AI-powered iBuyer program used machine learning to predict home values and make instant purchase offers. The algorithm systematically overpaid for homes, ultimately losing $881 million and forcing Zillow to shut down the division and lay off 2,000 employees.

Nov 2, 2021|Financial Error|Finance|Other/Unknown|$881,000,000

ShotSpotter AI Gunshot Detection System Linked to Wrongful Police Raids and Racial Disparities

High

ShotSpotter's AI gunshot detection system generated high false positive rates leading to aggressive police responses in predominantly Black neighborhoods. Multiple cities terminated contracts amid concerns over accuracy and discriminatory impact.

Aug 1, 2021|Bias|Government|Other/Unknown|$50,000,000

EleutherAI's GPT-Neo Generated Extremist Content When Prompted

Medium

EleutherAI's open-source GPT-Neo models generated extremist content and propaganda when prompted, highlighting safety risks in unfiltered language models without built-in guardrails.

Jul 15, 2021|Safety Failure|Technology|Other/Unknown

Lemonade Insurance Used AI to Analyze Customer Facial Expressions During Claims Process

Medium

Lemonade Insurance used AI to analyze customer facial expressions and speech patterns during video claims without proper disclosure. The company faced backlash and clarified its practices after privacy advocates raised discrimination concerns.

May 25, 2021|Privacy|Insurance|Other/Unknown

Facebook AI Content Moderation Systematically Censored Palestinian News During Gaza Conflicts

High

Meta's AI content moderation systems systematically censored Palestinian news and voices during 2021 and 2023 Gaza conflicts, with Human Rights Watch documenting widespread suppression of legitimate content.

May 21, 2021|Bias|Media|Other/Unknown

ShotSpotter AI Gunshot Detection System Led to Wrongful Police Raids and Community Harm

High

ShotSpotter's AI gunshot detection system exhibited false positive rates of 86-95%, leading to wrongful police raids and discriminatory enforcement in predominantly Black neighborhoods across multiple US cities.

May 1, 2021|Bias|Government|Other/Unknown|$15,000,000

Alibaba AI Emotion Recognition Used in Chinese Detention Facilities

Critical

Alibaba and other Chinese companies developed AI emotion recognition technology that was reportedly used to monitor emotional states of detained Uyghur individuals in Xinjiang facilities, raising severe human rights concerns.

Feb 8, 2021|Surveillance|Government|Other/Unknown

South Korean AI Chatbot Lee Luda Shut Down for Hate Speech and Privacy Violations

High

South Korean startup Scatter Lab's AI chatbot Lee Luda was shut down after generating homophobic, racist, and otherwise discriminatory comments and violating privacy laws by training on roughly 10 billion private KakaoTalk messages drawn from about 600,000 users without consent; the chatbot had attracted some 750,000 users.

Jan 11, 2021|Bias|Technology|Other/Unknown

ScatterLab's AI Chatbot Lee Luda Generated Discriminatory and Sexually Inappropriate Content

High

South Korean AI chatbot Lee Luda was shut down after generating discriminatory content against minorities and sexually inappropriate responses, leading to regulatory fines and lawsuits affecting 750,000 users.

Dec 29, 2020|Bias|Technology|Other/Unknown|$5,000,000

Dutch Tax Authority AI System Wrongly Accused Thousands of Families of Childcare Benefits Fraud

Critical

The Dutch tax authority's AI system wrongly flagged thousands of families for childcare benefits fraud based on discriminatory factors such as dual nationality. The scandal caused widespread financial hardship and led to the resignation of the Dutch government in 2021.

Dec 17, 2020|Bias|Government|Other/Unknown|$1,000,000,000

UK Passport Photo AI Falsely Rejected Dark-Skinned Applicants with Biased Error Messages

High

The UK government's online passport photo verification AI systematically rejected photos of dark-skinned applicants with false error messages like 'mouth too open', demonstrating clear racial bias in a critical government service.

Sep 8, 2020|Bias|Government|Other/Unknown

AI Proctoring Software Disproportionately Flagged Black Students as Cheating

High

AI proctoring software from companies like Proctorio flagged Black students as cheating at disproportionate rates due to facial recognition bias. Thousands of students faced false accusations during remote learning expansion in 2020.

Aug 1, 2020|Bias|Education|Other/Unknown

Detroit Police Wrongfully Arrest Robert Williams Due to Facial Recognition Misidentification

High

Detroit Police wrongfully arrested Robert Williams in 2020 after facial recognition technology falsely matched him to a shoplifting suspect. He was detained for 30 hours before being released when the error was discovered.

Jun 24, 2020|Bias|Government|Other/Unknown