AI Incident Database

318 documented incidents. Search, filter, and explore.

Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases

High

A 2022 Stanford study found that developers using AI code generation assistants wrote insecure code roughly 40% of the time, highlighting significant security risks in AI-assisted development.

Aug 8, 2022|Safety Failure|Technology|Other/Unknown

Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants

High

Amazon Ring's AI-powered doorbell network shared user footage with over 2,000 police departments without warrants or user consent from 2019 to 2022, affecting millions of users before policy changes ended the practice.

Jul 13, 2022|Privacy Leak|Technology|Other/Unknown

Woebot AI Mental Health Chatbot Provided Inappropriate Crisis Responses and Therapeutic Advice

High

Woebot, an AI mental health chatbot, failed to properly handle crisis situations and gave inappropriate therapeutic responses to vulnerable users expressing suicidal thoughts, prompting regulatory scrutiny and safety concerns.

May 25, 2022|Safety Failure|Healthcare|Other/Unknown

Starship Autonomous Delivery Robot Drove Through Active Police Crime Scene in Los Angeles

Medium

A Starship Technologies delivery robot autonomously drove through police tape at an active crime scene in Los Angeles, highlighting the lack of emergency situation awareness in autonomous delivery systems.

May 16, 2022|Safety Failure|Technology|Other/Unknown

San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban

Medium

San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.

May 12, 2022|Surveillance|Government|Other/Unknown

Allegheny County Child Welfare AI Tool Showed Racial Bias in Family Risk Assessments

High

Allegheny County's AI child welfare screening tool disproportionately flagged Black families and public benefit recipients for investigations. An AP investigation revealed the algorithmic bias affected thousands of families over six years.

Apr 26, 2022|Bias|Government|Other/Unknown

Saudi Arabia's Neom City AI Surveillance System Raises Human Rights Concerns

High

Saudi Arabia's Neom megacity project includes comprehensive AI surveillance systems for facial recognition and behavioral tracking of residents. Leaked documents in 2022 revealed the extensive scope, drawing international human rights criticism.

Apr 25, 2022|Privacy Leak|Government|Other/Unknown

AI Smart Contract Audit Tools Failed to Detect Ronin Bridge Vulnerabilities Before $600M Hack

Critical

AI-powered smart contract audit tools failed to detect critical vulnerabilities in the Ronin Network bridge, missing centralization risks in the multi-signature validator system. This oversight enabled hackers to exploit compromised validator keys and steal $600 million in March 2022.

Mar 29, 2022|Security Failure|Finance|Other/Unknown|$600,000,000

Starship Delivery Robot Disrupts Police Crime Scene Investigation

Medium

A Starship delivery robot crossed police crime scene tape during an active investigation, requiring officers to manually remove it and potentially compromising the secured area.

Mar 16, 2022|Agent Error|Technology|Other/Unknown|$15,000

Deepfake Video of Ukraine President Zelensky Calling for Surrender

High

A deepfake video showing Ukrainian President Zelensky calling for surrender was distributed via a hacked TV broadcast and social media in March 2022. The low-quality fake was quickly debunked but highlighted deepfake threats during wartime.

Mar 16, 2022|Deepfake / Fraud|Media|Other/Unknown

Moscow AI Facial Recognition System Used for Political Repression of Anti-War Protesters

Critical

Moscow's AI-powered facial recognition network with 200,000+ cameras was used to systematically identify and arrest anti-war protesters and political opposition figures. The system enabled mass political repression following Russia's invasion of Ukraine.

Mar 15, 2022|Safety Failure|Government|Other/Unknown

AI Drug Discovery Tool Generated 40,000 Potential Chemical Weapons in 6 Hours

High

Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool MegaSyn could generate 40,000 potential chemical weapon compounds in 6 hours by simply inverting its toxicity filter, highlighting serious dual-use risks in AI-assisted molecular design.

Mar 8, 2022|Safety Failure|Healthcare|Other/Unknown

Tesla Autopilot Phantom Braking Investigation by NHTSA

High

NHTSA investigated over 750 complaints of Tesla Autopilot phantom braking affecting 416,000 vehicles. The issue stemmed from Tesla's transition to a vision-only perception system, which produced false-positive emergency braking events.

Feb 16, 2022|Safety Failure|Technology|Other/Unknown|$50,000,000

Amazon Alexa Recommended Dangerous Electrical Challenge to 10-Year-Old Child

Critical

Amazon Alexa told a 10-year-old child to attempt a dangerous electrical challenge involving touching a coin to the exposed prongs of a partially inserted plug, exposing major safety gaps in voice-assistant content filtering.

Dec 27, 2021|Safety Failure|Technology|Other/Unknown

AI Security Tools Failed to Detect Log4Shell Vulnerability in Supply Chain Analysis

Critical

AI-powered security scanning tools failed for years to detect the Log4Shell vulnerability (CVE-2021-44228) in the Apache Log4j library, contributing to a global cybersecurity crisis affecting billions of devices and costing organizations an estimated $10+ billion in remediation efforts.

Dec 10, 2021|Safety Failure|Technology|Other/Unknown|$10,000,000,000

Zillow iBuyer Algorithm Overvalued Properties, Leading to $881M Loss

Critical

Zillow's AI-powered iBuyer program used machine learning to predict home values and make instant purchase offers. The algorithm systematically overpaid for homes, ultimately losing $881 million and forcing Zillow to shut down the division and lay off 2,000 employees.

Nov 2, 2021|Financial Error|Finance|Other/Unknown|$881,000,000

Clearview AI Fined €60M by European Data Protection Authorities for Facial Recognition Database GDPR Violations

Critical

European data protection authorities fined Clearview AI approximately €60 million in total for collecting billions of facial images without consent, violating GDPR biometric-data protections in Italy, France, the UK, and Greece.

Oct 19, 2021|Privacy Leak|Technology|Other/Unknown|$70,000,000

Tesla Autopilot Emergency Vehicle Collision Pattern - NHTSA Investigation

Critical

Between 2018 and 2022, Tesla Autopilot vehicles crashed into at least 16 stationary emergency vehicles, prompting an NHTSA investigation and highlighting the vision system's limitations with stationary objects.

Aug 13, 2021|Safety Failure|Technology|Other/Unknown|$50,000,000

ShotSpotter AI Gunshot Detection System Linked to Wrongful Police Raids and Racial Disparities

High

ShotSpotter's AI gunshot detection system generated high false-positive rates, leading to aggressive police responses in predominantly Black neighborhoods. Multiple cities terminated contracts amid concerns over accuracy and discriminatory impact.

Aug 1, 2021|Bias|Government|Other/Unknown|$50,000,000

EleutherAI's GPT-Neo Generated Extremist Content When Prompted

Medium

EleutherAI's open-source GPT-Neo models generated extremist content and propaganda when prompted, highlighting safety risks in unfiltered language models without built-in guardrails.

Jul 15, 2021|Safety Failure|Technology|Other/Unknown