AI Incident Database

472 documented incidents.

Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases

High

A 2022 Stanford study found that developers using AI code-generation tools wrote insecure code roughly 40% of the time, highlighting significant security risks in AI-assisted development.

Aug 8, 2022|Safety Failure|Technology|Other/Unknown

Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants

High

Amazon Ring's AI-powered doorbell network shared user footage with more than 2,000 police departments without warrants or user consent between 2019 and 2022, affecting millions of users before policy changes ended the practice.

Jul 13, 2022|Privacy Leak|Technology|Other/Unknown

DHS HART Biometric System Failed Privacy Impact Assessment and Faced Accuracy Concerns

High

DHS's HART biometric database, containing data from 260+ million people, failed privacy assessments and raised accuracy concerns. The GAO found inadequate oversight and civil liberties groups challenged the system's deployment.

Jun 15, 2022|Privacy Leak|Government|Other/Unknown

Woebot AI Mental Health Chatbot Provided Inappropriate Crisis Responses and Therapeutic Advice

High

The Woebot mental-health chatbot failed to handle crisis situations properly and gave inappropriate therapeutic responses to vulnerable users expressing suicidal thoughts, prompting regulatory scrutiny and safety concerns.

May 25, 2022|Safety Failure|Healthcare|Other/Unknown

Starship Autonomous Delivery Robot Drove Through Active Police Crime Scene in Los Angeles

Medium

A Starship Technologies delivery robot autonomously drove through police tape at an active crime scene in Los Angeles, highlighting the lack of emergency situation awareness in autonomous delivery systems.

May 16, 2022|Safety Failure|Technology|Other/Unknown

San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban

Medium

San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.

May 12, 2022|Surveillance|Government|Other/Unknown

Allegheny County Child Welfare AI Tool Showed Racial Bias in Family Risk Assessments

High

Allegheny County's AI child welfare screening tool disproportionately flagged Black families and public benefit recipients for investigations. An AP investigation revealed the algorithmic bias affected thousands of families over six years.

Apr 26, 2022|Bias|Government|Other/Unknown

Saudi Arabia's Neom City AI Surveillance System Raises Human Rights Concerns

High

Saudi Arabia's Neom megacity project includes comprehensive AI surveillance systems for facial recognition and behavioral tracking of residents. Leaked documents in 2022 revealed the extensive scope, drawing international human rights criticism.

Apr 25, 2022|Privacy Leak|Government|Other/Unknown

AI Smart Contract Audit Tools Failed to Detect Ronin Bridge Vulnerabilities Before $600M Hack

Critical

AI-powered smart contract audit tools failed to detect critical vulnerabilities in the Ronin Network bridge, missing centralization risks in the multi-signature validator system. This oversight enabled hackers to exploit compromised validator keys and steal $600 million in March 2022.

Mar 29, 2022|Security Failure|Finance|Other/Unknown|$600,000,000
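The centralization risk this entry describes can be made concrete: in a 5-of-9 multisig scheme, if one operator controls most of the keys, compromising that single party gets an attacker to (or within one key of) the signing quorum. A minimal sketch with invented operator names, loosely modeled on the reported 5-of-9 setup:

```python
from collections import Counter

# Hypothetical 5-of-9 multisig; operator names are invented for illustration.
THRESHOLD = 5
validators = {
    "v1": "operator_a", "v2": "operator_a", "v3": "operator_a",
    "v4": "operator_a",  # one operator runs 4 of the 9 keys
    "v5": "operator_b", "v6": "operator_c", "v7": "operator_d",
    "v8": "operator_e", "v9": "operator_f",
}

def max_keys_one_party_controls(validators: dict) -> int:
    """Largest number of validator keys held by any single operator."""
    return max(Counter(validators.values()).values())

def centralization_risk(validators: dict, threshold: int) -> bool:
    """Flags setups where compromising a single operator puts an attacker
    at, or within one key of, the signing threshold."""
    return max_keys_one_party_controls(validators) >= threshold - 1

assert centralization_risk(validators, THRESHOLD)  # 4 concentrated keys + 1 more = quorum
```

A static check of this kind only inspects key ownership; the audit tools in question reportedly analyzed contract code instead, which is one reason the operational risk went unnoticed.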

Deepfake Video of Ukraine President Zelensky Calling for Surrender

High

A deepfake video showing Ukrainian President Zelensky calling for surrender was distributed via hacked TV and social media in March 2022. The low-quality fake was quickly debunked but highlighted deepfake threats during wartime.

Mar 16, 2022|Deepfake / Fraud|Media|Other/Unknown

Starship Delivery Robot Disrupts Police Crime Scene Investigation

Medium

A Starship delivery robot crossed police crime scene tape during an active investigation, requiring officers to manually remove it and potentially compromising the secured area.

Mar 16, 2022|Agent Error|Technology|Other/Unknown|$15,000

Moscow AI Facial Recognition System Used for Political Repression of Anti-War Protesters

Critical

Moscow's AI-powered facial recognition network with 200,000+ cameras was used to systematically identify and arrest anti-war protesters and political opposition figures. The system enabled mass political repression following Russia's invasion of Ukraine.

Mar 15, 2022|Safety Failure|Government|Other/Unknown

AI Drug Discovery Tool Generated 40,000 Potential Chemical Weapons in 6 Hours

High

Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool MegaSyn could generate 40,000 potential chemical weapon compounds in 6 hours by simply inverting its toxicity filter, highlighting serious dual-use risks in AI-assisted molecular design.

Mar 8, 2022|Safety Failure|Healthcare|Other/Unknown
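The entry above notes that the researchers simply inverted the model's toxicity objective. A minimal, hypothetical sketch of what such an inversion looks like in a generate-and-rank loop (the scorer here is a deterministic stub, not a real toxicity predictor):

```python
import hashlib

def toxicity_score(molecule: str) -> float:
    # Stand-in for a learned toxicity predictor; hash-based stub for the demo.
    digest = hashlib.md5(molecule.encode()).digest()
    return int.from_bytes(digest[:2], "big") / 65535.0

def rank_candidates(candidates, invert=False):
    """Normally toxicity is a penalty, so the least toxic candidates rank first.
    Flipping the sign turns the same scorer into a search for maximum toxicity."""
    sign = 1.0 if invert else -1.0
    return sorted(candidates, key=lambda m: sign * toxicity_score(m), reverse=True)

candidates = [f"mol_{i}" for i in range(10)]
safest_first = rank_candidates(candidates)
most_toxic_first = rank_candidates(candidates, invert=True)
```

The point of the sketch is that no new capability is needed: the same model and the same scoring function, with one sign flipped, optimize for the opposite property.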

Stitch Fix AI Algorithm Failure Leads to Customer Churn and 70% Stock Decline

Critical

Stitch Fix's pivot from human stylists to purely AI-driven recommendations resulted in a poor customer experience and heavy churn, and its stock price fell roughly 70%. The algorithm failed to match human styling expertise, leading to widespread customer complaints and severe business losses.

Mar 8, 2022|Agent Error|Other|Other/Unknown|$2,500,000,000

AI Drug Discovery Tool Generated 40,000 Toxic Molecules Including VX-Like Nerve Agents

High

Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool could be inverted to generate 40,000 toxic molecules in 6 hours. The study highlighted dual-use risks in AI-driven molecular generation and sparked debate about biosecurity safeguards.

Mar 7, 2022|Safety Failure|Healthcare|Other/Unknown

Tesla Autopilot Phantom Braking Investigation by NHTSA

High

NHTSA investigated more than 750 complaints of Tesla Autopilot phantom braking affecting 416,000 vehicles. The issue stemmed from Tesla's transition to a vision-only perception system, which produced false-positive emergency braking events.

Feb 16, 2022|Safety Failure|Technology|Other/Unknown|$50,000,000

Neuralink Brain-Computer Interface Testing Results in Multiple Primate Deaths

High

Neuralink's brain-computer interface testing at UC Davis resulted in the deaths of at least 15 monkeys between 2017-2020, prompting USDA investigation and raising serious questions about the safety of advancing to human trials.

Feb 10, 2022|Safety Failure|Healthcare|Other/Unknown

Amazon Alexa Recommended Dangerous Electrical Challenge to 10-Year-Old Child

Critical

Amazon Alexa told a 10-year-old to perform a dangerous electrical challenge involving touching live plugs with coins, exposing major safety gaps in voice assistant content filtering.

Dec 27, 2021|Safety Failure|Technology|Other/Unknown

AI Security Tools Failed to Detect Log4Shell Vulnerability in Supply Chain Analysis

Critical

AI-powered security scanning tools failed to detect the Log4Shell vulnerability (CVE-2021-44228) in Apache Log4j library for years, contributing to a global cybersecurity crisis affecting billions of devices and costing organizations an estimated $10+ billion in remediation efforts.

Dec 10, 2021|Safety Failure|Technology|Other/Unknown

Zillow iBuyer Algorithm Overvalued Properties, Leading to $881M Loss

Critical

Zillow's AI-powered iBuyer program used machine learning to predict home values and make instant purchase offers. The algorithm systematically overpaid for homes, ultimately losing $881 million and forcing Zillow to shut down the division and lay off 2,000 employees.

Nov 2, 2021|Financial Error|Finance|Other/Unknown|$881,000,000
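The systematic overpayment in the Zillow entry is an instance of the winner's curse: offers are accepted disproportionately when the model has overestimated a home's value, so even an unbiased estimator loses money on the deals it actually wins. A toy simulation under invented assumptions (unbiased Gaussian valuation error, sellers who accept only above-market offers):

```python
import random

random.seed(42)

NOISE = 30_000   # std dev of the model's valuation error, in dollars (invented)
N = 100_000      # homes evaluated

total_loss, deals = 0.0, 0
for _ in range(N):
    true_value = random.uniform(200_000, 600_000)
    offer = true_value + random.gauss(0, NOISE)  # unbiased estimate of value
    if offer >= true_value:                      # sellers accept only good offers
        deals += 1
        total_loss += offer - true_value         # resale at true market value

print(f"deals accepted: {deals} of {N}")
print(f"average loss per accepted deal: ${total_loss / deals:,.0f}")
```

Even with zero average error across all homes, the accepted subset is biased toward overestimates, so the average loss per purchase is strictly positive. This adverse-selection effect is one plausible mechanism behind the losses described, not a reconstruction of Zillow's actual model.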