AI Incident Database
318 documented incidents. Search, filter, and explore.
Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases
High: A 2022 Stanford study found that developers using AI code generation tools wrote insecure code in roughly 40% of cases, highlighting significant security risks in AI-assisted development.
Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants
High: Amazon Ring's AI-powered doorbell network shared user footage with over 2,000 police departments without warrants or user consent from 2019 to 2022, affecting millions of users before policy changes ended the practice.
Woebot AI Mental Health Chatbot Provided Inappropriate Crisis Responses and Therapeutic Advice
High: The Woebot mental health AI chatbot failed to properly handle crisis situations and provided inappropriate therapeutic responses to vulnerable users expressing suicidal thoughts, prompting regulatory scrutiny and safety concerns.
Starship Autonomous Delivery Robot Drove Through Active Police Crime Scene in Los Angeles
Medium: A Starship Technologies delivery robot autonomously drove through police tape at an active crime scene in Los Angeles, highlighting the lack of emergency situation awareness in autonomous delivery systems.
San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban
Medium: San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.
Allegheny County Child Welfare AI Tool Showed Racial Bias in Family Risk Assessments
High: Allegheny County's AI child welfare screening tool disproportionately flagged Black families and public benefit recipients for investigations. An AP investigation revealed the algorithmic bias affected thousands of families over six years.
Saudi Arabia's Neom City AI Surveillance System Raises Human Rights Concerns
High: Saudi Arabia's Neom megacity project includes comprehensive AI surveillance systems for facial recognition and behavioral tracking of residents. Leaked documents in 2022 revealed the extensive scope, drawing international human rights criticism.
AI Smart Contract Audit Tools Failed to Detect Ronin Bridge Vulnerabilities Before $600M Hack
Critical: AI-powered smart contract audit tools failed to detect critical vulnerabilities in the Ronin Network bridge, missing centralization risks in the multi-signature validator system. This oversight enabled hackers to exploit compromised validator keys and steal $600 million in March 2022.
Starship Delivery Robot Disrupts Police Crime Scene Investigation
Medium: A Starship delivery robot crossed police crime scene tape during an active investigation, requiring officers to manually remove it and potentially compromising the secured area.
Deepfake Video of Ukraine President Zelensky Calling for Surrender
High: A deepfake video showing Ukrainian President Zelensky calling for surrender was distributed via hacked TV and social media in March 2022. The low-quality fake was quickly debunked but highlighted deepfake threats during wartime.
Moscow AI Facial Recognition System Used for Political Repression of Anti-War Protesters
Critical: Moscow's AI-powered facial recognition network of more than 200,000 cameras was used to systematically identify and arrest anti-war protesters and political opposition figures. The system enabled mass political repression following Russia's invasion of Ukraine.
AI Drug Discovery Tool Generated 40,000 Potential Chemical Weapons in 6 Hours
High: Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool MegaSyn could generate 40,000 potential chemical weapon compounds in 6 hours simply by inverting its toxicity filter, highlighting serious dual-use risks in AI-assisted molecular design.
Tesla Autopilot Phantom Braking Investigation by NHTSA
High: NHTSA investigated over 750 complaints of Tesla Autopilot phantom braking affecting 416,000 vehicles. The issue stemmed from Tesla's transition to vision-only perception, which produced false-positive emergency braking events.
Amazon Alexa Recommended Dangerous Electrical Challenge to 10-Year-Old Child
Critical: Amazon Alexa told a 10-year-old to perform a dangerous electrical challenge involving touching live plugs with coins, exposing major safety gaps in voice assistant content filtering.
AI Security Tools Failed to Detect Log4Shell Vulnerability in Supply Chain Analysis
Critical: AI-powered security scanning tools failed for years to detect the Log4Shell vulnerability (CVE-2021-44228) in the Apache Log4j library, contributing to a global cybersecurity crisis affecting billions of devices and costing organizations an estimated $10+ billion in remediation efforts.
Zillow iBuyer Algorithm Overvalued Properties, Leading to $881M Loss
Critical: Zillow's AI-powered iBuyer program used machine learning to predict home values and make instant purchase offers. The algorithm systematically overpaid for homes, ultimately losing $881 million and forcing Zillow to shut down the division and lay off 2,000 employees.
Clearview AI Fined €60M by EU Data Protection Authorities for Facial Recognition Database GDPR Violations
Critical: Multiple EU data protection authorities fined Clearview AI approximately €60M in total for collecting billions of facial images without consent, violating GDPR biometric data protections across Italy, France, the UK, and Greece.
Tesla Autopilot Emergency Vehicle Collision Pattern - NHTSA Investigation
Critical: Between 2018 and 2022, Tesla Autopilot vehicles crashed into at least 16 stationary emergency vehicles, prompting an NHTSA investigation and highlighting the vision system's limitations with stationary objects.
ShotSpotter AI Gunshot Detection System Linked to Wrongful Police Raids and Racial Disparities
High: ShotSpotter's AI gunshot detection system generated high false-positive rates, leading to aggressive police responses in predominantly Black neighborhoods. Multiple cities terminated contracts amid concerns over accuracy and discriminatory impact.
EleutherAI's GPT-Neo Generated Extremist Content When Prompted
Medium: EleutherAI's open-source GPT-Neo models generated extremist content and propaganda when prompted, highlighting safety risks in unfiltered language models without built-in guardrails.