AI Incident Database
397 documented incidents.
Zillow Zestimate AI Accused of Racial Bias in Home Valuations
High: Studies revealed that Zillow's Zestimate algorithm systematically undervalues homes in predominantly Black neighborhoods compared to similar properties in white areas, perpetuating housing discrimination through automated valuation models.
AI-Generated Art Wins Colorado State Fair Competition Sparking Artist Controversy
Low: Jason Allen won first place at the Colorado State Fair's digital art competition using AI-generated artwork from Midjourney. The victory sparked controversy about the legitimacy of AI art and fair competition with human artists.
Stable Diffusion Generated Images Containing Getty Images Watermarks Expose Copyright Training Data
High: Stable Diffusion generated images containing distorted Getty Images watermarks, revealing the model was trained on copyrighted Getty content without permission. This discovery became key evidence in Getty's copyright infringement lawsuit against Stability AI.
AI Image Generators Produce Malformed Hands and Fingers Across Major Platforms
Medium: Major AI image generators consistently produced anatomically incorrect hands with extra fingers, fused digits, or impossible positions, becoming a reliable indicator for detecting AI-generated content.
ALERTCalifornia AI Wildfire Detection System Generated Excessive False Alarms
Medium: California's ALERTCalifornia AI wildfire detection system generated high rates of false alarms, mistaking clouds, fog, and industrial activity for fires. This diverted emergency resources and caused unnecessary public alarm.
Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases
High: A 2022 Stanford study found that developers using AI code-generation tools wrote insecure code roughly 40% of the time, highlighting significant security risks in AI-assisted development.
Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants
High: Amazon Ring's AI-powered doorbell network shared user footage with more than 2,000 police departments without warrants or user consent from 2019 to 2022, affecting millions of users before policy changes ended the practice.
DHS HART Biometric System Failed Privacy Impact Assessment and Faced Accuracy Concerns
High: DHS's HART biometric database, containing data on more than 260 million people, failed privacy assessments and raised accuracy concerns. The GAO found inadequate oversight, and civil liberties groups challenged the system's deployment.
Woebot AI Mental Health Chatbot Provided Inappropriate Crisis Responses and Therapeutic Advice
High: The Woebot mental health chatbot failed to properly handle crisis situations and provided inappropriate therapeutic responses to vulnerable users expressing suicidal thoughts, prompting regulatory scrutiny and safety concerns.
Starship Autonomous Delivery Robot Drove Through Active Police Crime Scene in Los Angeles
Medium: A Starship Technologies delivery robot autonomously drove through police tape at an active crime scene in Los Angeles, requiring officers to remove it manually and highlighting the lack of emergency-situation awareness in autonomous delivery systems.
San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban
Medium: San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.
Allegheny County Child Welfare AI Tool Showed Racial Bias in Family Risk Assessments
High: Allegheny County's AI child welfare screening tool disproportionately flagged Black families and public benefit recipients for investigations. An AP investigation revealed the algorithmic bias affected thousands of families over six years.
Saudi Arabia's Neom City AI Surveillance System Raises Human Rights Concerns
High: Saudi Arabia's Neom megacity project includes comprehensive AI surveillance systems for facial recognition and behavioral tracking of residents. Leaked documents in 2022 revealed the extensive scope, drawing international human rights criticism.
AI Smart Contract Audit Tools Failed to Detect Ronin Bridge Vulnerabilities Before $600M Hack
Critical: AI-powered smart contract audit tools failed to detect critical vulnerabilities in the Ronin Network bridge, missing centralization risks in the multi-signature validator system. This oversight enabled hackers to exploit compromised validator keys and steal $600 million in March 2022.
Deepfake Video of Ukraine President Zelensky Calling for Surrender
High: A deepfake video showing Ukrainian President Zelensky calling for surrender was distributed via hacked TV and social media in March 2022. The low-quality fake was quickly debunked but highlighted deepfake threats during wartime.
Moscow AI Facial Recognition System Used for Political Repression of Anti-War Protesters
Critical: Moscow's AI-powered facial recognition network of more than 200,000 cameras was used to systematically identify and arrest anti-war protesters and political opposition figures. The system enabled mass political repression following Russia's invasion of Ukraine.
AI Drug Discovery Tool Generated 40,000 Potential Chemical Weapons in 6 Hours
High: Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool MegaSyn could generate 40,000 potential chemical weapon compounds, including analogs of the VX nerve agent, in six hours simply by inverting its toxicity filter. The study highlighted serious dual-use risks in AI-assisted molecular design and sparked debate about biosecurity safeguards.
Stitch Fix AI Algorithm Failure Leads to Customer Churn and 70% Stock Decline
Critical: Stitch Fix's pivot from human stylists to purely algorithmic recommendations resulted in poor customer experiences, heavy churn, and a 70% stock-price decline. The AI failed to match human styling expertise, driving customer complaints and a steep business downturn.