AI Incident Database

397 documented incidents.

Zillow Zestimate AI Accused of Racial Bias in Home Valuations

High

Studies revealed that Zillow's Zestimate algorithm systematically undervalues homes in predominantly Black neighborhoods compared to similar properties in predominantly white areas, perpetuating housing discrimination through automated valuation models.

Sep 1, 2022|Bias|Finance|Other/Unknown

AI-Generated Art Wins Colorado State Fair Competition Sparking Artist Controversy

Low

Jason Allen won first place at the Colorado State Fair's digital art competition using AI-generated artwork from Midjourney. The victory sparked controversy over the legitimacy of AI art and fair competition with human artists.

Aug 30, 2022|Other|Media|Other/Unknown

Stable Diffusion Generated Images Containing Getty Images Watermarks Expose Copyright Training Data

High

Stable Diffusion generated images containing distorted Getty Images watermarks, revealing the model was trained on copyrighted Getty content without permission. This discovery became key evidence in Getty's copyright infringement lawsuit against Stability AI.

Aug 23, 2022|Copyright Violation|Media|Other/Unknown

AI Image Generators Produce Malformed Hands and Fingers Across Major Platforms

Medium

Major AI image generators consistently produced anatomically incorrect hands with extra fingers, fused digits, or impossible positions, becoming a reliable indicator for detecting AI-generated content.

Aug 15, 2022|Other|Media|Other/Unknown

ALERTCalifornia AI Wildfire Detection System Generated Excessive False Alarms

Medium

California's ALERTCalifornia AI wildfire detection system generated high rates of false alarms, mistaking clouds, fog, and industrial activity for fires. This diverted emergency resources and caused unnecessary public alarm.

Aug 15, 2022|Safety Failure|Government|Other/Unknown|$2,500,000

Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases

High

Stanford research in 2022 found that developers using AI code generation tools produced insecure code roughly 40% of the time, highlighting significant security risks in AI-assisted development.

Aug 8, 2022|Safety Failure|Technology|Other/Unknown

Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants

High

Amazon Ring's AI-powered doorbell network shared user footage with over 2,000 police departments without warrants or user consent from 2019 to 2022, affecting millions of users before policy changes ended the practice.

Jul 13, 2022|Privacy Leak|Technology|Other/Unknown

DHS HART Biometric System Failed Privacy Impact Assessment and Faced Accuracy Concerns

High

DHS's HART biometric database, containing data on more than 260 million people, failed privacy assessments and raised accuracy concerns. The GAO found inadequate oversight, and civil liberties groups challenged the system's deployment.

Jun 15, 2022|Privacy Leak|Government|Other/Unknown

Woebot AI Mental Health Chatbot Provided Inappropriate Crisis Responses and Therapeutic Advice

High

The Woebot mental health AI chatbot failed to properly handle crisis situations and provided inappropriate therapeutic responses to vulnerable users expressing suicidal thoughts, prompting regulatory scrutiny and safety concerns.

May 25, 2022|Safety Failure|Healthcare|Other/Unknown

Starship Autonomous Delivery Robot Drove Through Active Police Crime Scene in Los Angeles

Medium

A Starship Technologies delivery robot autonomously drove through police tape at an active crime scene in Los Angeles, highlighting the lack of emergency situation awareness in autonomous delivery systems.

May 16, 2022|Safety Failure|Technology|Other/Unknown

San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban

Medium

San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.

May 12, 2022|Surveillance|Government|Other/Unknown

Allegheny County Child Welfare AI Tool Showed Racial Bias in Family Risk Assessments

High

Allegheny County's AI child welfare screening tool disproportionately flagged Black families and public benefit recipients for investigations. An AP investigation revealed the algorithmic bias affected thousands of families over six years.

Apr 26, 2022|Bias|Government|Other/Unknown

Saudi Arabia's Neom City AI Surveillance System Raises Human Rights Concerns

High

Saudi Arabia's Neom megacity project includes comprehensive AI surveillance systems for facial recognition and behavioral tracking of residents. Leaked documents in 2022 revealed the extensive scope, drawing international human rights criticism.

Apr 25, 2022|Privacy Leak|Government|Other/Unknown

AI Smart Contract Audit Tools Failed to Detect Ronin Bridge Vulnerabilities Before $600M Hack

Critical

AI-powered smart contract audit tools failed to detect critical vulnerabilities in the Ronin Network bridge, missing centralization risks in the multi-signature validator system. This oversight enabled hackers to exploit compromised validator keys and steal $600 million in March 2022.

Mar 29, 2022|Security Failure|Finance|Other/Unknown|$600,000,000

Starship Delivery Robot Disrupts Police Crime Scene Investigation

Medium

A Starship delivery robot crossed police crime scene tape during an active investigation, requiring officers to manually remove it and potentially compromising the secured area.

Mar 16, 2022|Agent Error|Technology|Other/Unknown|$15,000

Deepfake Video of Ukraine President Zelensky Calling for Surrender

High

A deepfake video showing Ukrainian President Zelensky calling for surrender was distributed via a hacked TV broadcast and social media in March 2022. The low-quality fake was quickly debunked but highlighted the threat of deepfakes during wartime.

Mar 16, 2022|Deepfake / Fraud|Media|Other/Unknown

Moscow AI Facial Recognition System Used for Political Repression of Anti-War Protesters

Critical

Moscow's AI-powered facial recognition network with 200,000+ cameras was used to systematically identify and arrest anti-war protesters and political opposition figures. The system enabled mass political repression following Russia's invasion of Ukraine.

Mar 15, 2022|Safety Failure|Government|Other/Unknown

AI Drug Discovery Tool Generated 40,000 Potential Chemical Weapons in 6 Hours

High

Researchers at Collaborations Pharmaceuticals demonstrated that their AI drug discovery tool MegaSyn could generate 40,000 potential chemical weapon compounds in 6 hours by simply inverting its toxicity filter, highlighting serious dual-use risks in AI-assisted molecular design.

Mar 8, 2022|Safety Failure|Healthcare|Other/Unknown

Stitch Fix AI Algorithm Failure Leads to Customer Churn and 70% Stock Decline

Critical

Stitch Fix's pivot from human stylists to purely AI-driven recommendations resulted in poor customer experiences, massive churn, and a 70% stock price decline. The algorithm failed to match human styling expertise, prompting widespread customer complaints and a steep decline in the business.

Mar 8, 2022|Agent Error|Other|Other/Unknown|$2,500,000,000
