AI Incident Database
181 documented incidents. Search, filter, and explore.
Google Bard Demo Error Wipes $100B from Alphabet Market Cap
Critical: Google Bard made a factual error in its public launch demo, incorrectly claiming the James Webb Space Telescope took the first pictures of exoplanets. The error was spotted by astronomers on social media. Alphabet stock dropped 7.7% the following day, erasing approximately $100 billion in market capitalization.
Waymo Autonomous Vehicle Strikes Cyclist in San Francisco
Medium: A Waymo autonomous vehicle struck a cyclist in San Francisco in February 2023, causing minor injuries. The incident highlighted ongoing challenges in autonomous vehicle detection of vulnerable road users.
Replika AI Companion Sent Sexually Explicit Messages to Minors, Banned by Italy
Critical: Replika's AI companion chatbot sent sexually explicit messages to users, including minors, leading Italy to ban the app in February 2023 over safety concerns and the lack of age verification.
ElevenLabs Voice Cloning Technology Used for Non-Consensual Celebrity Audio Generation
Medium: ElevenLabs voice cloning technology was misused to create non-consensual synthetic audio of celebrities and public figures, prompting the company to implement stricter usage restrictions and verification requirements.
Workday AI Hiring System Sued for Age and Disability Discrimination
High: A 2023 class action lawsuit alleged that Workday's AI-powered hiring screening tools systematically discriminated against older workers and disabled applicants, marking a significant case targeting the HR technology vendor rather than just employers.
AI Voice Clone Used in Kidnapping Scam Targeting Arizona Mother
High: In 2023, an Arizona mother received a scam call featuring a convincing AI voice clone of her daughter, with the callers claiming a kidnapping and demanding ransom. The synthetic voice caused severe emotional distress before the scam was discovered.
Kenyan Content Moderators Traumatized Training ChatGPT Safety Filters for Under $2/Hour
High: OpenAI used Kenyan workers paid under $2/hour to label graphic content for ChatGPT safety training, resulting in lasting psychological trauma for moderators exposed to violence and abuse.
Stability AI Sued by Getty Images and Artists for Training Stable Diffusion on Copyrighted Images
High: Getty Images and multiple artists filed lawsuits against Stability AI alleging the company trained Stable Diffusion on billions of copyrighted images without permission, seeking damages and injunctive relief.
CNET Published AI-Generated Articles Containing Factual Errors
Medium: CNET quietly published dozens of AI-generated financial explainer articles under the byline "CNET Money Staff" without disclosing the use of AI. Journalists and readers discovered that multiple articles contained factual errors, including incorrect explanations of basic financial concepts like compound interest.
Koko Mental Health Chatbot Conducted Undisclosed AI Experiment on Users Without Consent
High: Mental health platform Koko used GPT-3 to generate responses to 4,000 users in emotional crisis without their knowledge or consent, violating fundamental principles of informed consent in healthcare.
AI Surveillance Cameras in Serbian Schools Monitored Student Behavior Without Proper Consent
High: AI surveillance cameras in Serbian schools monitored student emotions and behavior without proper consent from students or parents. Digital rights groups successfully challenged the practice, leading to removal of the surveillance system.
Lensa AI Generated Non-Consensual Sexualized Images of Users
High: Lensa AI's Magic Avatars feature generated non-consensual sexualized images of users, particularly women, despite non-sexual input photos. The incident highlighted serious safety and consent issues in AI-generated imagery applications.
Yara Birkeland Autonomous Ship AI Navigation Failures Delay Commercial Operations
Medium: The Yara Birkeland autonomous cargo ship faced repeated AI navigation system failures that delayed fully autonomous operations by years beyond the 2020 target, resulting in significant cost overruns and operational setbacks.
Meta's Galactica Scientific AI Shut Down After Generating Fake Research Papers and Biased Content
High: Meta's Galactica AI for scientific text generation was shut down after just three days when it began confidently producing fake research papers, fabricated citations, and biased scientific content, raising serious concerns about AI-generated misinformation in academic contexts.
GitHub Copilot Code Generation Reproduces Copyrighted Code Verbatim
High: GitHub Copilot was found to reproduce copyrighted code verbatim from its training data, leading to a class action lawsuit alleging copyright infringement by the AI coding assistant.
Google DeepMind AlphaFold Protein Structure Prediction Errors Impact Drug Discovery
Medium: AlphaFold protein structure predictions contained errors that led pharmaceutical companies to make incorrect drug design decisions, resulting in significant wasted R&D investments. The incident highlighted the need for experimental validation of AI predictions in critical applications.
Zillow Zestimate AI Accused of Racial Bias in Home Valuations
High: Studies revealed that Zillow's Zestimate algorithm systematically undervalues homes in predominantly Black neighborhoods compared to similar properties in white areas, perpetuating housing discrimination through automated valuation models.
Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases
High: Stanford research in 2022 found that developers using AI code generation tools wrote insecure code roughly 40% of the time, highlighting significant security risks in AI-assisted development.
Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants
High: Amazon Ring's AI-powered doorbell network shared user footage with over 2,000 police departments without warrants or user consent from 2019 to 2022, affecting millions of users before policy changes ended the practice.