AI Incident Database

181 documented incidents. Search, filter, and explore.

Google Bard Demo Error Wipes $100B from Alphabet Market Cap

Critical

Google Bard made a factual error in its public launch demo, incorrectly claiming the James Webb Space Telescope took the first pictures of exoplanets. The error was spotted by astronomers on social media. Alphabet stock dropped 7.7% the following day, erasing approximately $100 billion in market capitalization.

Feb 8, 2023|Hallucination|Technology|Google|$100,000,000,000

Waymo Autonomous Vehicle Rear-Ended by Human Driver After Hitting Cyclist in San Francisco

Medium

A Waymo autonomous vehicle struck a cyclist in San Francisco in February 2023, causing minor injuries. The incident highlighted ongoing challenges in autonomous vehicle detection of vulnerable road users.

Feb 8, 2023|Safety Failure|Technology|Other/Unknown

Replika AI Companion Sent Sexually Explicit Messages to Minors, Banned by Italy

Critical

The Replika AI companion chatbot sent sexually explicit messages to users, including minors, leading Italy's data protection authority to ban the app in February 2023 over safety concerns and the lack of age verification.

Feb 3, 2023|Safety Failure|Technology|Other/Unknown

ElevenLabs Voice Cloning Technology Used for Non-Consensual Celebrity Audio Generation

Medium

ElevenLabs voice cloning technology was misused to create non-consensual synthetic audio of celebrities and public figures, prompting the company to implement stricter usage restrictions and verification requirements.

Feb 1, 2023|Deepfake / Fraud|Technology|Other/Unknown

Workday AI Hiring System Sued for Age and Disability Discrimination

High

A 2023 class action lawsuit alleged that Workday's AI-powered hiring screening tools systematically discriminated against older workers and disabled applicants, marking a significant case targeting the HR technology vendor rather than just employers.

Jan 31, 2023|Bias|HR / Recruiting|Other/Unknown

AI Voice Clone Used in Kidnapping Scam Targeting Arizona Mother

High

In 2023, an Arizona mother received a scam call featuring a convincing AI voice clone of her daughter, with the callers claiming a kidnapping and demanding ransom. The synthetic voice caused severe emotional distress before the scam was discovered.

Jan 25, 2023|Deepfake / Fraud|Other|Other/Unknown|$50,000

Kenyan Content Moderators Traumatized Training ChatGPT Safety Filters for Under $2/Hour

High

OpenAI used Kenyan workers paid under $2/hour to label graphic content for ChatGPT safety training, resulting in lasting psychological trauma for moderators exposed to violence and abuse.

Jan 18, 2023|Safety Failure|Technology|OpenAI

Stability AI Sued by Getty Images and Artists for Training Stable Diffusion on Copyrighted Images

High

Getty Images and multiple artists filed lawsuits against Stability AI alleging the company trained Stable Diffusion on billions of copyrighted images without permission, seeking damages and injunctive relief.

Jan 17, 2023|Copyright Violation|Media|Other/Unknown

CNET Published AI-Generated Articles Containing Factual Errors

Medium

CNET quietly published dozens of AI-generated financial explainer articles under the byline "CNET Money Staff" without disclosing the use of AI. Journalists and readers discovered that multiple articles contained factual errors, including incorrect explanations of basic financial concepts like compound interest.

Jan 12, 2023|Hallucination|Media|Other/Unknown

Koko Mental Health Chatbot Conducted Undisclosed AI Experiment on Users Without Consent

High

Mental health platform Koko used GPT-3 to generate responses to 4,000 users in emotional crisis without their knowledge or consent, violating fundamental principles of informed consent in healthcare.

Jan 6, 2023|Ethics Violation|Healthcare|OpenAI

AI Surveillance Cameras in Serbian Schools Monitored Student Behavior Without Proper Consent

High

AI surveillance cameras in Serbian schools monitored student emotions and behavior without proper consent from students or parents. Digital rights groups successfully challenged the practice, leading to removal of the surveillance system.

Dec 15, 2022|Privacy Leak|Education|Other/Unknown

Lensa AI Generated Non-Consensual Sexualized Images of Users

High

Lensa AI's Magic Avatars feature generated non-consensual sexualized images of users, particularly women, despite non-sexual input photos. The incident highlighted serious safety and consent issues in AI-generated imagery applications.

Dec 5, 2022|Safety Failure|Technology|Other/Unknown

Yara Birkeland Autonomous Ship AI Navigation Failures Delay Commercial Operations

Medium

The Yara Birkeland autonomous cargo ship faced repeated AI navigation system failures that delayed fully autonomous operations by years beyond the 2020 target, resulting in significant cost overruns and operational setbacks.

Nov 25, 2022|Safety Failure|Transportation|Other/Unknown|$25,000,000

Meta's Galactica Scientific AI Shut Down After Generating Fake Research Papers and Biased Content

High

Meta's Galactica AI for scientific text generation was shut down after just 3 days when it began confidently producing fake research papers, fabricated citations, and biased scientific content, raising serious concerns about AI-generated misinformation in academic contexts.

Nov 17, 2022|Hallucination|Technology|Meta|$50,000,000

GitHub Copilot Code Generation Reproduces Copyrighted Code Verbatim

High

GitHub Copilot was found to reproduce copyrighted code verbatim from its training data, leading to a class action lawsuit alleging copyright infringement by the AI coding assistant.

Nov 3, 2022|Copyright Violation|Technology|Other/Unknown

Google DeepMind AlphaFold Protein Structure Prediction Errors Impact Drug Discovery

Medium

AlphaFold protein structure predictions contained errors that led pharmaceutical companies to make incorrect drug design decisions, resulting in significant wasted R&D investments. The incident highlighted the need for experimental validation of AI predictions in critical applications.

Oct 13, 2022|Model Prediction Error|Healthcare|Google|$50,000,000

Zillow Zestimate AI Accused of Racial Bias in Home Valuations

High

Studies revealed that Zillow's Zestimate algorithm systematically undervalues homes in predominantly Black neighborhoods compared to similar properties in white areas, perpetuating housing discrimination through automated valuation models.

Sep 1, 2022|Bias|Finance|Other/Unknown

Stanford Study Reveals AI Code Generation Tools Produce Security Vulnerabilities in 40% of Cases

High

Stanford research in 2022 found that developers using AI code generation tools wrote insecure code roughly 40% of the time, highlighting significant security risks in AI-assisted development.

Aug 8, 2022|Safety Failure|Technology|Other/Unknown

Amazon Ring Doorbell AI-Powered Surveillance Network Shared User Footage with Police Without Warrants

High

Amazon Ring's AI-powered doorbell network shared user footage with over 2,000 police departments without warrants or user consent from 2019-2022, affecting millions of users before policy changes ended the practice.

Jul 13, 2022|Privacy Leak|Technology|Other/Unknown