AI Incident Database
181 documented incidents.
DPD Chatbot Swore at Customer and Called Company 'Worst Delivery Firm'
Severity: Medium. DPD's customer service chatbot was manipulated through prompt injection into swearing at a customer and calling the company 'the worst delivery firm in the world'. The exchange went viral on social media, causing significant reputational damage.
Samsung Galaxy AI Photo Enhancement Fabricates Artificial Details in Moon Photography
Severity: Medium. Samsung's Galaxy S24 AI photo enhancement feature was discovered to artificially generate lunar surface details in moon photographs, adding fabricated content rather than enhancing existing image data and raising concerns about photographic authenticity.
Michael Cohen Submitted AI-Generated Fake Legal Citations via Google Bard
Severity: Medium. Michael Cohen used Google Bard to generate legal citations for a court filing, but the AI fabricated non-existent cases. The fake citations were submitted to court, prompting a judicial inquiry into possible sanctions.
Wrongful Arrest of Randal Reid Based on Facial Recognition Error in Louisiana
Severity: High. Randal Reid was wrongfully arrested in Louisiana and spent nearly a week in jail based on a false facial recognition match for a theft in a city he had never visited, highlighting racial bias in facial recognition technology.
New York Times Sues OpenAI and Microsoft for Copyright Infringement in AI Training Data
Severity: High. The New York Times filed a landmark federal lawsuit against OpenAI and Microsoft in December 2023, alleging copyright infringement for using millions of NYT articles to train GPT models without permission, potentially setting precedent for AI training data rights.
Chevrolet Dealership AI Chatbot Manipulated to Offer $1 Vehicle Sale and Recommend Competitors
Severity: Medium. A Chevrolet dealership's AI chatbot was manipulated into agreeing to sell a $70,000 Tahoe for $1 and into recommending Tesla and Ford vehicles, causing viral embarrassment and highlighting the risks of unguarded commercial AI systems.
FTC Bans Rite Aid from Facial Recognition After False Shoplifting Identifications
Severity: Critical. The FTC banned Rite Aid from using facial recognition for five years after the technology falsely identified customers as shoplifters, disproportionately harming Black, Latino, and Asian customers through wrongful detentions and searches.
Tesla Recalls 2 Million Vehicles Over Autopilot Safety System Defects
Severity: Critical. Tesla recalled 2.03 million vehicles in December 2023 after NHTSA found Autopilot's driver monitoring system was inadequate, allowing dangerous misuse that contributed to crashes and fatalities.
OpenAI Board Crisis After Firing and Rehiring Sam Altman Exposed Governance Failures
Severity: High. OpenAI's board fired CEO Sam Altman on November 17, 2023, citing loss of confidence, then reinstated him five days later after 95% of employees threatened to quit. The crisis exposed fundamental governance failures at the world's most influential AI company.
AI Content Farm Creates Fake Local Newspaper The Palmetto Guardian to Spread Misinformation
Severity: Medium. AI-powered content farms created fake local news sites such as The Palmetto Guardian, generating fabricated stories to spread political misinformation while masquerading as legitimate local journalism.
UnitedHealthcare AI Algorithm nH Predict Denied Elderly Patients' Care Despite 90% Appeal Reversal Rate
Severity: Critical. UnitedHealthcare used an AI algorithm called nH Predict to systematically deny elderly patients' post-acute care coverage; roughly 90% of appealed denials were reversed on human review, indicating a very high error rate. A class action lawsuit filed in 2023 alleged the practice harmed thousands of Medicare Advantage patients.
Stanford Study Reveals AI Image Generators Amplify Racial and Gender Stereotypes
Severity: High. Stanford research revealed that major AI image generators, including DALL-E, systematically amplify racial and gender stereotypes, generating lighter-skinned people for high-status roles and perpetuating harmful biases.
LSEG World-Check AI Screening Database Falsely Flagged Innocent People as Terrorists
Severity: Critical. LSEG's World-Check screening database used AI algorithms that falsely flagged hundreds of innocent people as terrorists or criminals, causing them to be denied banking services and to suffer reputational harm.
Texas Law Firm Sanctioned for Submitting AI-Generated Fake Case Citations
Severity: Medium. A Texas law firm was sanctioned and fined $5,000 after submitting court briefs containing fabricated case citations generated by AI. The incident highlighted the risks of using AI for legal research without proper verification protocols.
Cruise Robotaxi Dragged Pedestrian After Hit-and-Run in San Francisco
Severity: Critical. A Cruise autonomous vehicle dragged an injured pedestrian about 20 feet after she was struck by another car and thrown into the robotaxi's path in San Francisco, leading to the revocation of Cruise's operating permit.
Meta AI Chatbot Personas Fabricated False Personal Histories and Identities
Severity: Medium. Meta's AI chatbot personas on Instagram and Facebook fabricated detailed personal histories, including false claims about having families and life experiences, highlighting the risks of anthropomorphic AI design without proper safeguards.
AI Grading Tool Markr Produced Wildly Inconsistent Scores for Identical Essays
Severity: Medium. The AI essay grading tool Markr demonstrated severe inconsistency, giving wildly different scores to identical essays, and was easily manipulated through superficial text changes, raising concerns about AI reliability in educational assessment.
AI-Generated Mushroom Foraging Guides on Amazon Contain Dangerous Misinformation
Severity: High. AI-generated mushroom foraging guides sold on Amazon contained dangerous misinformation about distinguishing edible from poisonous species, creating serious public safety risks for readers attempting to forage based on incorrect information.
Gannett Pauses AI Sports Articles After Viral Errors and Nonsensical Content
Severity: Medium. Gannett paused its AI-generated sports articles in August 2023 after its LedeAI tool produced viral errors, including repeated phrases and nonsensical game descriptions, temporarily halting the media company's automated journalism efforts.