AI Incident Database
181 documented incidents.
Volkswagen IDA Voice Assistant Made Unintended Emergency Calls
Medium: Volkswagen's IDA voice assistant incorrectly activated and made unintended emergency calls, causing false alarms to emergency services and operational disruption.
AI Triage System Incorrectly Prioritized Emergency Patients at Dutch Hospital
High: An AI triage system at a Dutch hospital incorrectly classified emergency patients, sending high-acuity cases to lower-priority queues. The incident highlights the risks of automated medical decision-making without adequate human oversight.
Wave of AI-Hallucinated Legal Citations Filed in Multiple US Federal Courts
High: Throughout 2024, federal judges sanctioned multiple attorneys across the US for filing legal briefs containing AI-hallucinated case citations. The pattern of fake precedents undermined court proceedings and prompted new disclosure requirements.
AI-Generated Scientific Papers Infiltrate Peer-Reviewed Journals at Scale
High: Multiple peer-reviewed journals discovered hundreds of AI-generated papers containing telltale phrases like 'As an AI language model,' leading to mass retractions by Wiley and other publishers in 2024.
PhotoMath and Chegg AI Tools Provided Incorrect Solutions Leading to Student Misinformation
Medium: AI-powered homework tools including PhotoMath and Chegg AI gave students incorrect mathematical solutions, leading to wrong submissions and reinforcing flawed understanding of the material.
AI Grading Systems Show Racial Bias Against African American Student Names
High: Research revealed that AI essay grading systems systematically gave lower scores to essays when student names suggested African American identity, demonstrating concerning racial bias in educational AI tools.
Deepfake Audio Used to Manipulate Stock Prices in Market Fraud Scheme
Critical: Criminals used AI-generated deepfake audio impersonating a Fortune 500 CEO to manipulate stock prices, causing $25 million in investor losses before detection. The scheme highlights vulnerabilities in financial market authentication systems.
DoNotPay 'Robot Lawyer' Fined $193K by FTC Over Deceptive AI Claims
Medium: DoNotPay's AI chatbot, marketed as a 'robot lawyer', provided inaccurate legal advice. The company agreed to a $193,000 FTC settlement over deceptive claims that its AI could substitute for a human lawyer.
Google Gemini AI Image Generator Refused to Create Images of White People and Generated Historically Inaccurate Content
High: Google's Gemini AI image generator exhibited severe bias by refusing to create images of white people and generating historically inaccurate depictions. Google paused the feature after widespread criticism.
AI Speed Cameras Issue False Tickets to Vehicle Shadows and Misidentified Objects
Medium: AI-powered traffic enforcement cameras systematically issued false tickets for vehicle shadows, reflections, and cars in adjacent lanes due to computer vision failures. Hundreds of drivers were affected across multiple jurisdictions, with ongoing litigation challenging the accuracy of automated enforcement.
Microsoft Copilot Generated Inappropriate Content About Public Figures
Medium: Microsoft Copilot generated inappropriate sexual and violent content about real public figures in early 2024, exposing weaknesses in content filtering systems despite existing safety measures.
Air Canada Chatbot Promised Non-Existent Bereavement Fare Discount
Medium: Air Canada's customer service chatbot told passenger Jake Moffatt he could book a full-price ticket and retroactively claim a bereavement discount within 90 days. No such policy existed. When Moffatt tried to claim the discount, Air Canada refused, arguing the chatbot had been wrong. A tribunal ruled that Air Canada must honor the chatbot's promise.
Microsoft Copilot for 365 Exposed Confidential Data Due to SharePoint Overpermissioning
High: Microsoft Copilot for 365 exposed confidential documents by leveraging overpermissioned SharePoint and OneDrive access, allowing users to surface, via AI-powered search, sensitive information they should not have been able to access.
JPMorgan Chase AI-Powered Financial Insights Tool Provided Incorrect Spending Analysis and Budget Recommendations
Medium: JPMorgan Chase's AI financial insights tool incorrectly categorized customer transactions and provided flawed budget recommendations, affecting 45,000 customers and leading to poor financial decisions.
Deepfake CFO Video Call Defrauds Arup Employee of $200 Million HKD ($25.6 Million USD)
Critical: A finance worker in the Hong Kong office of UK engineering firm Arup (known for its work on the Sydney Opera House) was tricked into transferring HK$200 million (about US$25.6 million) to fraudsters who used deepfake technology to impersonate the company's CFO and other colleagues during a video conference call. Every participant on the call other than the victim was a deepfake. Hong Kong police are investigating this sophisticated AI-enabled fraud targeting corporate finance operations.
Non-Consensual AI-Generated Explicit Images of Taylor Swift Go Viral on X/Twitter
Critical: AI-generated explicit images of Taylor Swift spread across X in January 2024, accumulating tens of millions of views before removal. The incident prompted congressional action on deepfake legislation and changes to platform policy.
AI-Generated Robocalls Impersonating Biden Discourage New Hampshire Primary Voting
High: A political consultant used AI voice cloning to create robocalls mimicking President Biden's voice, telling New Hampshire voters not to participate in the 2024 Democratic primary. The FCC imposed a $6 million fine and declared AI-generated voice robocalls illegal.