AI Incident Database

181 documented incidents. Search, filter, and explore.

Volkswagen IDA Voice Assistant Made Unintended Emergency Calls

Medium

Volkswagen's IDA voice assistant activated unintentionally and placed emergency calls, causing false alarms for emergency services and operational disruption.

Mar 15, 2024|Agent Error|Automotive|Other/Unknown|$100,000

AI Triage System Incorrectly Prioritized Emergency Patients at Dutch Hospital

High

An AI triage system at a Dutch hospital incorrectly classified emergency patients, sending high-acuity cases to lower-priority queues. The incident highlights the risks of automated medical decision-making without adequate human oversight.

Mar 15, 2024|Medical Error|Healthcare|Other/Unknown

Wave of AI-Hallucinated Legal Citations Filed in Multiple US Federal Courts

High

Throughout 2024, federal judges sanctioned multiple attorneys across the US for filing legal briefs containing AI-hallucinated case citations. The pattern of fake precedents undermined court proceedings and prompted new disclosure requirements.

Mar 15, 2024|Hallucination|Legal|Other/Unknown|$500,000

AI-Generated Scientific Papers Infiltrate Peer-Reviewed Journals at Scale

High

Multiple peer-reviewed journals discovered hundreds of AI-generated papers containing telltale phrases like 'As an AI language model,' leading to mass retractions by Wiley and other publishers in 2024.

Mar 15, 2024|Misinformation|Education|Other/Unknown|$5,000,000

PhotoMath and Chegg AI Tools Provided Incorrect Solutions Leading to Student Misinformation

Medium

AI-powered homework tools including PhotoMath and Chegg AI provided incorrect mathematical solutions, causing students to submit wrong answers and learn incorrect methods.

Mar 15, 2024|Hallucination|Education|Other/Unknown

AI Grading Systems Show Racial Bias Against African American Student Names

High

Research revealed that AI essay grading systems systematically gave lower scores to essays when student names suggested African American identity, demonstrating concerning racial bias in educational AI tools.

Mar 15, 2024|Bias|Education|Other/Unknown

Deepfake Audio Used to Manipulate Stock Prices in Market Fraud Scheme

Critical

Criminals used AI-generated deepfake audio impersonating a Fortune 500 CEO to manipulate stock prices, causing $25 million in investor losses before detection. The scheme highlights vulnerabilities in financial market authentication systems.

Mar 8, 2024|Deepfake / Fraud|Finance|Other/Unknown|$25,000,000

DoNotPay AI Lawyer Fined $193K for Unauthorized Practice of Law

Medium

DoNotPay's AI chatbot, marketed as a 'robot lawyer,' provided inaccurate legal advice, leading to a $193,000 FTC settlement over unauthorized practice of law.

Mar 7, 2024|Legal / Regulatory|Legal|Other/Unknown|$193,000

Google Gemini AI Image Generator Refused to Create Images of White People and Generated Historically Inaccurate Content

High

Google's Gemini AI image generator exhibited severe bias by refusing to create images of white people and generating historically inaccurate depictions. Google paused the feature after widespread criticism.

Feb 21, 2024|Bias|Technology|Google|$50,000,000

AI Speed Cameras Issue False Tickets to Vehicle Shadows and Misidentified Objects

Medium

AI-powered traffic enforcement cameras systematically issued false tickets to vehicle shadows, reflections, and cars in wrong lanes due to computer vision failures. Hundreds of drivers affected across multiple jurisdictions with ongoing litigation challenging automated enforcement accuracy.

Feb 15, 2024|Computer Vision|Government|Other/Unknown|$500,000

Microsoft Copilot Generated Inappropriate Content About Public Figures

Medium

Microsoft Copilot generated inappropriate sexual and violent content about real public figures in early 2024, exposing weaknesses in content filtering systems despite existing safety measures.

Feb 15, 2024|Safety Failure|Technology|Microsoft

Air Canada Chatbot Promised Non-Existent Bereavement Fare Discount

Medium

Air Canada's customer service chatbot told passenger Jake Moffatt he could book a full-price ticket and retroactively claim a bereavement discount within 90 days. This policy did not exist. When Moffatt tried to claim the discount, Air Canada refused, arguing the chatbot was wrong. A tribunal ruled Air Canada must honor the chatbot's promise.

Feb 15, 2024|Hallucination|Transportation|Other/Unknown|$812

Microsoft Copilot for 365 Exposed Confidential Data Due to SharePoint Overpermissioning

High

Microsoft Copilot for 365 exposed confidential documents by leveraging overpermissioned SharePoint and OneDrive access, allowing users to surface, via AI-powered search, sensitive information they should not have been able to see.

Feb 12, 2024|Privacy Leak|Technology|Other/Unknown

JPMorgan Chase AI-Powered Financial Insights Tool Provided Incorrect Spending Analysis and Budget Recommendations

Medium

JPMorgan Chase's AI financial insights tool miscategorized customer transactions and produced flawed budget recommendations, affecting 45,000 customers and leading some to make poor financial decisions.

Feb 8, 2024|Financial Error|Finance|Other/Unknown|$2,500,000

Deepfake Video Call Defrauds Engineering Firm Arup of $25.6 Million (HK$200 Million)

Critical

A finance worker in the Hong Kong office of UK engineering firm Arup (known for designing the Sydney Opera House) was tricked into transferring $25.6 million (HK$200 million) to fraudsters who used deepfake technology to impersonate the company's CFO and other colleagues on a video conference call. Every other participant on the call was a deepfake.

Feb 4, 2024|Deepfake / Fraud|Engineering|Other/Unknown|$25,600,000

Non-Consensual AI-Generated Explicit Images of Taylor Swift Go Viral on X/Twitter

Critical

AI-generated explicit images of Taylor Swift spread across X/Twitter in January 2024, accumulating tens of millions of views before removal. The incident prompted congressional action on deepfake legislation and platform policy changes.

Jan 25, 2024|Deepfake / Non-consensual|Media|Other/Unknown

AI-Generated Robocalls Impersonating Biden Discourage New Hampshire Primary Voting

High

A political consultant used AI voice cloning to create robocalls impersonating President Biden, telling New Hampshire voters not to participate in the 2024 Democratic primary. The FCC imposed a $6 million fine and declared AI-generated voice robocalls illegal.

Jan 22, 2024|Deepfake / Fraud|Government|Other/Unknown