AI Incident Database

472 documented incidents.

Tinder's Hidden Elo Score Algorithm Exposed for Reinforcing Dating Biases

Medium

Fast Company revealed Tinder's secret Elo score system that ranked users by desirability, creating potential bias in dating matches. The algorithm was deprecated following public backlash.

Mar 15, 2019|Bias|Technology|Other/Unknown
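How an Elo-style desirability score behaves can be shown with the standard Elo update rule. This is an illustrative sketch only; Tinder never published its implementation, and the k-factor and swipe-as-match framing here are assumptions:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expectation: probability that A 'wins' against B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated ratings after one pairwise outcome.

    In a dating framing, approval from a highly rated user moves your
    score up far more than approval from a low-rated one -- which is
    how such a system can entrench early popularity.
    """
    delta = k * ((1.0 if a_won else 0.0) - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# A low-rated user gains a lot from approval by a high-rated user ...
print(elo_update(1200, 1600, a_won=True))   # ~(1229, 1571)
# ... but much less from approval by an equally rated one.
print(elo_update(1200, 1200, a_won=True))   # (1216, 1184)
```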

Spotify AI Playlist Algorithm Created Negative Feedback Loops Pushing Users Toward Depressive Content

Medium

Academic research revealed Spotify's AI recommendation algorithm created negative feedback loops, systematically pushing users who listened to sad music toward increasingly depressive content, raising concerns about algorithmic impact on mental health.

Mar 15, 2019|Safety Failure|Media|Other/Unknown
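The feedback-loop dynamic described above is easy to reproduce in miniature. A toy simulation (not Spotify's algorithm; the greedy update rule and learning rate are assumptions) shows how recommending purely from listening history drifts a profile toward whatever the user already plays most:

```python
# Toy taste profile: probability mass over two moods.
profile = {"sad": 0.55, "upbeat": 0.45}

def recommend(profile):
    """Greedy recommender: always pick the mood the profile currently favors."""
    return max(profile, key=profile.get)

def listen_and_update(profile, mood, lr=0.05):
    """Shift the profile toward whatever was just played."""
    for m in profile:
        target = 1.0 if m == mood else 0.0
        profile[m] += lr * (target - profile[m])

for _ in range(50):
    listen_and_update(profile, recommend(profile))

print(profile)  # {'sad': ~0.97, 'upbeat': ~0.03}: a slight lean became near-total
```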

Chinese Social Credit System AI Algorithm Restricts Travel for Millions of Citizens

Critical

China's AI-powered social credit system automatically restricted travel and services for over 23 million citizens based on algorithmic scoring, creating widespread operational harm without adequate transparency or appeals processes.

Mar 1, 2019|Bias|Government|Other/Unknown

China's AI-Powered Social Credit System Restricts Millions from Travel

Critical

China's AI-driven social credit system blocked over 32 million travel ticket purchases by 2019, using algorithmic scoring to restrict citizens' freedom of movement based on financial and behavioral data. The system exemplifies the risks of algorithmic governance conducted without transparency or human oversight.

Feb 28, 2019|Bias|Government|Other/Unknown

Instacart AI Payment Algorithm Uses Customer Tips to Subsidize Base Pay Instead of Adding to Worker Earnings

High

Instacart's AI payment system used customer tips to subsidize workers' guaranteed base pay rather than adding them on top, effectively diverting millions of dollars in tips away from delivery workers until public backlash forced a policy change.

Feb 6, 2019|Financial Error|Technology|Other/Unknown|$10,000,000
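The pay mechanics at issue reduce to simple arithmetic. A sketch of the two models side by side (the $10 guaranteed minimum is an illustrative figure, not Instacart's confirmed number):

```python
def tip_subsidized_pay(guaranteed_min: float, tip: float) -> dict:
    """Tips count *toward* the guarantee: the company back-fills only the gap."""
    company_pays = max(guaranteed_min - tip, 0.0)
    return {"company": company_pays, "tip": tip, "worker_total": company_pays + tip}

def tip_on_top_pay(guaranteed_min: float, tip: float) -> dict:
    """Tips are added *on top* of the company's payment."""
    return {"company": guaranteed_min, "tip": tip, "worker_total": guaranteed_min + tip}

# With an assumed $10 guaranteed minimum and an $8 tip:
print(tip_subsidized_pay(10.0, 8.0))  # company pays $2  -> worker gets $10
print(tip_on_top_pay(10.0, 8.0))      # company pays $10 -> worker gets $18
# The $8 difference is the amount the tip quietly saved the company.
```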

YouTube Recommendation Algorithm Created Radicalization Pipeline

High

YouTube's recommendation algorithm systematically directed users toward increasingly extreme content between 2016 and 2019, creating documented radicalization pipelines from mainstream to far-right conspiracy content and affecting millions of users globally.

Jan 29, 2019|Bias|Media|Google

YouTube Algorithm Systematically Recommended Extremist Content Creating Radicalization Pipeline

Critical

YouTube's recommendation algorithm systematically pushed users toward extremist content from 2016 to 2019, creating documented radicalization pathways that affected millions of users globally before policy changes were implemented.

Jan 25, 2019|Bias|Media|Google

Chinese AI Traffic Camera Flagged Bus Advertisement as Jaywalker

Medium

A Chinese AI traffic camera mistakenly identified a face on a bus advertisement as a jaywalker, publicly shaming an innocent person on a violation display screen because the system could not distinguish real faces from printed images.

Nov 22, 2018|Surveillance|Government|Other/Unknown

Chinese AI Traffic Camera Falsely Identified Bus Advertisement as Jaywalker

Medium

In 2018, a Chinese AI traffic enforcement camera mistakenly identified a businesswoman's face on a bus advertisement as a jaywalker, publicly displaying her photo on a shame screen designed to deter traffic violations.

Nov 21, 2018|Safety Failure|Government|Other/Unknown

Xinhua News Agency Deploys AI-Generated News Anchors Raising Disinformation Concerns

Medium

China's Xinhua News Agency launched AI-generated news anchors in 2018, raising international concerns about state media using synthetic presenters for potential propaganda purposes without adequate disclosure.

Nov 8, 2018|Deepfake / Fraud|Media|Other/Unknown

UC Berkeley Study Finds Algorithmic Mortgage Lenders Discriminate Against Minority Borrowers

High

UC Berkeley researchers found that algorithmic mortgage lenders charged minority borrowers interest rates 5.3 basis points higher than those charged to similarly qualified white borrowers, affecting 1.7 million borrowers annually and resulting in $765 million in excess interest payments.

Nov 1, 2018|Bias|Finance|Other/Unknown|$765,000,000
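To make the 5.3-basis-point figure concrete: a basis point is 0.01 percentage points of annual interest, so the per-borrower cost scales with loan balance. A rough sketch with an assumed average balance (the $250,000 figure is illustrative, not from the study; the study's own $765 million aggregate reflects its loan-level data, so this single-number conversion is only to show the units):

```python
BPS = 0.0001  # one basis point = 0.01 percentage points = 0.0001 as a fraction

rate_gap_bps = 5.3
avg_loan_balance = 250_000        # assumption, for illustration only
borrowers_per_year = 1_700_000    # figure reported in the study

extra_per_borrower = avg_loan_balance * rate_gap_bps * BPS
print(f"extra interest per borrower: ${extra_per_borrower:,.2f}/year")     # $132.50
print(f"aggregate: ${extra_per_borrower * borrowers_per_year:,.0f}/year")  # ~$225 million
```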

Boeing 737 MAX MCAS System Caused Two Fatal Crashes Killing 346 People

Critical

Boeing's MCAS automated flight system caused two fatal 737 MAX crashes, killing 346 people, by acting on a single faulty angle-of-attack sensor to override pilot control. Boeing concealed system details from pilots and regulators, leading to a worldwide grounding and a $2.5 billion legal settlement.

Oct 29, 2018|Safety Failure|Other|Other/Unknown|$2,500,000,000

Boeing 737 MAX MCAS Automated Flight System Failures Lead to Two Fatal Crashes

Critical

Boeing's MCAS automated flight control system caused two fatal 737 MAX crashes, killing 346 people, due to its reliance on a single angle-of-attack sensor and inadequate mechanisms for pilot override.

Oct 29, 2018|Safety Failure|Transportation|Other/Unknown|$20,000,000,000
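The core design flaw described in both entries, acting on one sensor with no cross-check, can be illustrated with hypothetical pseudologic (not Boeing's actual flight software; the disagreement threshold is an assumption): a simple comparison between redundant angle-of-attack readings would refuse to act on a single faulty value.

```python
AOA_DISAGREE_LIMIT_DEG = 5.5   # illustrative threshold, not Boeing's value

def single_sensor_command(aoa: float, stall_aoa: float = 15.0) -> bool:
    """The failure mode: trust one reading unconditionally."""
    return aoa > stall_aoa

def cross_checked_command(aoa_left: float, aoa_right: float,
                          stall_aoa: float = 15.0) -> bool:
    """Redundant logic: act only if both sensors agree the AoA is high."""
    if abs(aoa_left - aoa_right) > AOA_DISAGREE_LIMIT_DEG:
        return False  # sensors disagree -> inhibit automation, alert the crew
    return min(aoa_left, aoa_right) > stall_aoa

# A stuck-high left sensor versus a sane right one:
print(single_sensor_command(74.5))          # True  -> nose-down trim commanded
print(cross_checked_command(74.5, 5.0))     # False -> automation inhibited
```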

Amazon AI Recruiting Tool Showed Systematic Gender Bias

High

Amazon developed an internal AI recruiting tool that evaluated job applicants by scoring resumes. The system taught itself to penalize resumes containing indicators of female gender, systematically downranking women for technical roles. Amazon scrapped the tool after discovering the bias.

Oct 10, 2018|Bias|HR / Recruiting|Other/Unknown

Amazon's AI Recruitment Tool Systematically Discriminated Against Female Candidates

High

Amazon's AI recruiting tool, trained on historical hiring data, systematically downgraded female candidates, penalizing resumes that mentioned women's colleges or organizations or that used female-associated language.

Oct 10, 2018|Bias|Technology|Other/Unknown
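How a resume scorer trained on historical outcomes picks up gender proxies can be shown with a toy model. The data and weighting here are entirely hypothetical; Amazon's system was never published:

```python
import math
from collections import Counter

# Hypothetical historical outcomes from a male-dominated hiring record (1 = hired).
history = [
    ("java backend scaling chess club", 1),
    ("java distributed systems chess club", 1),
    ("python machine learning women's chess club", 0),
    ("java backend women's coding society", 0),
    ("python data pipelines scaling", 1),
    ("python machine learning women's college", 0),
]

hired, rejected = Counter(), Counter()
for resume, label in history:
    (hired if label else rejected).update(resume.split())

def token_weight(tok: str) -> float:
    """Smoothed log-odds of 'hired' given the token appears."""
    return math.log((hired[tok] + 1) / (rejected[tok] + 1))

def score(resume: str) -> float:
    return sum(token_weight(t) for t in resume.split())

print(token_weight("java"))     # positive: correlated with past hires
print(token_weight("women's"))  # negative: the model learned a gender proxy
print(score("python machine learning women's chess club"))  # penalized overall
```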

AWS Rekognition Facial Recognition System Shows Racial Bias in Congressional Test

High

ACLU testing revealed that Amazon Rekognition falsely matched 28 members of the US Congress with criminal mugshots; 39% of the false matches involved people of color, who make up only 20% of Congress.

Jul 26, 2018|Bias|Technology|Other/Unknown
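One widely discussed factor in this test was the similarity threshold: the ACLU reportedly ran Rekognition at the service's default 80% setting, while Amazon recommended 99% for law-enforcement use. A minimal sketch of the relevant boto3 call (the API is real; the image paths are placeholders):

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def compare(source_path: str, target_path: str, threshold: float):
    """Return face-match similarity scores at or above the given threshold."""
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        resp = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,  # service default is 80.0
        )
    return [m["Similarity"] for m in resp["FaceMatches"]]

# At 80% many borderline faces "match"; at 99% most of those drop out.
# print(compare("member_photo.jpg", "mugshot.jpg", threshold=80.0))
# print(compare("member_photo.jpg", "mugshot.jpg", threshold=99.0))
```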

Amazon Rekognition Falsely Matched 28 Members of Congress as Criminals in ACLU Test

High

ACLU testing revealed Amazon Rekognition falsely matched 28 members of Congress with criminal mugshots, with disproportionate impact on people of color, highlighting significant racial bias in facial recognition technology used by law enforcement.

Jul 26, 2018|Bias|Government|Other/Unknown

Amazon Rekognition Facial Recognition System Exhibited Racial Bias in Congressional Test

High

ACLU testing revealed Amazon Rekognition falsely matched 28 Congress members with criminal mugshots, disproportionately affecting people of color. The incident highlighted systemic bias in facial recognition technology used by law enforcement.

Jul 26, 2018|Bias|Technology|Other/Unknown

Amazon Rekognition Facial Recognition System Sold to Police Despite Known Racial Bias

Critical

Amazon sold its Rekognition facial recognition system to police departments from 2016 to 2020 despite documented racial bias that caused higher error rates for people of color. The company implemented a moratorium in 2020 following protests and employee pressure.

Jul 26, 2018|Bias|Government|Other/Unknown

IBM Watson for Oncology Recommended Unsafe Cancer Treatments

High

IBM Watson for Oncology made unsafe cancer treatment recommendations after being trained on hypothetical rather than real patient data, leading to widespread physician overrides and hospital abandonment of the system.

Jul 25, 2018|Medical Error|Healthcare|Other/Unknown|$62,000,000