AI Incident Database
472 documented incidents. Search, filter, and explore.
Tinder's Hidden Elo Score Algorithm Exposed for Reinforcing Dating Biases
Medium: Fast Company revealed Tinder's secret Elo score system that ranked users by desirability, creating potential bias in dating matches. The algorithm was deprecated following public backlash.
Spotify AI Playlist Algorithm Created Negative Feedback Loops Pushing Users Toward Depressive Content
Medium: Academic research revealed that Spotify's AI recommendation algorithm created negative feedback loops, systematically pushing users who listened to sad music toward increasingly depressive content and raising concerns about algorithmic impact on mental health.
Chinese Social Credit System AI Algorithm Restricts Travel for Millions of Citizens
Critical: China's AI-powered social credit system automatically restricted travel and services for over 23 million citizens based on algorithmic scoring, causing widespread operational harm without adequate transparency or appeals processes.
China's AI-Powered Social Credit System Restricts Millions from Travel
Critical: China's AI-driven social credit system had blocked over 32 million travel ticket purchases by 2019, using algorithmic scoring to restrict citizens' freedom of movement based on financial and behavioral data. The system exemplifies the risks of AI governance without transparency or human oversight.
Instacart AI Payment Algorithm Uses Customer Tips to Subsidize Base Pay Instead of Adding to Worker Earnings
High: Instacart's AI payment system used customer tips to subsidize workers' base pay rather than adding them on top, effectively diverting millions of dollars from delivery workers until public backlash forced policy changes.
YouTube Recommendation Algorithm Created Radicalization Pipeline
High: YouTube's recommendation algorithm systematically directed users toward increasingly extreme content from 2016 to 2019, creating documented radicalization pipelines from mainstream to far-right conspiracy content and affecting millions of users globally.
YouTube Algorithm Systematically Recommended Extremist Content Creating Radicalization Pipeline
Critical: YouTube's recommendation algorithm systematically pushed users toward extremist content from 2016 to 2019, creating documented radicalization pathways that affected millions of users globally before policy changes were implemented.
Chinese AI Traffic Camera Fined Bus Advertisement as Jaywalker
Medium: A Chinese AI traffic camera mistakenly identified a face on a bus advertisement as a jaywalker, publicly shaming an innocent person on a violation display screen because the system could not distinguish real faces from printed images.
Chinese AI Traffic Camera Falsely Identified Bus Advertisement as Jaywalker
Medium: In 2018, a Chinese AI traffic enforcement camera mistakenly identified a businesswoman's face on a bus advertisement as a jaywalker, publicly displaying her photo on a shame screen designed to deter traffic violations.
Xinhua News Agency Deploys AI-Generated News Anchors Raising Disinformation Concerns
Medium: China's Xinhua News Agency launched AI-generated news anchors in 2018, raising international concerns about state media using synthetic presenters for potential propaganda without adequate disclosure.
UC Berkeley Study Finds Algorithmic Mortgage Lenders Discriminate Against Minority Borrowers
High: UC Berkeley researchers found that algorithmic mortgage lenders charged minority borrowers 5.3 basis points more in interest than similarly qualified white borrowers, affecting 1.7 million borrowers annually and resulting in $765 million in excess payments.
Boeing 737 MAX MCAS System Caused Two Fatal Crashes Killing 346 People
Critical: Boeing's MCAS automated flight system caused two fatal 737 MAX crashes that killed 346 people by relying on a single faulty sensor to override pilot control. Boeing concealed system details from pilots and regulators, leading to a worldwide grounding and a $2.5 billion legal settlement.
Boeing 737 MAX MCAS Automated Flight System Failures Lead to Two Fatal Crashes
Critical: Boeing's MCAS automated flight control system caused two fatal 737 MAX crashes, killing 346 people, due to reliance on a single faulty sensor and inadequate pilot oversight mechanisms.
Amazon AI Recruiting Tool Showed Systematic Gender Bias
Major: Amazon developed an internal AI recruiting tool that scored applicants' resumes. The system taught itself to penalize resumes containing indicators of female gender, systematically downranking women for technical roles. Amazon scrapped the tool after discovering the bias.
Amazon's AI Recruitment Tool Systematically Discriminated Against Female Candidates
High: Amazon's AI recruiting tool, trained on historical hiring data, systematically downgraded female candidates, penalizing resumes that mentioned women's colleges or organizations or used female-associated language patterns.
AWS Rekognition Facial Recognition System Shows Racial Bias in Congressional Test
High: MIT and ACLU testing revealed that Amazon Rekognition falsely matched 28 US Congress members with criminal mugshots; 39% of the errors affected people of color, who make up only 20% of Congress.
Amazon Rekognition Falsely Matched 28 Members of Congress as Criminals in ACLU Test
High: ACLU testing revealed that Amazon Rekognition falsely matched 28 Congress members as criminals, with a disproportionate impact on people of color, highlighting significant racial bias in facial recognition technology used by law enforcement.
Amazon Rekognition Facial Recognition System Exhibited Racial Bias in Congressional Test
High: ACLU testing revealed that Amazon Rekognition falsely matched 28 Congress members with criminal mugshots, disproportionately affecting people of color. The incident highlighted systemic bias in facial recognition technology used by law enforcement.
Amazon Rekognition Facial Recognition System Sold to Police Despite Known Racial Bias
Critical: Amazon sold its Rekognition facial recognition system to police departments from 2016 to 2020 despite documented racial bias that caused higher error rates for people of color. The company implemented a moratorium in 2020 following protests and employee pressure.
IBM Watson for Oncology Recommended Unsafe Cancer Treatments
High: IBM Watson for Oncology made unsafe cancer treatment recommendations after being trained on hypothetical rather than real patient data, leading to widespread physician overrides and hospitals abandoning the system.