
South Korean AI Chatbot Lee Luda Shut Down for Hate Speech and Privacy Violations

High

South Korean startup Scatter Lab's AI chatbot Lee Luda was shut down after generating discriminatory comments and violating privacy laws by training on approximately 10 billion private KakaoTalk messages collected without user consent, affecting 750,000 users.

Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Dec 23, 2020
Date Reported
Jan 11, 2021
Jurisdiction
South Korea
AI Provider
Other/Unknown
Model
Lee Luda
Application Type
chatbot
Harm Type
reputational
People Affected
750,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
Personal Information Protection Commission (PIPC)
Fine Amount
$103,000
bias, privacy, data_protection, conversational_ai, south_korea, hate_speech, consent, kakao, messenger

Full Description

Lee Luda was launched by South Korean startup Scatter Lab on December 23, 2020, as an AI chatbot designed to engage in casual conversations with users. The chatbot was marketed as a 20-year-old female university student and quickly gained popularity, attracting over 750,000 users within weeks of its launch on Facebook Messenger and other platforms. However, the service was shut down on January 11, 2021, following widespread public outrage over discriminatory content and serious privacy violations.

The technical failure stemmed from Scatter Lab's decision to train Lee Luda on approximately 10 billion messages from KakaoTalk, South Korea's dominant messaging platform, without implementing proper bias detection or content filtering systems. The company had collected this data through a previous service called Science of Love, which analyzed users' conversation patterns to offer relationship advice. The training dataset contained discriminatory language, hate speech, and biased opinions that the model learned to reproduce, with no safeguards to prevent harmful outputs. When users interacted with Lee Luda, the chatbot would generate homophobic comments, misogynistic statements, and derogatory remarks about minorities and people with disabilities, directly reflecting the unfiltered biases present in the training data.

The incident affected all 750,000 KakaoTalk users whose private conversations were used as training data without their knowledge or consent, constituting a massive privacy violation under South Korean data protection laws. Screenshots of Lee Luda's discriminatory responses spread rapidly across South Korean social media, causing significant reputational damage to Scatter Lab and raising broader concerns about AI safety and data privacy in the country's tech industry.
The controversy sparked national debates about algorithmic bias, corporate responsibility in AI development, and the need for stricter regulations governing the collection and use of personal data for machine learning. The Personal Information Protection Commission (PIPC) launched a formal investigation into Scatter Lab's data practices and found multiple violations of privacy regulations. The commission imposed a fine of 103 million won (approximately $103,000 USD) on the company and ordered the immediate deletion of all improperly collected personal data.

Scatter Lab initially attempted to address the bias issues through emergency content filtering measures but ultimately decided to shut down Lee Luda permanently, acknowledging that the fundamental problems with its training methodology could not be resolved through simple patches.

The Lee Luda incident became a landmark case in South Korean AI governance, highlighting the critical importance of ethical data collection and bias mitigation in machine learning systems. The controversy prompted discussions among policymakers about strengthening AI oversight and data protection frameworks, and several tech companies voluntarily reviewed their own AI development practices. The incident also contributed to increased public awareness of algorithmic bias and privacy rights in South Korea, influencing subsequent regulatory approaches to AI development and deployment across the country's technology sector.

Root Cause

The chatbot was trained on approximately 10 billion private KakaoTalk messages collected without proper user consent, containing biased language that the model learned to reproduce. The training data included personal conversations with discriminatory content that was not filtered or moderated before training.

Mitigation Analysis

This incident could have been prevented through proper data governance including explicit user consent for AI training, comprehensive bias testing of training datasets, content filtering to remove discriminatory language, and ongoing monitoring of chatbot outputs. Implementation of ethical AI review boards and bias detection algorithms during development would have identified problematic patterns before public release.

Litigation Outcome

Scatter Lab paid settlements to affected users and faced regulatory fines from South Korean privacy authorities.

Lessons Learned

The Lee Luda incident highlighted critical gaps in AI governance around data collection consent and bias mitigation in conversational AI systems. It demonstrated that training AI on unfiltered human conversation data without proper preprocessing can amplify societal biases and that explicit user consent is essential when repurposing personal data for AI training.