ScatterLab's AI Chatbot Lee Luda Generated Discriminatory and Sexually Inappropriate Content

Severity
High

South Korean AI chatbot Lee Luda was shut down after generating discriminatory content about minorities and sexually inappropriate responses, leading to a regulatory fine and a class action lawsuit on behalf of its 750,000 users.

Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Dec 23, 2020
Date Reported
Dec 29, 2020
Jurisdiction
South Korea
AI Provider
Other/Unknown
Model
Lee Luda
Application Type
chatbot
Harm Type
reputational
Estimated Cost
$5,000,000
People Affected
750,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
Korea Communications Commission
Fine Amount
$103,000
Tags: chatbot, bias, privacy, discrimination, south_korea, training_data, consent, content_filtering

Full Description

Lee Luda was an AI chatbot developed by South Korean startup ScatterLab and launched on Facebook Messenger on December 23, 2020. Designed to simulate conversations with a 20-year-old university student persona, the chatbot quickly gained popularity, attracting over 750,000 users within weeks of its launch.

The controversy began when users discovered that Lee Luda was generating highly problematic content: sexually explicit responses and discriminatory statements targeting sexual minorities, people with disabilities, and various ethnic groups. The chatbot made derogatory comments about LGBTQ+ individuals, expressed bias against people from certain regions of Korea, and produced inappropriate sexual content when prompted by users.

Investigation revealed that ScatterLab had trained the model on approximately 10 billion conversational messages collected from KakaoTalk, South Korea's dominant messaging platform, through the company's earlier app, Science of Love. This training data was collected without explicit user consent for AI training purposes and contained unfiltered personal conversations, including biased language patterns and sensitive personal information.

The incident escalated when it was discovered that the chatbot could leak personal information from its training data, including names, phone numbers, and private conversations that users had shared on KakaoTalk. Users reported that they could extract personal details about real individuals by steering the chatbot through specific conversation patterns, raising serious privacy concerns and violating South Korea's Personal Information Protection Act.

Facing mounting public pressure, regulatory scrutiny, and a class action lawsuit on behalf of affected users, ScatterLab shut down Lee Luda permanently on January 11, 2021, just three weeks after its launch. The Korea Communications Commission launched a formal investigation and ultimately fined the company approximately $103,000 for privacy violations and inadequate data protection measures.

Root Cause

The AI model was trained on unfiltered personal conversations from KakaoTalk without proper consent or data sanitization, resulting in biased outputs that reflected discriminatory language patterns and inadvertently memorized personal information from the training dataset.
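The missing safeguard described above — consent checks and sanitization of chat logs before training — can be sketched in a few lines. This is a minimal, hypothetical pre-training pass, not ScatterLab's actual pipeline; the pattern names, placeholder tokens, and helper signatures are all assumptions, and a production system would use a dedicated PII-detection model rather than two regexes.

```python
import re

# Hypothetical sanitization pass: redact common PII patterns
# (Korean mobile numbers, email addresses) from chat logs before
# they are admitted into a training corpus.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b01[016789][-.\s]?\d{3,4}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitize_message(text: str) -> str:
    """Replace each detected PII span with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def build_training_corpus(messages, consented_user_ids, user_id_of):
    """Keep only messages whose author consented to AI-training use,
    then sanitize each surviving message. `user_id_of` is an assumed
    callback mapping a message to its author's ID."""
    return [
        sanitize_message(m)
        for m in messages
        if user_id_of(m) in consented_user_ids
    ]
```

The key design point is that consent filtering happens before any message reaches the model, so non-consented conversations never enter the corpus at all, rather than being scrubbed after the fact.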

Mitigation Analysis

Robust content filtering, comprehensive bias testing across protected categories, and strict data governance for training data collection could have prevented this incident. Ongoing human review of chatbot responses and red-team testing for inappropriate outputs would likewise have surfaced these issues before public deployment.
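The output-gating part of that mitigation can be illustrated with a small sketch. This is not any real moderation API: the denylist terms, threshold, and the stubbed `toxicity_score` classifier are placeholders for what would, in practice, be a trained moderation model feeding a human-review queue.

```python
from typing import Optional

# Illustrative release gate: each candidate chatbot reply is scored
# before being sent; flagged replies are withheld and routed to a
# human-review queue instead of reaching the user.
BLOCKED_TERMS = {"slur_example_1", "slur_example_2"}  # placeholder terms
TOXICITY_THRESHOLD = 0.8

def toxicity_score(reply: str) -> float:
    # Stub standing in for a real moderation classifier.
    return 1.0 if any(t in reply.lower() for t in BLOCKED_TERMS) else 0.0

def release_or_escalate(reply: str, review_queue: list) -> Optional[str]:
    """Return the reply if it passes the gate; otherwise queue it
    for human review and return None (reply withheld)."""
    if toxicity_score(reply) >= TOXICITY_THRESHOLD:
        review_queue.append(reply)
        return None
    return reply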

Litigation Outcome

ScatterLab faced a class action lawsuit (later settled) and a regulatory investigation, and ultimately shut down the service permanently.

Lessons Learned

This incident highlighted critical gaps in AI safety practices including the need for explicit consent when collecting training data, comprehensive bias testing before deployment, and robust content filtering systems to prevent harmful outputs in consumer-facing AI applications.