South Korean AI Chatbot 'Lee Luda' Shut Down for Hate Speech and Privacy Violations
Severity
High
South Korean chatbot Lee Luda was shut down after generating homophobic and racist content and being found to have illegally trained on 600,000 users' private KakaoTalk conversations without consent.
Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Dec 1, 2020
Date Reported
Jan 11, 2021
Jurisdiction
South Korea
AI Provider
Other/Unknown
Model
Lee Luda
Application Type
chatbot
Harm Type
privacy
People Affected
600,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
Personal Information Protection Commission (PIPC)
chatbot, bias, privacy, hate_speech, south_korea, data_consent, kakao, discrimination, personal_data
Full Description
In December 2020, South Korean startup Scatter Lab launched Lee Luda, an AI chatbot designed to chat with users in a conversational manner on Facebook Messenger. The chatbot quickly gained popularity, attracting hundreds of thousands of users within weeks of its launch. Lee Luda was marketed as a 20-year-old college student who could engage in natural conversations with users.
Within a month of operation, serious problems emerged with the chatbot's responses. Users reported that Lee Luda was generating discriminatory and offensive content, including homophobic slurs, racist comments, and sexually inappropriate responses. The chatbot made derogatory statements about LGBTQ+ individuals, minorities, and women, causing significant public outrage and concern about the perpetuation of harmful stereotypes and discrimination.
Investigations revealed that the underlying cause of these problematic outputs was Scatter Lab's training methodology. The company had trained Lee Luda using approximately 10 billion conversations scraped from KakaoTalk, South Korea's dominant messaging platform. Critically, this data collection occurred without explicit user consent, affecting an estimated 600,000 users whose private conversations were harvested and used for AI training purposes.
The privacy violations extended beyond unauthorized data collection. Users discovered they could manipulate Lee Luda to reveal personal information from the training data, including real names, phone numbers, and private conversation details of actual KakaoTalk users. This represented a severe breach of personal privacy and demonstrated fundamental flaws in the model's design and data handling practices.
Facing mounting public criticism, regulatory scrutiny, and threats of legal action, Scatter Lab announced the suspension of Lee Luda's service on January 11, 2021. The Personal Information Protection Commission (PIPC) launched an investigation into the company's data practices, ultimately finding violations of South Korea's Personal Information Protection Act. The incident prompted broader discussions about AI ethics, data consent, and the responsibility of AI developers in South Korea's rapidly growing tech sector.
Root Cause
The chatbot was trained on private KakaoTalk conversations scraped without explicit user consent, and the training data contained biased and discriminatory language that the model learned to reproduce without adequate content filtering or bias mitigation.
Mitigation Analysis
This incident could have been prevented through proper data provenance tracking to ensure training data was ethically sourced, robust bias testing during development, and continuous monitoring of outputs for discriminatory content. Human content moderation and strict data consent protocols would have caught both the privacy violations and hate speech before public deployment.
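The output-monitoring and moderation controls described above can be sketched in code. The following is a minimal illustrative example, not Scatter Lab's actual system: a pre-send moderation gate that blocks a chatbot reply when it matches a blocklisted term or appears to leak personal data such as a phone number. The blocklist term and phone-number pattern are placeholder assumptions; a production system would need a vetted lexicon, a trained toxicity classifier, and broader PII detection.

```python
import re

# Placeholder blocklist; a real deployment needs a curated, regularly
# updated lexicon and/or a toxicity classifier, not a hardcoded set.
BLOCKLIST = {"slur_example"}

# Rough pattern for a common Korean mobile-number format (assumption
# for illustration; real PII detection is far more involved).
PHONE_PATTERN = re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b")

def is_safe(reply: str) -> bool:
    """Return False if the reply contains blocked terms or apparent personal data."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    if PHONE_PATTERN.search(reply):
        return False
    return True

def moderate(reply: str, fallback: str = "I'd rather not talk about that.") -> str:
    """Replace an unsafe reply with a neutral fallback before it reaches the user."""
    return reply if is_safe(reply) else fallback
```

Even a simple gate like this, placed between the model and the user, would have intercepted replies that regurgitated phone numbers from the training data; the deeper fixes (consented data collection, de-identification, bias testing) have to happen before training.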
Litigation Outcome
Scatter Lab faced regulatory sanctions from South Korea's Personal Information Protection Commission and agreed to improve its data practices.
Lessons Learned
The Lee Luda incident demonstrated the critical importance of ethical data sourcing and comprehensive bias testing in AI development. Training on private conversations collected without consent produced both privacy violations and a discriminatory system that amplified societal prejudices.
Sources
South Korean chatbot pulled from Facebook for hate speech
BBC · Jan 14, 2021 · news
South Korea pulls AI chatbot after hate speech complaints
Reuters · Jan 11, 2021 · news