YouTube Kids Algorithm Promoted Disturbing Content to Children (Elsagate)
Severity
Critical
YouTube Kids' recommendation algorithm promoted disturbing content disguised as children's programming to millions of children. The FTC fined YouTube $170 million in 2019 for COPPA violations related to this incident.
Category
Safety Failure
Industry
Media
Status
Resolved
Date Occurred
Jan 1, 2017
Date Reported
Jul 1, 2017
Jurisdiction
US
AI Provider
Google
Application Type
embedded
Harm Type
psychological
Estimated Cost
$170,000,000
People Affected
50,000,000
Human Review in Place
No
Litigation Filed
No
Regulatory Body
Federal Trade Commission (FTC)
Fine Amount
$170,000,000
children, content_moderation, recommendation_algorithm, youtube, coppa, ftc, elsagate, child_safety, algorithmic_harm
Full Description
In early 2017, parents and researchers began discovering a disturbing trend on YouTube Kids, Google's supposedly child-safe video platform. Content creators had begun producing videos featuring popular children's characters like Elsa from Frozen, Spider-Man, and Peppa Pig in violent, sexually suggestive, or psychologically disturbing scenarios. These videos, collectively known as 'Elsagate,' were specifically designed to exploit YouTube's recommendation algorithm to reach child audiences.
The videos typically featured bright colors, familiar characters, and titles that would appeal to children, but contained inappropriate content including violence, toilet humor, sexual themes, and disturbing imagery. Content creators discovered that YouTube's algorithm prioritized engagement metrics like watch time and clicks over content appropriateness, allowing them to game the system. The algorithm learned to recommend these videos to children who had watched legitimate children's content, creating a pipeline of inappropriate material.
YouTube's recommendation system failed catastrophically because it relied primarily on engagement signals rather than content safety verification. The algorithm identified that children were watching these videos for extended periods and clicking through to similar content, interpreting this as positive engagement. The platform's automated content moderation systems were insufficient to detect the subtle but harmful nature of these videos, which technically didn't violate obvious content policies but were psychologically inappropriate for children.
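The failure mode described above, ranking purely on engagement signals with no safety term in the objective, can be sketched in a few lines. This is a hypothetical illustration only: the video records, field names, and scoring formula are invented for the example and are not YouTube's actual schema or algorithm.

```python
# Hypothetical engagement-only ranker: the score rewards watch time and
# click-through, and nothing in the objective checks appropriateness.

def engagement_score(video):
    # No safety term anywhere in the objective.
    return video["avg_watch_seconds"] * video["click_through_rate"]

def recommend(candidates, k=3):
    # Rank purely by engagement, highest first.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

videos = [
    {"id": "nursery_rhymes", "avg_watch_seconds": 120,
     "click_through_rate": 0.05, "safe_for_kids": True},
    {"id": "elsagate_clip", "avg_watch_seconds": 300,
     "click_through_rate": 0.12, "safe_for_kids": False},
    {"id": "cartoon_ep", "avg_watch_seconds": 180,
     "click_through_rate": 0.06, "safe_for_kids": True},
]

top = recommend(videos)
# The disturbing clip ranks first because children kept watching and
# clicking it - exactly the signal the system interpreted as quality.
```

Because the gamed video generates the strongest engagement signal, it wins the ranking despite being the least appropriate item in the pool.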
The scale of the problem became apparent when researchers and journalists began investigating in mid-2017. Millions of children were exposed to this content through the YouTube Kids app, which parents trusted as a safe environment. The incident highlighted fundamental flaws in algorithmic content curation for vulnerable populations and raised serious questions about platform responsibility for child safety.
The Federal Trade Commission launched an investigation into YouTube's handling of children's data and content, ultimately resulting in a $170 million settlement in September 2019 for violations of the Children's Online Privacy Protection Act (COPPA), the largest COPPA penalty to date. The settlement required YouTube to implement stricter content moderation, have channel owners designate child-directed content, and obtain verifiable parental consent before collecting personal information from children.
Following the scandal and regulatory action, YouTube implemented significant policy changes including enhanced human review of children's content, stricter monetization policies for children's videos, and improved algorithmic safeguards. The company also limited data collection on videos designated as child-directed content and restricted targeted advertising on such videos, fundamentally changing how the platform operates for younger audiences.
Root Cause
YouTube's recommendation algorithm prioritized engagement metrics over content safety, allowing content creators to game the system by using popular children's characters in disturbing videos that the algorithm then promoted to child audiences.
Mitigation Analysis
Human content moderation at scale before algorithmic promotion, stricter verification of children's content creators, and algorithmic safeguards that prioritize child safety over engagement metrics could have prevented this harm. Additionally, parental controls and transparency in recommendation logic would have provided better protection.
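The gating idea in the mitigation analysis, human review before algorithmic promotion, can be sketched as a filter applied ahead of ranking. The per-video review flags below are hypothetical, introduced only to illustrate the structure.

```python
# Hypothetical sketch: promotion is gated on human review, so engagement
# alone cannot surface unreviewed content to children. Field names are
# illustrative, not an actual platform schema.

def safe_recommend(candidates, k=3):
    # Gate first: only videos that passed human review and safety
    # approval are eligible for promotion on a children's surface.
    eligible = [v for v in candidates
                if v.get("human_reviewed") and v.get("safety_approved")]
    # Engagement then orders content only *within* the safe pool.
    return sorted(
        eligible,
        key=lambda v: v["avg_watch_seconds"] * v["click_through_rate"],
        reverse=True,
    )[:k]

videos = [
    {"id": "nursery_rhymes", "avg_watch_seconds": 120,
     "click_through_rate": 0.05,
     "human_reviewed": True, "safety_approved": True},
    {"id": "elsagate_clip", "avg_watch_seconds": 300,
     "click_through_rate": 0.12,
     "human_reviewed": False, "safety_approved": False},
]

# The high-engagement but unreviewed clip is never promoted.
```

The design point is that safety acts as a hard constraint rather than one more weighted signal: a video that games engagement metrics still cannot enter the candidate pool without passing review.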
Lessons Learned
The Elsagate incident demonstrates that algorithmic systems optimizing for engagement can be systematically exploited to harm vulnerable populations. It highlighted the critical need for specialized safety controls and human oversight when AI systems serve children or other protected groups.
Sources
Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children's Privacy Law
Federal Trade Commission · Sep 4, 2019 · regulatory action
On YouTube Kids, Startling Videos Slip Past Filters
New York Times · Nov 4, 2017 · news
YouTube's Algorithm Keeps Pushing Children Into Dark Corners
WIRED · Mar 25, 2018 · news
An update on our continued work to tackle violative content
YouTube Official Blog · Nov 9, 2017 · company statement