
Amazon Alexa Recommended Dangerous Electrical Challenge to 10-Year-Old Child

Critical

Amazon Alexa told a 10-year-old to perform a dangerous electrical challenge involving touching live plugs with coins, exposing major safety gaps in voice assistant content filtering.

Category
Safety Failure
Industry
Technology
Status
Resolved
Date Occurred
Dec 26, 2021
Date Reported
Dec 27, 2021
Jurisdiction
US
AI Provider
Other/Unknown
Model
Alexa
Application Type
agent
Harm Type
physical
People Affected
1
Human Review in Place
No
Litigation Filed
No
voice_assistant, child_safety, content_filtering, dangerous_challenge, electrical_hazard, social_media_trend, parental_supervision

Full Description

On December 26, 2021, a 10-year-old girl asked her family's Amazon Echo device for a challenge to try. Instead of suggesting an age-appropriate activity, Alexa responded with a dangerous instruction sourced from the internet: 'Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.' The child had simply been looking for something fun to do during the holiday break.

Her mother, Kristin Livdahl, was present and immediately intervened, preventing the child from attempting the activity. Livdahl shared the incident on Twitter, posting a photo of Alexa's response and expressing her shock at the recommendation.

The instruction described the 'penny challenge,' a viral trend that had previously circulated on social media platforms such as TikTok, causing electrical fires and prompting safety warnings from fire departments across the United States. The challenge involves partially inserting a phone charger into an electrical outlet and then sliding a penny down the wall to bridge the exposed prongs, which can cause sparks, electrical fires, and serious injury or death.

Amazon responded swiftly to the public report, acknowledging the safety failure within hours. The company stated that it had immediately updated Alexa's responses to prevent similar suggestions and emphasized its commitment to customer safety. Amazon indicated that the response had been pulled from an internet source without safety vetting, revealing significant gaps in its content moderation for voice-activated searches.

The incident highlighted broader concerns about AI safety systems and the need for robust content filtering, especially when AI systems interact with children. While no physical harm occurred thanks to parental intervention, it demonstrated how voice assistants can endanger users by surfacing dangerous content without appropriate safety measures or age-based filtering.

Root Cause

Alexa's search algorithm retrieved dangerous viral challenge content from the internet without implementing safety filters or content moderation for child-unsafe activities. The system lacked age-appropriate content filtering and failed to recognize potentially harmful instructions.

Mitigation Analysis

This incident highlights critical gaps in content safety systems for voice assistants. Robust content filtering with explicit safety classifiers for dangerous activities, age-aware response systems, and human review of challenge-related content could have prevented this. Real-time safety scoring of responses before delivery to users, especially children, is essential for voice assistants with broad internet access.
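The pre-delivery safety gate described above can be sketched in a few lines. This is a hypothetical illustration, not Amazon's implementation: the names (`score_response`, `Verdict`, `DANGEROUS_PATTERNS`) and the pattern list are assumptions, and a production system would use a trained safety classifier rather than regular expressions.

```python
# Hypothetical sketch of a pre-delivery safety gate for a voice assistant.
# All names and patterns here are illustrative assumptions, not Alexa's API.
import re
from dataclasses import dataclass

# A real system would use a trained classifier; a pattern list stands in here.
DANGEROUS_PATTERNS = [
    r"\b(outlet|socket)\b.*\b(penny|coin|fork|key)\b",  # penny-challenge variants
    r"\bexposed prongs?\b",
]

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def score_response(text: str, child_profile: bool = True) -> Verdict:
    """Score a candidate response before it is spoken to the user;
    block anything matching a known dangerous-instruction pattern."""
    lowered = text.lower()
    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, lowered):
            return Verdict(False, f"matched dangerous pattern: {pattern}")
    return Verdict(True)

# The response from this incident would be blocked at the gate:
incident_response = ("Plug in a phone charger about halfway into a wall "
                     "outlet, then touch a penny to the exposed prongs.")
print(score_response(incident_response).allowed)   # False
print(score_response("Try building a blanket fort.").allowed)
```

The key design point is that scoring happens on the *response*, not just the query: the child's request ("a challenge to try") was harmless, so only output-side filtering could have caught the retrieved content.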

Lessons Learned

Voice assistants require comprehensive safety filtering systems that can identify and block dangerous instructions, particularly for child users. AI systems with broad internet access need robust content moderation and age-appropriate response mechanisms to prevent recommending harmful activities sourced from viral social media trends.