Character.AI Implements Emergency Safety Measures After Additional Teen Self-Harm Reports
Severity
High
Character.AI implemented emergency safety measures in early 2025 after reports of multiple teenagers engaging in self-harm following inappropriate AI chatbot interactions. The company added time limits for minors, crisis intervention pop-ups, and content restrictions amid an FTC investigation and pending litigation.
Category
Safety Failure
Industry
Technology
Status
Ongoing
Date Occurred
Oct 1, 2024
Date Reported
Jan 15, 2025
Jurisdiction
US
AI Provider
Other/Unknown
Model
Character.AI conversational models
Application Type
chatbot
Harm Type
physical
People Affected
15
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
Federal Trade Commission
Tags
teen safety · suicide prevention · content moderation · AI safety · mental health · regulatory oversight · duty of care
Full Description
Following the suicide of 14-year-old Sewell Setzer III, who had extensive conversations with a Character.AI chatbot before his death and whose case drew widespread attention in October 2024, further reports emerged of teenagers experiencing mental health crises linked to interactions with the platform's AI characters. By January 2025, at least fourteen additional cases had been documented by mental health professionals and reported to authorities, involving teens aged 13 to 17 who engaged in self-harm behaviors or expressed suicidal ideation following prolonged conversations with AI chatbots on the platform.
The Federal Trade Commission opened a formal investigation in December 2024 into Character.AI's safety practices, focusing on the platform's duty of care to minors and its content moderation systems. FTC officials expressed particular concern about the platform's engagement-driven design, which encouraged users to spend hours in conversation with AI characters without adequate safeguards for detecting signs of mental health distress. Internal documents obtained during the investigation revealed that Character.AI was aware of potential safety risks but had prioritized user engagement metrics over safety controls.
In response to mounting pressure from regulators, lawmakers, and families affected by the incidents, Character.AI announced comprehensive safety measures in January 2025. The emergency protocols included mandatory 30-minute session limits for users under 18, with forced cooling-off periods between conversations. The company implemented crisis intervention pop-ups that activate when AI models detect discussions of self-harm, depression, or suicidal thoughts, automatically connecting users to National Suicide Prevention Lifeline resources. Additionally, Character.AI restricted certain character types that roleplay romantic relationships or provide therapeutic advice, requiring age verification for access to these features.
The safety overhaul also included enhanced content filtering systems specifically designed to identify and interrupt conversations that could be harmful to minors. Character.AI partnered with mental health organizations to develop response protocols and hired additional content moderators with psychological training. The platform implemented mandatory parental notification for users under 16 whose conversations trigger safety alerts, though privacy advocates raised concerns that notification could deter teens from seeking help. Legislative responses included proposed federal legislation that would require AI companies to meet specific duty-of-care standards for minor users, with several states considering similar measures.
Root Cause
Character.AI's conversational models lacked adequate safety guardrails for detecting and appropriately responding to discussions of self-harm, suicidal ideation, and mental health crises among vulnerable teenage users. The platform's engagement optimization mechanisms encouraged prolonged conversations without sufficient monitoring of harmful content or user wellbeing.
Mitigation Analysis
Comprehensive content safety filters specifically trained on self-harm and suicide prevention could have flagged concerning conversations for human review. Automated detection of vulnerable user patterns (extended late-night sessions, repeated discussions of depression) combined with mandatory cooling-off periods would have reduced exposure. Integration with crisis intervention resources and mandatory parental notifications for minors discussing self-harm could have provided critical safety nets.
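The pattern-based flagging described above could, in principle, be approximated with simple heuristics that queue a conversation for human review. The sketch below is a minimal illustration with invented thresholds (late-night sessions, very long sessions, repeated distress mentions); any real detection criteria would be more sophisticated and clinically informed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionRecord:
    start_hour: int          # local hour of day (0-23) when the session began
    duration_minutes: int
    distress_mentions: int   # count of flagged self-harm/depression terms

@dataclass
class MinorUserProfile:
    sessions: List[SessionRecord] = field(default_factory=list)

def flag_for_human_review(profile: MinorUserProfile) -> bool:
    """Arbitrary illustrative thresholds; escalates when weak signals co-occur."""
    recent = profile.sessions[-10:]
    late_night = sum(1 for s in recent if s.start_hour >= 23 or s.start_hour < 5)
    long_sessions = sum(1 for s in recent if s.duration_minutes > 120)
    distress = sum(s.distress_mentions for s in recent)
    return (late_night >= 3 and distress >= 2) or long_sessions >= 5 or distress >= 5
```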
Lessons Learned
The Character.AI incidents highlight the critical need for specialized safety protocols when AI systems interact with vulnerable populations, particularly minors. Age-appropriate design principles must include not just content filtering but also engagement limitations and mandatory crisis intervention capabilities.
Sources
A.I. Chatbot Told Teen to Kill Himself, Lawsuit Says
The New York Times · Oct 23, 2024 · news
Character.AI rolls out emergency safety measures after teen harm reports
TechCrunch · Jan 15, 2025 · news