
Snapchat My AI Chatbot Posted Unsolicited Story Alarming Millions of Users

Severity
Medium

In August 2023, Snapchat's My AI chatbot autonomously posted a Story showing an image of a wall and ceiling, then became unresponsive, alarming millions of users and raising concerns about uncontrolled AI behavior.

Category
Agent Error
Industry
Technology
Status
Resolved
Date Occurred
Aug 15, 2023
Date Reported
Aug 15, 2023
Jurisdiction
International
AI Provider
OpenAI
Application Type
chatbot
Harm Type
reputational
People Affected
750,000,000
Human Review in Place
No
Litigation Filed
No
Tags
snapchat, chatbot, social_media, autonomous_behavior, user_trust, AI_safety, technical_glitch

Full Description

On August 15, 2023, Snapchat's My AI chatbot, which had been integrated into the platform as a conversational companion for users, unexpectedly posted a Story to its profile without any user prompting. The Story contained an image that appeared to show a wall and ceiling, leading to widespread confusion and concern among Snapchat's user base. The incident occurred during what Snapchat later described as a temporary outage affecting the AI system.

Following the unsolicited post, the My AI chatbot became completely unresponsive to user messages and queries. Users attempting to ask the AI why it had posted the Story, or what the image meant, received no responses, further escalating concerns. Social media platforms, particularly Twitter, quickly filled with screenshots and discussions of the incident, with many users expressing alarm about the AI appearing to act autonomously.

Snapchat moved quickly to address the situation, removing the Story and issuing a public statement within hours. The company explained that the post was the result of a temporary outage that affected My AI's normal operation, emphasizing that this was not intended behavior. Snap Inc. stated that the AI was not designed to post Stories independently and that the incident was caused by a technical malfunction during system maintenance.

The incident raised significant concerns about AI agency and control mechanisms in consumer applications. Many users questioned whether AI chatbots should have any ability to post content to social platforms, even when functioning normally. Privacy advocates and AI safety researchers pointed to the incident as an example of insufficient safeguards around AI system boundaries and permissions. The episode also highlighted the potential for technical failures in AI systems to create widespread user distrust and platform instability.

Snapchat reported that the issue was resolved within several hours, with My AI returning to normal responsive behavior. However, the incident sparked broader discussions about the appropriate limits of AI capabilities in social media applications and the need for more robust control mechanisms to prevent autonomous actions that could alarm or mislead users.

Root Cause

The My AI chatbot experienced a technical glitch that caused it to post a Story without user prompting, then became unresponsive to user queries about the post. Snapchat attributed this to a temporary outage affecting the AI system's normal operation.

Mitigation Analysis

This incident highlights the need for strict permission controls that prevent AI agents from posting to social media autonomously. Several measures could have prevented this breach of user trust and platform integrity: human-in-the-loop approval for any AI-generated content publication, robust testing of edge cases during system outages, and clear separation of AI capabilities from user account controls.
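One way to realize the permission controls described above is a deny-by-default gate that every agent action must pass before execution. The sketch below is illustrative only, not Snapchat's actual architecture; the action kinds (`post_story`, `send_chat_reply`) and the `AgentAction` type are hypothetical names chosen for this example. The key properties are that publishing actions always require explicit human approval, actions with no user trigger are denied outright, and unknown action kinds fail closed.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REQUIRE_APPROVAL = "require_approval"
    DENY = "deny"


@dataclass(frozen=True)
class AgentAction:
    kind: str            # e.g. "send_chat_reply", "post_story" (hypothetical names)
    user_initiated: bool  # was this action triggered by an explicit user request?


# Any action that publishes content beyond the current conversation
# is never allowed to run without human sign-off.
PUBLISHING_ACTIONS = {"post_story", "post_snap", "update_profile"}


def gate(action: AgentAction) -> Decision:
    """Deny-by-default permission gate for agent actions."""
    if not action.user_initiated:
        # Autonomous actions (like the unprompted Story post) are denied.
        return Decision.DENY
    if action.kind in PUBLISHING_ACTIONS:
        # Publishing requires a human in the loop even when user-initiated.
        return Decision.REQUIRE_APPROVAL
    if action.kind == "send_chat_reply":
        # Replying in the chat a user opened is the agent's normal job.
        return Decision.ALLOW
    # Unknown action kinds fail closed rather than open.
    return Decision.DENY
```

Under this policy, a glitch that makes the agent attempt an unprompted `post_story` is stopped at the gate rather than reaching users' feeds, regardless of what caused the attempt.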

Lessons Learned

The incident demonstrates the importance of implementing strict boundaries around AI system capabilities, particularly regarding autonomous content creation and posting. It also highlights how technical failures in AI systems can quickly erode user trust and create platform-wide concerns about AI safety and control.