
ChatGPT Bug Exposed User Chat Histories and Payment Information

High

In March 2023, a Redis cache bug in ChatGPT exposed chat histories and payment information to unauthorized users. The incident affected approximately 100,000 users and led to temporary service suspension and regulatory scrutiny.

Category
Privacy Leak
Industry
Technology
Status
Resolved
Date Occurred
Mar 20, 2023
Date Reported
Mar 24, 2023
Jurisdiction
International
AI Provider
OpenAI
Model
ChatGPT
Application Type
chatbot
Harm Type
privacy
Estimated Cost
$5,000,000
People Affected
100,000
Human Review in Place
No
Litigation Filed
No
Regulatory Body
Italian Data Protection Authority
privacy_breach, session_management, redis, payment_data, third_party_vulnerability, data_isolation

Full Description

On March 20, 2023, OpenAI's ChatGPT service suffered a significant privacy breach caused by a bug in its underlying infrastructure. Users began reporting that they could see other users' chat conversation titles in their sidebar, indicating a serious cross-user data leak. The issue was first flagged by users on social media, with reports escalating throughout the day as more users observed unauthorized access to others' private conversations.

The technical root cause was identified as a bug in an open-source Redis client library that OpenAI used for caching user session data. Under certain conditions the bug corrupted cached data, causing session information to be mixed between different users. While most affected users saw only chat titles from other accounts, a subset of ChatGPT Plus subscribers experienced a more severe exposure: payment-related information became visible to other users, including full names, email addresses, the last four digits of credit card numbers, credit card expiration dates, and billing addresses.

OpenAI responded by taking ChatGPT offline for several hours on March 20, 2023, to investigate and fix the issue, and published an incident report on March 24. According to that report, approximately 1.2% of ChatGPT Plus subscribers who were active during a specific nine-hour window on March 20 may have had their payment information exposed, and around 100,000 users in total were affected by the chat title visibility issue.

The incident triggered immediate regulatory attention, particularly from the Italian Data Protection Authority (Garante), which had already been scrutinizing ChatGPT's data practices. The regulator demanded detailed explanations about the breach and OpenAI's data protection measures.
The incident also prompted internal security reviews at organizations that had integrated ChatGPT into their workflows, with some temporarily suspending usage pending security assessments. OpenAI implemented several remedial measures following the incident, including upgrading their Redis client library, implementing additional session isolation controls, and enhancing their monitoring systems to detect similar cross-user data access anomalies. The company also directly notified affected ChatGPT Plus subscribers about the payment information exposure and offered credit monitoring services. This incident highlighted the critical importance of secure session management in AI applications and the potential for third-party library vulnerabilities to compromise user privacy at scale.

Root Cause

A bug in ChatGPT's Redis cache implementation caused session data to leak between users, exposing chat conversation titles in the sidebar and, for some ChatGPT Plus users, payment information stored in the system. The issue was triggered by a bug in an open-source Redis client library that corrupted cached user session data.
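One defense-in-depth pattern against this class of cache corruption is to embed the owner's identity inside every cached payload and verify it on read, so a mixed-up entry is discarded rather than served to the wrong user. The sketch below is illustrative only: it uses a plain dict as a stand-in for Redis, and the key schema and field names (`session:<user_id>`, `owner`) are assumptions, not OpenAI's actual implementation.

```python
import json
from typing import Optional

# In-memory stand-in for a Redis cache; in production this would be a
# Redis client. Key and field names here are hypothetical.
cache = {}

def put_session(user_id: str, data: dict) -> None:
    # Embed the owner's id inside the payload so a later read can verify
    # the entry was not mixed up by a faulty client library.
    cache[f"session:{user_id}"] = json.dumps({"owner": user_id, "data": data})

def get_session(user_id: str) -> Optional[dict]:
    raw = cache.get(f"session:{user_id}")
    if raw is None:
        return None
    entry = json.loads(raw)
    # Defense in depth: if the cached owner does not match the requester,
    # treat the entry as corrupt, evict it, and refuse to serve it.
    if entry.get("owner") != user_id:
        cache.pop(f"session:{user_id}", None)
        return None
    return entry["data"]

put_session("alice", {"chat_titles": ["Trip planning"]})
# Simulate the kind of cross-user corruption the client bug caused:
cache["session:bob"] = cache["session:alice"]

print(get_session("alice"))  # {'chat_titles': ['Trip planning']}
print(get_session("bob"))    # None — mismatched owner is rejected
```

The ownership check does not fix the underlying library bug, but it converts a silent cross-user leak into a recoverable cache miss.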

Mitigation Analysis

This incident could have been prevented through proper session isolation testing, data encryption at rest for sensitive payment information, and more rigorous security testing of third-party dependencies like the Redis library. Real-time monitoring for anomalous cross-user data access and implementing zero-trust data access controls would have detected and limited the exposure scope.
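The "session isolation testing" suggested above can be approximated with a concurrency smoke test: many simulated users read and write their own sessions in parallel, and any read that returns another user's data is recorded as a leak. This is a minimal sketch against an in-memory dict standing in for Redis; all names are illustrative, and a real test would run against the actual client library and cache deployment.

```python
import threading

# Toy session-isolation test. The dict is a stand-in for Redis; a real
# test would exercise the production client library under load.
cache = {}
leaks = []

def worker(user_id: str, rounds: int = 200) -> None:
    key = f"session:{user_id}"
    for i in range(rounds):
        cache[key] = {"owner": user_id, "seq": i}
        seen = cache.get(key)
        # Record any read that comes back tagged with a different owner.
        if seen is not None and seen["owner"] != user_id:
            leaks.append((user_id, seen["owner"]))

threads = [threading.Thread(target=worker, args=(f"user{n}",)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert not leaks, f"cross-user session leaks detected: {leaks[:3]}"
print("isolation test passed")
```

The same owner-mismatch counter, exported as a metric instead of a test assertion, doubles as the kind of real-time anomaly monitoring the analysis recommends.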

Lessons Learned

This incident demonstrates how third-party dependencies can introduce significant security vulnerabilities in AI systems, emphasizing the need for comprehensive security testing of all system components. It also highlights the importance of implementing proper data isolation and encryption practices for sensitive user information in cloud-based AI services.

Sources