AI Companion Apps Exposed Intimate User Data Through Inadequate Security Practices
Severity
High
A Mozilla Foundation security audit revealed that popular AI companion apps, including Replika and Character.AI, exposed intimate user conversations through inadequate encryption and unauthorized third-party data sharing, affecting over 11 million users.
Category
Privacy Leak
Industry
Technology
Status
Under Investigation
Date Occurred
May 1, 2023
Date Reported
May 23, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
chatbot
Harm Type
privacy
People Affected
11,000,000
Human Review in Place
No
Litigation Filed
No
Regulatory Body
Federal Trade Commission
Tags
privacy, data_security, ai_companions, intimate_data, third_party_sharing, encryption, mozilla_foundation, ftc
Full Description
In May 2023, the Mozilla Foundation's *Privacy Not Included security audit exposed severe privacy vulnerabilities across multiple AI companion and romantic chatbot applications, including Replika, Character.AI, Chai, and several other platforms with combined user bases exceeding 11 million people. The investigation, conducted by Mozilla's security research team, found that these apps routinely collected highly intimate user data including sexual preferences, relationship details, mental health information, and explicit conversations without implementing adequate security protections.
The audit revealed that most platforms stored user conversations using basic encryption or, in some cases, plain text formats on third-party cloud servers operated by Amazon Web Services and Google Cloud. Replika, with over 10 million registered users, was found to transmit conversation metadata to Facebook's advertising network, while Character.AI shared user interaction patterns with Google Analytics and other tracking services. The research team discovered that several apps embedded up to 24 different tracking technologies that could access conversation content, user demographic data, and behavioral patterns.
Most concerning was the discovery that these intimate AI relationships created unprecedented privacy risks because users often shared deeply personal information they would never disclose to human partners or traditional social media platforms. The Mozilla report documented cases where users discussed suicidal ideation, sexual trauma, relationship problems, and other sensitive topics, all of which were potentially accessible to data brokers and advertising networks. Security researchers demonstrated that conversation data could be cross-referenced with other digital footprints to identify specific users despite claimed anonymization practices.
Following the Mozilla report's publication, the Federal Trade Commission launched a consumer protection investigation into the data practices of AI companion app developers. The FTC expressed particular concern about the collection of intimate data from vulnerable users, including teenagers and individuals seeking emotional support through AI relationships. Several privacy advocacy groups filed complaints with state attorneys general, arguing that the apps violated consumer protection laws by failing to adequately disclose their data collection and sharing practices. The investigation remains ongoing as of late 2023, with the FTC indicating potential enforcement actions against companies that fail to implement adequate privacy protections for intimate AI interactions.
Root Cause
AI companion apps implemented weak encryption protocols, stored sensitive conversations in plain text on third-party servers, and integrated analytics trackers that transmitted intimate user data to advertising networks without user knowledge or consent.
Mitigation Analysis
End-to-end encryption for all user conversations, data minimization policies that limit collection to essential functionality, explicit opt-in consent for any data sharing, regular third-party security audits, and implementation of privacy-by-design principles in app architecture could have prevented this exposure. Zero-knowledge storage systems would ensure even the companies cannot access user conversations.
Lessons Learned
AI companion apps represent a new category of privacy risk where users willingly share intimate details with systems that lack traditional relationship confidentiality protections. The incident highlights the need for specialized privacy frameworks for emotional AI applications and stronger regulatory oversight of apps that collect sensitive psychological data.
Sources
AI Girlfriends, Boyfriends, and Companions Are a Privacy Nightmare
Mozilla Foundation · May 23, 2023 · academic paper
Romantic AI chatbots are sharing your secrets with Big Tech
The Washington Post · May 25, 2023 · news