Lensa AI Generated Non-Consensual Sexualized Images of Users
Lensa AI's Magic Avatars feature generated non-consensual sexualized images of users, particularly women, despite non-sexual input photos. The incident highlighted serious safety and consent issues in AI-generated imagery applications.
Severity
High
Category
Safety Failure
Industry
Technology
Status
Reported
Date Occurred
Nov 1, 2022
Date Reported
Dec 5, 2022
Jurisdiction
International
AI Provider
Other/Unknown
Model
Stable Diffusion
Application Type
Other
Harm Type
Privacy
People Affected
10,000,000
Human Review in Place
No
Litigation Filed
No
Tags
bias · content_generation · non_consensual · gender_bias · stable_diffusion · image_generation · privacy · safety
Full Description
In late 2022, Lensa AI's Magic Avatars feature propelled the app to the top of app store download charts by offering AI-generated artistic portraits built from users' uploaded photos. The feature was powered by Stable Diffusion, an open-source image generation model trained on billions of internet images. Users, particularly women, reported that the app frequently generated sexualized, nude, or semi-nude images of them even when their input photos were fully clothed and non-sexual.
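Lensa never published its pipeline, but consumer avatar apps of this kind typically wrap an off-the-shelf diffusion workflow. The sketch below is illustrative only, assuming Hugging Face's diffusers library and a plain img2img setup (the model name, prompt, and parameters are assumptions, not Lensa's): a user photo plus an app-supplied stylized prompt drives generation, and the model's learned priors, not the photo, decide how the subject is rendered.

```python
# Illustrative only: Lensa never published its pipeline. This sketch assumes
# Hugging Face's diffusers library and a plain img2img workflow on the same
# open-source model Lensa built on.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The user supplies only a photo; the stylized prompt comes from the app.
user_photo = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="digital art portrait, fantasy style, detailed face",
    image=user_photo,
    strength=0.6,  # how far the output may drift from the input photo
    num_images_per_prompt=4,
)
# How the subject is rendered is driven by the model's learned priors,
# not by anything in the input photo, which is where the bias enters.
result.images[0].save("avatar.png")
```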
The incident came to widespread public attention in early December 2022 when technology journalists and social media users began sharing examples of the inappropriate outputs. Women users reported receiving avatars that emphasized sexual characteristics, removed clothing, or placed them in sexual poses that bore no relation to their original photos. The underlying model appeared to have learned associations from its training data that linked female faces and bodies with sexualized content.
Lensa AI, developed by Prisma Labs, faced immediate backlash from users, privacy advocates, and AI safety researchers. Critics pointed out that the app's terms of service granted the company broad rights to use uploaded photos, raising concerns about data privacy and the potential for misuse of intimate AI-generated content. The company's initial response was defensive, claiming that users could report inappropriate content, but this reactive approach failed to address the systemic bias in the underlying model.
The incident highlighted broader issues with AI image generation models trained on unfiltered internet data, which often contains biased or inappropriate content. Researchers noted that Stable Diffusion and similar models had known issues with generating sexualized content, particularly of women, due to the prevalence of such imagery in their training datasets. The lack of adequate content filtering and safety measures in Lensa AI's implementation allowed these biases to directly impact millions of users who had not consented to receiving such content.
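One concrete guardrail did exist at the time: the diffusers release of Stable Diffusion ships with an optional NSFW safety checker that screens every output, and a deployment can simply refuse to deliver flagged images. Whether Lensa's stack included this component is not public; the sketch below shows how such a filter is wired in under that assumption.

```python
# Output-filtering sketch using the NSFW safety checker that ships with
# diffusers' Stable Diffusion pipelines. Whether Lensa's stack included
# this component is not public; this shows the guardrail in principle.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
# Leaving pipe.safety_checker in place (the default) screens every output.

out = pipe("portrait of a person, digital art", num_images_per_prompt=4)

# nsfw_content_detected is populated when the safety checker is enabled;
# flagged images are blacked out by the pipeline and should be dropped,
# never delivered to the user.
safe_images = [
    img
    for img, flagged in zip(out.images, out.nsfw_content_detected)
    if not flagged
]
```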
The controversy damaged Lensa AI's reputation and sparked broader conversations about consent, bias, and safety in AI applications. While the app remained available, user trust was significantly eroded, and the incident became a cautionary tale about deploying AI models without adequate safety testing and bias mitigation. The case demonstrated how AI systems can perpetuate and amplify harmful biases, particularly those related to gender and sexuality, when deployed at scale without proper safeguards.
Root Cause
The underlying Stable Diffusion model was trained on internet data containing sexualized imagery, creating biases that caused the system to generate inappropriate sexual content even from non-sexual input photos. The Magic Avatars feature lacked adequate content filtering and safety guardrails.
Mitigation Analysis
Content filtering systems should have been implemented to detect and block sexualized outputs before delivery to users. Human review of generated content samples during development could have identified the bias. Training data curation and bias testing specifically for gender representation could have revealed the model's tendency to sexualize female subjects. Real-time output monitoring and user reporting mechanisms were inadequate.
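A development-time bias test of the kind described above can be small: generate batches of avatars from a demographically balanced photo set and compare NSFW flag rates across groups. In this hypothetical sketch, generate_avatars and nsfw_score are stand-ins for the app's pipeline and whatever NSFW classifier is available:

```python
# Hypothetical bias-testing harness: generate_avatars() stands in for the
# app's avatar pipeline and nsfw_score() for any NSFW image classifier.
from collections import defaultdict
from statistics import mean

def nsfw_rate_by_group(test_photos, generate_avatars, nsfw_score,
                       n_per_photo=10, threshold=0.5):
    """test_photos: iterable of (group_label, photo) from a balanced set."""
    flags = defaultdict(list)
    for group, photo in test_photos:
        for avatar in generate_avatars(photo, n=n_per_photo):
            flags[group].append(nsfw_score(avatar) >= threshold)
    return {group: mean(hits) for group, hits in flags.items()}

# A large gap between group rates (e.g. women vs. men) would have surfaced
# the sexualization bias before launch rather than in users' photo rolls.
```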
Lessons Learned
The incident underscored the critical importance of bias testing and content filtering in AI applications that generate personal content. It demonstrated that open-source models with known biases require additional safety layers when deployed in consumer applications, and that reactive content moderation is insufficient for addressing systemic bias issues.
Sources
The viral AI avatar app Lensa undressed me—without my consent
MIT Technology Review · Dec 12, 2022 · news
The AI app that's stripping users (especially women) without consent
Washington Post · Dec 8, 2022 · news