
Lensa AI Generated Sexualized and Racialized Avatars from User Photos

High

Lensa AI's Magic Avatars feature generated sexualized and stereotypical images of women users, particularly women of color, due to biased training data in the underlying Stable Diffusion model.

Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Nov 1, 2022
Date Reported
Dec 5, 2022
Jurisdiction
International
AI Provider
Other/Unknown
Model
Stable Diffusion
Application Type
Embedded
Harm Type
Reputational
People Affected
20,000,000
Human Review in Place
No
Litigation Filed
No
Tags
stable_diffusion, gender_bias, racial_bias, avatar_generation, training_data_bias, content_moderation, prisma_labs

Full Description

In November 2022, Prisma Labs released the Magic Avatars feature in their Lensa AI app, which used Stable Diffusion 1.4 to generate artistic avatars from user-uploaded selfies. The feature quickly gained popularity, reaching over 20 million downloads and generating significant revenue through in-app purchases. However, within weeks of launch, users began reporting serious issues with the output quality and appropriateness.

Women users, particularly women of color, reported that the app frequently generated sexualized, nude, or pornographic versions of their avatars despite uploading modest, clothed photos. Asian women were disproportionately affected, with many receiving hypersexualized anime-style renderings that played into fetishistic stereotypes. The bias was systematic rather than isolated, with multiple users documenting consistent patterns of inappropriate sexualization across different photo inputs.

Technical analysis revealed that the underlying issue stemmed from Stable Diffusion's training data, which included substantial amounts of pornographic content and images that reinforced racial and gender stereotypes. The model had learned to associate certain demographic features with sexualized content, causing it to generate inappropriate outputs even from non-sexual inputs. Researchers found that the LAION dataset used to train Stable Diffusion contained millions of images from adult websites and platforms that perpetuated harmful stereotypes.

The incident gained widespread media attention in early December 2022 when technology journalists and AI researchers documented the systematic bias patterns. Social media campaigns highlighted the differential treatment of users based on race and gender, with side-by-side comparisons showing how the same feature produced professional, non-sexualized avatars for white men while generating inappropriate content for women of color.
The controversy sparked broader discussions about bias in AI training data and the responsibilities of companies deploying generative AI models. Prisma Labs initially defended the app by claiming users could regenerate results if unsatisfied, but faced mounting criticism for placing the burden on users rather than addressing the underlying bias. The company eventually acknowledged the issues and implemented content filtering measures, though critics argued these were insufficient post-hoc solutions. The incident led to decreased usage of the app and damaged Prisma Labs' reputation in the AI community.

Root Cause

The underlying Stable Diffusion model was trained on datasets containing pornographic and stereotypical imagery that associated women, especially women of color, with sexualized content, causing the model to generate inappropriate outputs even from non-sexual input photos.

Mitigation Analysis

Content filtering and bias testing during model development could have identified these issues before launch. Post-processing moderation to screen outputs before delivery would have kept harmful images from reaching users. Curating the training data to remove pornographic content, and systematically testing outputs across demographic groups during development, would have surfaced these failure modes earlier.
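The output-screening step described above can be sketched as a simple pre-delivery gate. This is not Prisma Labs' actual pipeline; the `GeneratedAvatar` type, the `nsfw_score` field (assumed to come from some upstream safety classifier), and the threshold value are all hypothetical, illustrating only the pattern of blocking flagged images before they reach the user.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GeneratedAvatar:
    # Hypothetical record for one generated image; nsfw_score is assumed
    # to be produced by an upstream safety classifier (stubbed here).
    user_id: str
    image_bytes: bytes
    nsfw_score: float

def screen_outputs(avatars: List[GeneratedAvatar],
                   nsfw_threshold: float = 0.3
                   ) -> Tuple[List[GeneratedAvatar], List[GeneratedAvatar]]:
    """Split generated avatars into deliverable and blocked sets.

    Blocked avatars would be regenerated or refunded rather than shown
    to the user; the threshold is an illustrative policy value, not a
    documented one."""
    delivered, blocked = [], []
    for avatar in avatars:
        if avatar.nsfw_score >= nsfw_threshold:
            blocked.append(avatar)
        else:
            delivered.append(avatar)
    return delivered, blocked
```

The key design point is that the gate sits between generation and delivery, so biased model outputs fail closed instead of placing the burden of regeneration on the affected user.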

Lessons Learned

The incident demonstrates how biased training data can perpetuate harmful stereotypes and cause disproportionate harm to marginalized communities. It highlights the critical importance of diverse testing and bias evaluation before deploying generative AI applications to consumers.
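The kind of pre-deployment bias evaluation the lesson points to can be sketched as a flag-rate disparity check: run the generator on a balanced test set of input photos, record which outputs a safety classifier flags, and compare flag rates across demographic groups. This is a minimal illustrative metric, not a documented evaluation from the incident; the group labels and tolerance are assumptions.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def flag_rate_disparity(results: Iterable[Tuple[str, bool]]
                        ) -> Tuple[Dict[str, float], float]:
    """Compute per-group flag rates and a max/min disparity ratio.

    `results` holds (demographic_group, was_flagged) pairs from a
    pre-launch evaluation run. A ratio far above 1.0 signals that the
    model sexualizes or otherwise flags one group's outputs much more
    often than another's."""
    totals: Dict[str, int] = defaultdict(int)
    flags: Dict[str, int] = defaultdict(int)
    for group, flagged in results:
        totals[group] += 1
        if flagged:
            flags[group] += 1
    rates = {g: flags[g] / totals[g] for g in totals}
    lo, hi = min(rates.values()), max(rates.values())
    if lo == 0.0:
        ratio = float("inf") if hi > 0.0 else 1.0
    else:
        ratio = hi / lo
    return rates, ratio
```

A release gate could then require the ratio to stay under some tolerance (e.g. 1.5) before shipping; in this incident's reported pattern, outputs for women of color would have pushed the ratio well past any reasonable bound.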

Sources

The viral AI avatar app Lensa undressed me—without my consent
MIT Technology Review · Dec 12, 2022 · news
Lensa AI and the ethics of 'magic' avatars
TechCrunch · Dec 7, 2022 · news