South Korea's Deepfake Pornography Crisis Prompts Emergency Legislation with 7-Year Penalties
Critical
A massive deepfake pornography crisis affecting over 30,000 South Korean students prompted emergency legislation in 2025, with prison sentences of up to seven years for creating non-consensual deepfakes.
Category
Deepfake / Fraud
Industry
Education
Status
Ongoing
Date Occurred
Aug 1, 2024
Date Reported
Aug 28, 2024
Jurisdiction
South Korea
AI Provider
Other/Unknown
Application Type
other
Harm Type
reputational
People Affected
30,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
National Assembly of South Korea
Tags
deepfake, non-consensual, pornography, education, students, legislation, South Korea, victims, emergency_law
Full Description
In August 2024, South Korea experienced an unprecedented deepfake pornography crisis that fundamentally changed the country's approach to AI regulation. The crisis began when investigators discovered extensive networks of students creating and sharing non-consensual sexually explicit deepfake content of female classmates through encrypted messaging apps, particularly Telegram. The scandal initially emerged from reports at several high schools in Seoul but quickly expanded nationwide as authorities uncovered the systematic nature of the abuse.
The scale of victimization was staggering: police estimates suggested that over 30,000 students, predominantly female, had been targeted across the country. Perpetrators, primarily male students, used readily available AI tools to superimpose classmates' faces onto pornographic content. These images and videos were then shared in group chats with hundreds of participants, normalizing sexual exploitation. The psychological impact on victims was severe, with many reporting depression, anxiety, and withdrawal from school activities.
The crisis exposed critical gaps in South Korea's legal framework, as existing laws were inadequate to address the rapid proliferation of AI-generated content. Law enforcement struggled with the technical complexity of investigations, the cross-border nature of some platforms, and the sheer volume of cases. The education system was also unprepared, lacking both technical tools to detect such content and protocols to support victims.
In response to mounting public pressure and international attention, the South Korean National Assembly fast-tracked emergency legislation in early 2025. The new AI Content Regulation Act established some of the world's strictest penalties for creating, distributing, or possessing non-consensual deepfake content, with sentences up to seven years in prison and fines reaching 50 million won. The legislation also mandated platform liability, requiring social media and messaging services to implement detection systems and report violations within 24 hours. Educational institutions were required to implement comprehensive digital citizenship programs and victim support services.
Root Cause
Widespread misuse of accessible deepfake technology by students to create non-consensual sexually explicit content of female peers, facilitated by inadequate legal frameworks and enforcement mechanisms.
Mitigation Analysis
Enhanced digital literacy education, mandatory AI ethics training in schools, real-time content monitoring on messaging platforms, and robust age verification systems could have reduced the scale. Proactive detection algorithms and immediate reporting mechanisms would have enabled faster response and reduced victim exposure.
Lessons Learned
The incident demonstrated that accessible AI technology can be weaponized for systematic abuse at scale, particularly in educational settings. It highlighted the critical need for proactive legal frameworks that anticipate technological capabilities rather than react to crises.
Sources
South Korea investigates deepfake porn of students
BBC · Aug 30, 2024 · news
South Korea to investigate deepfake sex crimes targeting students
Reuters · Aug 28, 2024 · news