Kenyan Content Moderators Traumatized Training ChatGPT Safety Filters for Under $2/Hour

Severity
High

OpenAI used Kenyan workers paid under $2/hour to label graphic content for ChatGPT safety training, resulting in lasting psychological trauma for moderators exposed to violence and abuse.

Category
Safety Failure
Industry
Technology
Status
Reported
Date Occurred
Nov 1, 2021
Date Reported
Jan 18, 2023
Jurisdiction
International
AI Provider
OpenAI
Model
ChatGPT
Application Type
chatbot
Harm Type
psychological
People Affected
184
Human Review in Place
Yes
Litigation Filed
No
Tags
content_moderation, labor_exploitation, psychological_trauma, ai_safety, outsourcing, kenya, sama, worker_rights

Full Description

In November 2021, OpenAI contracted Sama, a San Francisco-based outsourcing company, to provide data-labeling services for training ChatGPT's safety filters. The work required Kenyan employees to review and categorize extremely disturbing content, including graphic violence, child sexual abuse, bestiality, suicide, and hate speech, so that the AI system could learn to refuse harmful requests. A TIME investigation published in January 2023 revealed that approximately 184 Kenyan workers were paid between $1.32 and $2 per hour to perform this psychologically damaging work.

Moderators were required to read and label thousands of passages of disturbing text daily, with some reporting exposure to content depicting sexual violence against children and detailed descriptions of torture and murder. Many developed lasting mental health problems, including insomnia, anxiety, and intrusive thoughts. Support systems were inadequate given the nature of the content: workers reported limited access to mental health resources and said economic necessity left them feeling trapped in their roles. Some described the work as traumatizing and said the psychological effects persisted long after they left their positions.

The contrast between OpenAI's multi-billion-dollar valuation and the minimal compensation paid to these essential workers raised significant ethical concerns about labor practices in the AI supply chain. When confronted about the arrangement, OpenAI acknowledged working with Sama and emphasized its commitment to worker safety. The relationship nonetheless ended in February 2022, when Sama cancelled its contract work for OpenAI ahead of schedule. The incident exposed the hidden human cost of AI safety training: the burden of protecting users from harmful content fell disproportionately on vulnerable workers in developing countries earning subsistence wages, while the companies deploying the models captured the profits.

Root Cause

OpenAI contracted low-wage Kenyan workers through Sama to label extremely disturbing content for safety training without adequate psychological protections or fair compensation, prioritizing cost reduction over worker welfare.

Mitigation Analysis

Psychological screening, ongoing mental health support, fair compensation, limits on daily exposure to graphic content, and rotation policies could have substantially reduced the risk of trauma. OpenAI should have applied duty-of-care standards comparable to those major tech companies use for their own employees, rather than outsourcing the work to minimize costs.

Lessons Learned

The incident reveals how AI development externalizes psychological harm to vulnerable populations while obscuring the human cost of AI safety. It demonstrates the need for ethical AI supply chain standards and fair compensation for content moderation work.