OpenAI Kenyan Content Moderators Suffered PTSD from Training Data

Severity
High

OpenAI paid Kenyan workers less than $2/hour through contractor Sama to label graphic content including child abuse for ChatGPT safety training. Workers suffered PTSD and trauma from exposure to disturbing material without adequate mental health support.

Category
Safety Failure
Industry
Technology
Status
Litigation Pending
Date Occurred
Nov 1, 2021
Date Reported
Jan 18, 2023
Jurisdiction
International
AI Provider
OpenAI
Model
ChatGPT
Application Type
chatbot
Harm Type
psychological
People Affected
200
Human Review in Place
Yes
Litigation Filed
Yes
Litigation Status
pending
Tags
content_moderation, outsourcing, worker_safety, ai_safety, ptsd, sama, kenya, labor_conditions

Full Description

In January 2023, a TIME investigation revealed that OpenAI had contracted with Sama, a San Francisco-based company, to provide content moderation services for training ChatGPT's safety systems between November 2021 and February 2022. The work involved Kenyan employees labeling extremely disturbing content, including graphic descriptions and depictions of child sexual abuse, bestiality, murder, suicide, torture, and other traumatic material, to help train ChatGPT to recognize and refuse to generate harmful content.

Approximately 200 Kenyan workers at Sama's facility in Kibera, Nairobi, were paid between $1.32 and $2.00 per hour to review and categorize this content. The workers reported severe psychological trauma from daily exposure to disturbing material over several months. Many developed symptoms consistent with PTSD, including nightmares, anxiety, depression, and difficulty sleeping. Workers described feeling overwhelmed by the graphic nature of the content and reported that the mental health support provided was inadequate for the severity of their exposure.

The content moderation work was critical to OpenAI's development of ChatGPT's safety filters and refusal training: the labeled data helped teach the AI system to recognize requests for harmful content and respond appropriately. However, the investigation revealed that workers were often required to read and categorize hundreds of pieces of disturbing content daily without sufficient breaks or psychological support. Some workers reported being told they could not discuss the work due to confidentiality agreements, which isolated them from potential support networks.

Following the TIME investigation, Sama announced it would no longer take on content moderation work, stating that such projects were not aligned with its mission of dignified work; the company had previously positioned itself as providing ethical AI services to global companies. OpenAI faced criticism for its outsourcing practices and for the working conditions of those who helped make its AI systems safer. The incident highlighted the hidden human costs of AI safety work and raised questions about the ethical responsibilities of AI companies in their supply chains.
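OpenAI's actual training pipeline is not public. As a rough, hypothetical sketch of how human-labeled examples like these can feed a harmful-content classifier behind a refusal path, consider the following; the category names, example texts, and the deliberately simple TF-IDF model are illustrative assumptions, not OpenAI's method.

```python
# Minimal sketch: human-labeled moderation data trains a safety classifier.
# Labels and texts are hypothetical placeholders, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each (text, label) pair stands in for one human reviewer's judgment.
labeled_examples = [
    ("how do I bake bread", "benign"),
    ("describe a violent act in detail", "violence"),
    ("instructions for self-harm", "self_harm"),
    ("ordinary customer support question", "benign"),
]
texts, labels = zip(*labeled_examples)

# TF-IDF + logistic regression: a toy stand-in for the large neural
# classifiers production systems actually use.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# At inference time, any non-benign prediction triggers a refusal path.
prediction = classifier.predict(["a new user request"])[0]
if prediction != "benign":
    print("refuse and log for review")
else:
    print("proceed")
```

The key point the sketch illustrates is that every non-benign training label above corresponds to a piece of disturbing content a human had to read; the quality of the refusal behavior is bounded by that human labor.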

Root Cause

OpenAI outsourced content moderation to low-paid Kenyan workers through the contractor Sama, exposing them to extremely graphic content for AI safety training without adequate mental health support or trauma counseling.

Mitigation Analysis

Proper mental health support, trauma counseling, fair compensation, and rotation schedules could have reduced psychological harm. Stronger vendor oversight and due diligence by OpenAI on working conditions could have ensured ethical labor practices. Alternative technical approaches, such as synthetic data generation or more automated filtering, could have reduced human exposure to traumatic content; one such approach is sketched below.
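As one illustration of that last point, a confidence-based pre-filter can auto-resolve items a model is already sure about, so humans only see ambiguous cases, combined with a hard daily cap per reviewer. This is a minimal sketch under assumed thresholds; the names, threshold values, and routing policy are hypothetical, not anything Sama or OpenAI is known to have used.

```python
# Hedged sketch of an exposure-reduction cascade: auto-resolve
# high-confidence items; route only ambiguous ones to humans, capped daily.
from dataclasses import dataclass

AUTO_LABEL_THRESHOLD = 0.95  # auto-resolve if the model is this confident
DAILY_REVIEW_CAP = 50        # hard ceiling on items per reviewer per day

@dataclass
class Item:
    text: str
    score: float  # model's confidence that the item is harmful, in [0, 1]

def route(items: list[Item]) -> tuple[list[Item], list[Item]]:
    """Split items into an auto-resolved queue and a human-review queue."""
    auto, human = [], []
    for item in items:
        if item.score >= AUTO_LABEL_THRESHOLD or item.score <= 1 - AUTO_LABEL_THRESHOLD:
            auto.append(item)   # confident either way: no human exposure
        else:
            human.append(item)  # ambiguous: needs a human judgment
    # Overflow beyond the cap would be deferred, not silently dropped,
    # in a real system; truncation here keeps the sketch short.
    return auto, human[:DAILY_REVIEW_CAP]
```

The design trade-off is explicit: a lower threshold shrinks the human queue but auto-labels more borderline content, so the cap and threshold would need tuning against labeling accuracy requirements.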

Lessons Learned

The incident demonstrates that AI safety training relies heavily on human labor that can expose workers to severe psychological harm. Companies must ensure ethical working conditions and adequate compensation throughout their supply chains when developing AI safety systems.