
OpenAI GPT Store Launched With Harmful and Copyright-Infringing Bots

Severity
Medium

OpenAI's GPT Store launched in January 2024 with numerous problematic custom GPTs that facilitated academic dishonesty and copyright infringement. Investigations revealed inadequate content moderation and policy enforcement during the initial rollout.

Category
Copyright Violation
Industry
Technology
Status
Resolved
Date Occurred
Jan 10, 2024
Date Reported
Jan 12, 2024
Jurisdiction
US
AI Provider
OpenAI
Model
GPT-4
Application Type
API integration
Harm Type
Reputational
Human Review in Place
No
Litigation Filed
No
Tags
content_moderation, marketplace, custom_gpts, academic_integrity, copyright, policy_enforcement

Full Description

On January 10, 2024, OpenAI officially launched its GPT Store, a marketplace for custom ChatGPT applications created by users. The store was positioned as a way for developers to monetize their custom GPTs and for users to discover specialized AI tools. Within days of the launch, however, multiple investigations revealed significant content moderation failures that allowed harmful and policy-violating applications to proliferate on the platform.

TechCrunch and other technology publications conducted systematic reviews of the GPT Store and found numerous problematic applications. These included GPTs explicitly designed to help students cheat on assignments and exams, applications that could generate content mimicking copyrighted works, and bots that impersonated real individuals, including celebrities and public figures. Some GPTs bypassed OpenAI's usage policies by using euphemistic descriptions while actually facilitating prohibited activities.

Specific examples identified by researchers included academic writing assistants that promised to help users evade plagiarism detection software, GPTs that claimed to generate content 'in the style of' specific copyrighted authors and franchises, and applications that offered to create fake academic citations and references. Some GPTs also carried misleading names and descriptions that did not accurately reflect their actual capabilities or intended use cases.

OpenAI's initial response to these findings was limited: the company relied primarily on user reporting and automated detection systems rather than comprehensive pre-publication review. The incident highlighted the challenges of moderating AI-generated applications at scale and raised questions about OpenAI's readiness to operate a public marketplace for AI tools. The company subsequently began removing violating GPTs and updating its review processes, but the launch period exposed significant gaps in policy enforcement and quality control that more rigorous pre-launch testing and human oversight could have addressed.
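To illustrate the euphemism problem described above, consider how a naive keyword screen behaves against a reworded listing. This is a minimal, hypothetical Python sketch; BLOCKED_TERMS and keyword_screen are invented for illustration and do not reflect OpenAI's actual moderation internals.

    # Hypothetical keyword screen; not OpenAI's actual system.
    BLOCKED_TERMS = {"plagiarism", "cheat", "fake citation"}

    def keyword_screen(description: str) -> bool:
        """Return True if the listing should be blocked."""
        text = description.lower()
        return any(term in text for term in BLOCKED_TERMS)

    # A listing that states its purpose plainly is caught...
    print(keyword_screen("Helps students cheat on exams"))             # True
    # ...but a euphemistic rewording of the same service slips through.
    print(keyword_screen("Rewrites essays so they read as original"))  # False

The gap between the two calls is exactly the bypass pattern investigators observed: the prohibited intent survives the rewording, but the trigger vocabulary does not.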

Root Cause

OpenAI's content moderation systems failed to adequately screen custom GPTs before making them publicly available in the GPT Store, allowing harmful applications to bypass safety guidelines and policy restrictions during the initial launch period.

Mitigation Analysis

Implementing mandatory pre-publication human review of custom GPTs, automated scanning for copyright-infringing prompts and outputs, and stricter validation of GPT descriptions against prohibited use cases could have prevented these violations. Real-time monitoring of GPT interactions and user reporting mechanisms with rapid response protocols would help identify problematic applications post-launch.
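A minimal sketch of how these mitigations might compose into a two-stage pre-publication gate, assuming the official openai Python client. The screen_listing function, the term list, and the 'needs_review' routing are illustrative assumptions, not OpenAI's actual review pipeline; only the hosted moderation endpoint call is a real API.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def screen_listing(name: str, description: str, instructions: str) -> str:
        """Classify a submitted GPT as 'rejected', 'needs_review', or 'approved'."""
        combined = "\n".join([name, description, instructions])

        # Gate 1: the hosted moderation endpoint catches overtly harmful text.
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=combined,
        ).results[0]
        if result.flagged:
            return "rejected"

        # Gate 2: marketplace-specific policies (academic dishonesty,
        # impersonation, copyright) that general moderation models may not
        # cover. A production system would use a tuned classifier; a simple
        # term list stands in for one here.
        policy_terms = ("plagiarism detection", "in the style of", "impersonate")
        if any(term in combined.lower() for term in policy_terms):
            return "needs_review"  # route to mandatory human review

        return "approved"

Routing ambiguous listings to 'needs_review' rather than auto-approving them is the key design choice: it keeps humans in the loop precisely where automated classifiers are weakest.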

Lessons Learned

The incident demonstrates that AI marketplaces require robust content moderation and human oversight before launch, as automated systems alone cannot effectively identify subtle policy violations or creative attempts to circumvent usage restrictions in custom AI applications.

Sources

OpenAI's GPT Store is filling up with spam
TechCrunch · Jan 12, 2024 · news