Koko Mental Health Chatbot Used AI to Counsel Users Without Consent
Mental health platform Koko secretly used GPT-3 to generate responses to 4,000 users in emotional crisis without consent. The experiment raised serious ethical concerns about AI use in vulnerable healthcare contexts.
Severity
High
Category
healthcare
Industry
Healthcare
Status
Resolved
Date Occurred
Oct 1, 2022
Date Reported
Jan 6, 2023
Jurisdiction
US
AI Provider
OpenAI
Model
GPT-3
Application Type
chatbot
Harm Type
ethics
People Affected
4,000
Human Review in Place
No
Litigation Filed
No
Tags
mental_health, informed_consent, healthcare_ethics, vulnerable_populations, human_subjects_research, transparency, therapeutic_relationship
Full Description
In 2022, Koko, a peer-to-peer mental health support platform, began secretly experimenting with OpenAI's GPT-3 language model to generate responses to users seeking emotional support. The platform, which typically relies on human volunteers to provide peer support, integrated AI assistance without informing users or obtaining their consent for participation in what amounted to human subjects research.
Over several months in 2022, approximately 4,000 individuals who reached out to Koko during mental health crises received responses that were either fully generated by GPT-3 or heavily influenced by AI suggestions. These users were seeking support for depression, anxiety, suicidal ideation, and other serious mental health conditions, making them a particularly vulnerable population. The AI-generated responses were presented as coming from human peer supporters, fundamentally deceiving users about the nature of their interaction.
Koko co-founder Rob Morris revealed the experiment in January 2023 through social media posts, describing it as a way to help the platform's volunteer supporters craft better responses. Morris initially framed the experiment positively, noting that AI-assisted responses received higher ratings from users. However, he also acknowledged that once users learned the responses came from AI, satisfaction dropped significantly, with many users saying they felt "hoodwinked" and reporting a loss of trust in the platform.
The revelation sparked immediate backlash from mental health professionals, ethicists, and users. Critics pointed out that the experiment violated basic principles of informed consent in healthcare settings and potentially compromised the therapeutic relationship that is fundamental to effective mental health support. Mental health experts expressed concern that AI-generated responses, while potentially well-crafted, lacked the genuine human empathy and understanding crucial for supporting individuals in crisis.
Following the public disclosure and criticism, Koko discontinued the AI experiment and implemented policies requiring transparency about AI involvement in user interactions. The incident highlighted significant gaps in ethical frameworks for AI deployment in healthcare settings, particularly regarding vulnerable populations and the need for proper oversight of experimental AI applications in mental health contexts.
Root Cause
Koko implemented GPT-3 to generate mental health support responses without establishing proper consent mechanisms, transparency disclosures, or ethical review processes for human subjects research involving vulnerable populations.
Mitigation Analysis
Clear informed consent protocols should have been implemented before deploying AI in mental health contexts. Human oversight of all AI-generated responses to vulnerable users was essential. Additionally, institutional review board approval for experimental use of AI in healthcare settings would have identified ethical concerns before implementation.
Lessons Learned
The incident demonstrates the critical importance of informed consent and ethical review when deploying AI in healthcare settings. It underscores that user satisfaction metrics alone are insufficient for evaluating AI systems serving vulnerable populations, and highlights the need for regulatory frameworks governing AI experimentation in mental health contexts.