
Google Search AI Promoted Dangerous COVID-19 Misinformation Including Bleach Injection

Critical

Google's AI-powered search features promoted dangerous COVID-19 misinformation, including purported bleach-injection cures, during the pandemic. The WHO declared an 'infodemic' driven in part by AI amplification of health misinformation, prompting Google to implement emergency content policies.

Category
Medical Error
Industry
Healthcare
Status
Resolved
Date Occurred
Mar 1, 2020
Date Reported
Apr 1, 2020
Jurisdiction
International
AI Provider
Google
Application Type
Other
Harm Type
Physical
People Affected
100,000
Human Review in Place
No
Litigation Filed
No
Regulatory Body
World Health Organization
health_misinformation, covid19, featured_snippets, search_ai, infodemic, medical_advice, content_ranking

Full Description

During the early months of the COVID-19 pandemic in 2020, Google's AI-powered search features, including featured snippets and knowledge panels, systematically surfaced and promoted dangerous health misinformation. The most egregious examples included suggestions to inject or consume bleach as a COVID-19 cure, recommendations for unproven treatments such as hydroxychloroquine at unsafe dosages, and conspiracy theories linking 5G networks to virus transmission. Because these results appeared prominently at the top of the search page, they carried the appearance of authoritative medical advice.

The scale of the problem became apparent when researchers and fact-checkers documented hundreds of instances in which Google's AI systems promoted content from unreliable sources, personal blogs, and social media posts over established medical authorities. The algorithms appeared to prioritize recent content and engagement metrics over source credibility, elevating dangerous medical advice above guidance from the CDC, the WHO, and established medical institutions. Internal Google documents later revealed that the company's content quality systems were not equipped to handle the unprecedented volume of health misinformation generated during the pandemic.

The World Health Organization formally declared an 'infodemic' in February 2020, citing the role of AI-powered platforms in amplifying health misinformation. WHO Director-General Tedros Adhanom Ghebreyesus specifically identified search engines and social media algorithms as contributing to the spread of dangerous medical advice. Multiple poison control centers reported increased calls related to disinfectant ingestion and other dangerous home remedies promoted through AI-generated search results.

Google responded to mounting criticism by implementing emergency content policies in April 2020, manually curating COVID-19 search results and elevating authoritative health sources. The company also deployed human reviewers to monitor health-related featured snippets and made algorithmic changes to prioritize established medical authorities. Researchers nevertheless continued to document health misinformation surfacing through these AI systems well into 2021, suggesting the technical fixes were incomplete.

The incident highlighted fundamental flaws in how AI systems handle health information during crisis situations. Google's emphasis on freshness and engagement in search rankings proved particularly problematic for medical content, where accuracy and source authority should take precedence. The company's reliance on automated systems without adequate human oversight for health-critical information exposed millions of users to potentially life-threatening advice during a global health emergency.
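The emergency response described above, detecting health-related queries and elevating authoritative sources, can be sketched in a few lines. The sketch below is purely illustrative: the keyword list, domain allowlist, and reranking behavior are assumptions made for this example and do not describe Google's actual systems.

```python
# Illustrative sketch only: a simplified curated-source override for health queries.
# The keywords, allowlisted domains, and reranking rule are assumptions for this
# example, not a description of any real search engine.

HEALTH_KEYWORDS = {"covid", "coronavirus", "vaccine", "cure", "treatment", "symptoms"}

# Hypothetical allowlist of authoritative health domains to elevate during an emergency.
AUTHORITATIVE_DOMAINS = {"who.int", "cdc.gov", "nih.gov", "ecdc.europa.eu"}


def is_health_query(query: str) -> bool:
    """Crude health-topic detector based on keyword overlap."""
    return bool(set(query.lower().split()) & HEALTH_KEYWORDS)


def rerank_for_health(results: list[dict]) -> list[dict]:
    """Move results from allowlisted health authorities ahead of all other results,
    preserving the original order within each group."""
    authoritative = [r for r in results if r["domain"] in AUTHORITATIVE_DOMAINS]
    other = [r for r in results if r["domain"] not in AUTHORITATIVE_DOMAINS]
    return authoritative + other


if __name__ == "__main__":
    results = [
        {"domain": "random-blog.example", "title": "Miracle bleach cure?"},
        {"domain": "who.int", "title": "COVID-19 myth busters"},
        {"domain": "cdc.gov", "title": "How to protect yourself"},
    ]
    query = "covid cure"
    if is_health_query(query):
        results = rerank_for_health(results)
    print([r["domain"] for r in results])  # ['who.int', 'cdc.gov', 'random-blog.example']
```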

Root Cause

Google's AI systems for featured snippets and knowledge panels prioritized engagement and recency over medical accuracy, surfacing content from unreliable sources without proper verification against established medical authorities during a global health emergency.
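To make this failure mode concrete, here is a minimal, hypothetical scoring sketch. The feature names and weights are invented for illustration and do not reflect any real ranking function; the point is that a score dominated by recency and engagement lets a viral blog post outrank an established medical authority, while shifting weight onto source authority reverses the ordering.

```python
import math
from dataclasses import dataclass

# All weights and feature values below are invented for illustration.

@dataclass
class Doc:
    title: str
    authority: float   # 0..1, trust in the source (e.g. CDC/WHO near 1.0)
    engagement: float  # 0..1, normalized clicks/shares
    age_hours: float   # time since publication


def score(doc: Doc, w_authority: float, w_engagement: float, w_recency: float) -> float:
    recency = math.exp(-doc.age_hours / 24.0)  # fresher content scores higher
    return (w_authority * doc.authority
            + w_engagement * doc.engagement
            + w_recency * recency)


blog = Doc("Inject bleach to beat the virus!", authority=0.05, engagement=0.9, age_hours=3)
cdc = Doc("CDC: disinfectants are not a treatment", authority=0.95, engagement=0.3, age_hours=72)

# Engagement- and recency-heavy weighting: the viral blog post wins.
print(score(blog, 0.1, 0.5, 0.4) > score(cdc, 0.1, 0.5, 0.4))  # True

# Authority-heavy weighting for health queries: the CDC page wins.
print(score(blog, 0.7, 0.2, 0.1) > score(cdc, 0.7, 0.2, 0.1))  # False
```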

Mitigation Analysis

This incident could have been prevented through mandatory medical fact-checking pipelines for health-related queries, elevated trust signals for authoritative medical sources like CDC and WHO, and human expert review for all COVID-related medical advice. Real-time monitoring of health misinformation patterns and circuit breakers to pause AI recommendations during health emergencies would have limited exposure.
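One of the safeguards suggested above is a circuit breaker that pauses AI-generated answers for health queries when misinformation signals spike. The following is a minimal sketch under assumed inputs: the flagged-result feed, threshold, window size, and fallback behavior are all hypothetical choices made for illustration.

```python
from collections import deque

class HealthSnippetCircuitBreaker:
    """Illustrative circuit breaker: disable AI-generated health snippets when the
    recent rate of flagged (misinformation-reported) results exceeds a threshold.
    The threshold and window size are arbitrary assumptions for this sketch."""

    def __init__(self, window: int = 1000, max_flag_rate: float = 0.02):
        self.recent = deque(maxlen=window)  # sliding window of 0/1 flags
        self.max_flag_rate = max_flag_rate

    def record(self, was_flagged: bool) -> None:
        self.recent.append(1 if was_flagged else 0)

    @property
    def tripped(self) -> bool:
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) > self.max_flag_rate

    def serve(self, query: str, ai_snippet: str, curated_panel: str) -> str:
        """Serve the AI snippet normally, but fall back to a human-curated panel
        while the breaker is tripped."""
        return curated_panel if self.tripped else ai_snippet


breaker = HealthSnippetCircuitBreaker(window=100, max_flag_rate=0.05)
for _ in range(10):
    breaker.record(was_flagged=True)  # simulated spike in flagged results
print(breaker.serve("covid cure", "AI-generated snippet", "Curated WHO/CDC panel"))
```

The design choice here is deliberately conservative: once the breaker trips, every health query falls back to curated content until the flag rate drops, trading answer freshness for safety during an emergency.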

Lessons Learned

AI systems designed for general information retrieval require specialized safeguards and different ranking criteria for health-critical content. The incident demonstrates the need for mandatory human expert review and real-time monitoring systems for medical AI applications, especially during health emergencies when misinformation can cause immediate physical harm.