Google AI Overviews Provided Dangerous Advice Including Eating Rocks and Using Glue on Pizza
Severity
High
Google's AI Overviews feature provided dangerous advice, including eating rocks for minerals and using glue on pizza, sourced from satirical content. Google quickly scaled back the feature after widespread criticism.
Category
Hallucination
Industry
Technology
Status
Resolved
Date Occurred
May 14, 2024
Date Reported
May 23, 2024
Jurisdiction
International
AI Provider
Google
Application Type
Embedded
Harm Type
Physical
Human Review in Place
No
Litigation Filed
No
Tags
google, search, ai_overviews, dangerous_advice, satirical_content, health_safety, source_quality
Full Description
In May 2024, Google launched AI Overviews, an enhanced search feature that provided AI-generated summaries at the top of search results. Almost immediately after the public rollout, users began discovering and sharing screenshots of bizarre and dangerous advice generated by the system. Notable examples included telling users to eat rocks for minerals, suggesting adding glue to pizza to prevent cheese from sliding off, and recommending gasoline for spaghetti recipes.
The erroneous advice stemmed from the AI system's inability to distinguish between legitimate sources and satirical content. The glue-on-pizza recommendation, for instance, was traced back to a joke comment from Reddit user 'fucksmith' made over a decade earlier. Similarly, other dangerous suggestions appeared to originate from parody websites, satirical forums, and joke responses that had been indexed by Google's search engine and subsequently treated as authoritative sources by the AI system.
The incident gained widespread attention on social media platforms, with users sharing screenshots of the absurd recommendations under hashtags like #GoogleAIFail. Technology journalists and AI safety experts quickly highlighted the serious safety implications of an AI system providing potentially lethal advice with the apparent authority of Google's search platform. The examples demonstrated how AI systems could amplify and legitimize dangerous misinformation when they lack proper safeguards.
Google responded by acknowledging the issues while characterizing them as isolated examples that did not represent the system's overall performance. The company emphasized that AI Overviews generally provided high-quality information but admitted that the system had limitations, particularly with unusual queries and satirical content. Google implemented rapid fixes to filter out problematic sources and adjusted the types of queries that would trigger AI Overviews.
Within days of the incident becoming public, Google significantly scaled back the rollout of AI Overviews, reducing the frequency with which they appeared in search results. The company also implemented additional quality controls and began excluding certain categories of content sources. Industry analysts noted that the incident highlighted the challenges of deploying generative AI at scale without adequate safety measures, particularly when integrating with critical information services that users rely on for health and safety guidance.
Root Cause
Google's AI Overviews feature ingested satirical and joke content from sources like Reddit without distinguishing between serious advice and humor, leading to the generation of dangerous recommendations presented as factual information.
Mitigation Analysis
This incident could have been prevented through better source quality filtering to exclude satirical content, human review of health and safety-related responses before deployment, and more robust training to identify and flag potentially harmful advice. Content provenance tracking could have helped identify when responses were sourced from obviously satirical or joke contexts.
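The source-quality filtering described above can be sketched in a few lines. This is a minimal illustration, not Google's actual pipeline: the domain denylist, the context heuristics, and the snippet schema (`domain`/`context` keys) are all hypothetical, and a production system would use far richer signals than string matching.

```python
# Illustrative sketch of pre-summarization source filtering.
# Domain lists and context flags are hypothetical examples only.

SATIRE_DOMAINS = {"theonion.com", "clickhole.com"}          # known-satirical outlets
LOW_TRUST_CONTEXTS = {"comment", "joke", "forum_reply"}     # non-authoritative contexts

def is_trustworthy(snippet: dict) -> bool:
    """Return False for snippets from satirical domains or joke-like contexts."""
    domain = snippet.get("domain", "").lower()
    context = snippet.get("context", "").lower()
    if domain in SATIRE_DOMAINS:
        return False
    # Treat user comments and joke forums as non-authoritative for advice queries.
    if any(flag in context for flag in LOW_TRUST_CONTEXTS):
        return False
    return True

def filter_sources(snippets: list[dict]) -> list[dict]:
    """Keep only snippets that pass the trust check before AI summarization."""
    return [s for s in snippets if is_trustworthy(s)]
```

In this sketch, a decade-old Reddit joke comment would be dropped before it ever reached the summarizer, while an article from an authoritative health source would pass through. The hard part in practice is the classification itself, which is why human review of health and safety responses remains a necessary backstop.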
Lessons Learned
The incident demonstrates the critical importance of source quality control and content filtering when deploying AI systems that could provide health or safety advice. It also highlights the need for distinguishing between authoritative and satirical sources in training data.
Sources
Google is scaling back AI Overviews after they told users to eat glue
The Verge · May 24, 2024 · news
Google AI tells users to add glue to their pizza
BBC · May 24, 2024 · news