Pak'nSave AI Meal Planner Suggested Dangerous Chemical Recipes Including Chlorine Gas
Severity
High
Pak'nSave's AI meal planning chatbot generated dangerous recipes using household chemicals as ingredients, including combinations that could produce chlorine gas and other toxic substances.
Category
Safety Failure
Industry
Other
Status
Resolved
Date Occurred
Aug 1, 2023
Date Reported
Aug 8, 2023
Jurisdiction
International
AI Provider
Other/Unknown
Application Type
chatbot
Harm Type
physical
Human Review in Place
No
Litigation Filed
No
Tags
food_safety, household_chemicals, recipe_generation, consumer_safety, new_zealand, supermarket, chatbot
Full Description
In August 2023, New Zealand supermarket chain Pak'nSave faced significant criticism when its AI-powered meal planning tool, the Savey Meal-Bot, began generating dangerous and potentially lethal recipes. The chatbot was designed to help customers create meals from leftover ingredients they had at home, but it lacked safeguards to distinguish food items from household chemicals.
The Guardian and other media outlets tested the system by entering common household chemicals as ingredients. The AI responded with elaborate recipes that could cause serious harm or death. Notable dangerous suggestions included a 'recipe' combining bleach and ammonia, which would produce chlorine gas, presented as an 'aromatic water mix.' Another recommendation combined water, bleach, and ammonia into a concoction the bot described as 'the perfect non-alcoholic beverage to quench your thirst.' The system also suggested recipes incorporating ant poison and other toxic household substances.
The incident gained international attention when journalists and social media users began sharing screenshots of the dangerous recommendations. The AI's responses were particularly concerning because they were formatted as legitimate recipes with cooking instructions and serving suggestions, potentially misleading users into believing these were safe food preparations. Some recipes even included cheerful descriptions and cooking tips that made the dangerous combinations appear appetizing.
Pak'nSave quickly responded to the controversy by adding disclaimers to the Savey Meal-Bot interface, warning users that the AI was an 'experimental tool' and advising them to use common sense when following recipe suggestions. The company emphasized that the tool was intended for inspiration only and that users should verify the safety and edibility of all ingredients. However, critics argued that these warnings were insufficient given the severity of the potential harm and that the system should have been designed with better safety controls from the outset.
Root Cause
The AI system lacked proper input validation and safety filters to prevent processing of non-food ingredients, allowing household chemicals to be treated as cooking ingredients and generating dangerous chemical combinations.
Mitigation Analysis
This incident could have been prevented through proper input validation restricting ingredients to known food items, safety filters to detect chemical names and dangerous combinations, and human review of generated recipes before publication. Content moderation systems should flag any recipes containing non-food substances or potentially harmful ingredient combinations.
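The layered defence described above can be sketched as a pre-processing filter that runs before any recipe generation. This is a minimal illustrative example, not Pak'nSave's actual implementation; the ingredient lists, function name, and fail-closed policy are all assumptions:

```python
# Hypothetical ingredient-safety filter: allowlist known foods and
# denylist hazardous substances before any recipe generation runs.
# All names and lists here are illustrative, not from the Savey Meal-Bot.

KNOWN_FOODS = {"rice", "chicken", "onion", "carrot", "bread", "cheese"}
HAZARDOUS_TERMS = {"bleach", "ammonia", "ant poison", "drain cleaner", "glue"}

def validate_ingredients(ingredients):
    """Split user input into (safe, rejected) lists.

    Hazardous terms are hard-blocked; anything not on the food
    allowlist is also rejected, so unknown inputs fail closed.
    """
    safe, rejected = [], []
    for item in ingredients:
        name = item.strip().lower()
        if any(term in name for term in HAZARDOUS_TERMS):
            rejected.append(item)   # hard block: matches a known hazard
        elif name in KNOWN_FOODS:
            safe.append(item)       # allowlisted food item
        else:
            rejected.append(item)   # unknown ingredient: fail closed
    return safe, rejected

safe, rejected = validate_ingredients(["Rice", "Bleach", "Onion", "motor oil"])
# safe -> ["Rice", "Onion"]; rejected -> ["Bleach", "motor oil"]
```

The key design choice is failing closed: rather than trying to enumerate every dangerous substance, the filter only passes ingredients it positively recognises as food, which is why the unrecognised "motor oil" is rejected even though it appears on neither list.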
Lessons Learned
This incident highlights the critical importance of implementing comprehensive safety filters and input validation in AI systems that provide recommendations affecting human health and safety, particularly when those systems are accessible to the general public without expert oversight.
Sources
New Zealand supermarket's AI meal-planner warns against 'eating harmful items' after food/poison confusion
The Guardian · Aug 8, 2023 · news
AI recipe bot suggests 'poison bread sandwich' and 'bleach salad'
BBC News · Aug 8, 2023 · news