NYC AI Chatbot Advised Small Businesses to Break Labor and Housing Laws

High

NYC's AI chatbot for small businesses gave illegal advice including telling users they could discriminate in housing, keep workers' tips, and pay below minimum wage, prompting investigation by The Markup and city response.

Category
Hallucination
Industry
Government
Status
Resolved
Date Occurred
Mar 1, 2024
Date Reported
Mar 29, 2024
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
chatbot
Harm Type
legal
Human Review in Place
No
Litigation Filed
No
Tags
government_ai, legal_advice, chatbot, municipal_services, labor_law, housing_discrimination, small_business

Full Description

New York City launched an AI-powered chatbot to help small business owners navigate city regulations and requirements, providing guidance on permits, licensing, and compliance in support of the city's small business community. In March 2024, however, testing by The Markup revealed that the system was giving dangerously incorrect legal advice that could expose business owners to significant legal liability.

The Markup's investigation documented multiple instances in which the chatbot advised users to engage in illegal practices. Asked about housing discrimination, the bot told users they could discriminate against tenants based on source of income, even though this is explicitly prohibited under NYC law. The system also incorrectly advised restaurant owners that they could keep workers' tips and that they did not need to pay minimum wage to tipped employees, both violations of labor law. Other problematic responses included advice that businesses could charge different prices based on protected characteristics and guidance that contradicted established employment regulations.

The investigation found that the chatbot lacked safeguards against disseminating illegal advice. The system appeared to have been trained on general business guidance materials without adequate grounding in New York City's specific legal framework. There was no apparent human review of responses containing legal advice, no guardrails to flag potentially problematic guidance for verification, and no appropriate disclaimers about the limits of the system's legal knowledge.

Following The Markup's report, New York City officials acknowledged the problems and took the chatbot offline for review and corrections. The incident highlighted the risks of deploying AI systems in government contexts without adequate oversight, particularly when the guidance they provide carries legal consequences for users. The city's response included plans for improved training procedures and review processes before relaunching the service.

Root Cause

The AI system was trained on generic business guidance but lacked proper grounding in specific NYC legal requirements and failed to distinguish between legal and illegal business practices when providing advice.
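One way to address the grounding gap described above is to require that any answer on a regulated topic be backed by an authoritative rule before it is released. The sketch below is a minimal illustration of that pattern; the rule texts, topic keys, and function names are hypothetical paraphrases for demonstration, not the city's actual system or official legal language.

```python
# Hypothetical grounding check: release a drafted answer only when a
# curated, authoritative rule exists for the detected topic; otherwise
# refuse rather than improvise. Rule texts are illustrative paraphrases.

NYC_RULES = {
    "tipped_wages": (
        "Employers may not keep employees' tips, and tipped workers must "
        "still receive at least the applicable minimum wage."
    ),
    "source_of_income": (
        "Housing discrimination based on lawful source of income is "
        "prohibited under NYC law."
    ),
}

REFUSAL = (
    "I can't answer that reliably. Please contact the relevant NYC agency "
    "or consult an attorney."
)

def grounded_answer(topic: str, draft: str) -> str:
    """Attach the authoritative basis to a draft answer, or refuse."""
    rule = NYC_RULES.get(topic)
    if rule is None:
        return REFUSAL
    return f"{draft}\n\nAuthoritative basis: {rule}"

backed = grounded_answer("tipped_wages", "No, you may not keep workers' tips.")
refused = grounded_answer("street_vending", "Draft answer with no rule backing.")
```

Refusing when no curated rule matches trades coverage for safety, which is usually the right default when incorrect answers carry legal consequences.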

Mitigation Analysis

This incident could have been prevented through mandatory human review of all legal advice responses, training data validation by legal experts, and implementation of guardrails that flag responses containing legal guidance for human verification. The system should have included clear disclaimers and limitations on legal advice capability.
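The flag-for-human-review guardrail described above can be sketched simply: scan each response for legal-sensitive topics, route matches to a reviewer queue, and always append a disclaimer. The topic patterns, disclaimer wording, and function below are illustrative assumptions, not a description of any deployed system.

```python
import re

# Hypothetical guardrail sketch: flag responses touching legal topics so a
# human can verify them before delivery, and append a standing disclaimer.
# The pattern list is an illustrative assumption and would need legal input.

LEGAL_TOPIC_PATTERNS = [
    r"\bminimum wage\b",
    r"\btips?\b",
    r"\bdiscriminat\w*",
    r"\bevict\w*",
    r"\bsource of income\b",
]

DISCLAIMER = (
    "This response is informational only and is not legal advice. "
    "Consult the relevant NYC agency or an attorney before acting."
)

def review_gate(response: str) -> dict:
    """Return the response plus routing metadata for human review."""
    hits = [
        p for p in LEGAL_TOPIC_PATTERNS
        if re.search(p, response, re.IGNORECASE)
    ]
    return {
        "response": response + "\n\n" + DISCLAIMER,
        "needs_human_review": bool(hits),
        "matched_topics": hits,
    }

flagged = review_gate("You may keep a portion of your workers' tips.")
clean = review_gate("Food cart permits are issued by the Health Department.")
```

A keyword gate like this is deliberately over-inclusive: false positives cost reviewer time, while false negatives are exactly the failure mode this incident exhibited.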

Lessons Learned

Government AI systems require enhanced oversight and validation processes, especially when providing advice with legal implications. Human expert review of AI-generated legal guidance is essential, and systems must include robust disclaimers about their limitations.