NYC MyCity AI Chatbot Advised Breaking Laws on Housing Discrimination and Minimum Wage
Severity
High
NYC's AI-powered MyCity chatbot gave illegal advice to small businesses, including telling landlords they could discriminate based on income source and advising minimum wage violations.
Category
misinformation
Industry
Government
Status
Resolved
Date Occurred
May 1, 2024
Date Reported
May 29, 2024
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
chatbot
Harm Type
legal
Human Review in Place
No
Litigation Filed
No
government_ai · legal_advice · housing_discrimination · minimum_wage · small_business · municipal_services · microsoft
Full Description
On May 29, 2024, The Markup published an investigation revealing that New York City's AI-powered MyCity chatbot had been providing legally incorrect advice to small business owners since its October 2023 launch. The chatbot, built on Microsoft's AI technology and intended to help entrepreneurs navigate city regulations, was found to be advising users to violate local housing discrimination laws and minimum wage requirements. Testing conducted by The Markup in May 2024 exposed systematic flaws in the system's understanding of NYC's specific legal framework.
The MyCity chatbot, built on Microsoft's AI technology platform, demonstrated fundamental failures in its grounding in New York City's legal requirements. When queried about housing practices, the system incorrectly advised that landlords could reject tenants based on their source of income, directly contradicting NYC's Source of Income Discrimination Law, which explicitly prohibits such discrimination. The chatbot also provided erroneous guidance suggesting that some businesses might be exempt from the city's minimum wage laws, potentially leading employers to commit wage theft. The system appeared to rely on generic legal interpretations rather than NYC-specific regulations.
The chatbot's incorrect advice created substantial legal exposure for any small business owners who followed its guidance. Violations of NYC's Source of Income Discrimination Law can result in fines up to $250,000 and civil lawsuits, while minimum wage violations expose employers to back pay claims, penalties, and investigations by the Department of Labor. The misinformation particularly threatened vulnerable business owners who rely on official city guidance and lack resources for independent legal counsel. The incident also raised concerns about the city's liability for damages caused by following officially sanctioned but incorrect advice.
Following The Markup's publication on May 29, 2024, New York City officials immediately acknowledged the chatbot's failures and took the system offline for emergency corrections. The city committed to implementing enhanced oversight mechanisms and improving the AI system's accuracy before any potential relaunch. NYC officials emphasized that the chatbot was meant to provide general guidance only, though this disclaimer had not prevented the system from giving specific legal advice that contradicted city law.
The incident highlighted broader risks in deploying AI systems for government services, particularly in legal and regulatory domains where accuracy is essential for compliance. Similar concerns about AI chatbots providing incorrect legal or regulatory advice have emerged in other jurisdictions, prompting questions about oversight standards for government AI implementations. The MyCity failure demonstrated the critical need for comprehensive legal review and validation processes before deploying AI systems that could influence business decisions with significant legal consequences.
Root Cause
The AI chatbot was not properly trained on NYC's specific legal requirements and provided generic or incorrect legal interpretations that contradicted local housing discrimination laws and minimum wage requirements.
Mitigation Analysis
This incident could have been prevented through rigorous pre-deployment testing with legal experts, implementation of human review for legal advice, and content filtering to prevent dispensing advice on regulated topics. Regular auditing of chatbot responses against actual city laws and requiring legal disclaimer language would have reduced liability exposure.
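The content-filtering safeguard described above can be illustrated with a minimal sketch. This is a hypothetical design, not the MyCity implementation: a keyword-based classifier flags questions touching regulated topics (the topic list and function names are assumptions for illustration) and routes them to human review instead of returning a model-generated answer.

```python
# Hypothetical pre-response guardrail sketch: flag user questions that
# touch regulated topics and escalate them rather than answer directly.
# Keyword lists here are illustrative, not a complete legal taxonomy.

REGULATED_TOPICS = {
    "housing": ["section 8", "voucher", "source of income", "tenant", "evict"],
    "wages": ["minimum wage", "tips", "overtime", "pay workers"],
}

def classify_regulated_topics(question: str) -> list[str]:
    """Return the regulated topics a user question touches, if any."""
    q = question.lower()
    return [topic for topic, keywords in REGULATED_TOPICS.items()
            if any(kw in q for kw in keywords)]

def answer_with_guardrail(question: str, model_answer: str) -> str:
    """Escalate regulated questions to human review; otherwise pass through."""
    topics = classify_regulated_topics(question)
    if topics:
        return (f"This question involves regulated topics ({', '.join(topics)}) "
                "and has been routed to a licensed reviewer. Please consult "
                "official NYC guidance before acting.")
    return model_answer
```

A production guardrail would likely use a trained topic classifier rather than keywords, but even this simple gate would have intercepted the source-of-income and minimum-wage questions The Markup tested.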
Lessons Learned
Government AI deployments require specialized legal validation and ongoing oversight. AI systems providing regulatory advice must be tested against actual legal requirements before deployment.
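Testing against actual legal requirements, as recommended above, can take the form of a pre-deployment regression suite. The sketch below is a hypothetical harness (the `ask_chatbot` interface and the ground-truth entries are assumptions for illustration): known NYC legal facts are encoded as expected yes/no answers, and any divergence blocks deployment.

```python
# Hypothetical pre-deployment audit sketch: compare chatbot answers
# against known legal ground truth and report any failing questions.

LEGAL_GROUND_TRUTH = [
    # NYC's Source of Income Discrimination Law prohibits this.
    ("Can landlords reject tenants who pay with housing vouchers?", "no"),
    # NYC minimum wage law applies broadly; "exempt" answers are wrong.
    ("Can my business pay workers less than the NYC minimum wage?", "no"),
]

def audit_chatbot(ask_chatbot) -> list[str]:
    """Return the questions whose answers contradict legal ground truth."""
    failures = []
    for question, expected in LEGAL_GROUND_TRUTH:
        answer = ask_chatbot(question).strip().lower()
        if not answer.startswith(expected):
            failures.append(question)
    return failures
```

Run as a release gate, an empty failure list is required before the system goes live; legal experts would maintain the ground-truth set as city law changes.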
Sources
NYC's AI Chatbot Is Telling Businesses to Break the Law
The Markup · May 29, 2024 · news