Google Gemini AI Provided Incorrect Legal Advice on Tenant Rights Across Multiple States

Severity
High

Google's Gemini AI provided incorrect legal advice about tenant rights and eviction procedures across multiple US states; users who relied on the flawed guidance faced legal setbacks and financial losses.

Category
Hallucination
Industry
Legal
Status
Under Investigation
Date Occurred
Jan 1, 2025
Date Reported
Jan 15, 2025
Jurisdiction
US
AI Provider
Google
Model
Gemini
Application Type
Chatbot
Harm Type
Legal
Estimated Cost
$500,000
People Affected
150
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Tags
legal_advice, tenant_rights, jurisdiction_confusion, eviction_law, unauthorized_practice, state_law_variations

Full Description

In early 2025, Google's Gemini AI was documented providing systematically incorrect legal advice to users seeking information about tenant rights, eviction procedures, and lease termination laws across multiple US states. The incidents came to light when several tenants who had consulted the AI for legal guidance lost eviction proceedings after receiving advice that was outdated, applicable to a different jurisdiction, or simply inaccurate.

The most significant documented cases occurred in California, Texas, New York, and Florida, where Gemini provided conflicting information about eviction notice periods, tenant rights during lease disputes, and security deposit regulations. In one notable California case, the AI advised a tenant that they had 30 days to respond to an eviction notice when state law actually required a response within 5 days, resulting in a default judgment against the tenant. In Texas, Gemini incorrectly stated that landlords were required to provide 60 days' notice of lease non-renewal when Texas law requires only 30 days, leading tenants to believe they had more time to secure alternative housing.

Legal aid organizations began documenting the pattern after receiving multiple calls from tenants who had relied on Gemini's advice and faced adverse legal outcomes. The Legal Services Corporation reported that at least 150 low-income tenants across multiple states had been affected, with estimated financial damages, including court fees, legal representation costs, and housing-related expenses, totaling approximately $500,000. Many affected individuals were forced into emergency housing or suffered credit damage from eviction judgments.

The incident highlighted the absence of adequate disclaimers in Gemini's responses to legal queries. While the AI occasionally included generic warnings about seeking professional legal advice, it often presented state-specific legal information with apparent authority and confidence, without clearly indicating the limitations of its knowledge or the critical importance of jurisdictional variations in law. Investigation revealed that the model was conflating legal information from different states and time periods, sometimes basing its advice on statutes or regulations that had since been superseded.

Google's initial response acknowledged the incidents and stated that the company was reviewing its systems for providing legal information. Critics countered that the company had not implemented sufficient safeguards to prevent the AI from acting as an unauthorized legal advisor, despite known risks in this domain. Legal experts noted that the incident fit a broader pattern of AI systems providing professional advice in regulated fields without appropriate oversight or qualifications.

Root Cause

The AI model provided state-specific legal advice without proper knowledge of jurisdictional variations in tenant rights laws, mixing regulations across different states and providing outdated or incorrect interpretations of eviction procedures.

Mitigation Analysis

Implementation of jurisdiction-specific legal knowledge bases with regular updates, mandatory disclaimers for legal queries directing users to licensed attorneys, and integration with verified legal databases could have prevented this incident. Human review by licensed attorneys for legal advice responses and geographic location verification before providing state-specific guidance would have been essential safeguards.
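
As a concrete illustration of two of these safeguards, the Python sketch below shows a pre-response guard that refuses to give state-specific tenant-law answers until the user's jurisdiction is confirmed, and attaches a mandatory disclaimer once it is. This is a minimal hypothetical design, not Google's implementation: the keyword screen, the legal_query_guard function, and the GuardResult type are all invented for illustration, and a production system would replace the regex with a trained query classifier and draw answers only from a vetted, state-tagged legal knowledge base.

import re
from dataclasses import dataclass

# Crude keyword screen for legal-advice queries; illustrative only.
LEGAL_KEYWORDS = re.compile(
    r"\b(evict(?:ion)?|lease|tenant|landlord|security deposit|notice period)\b",
    re.IGNORECASE,
)

DISCLAIMER = (
    "This is general information, not legal advice. Tenant law varies by "
    "state; consult a licensed attorney in your jurisdiction."
)

@dataclass
class GuardResult:
    allow: bool               # may the model answer at all?
    jurisdiction: str | None  # verified state, if any
    message: str              # disclaimer or follow-up question for the user

def legal_query_guard(query: str, user_state: str | None) -> GuardResult:
    """Gate state-specific legal answers behind jurisdiction verification."""
    if not LEGAL_KEYWORDS.search(query):
        # Not a legal query: pass through unchanged.
        return GuardResult(allow=True, jurisdiction=None, message="")
    if user_state is None:
        # Refuse state-specific guidance until the jurisdiction is confirmed.
        return GuardResult(
            allow=False,
            jurisdiction=None,
            message=(
                "Which US state does this concern? Tenant rights and "
                "eviction timelines differ significantly between states."
            ),
        )
    # Jurisdiction known: the answer may proceed, but only from a vetted,
    # state-tagged knowledge base, and always with the disclaimer attached.
    return GuardResult(allow=True, jurisdiction=user_state, message=DISCLAIMER)

if __name__ == "__main__":
    q = "How long do I have to respond to an eviction notice?"
    print(legal_query_guard(q, None))   # blocked: asks for the state first
    print(legal_query_guard(q, "CA"))   # allowed, with disclaimer attached

The design point is that the guard runs before generation: the model never composes a state-specific answer for an unverified jurisdiction, rather than relying on a generic disclaimer appended after the fact.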

Lessons Learned

This incident demonstrates the critical need for AI systems to recognize professional practice boundaries and implement robust safeguards when users seek advice in regulated fields like law, where incorrect information can lead to significant legal and financial consequences.