Major Airline AI Chatbots Provide Incorrect Refund and Rebooking Information, Following Air Canada Precedent
Severity
High
AI chatbots at United, Delta, and American Airlines provided incorrect refund and rebooking information to thousands of passengers, prompting a DOT investigation and class-action lawsuits following the Air Canada precedent.
Category
Hallucination
Industry
Technology
Status
Under Investigation
Date Occurred
Jan 15, 2025
Date Reported
Jan 28, 2025
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
chatbot
Harm Type
financial
Estimated Cost
$50,000,000
People Affected
25,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
Department of Transportation
Tags
airline, chatbot, customer_service, DOT_regulations, refund_policy, consumer_protection, class_action
Full Description
In January 2025, AI customer service chatbots deployed by United Airlines, Delta Air Lines, and American Airlines began providing systematically incorrect information about passenger refund rights, rebooking policies, and compensation eligibility under Department of Transportation regulations. The incidents came to light when passengers who relied on chatbot guidance found their refund claims denied and were forced to pay additional rebooking fees that should have been waived under DOT rules.
The most significant cases involved passengers whose flights were cancelled due to airline operational issues. The AI chatbots incorrectly informed customers that they were only eligible for travel vouchers rather than full cash refunds, directly contradicting DOT regulations that mandate cash refunds for airline-caused cancellations. Additionally, the bots provided incorrect information about rebooking deadlines, compensation amounts, and passenger rights during extended delays.
Following the 2024 Air Canada chatbot ruling, in which British Columbia's Civil Resolution Tribunal held that the airline was responsible for the promises made by its chatbot, consumer advocacy groups and law firms quickly mobilized. Class-action lawsuits were filed against all three carriers by February 2025, with plaintiffs arguing that the airlines are legally bound by the incorrect information provided by their AI systems. The lawsuits estimate that over 25,000 passengers were affected, with individual damages ranging from $200 to $3,000 per passenger.
The Department of Transportation launched a formal investigation in February 2025, examining whether the airlines' use of AI chatbots violated consumer protection regulations. DOT officials indicated they would review whether airlines have adequate oversight mechanisms for AI-generated customer communications and whether new regulations are needed to govern AI deployment in customer service roles. The investigation is ongoing, with potential enforcement actions and fines pending.
Root Cause
AI chatbots were trained on outdated or incomplete policy documents and lacked proper validation mechanisms to verify information accuracy against current DOT regulations and airline policies before providing responses to customers.
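The missing validation step described above can be illustrated with a minimal sketch: before a drafted refund answer reaches a passenger, it is checked against a canonical policy table and rejected on any mismatch. The policy entries, function names, and rule categories here are illustrative assumptions, not actual DOT regulation text or any airline's real system.

```python
from dataclasses import dataclass

# Simplified stand-in for an authoritative, regularly synced policy store.
# Maps cancellation cause to the remedy the airline must offer (illustrative).
POLICY_TABLE = {
    "airline_caused": "cash_refund",        # cash refund owed, not a voucher
    "weather": "rebooking_or_refund",
    "passenger_initiated": "per_fare_rules",
}

@dataclass
class DraftAnswer:
    cancellation_cause: str
    claimed_remedy: str
    text: str

def validate_refund_answer(draft: DraftAnswer) -> tuple[bool, str]:
    """Return (approved, reason). Unknown causes or mismatched remedies
    are rejected so the query can be escalated to a human agent."""
    required = POLICY_TABLE.get(draft.cancellation_cause)
    if required is None:
        return False, "unknown cancellation cause - escalate to human"
    if draft.claimed_remedy != required:
        return False, (f"claimed '{draft.claimed_remedy}' "
                       f"but policy requires '{required}'")
    return True, "consistent with policy table"

# The failure mode in this incident: a voucher offered where cash is owed.
bad = DraftAnswer("airline_caused", "travel_voucher",
                  "You are eligible for a travel voucher.")
approved, reason = validate_refund_answer(bad)
# approved is False, so the answer is blocked before reaching the passenger
```

Even a check this simple would have caught the voucher-for-cash substitution at the core of the incident, provided the policy table itself was kept in sync with current regulations.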
Mitigation Analysis
Implementation of human oversight for policy-related queries, regular training data updates synchronized with regulatory changes, and real-time validation against authoritative policy databases could have prevented misinformation. Pre-deployment testing should include edge cases around passenger rights and refund scenarios with legal review.
Lessons Learned
The incident demonstrates that AI systems providing regulatory or policy information require robust validation mechanisms and human oversight. Airlines and other regulated industries must ensure AI training data reflects current regulations and implement real-time fact-checking against authoritative sources.
Sources
Major US Airlines Face Lawsuits Over AI Chatbot Misinformation
Reuters · Feb 15, 2025 · news
DOT Launches Investigation into Airline AI Chatbot Practices
Department of Transportation · Feb 8, 2025 · regulatory action