UnitedHealth AI Algorithm with 90% Error Rate Denied Elderly Post-Acute Care Coverage

Critical

UnitedHealth used an AI algorithm with a known 90% error rate to deny post-acute care coverage for elderly Medicare Advantage patients, overriding physician recommendations and forcing premature discharges from nursing facilities.

Category
Bias
Industry
Healthcare
Status
Litigation Pending
Date Occurred
Jan 1, 2022
Date Reported
Nov 13, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Model
nH Predict
Application Type
embedded
Harm Type
financial
Estimated Cost
$50,000,000
People Affected
100,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Tags
healthcare_AI, insurance_denials, elderly_care, Medicare_Advantage, algorithmic_bias, post_acute_care

Full Description

In November 2023, a STAT News investigation revealed that UnitedHealth Group, the nation's largest health insurer, had been using an AI algorithm called nH Predict to systematically deny coverage for elderly patients in nursing homes and rehabilitation facilities. The algorithm was designed to predict how long patients would need post-acute care and to override physician recommendations, despite UnitedHealth's internal knowledge that the system had a 90% error rate in predicting patient recovery timelines.

The nH Predict algorithm analyzed patient data to generate length-of-stay predictions for post-acute care, effectively serving as an automated claims-denial system. According to the investigation, UnitedHealth continued deploying the algorithm across its Medicare Advantage plans even after internal assessments revealed its severe inaccuracy. The algorithm disproportionately affected elderly patients, who typically require longer recovery periods, creating a systematic bias against this vulnerable population.

The financial and human impact was substantial, affecting an estimated 100,000 Medicare Advantage patients over approximately two years. Families were forced either to pay thousands of dollars out of pocket for continued care or to discharge elderly relatives to inadequate settings before they were medically ready. Many patients experienced adverse health outcomes as a result of premature discharge decisions driven by the flawed algorithm rather than clinical judgment.

Following the STAT News investigation, multiple class action lawsuits were filed against UnitedHealth Group alleging systematic denial of medically necessary care. The lawsuits argue that the company prioritized cost savings over patient welfare by deploying an algorithm it knew to be unreliable.
The case has drawn attention from healthcare advocates and raised broader questions about the use of AI in healthcare coverage decisions, particularly for vulnerable populations like elderly patients requiring extended care periods.

Root Cause

UnitedHealth deployed an AI algorithm, nH Predict, to determine length of stay for post-acute care, overriding physician recommendations, despite the company knowing the algorithm had a 90% error rate in predicting patient recovery timelines.

Mitigation Analysis

This incident could have been prevented through mandatory human physician review of all AI-generated coverage decisions, especially for vulnerable populations. Algorithm validation testing with clinical outcomes tracking would have revealed the 90% error rate earlier. Real-time monitoring of denial patterns and patient outcomes, combined with transparent algorithmic decision criteria shared with healthcare providers, could have identified systemic bias against elderly patients requiring extended care.
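The denial-pattern monitoring described above can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration, not UnitedHealth's actual system: the record fields (`denied`, `appealed`, `overturned`), the alert threshold, and the function names are invented. The idea is simply that tracking the share of appealed algorithmic denials that get overturned would have surfaced a 90% reversal pattern quickly.

```python
# Hypothetical sketch of appeal-reversal monitoring for an automated
# coverage-denial algorithm. Field names and the threshold are
# illustrative assumptions, not any insurer's real schema or policy.

REVERSAL_ALERT_THRESHOLD = 0.50  # assumed tolerance; a real policy would set its own


def appeal_reversal_rate(decisions):
    """Return the share of appealed algorithmic denials that were overturned."""
    appealed = [d for d in decisions if d["denied"] and d["appealed"]]
    if not appealed:
        return 0.0
    overturned = sum(1 for d in appealed if d["overturned"])
    return overturned / len(appealed)


def should_escalate(decisions):
    """Flag the algorithm for mandatory human review when reversals exceed the threshold."""
    return appeal_reversal_rate(decisions) > REVERSAL_ALERT_THRESHOLD


# Toy data mirroring the reported pattern: 9 of 10 appealed denials reversed.
sample = (
    [{"denied": True, "appealed": True, "overturned": True}] * 9
    + [{"denied": True, "appealed": True, "overturned": False}]
)

print(appeal_reversal_rate(sample))  # 0.9
print(should_escalate(sample))       # True
```

The design point is that this signal requires no access to the algorithm's internals: reversal rates on appeal are observable from claims data alone, which is why real-time monitoring could have flagged the problem long before litigation did.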

Lessons Learned

This incident demonstrates the critical need for rigorous validation and ongoing monitoring of AI systems used in healthcare coverage decisions, particularly when they affect vulnerable populations like elderly patients requiring extended care.