UnitedHealthcare AI Algorithm Denied Elderly Patients' Coverage Despite 90% Error Rate
Critical
UnitedHealthcare used an AI algorithm called nH Predict to systematically deny elderly patients' post-acute care coverage despite an alleged 90% error rate, measured by the share of denials overturned when reviewed by humans on appeal. A class action lawsuit was filed in 2023 alleging the practice harmed thousands of Medicare Advantage patients.
Category
Bias
Industry
Healthcare
Status
Litigation Pending
Date Occurred
Jan 1, 2019
Date Reported
Nov 13, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Model
nH Predict
Application Type
API integration
Harm Type
Financial
Estimated Cost
$50,000,000
People Affected
100,000
Human Review in Place
Yes
Litigation Filed
Yes
Litigation Status
Pending
Tags
healthcare, medicare, elderly_care, insurance_denial, algorithmic_bias, class_action, post_acute_care
Full Description
In November 2023, a STAT News investigation revealed that UnitedHealthcare, one of America's largest health insurers, had been using an AI algorithm called nH Predict to systematically deny coverage for post-acute care services to elderly Medicare Advantage patients. The algorithm was designed to predict how long patients would need services such as nursing home care, home health assistance, and rehabilitation therapy following hospital discharge.
Internal company documents obtained through the investigation showed that the AI system had a staggering 90% override rate when human reviewers examined its decisions upon appeal. This meant that in 9 out of 10 cases where patients or their families challenged the algorithm's denial of coverage, human medical professionals determined that the care was actually medically necessary and should be covered. Despite this extraordinarily high error rate, UnitedHealthcare continued to rely on the algorithm for initial coverage determinations.
The investigation found that the algorithm's use began around 2019 and affected tens of thousands of elderly patients who required post-acute care services. Many patients and their families were forced to either pay out-of-pocket for essential medical services or forgo care entirely when coverage was denied. The financial impact on individual families was often devastating, with nursing home costs alone typically ranging from $5,000 to $15,000 per month.
UnitedHealthcare's own internal communications, revealed through the investigation, showed that company executives were aware of the algorithm's high override rate but continued its deployment. The company appeared to use the algorithm as a cost-saving measure, banking on the fact that many elderly patients would not have the resources or knowledge to appeal coverage denials, even when those denials were medically inappropriate.
In November 2023, a class action lawsuit was filed against UnitedHealthcare in federal court, alleging that the company's use of the flawed AI algorithm violated federal laws governing Medicare Advantage plans. The lawsuit seeks both monetary damages for affected patients and injunctive relief to prevent continued use of the biased algorithm. The case represents one of the most significant legal challenges to date regarding the use of AI systems in healthcare coverage decisions.
The incident has prompted broader scrutiny of how health insurers use AI and algorithmic decision-making tools in coverage determinations. Healthcare advocates have called for increased regulatory oversight of AI systems used in medical decision-making, particularly when those systems affect vulnerable populations like elderly patients who may have limited ability to navigate complex appeals processes.
Root Cause
UnitedHealthcare deployed an AI algorithm with known systematic bias that denied coverage at rates far exceeding clinical necessity. Human reviewers overturned roughly 90% of its decisions when appealed, indicating fundamental flaws in the algorithm's decision-making criteria.
Mitigation Analysis
Continuous algorithmic auditing with bias detection metrics could have identified the 90% override rate as evidence of systematic failure. Mandatory human review of all algorithm decisions before claim denial, rather than only upon appeal, would have prevented most wrongful denials. Regular model retraining with oversight from clinical experts and patient advocates could have addressed biased decision patterns.
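To illustrate the kind of continuous audit described above, the following is a minimal sketch of an override-rate monitor. It is hypothetical: the names (`DenialRecord`, `audit`), the 10% tolerance threshold, and the suspension logic are assumptions for illustration, not details drawn from UnitedHealthcare's systems or the lawsuit.

```python
from dataclasses import dataclass

@dataclass
class DenialRecord:
    """One algorithmic coverage denial and, if appealed, the human review outcome."""
    claim_id: str
    appealed: bool
    overturned_on_appeal: bool  # True if a human reviewer reversed the denial

def appeal_override_rate(denials: list[DenialRecord]) -> float:
    """Share of appealed denials that human reviewers overturned."""
    appealed = [d for d in denials if d.appealed]
    if not appealed:
        return 0.0
    return sum(d.overturned_on_appeal for d in appealed) / len(appealed)

def audit(denials: list[DenialRecord], max_override_rate: float = 0.10) -> str:
    """Flag the model for suspension when the override rate exceeds tolerance."""
    rate = appeal_override_rate(denials)
    if rate > max_override_rate:
        return f"SUSPEND: override rate {rate:.0%} exceeds tolerance {max_override_rate:.0%}"
    return f"OK: override rate {rate:.0%} within tolerance"

# Example: a 90% override rate, as alleged in the lawsuit, triggers suspension.
sample = [DenialRecord(f"c{i}", appealed=True, overturned_on_appeal=(i < 9)) for i in range(10)]
print(audit(sample))
```

In practice, a check like this would feed a governance process requiring clinical sign-off before the model returns to production, consistent with the mandatory human review and expert-overseen retraining recommended above.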
Litigation Outcome
Class action lawsuit filed in November 2023 seeking damages for wrongfully denied claims and injunctive relief to stop the practice.
Lessons Learned
The incident demonstrates the critical need for rigorous validation and ongoing monitoring of AI systems used in healthcare decisions, particularly when serving vulnerable populations. High override rates should trigger immediate algorithm suspension and retraining rather than continued deployment for cost-saving purposes.
Sources
UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges
STAT News · Nov 13, 2023 · news
Lawsuit: UnitedHealth Used Faulty AI to Deny Elderly Patients Care
Courthouse News Service · Nov 14, 2023 · news