Optum Healthcare Algorithm Showed Racial Bias Against Black Patients
Critical
Optum's widely used healthcare risk algorithm systematically underestimated care needs for Black patients by using healthcare spending as a proxy for health status, affecting over 10 million patients across major health systems.
Category
Bias
Industry
Healthcare
Status
Resolved
Date Occurred
Jan 1, 2008
Date Reported
Oct 25, 2019
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
embedded
Harm Type
physical
People Affected
10,000,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
healthcare · racial_bias · algorithmic_fairness · Optum · UnitedHealth · care_management · health_equity
Full Description
In October 2019, researchers led by Ziad Obermeyer published a landmark study in Science demonstrating systematic racial bias in a healthcare algorithm used by Optum, a subsidiary of UnitedHealth Group. The algorithm was deployed across major health systems nationwide to identify patients who would benefit from high-risk care management programs, affecting an estimated 10-70 million patients annually. The study analyzed data from a large academic medical center and found that at any given risk score, Black patients were significantly sicker than white patients, as measured by the number of active chronic conditions.
The root cause of the bias lay in the algorithm's fundamental design choice to use healthcare costs as a proxy for healthcare needs. The researchers discovered that Black patients spent approximately $1,800 less per year on healthcare than equally sick white patients. This spending gap reflected systemic barriers to healthcare access, including provider bias, geographic access issues, and socioeconomic factors, rather than differences in actual health status. By training on historical cost data, the algorithm perpetuated and amplified existing healthcare disparities.
The study found that this bias significantly reduced the number of Black patients identified for care management programs. At the 97th percentile risk score threshold typically used for program enrollment, only 17.7% of the patients identified were Black, even though, at equivalent levels of health, Black patients made up 26.1% of the patient population. The researchers estimated that correcting this bias would increase Black patient enrollment by 84%, adding approximately 1,400 additional Black patients to care management programs at the studied health system alone.
Following publication of the research, multiple class action lawsuits were filed against UnitedHealth and Optum alleging discriminatory practices in their algorithmic tools. The company faced significant public and regulatory scrutiny, leading to commitments to work with the research team and other stakeholders to develop less biased algorithms. Optum pledged to reduce bias by 80% and began incorporating additional clinical measures beyond cost data, including biomarkers, vital signs, and medication usage patterns. The incident sparked broader discussions about algorithmic fairness in healthcare and influenced regulatory proposals for AI bias testing requirements.
Root Cause
The algorithm used healthcare spending as a proxy for health needs, but Black patients historically spend less on healthcare due to systemic barriers and socioeconomic factors, not because they are healthier. This created a feedback loop where past discrimination was encoded into future care decisions.
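The proxy failure described above can be illustrated in a few lines. The following is a minimal, purely illustrative sketch using synthetic data: the patient model, group labels, and cost formula are invented for demonstration, and only the roughly $1,800 annual spending gap is taken from the study. It shows how ranking patients by a cost-trained score under-selects a group that spends less at equal illness.

```python
import random

random.seed(0)

# Illustrative sketch only: synthetic patients, not Optum's actual model
# or data. Each patient has a true illness burden; group "B" incurs lower
# spending at equal illness (the ~$1,800/year gap reported in the study).
def make_patient(group):
    illness = random.uniform(0, 10)                # chronic-condition burden
    access_penalty = 1800 if group == "B" else 0   # systemic spending gap
    cost = illness * 1000 - access_penalty + random.gauss(0, 200)
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

# A cost-trained "risk score" is effectively predicted cost: rank by it and
# enroll the top 3% (mirroring the 97th-percentile enrollment threshold).
patients.sort(key=lambda p: p["cost"], reverse=True)
top = patients[: len(patients) * 3 // 100]

share_b = sum(p["group"] == "B" for p in top) / len(top)
print(f"Group B share of top-3% cost scores: {share_b:.1%}")
# Group B is half the population yet badly under-selected, because equal
# illness maps to lower cost: the proxy, not health, drives selection.
```

The feedback loop follows directly: patients excluded from care management receive less care, spend less, and score even lower in the next training cycle.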
Mitigation Analysis
Regular algorithmic auditing across racial and ethnic groups could have detected the bias earlier. Using clinical biomarkers and direct health measures rather than cost as proxy variables would eliminate the fundamental design flaw. Mandatory bias testing before deployment and ongoing monitoring with demographic breakdowns would catch such systematic disparities.
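The audit described above amounts to a simple calibration check by group. Below is a hypothetical sketch, again on synthetic data (the score formula and group labels are assumptions, not Optum's pipeline): within each risk-score decile, compare a direct clinical measure across groups. If one group is consistently sicker at the same score, the score is biased against it.

```python
import random
import statistics

random.seed(1)

# Hypothetical audit sketch (not Optum's actual pipeline): synthetic
# patients scored by a cost-based proxy, checked against a direct
# clinical measure (active chronic conditions).
def patient(group):
    conditions = random.randint(0, 8)          # active chronic conditions
    gap = 1800 if group == "B" else 0          # access-driven spending gap
    score = conditions * 1000 - gap + random.gauss(0, 300)
    return {"group": group, "conditions": conditions, "score": score}

pop = [patient(g) for g in ("A", "B") for _ in range(5000)]
pop.sort(key=lambda p: p["score"])

# Audit: within each risk-score decile, mean chronic conditions per group.
# Equal scores should imply equal sickness; a persistent gap is bias.
audit = []
size = len(pop) // 10
for d in range(10):
    band = pop[d * size:(d + 1) * size]
    means = {}
    for g in ("A", "B"):
        vals = [p["conditions"] for p in band if p["group"] == g]
        means[g] = statistics.mean(vals) if vals else None  # band may lack a group
    audit.append(means)
    print(f"decile {d}: A={means['A']}, B={means['B']}")
```

Running a check like this per demographic group before deployment, and on every retraining cycle, would surface exactly the disparity the Science study later found by hand.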
Litigation Outcome
Multiple class action lawsuits filed against UnitedHealth/Optum resulting in settlements and commitments to algorithmic bias remediation
Lessons Learned
This incident demonstrates how seemingly neutral algorithmic design choices can perpetuate systemic inequities when historical data reflects discriminatory practices. Healthcare algorithms using financial proxies for medical need can systematically disadvantage marginalized populations, highlighting the critical importance of bias auditing in medical AI systems.
Sources
Dissecting racial bias in an algorithm used to manage the health of populations
Science · Oct 25, 2019 · academic paper
Racial bias in a medical algorithm favors white patients over sicker black patients
The Washington Post · Oct 24, 2019 · news
U.S. health insurer algorithms biased against blacks - study
Reuters · Oct 24, 2019 · news