
Cigna AI System PXDX Rejected 300,000 Health Insurance Claims in Two Months

Critical

Cigna used its AI system PXDX to reject over 300,000 health insurance claims in two months, with reviewing doctors spending an average of only 1.2 seconds per claim. A class action lawsuit alleges violations of state laws requiring meaningful medical evaluation.

Category
algorithmic_bias
Industry
Healthcare
Status
Litigation Pending
Date Occurred
Mar 1, 2022
Date Reported
Mar 25, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Model
PXDX
Application Type
other
Harm Type
financial
Estimated Cost
$50,000,000
People Affected
300,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
health insurance, claims denial, medical AI, healthcare access, regulatory compliance, class action

Full Description

In March and April 2022, Cigna Healthcare deployed an AI system called PXDX (PxDx Medical Management System) to review and reject health insurance claims at unprecedented scale and speed. According to internal company data obtained by ProPublica, the system flagged over 300,000 claims for denial during this two-month period, with the company's physicians spending an average of only 1.2 seconds reviewing each claim before approving the AI's denial recommendation.

The PXDX system was designed to identify claims that could be denied based on various criteria, but the investigation revealed that physicians were essentially rubber-stamping the AI's decisions rather than conducting meaningful medical reviews. Internal Cigna documents showed that some physicians approved denials for hundreds of claims per hour, making it impossible to conduct the thorough medical evaluation required by state insurance laws and professional standards.

The rapid-fire denials affected patients across multiple states who had submitted claims for medical services including diagnostic tests, procedures, and treatments. Many patients were forced to pay out of pocket for covered services or to navigate lengthy appeals processes to overturn the denials. The scale and speed of the denials were unprecedented in the health insurance industry, raising serious questions about whether meaningful medical review was occurring.

Following ProPublica's investigation, published in March 2023, multiple class action lawsuits were filed against Cigna alleging that the company violated state laws requiring physicians to conduct meaningful medical reviews before denying claims. The lawsuits argue that the 1.2-second average review time demonstrates that no genuine medical evaluation was taking place, and that Cigna effectively delegated medical decisions to an algorithm in violation of insurance regulations.
The incident highlighted broader concerns about the use of AI in healthcare decision-making and the adequacy of human oversight in automated systems that directly impact patient care and access to medical services. Industry experts noted that while AI can assist in claims processing, the speed and scale of Cigna's denials suggested an over-reliance on automated decisions without proper medical supervision.

Root Cause

Cigna deployed an algorithmic system that automatically flagged claims for denial with physicians spending an average of only 1.2 seconds per claim review, effectively rubber-stamping AI decisions rather than conducting meaningful medical evaluation as required by law.

Mitigation Analysis

Meaningful human oversight with adequate review time per claim could have prevented this systematic denial pattern. Enhanced audit trails tracking actual physician review time, randomized quality control sampling of AI recommendations, and regulatory compliance monitoring requiring documented medical justification for denials would have identified the inadequate review process.
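The controls described above, a review-time audit trail and randomized secondary review of AI denial recommendations, can be sketched in code. This is a minimal illustration, not any real claims-processing or compliance system: the `ClaimReview` record, the 120-second minimum, the 5% sample rate, and all function names are hypothetical assumptions chosen for the example.

```python
import random
from dataclasses import dataclass

# Hypothetical thresholds -- illustrative only, not drawn from any
# actual regulation or insurer policy.
MIN_REVIEW_SECONDS = 120      # assumed floor for a meaningful medical review
AUDIT_SAMPLE_RATE = 0.05      # fraction of denials sent for secondary review

@dataclass
class ClaimReview:
    claim_id: str
    ai_recommendation: str     # "deny" or "approve"
    physician_decision: str
    review_seconds: float      # time the physician actually spent

def flag_inadequate_reviews(reviews):
    """Return reviews whose duration falls below the assumed minimum,
    i.e. likely rubber-stamps rather than genuine evaluations."""
    return [r for r in reviews if r.review_seconds < MIN_REVIEW_SECONDS]

def sample_for_audit(reviews, rate=AUDIT_SAMPLE_RATE, seed=0):
    """Randomly select denied claims for independent secondary review."""
    denials = [r for r in reviews if r.physician_decision == "deny"]
    if not denials:
        return []
    rng = random.Random(seed)
    k = max(1, int(len(denials) * rate))  # always audit at least one denial
    return rng.sample(denials, k)

reviews = [
    ClaimReview("C1", "deny", "deny", 1.2),
    ClaimReview("C2", "deny", "deny", 1.1),
    ClaimReview("C3", "deny", "approve", 300.0),
]
flagged = flag_inadequate_reviews(reviews)
print(len(flagged))  # 2 -- the two 1.2-second-scale rubber-stamp reviews
```

An audit trail of this kind would have surfaced the pattern at issue immediately: at 1.2 seconds per claim, essentially every review in the dataset falls below any plausible minimum-duration threshold.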

Litigation Outcome

Class action lawsuit filed alleging Cigna violated state laws requiring meaningful medical review of claim denials

Lessons Learned

The incident demonstrates the critical importance of meaningful human oversight in AI systems that make consequential decisions about healthcare access. It highlights the need for regulatory frameworks that ensure AI augments rather than replaces professional medical judgment in insurance determinations.