
Cigna AI System PXDX Denies 300,000 Health Insurance Claims in Mass Batch Processing

Critical

Cigna used its AI system PXDX to automatically deny more than 300,000 health insurance claims over a two-month period, with physicians spending an average of only 1.2 seconds per review. A class action lawsuit was filed alleging systematic denial of legitimate medical claims.

Category
Medical Error
Industry
Healthcare
Status
Litigation Pending
Date Occurred
Mar 1, 2022
Date Reported
Mar 25, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Model
PXDX
Application Type
API Integration
Harm Type
Financial
Estimated Cost
$50,000,000
People Affected
300,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Tags
health_insurance, claims_processing, medical_ai, patient_harm, mass_denials, erisa_violation, healthcare_access

Full Description

In March and April 2022, Cigna Healthcare deployed an artificial intelligence system called PXDX to process medical insurance claims at unprecedented scale. According to ProPublica's investigation, published in March 2023, the system rejected over 300,000 claims during that two-month period, with the accompanying physician reviews averaging just 1.2 seconds per claim. The investigation revealed that physicians were expected to review and approve dozens of claim denials per minute, making meaningful medical assessment impossible.

The PXDX system was designed to identify claims that could be denied based on administrative or medical-necessity criteria. However, the investigation found that the system's primary function appeared to be maximizing denials rather than supporting thorough medical review. Internal documents and whistleblower accounts suggested that physicians were under pressure to maintain high denial rates and to process claims at speeds incompatible with proper medical evaluation. Many of the denied claims were for routine medical procedures, diagnostic tests, and treatments that would typically be considered medically necessary.

The impact on patients was severe and immediate. Thousands of patients faced unexpected out-of-pocket expenses for care they had reasonably expected their insurance to cover. Many were forced to delay or forgo necessary treatment due to financial constraints, and some reported having to choose between paying for medications and other essential expenses. The denials affected a wide range of medical services, including cancer treatments, mental health services, physical therapy, and diagnostic imaging.

A class action lawsuit was filed against Cigna in federal court, alleging that the company used artificial intelligence to systematically deny legitimate claims in violation of ERISA (the Employee Retirement Income Security Act) and state insurance regulations. The suit claims that Cigna prioritized cost savings over patient care and used AI as a tool to implement blanket denials without proper medical justification. Plaintiffs argue that the 1.2-second review times prove that no meaningful medical evaluation took place, violating both industry standards and legal requirements for claims processing.

The incident has raised broader questions about the use of artificial intelligence in healthcare decision-making and insurance claims processing. Healthcare advocacy groups have called for increased regulation of AI systems used in medical contexts, arguing that current oversight is insufficient to protect patient rights. The case has also highlighted the need for transparency in AI-driven healthcare decisions and for maintaining human oversight in medical determinations that directly affect patient care and access to treatment.

Root Cause

Cigna deployed an AI system called PXDX that flagged claims for automatic rejection without meaningful physician review; the reviewing doctors spent an average of only 1.2 seconds per claim. The system appears to have been designed to maximize claim denials rather than to support appropriate medical review.

Mitigation Analysis

Meaningful physician review with sufficient time allocation (a minimum of 5-10 minutes per complex claim) could have prevented the mass denials. Clinical audit trails, statistical monitoring for unusual denial patterns, and a requirement for detailed medical justification for each denial would have exposed the systematic nature of these rejections. Regular external audits of the AI decision-making process and analysis of appeals data would have revealed the problematic patterns earlier, as sketched below.
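
As one concrete illustration of the statistical monitoring described above, the following is a minimal sketch in Python. The ReviewEvent structure, field names, and thresholds are illustrative assumptions, not Cigna's actual systems or data: it flags reviewers whose average review time or denial rate suggests no meaningful evaluation is taking place.

```python
"""Hypothetical review-log monitor. All field names and thresholds
are illustrative assumptions, not based on any insurer's real systems."""
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class ReviewEvent:
    reviewer_id: str
    claim_id: str
    review_seconds: float  # wall-clock time the reviewer spent on the claim
    denied: bool

# Assumed audit thresholds: a mean review time under 60 seconds, or a
# denial rate above 95%, is treated as a signal for human investigation.
MIN_REVIEW_SECONDS = 60.0
MAX_DENIAL_RATE = 0.95


def flag_reviewers(events: list[ReviewEvent]) -> dict[str, list[str]]:
    """Return a mapping of reviewer_id -> list of audit flags."""
    by_reviewer: dict[str, list[ReviewEvent]] = defaultdict(list)
    for event in events:
        by_reviewer[event.reviewer_id].append(event)

    flags: dict[str, list[str]] = defaultdict(list)
    for reviewer, evts in by_reviewer.items():
        mean_secs = sum(e.review_seconds for e in evts) / len(evts)
        denial_rate = sum(e.denied for e in evts) / len(evts)
        if mean_secs < MIN_REVIEW_SECONDS:
            flags[reviewer].append(
                f"mean review time {mean_secs:.1f}s below "
                f"{MIN_REVIEW_SECONDS:.0f}s floor"
            )
        if denial_rate > MAX_DENIAL_RATE:
            flags[reviewer].append(
                f"denial rate {denial_rate:.0%} above "
                f"{MAX_DENIAL_RATE:.0%} ceiling"
            )
    return dict(flags)


if __name__ == "__main__":
    # 50 claims, each denied after 1.2 seconds -- the pattern reported
    # in the incident -- triggers both flags for this reviewer.
    events = [ReviewEvent("dr_a", f"claim_{i}", 1.2, True) for i in range(50)]
    for reviewer, reasons in flag_reviewers(events).items():
        print(reviewer, reasons)
```

In practice, such signals would feed a human audit process and appeals-data analysis rather than any automated action, and the thresholds would be set clinically per claim type.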

Lessons Learned

This incident demonstrates the critical need for meaningful human oversight of AI systems that make healthcare decisions. It highlights how AI can be misused to systematically deny legitimate claims at scale, and underscores the importance of regulatory frameworks that ensure AI serves patient welfare rather than cost reduction alone.