AI Triage System Incorrectly Prioritized Emergency Patients at Dutch Hospital
Severity
High
An AI triage system at a Dutch hospital incorrectly classified emergency patients, sending high-acuity cases to lower-priority queues. The incident highlights the risks of automated medical decision-making without adequate human oversight.
Category
Medical Error
Industry
Healthcare
Status
Reported
Date Occurred
—
Date Reported
Mar 15, 2024
Jurisdiction
EU
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Physical
Human Review in Place
No
Litigation Filed
No
Tags
medical_ai, triage_system, emergency_department, patient_safety, healthcare_automation, misclassification
Full Description
A Dutch hospital's AI-powered emergency department triage system was found to be systematically misclassifying patient severity levels, directing patients who required urgent medical attention into lower-priority treatment queues. The automated system, designed to streamline emergency department workflow by rapidly assessing patient symptoms and vital signs, showed significant accuracy problems when processing real-world emergency cases.
The misclassification errors were discovered through routine quality assurance reviews and analysis of patient flow patterns within the emergency department. Medical staff noticed unusual delays in treatment for patients who later required intensive interventions, prompting a comprehensive audit of the AI system's performance. The investigation revealed multiple instances where patients presenting with serious conditions were assigned lower triage scores than clinically appropriate.
The AI system's failures appeared to stem from training data that inadequately represented the full spectrum of emergency presentations, particularly edge cases and atypical symptom presentations common in real emergency departments. The algorithm struggled with complex cases involving multiple comorbidities, elderly patients with non-specific symptoms, and presentations that deviated from textbook patterns. Additionally, the system lacked robust validation testing across diverse patient demographics typical of urban emergency departments.
The incident prompted immediate intervention by hospital administrators, who implemented enhanced human oversight protocols and temporarily suspended full automation of the triage process. Medical staff were instructed to manually verify all AI-generated triage recommendations, particularly for patients assigned to lower acuity categories. The hospital initiated a comprehensive review of the AI system's training data, algorithmic decision-making processes, and validation procedures to identify and address the root causes of the misclassification errors.
Root Cause
The AI triage system failed to properly classify patient severity levels, likely due to training data limitations, algorithmic bias, or insufficient validation on diverse patient populations typical of emergency department presentations.
Mitigation Analysis
Mandatory physician oversight of all AI triage recommendations, continuous monitoring of triage accuracy across patient demographics, and regular retraining on diverse emergency department data could have prevented the misclassifications. Real-time alerting on unusual triage patterns and dual verification of high-risk cases would provide additional safeguards.
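The demographic monitoring and alerting described above can be sketched in code. This is a minimal illustration only, assuming an ESI-style acuity scale (lower number = more urgent); the class name, the 10% alert threshold, and the group labels are hypothetical assumptions, not details of the hospital's actual system.

```python
from collections import defaultdict

# Hypothetical alert threshold: flag a group if more than 10% of its cases
# are under-triaged (AI rated the patient less urgent than the clinician did).
UNDER_TRIAGE_ALERT_RATE = 0.10

class TriageMonitor:
    """Illustrative per-demographic monitor for AI triage accuracy."""

    def __init__(self, alert_rate=UNDER_TRIAGE_ALERT_RATE):
        self.alert_rate = alert_rate
        # Per-group counters: total cases seen, and under-triaged cases.
        self.counts = defaultdict(lambda: {"total": 0, "under": 0})

    def record(self, group, ai_acuity, clinician_acuity):
        """Record one case; acuity uses an ESI-style scale (1 = most urgent)."""
        stats = self.counts[group]
        stats["total"] += 1
        if ai_acuity > clinician_acuity:  # AI scored the patient as less urgent
            stats["under"] += 1

    def alerts(self, min_cases=20):
        """Return (group, rate) pairs whose under-triage rate exceeds the threshold."""
        flagged = []
        for group, s in self.counts.items():
            if s["total"] >= min_cases and s["under"] / s["total"] > self.alert_rate:
                flagged.append((group, s["under"] / s["total"]))
        return flagged
```

A monitor like this surfaces exactly the failure mode in this incident: a subgroup (for example, elderly patients with non-specific symptoms) whose under-triage rate quietly drifts above the rest of the population would trigger review before treatment delays accumulate.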
Lessons Learned
The incident demonstrates the critical importance of comprehensive validation testing for medical AI systems across diverse patient populations and clinical scenarios. It highlights the need for continuous human oversight in high-stakes medical decision-making and the risks of over-reliance on automated systems in emergency care settings.
Sources
AI Triage Systems in Emergency Medicine: Performance Analysis and Safety Concerns
Nature Digital Medicine · Mar 15, 2024 · academic paper
Automated Triage Failures Raise Questions About AI Safety in European Hospitals
BMJ · Mar 18, 2024 · news