
ALERTCalifornia AI Wildfire Detection System Generated Excessive False Alarms

Severity
Medium

California's ALERTCalifornia AI wildfire detection system generated high rates of false alarms, mistaking clouds, fog, and industrial activity for fires. This diverted emergency resources and caused unnecessary public alarm.

Category
Safety Failure
Industry
Government
Status
Resolved
Date Occurred
Jan 1, 2022
Date Reported
Aug 15, 2022
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Operational
Estimated Cost
$2,500,000
Human Review in Place
Yes
Litigation Filed
No
Tags: wildfire, computer_vision, false_positives, emergency_response, california, alertcalifornia, safety_critical

Full Description

The ALERTCalifornia network, operated by UC San Diego in partnership with California utilities and fire agencies, deployed over 1,000 AI-powered cameras across the state to provide early wildfire detection. The system uses computer vision algorithms to analyze camera feeds in real time, automatically flagging potential fire signatures for human review.

However, throughout 2022 the system generated significant numbers of false positive alerts, particularly during foggy conditions and periods of industrial activity. The AI models struggled to distinguish actual wildfire smoke from visually similar phenomena: marine layer fog rolling inland, low-hanging clouds in mountainous areas, dust from construction or agricultural activity, and emissions from industrial facilities. In some regions, false alarms reached 70-80% of all AI-generated alerts during certain weather conditions.

Fire departments reported that these false alarms required dispatching personnel and equipment to investigate, diverting resources from other emergency responses. The high sensitivity settings, while designed to ensure no actual fires were missed, created operational challenges for agencies already stretched thin during California's extended fire seasons. Some departments reported responding to multiple false alarms per day during peak fog season, with crews traveling significant distances only to find clear conditions. In some cases the false alarms also triggered unnecessary public warnings, causing confusion and eroding public trust in the alert system.

In response, UC San Diego researchers worked throughout 2022 and 2023 to refine the AI algorithms, incorporating additional training data that included common false positive scenarios. They also enhanced the human verification process, requiring trained operators to confirm AI detections before issuing public alerts or dispatching resources, and improved weather data integration so the system could account for atmospheric conditions that commonly cause false readings. By late 2023, system operators reported that algorithm improvements and enhanced human oversight had reduced false alarm rates to approximately 30-40%, though this remained higher than desired.

The incident highlighted the challenge of balancing sensitivity in safety-critical AI systems: missing a real fire could have catastrophic consequences, but excessive false alarms undermine system effectiveness and waste critical resources.
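The enhanced verification workflow described above, in which a trained operator must confirm an AI detection before any public alert or dispatch, can be sketched as a simple gating function. The threshold value, routing labels, and `Detection` type below are illustrative assumptions, not ALERTCalifornia's actual configuration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    camera_id: str
    confidence: float  # model's estimated probability of smoke/fire

# Hypothetical review threshold -- not the system's actual setting.
REVIEW_THRESHOLD = 0.5

def route_detection(det: Detection,
                    operator_confirms: Callable[[Detection], bool]) -> str:
    """Gate an AI detection behind human review: low-confidence
    detections are only logged; higher-confidence ones go to a
    trained operator, who must confirm before any dispatch."""
    if det.confidence < REVIEW_THRESHOLD:
        return "logged"
    return "dispatch" if operator_confirms(det) else "false_positive"
```

Routing every above-threshold detection through `operator_confirms` is what keeps a 70-80% model false-alarm rate from translating directly into wasted dispatches.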

Root Cause

Computer vision models trained to detect smoke and fire patterns struggled to distinguish between actual wildfire smoke and similar-looking phenomena like fog, clouds, dust, and industrial emissions. The models were optimized for high sensitivity to avoid missing real fires, which increased false positive rates.
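The sensitivity/false-alarm tradeoff can be made concrete with a toy threshold sweep. The scores below are invented for illustration; they are not drawn from the actual system.

```python
def rates(fire_scores, nonfire_scores, threshold):
    """Recall (fraction of real fires caught) and false-alarm rate
    (fraction of issued alerts that are not fires) at a threshold."""
    tp = sum(s >= threshold for s in fire_scores)
    fp = sum(s >= threshold for s in nonfire_scores)
    recall = tp / len(fire_scores)
    false_alarm_rate = fp / (tp + fp) if (tp + fp) else 0.0
    return recall, false_alarm_rate

# Invented model scores: real fires vs. fog/cloud/dust scenes.
fires = [0.9, 0.8, 0.7, 0.6]
nonfires = [0.65, 0.55, 0.5, 0.45, 0.35, 0.3]

# Low threshold: perfect recall, but 3 of 7 alerts are false.
print(rates(fires, nonfires, 0.5))   # (1.0, 0.428...)
# Higher threshold: no false alarms, but half the fires are missed.
print(rates(fires, nonfires, 0.75))  # (0.5, 0.0)
```

Because the cost of a missed fire dwarfs the cost of a false alarm, operators chose a low threshold, which is exactly why false positives dominated the alert stream.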

Mitigation Analysis

Enhanced human verification protocols with trained operators reviewing AI alerts before dispatch could reduce false alarms. Multi-sensor fusion incorporating weather data, thermal imaging, and atmospheric conditions would improve accuracy. Continuous model retraining with regional data including common false positive scenarios would better calibrate sensitivity thresholds.
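A minimal sketch of the multi-sensor fusion idea: discount the camera score when weather data says fog is likely, and reinforce it when thermal imaging agrees. The weights and signal names are arbitrary illustrations, not a calibrated model.

```python
def fused_score(vision_conf: float,
                fog_probability: float,
                thermal_anomaly: bool) -> float:
    """Combine camera, weather, and thermal signals into one score.
    fog_probability is a hypothetical weather-data input (marine
    layer, low cloud); thermal_anomaly is a flag from an IR sensor."""
    # Discount vision confidence in proportion to fog likelihood.
    score = vision_conf * (1.0 - 0.5 * fog_probability)
    # A corroborating heat signature raises the score (capped at 1.0).
    if thermal_anomaly:
        score = min(1.0, score + 0.2)
    return score

# Foggy morning, no heat signature: the same camera score that
# would trigger review on a clear day is halved.
print(fused_score(0.8, 1.0, False))  # 0.4
```

Even a crude fusion like this lets the sensitivity threshold stay low for fire-plausible conditions while suppressing the fog-season alert floods described above.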

Lessons Learned

The incident demonstrates the critical importance of extensive real-world testing and continuous refinement of AI systems in safety-critical applications. It also highlights the need for hybrid human-AI workflows that can leverage AI speed and coverage while maintaining human judgment for complex edge cases.