
ShotSpotter AI Gunshot Detection System Linked to Wrongful Police Raids and Racial Disparities

Severity
High

ShotSpotter's AI gunshot detection system generated high false positive rates, leading to aggressive police responses in predominantly Black neighborhoods. Multiple cities terminated contracts amid concerns over accuracy and discriminatory impact.

Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2019
Date Reported
Aug 1, 2021
Jurisdiction
US
AI Provider
Other/Unknown
Model
ShotSpotter acoustic detection system
Application Type
Other
Harm Type
Legal
Estimated Cost
$50,000,000
People Affected
10,000
Human Review in Place
Yes
Litigation Filed
Yes
Litigation Status
Pending
Tags
police, surveillance, acoustic_detection, false_positives, racial_bias, community_policing, civil_rights

Full Description

ShotSpotter Technologies deployed acoustic gunshot detection systems across more than 130 cities nationwide, using networks of microphones and machine learning algorithms to identify and locate potential gunshots. The company marketed the system as enabling rapid police response, with alerts typically issued within 60 seconds of detection. Extensive analysis, however, revealed significant accuracy problems and discriminatory deployment patterns that raised serious civil rights concerns.

Investigative reporting and academic studies found that ShotSpotter generated false positive rates ranging from 86% to 97% across jurisdictions, with the vast majority of alerts failing to yield evidence of actual gunfire. In Chicago alone, between 2019 and 2021, police found evidence of gunfire in fewer than 10% of more than 40,000 ShotSpotter alerts. The system also misclassified other loud noises, including fireworks, construction sounds, and vehicle backfires, as gunshots.

Deployment showed clear racial and economic disparities: sensors were predominantly installed in Black and Latino neighborhoods rather than distributed evenly across cities. This geographic concentration meant that residents of those communities experienced disproportionately high rates of police responses triggered by false alerts. Officers responding to ShotSpotter alerts often arrived expecting an active shooter, leading to more aggressive encounters with civilians and a heightened risk of escalation.

Chicago became a focal point for criticism when the system was implicated in several high-profile incidents, including the death of 13-year-old Adam Toledo, where ShotSpotter alerts contributed to the police pursuit that ended in the fatal shooting. Audio evidence later suggested the system may have detected the actual gunshot that killed Toledo rather than a preceding shot, as initially claimed. Following sustained community pressure and a mayoral campaign promise, Chicago terminated its $33 million ShotSpotter contract in 2024, joining cities such as Dayton, Ohio, and Charlotte, North Carolina, that had already dropped the system.

The controversy highlighted broader concerns about algorithmic bias in policing technology and the lack of independent validation for AI systems used in law enforcement. Critics argued that ShotSpotter's business model created perverse incentives to maintain high alert volumes regardless of accuracy, while the company's ability to manually reclassify alerts raised questions about potential manipulation of evidence in criminal cases.
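
To make the reported accuracy numbers concrete, here is a minimal sketch of the underlying arithmetic, assuming round counts of the same order as the Chicago figures above (the exact values vary by study and reporting period):

```python
# Minimal sketch of the arithmetic behind the Chicago figures cited above.
# The counts are round-number assumptions of the same order as those
# reported, not exact values from any one study.

total_alerts = 40_000      # ShotSpotter alerts in Chicago, 2019-2021 (approximate)
confirmed_alerts = 4_000   # alerts where police found evidence of gunfire (<10%)

confirmation_rate = confirmed_alerts / total_alerts
unconfirmed_rate = 1 - confirmation_rate

print(f"Confirmation rate:      {confirmation_rate:.1%}")  # 10.0%
print(f"Unconfirmed-alert rate: {unconfirmed_rate:.1%}")   # 90.0%

# Caveat: "unconfirmed" is not strictly the same as "false positive";
# officers may fail to find evidence even when real gunfire occurred,
# so the two figures are not interchangeable.
```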

Root Cause

The acoustic detection algorithm produced high false positive rates, reportedly ranging from 86% to 97% in some jurisdictions. The system was deployed predominantly in Black and Latino neighborhoods, amplifying existing policing disparities. Human operators could also manually reclassify alerts, potentially introducing additional bias.
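
A minimal sketch of the kind of deployment audit this root cause implies, using invented population and coverage numbers purely for illustration; a real audit would overlay census tract data on actual sensor coverage maps:

```python
# Hypothetical sketch of a deployment-pattern audit: what share of each
# demographic group lives inside sensor coverage? All counts below are
# invented for illustration only.

city_population = {"Black": 800_000, "Latino": 780_000, "White": 900_000, "Other": 220_000}
covered_population = {"Black": 520_000, "Latino": 390_000, "White": 90_000, "Other": 30_000}

for group, total in city_population.items():
    share = covered_population[group] / total
    print(f"{group:<6} {share:6.0%} of residents live in sensor-covered areas")
```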

Mitigation Analysis

Validating the algorithm against diverse acoustic environments could reduce false positives. Independent auditing of deployment patterns would address geographic bias. Requiring corroborating evidence before a police response could prevent overreaction to alerts. Regular accuracy reporting and community oversight would improve accountability and system performance.
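
As one way to picture the corroboration requirement, here is a hypothetical alert-gating sketch; the field names, three-sensor rule, and 0.9 threshold are assumptions for illustration, not ShotSpotter's actual interface or policy:

```python
# Hypothetical sketch of the "corroborating evidence" mitigation: gate the
# dispatch decision on both classifier confidence and an independent signal.

from dataclasses import dataclass

@dataclass
class Alert:
    confidence: float        # classifier score for "gunshot", 0.0-1.0
    sensors_triggered: int   # microphones that detected the impulse
    matching_911_call: bool  # independent report of shots in the same area

def should_dispatch(alert: Alert, min_confidence: float = 0.9) -> bool:
    corroborated = alert.sensors_triggered >= 3 or alert.matching_911_call
    return alert.confidence >= min_confidence and corroborated

print(should_dispatch(Alert(0.95, 4, False)))  # True: confident and multi-sensor
print(should_dispatch(Alert(0.95, 1, False)))  # False: no corroboration
print(should_dispatch(Alert(0.60, 5, True)))   # False: low classifier confidence
```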

Litigation Outcome

Multiple civil rights lawsuits filed against cities and ShotSpotter, alleging discriminatory deployment and false alerts that led to constitutional violations

Lessons Learned

The ShotSpotter controversy demonstrates the risks of deploying AI systems in high-stakes law enforcement applications without rigorous accuracy validation and bias testing. Geographic deployment patterns can amplify existing social disparities, requiring careful oversight of how AI tools are distributed across communities.