Chicago Strategic Subject List Algorithm Disproportionately Targeted Black Neighborhoods
High
Chicago's predictive policing algorithm disproportionately targeted Black and Latino residents for police attention, making them more likely to be arrested in connection with shootings while failing to reduce overall violence.
Category
Bias
Industry
Government
Status
Resolved
Date Occurred
Jan 1, 2013
Date Reported
Aug 8, 2016
Jurisdiction
US
AI Provider
Other/Unknown
Model
Strategic Subject List (SSL)
Application Type
Other
Harm Type
Physical, Social
People Affected
398,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Settled
Tags
predictive_policing, algorithmic_bias, racial_discrimination, chicago, criminal_justice, police_targeting
Full Description
The Chicago Police Department implemented the Strategic Subject List (SSL) algorithm in 2013 as part of its predictive policing initiative. Developed by Illinois Institute of Technology researcher Miles Wernick, the algorithm assigned risk scores from 1 to 500 to individuals based on their arrest history, associates, and other factors to predict their likelihood of being involved in violent crime as either perpetrator or victim.
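As an illustration only, here is a minimal sketch of how a point-based risk score of this kind might be computed. The input fields, weights, and clamping below are assumptions for exposition; the SSL's actual variables and coefficients were never fully disclosed.

from dataclasses import dataclass

# Hypothetical inputs loosely mirroring the factors described above
# (arrest history, victimization, age); not the actual SSL feature set.
@dataclass
class SubjectRecord:
    age: int
    violent_arrests: int
    weapons_arrests: int
    narcotics_arrests: int
    shooting_victimizations: int

def risk_score(r: SubjectRecord) -> int:
    # Illustrative weights; the real model's coefficients are unknown.
    raw = (
        25 * r.violent_arrests
        + 20 * r.weapons_arrests
        + 10 * r.narcotics_arrests
        + 30 * r.shooting_victimizations
        + max(0, 30 - r.age)  # youth raises the score
    )
    # Clamp to the 1-500 range the SSL reportedly used.
    return max(1, min(500, raw))

Note that even a sketch like this contains no explicit race variable; the disparities documented below entered through the arrest data the score consumes, not through the formula itself.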
The RAND Corporation's 2016 evaluation revealed severe racial disparities in the algorithm's targeting. While Black residents comprised 33% of Chicago's population, they represented 56% of those on the SSL. Latino residents, making up 29% of the city's population, accounted for 23% of the list. The algorithm essentially codified and amplified existing racial biases in Chicago's policing data, creating a technological justification for disproportionate surveillance and intervention.
More troubling, RAND found that individuals on the SSL were significantly more likely to be arrested in connection with a shooting, while the algorithm showed no measurable impact on gun violence or overall crime rates. The study analyzed data from 398,000 individuals and concluded that the tool failed its primary stated objective of violence prevention while creating substantial harm through discriminatory targeting.
Civil rights organizations, including the ACLU of Illinois, filed multiple lawsuits challenging the SSL as a violation of constitutional equal protection rights. They argued the algorithm created a digital stop-and-frisk program that systematically violated Fourth and Fourteenth Amendment protections. Community advocates documented cases where individuals were repeatedly stopped and harassed by police based solely on their SSL scores.
The controversy intensified as journalists and researchers revealed the algorithm's lack of transparency and accountability mechanisms. Police officers received SSL scores without context about the methodology or error rates, leading to decisions based on algorithmic recommendations that subjects could not challenge or understand. The Chicago Police Department initially defended the program but faced mounting evidence of its discriminatory impact and ineffectiveness.
Facing sustained legal challenges, community pressure, and academic criticism, the Chicago Police Department officially discontinued the SSL program in 2019. The incident became a landmark case study in algorithmic bias in criminal justice systems and influenced policy discussions about AI governance in law enforcement nationwide.
Root Cause
The algorithm was trained on historical crime data that reflected existing racial disparities in policing, creating a feedback loop that perpetuated and amplified discriminatory targeting practices without accounting for systemic bias in the training data.
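A toy simulation of that feedback loop, under stated assumptions: two areas with identical underlying offense rates, where patrols are allocated by past arrest counts and new arrests are recorded where patrols concentrate. All numbers are invented for illustration.

# Two areas, A and B, with the same true offense rate; A starts with more
# recorded arrests only because it was historically patrolled more heavily.
TRUE_RATE = 0.05
POPULATION = 10_000
arrests = {"A": 120.0, "B": 100.0}

for year in range(5):
    # Patrols follow past arrests slightly superlinearly: high-scoring
    # areas draw a disproportionate share of police attention.
    weights = {area: count ** 1.5 for area, count in arrests.items()}
    total = sum(weights.values())
    for area in arrests:
        patrol_share = weights[area] / total
        # Recorded arrests reflect patrol presence as much as offending,
        # so each year's disparity feeds the next year's "training data".
        arrests[area] += POPULATION * TRUE_RATE * (2 * patrol_share)
    print(year, {area: round(count) for area, count in arrests.items()})

Because the bias lives in the data-generation process rather than the model, even a statistically accurate model retrained on each year's arrests would reproduce and widen the gap.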
Mitigation Analysis
Bias testing during development, demographic parity constraints, and algorithmic auditing could have identified disparate impact. Independent oversight committees and regular algorithmic impact assessments would have provided accountability. Human review protocols requiring justification beyond algorithmic scores could have reduced automated discrimination.
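A minimal sketch of the disparate-impact audit this analysis describes, assuming records with a demographic field and a list-membership flag; the field names and the parity-ratio output are illustrative assumptions, not a procedure from the RAND study.

from collections import Counter

def parity_ratios(records, group_key="race", flag_key="on_list"):
    # Compare each group's share of the flagged list to its share of the
    # population; a ratio far above 1.0 signals disparate impact.
    pop = Counter(rec[group_key] for rec in records)
    flagged = Counter(rec[group_key] for rec in records if rec[flag_key])
    total_pop = sum(pop.values())
    total_flagged = sum(flagged.values())
    ratios = {}
    for group, n in pop.items():
        pop_share = n / total_pop
        list_share = (flagged.get(group, 0) / total_flagged) if total_flagged else 0.0
        ratios[group] = round(list_share / pop_share, 2)
    return ratios

Applied to the shares reported above, 56% of the list against 33% of the city's population works out to a ratio of about 1.7, exactly the kind of red flag a pre-deployment audit is designed to surface.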
Litigation Outcome
Multiple civil rights lawsuits were filed. The city eventually discontinued the SSL program in 2019 following sustained criticism and legal challenges.
Lessons Learned
Predictive policing algorithms can perpetuate and amplify existing racial biases in criminal justice data, demonstrating the critical need for bias testing and algorithmic auditing in high-stakes government applications.
Sources
Evaluating the Effectiveness of Chicago's Strategic Subject List
RAND Corporation · Aug 8, 2016 · academic paper
Chicago police 'heat list' proves unreliable
Chicago Tribune · Aug 8, 2016 · news