
AI Risk Assessment Tools Exhibited Racial Bias in Prison Parole Decisions

Severity
High

AI risk assessment tools like COMPAS and PATTERN used across US prison systems exhibited racial bias, incorrectly flagging Black defendants as high-risk at nearly twice the rate of white defendants and keeping low-risk prisoners incarcerated longer.

Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2016
Date Reported
May 23, 2016
Jurisdiction
US
AI Provider
Other/Unknown
Model
COMPAS, PATTERN, LSI-R
Application Type
other
Harm Type
legal
People Affected
100,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
ongoing
Regulatory Body
Various state legislatures and corrections departments
Tags
criminal_justice, racial_bias, government_ai, recidivism_prediction, parole_decisions, algorithmic_fairness, civil_rights

Full Description

Beginning in 2016, investigative reporting by ProPublica revealed that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by courts and corrections departments across the United States, exhibited significant racial bias in predicting recidivism risk. ProPublica's analysis of more than 7,000 people arrested in Broward County, Florida found that Black defendants were almost twice as likely as white defendants to be incorrectly flagged as future criminals (a 45% vs. 23% false positive rate), while white defendants were more likely to be incorrectly labeled low risk (a 48% vs. 28% false negative rate).

COMPAS, developed by Northpointe (later Equivant), was used by correctional agencies in multiple states, including Florida, Wisconsin, California, and New York, to inform parole decisions, sentencing recommendations, and custody classifications. Similar bias patterns were subsequently identified in other risk assessment tools, including the Pennsylvania Risk Assessment Instrument (PRAI), the Level of Service Inventory-Revised (LSI-R), and the federal PATTERN (Prisoner Assessment Tool Targeting Estimated Risk and Needs) system implemented by the Bureau of Prisons in 2019.

Research by the Brennan Center for Justice and the ACLU found that these algorithmic tools systematically overestimated recidivism risk for Black defendants while underestimating it for white defendants. The algorithms incorporated factors that served as proxies for race, including neighborhood characteristics, employment history, and family background, all areas where historical discrimination created disparate impacts. A 2018 study by researchers at Dartmouth College found that even when controlling for criminal history, Black defendants received higher risk scores than white defendants.

The widespread deployment of these tools affected hundreds of thousands of incarcerated individuals across the United States; in Wisconsin alone, COMPAS influenced decisions for over 15,000 defendants annually. Because higher scores led to longer periods of incarceration, Black prisoners were held longer than white prisoners who received lower scores despite posing comparable actual risk. This perpetuated and amplified existing racial disparities in the criminal justice system, with Black individuals serving disproportionately longer sentences for equivalent crimes and risk profiles.
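The disparity ProPublica measured is a gap in group-conditional error rates. Below is a minimal sketch of that kind of audit, assuming a hypothetical dataset with race, high_risk, and reoffended columns; the column names and helper function are illustrative, not ProPublica's actual schema or code.

```python
# Sketch of an error-rate audit: false positive and false negative rates
# per racial group. Assumes 0/1 columns "high_risk" (tool's prediction)
# and "reoffended" (observed outcome); names are illustrative.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    rows = []
    for group, g in df.groupby(group_col):
        did_not_reoffend = g[g["reoffended"] == 0]
        reoffended = g[g["reoffended"] == 1]
        rows.append({
            group_col: group,
            # share of non-reoffenders incorrectly flagged high-risk
            "false_positive_rate": did_not_reoffend["high_risk"].mean(),
            # share of reoffenders incorrectly flagged low-risk
            "false_negative_rate": 1 - reoffended["high_risk"].mean(),
        })
    return pd.DataFrame(rows)

# On Broward County-style data, ProPublica's figures would surface here as
# FPR ~0.45 for Black defendants vs ~0.23 for white defendants, and
# FNR ~0.48 for white defendants vs ~0.28 for Black defendants.
```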

Root Cause

Machine learning algorithms trained on historical criminal justice data perpetuated existing racial disparities, as the training data reflected decades of biased policing and sentencing practices that disproportionately affected Black defendants.

Mitigation Analysis

Bias testing and algorithmic auditing could have identified disparate impact before deployment. Regular fairness evaluations across demographic groups, human oversight requirements for high-stakes decisions, and transparency in algorithmic factors would have reduced harm. Proactive bias detection systems and demographic parity constraints in model training could have prevented discriminatory outcomes.
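As a concrete illustration of the pre-deployment check described above, the sketch below flags a model whose false positive or false negative rates diverge across demographic groups beyond a chosen tolerance (an equalized-odds-style test). The GroupRates type, fails_fairness_audit function, and five-point tolerance are illustrative assumptions, not a published standard.

```python
# Hedged sketch of a pre-deployment fairness gate: reject deployment when
# error-rate gaps between demographic groups exceed a tolerance.
from dataclasses import dataclass

@dataclass
class GroupRates:
    group: str
    false_positive_rate: float
    false_negative_rate: float

def fails_fairness_audit(rates: list[GroupRates], max_gap: float = 0.05) -> bool:
    """True if the FPR or FNR spread across groups exceeds max_gap."""
    fprs = [r.false_positive_rate for r in rates]
    fnrs = [r.false_negative_rate for r in rates]
    return (max(fprs) - min(fprs) > max_gap) or (max(fnrs) - min(fnrs) > max_gap)

# The Broward County numbers would fail such a gate loudly:
audit = [GroupRates("Black", 0.45, 0.28), GroupRates("white", 0.23, 0.48)]
assert fails_fairness_audit(audit)  # 22-point FPR gap far exceeds 5-point tolerance
```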

Litigation Outcome

Multiple lawsuits have been filed challenging the use of biased algorithms in criminal justice decisions, some resulting in policy changes and restrictions on algorithm use.

Lessons Learned

The incident demonstrates that AI systems can perpetuate and amplify existing societal biases when trained on biased historical data. It highlights the critical need for algorithmic auditing and bias testing before deploying AI in high-stakes government decisions that affect fundamental rights and liberties.

Sources

How Algorithmic Risk Assessment Tools Work
Brennan Center for Justice · Jun 11, 2019 · academic paper
Trapped in a Black Box: Growing Use of Algorithmic Criminal Justice Tools
American Civil Liberties Union · Feb 1, 2020 · academic paper