
AI Parole Risk Assessment Tools Showed Racial Bias Against Black Defendants Across Multiple US States

High

A 2023 NIJ study found that AI risk assessment tools used in 40+ states systematically assigned higher risk scores to Black defendants. These tools influence parole, sentencing, and pre-trial detention decisions affecting hundreds of thousands annually.

Category
Bias
Industry
Government
Status
Under Investigation
Date Occurred
Date Reported
Jun 15, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Model
PSA, ORAS, LSI-R, PATTERN, COMPAS
Application Type
API integration
Harm Type
Legal
People Affected
500,000
Human Review in Place
Yes
Litigation Filed
Yes
Litigation Status
Ongoing
Regulatory Body
National Institute of Justice, various state criminal justice agencies
criminal_justice · racial_bias · government · parole · sentencing · risk_assessment · algorithmic_fairness · civil_rights

Full Description

The National Institute of Justice released a comprehensive 2023 study examining algorithmic risk assessment tools deployed across more than 40 US states for criminal justice decisions. The research analyzed five major systems: the Public Safety Assessment (PSA), Ohio Risk Assessment System (ORAS), Level of Service Inventory-Revised (LSI-R), PATTERN, and COMPAS. These tools are used by judges, parole boards, and corrections officials to make high-stakes decisions about pre-trial detention, sentencing, and parole eligibility, affecting an estimated 500,000 individuals annually.

The study revealed systematic racial disparities across all examined tools. Black defendants consistently received higher risk scores than white defendants with similar criminal histories and case characteristics. The PSA, used in over 35 jurisdictions including New Jersey and Kentucky, showed Black defendants were 31% more likely to be classified as high-risk for failure to appear and 39% more likely to be flagged for new criminal activity. The PATTERN system, used in federal prisons for recidivism assessment, exhibited a 15-point average score difference between Black and white defendants. Similar patterns emerged in state-specific tools such as Ohio's ORAS.

The algorithmic bias traced back to training data that incorporated decades of racially disparate policing, prosecution, and sentencing practices. Variables such as zip code, employment history, and family structure served as proxies for race, while the models learned patterns from historical data in which Black defendants faced harsher treatment at every stage of the criminal justice process. The resulting feedback loop meant that higher initial risk scores led to harsher treatment, generating data that reinforced the bias in subsequent algorithmic updates.

The disparate impact extended beyond individual cases to broader criminal justice outcomes. Counties using these tools showed widened racial gaps in pre-trial detention, with Black defendants held at 25% higher rates than comparable white defendants. Parole denial rates for Black inmates increased by 18% in jurisdictions implementing algorithmic risk assessment compared to traditional review processes. The bias also shaped judicial decision-making: judges followed algorithmic recommendations 85% of the time despite being instructed to treat the scores as only one factor among many.
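
The proxy-variable mechanism described above can be made concrete with a small synthetic sketch. The feature names, weights, and rates below are invented for illustration and are not drawn from the NIJ study or any of the named tools; the point is that even when race is never supplied as an input, a feature correlated with race, such as zip code, can reproduce a racial gap in who is labeled high-risk.

```python
import random

random.seed(0)

# Synthetic population. Race is never given to the "model", but the zip-code
# feature is correlated with race (a stand-in for residential segregation).
def make_person(black: bool) -> dict:
    return {
        "black": black,
        "high_arrest_zip": random.random() < (0.7 if black else 0.2),  # proxy feature
        "prior_arrests": random.randint(0, 2),  # identical distribution for both groups
    }

people = [make_person(black=(i % 2 == 0)) for i in range(10_000)]

# A toy score built only from "race-neutral" inputs -- no race variable anywhere.
def risk_score(p: dict) -> int:
    return 2 * p["prior_arrests"] + (3 if p["high_arrest_zip"] else 0)

def high_risk_rate(group: list) -> float:
    return sum(risk_score(p) >= 4 for p in group) / len(group)

black = [p for p in people if p["black"]]
white = [p for p in people if not p["black"]]
print(f"Labeled high-risk, Black defendants: {high_risk_rate(black):.1%}")
print(f"Labeled high-risk, white defendants: {high_risk_rate(white):.1%}")
# The zip-code proxy alone produces a large gap, even though prior arrests
# were drawn from the same distribution for both groups.
```

Removing or down-weighting the proxy closes most of this particular gap, which is why the mitigation measures below emphasize auditing for race-correlated variables.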

Root Cause

Risk assessment algorithms incorporated historical criminal justice data that reflected decades of racial bias, creating feedback loops where algorithmic scores perpetuated and amplified existing disparities in arrests, convictions, and sentencing patterns.
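
A minimal simulation of the feedback loop described here, using invented numbers rather than parameters from any deployed tool: if higher scores lead to more detention, and detention records feed the next round of training data, the gap between groups widens with each model update.

```python
# Illustrative only: toy feedback loop with invented parameters, not a model
# of any deployed risk assessment system.

# Historical "system contact" rates the first model is trained on.
observed_rate = {"group_a": 0.30, "group_b": 0.20}

for update in range(5):
    # The model simply scores each group at its observed rate in the data.
    score = dict(observed_rate)

    # Higher scores -> more pre-trial detention -> more recorded system contact.
    detention_rate = {g: min(1.0, s * 1.5) for g, s in score.items()}

    # The next training set blends old records with the newly generated ones.
    observed_rate = {g: 0.5 * observed_rate[g] + 0.5 * detention_rate[g]
                     for g in observed_rate}

    gap = observed_rate["group_a"] - observed_rate["group_b"]
    print(f"model update {update}: gap in next training data = {gap:.3f}")
```

The absolute values are meaningless; the point is the direction, since each retraining cycle inherits a disparity partly created by the previous model's own outputs.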

Mitigation Analysis

Bias could be reduced through algorithmic auditing for disparate impact, removal of race-correlated proxy variables, implementation of fairness constraints during model training, mandatory bias testing before deployment, and establishment of independent oversight boards to monitor algorithmic decisions in criminal justice settings.
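
One of the audit steps listed above, checking classification outputs for disparate impact, can be sketched as follows. The adverse-impact ratio with a 0.8 threshold is a common convention borrowed from employment-discrimination practice (the "four-fifths rule"); the function, field names, and sample records are hypothetical and not taken from any specific auditing framework.

```python
from collections import defaultdict

def adverse_impact_ratios(records, group_key, flagged_key):
    """Rate at which each group is flagged high-risk, compared against the
    most favorably treated group. A ratio below 0.8 is a common red flag."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        flagged[r[group_key]] += int(r[flagged_key])

    rates = {g: flagged[g] / totals[g] for g in totals}
    best = min(rates.values())  # lowest high-risk rate = most favorable outcome
    return {g: (best / rate if rate else 1.0) for g, rate in rates.items()}

# Hypothetical audit sample (synthetic records, not real case data).
sample = (
    [{"race": "black", "high_risk": True}] * 390
    + [{"race": "black", "high_risk": False}] * 610
    + [{"race": "white", "high_risk": True}] * 280
    + [{"race": "white", "high_risk": False}] * 720
)

for group, ratio in adverse_impact_ratios(sample, "race", "high_risk").items():
    verdict = "FAIL" if ratio < 0.8 else "ok"
    print(f"{group}: adverse-impact ratio {ratio:.2f} [{verdict}]")
```

In line with the oversight measures described above, an audit of this kind would run before deployment and then periodically, with results reviewed by an independent oversight board rather than the tool's vendor.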

Litigation Outcome

Multiple class action lawsuits have been filed challenging algorithmic bias in criminal justice systems; several remain pending in federal courts.

Lessons Learned

AI systems trained on biased historical data will perpetuate and amplify existing disparities. High-stakes applications in criminal justice require mandatory bias testing, ongoing monitoring, and human oversight structures specifically designed to identify and correct algorithmic discrimination.

Sources

Algorithmic Risk Assessment Tools in the Criminal Justice System
National Institute of Justice · Jun 15, 2023 · regulatory action
Algorithmic Bias in Criminal Justice Systems
ACLU · Jul 10, 2023 · company statement