PredPol Predictive Policing Algorithm Reinforced Racial Bias in LAPD Deployment
High
LAPD's use of PredPol predictive policing software from 2011 to 2019 created feedback loops that disproportionately targeted Black and Latino neighborhoods, with multiple academic studies documenting systematic bias before the department ended the program.
Category
Bias
Industry
Government
Status
Resolved
Date Occurred
Jan 1, 2011
Date Reported
Oct 1, 2019
Jurisdiction
US
AI Provider
Other/Unknown
Model
PredPol Algorithm
Application Type
API integration
Harm Type
legal
People Affected
500,000
Human Review in Place
No
Litigation Filed
No
predictive_policing, algorithmic_bias, law_enforcement, racial_discrimination, feedback_loops, government_ai, criminal_justice
Full Description
The Los Angeles Police Department deployed PredPol's predictive policing algorithm beginning in 2011 as part of an effort to optimize patrol deployment and reduce crime through data-driven policing. The system analyzed historical crime data to predict where crimes were most likely to occur, generating daily maps with 500-foot by 500-foot boxes indicating high-risk areas for patrol focus. LAPD initially reported positive results, claiming crime reductions in areas where the technology was deployed.
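PredPol's actual model is proprietary, but the grid-based workflow described above can be illustrated with a minimal sketch: bin historical incident reports into 500-foot cells and flag the highest-count cells for patrol attention. The function names, sample data, and top-k rule below are assumptions for illustration, not PredPol's real algorithm or interface.

```python
# Illustrative sketch only: a toy grid-based hotspot scorer, not PredPol's
# proprietary model. It bins historical incident reports into 500 ft x 500 ft
# cells and flags the top-scoring cells for patrol attention.
from collections import Counter

CELL_FT = 500  # cell edge length in feet (matches the box size described above)

def to_cell(x_ft, y_ft):
    """Map a point (in feet, in a local projection) to a grid cell index."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hotspot_cells(incidents, top_k=20):
    """Count historical incidents per cell and return the top_k 'high-risk' cells.

    incidents: iterable of (x_ft, y_ft) locations of past crime reports.
    """
    counts = Counter(to_cell(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical history: whichever cells accumulated the most past reports are
# flagged, regardless of why those reports were generated in the first place.
history = [(120, 80), (130, 90), (2600, 2700), (140, 75), (2650, 2710)]
print(hotspot_cells(history, top_k=2))
```

The point that matters for the rest of this entry is that "risk" in such a scheme is purely a function of past recorded reports, which is what makes the feedback loop described next possible.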
However, academic researchers began documenting serious bias issues with the PredPol system by 2016. Studies by researchers at New York University and other institutions found that the algorithm created harmful feedback loops because it trained on historical crime data that reflected decades of discriminatory policing practices. Areas that had been subject to intensive policing in the past, particularly Black and Latino neighborhoods, were flagged by the algorithm as high-crime areas requiring increased surveillance. This led to more police presence, more stops and searches, and consequently more arrests and crime reports, which the algorithm then used as evidence that these areas were indeed high-crime zones requiring continued intensive policing.
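That feedback dynamic can be reproduced in a toy simulation, in the spirit of the runaway-feedback-loop analyses cited below: two neighborhoods with the same underlying crime rate, where one simply starts with more recorded incidents because it was historically policed more heavily. The allocation rule, rates, and starting counts here are assumptions chosen for illustration.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
# Neighborhoods A and B have the SAME underlying crime rate, but A starts with
# more recorded incidents because of heavier past enforcement.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3          # identical in both neighborhoods
reports = {"A": 60, "B": 40}   # unequal starting data from past policing
patrols_sent = {"A": 0, "B": 0}

for day in range(1000):
    # Allocation rule: send today's patrol to the neighborhood with more
    # recorded incidents (a stand-in for "predicted high-risk").
    target = max(reports, key=reports.get)
    patrols_sent[target] += 1

    # Police only observe crime where they patrol; any discovered incident
    # becomes a new report and feeds back into tomorrow's prediction.
    if random.random() < TRUE_CRIME_RATE:
        reports[target] += 1

print("patrols:", patrols_sent)   # A absorbs every patrol; B gets none
print("reports:", reports)        # A's record keeps growing; B's never changes
```

Even though both neighborhoods are equally "criminal" by construction, the data-driven allocation never revisits the less-documented one, so the disparity in the record only grows.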
The Brennan Center for Justice published comprehensive research in 2019 documenting these problems across multiple police departments using predictive policing tools. Their analysis of PredPol deployments found that the technology consistently directed more patrol resources to minority communities, even when controlling for actual crime rates. The research showed that in many cases, the algorithm's predictions were more strongly correlated with historical policing patterns than with actual public safety needs. Academic studies found that neighborhoods with higher percentages of Black and Latino residents received disproportionate algorithmic recommendations for police presence regardless of crime severity or frequency.
The bias became increasingly difficult to ignore as multiple police departments using similar predictive policing tools faced criticism. Research published in Science Advances and other journals demonstrated that these algorithms could amplify existing disparities in policing by up to 1,000%. Civil rights organizations and community activists in Los Angeles documented the impacts on affected neighborhoods, including increased stops, searches, and arrests that appeared to correlate more with algorithmic predictions than with criminal activity. The persistent criticism from researchers, civil rights groups, and community organizations eventually led LAPD to quietly discontinue its use of PredPol in 2019, though the department did not initially acknowledge the bias concerns as the primary reason for ending the program.
Root Cause
The PredPol algorithm trained on historical crime data that reflected decades of biased policing practices, creating feedback loops where increased patrol presence in minority neighborhoods generated more arrests and reports, which the algorithm interpreted as higher crime risk, perpetuating the cycle.
Mitigation Analysis
Bias testing during development could have identified discriminatory patterns before deployment. Fairness audits comparing demographic impacts across neighborhoods would have revealed the disparate effect. Regular algorithmic audits with demographic impact assessments and adjustment for historical bias in training data could have prevented the feedback loops that amplified existing inequities.
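As a sketch of what such a fairness audit could look like, one might compare algorithmic patrol recommendations per recorded crime across neighborhood groups and flag large ratios. The data, grouping, field names, and 1.25x threshold below are hypothetical; a real audit would use the department's own deployment records.

```python
# Hypothetical fairness-audit sketch: compare patrol recommendations relative
# to recorded crime across neighborhood groups. All numbers are invented.

# (group, algorithmic patrol recommendations, recorded crimes) per neighborhood
neighborhoods = [
    ("majority_Black",  320, 400),
    ("majority_Latino", 280, 380),
    ("majority_white",  110, 390),
]

def recommendations_per_crime(rows):
    """Aggregate recommendations and crimes by group, then compute the ratio."""
    totals = {}
    for group, recs, crimes in rows:
        r, c = totals.get(group, (0, 0))
        totals[group] = (r + recs, c + crimes)
    return {g: r / c for g, (r, c) in totals.items()}

rates = recommendations_per_crime(neighborhoods)
baseline = min(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "DISPARITY" if ratio > 1.25 else "ok"   # 1.25x threshold is illustrative
    print(f"{group}: {rate:.2f} recs/crime ({ratio:.2f}x baseline) {flag}")
```

An audit of this kind, run before and during deployment, would surface exactly the pattern described above: recommendations tracking historical enforcement rather than recorded crime.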
Lessons Learned
Predictive algorithms trained on biased historical data will perpetuate and amplify existing inequities unless specifically designed with fairness constraints. The incident demonstrates the critical need for algorithmic auditing in government deployments and highlights how AI bias can systematically impact marginalized communities at scale.
Sources
Predictive Policing Explained
Brennan Center for Justice · Apr 1, 2020 · academic paper
Runaway feedback loops in predictive policing
Science Advances · Feb 5, 2020 · academic paper
Predictive policing algorithms are racist. They need to be dismantled.
The Verge · Feb 6, 2020 · news