Academic Studies Document PredPol's Racial Bias and Ineffectiveness

Importance: 8/10

Multiple academic studies and an internal police audit published in 2018 and 2019 provide comprehensive evidence of PredPol's racial bias and its failure to demonstrate effectiveness, undermining the fundamental claims that have justified the technology's widespread adoption.

The LAPD Inspector General issues a critical internal audit of the department's data-driven policing initiatives, concluding that there is insufficient data to determine whether PredPol actually helped reduce crime. This finding is particularly significant given that Los Angeles was one of PredPol's flagship clients and an early adopter of the technology. The Inspector General's inability to validate the program's effectiveness after years of implementation raises fundamental questions about the evidentiary basis for predictive policing.

A 2018 study examining a potential deployment of the algorithm in Indianapolis finds that Latino and Black communities would experience 200–400% and 150–250% greater patrol presence, respectively, than white communities. This quantifies the discriminatory impact of algorithmic policing: the technology systematically directs more intensive law enforcement to communities of color, regardless of actual crime rates.

A 2019 study by the U.K. government’s Centre for Data Ethics and Innovation identifies a psychological mechanism that amplifies bias: simply designating an area as a crime “hot spot” primes police officers to anticipate trouble while on patrol, making them more likely to arrest people there because of those preconceived expectations rather than objective necessity. This creates a self-fulfilling prophecy in which predictions become reality through changed police behavior.

Another 2019 study examines 13 police jurisdictions known to use predictive policing algorithms and to maintain corrupted historical databases due to racially biased policing practices, arrest quotas, and data manipulation. The research demonstrates that algorithmic systems trained on tainted data perpetuate and amplify existing biases, transforming historical discrimination into seemingly objective mathematical predictions.
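The feedback loop these studies describe can be made concrete with a toy simulation. The sketch below is purely illustrative: it is not drawn from any of the studies above and does not reproduce PredPol's actual model, and every number in it is an assumption chosen for demonstration. Two areas have identical underlying crime rates, but the one that starts with more recorded incidents keeps attracting more patrols, which record more incidents, which attract more patrols.

```python
# Illustrative sketch only (assumed numbers, not PredPol's algorithm):
# how biased historical records plus data-driven patrol allocation can
# reinforce themselves even when true crime rates are equal.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.3     # identical underlying offence rate in both areas (assumed)
RECORD_PER_PATROL = 0.5   # chance a single patrol records an offence (assumed)
DAYS = 200

# Historically biased starting data: area "A" was over-policed in the past,
# so it begins with twice as many recorded incidents as area "B".
recorded = {"A": 100, "B": 50}

for _ in range(DAYS):
    # "Prediction" step: send 3 of 4 daily patrols to the area with the
    # higher recorded-incident count, 1 to the other.
    hot, cold = sorted(recorded, key=recorded.get, reverse=True)
    patrols = {hot: 3, cold: 1}

    # Recording step: crime occurs at the same true rate everywhere, but the
    # chance of it entering the database scales with patrol presence.
    for area, n_patrols in patrols.items():
        for _ in range(n_patrols):
            if random.random() < TRUE_CRIME_RATE * RECORD_PER_PATROL:
                recorded[area] += 1

print(f"Recorded incidents after {DAYS} days: {recorded}")
# Area A's lead grows even though true crime rates are equal, so the system
# keeps flagging A as the "hot spot".
```

The point of the sketch is only that unequal recording, not unequal offending, is enough to lock the allocation in place, which is the dynamic the "dirty data" research documents.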

Multiple police departments publicly acknowledge they stopped using PredPol because “it simply did not work for them,” with the system either suggesting areas they already knew to be problematic or offering recommendations that conflicted with the urgent daily demands of police work. The gap between marketing promises and operational reality becomes increasingly apparent.

These findings expose the fundamental flaws in predictive policing technology: algorithms trained on biased historical data reproduce and amplify that bias; systems lack independent verification of their effectiveness; and the veneer of mathematical objectivity obscures discriminatory impacts on communities of color.

The accumulation of evidence from academic researchers, government auditors, and police departments themselves builds the case that will lead to widespread abandonment of predictive policing tools in subsequent years.
