AI Case Study
US justice system standardises bail decisions by deploying algorithms, but generates significant debate over biased outcomes
The US justice system is heavily decentralised, but a significant number of states are rolling out algorithmic scoring to support bail decision-making. Typically these tools score the risk of recidivism or of failure to appear at trial. The extent to which they are advisory, and where exactly they are deployed in the justice process, varies by geography.
Industry
Professional Services
Legal
Project Overview
Various software vendors - Northpointe being a much-cited example - have rolled out scoring approaches for key questions such as a defendant's likelihood of committing a crime or absconding, and the level of risk they pose to society. These scores can inform a range of responses, from yes/no decisions on bail to the setting of bail amounts. In some geographies the system may inform police decisions from initial contact with alleged offenders right through to sentencing and then parole decisions. Data analysed may include questionnaire responses, socio-demographic data, any prior criminal record and the circumstances of the particular case. According to the New York Times: "data and statistics can help overcome the limits of intuitive human judgments, which can suffer from inconsistency, implicit bias and even outright prejudice." Civil liberties campaigners are concerned that historic biases implicit in training data (e.g. from historic sentencing decisions) may become entrenched in these algorithms.
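The vendors' actual models are proprietary and are not described in the source, but the general pattern - turning questionnaire and record data into a risk score that is then bucketed into bands - can be sketched as below. This is a minimal illustrative example in Python using a logistic regression over made-up features (prior arrests, age at first offence, prior failures to appear) and entirely synthetic data; none of the feature names, weights or thresholds reflect any real vendor's product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical defendant features: prior arrests, age at first offence,
# prior failures to appear. Purely synthetic data for illustration.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(2, n),            # prior arrests
    rng.integers(16, 40, n),      # age at first offence
    rng.poisson(0.5, n),          # prior failures to appear
])
# Synthetic outcome: rearrest within two years (invented relationship, illustrative only)
logit = 0.4 * X[:, 0] - 0.05 * X[:, 1] + 0.6 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit a simple risk model and convert probabilities into risk bands,
# mirroring the low/medium/high style of output such tools report.
model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]
bands = np.digitize(risk, [0.33, 0.66])  # 0 = low, 1 = medium, 2 = high
print(dict(zip(*np.unique(bands, return_counts=True))))
```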
Reported Results
According to the New York Times:
"In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime. New Jersey similarly reformed its bail system this year, adopting algorithmic tools that contributed to a 16 percent drop in its pretrial jail population, again with no increase in crime."
But the investigative journalism outfit ProPublica, as reported by the Guardian, argued: "The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%)."
The reasons for these outcomes remain disputed, and will likely vary by geography.
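ProPublica's headline comparison is a false positive rate: among defendants who did not go on to reoffend, the share flagged as likely to reoffend, computed separately by group (45% versus 24% in the quoted figures). A hedged sketch of that calculation follows, using entirely synthetic data rather than the COMPAS dataset; the group labels, rates and flagging rule are invented for illustration.

```python
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    """Share of defendants who did NOT reoffend but were flagged as high risk."""
    did_not_reoffend = ~reoffended
    return flagged_high_risk[did_not_reoffend].mean()

# Entirely synthetic example data, not the COMPAS dataset.
rng = np.random.default_rng(1)
n = 500
group = rng.choice(["A", "B"], n)                              # hypothetical groups
reoffended = rng.random(n) < 0.35                              # invented base rate
flagged = rng.random(n) < np.where(group == "A", 0.45, 0.25)   # disparity built in for illustration

for g in ["A", "B"]:
    mask = group == g
    fpr = false_positive_rate(flagged[mask], reoffended[mask])
    print(f"group {g}: false positive rate = {fpr:.0%}")
```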
Technology
Function
Operations
General Operations
Background
The US justice system faces pressure both to limit growth in the prison population (already amongst the highest per capita in the world) and to justify bail decisions.
Benefits
Data