In 2014, then–US attorney general Eric Holder warned that so-called “risk assessments” might be injecting bias into the nation’s judicial system. As ProPublica reported in May 2016, courtrooms across the country use algorithmically generated scores, known as risk assessments, to rate a defendant’s risk of future crime and, in many states—including Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin—to unofficially inform judges’ sentencing decisions. The Justice Department’s National Institute of Corrections now encourages the use of such assessments at every stage of the criminal justice process.
Although Holder called in 2014 for the US Sentencing Commission to study the use of risk scores because they might “exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system,” the Sentencing Commission never did so. An article by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner reported the findings of ProPublica’s effort to assess Holder’s concern. As they wrote, ProPublica “obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years.” The ProPublica study was specifically intended to assess whether an algorithm known as COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions, produced accurate predictions through its assessment of “criminogenic needs” that relate to the major theories of criminality, including “criminal personality,” “social isolation,” “substance abuse,” and “residence/stability.”
Judges across the country are provided with risk ratings based on the COMPAS algorithm or comparable software. Broward County, Florida—the focus of ProPublica’s study—does not use risk assessments in sentencing, but it does use them in pretrial hearings, as part of its efforts to address jail overcrowding. As ProPublica reported, judges in Broward County use risk scores to determine which defendants are sufficiently low risk to be released on bail pending their trials.
Based on ProPublica’s analysis of the Broward County data, Angwin, Larson, Mattu, and Kirchner reported that the risk scores produced by the algorithm “proved remarkably unreliable” in forecasting violent crime: “Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” In fact, the algorithm was only “somewhat more accurate” than a coin toss.
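Two different notions of accuracy are at play in that finding. The 20 percent figure is a precision-style measure: of everyone the tool flagged as likely to commit a violent crime, the share who actually did. The coin-toss comparison refers instead to overall accuracy across all predictions. The sketch below, written in Python with entirely made-up counts rather than ProPublica’s data, is only meant to illustrate how each kind of figure would be computed.

```python
# Hypothetical confusion-matrix counts, made up purely for illustration;
# these are NOT ProPublica's numbers.
true_pos = 200    # predicted to commit a violent crime and did
false_pos = 800   # predicted to commit a violent crime but did not
true_neg = 2500   # predicted not to and did not
false_neg = 1500  # predicted not to but did

# "Only 20 percent of the people predicted to commit violent crimes actually
# went on to do so" is a precision-style figure:
precision = true_pos / (true_pos + false_pos)

# "Somewhat more accurate than a coin toss" refers to overall accuracy:
# the share of all predictions, positive or negative, that were correct.
accuracy = (true_pos + true_neg) / (true_pos + false_pos + true_neg + false_neg)

print(f"precision: {precision:.0%}, overall accuracy: {accuracy:.0%}")
```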
The study also found significant racial disparities, as Holder had feared. “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants,” ProPublica reported.
Neither defendants’ prior crimes nor the types of crime for which they were arrested explain this disparity. When ProPublica ran a statistical test that controlled for the effects of criminal history, recidivism, age, and gender, black defendants were still 77 percent more likely than their white counterparts to be flagged as being at higher risk of committing a future violent crime, and 45 percent more likely to be predicted to commit a future crime of any kind.
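ProPublica’s companion methodology article (cited below) describes this step as a logistic regression that models the odds of receiving a higher risk score as a function of race while holding the other factors constant. The sketch below is a minimal, hypothetical illustration of that general technique, not ProPublica’s actual code; the file name and column names are assumptions, and an exponentiated race coefficient of roughly 1.77 is what a “77 percent more likely” finding looks like in odds-ratio terms.

```python
# Minimal sketch of a "control for other factors" test like the one described
# above; NOT ProPublica's actual analysis. The file name and column names
# (high_risk, race, priors_count, two_year_recid, age, sex) are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("broward_scores.csv")  # hypothetical per-defendant records

# Logistic regression: odds of being scored "higher risk" as a function of
# race, holding criminal history, recidivism, age, and gender constant.
model = smf.logit(
    "high_risk ~ C(race, Treatment('Caucasian')) + priors_count"
    " + two_year_recid + age + C(sex)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; a value of about 1.77 on the
# black-defendant indicator corresponds to "77 percent more likely."
print(np.exp(model.params).round(2))
```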
Northpointe, the for-profit company that created COMPAS, disputed ProPublica’s analysis. However, as ProPublica noted, Northpointe deems its algorithm to be proprietary, so the company will not publicly disclose the calculations that COMPAS uses to determine defendants’ risk scores—making it impossible for either defendants or the public “to see what might be driving the disparity.” In practice, this means that defendants rarely have opportunities to challenge their assessments.
As ProPublica reported, the increasing use of risk scores is controversial, and the topic has garnered some previous independent news media coverage, including 2015 reports by the Associated Press, The Marshall Project, and FiveThirtyEight.
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin, “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, May 23, 2016, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
Student Researcher: Hector Hernandez (Citrus College)
Faculty Evaluator: Andy Lee Roth (Citrus College)