What if the algorithms worked for the defendants, rather than against them?


Across the United States, judges, prosecutors, and parole boards are handed algorithms to guide life-changing decisions about the liberty of the people before them, based primarily on perceived risks to “public safety.” Meanwhile, those accused and convicted of crimes receive little support: public defense is underfunded in most of these settings, and in others (e.g., parole decisions) there is no right to counsel at all. The system is stacked against them. We wanted to know what would happen if we flipped the script and used algorithms to benefit those entangled in the justice system, rather than those who wield power over them.

In a recent peer-reviewed study, the ACLU and collaborators from Carnegie Mellon University and the University of Pennsylvania posed a simple question: Can we predict the risks the criminal justice system poses to accused persons, instead of the risks those persons supposedly pose to the public?

The answer appears to be yes, and the process of creating such a tool helps lay bare larger questions within the logic of existing risk assessment tools. While traditional risk assessment tools account for risks to the public, such as the likelihood of recidivism, the criminal justice system itself poses a host of risks to those caught up in it, many of which extend to their families and communities and have long-term impacts. These include being denied bail, receiving a disproportionately long sentence for the conviction handed down, being wrongfully convicted, and carrying a criminal record that makes housing and employment nearly impossible to obtain.

The prototype risk assessment instrument we created predicts whether a person charged with a federal crime is likely to receive a particularly long sentence based on factors that should not be legally relevant in determining the length of the sentence, such as the race of the accused or the political party of the president who appointed the judge. The instrument performs comparably to risk assessment instruments used in criminal justice settings today, and its predictive accuracy matches or exceeds that of many tools deployed across the country. That does not mean this tool, or any of the existing tools, is necessarily good or improves people’s lives; it simply meets existing validation standards.
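To make the validation claim concrete: predictive accuracy for tools like these is commonly summarized with a discrimination metric such as AUC. The following is a minimal sketch with made-up numbers, not the study's actual model, features, or data; the scores and labels below are hypothetical.

```python
# Toy illustration of AUC, a metric commonly used to validate risk
# assessment instruments. All numbers below are hypothetical.
from itertools import product

def auc(scores, labels):
    """Probability that a randomly chosen positive case is ranked above
    a randomly chosen negative case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = list(product(pos, neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical predicted risks of an excessively long sentence
# (label 1 = the person actually received one).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(round(auc(scores, labels), 3))  # -> 0.889
```

A tool "meets existing validation standards" when this kind of number clears a conventional bar; as the paragraph above notes, that says nothing about whether the tool improves anyone's life.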

We chose to model the risk of long sentences among persons prosecuted at the federal level for several reasons. The most practical is simply that the data exist. In many criminal justice settings, the information defenders and researchers need most is not collected, or is collected poorly, such as the details of the plea bargains that account for approximately 95% of convictions. Long sentences are also a particularly pernicious problem in the United States, far worse than in most other democratic nations: Norway caps sentences for most crimes at 21 years, Portugal at 25. Excessively long sentences are cited as a leading cause of mass incarceration, and substantial evidence suggests that longer sentences have a negligible or even negative effect on their stated purpose of rehabilitating the people sent to prison. Evidence also suggests that long sentences do not prevent future crime. Finally, the legally irrelevant factors that affect sentencing decisions are well documented: the US Sentencing Commission concluded that Black men received, on average, sentences 19.1% longer than those of similarly situated white men.

The process of creating this tool laid bare the choices embedded in the creation of other tools frequently used in parole and pretrial settings. For example, we had to set several thresholds, such as the definition of an excessively long sentence and the probability cut-off above which we considered a person particularly likely to receive one. These are political choices, just like the Bureau of Prisons’ decision to move the thresholds of its own tool to reduce the number of medically vulnerable incarcerated people eligible for release during the pandemic, or ICE’s use of a risk assessment tool to deny bond to immigrants. In short, every tool embeds the priorities of its creators and enacts a politics of its own, which should make us skeptical of these tools and their applications.
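The threshold point can be sketched in a few lines. This is a hedged illustration with hypothetical probabilities, not the study's data: the same model output flags very different numbers of people depending on where the cut-off is placed, which is exactly why moving a threshold is a political act.

```python
# Hedged sketch (hypothetical numbers): how moving a probability cut-off
# changes who gets flagged by a risk assessment tool.

def flagged(probabilities, threshold):
    """Count cases whose predicted probability meets or exceeds the cut-off."""
    return sum(p >= threshold for p in probabilities)

# Hypothetical predicted probabilities of an excessively long sentence.
risks = [0.15, 0.35, 0.52, 0.61, 0.78, 0.91]

print(flagged(risks, 0.5))   # -> 4 people flagged at a 0.5 cut-off
print(flagged(risks, 0.75))  # -> 2 people flagged after raising the bar
```

Nothing in the model itself dictates where that line goes; whoever sets it decides who counts as "high risk."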

The models we have built – unlike existing risk assessment instruments – are designed to help public defenders and defendants rather than prosecutors and judges. If public defenders could see a defendant’s risk of receiving a harsh sentence, or how far a proposed sentence deviates from similar cases, it might help them make informed decisions when navigating plea bargaining and sentencing proceedings.

There are other possible applications as well. The recently enacted First Step Act allows incarcerated persons, for the first time, to petition the court directly for a reduced sentence when “extraordinary and compelling” circumstances warrant one. Since then, federal district courts across the country have granted thousands of such motions where the defendant’s personal history, the underlying offense, the original sentence, the disparity created by any change in the law, and other factors justify a reduction. With our models, petitioners could show how much their sentence deviates from what the model would predict based on the characteristics of their case, including how the case would likely be resolved today versus when they were originally sentenced. They could also identify legally irrelevant factors that may have influenced their sentence.
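The deviation idea above can be sketched simply. This is a hypothetical illustration, not the study's method: the function name, the comparison-by-average approach, and all numbers are assumptions made for the example.

```python
# Hedged sketch (hypothetical data): quantifying how far an actual sentence
# deviates from sentences in comparable cases, as a petition might argue.
from statistics import mean

def deviation_months(actual, comparable_sentences):
    """Actual sentence minus the average sentence among comparable cases,
    in months. Positive values indicate a longer-than-comparable sentence."""
    return actual - mean(comparable_sentences)

# Hypothetical: comparable cases drew 60-84 month sentences; this person got 120.
comparable = [60, 72, 84]
print(deviation_months(120, comparable))  # -> 48.0 months above the average
```

A real model would of course condition on case characteristics rather than a simple average, but the output a petitioner needs is the same kind of number: how far this sentence sits from what similar cases received.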

The ACLU and our coalition partners have pushed the Biden administration to use the presidential clemency power. Although the administration recently commuted the sentences of 75 people, it has not used that power consistently. Clemency is mostly reserved for specific high-profile cases and tends to provide relief only to those convicted under now-defunct criminal laws or of charges now deemed too punitive (e.g., non-violent drug offenses). These categories exclude many federally incarcerated people from even a slim possibility of mercy. We built the model so that it could flag excessively long sentences even for people excluded from many criminal justice reforms and clemency actions, such as those convicted of violent crimes.

We expect objections to the use of this model based on its technical limitations, the acceptability of using an algorithm for such high-stakes decisions, or the subjectivity of the choices we made. Those who raise such concerns would do well to apply them equally to the tools currently in use throughout the system.

