The criminal justice system seeks to decrease recidivism, the tendency of a convicted offender to reoffend. Underlying this goal is the theory of “selective incapacitation”: the idea that repeat offenders are responsible for the majority of crime, and that incarcerating this small group would decrease the overall crime rate. An actuarial risk assessment statistically predicts the risk of a given event occurring, such as the risk of reoffending. Judicial algorithms that conduct these risk assessments have been used to predict recidivism rates. These algorithms are often not tested on minorities, and thus their accuracy is not verified vis-à-vis the populations to which they are applied. Minorities have been, and continue to be, discriminated against, and judicial algorithms can further perpetuate their second-class status by subjecting people of color to longer periods of incarceration. Moving from the American into the European jurisdiction, I present several Europe-centered solutions to mitigate the discriminatory outcomes of judicial algorithms.
Three policy arguments drive the use of risk assessment tools and judicial algorithms. First, risk assessments are assumed to help reduce prison populations and save taxpayer money by enabling judges to sentence low-risk defendants to shorter prison terms. Second, judicial algorithms are said to increase fairness in the criminal justice system by providing an assessment that is allegedly free from bias. Finally, these algorithms supposedly increase public safety by enabling judges to better understand a defendant’s rehabilitative capabilities. While these policy arguments are valid and well-intentioned, judicial algorithms have yet to produce fair and accurate justice outcomes.
Furthermore, in the criminal justice system more broadly, there are two concepts of accuracy: accuracy as reducing the number of guilty people who evade punishment, and accuracy as reducing the number of innocent people who wrongfully suffer punishment. These notions of accuracy drive at the twofold aim of criminal justice: the guilty shall not escape and the innocent shall not suffer. Elisa Celis, Assistant Professor of Statistics and Data Science at Yale University, points out that a balance must be struck between imprisoning an innocent inmate on the basis of a high-risk assessment and prematurely releasing a guilty inmate who could pose a risk to the public.
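To make this trade-off concrete, the sketch below (in Python, with invented scores and outcomes rather than any real tool’s data) shows how moving the cut-off that separates “high risk” from “low risk” shifts errors between the two notions of accuracy: a lower cut-off lets fewer reoffenders escape detection but flags more people who would not have reoffended.

```python
# Hypothetical scores and outcomes: (risk score out of 10, 1 = reoffended, 0 = did not).
cases = [(9, 1), (8, 0), (7, 1), (7, 0), (6, 0), (5, 1), (4, 0), (3, 1), (2, 0), (1, 0)]

def error_rates(cases, threshold):
    """False positives: non-reoffenders flagged as high risk (the innocent suffer).
    False negatives: reoffenders rated as low risk (the guilty escape)."""
    false_pos = sum(1 for score, reoffended in cases if score >= threshold and not reoffended)
    false_neg = sum(1 for score, reoffended in cases if score < threshold and reoffended)
    non_reoffenders = sum(1 for _, reoffended in cases if not reoffended)
    reoffenders = sum(1 for _, reoffended in cases if reoffended)
    return false_pos / non_reoffenders, false_neg / reoffenders

for cutoff in (4, 6, 8):
    fpr, fnr = error_rates(cases, cutoff)
    print(f"high-risk cutoff >= {cutoff}: false-positive rate {fpr:.2f}, false-negative rate {fnr:.2f}")
```

Where that cut-off is placed is precisely the balance Celis describes.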
Celis also asks: “Does a recidivism rating of seven out of ten mean the same for a Black and a White inmate? Does this account for the fact that different communities are policed disproportionately?” Her questions highlight the importance of recognizing the context in which we evaluate notions of fairness and accuracy. As such, there is no single correct notion of fairness; it is context- and stakeholder-dependent.
If we fail to recognize a context-dependent notion of fairness and accuracy, judicial algorithms have the potential to disproportionately target minorities by overestimating their recidivism rates. In Ewert v. Canada (2018), for example, an Indigenous Métis man challenged the use of risk assessment tools to predict his recidivism rate, arguing that these tools were not developed with Indigenous training data, that is, data used to train an algorithm. A contested ProPublica study of COMPAS, a U.S. actuarial risk assessment tool, found that the algorithm “was particularly likely to falsely flag Black defendants as future criminals, wrongly labeling them this way at almost twice the rate as White defendants.” Experts claim that risk assessment tools examine four categories of predictive factors: criminal history, anti-social attitude, demographics, and socioeconomic status. Sex, age, and prior criminal history often have the most impact on an individual’s score. For example, Stevenson and Slobogin’s study of the COMPAS Violent Recidivism Risk Score demonstrated that roughly 60 percent of the risk score it produces is attributable to young age. The ProPublica study further indicated that the recidivism score provided by COMPAS was vastly unreliable in predicting violent crime, with only 20 percent of the flagged offenders actually going on to commit violent offences. ProPublica concluded that “the algorithm was somewhat more accurate than a coin flip.”
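The disparity ProPublica describes can be examined with a simple group-wise error analysis. The sketch below is purely illustrative and uses invented records rather than the COMPAS dataset: it asks whether people who did not reoffend were nonetheless flagged as high risk more often in one group than another.

```python
# Invented records: (group, flagged as high risk by the tool, reoffended within two years).
records = [
    ("A", True, False), ("A", True, False), ("A", True, True),
    ("A", False, False), ("A", False, True),
    ("B", True, False), ("B", False, False), ("B", False, False),
    ("B", True, True), ("B", False, True),
]

def false_positive_rate(records, group):
    """Share of people in `group` who did not reoffend but were still flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    falsely_flagged = [r for r in non_reoffenders if r[1]]
    return len(falsely_flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(f"group {group}: false-positive rate {false_positive_rate(records, group):.2f}")
# With this toy data, group A's false-positive rate (0.67) is twice group B's (0.33),
# the kind of disparity ProPublica reported for Black and White defendants.
```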
Further, minorities are often subjected to surveillance bias. They are over-surveilled, and this “data often pushes officers into the same over-policed and over-criminalized communities, [which becomes part of the] ‘bias in, bias out’ concern regarding predictive systems.” Consequently, certain demographics have their risk of recidivism miscalculated due to a false or inaccurate correlation with crime.
The demographics of European prison populations do not stray far from those of North America. Black people in the United Kingdom are proportionally more likely to be incarcerated than Black people in the United States. The UK, while no longer part of the European Union, leads the European continent in its incarceration of ethnic minorities: relative to their small share of the country’s total population, Black people in the UK are four times more likely to be imprisoned. With judicial algorithms having the potential to exacerbate existing biases in the criminal justice system, European governance frameworks must safeguard the justice outcomes prompted by judicial algorithms.
The most prominent existing governance frameworks within Europe are EU Directive 2016/680 and Article 22 of the General Data Protection Regulation (GDPR). Directive 2016/680 states: “It is necessary for competent authorities to process personal data collected in the context of … criminal offences … in order to develop an understanding of criminal activities and to make links between different criminal offences detected.” This directive appears to permit the use of judicial algorithms to make links between criminal offences. Meanwhile, Article 22 of the GDPR protects the right not to be subjected to a decision based solely on automated processing, including profiling, which effectively provides a right to opt out of judicial algorithms.
The shortcoming of these existing governance mechanisms is that they do not recognize the importance of explainability. Directive 2016/680 permits the processing of personal data in order to understand criminal activity and link criminal offences; it does not address how the links between detected offences could be explained. Article 22 of the GDPR safeguards the right to opt out, but not the right to understand the automated decision made by the algorithm. I propose two solutions: the first technical, the second governance-based.
In order to prevent confounding factors from leading to false positives on recidivism rates, a technical solution is to exclude the weighing of “innate” characteristics from recidivism predictions. These characteristics, informed by Article 1A(2) of the 1951 Refugee Convention, include race, religion, nationality, membership of a particular social group, and political opinion. Discrimination based on these characteristics may, at an extreme level, constitute persecution by disproportionately imprisoning these communities. Judicial algorithms would consequently focus on behavioral indicators, such as past violent offences and parole violations, as opposed to innate characteristics, which have historically caused harm through the predictive policing of minorities.
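A minimal sketch of this exclusion, assuming (hypothetically) that a tool’s inputs arrive as a flat set of named features, might look as follows; the feature names are invented.

```python
# Innate/protected characteristics to be excluded from any risk scoring.
PROTECTED = {"race", "religion", "nationality", "social_group", "political_opinion"}

def behavioural_features(case: dict) -> dict:
    """Drop innate/protected characteristics and keep behavioural indicators only."""
    return {name: value for name, value in case.items() if name not in PROTECTED}

case = {
    "prior_violent_offences": 1,
    "parole_violations": 0,
    "race": "...",            # excluded before any scoring takes place
    "nationality": "...",     # excluded before any scoring takes place
}
print(behavioural_features(case))   # {'prior_violent_offences': 1, 'parole_violations': 0}
```

Such a filter is only a first step: as the discussion of surveillance bias above suggests, behavioral indicators drawn from over-policed communities can still act as proxies for the excluded characteristics, which is why the governance measures below remain necessary.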
From a governance standpoint, I propose that an EU expert group be tasked with conducting audits of training datasets, managing randomized case review, and conducting periodic review of judicial algorithms used across the 27 EU Member States, reporting its findings to the European Commission in published reports. The expert group should also advise Member States on certification procedures for judicial algorithms, informing them which algorithms are fair and may be used and which should be discontinued. These tasks could be fulfilled by the High-Level Expert Group on Artificial Intelligence (AI HLEG), in conjunction with the European Data Protection Board (EDPB) and the European Committee on Legal Co-operation (CDCJ).
The European Convention on Human Rights (ECHR) should also be amended to add a new sub-paragraph (f) to clause 3 of Article 6, the right to a fair trial, stating:
“(f) to obtain an explanation of any judicial algorithm applied to the case”
This will afford defendants the right to appeal a sentencing decision prompted by a judicial algorithm, to understand which variables informed its output, and to see how those variables were weighted. This additional right would complement the existing GDPR protections, which establish the right to reject automated decision-making and the right to refuse profiling.
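What such an explanation could look like in practice depends on the tool, and commercial tools such as COMPAS are proprietary. The sketch below therefore assumes, purely for illustration, a simple additive scoring model with known weights and invented variable names, and shows the kind of variable-by-variable breakdown the proposed right contemplates.

```python
# Purely illustrative additive model; weights and variable names are invented, not COMPAS's.
WEIGHTS = {"prior_violent_offences": 2.0, "parole_violations": 1.5, "age_under_25": 3.0}

def explain_score(features: dict) -> str:
    """List each variable's contribution to the final risk score, largest first."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value for name, value in features.items()}
    lines = [f"total risk score: {sum(contributions.values()):.1f}"]
    for name, contribution in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {name}: +{contribution:.1f}")
    return "\n".join(lines)

print(explain_score({"prior_violent_offences": 1, "parole_violations": 2, "age_under_25": 1}))
# total risk score: 8.0
#   parole_violations: +3.0
#   age_under_25: +3.0
#   prior_violent_offences: +2.0
```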
Evidently, several stakeholders are implicated in reforming this practice: the designers of the algorithm, inmates, judges, victims of crime, the public, the state, and the EU. Judges want to accurately predict an offender’s recidivism risk to ensure an efficient use of government resources, minimize the risk of future crime, and protect their own reputation. Likewise, private risk assessment developers are motivated to produce accurate tools to increase the demand for their products; in this sense, public and private incentives are closely aligned. Inmates want to receive the shortest possible sentence, whether through human or algorithmic judgment. Victims of crime, meanwhile, may feel disheartened in their pursuit of justice if the perpetrator’s sentence is determined by an automated tool that may not account for their trauma and lived experience. The public has a vested interest in its own safety, which the state must protect. The EU has a stake in this issue, given its supranational status and the challenge of maintaining the rule of law alongside the mobility of people, goods, and data across its borders. Balancing these many interests requires a robust and fair judicial system. The proposed solution’s emphasis on explainability addresses the interests of most stakeholders (save, perhaps, for the algorithm’s designers) by ensuring the accurate provision of justice, which in turn facilitates public safety.
More broadly, two overarching alternative solutions can be proposed: first, instituting a single EU-wide judicial algorithm, designed by the European Court of Human Rights, to be used across all Member States; second, banning the use of judicial algorithms across all EU Member States altogether. These alternatives are not viable: they fail to respect the sovereignty of Member States and the independence of their judiciaries, and they overlook the benefit of algorithms as cost- and time-efficient means of delivering justice, particularly in understaffed justice systems.
Criteria for the success of these judicial algorithm reforms are particularly difficult to determine, given the grey area that surrounds wrongful convictions, as well as bias in recidivism predictions and subsequent sentencing. A qualitative criterion for success may be EU-wide public opinion polling on judicial algorithms, measuring not only public support but also the public’s levels of algorithmic literacy, as measured by Pew polling data. A quantitative measure of success may be the number of overturned sentencing decisions that were informed by algorithmic risk assessment tools, compared against a control group of sentencing decisions overturned due to human error. A further quantitative criterion may be, more broadly, whether recidivism rates decreased in jurisdictions that employ judicial algorithms, as compared to jurisdictions that do not apply these tools. The difficulty with all of these criteria is that judicial algorithms are not standardized in their design, nor in their application across the different EU Member States. As such, performance evaluations may need to be context-specific.
Explainability must be at the core of judicial algorithms to ensure their accuracy vis-à-vis the populations they are applied to. This, in turn, will mitigate harms caused by unrepresentative training datasets that perpetuate biases in the criminal justice system. Whether in Europe or in North America, judicial algorithms have the potential to disproportionately target minorities by overestimating their recidivism rates. Governance frameworks must safeguard fair and accurate criminal justice outcomes.