Garrett & Rudin on Glass-Box AI in Criminal Justice

Brandon L. Garrett (Duke Law) and Cynthia Rudin (Duke Computer Science) have posted “The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice” on SSRN. Here is the abstract:

Artificial intelligence (AI) is increasingly used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand, or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessments, and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.

The champions and critics of AI have something in common: both sides argue that we face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error-prone and reflects pre-existing racial and socio-economic disparities, and any AI system must be used by decisionmakers like lawyers and judges—who must understand it.

Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI black box, and given substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the bottom line is that glass box AI can better accomplish both fairness and public safety goals. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.

Chin on Introducing Independence to the Foreign Intelligence Surveillance Court

Simon Chin (Yale Law School) has posted “Introducing Independence to the Foreign Intelligence Surveillance Court” (131 Yale L.J. 655 (2021)) on SSRN. Here is the abstract:

The Foreign Intelligence Surveillance Court (FISC), which reviews government applications to conduct surveillance for foreign intelligence purposes, is an anomaly among Article III courts. Created by the Foreign Intelligence Surveillance Act (FISA) in 1978, the FISC ordinarily sits ex parte, with the government as the sole party to the proceedings. The court’s operations and decisions are shrouded in secrecy, even as they potentially implicate the privacy and civil liberties interests of all Americans. After Edward Snowden disclosed the astonishing details of two National Security Agency mass surveillance programs that had been approved by the FISC, Congress responded with the USA FREEDOM Act of 2015. The bill’s reforms included the creation of a FISA amicus panel: a group of five, security-cleared, part-time, outside attorneys available to participate in FISC proceedings at the court’s discretion. Policy makers hoped to introduce an independent voice to the FISC that could challenge the government’s positions and represent the civil liberties interests of the American people. With the FBI’s investigation of Trump campaign advisor Carter Page in 2016 and 2017 raising new concerns about the FISC’s one-sided proceedings, it is now imperative to assess the FISA amicus provision: how it has functioned in practice since 2015, what effects it has had on foreign intelligence collection, and whether it has achieved the objectives that motivated its creation.

To conduct this assessment and overcome the challenges of studying a secret court, this Note draws upon the first systematic set of interviews conducted with six of the current and former FISA amici. This Note also includes interviews with two former FISA judges and three former senior government attorneys intimately involved in the FISA process. Using these interviews, as well as declassified FISA material, this Note presents an insiders’ view of FISC proceedings and amicus participation at the court. The Note arrives at three main insights about the amicus panel. First, amicus participation at the FISC has not substantially interfered with the collection of timely foreign intelligence information. Second, the available record suggests that amici have had a limited impact on privacy and civil liberties. Third, there are significant structural limitations to what incremental reforms to the existing amicus panel can accomplish. Instead, this Note supports the creation of an office of the FISA special advocate—a permanent presence at the FISC to serve as a genuine adversary to the government. While Congress considered and rejected a FISA special advocate in 2015, this Note reenvisions the original proposal with substantive and procedural modifications to reflect the lessons of the past six years, as well as with a novel duty: oversight of approved FISA applications. This Note’s proposal would address both the limitations of the FISA amicus panel that have become manifest in practice and the new Carter Page-related concerns about individual surveillance.

Henderson Reviewing When Machines Can Be Judge, Jury, and Executioner

Stephen E. Henderson (University of Oklahoma – College of Law) has posted a review of Katherine Forrest’s “When Machines Can Be Judge, Jury, and Executioner” (Book Review: Criminal Law and Criminal Justice Books 2022) on SSRN. Here is the abstract:

There is much in Katherine Forrest’s claim—and thus in her new book—that is accurate and pressing. Forrest adds her voice to the many who have critiqued contemporary algorithmic criminal justice, and her seven years as a federal judge and decades of other experience make her perspective an important one. Many of her claims find support in kindred writings, such as her call for greater transparency, especially when private companies try to hide algorithmic details for reasons of greater profit. A for-profit motive is a fine thing in a private company, but it is anathema to our ideals of public trial. Algorithms are playing an increasingly dominant role in criminal justice, including in our systems of pretrial detention and sentencing. And as we criminal justice scholars routinely argue, there is much that is rather deeply wrong in that criminal justice system.

But the relation between those two things—algorithms on the one hand and our systems of criminal justice on the other—is complicated, and it most certainly does not run in any single direction. Just as often as numbers and formulae are driving the show (a right concern of Forrest’s), a terrible dearth of both leaves judges meting out sentences that, in the words of Ohio Supreme Court Justice Michael Donnelly, “have more to do with the proclivities of the judge you’re assigned to, rather than the rule of law.” Moreover, most of the algorithms we currently use—and even most of those we are contemplating using—are ‘intelligent’ in only the crudest sense. They constitute ‘artificial intelligence’ only if we term every algorithm run by, or developed with the assistance of, a computer to constitute AI, and that is hardly the kind of careful, precise definition that criminal justice deserves. A calculator is a machine that we most certainly want judges using, a truly intelligent machine is something we humans have so far entirely failed to create, and the spectrum between is filled with innumerable variations, each of which must be carefully, scientifically evaluated in the particular context of its use.

This brief review situates Forrest’s claims in these two regards. First, we must always compare apples to apples. We ought not compare a particular system of algorithmic justice to some elysian ideal, when the practical question is whether to replace and/or supplement a currently biased and logically-flawed system with that algorithmic counterpart. After all, the most potently opaque form of ‘intelligence’ we know is that we term human—we humans go so far as routine, affirmative deception—and that truth calls for a healthy dose of skepticism and humility when it comes to claims of human superiority. Comparisons must be, then, apples to apples. Second, when we speak of ‘artificial intelligence,’ we ought to speak carefully, in a scientifically precise manner. We will get nowhere good if we diverge into autonomous weapons when trying to decide, say, whether we ought to run certain historic facts about an arrestee through a formula as an aid to deciding whether she is likely to appear as required for trial. The same if we fail to understand the very science upon which any particular algorithm runs. We must use science for science.

Edwards on Transparency and Accountability of Algorithmic Regulation

Ernesto Edwards (National University of Rosario) has posted “How to Stop Minority Report from Becoming a Reality: Transparency and Accountability of Algorithmic Regulation” on SSRN. Here is the abstract:

In this essay I aim to illuminate the importance of transparency and accountability in algorithmic regulation, a highly topical legal issue that presents important consequences because Machine Learning algorithms have been constantly developing as of late. Building on prior studies and on current literature, such as Citron, Crootof, Pasquale, and Zarsky, I intend to develop a proposal that bridges said knowledge with that of Daniel Kahneman in order to amplify the legal question at hand with the notions of blinders and biases. I will argue that if left unattended or if improperly attended, Machine Learning algorithms will produce more harm than good due to these blinders and biases. After linking the aforementioned ideas, I will focus on the transparency and accountability of algorithmic regulation, and its ties to technological due process. The findings will illustrate the present need for a human element, best exemplified by the concept of cyborg justice, and the public policy challenges it entails. In the end, I will propose what could be done in the future in this area.

Hausman on The Danger of Rigged Algorithms: Evidence from Immigration Detention Decisions

David Hausman (Stanford University, Department of Political Science) has posted “The Danger of Rigged Algorithms: Evidence from Immigration Detention Decisions” on SSRN. Here is the abstract:

This article illustrates a simple risk of algorithmic risk assessment tools: rigging. In 2017, U.S. Immigration and Customs Enforcement removed the “release” recommendation from the algorithmic tool that helped officers decide whom to detain and whom to release. After the change, the tool only recommended detention or referred cases to a supervisor. Taking advantage of the suddenness of this change, I use a fuzzy regression discontinuity design to show that the change reduced actual release decisions by about half, from around 10% to around 5% of all decisions. Officers continued to follow the tool’s detention recommendations at almost the same rate even after the tool stopped recommending release, and when officers deviated from the tool’s recommendation to order release, supervisors became more likely to overrule their decisions. Although algorithmic tools offer the possibility of reducing the use of detention, they can also be rigged to increase it.

Blass on Observing the Effects of Automating the Judicial System with Behavioral Equivalence

Joseph Blass (Northwestern University Pritzker School of Law; Northwestern University – Department of Electrical Engineering & Computer Science) has posted “Observing the Effects of Automating the Judicial System with Behavioral Equivalence” (South Carolina Law Review, Vol. 72, No. 4, 2022) on SSRN. Here is the abstract:

Building on decades of work in Artificial Intelligence, legal scholars have begun to consider whether components of the judicial system could be replaced by computers. Much of the scholarship in AI and Law has focused on whether such automated systems could reproduce the reasoning and outcomes produced by the current system. This scholarly framing captures many aspects of judicial processes, but overlooks how automated judicial decision-making would likely change how participants in the legal system interact with it, and how societal interests outside that system that care about its processes would be affected by those changes.

This Article demonstrates how scholarship on legal automation has come to leave out perspectives external to the process of judicial decision-making. It analyses the problem using behavioral equivalence, a Computer Science concept that assesses systems’ behaviors according to the observations of specific monitors of those systems. It introduces a framework to examine the various observers of the judicial process and the tradeoffs they may perceive when legal systems are automated. This framework will help scholars and policymakers more effectively anticipate the consequences of automating components of the legal system.

Adams, Adams-Prassl & Adams-Prassl on Online Tribunal Judgments and The Limits of Open Justice

Zoe Adams (University of Cambridge), Abi Adams-Prassl (University of Oxford – Department of Economics), and Jeremias Adams-Prassl (University of Oxford – Faculty of Law) have posted “Online Tribunal Judgments and The Limits of Open Justice” (Forthcoming (2021) 41 Legal Studies) on SSRN. Here is the abstract:

The principle of open justice is a constituent element of the rule of law: it demands publicity of legal proceedings, including the publication of judgments. Since 2017, the UK government has systematically published first instance Employment Tribunal decisions in an online repository. Whilst a veritable treasure trove for researchers and policy makers, the database also has darker potential – from automating blacklisting to creating new and systemic barriers to access to justice. Our scrutiny of existing legal safeguards, from anonymity orders to equality law and data protection, finds a number of gaps, which threaten to make the principle of open justice as embodied in the current publication regime inimical to equal access to justice.

Chen, Stremitzer & Tobia on Having Your Day in Robot Court

Benjamin Minhao Chen (The University of Hong Kong – Faculty of Law), Alexander Stremitzer (ETH Zurich), and Kevin Tobia (Georgetown University Law Center; Georgetown University – Department of Philosophy) have posted “Having Your Day in Robot Court” on SSRN. Here is the abstract:

Should machines be judges? Some balk at this possibility, holding that ordinary citizens would see a robot-led legal proceeding as procedurally unfair: To have your “day in court” is to have a human hear and adjudicate your claims. Two original experiments assess whether laypeople share this intuition. We discover that laypeople do, in fact, see human judges as fairer than artificially intelligent (“AI”) robot judges: All else equal, there is a perceived human-AI “fairness gap.” However, it is also possible to eliminate the fairness gap. The perceived advantage of human judges over AI judges is related to perceptions of accuracy and comprehensiveness of the decision, rather than “softer” and more distinctively human factors. Moreover, the study reveals that laypeople are amenable to “algorithm offsetting.” Adding an AI hearing and increasing the AI interpretability reduces the perceived human-AI fairness gap. Ultimately, the results support a common challenge to robot judges: there is a concerning human-AI fairness gap. Yet, the results also indicate that the strongest version of this challenge — human judges have inimitable procedural fairness advantages — is not reflected in the views of laypeople. In some circumstances, people see a day in robot court as no less fair than a day in human court.

Papagianneas on Automated Justice and Fairness in the PRC 

Straton Papagianneas (Leiden University, Leiden Institute for Area Studies) has posted “Automated Justice and Fairness in the PRC” on SSRN. Here is the abstract:

The digitalisation and automation of the judiciary, also known as judicial informatisation (司法信息化), has been ongoing for two decades in China. The latest development is the emergence of “smart courts” (智慧法院), which are part of the Chinese party-state’s efforts to reform and modernise its governance capacity. These are legal courts where the judicial process is fully conducted digitally, and judicial officers make use of technological applications sustained by algorithms and big-data analytics. The end goal is to create a judicial decision-making process that is fully conducted in an online judicial ecosystem where the majority of tasks are automated and opportunities for human discretion or interference are minimal.

This article asks how automation and digitalisation satisfy procedural fairness in the PRC. First, it discusses the Chinese conception of judicial fairness through a literature review. It finds that the utilitarian conception of fairness is a reflection of the inherently legalist and instrumentalist vision of law. This, in turn, also influences the way innovations, such as judicial automation, are assessed. Then, it contextualises the policy of ‘building smart courts’, launched in 2017, which aimed to automate and digitalise large parts of the judicial process. The policy is part of a larger reform drive that aims to recentralise power and standardise decision-making. Next, it discusses how automation and digitalisation have changed the judicial process, based on a reading of court and media reports of technological applications. The final section analyses the implications of automation and digitalisation for judicial fairness in the PRC.

The article argues that, within the utilitarian conceptualisation of justice and law, automated justice can indeed be considered fair because it improves the quality of procedures to the extent that they facilitate the achievement of the political goals of judicial reform and the judiciary in general.

Kemper on Kafkaesque AI

Carolin Kemper (German Research Institute for Public Administration) has posted “Kafkaesque AI? Legal Decision-Making in the Era of Machine Learning” (University of San Francisco Intellectual Property and Technology Law Journal, Vol. 24, No. 2, 2020) on SSRN. Here is the abstract:

Artificial Intelligence (“AI”) is already being employed to make critical legal decisions in many countries all over the world. The use of AI in decision-making is a widely debated issue due to allegations of bias, opacity, and lack of accountability. For many, algorithmic decision-making seems obscure, inscrutable, or virtually dystopic. Like in Kafka’s The Trial, the decision-makers are anonymous and cannot be challenged in a discursive manner. This article addresses the question of how AI technology can be used for legal decision-making and decision-support without appearing Kafkaesque.

First, two types of machine learning algorithms are outlined: both Decision Trees and Artificial Neural Networks are commonly used in decision-making software. The real-world use of those technologies is illustrated with a few examples. Three types of use cases are identified, depending on how directly humans are influenced by the decision. To establish criteria for evaluating the use of AI in decision-making, machine ethics, the theory of procedural justice, the rule of law, and the principles of due process are consulted. Subsequently, transparency, fairness, accountability, the right to be heard and the right to notice, as well as dignity and respect are discussed. Furthermore, possible safeguards and potential solutions to tackle existing problems are presented. In conclusion, AI rendering decisions on humans does not have to be Kafkaesque. Many solutions and approaches offer possibilities to not only ameliorate the downsides of current AI technologies, but to enrich and enhance the legal system.