Garrett on Artificial Intelligence and Procedural Due Process

Brandon L. Garrett (Duke U Law) has posted “Artificial Intelligence and Procedural Due Process” on SSRN. Here is the abstract:

Artificial intelligence (AI) violates procedural due process rights if the government uses it to deprive people of life, liberty, and property without adequate notice or an opportunity to be heard. A wide range of government agencies deploy AI systems, including in courts, law enforcement, public benefits administration, and national security. If the government refused to disclose the reasons why it denied a person bail, public benefits, or immigration status, there would be substantial due process concerns. If the government delegates such tasks to an AI system, due process analysis does not change. As in any other setting, we still need to ask whether a person received adequate notice and an opportunity to be heard. And further, where applicable, we need to ask whether the risk of error and costs to rights justify not using interpretable and adequately tested AI.

Nor is it necessary for AI or other automated systems to operate in a “black box” manner without providing people with notice or a way to meaningfully contest decisions. There is a ready alternative: an interpretable or “glass box” AI system presents results so that users know what factors it relied on, what weight it gave to each, and the strengths and limitations of the association or prediction made. Whether it is a criminal investigation or a public benefits eligibility determination, interpretable AI can ensure that people have notice and can challenge any error, using the procedures available. And such a system can be more readily checked for errors. Due process demands a greater opportunity to contest government decisions that raise greater reliability concerns. We need to know how reliably an AI system performs, under realistic conditions, to assess the risk of error.

Longstanding due process protections and well-developed interpretable AI approaches can ensure that AI systems safeguard due process rights. Conversely, due process rights have little meaning if the government uses “black box” systems that are not fully interpretable or fully tested for reliability, and as a result, cannot comply with procedural due process requirements. So far, there has been little government self-regulation of AI. In response, judges have begun to enforce existing due process rights in AI and other automated decisionmaking settings. As judges consider due process challenges to AI, they should consider the interpretability and the reliability of AI systems. Similarly, as lawmakers and regulators examine government use of AI systems, they should ensure safeguards, including interpretability and reliability, to protect our due process rights in an increasingly AI-dominated world.
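
To make the abstract’s “glass box” idea concrete, here is a minimal, purely illustrative Python sketch (not drawn from Garrett’s article) of an interpretable eligibility score in a hypothetical benefits setting: every factor, its weight, and its contribution to the total are visible, so a person who is denied can see exactly what drove the decision and what to contest. All factor names, weights, and the threshold below are invented for the example.

```python
# Hypothetical "glass box" eligibility score, invented purely for illustration.
# The factor names, weights, and threshold are NOT from Garrett's article or
# from any real benefits system.

FACTORS = {
    "monthly_income_below_limit": 3.0,
    "household_size_over_three": 1.5,
    "prior_overpayment_flag": -2.0,
    "documentation_complete": 2.0,
}
THRESHOLD = 3.5  # a total score at or above this grants the benefit in the toy model


def score_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return the decision plus a per-factor explanation the applicant can contest."""
    total = 0.0
    reasons = []
    for factor, weight in FACTORS.items():
        present = bool(applicant.get(factor, False))
        contribution = weight if present else 0.0
        total += contribution
        reasons.append(
            f"{factor}: present={present}, weight={weight:+.1f}, "
            f"contribution={contribution:+.1f}"
        )
    decision = total >= THRESHOLD
    reasons.append(f"total={total:.1f}, threshold={THRESHOLD}, granted={decision}")
    return decision, reasons


granted, reasons = score_and_explain({
    "monthly_income_below_limit": True,
    "documentation_complete": True,
    "prior_overpayment_flag": True,
})
for line in reasons:
    print(line)
```

Running the sketch prints one line per factor plus the total and the threshold, which is the kind of itemized notice the abstract contrasts with an unexplained black-box output.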

Falletti on Using Predictive and Generative Algorithms in Family Law: A Comparative Perspective

Elena Falletti (Carlo Cattaneo LIUC U) has posted “Using Predictive and Generative Algorithms in Family Law: A Comparative Perspective” on SSRN. Here is the abstract:

The article discusses the use of algorithms, both predictive and generative artificial intelligence systems, in the context of fighting family abuse and child maltreatment. The research approach is based on comparative case law analysis, examining the real-world impact of these algorithms on individuals, potential biases in predictive software, and the perceived authority of GenAI in judicial decisions.

Metikoš & Ausloos on The Right to an Explanation in Practice: Insights from Case Law for the GDPR and the AI Act

Ljubiša Metikoš (U Amsterdam Institute Information Law (IViR)) and Jef Ausloos (U Amsterdam Institute Information Law (IViR)) have posted “The Right to an Explanation in Practice: Insights from Case Law for the GDPR and the AI Act” (Forthcoming in Law, Innovation, and Technology 17.2 (October 2025)) on SSRN. Here is the abstract:

The right to an explanation under the GDPR has been much discussed in legal-doctrinal scholarship. This paper expands upon this academic discourse by providing insights into what questions the application of the right to an explanation has raised in legal practice. By looking at cases brought before various judicial bodies and data protection authorities across the European Union, we discuss questions regarding the scope, content, and balancing exercise of the right to an explanation. We argue, moreover, that these questions also raise important interpretative issues regarding the right to an explanation under the AI Act. Similar to the GDPR, the AI Act’s right to an explanation leaves many legal questions unanswered. Therefore, the insights from the already established case law under the GDPR can help clarify how the AI Act’s right to an explanation should be understood in practice.

Paolucci on Due Process of Artificial Intelligence: a Challenge for the Protection of Fundamental Rights

Federica Paolucci (Bocconi U) has posted “Due Process of Artificial Intelligence: a Challenge for the Protection of Fundamental Rights” on SSRN. Here is the abstract:

Artificial Intelligence is at the center of global debate. The European Union is finalizing the first Regulation on Artificial Intelligence (AI Act). This chapter analyzes the resilience of the GDPR in scenarios where AI is applied or deployed. It also discusses the existence and creation of ad hoc procedural mechanisms that can facilitate the horizontal protection of fundamental rights by individuals. The impossibility of applying the GDPR’s mechanisms will instead be demonstrated in light of the right to erasure, in order to support the need for ‘due process’ of Artificial Intelligence.

Demkova on The EU’s Artificial Intelligence Laboratory and Fundamental Rights

Simona Demkova (Leiden Law School) has posted “The EU’s Artificial Intelligence Laboratory and Fundamental Rights” (in: Melanie Fink (ed), Redressing Fundamental Rights Violations by the EU: The Promise of the ‘Complete System of Remedies’ (CUP 2024)) on SSRN. Here is the abstract:

This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically through EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, Section 2 sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. In light of the fundamental rights concerns posed by these uses, Section 3 examines the possibilities for accessing remedies by considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the areas demanding further clarification in order to fill the remedial gaps (Section 4).

Garrett & Rudin on Glass-Box AI in Criminal Justice

Brandon L. Garrett (Duke Law) and Cynthia Rudin (Duke Computer Science) have posted “The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice” on SSRN. Here is the abstract:

Artificial intelligence (AI) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand, or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been in criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessments, and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.

The champions and critics of AI have something in common. Both sides argue that we face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error-prone, it reflects pre-existing racial and socio-economic disparities, and any AI system must be used by decisionmakers like lawyers and judges—who must understand it.

Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI black box, and given substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the bottom line is that glass box AI can better accomplish both fairness and public safety goals. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.
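
As a rough editorial sketch of the kind of interpretable model described in the computer-science literature Garrett and Rudin draw on (the code is not from the Article, and the items and point values are invented), a points-based scoring system can be small enough for lawyers, judges, and jurors to recompute by hand:

```python
# Toy points-based pretrial risk table, invented for illustration only; it is
# not from the Article and is not a validated instrument. Each item adds or
# subtracts a small integer number of points, so the whole model fits on an
# index card and can be checked against the record directly.

POINTS = [
    ("age under 25 at arrest", 1),
    ("two or more prior failures to appear", 2),
    ("pending charge at time of arrest", 1),
    ("employed or in school", -1),
]


def risk_points(person: set[str]) -> int:
    """Sum the points for every listed item that applies to this person."""
    return sum(pts for item, pts in POINTS if item in person)


def explain(person: set[str]) -> None:
    """Print each item, whether it applies, and the total score."""
    for item, pts in POINTS:
        applies = item in person
        print(f"{item!r}: applies={applies}, points={(pts if applies else 0):+d}")
    print("total points:", risk_points(person))


explain({"age under 25 at arrest", "employed or in school"})
```

Because the entire model is the table itself, disclosure to the defense is straightforward and any disputed item can be contested on its facts, which is the practical point the Article presses against black box alternatives.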

Chin on Introducing Independence to the Foreign Intelligence Surveillance Court

Simon Chin (Yale Law School) has posted “Introducing Independence to the Foreign Intelligence Surveillance Court” (131 Yale L.J. 655 (2021)) on SSRN. Here is the abstract:

The Foreign Intelligence Surveillance Court (FISC), which reviews government applications to conduct surveillance for foreign intelligence purposes, is an anomaly among Article III courts. Created by the Foreign Intelligence Surveillance Act (FISA) in 1978, the FISC ordinarily sits ex parte, with the government as the sole party to the proceedings. The court’s operations and decisions are shrouded in secrecy, even as they potentially implicate the privacy and civil liberties interests of all Americans. After Edward Snowden disclosed the astonishing details of two National Security Agency mass surveillance programs that had been approved by the FISC, Congress responded with the USA FREEDOM Act of 2015. The bill’s reforms included the creation of a FISA amicus panel: a group of five security-cleared, part-time outside attorneys available to participate in FISC proceedings at the court’s discretion. Policy makers hoped to introduce an independent voice to the FISC that could challenge the government’s positions and represent the civil liberties interests of the American people. With the FBI’s investigation of Trump campaign advisor Carter Page in 2016 and 2017 raising new concerns about the FISC’s one-sided proceedings, it is now imperative to assess the FISA amicus provision: how it has functioned in practice since 2015, what effects it has had on foreign intelligence collection, and whether it has achieved the objectives that motivated its creation.

To conduct this assessment and overcome the challenges of studying a secret court, this Note draws upon the first systematic set of interviews conducted with six of the current and former FISA amici. This Note also includes interviews with two former FISA judges and three former senior government attorneys intimately involved in the FISA process. Using these interviews, as well as declassified FISA material, this Note presents an insiders’ view of FISC proceedings and amicus participation at the court. The Note arrives at three main insights about the amicus panel. First, amicus participation at the FISC has not substantially interfered with the collection of timely foreign intelligence information. Second, the available record suggests that amici have had a limited impact on privacy and civil liberties. Third, there are significant structural limitations to what incremental reforms to the existing amicus panel can accomplish. Instead, this Note supports the creation of an office of the FISA special advocate—a permanent presence at the FISC to serve as a genuine adversary to the government. While Congress considered and rejected a FISA special advocate in 2015, this Note reenvisions the original proposal with substantive and procedural modifications to reflect the lessons of the past six years, as well as with a novel duty: oversight of approved FISA applications. This Note’s proposal would address both the limitations of the FISA amicus panel that have become manifest in practice and the new Carter Page-related concerns about individual surveillance.

Henderson Reviewing When Machines Can Be Judge, Jury, and Executioner

Stephen E. Henderson (University of Oklahoma – College of Law) has posted a review of Katherine Forrest’s “When Machines Can Be Judge, Jury, and Executioner” (Book Review: Criminal Law and Criminal Justice Books 2022) on SSRN. Here is the abstract:

There is much in Katherine Forrest’s claim—and thus in her new book—that is accurate and pressing. Forrest adds her voice to the many who have critiqued contemporary algorithmic criminal justice, and her seven years as a federal judge and decades of other experience make her perspective an important one. Many of her claims find support in kindred writings, such as her call for greater transparency, especially when private companies try to hide algorithmic details for reasons of greater profit. A for-profit motive is a fine thing in a private company, but it is anathema to our ideals of public trial. Algorithms are playing an increasingly dominant role in criminal justice, including in our systems of pretrial detention and sentencing. And as we criminal justice scholars routinely argue, there is much that is rather deeply wrong in that criminal justice.

But the relation between those two things—algorithms on the one hand and our systems of criminal justice on the other—is complicated, and it most certainly does not run any single direction. Just as often as numbers and formulae are driving the show (a right concern of Forrest’s), a terrible dearth of both leaves judges meting out sentences that, in the words of Ohio Supreme Court Justice Michael Donnelly, “have more to do with the proclivities of the judge you’re assigned to, rather than the rule of law.” Moreover, most of the algorithms we currently use—and even most of those we are contemplating using—are ‘intelligent’ in only the crudest sense. They constitute ‘artificial intelligence’ only if we term every algorithm run by, or developed with the assistance of, a computer to constitute AI, and that is hardly the kind of careful, precise definition that criminal justice deserves. A calculator is a machine that we most certainly want judges using, a truly intelligent machine is something we humans have so far entirely failed to create, and the spectrum between is filled with innumerable variations, each of which must be carefully, scientifically evaluated in the particular context of its use.

This brief review situates Forrest’s claims in these two regards. First, we must always compare apples to apples. We ought not compare a particular system of algorithmic justice to some elysian ideal, when the practical question is whether to replace and/or supplement a currently biased and logically-flawed system with that algorithmic counterpart. After all, the most potently opaque form of ‘intelligence’ we know is that we term human—we humans go so far as routine, affirmative deception—and that truth calls for a healthy dose of skepticism and humility when it comes to claims of human superiority. Comparisons must be, then, apples to apples. Second, when we speak of ‘artificial intelligence,’ we ought to speak carefully, in a scientifically precise manner. We will get nowhere good if we diverge into autonomous weapons when trying to decide, say, whether we ought to run certain historic facts about an arrestee through a formula as an aid to deciding whether she is likely to appear as required for trial. The same if we fail to understand the very science upon which any particular algorithm runs. We must use science for science.

Edwards on Transparency and Accountability of Algorithmic Regulation

Ernesto Edwards (National University of Rosario) has posted “How to Stop Minority Report from Becoming a Reality: Transparency and Accountability of Algorithmic Regulation” on SSRN. Here is the abstract:

In this essay I aim to shed light on the importance of transparency and accountability in algorithmic regulation, a highly topical legal issue with important consequences, given that Machine Learning algorithms have been developing rapidly of late. Building on prior studies and on current literature, such as Citron, Crootof, Pasquale and Zarsky, I intend to develop a proposal that bridges said knowledge with that of Daniel Kahneman in order to amplify the legal question at hand with the notions of blinders and biases. I will argue that if left unattended, or if improperly attended to, Machine Learning algorithms will produce more harm than good due to these blinders and biases. After linking the aforementioned ideas, I will focus on the transparency and accountability of algorithmic regulation and its ties to technological due process. The findings will illustrate the present need for a human element, better exemplified by the concept of cyborg justice, and the public policy challenges it entails. In the end, I will propose what could be done in the future in this area.