Henderson Reviewing When Machines Can Be Judge, Jury, and Executioner

Stephen E. Henderson (University of Oklahoma – College of Law) has posted a review of Katherine Forrest’s “When Machines Can Be Judge, Jury, and Executioner” (Book Review: Criminal Law and Criminal Justice Books 2022) on SSRN. Here is the abstract:

There is much in Katherine Forrest’s claim—and thus in her new book—that is accurate and pressing. Forrest adds her voice to the many who have critiqued contemporary algorithmic criminal justice, and her seven years as a federal judge and decades of other experience make her perspective an important one. Many of her claims find support in kindred writings, such as her call for greater transparency, especially when private companies try to hide algorithmic details in order to protect profits. A for-profit motive is a fine thing in a private company, but it is anathema to our ideals of public trial. Algorithms are playing an increasingly dominant role in criminal justice, including in our systems of pretrial detention and sentencing. And as we criminal justice scholars routinely argue, there is much that is deeply wrong in those systems.

But the relation between those two things—algorithms on the one hand and our systems of criminal justice on the other—is complicated, and it most certainly does not run in any single direction. Just as often as numbers and formulae are driving the show (a legitimate concern of Forrest’s), a terrible dearth of both leaves judges meting out sentences that, in the words of Ohio Supreme Court Justice Michael Donnelly, “have more to do with the proclivities of the judge you’re assigned to, rather than the rule of law.” Moreover, most of the algorithms we currently use—and even most of those we are contemplating using—are ‘intelligent’ in only the crudest sense. They constitute ‘artificial intelligence’ only if we deem every algorithm run by, or developed with the assistance of, a computer to be AI, and that is hardly the kind of careful, precise definition that criminal justice deserves. A calculator is a machine that we most certainly want judges using, a truly intelligent machine is something we humans have so far entirely failed to create, and the spectrum between is filled with innumerable variations, each of which must be carefully, scientifically evaluated in the particular context of its use.

This brief review situates Forrest’s claims in these two respects. First, we must always compare apples to apples. We ought not compare a particular system of algorithmic justice to some elysian ideal, when the practical question is whether to replace and/or supplement a currently biased and logically flawed system with that algorithmic counterpart. After all, the most potently opaque form of ‘intelligence’ we know is the one we term human—we humans go so far as routine, affirmative deception—and that truth calls for a healthy dose of skepticism and humility when it comes to claims of human superiority. Comparisons must be, then, apples to apples. Second, when we speak of ‘artificial intelligence,’ we ought to speak carefully, in a scientifically precise manner. We will get nowhere good if we digress into autonomous weapons when trying to decide, say, whether we ought to run certain historic facts about an arrestee through a formula as an aid to deciding whether she is likely to appear as required for trial. The same is true if we fail to understand the very science upon which any particular algorithm runs. We must use science for science.