Brandon L. Garrett (Duke Law) and Cynthia Rudin (Duke Computer Science) have posted “The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice” on SSRN. Here is the abstract:
Artificial intelligence (AI) is increasingly used to make important decisions that affect individuals and society. As governments and corporations use AI ever more pervasively, one of the most troubling trends is that developers so often design it to be a “black box”: they create models too complex for people to understand, or they conceal how the AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessment, and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.
The champions and critics of AI have something in common: both sides argue that we face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error-prone, it reflects pre-existing racial and socio-economic disparities, and any AI system must be used by decisionmakers like lawyers and judges—who must understand it.
Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI a black box, and given the substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should be—the bottom line is that glass box AI can better accomplish both fairness and public safety goals. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.