Solow-Niederman on Algorithmic Grey Holes

Alicia Solow-Niederman (Harvard Law School) has posted “Algorithmic Grey Holes” (Journal of Law & Innovation, forthcoming 2022) on SSRN. Here is the abstract:

It’s almost a cliché to talk about algorithms as “black boxes” that resist human understanding. This frame emphasizes opacity, suggesting that the inability to see inside the algorithm is the problem. If a lack of transparency is the problem, then procedural measures to enhance access to the algorithm—whether by requiring audits, by adjusting the technological parameters of the tool to make it more “explainable,” or by pushing back against proprietary claims made by private vendors—are the natural solution.

But the relentless pursuit of transparency blinks the reality that algorithmic accountability is more complicated than opening a box. Neither critics nor backers of algorithmic tools have reckoned with a related, yet distinct challenge that emerges when the state employs algorithmic methods: algorithmic grey holes. Algorithmic grey holes occur when layers of procedure offer a bare appearance of legality, without accounting for whether legal remedies are in fact available to affected populations. Although opacity about how an algorithm works may contribute to a grey hole, reckoning with a grey hole demands more than transparency. A myopic emphasis on transparency understates not only the consequences for an individual, but also how a lack of effective individual review and redress can have systemic consequences for rule of law itself. This class of potential costs has not been adequately recognized.

This Essay puts the challenge of algorithmic grey holes and the threat to rule of law values, particularly for criminal justice applications, front and center. It evaluates the individual and societal stakes not only for criminal justice, but also for front-line enforcement decisions and the adjudication of benefits and burdens in civil settings. By forthrightly confronting these concerns, it becomes possible both to diagnose individual and societal algorithmic harms more effectively and to contemplate how technological tools might innovate in more helpful ways.