David J. Gunkel (Northern Illinois University) has posted “The Rights of Robots” (in A. A. Nakagawa and C. Douzinas (Eds.), Non-Human Rights: Critical Perspectives, Edward Elgar) on SSRN. Here is the abstract:
Robot rights have been in play and operational from the moment the robot appeared on the stage of history. And the question “Can or should robots have rights?” is currently the site of a contentious debate. This chapter engages in an analysis of the terms and conditions of the dispute. It therefore does not take sides in the existing conflict by advocating for one position over and against the other. Instead, it identifies and critically evaluates the common set of shared values and fundamental assumptions that both sides already endorse and support in order to enter into conflict in the first place. And it does so in order to devise an alternative strategy that may be better suited to responding to and taking responsibility for the moral and legal opportunities and challenges that we currently confront in the face or the faceplate of robots, AI, and other seemingly intelligent artifacts.
Fernando Delgado (Cornell University), Solon Barocas (Microsoft Research; Cornell University), and Karen Levy (Cornell University) have posted “An Uncommon Task: Participatory Design in Legal AI” (ACM Proceedings on Human-Computer Interaction 2022) on SSRN. Here is the abstract:
Despite growing calls for participation in AI design, there are to date few empirical studies of what these processes look like and how they can be structured for meaningful engagement with domain experts. In this paper, we examine a notable yet understudied AI design process in the legal domain that took place over a decade ago, the impact of which still informs legal automation efforts today. Specifically, we examine the design and evaluation activities that took place from 2006 to 2011 within the Text REtrieval Conference’s (TREC) Legal Track, a computational research venue hosted by the National Institute of Standards and Technology. The Legal Track of TREC is notable in the history of AI research and practice because it relied on a range of participatory approaches to facilitate the design and evaluation of new computational techniques—in this case, for automating attorney document review for civil litigation matters. Drawing on archival research and interviews with coordinators of the Legal Track of TREC, our analysis reveals how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers and helped bridge the chasm between computational research and real-world, high-stakes litigation practice. In analyzing this case from the recent past, our aim is to empirically ground contemporary critiques of AI development and evaluation and the calls for greater participation as a means to address them.
Alicia Solow-Niederman (Harvard Law School) has posted “Algorithmic Grey Holes” (Journal of Law & Innovation (Forthcoming 2022)) on SSRN. Here is the abstract:
It’s almost a cliché to talk about algorithms as “black boxes” that resist human understanding. This frame emphasizes opacity, suggesting that the inability to see inside the algorithm is the problem. If a lack of transparency is the problem, then procedural measures to enhance access to the algorithm—whether by requiring audits, by adjusting the technological parameters of the tool to make it more “explainable,” or by pushing back against proprietary claims made by private vendors—are the natural solution.
But the relentless pursuit of transparency blinks the reality that algorithmic accountability is more complicated than opening a box. Neither critics nor backers of algorithmic tools have reckoned with a related, yet distinct challenge that emerges when the state employs algorithmic methods: algorithmic grey holes. Algorithmic grey holes occur when layers of procedure offer a bare appearance of legality, without accounting for whether legal remedies are in fact available to affected populations. Although opacity about how an algorithm works may contribute to a grey hole, reckoning with a grey hole demands more than transparency. A myopic emphasis on transparency understates not only the consequences for an individual, but also how a lack of effective individual review and redress can have systemic consequences for rule of law itself. This class of potential costs has not been adequately recognized.
This Essay puts the challenge of algorithmic grey holes and the threat to rule of law values, particularly for criminal justice applications, front and center. It evaluates the individual and societal stakes not only for criminal justice, but also for front-line enforcement decisions and the adjudication of benefits and burdens in civil settings. By forthrightly confronting these concerns, it becomes possible both to diagnose individual and societal algorithmic harms more effectively and to contemplate how technological tools might innovate in more helpful ways.
Giovanni De Gregorio (Oxford) and Pietro Dunn (Bologna) have posted “The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age” (Common Market Law Review 2022) on SSRN. Here is the abstract:
In recent years, risk has become a proxy and a parameter characterizing the European regulation of digital technologies. Nonetheless, European risk-based regulation in the digital age is multi-faceted in the approaches it takes. This work considers three examples: the General Data Protection Regulation; the proposal for the Digital Services Act; and the proposal for the Artificial Intelligence Act. These three instruments move across a spectrum, from a bottom-up approach (the GDPR) to a top-down architecture (the AI Act), passing through an intermediate stage (the DSA). We argue, however, that, despite their different methods, the three instruments share a common objective and project, i.e., they all seek to guarantee an optimal balance between innovation and the protection of rights, in line with the developing features of European (digital) constitutionalism. Through this lens, it is thus possible to grasp the fil rouge running through the GDPR, the DSA, and the AI Act, as they express a common constitutional aspiration and direction.