Mökander et al. on The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act

Jakob Mökander (Oxford Internet Institute) et al. have posted “The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?” (Minds and Machines 2022) on SSRN. Here is the abstract:

On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).

Paul on The Politics of Regulating Artificial Intelligence Technologies

Regine Paul (U Bergen) has posted “The Politics of Regulating Artificial Intelligence Technologies: A Competition State Perspective” (Handbook on Public Policy and Artificial Intelligence, edited by Regine Paul, Emma Carmel and Jennifer Cobbe (Elgar, forthcoming)) on SSRN. Here is the abstract:

This chapter introduces and critically evaluates alternative conceptualizations of public regulation of AITs in what is still a nascent field of research. As often in new regulatory domains, there is a tendency both to re-invent the wheel – by disregarding insights from neighboring policy domains (e.g. nano-technology or aviation) – and to create silos of research – by failing to link up and systematize existing accounts in a wider context of regulatory scholarship. The aim of this chapter is to counter both tendencies: first by offering a systematic review of existing social science publications on AIT regulation, second by situating this review in the larger research landscape on (technology) regulation. This opens up space for problematizing the relative dominance of narrow and rather apolitical concepts of AI regulation in parts of the literature so far. In line with the aims of this Handbook (Paul 2022), I outline a critical political economy perspective that helps expose the politics of regulating AITs beyond applied ethics or “rational” risk-based interventions. Throughout the chapter, I use illustrative examples from my own primary research (documents and semi-structured expert interviews) on how the EU Commission narrates and seeks to enact its proposed AI Act.

Ranchordas on Smart Cities, Artificial Intelligence and Public Law

Sofia Ranchordas (U Groningen Law; LUISS) has posted “Smart Cities, Artificial Intelligence and Public Law: An Unchained Melody” on SSRN. Here is the abstract:

Governments and citizens are by definition in an unequal relationship. Public law has sought to address this power asymmetry with different legal principles and instruments. However, in the context of smart cities, the inequality between public authorities and citizens is growing, particularly for vulnerable citizens. This paper explains this phenomenon in light of the dissonance between the rationale, principles and instruments of public law and the practical implementation of AI in smart cities. It argues first that public law overlooks that smart cities are complex phenomena that pose novel and different legal problems. Smart cities are strategies, products, narratives, and processes that reshape the relationship between governments and citizens, often excluding citizens who are not deemed ‘smart’. Second, smart urban solutions tend to be primarily predictive as they seek to anticipate, for example, crime, traffic congestion or pollution. By contrast, public law principles and tools remain reactive or responsive, failing to regulate potential harms caused by predictive systems. In addition, public law remains focused on the need to constrain human discretion and individual flaws rather than systemic errors and datafication systems which place citizens in novel categories. This paper discusses the dissonance between public law and smart urban solutions, presenting the smart city as a corporate narrative which, with its attempts to optimise citizenship, inevitably excludes thousands of citizens.

Narayanan & Tan on Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI

Devesh Narayanan (National University of Singapore) and Zhi Ming Tan (Cornell) have posted “Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI” on SSRN. Here is the abstract:

It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if we were to account for the rich and diverse moral reasons that ground the call for explainable AI, and fully consider what it means to “trust” AI in a full-blooded sense of the term, we would uncover a deep and persistent tension between the two principles. For explainable AI to usefully serve the pursuit of normatively desirable goals, decision-makers must carefully monitor and critically reflect on the content of an AI-DST’s explanation. This entails a deliberative attitude. Conversely, the call for full-blooded trust in AI-DSTs implies the disposition to put questions about their reliability out of mind. This entails an unquestioning attitude. As such, the joint pursuit of explainable and trusted AI calls on decision-makers to simultaneously adopt incompatible attitudes towards their AI-DST, which leads to an intractable implementation gap. We analyze this gap and explore its broader implications, suggesting that we may need alternative theoretical conceptualizations of what explainability and trust entail, and/or alternative decision-making arrangements that assign the requirements of trust and deliberation to different parties.

Coglianese & Hefter on From Negative to Positive Algorithm Rights

Cary Coglianese (U Penn Law) and Kat Hefter (same) have posted “From Negative to Positive Algorithm Rights” (Wm. & Mary Bill Rts. J., forthcoming) on SSRN. Here is the abstract:

Artificial intelligence, or “AI,” is raising alarm bells. Advocates and scholars propose policies to constrain or even prohibit certain AI uses by governmental entities. These efforts to establish a negative right to be free from AI stem from an understandable motivation to protect the public from arbitrary, biased, or unjust applications of algorithms. This movement to enshrine protective rights follows a familiar pattern of suspicion that has accompanied the introduction of other technologies into governmental processes. Sometimes this initial suspicion of a new technology later transforms into widespread acceptance and even a demand for its use. In this paper, we show how three now-accepted technologies—DNA analysis, breathalyzers, and radar speed detectors—traversed a path from initial resistance to a positive right that demands their use. We argue that current calls for a negative right to be free from digital algorithms may dissipate over time, with the public and the legal system eventually embracing, if not even demanding, the use of AI. Increased recognition that the human-based status quo itself leads to unacceptable errors and biases may contribute to this transformation. A negative rights approach, after all, may only hamper the development of technologies that could lead to improved governmental performance. If AI tools are allowed to mature and become more standardized, they may also be accompanied by greater reliance on qualified personnel, robust audits and assessments, and meaningful oversight. Such maturation in the use of AI tools may lead to demonstrable improvements over the status quo, which eventually might well justify assigning a positive right to their use in the performance of governmental tasks.

Schultz on The Right of Publicity: A New Framework for Regulating Facial Recognition

Jason Schultz (NYU Law) has posted “The Right of Publicity: A New Framework for Regulating Facial Recognition” (Brooklyn Law Review, forthcoming) on SSRN. Here is the abstract:

For over a century, the right of publicity (ROP) has protected individuals from unwanted commercial exploitation of their images and identities. Originating around the turn of the Twentieth Century in response to the newest image-appropriation technologies of the time, including portrait photography, mass-production packaging, and a ubiquitous printing press, the ROP has continued to evolve along with each new wave of technologies that enable companies to exploit peoples’ images and identities for commercial gain. Over time, the ROP has protected individuals from misappropriation in photographs, films, advertisements, action figures, baseball cards, animatronic robots, video game avatars, and even digital resurrection in film sequels. Critically, as new technologies gained capacity for mass appropriation, the ROP expanded to protect against these practices.

The newest example of such a technology is facial recognition (FR). Facial recognition systems derive their primary economic value from commercially exploiting massive facial image databases filled with millions of individual likenesses and identities, often obtained without sufficient consent. Such appropriations go beyond mere acquisition, playing critical roles in training FR algorithms, matching identities to new images, and displaying results to users. Without the capacity to appropriate and commercially exploit these images and identities, most FR systems would fail to function as commercial products.

In this article, I develop a novel theory for how ROP claims could apply to FR systems and detail how their history and development, both statutory and common law, demonstrate their power to impose liability on entities that conduct mass image and identity appropriation, especially through innovative visual technologies. This provides a robust framework for FR regulation while at the same time balancing issues of informed consent and various public interest concerns, such as compatibility with copyright law and First Amendment-protected news reporting.

Gutierrez et al. on Defining General Purpose Artificial Intelligence Systems

Carlos Ignacio Gutierrez (RAND; ASU Law; Future of Life Institute); Anthony Aguirre (Future of Life Institute); Risto Uuk (same); Claire Boine (University of Ottawa Faculty of Law; Artificial and Natural Intelligence Toulouse Institute); and Matija Franklin (University College London – Department of Experimental Psychology) have posted “A Proposal for a Definition of General Purpose Artificial Intelligence Systems” on SSRN. Here is the abstract:

The European Union (EU) is in the middle of comprehensively regulating artificial intelligence (AI) through an effort known as the AI Act. Within the vast spectrum of issues under the Act’s aegis, the treatment of technologies classified as general purpose AI systems (GPAIS) merits special consideration. In particular, existing proposals to define GPAIS do not provide sufficient guidance to distinguish these systems from those designed to perform specific tasks, termed fixed-purpose AI. Thus, our working paper has three objectives: first, to highlight the variance and ambiguity in the interpretation of GPAIS in the literature; second, to examine the dimensions of generality of purpose available to define GPAIS; and third, to propose a functional definition of the term that facilitates its governance within the EU. Our intention with this piece is to spark a discussion that improves the hard and soft law efforts to mitigate these systems’ risks and protect the well-being and future of constituencies in the EU and globally.

Puaschunder on Digital Inequality

Julia M. Puaschunder (Columbia University; New School for Social Research; Harvard University; The Situationist Project on Law and Mind Sciences) has posted “Digital Inequality: A Research Agenda” (Proceedings of the 28th RAIS, June 2022) on SSRN. Here is the abstract:

We live in the age of digitalization. Digital disruption is the advancement of our lifetimes. Never before in the history of humankind have human beings given up as much decision-making autonomy as today to a growing body of artificial intelligence (AI). Digitalization features a wave of self-learning entities that generate information from exponentially growing big data sources encroaching on every aspect of our daily lives. Inequality is one of the most significant and pressing concerns of our times. Ample evidence exists in economics, law and historical studies that multiple levels of inequality dominate the current socio-dynamics, politics and living conditions around the world. Social inequality stretches from societal levels within nation states to global dimensions, and extends to intergenerational domains. Yet while digitalization and inequality are predominant features of our times, hardly any information exists on the inequality inherent in digitalization. This paper breaks new ground by theoretically arguing that inequality is an overlooked by-product of innovative change, featuring concrete examples, insights and applications in the digitalization domain. A multi-faceted analysis draws a contemporary account of digital inequality from behavioral economic, macroeconomic, comparative and legal economic perspectives. The paper aims to aid academics and practitioners in understanding the advantages of digitalization as well as the potential inequalities embedded in it. It seeks to capture the Zeitgeist of the current digital disruption, which heralds unexpected inequalities stemming from innovative change, and to open readers’ eyes to our times holistically: their innovative capacities, but also the potentially unequal societal, international and intertemporal gains and losses from digitalization.

Khan & Hanna on The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability

Mehtab Khan (Yale Law School) and Alex Hanna (Distributed AI Research Institute) have posted “The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability” (19 Ohio St. Tech. L.J., forthcoming 2023) on SSRN. Here is the abstract:

The datasets used to train and build AI technologies have received increased attention from the computer science and social science research communities, but less from legal scholarship. Both Large-Scale Language Datasets (LSLDs) and Large-Scale Computer Vision Datasets (LSCVDs) have been at the forefront of such discussions, due to recent controversies involving the use of facial recognition technologies and the use of publicly available text to train massive models which generate human-like text. Many of these datasets serve as “benchmarks” to develop models that are used in both academic and industry research, while others are used solely for training models. The process of developing LSLDs and LSCVDs is complex and contextual, involving dozens of decisions about what kinds of data to collect, label, and train a model on, as well as how to make the data available to other researchers. However, little attention has been paid to mapping and consolidating the legal issues that arise at different stages of this process: when the data is collected, when it is used to build and evaluate models and applications, and when it is distributed more widely.

In this article, we offer four main contributions. First, we describe what kinds of objects these datasets are, how many different kinds exist, what types of modalities they encompass, and why they are important. Second, we provide more clarity about the stages of dataset development – a process that has thus far been subsumed within broader discussions about bias and discrimination – and the subjects who may be susceptible to harms at each point of development. Third, we provide a matrix of the stages and subjects of dataset development, which traces the connections between stages and subjects. Fourth, we use this analysis to identify some basic legal issues that arise at the various stages in order to foster a better understanding of the dilemmas and tensions that arise at each stage. We situate our discussion within current debates and proposals related to algorithmic accountability. This paper fills an essential gap in comprehending the complicated landscape of legal issues connected to datasets and the gigantic AI models trained on them.

Hutson & Winters on Algorithmic Disgorgement

Jevan Hutson and Ben Winters (Electronic Privacy Information Center) have posted “America’s Next ‘Stop Model!’: Algorithmic Disgorgement” on SSRN. Here is the abstract:

Beginning with its 2019 final order In the Matter of Cambridge Analytica, LLC, followed by a May 2021 decision and order In the Matter of Everalbum, Inc. in the context of facial recognition technology and affirmed by its March 2022 stipulated order in United States of America v. Kurbo, Inc. et al. in the context of children’s privacy, the United States Federal Trade Commission now wields algorithmic disgorgement—effectively the destruction of algorithms and models built upon unfairly or deceptively sourced (i.e., ill-gotten) data—as a consumer protection tool in its ongoing, uphill battle against unfair and deceptive practices in an increasingly data-driven world. The thesis of this Article is that algorithmic disgorgement is (i) an essential tool for consumer protection enforcement to address the complex layering of unfairness and deception common in data-intensive products and businesses and (ii) worthy of express endorsement by lawmakers and immediate use by consumer protection law enforcement. To that end, the Article will explain how the harms of algorithms built on and enhanced by ill-gotten data are layered, hard to trace, and require an enforcement tool that is consequently comprehensive and effective as a deterrent. This Article first traces the development of algorithmic disgorgement in the United States and then situates that development within historical and other current US consumer protection law enforcement mechanisms. From there, this Article reflects on the need for and importance of algorithmic disgorgement and broader consumer protection enforcement for issues of unfairness and deception in AI, highlighting the significance of the Kurbo case being a violation of a children’s privacy law, which does not have a corollary for adults in the U.S. Ultimately, this Article argues that (i) state and federal lawmakers should enshrine algorithmic disgorgement into law to insulate it from potential challenge and (ii) state and federal consumer protection law enforcement entities ought to wield algorithmic disgorgement more aggressively to remedy and deter unfair and deceptive practices.