Margot Kaminski (University of Colorado Law School; Yale ISP; Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Regulating the Risks of AI” (Boston University Law Review, Vol. 103, forthcoming 2023) on SSRN. Here is the abstract:
Companies and governments now use Artificial Intelligence (AI) in a wide range of settings. But using AI leads to well-known risks—that is, not yet realized but potentially catastrophic future harms that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems.
This Article observes that constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Moreover, there are at least four models for risk regulation, each with divergent goals and methods. Emerging conflicts over AI risk regulation illustrate the tensions that arise when regulators employ one model of risk regulation while stakeholders call for another.
This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulty of non-quantifiable harms and the dearth of mechanisms for public or stakeholder input).