Marchant on Swords and Shields: Impact of Private Standards in Technology-Based Liability

Gary E. Marchant (Arizona State University – College of Law) has posted “Swords and Shields: Impact of Private Standards in Technology-Based Liability” on SSRN. Here is the abstract:

Private voluntary standards are playing an ever greater role in the governance of many emerging technologies, including autonomous vehicles. Government regulation has lagged due to the ‘pacing problem,’ in which technology moves faster than government regulation, and regulators lack the first-hand information that rests mostly in the hands of industry and other experts in the field who often participate in standard-setting activities. Consequently, private standards have moved beyond historical tasks such as interoperability to now produce quasi-governmental policy specifications that address the risk management, governance, and privacy risks of emerging technologies. As the federal government has prudently concluded that promulgating government standards for autonomous vehicles would be premature and may do more harm than good, private standards have become the primary governance tool for these vehicles. A number of standard-setting organizations, including SAE, ISO, UL, and IEEE, have stepped forward to adopt a series of interlocking private standards that collectively govern autonomous vehicle safety. While these private standards were not developed with litigation in mind, they could provide a useful benchmark for judges and juries to use in evaluating the safety of autonomous vehicles and whether compensatory and punitive damages are appropriate after an injury-causing accident involving an autonomous vehicle. Drawing on several decades of relevant case law, this paper argues that a manufacturer’s conformance with private standards for autonomous vehicle safety should be a partial shield against liability, whereas failure to conform to such standards should be a partial sword that plaintiffs can use to show lack of due care.

Kaminski on Regulating the Risks of AI

Margot Kaminski (U Colorado Law School; Yale ISP; Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Regulating the Risks of AI” (Boston University Law Review, Vol. 103, forthcoming 2023) on SSRN. Here is the abstract:

Companies and governments now use Artificial Intelligence (AI) in a wide range of settings. But using AI creates well-known risks: not-yet-realized but potentially catastrophic future harms that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems.

This Article observes that constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Moreover, there are at least four models for risk regulation, each with divergent goals and methods. Emerging conflicts over AI risk regulation illustrate the tensions that arise when regulators employ one model of risk regulation while stakeholders call for another.

This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulties of non-quantifiable harms, and the dearth of mechanisms for public or stakeholder input).

Selbst & Barocas on Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law

Andrew D. Selbst (UCLA School of Law) and Solon Barocas (Microsoft Research; Cornell University) have posted “Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law” (University of Pennsylvania Law Review, Vol. 171, forthcoming). Here is the abstract:

The Federal Trade Commission has indicated that it intends to regulate discriminatory AI products and services. This is a welcome development, but its true significance has not been appreciated to date. This Article argues that the FTC’s flexible authority to regulate ‘unfair or deceptive acts or practices’ offers several distinct advantages over traditional discrimination law when applied to AI. The Commission can reach a wider range of commercial domains, a larger set of possible actors, a more diverse set of harms, and a broader set of business practices than are currently covered or recognized by discrimination law. For example, while most discrimination laws can address neither vendors that sell discriminatory software to decision-makers nor consumer products that work less well for certain demographic groups than for others, the Commission could address both. The Commission’s investigative and enforcement powers can also overcome many of the practical and legal challenges that have limited plaintiffs’ ability to successfully seek remedies under discrimination law. The Article demonstrates that the FTC has the existing authority to address the harms of discriminatory AI and offers a method for the Commission to tackle the problem, based on its existing approach to data security.