Guerra, Parisi & Pi on Liability for Robots II: An Economic Analysis

Alice Guerra (University of Bologna – Department of Economics), Francesco Parisi (University of Minnesota – Law School), and Daniel Pi (University of Maine – School of Law) have posted “Liability for Robots II: An Economic Analysis” (Journal of Institutional Economics 2021) on SSRN. Here is the abstract:

This is the second of two companion papers that discuss accidents caused by robots. In the first paper (Guerra et al., 2021), we presented the novel problems posed by robot accidents and assessed the related legal approaches and institutional opportunities. In this paper, we build on the previous analysis to consider a novel liability regime, which we refer to as the "manufacturer residual liability" rule. This rule makes operators and victims liable for accidents due to their negligence, hence incentivizing them to act diligently, and makes manufacturers residually liable for non-negligent accidents, hence incentivizing them to make optimal investments in R&D for robot safety. In turn, this rule will bring down the price of safer robots, driving unsafe technology out of the market. Thanks to the percolation effect of residual liability, operators will also be incentivized to adopt optimal activity levels in robot usage.
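
To make the incentive structure concrete, here is a minimal toy model of the rule as the abstract describes it. This is my own illustration, not the authors' formal analysis: the probabilities, costs, and functional forms are all hypothetical. The point is simply that an operator facing negligence liability prefers diligence, while a manufacturer facing residual liability for non-negligent accidents trades off safety R&D against expected payouts.

```python
# Toy model of the "manufacturer residual liability" rule described above.
# All numbers and functional forms are hypothetical illustrations.

HARM = 100.0      # hypothetical loss if an accident occurs
CARE_COST = 2.0   # operator's cost of acting diligently

def accident_prob(safety_investment: float, diligent: bool) -> float:
    """Hypothetical accident probability: falls with manufacturer safety
    R&D and rises if the operator is negligent."""
    base = 0.10 / (1.0 + safety_investment)
    return base if diligent else 3.0 * base

def operator_cost(safety_investment: float, diligent: bool) -> float:
    """Operator bears the cost of care, plus the harm only when negligent."""
    p = accident_prob(safety_investment, diligent)
    expected_liability = 0.0 if diligent else p * HARM
    return (CARE_COST if diligent else 0.0) + expected_liability

def manufacturer_cost(safety_investment: float, diligent: bool) -> float:
    """Manufacturer bears R&D costs plus residual liability for
    non-negligent accidents."""
    p = accident_prob(safety_investment, diligent)
    residual = p * HARM if diligent else 0.0
    return safety_investment + residual

if __name__ == "__main__":
    # The operator prefers diligence: care costs 2.0, versus an expected
    # negligence liability of 15.0 at this safety level.
    for diligent in (True, False):
        print(f"diligent={diligent}: operator cost = "
              f"{operator_cost(1.0, diligent):.2f}")
    # Residual liability pushes the manufacturer toward an interior
    # optimum of safety R&D rather than zero investment.
    for s in (0.0, 1.0, 4.0, 9.0):
        print(f"safety investment {s}: manufacturer cost = "
              f"{manufacturer_cost(s, True):.2f}")
```

In this sketch, zero safety investment leaves the manufacturer with an expected cost of 10.0, while a moderate investment lowers it to 6.0, which is the "percolation" logic in miniature: the residual liability rule makes safer designs privately cheaper to sell.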

Recommended.

Weissinger on AI, Complexity, and Regulation

Laurin Weissinger (Tufts University – The Fletcher School of Law and Diplomacy) has posted “AI, Complexity, and Regulation” (OUP Handbook on AI Governance, Forthcoming) on SSRN. Here is the abstract:

Regulating and governing AI will remain a challenge due to the inherent intricacy of how AI is deployed and used in practice. Regulatory effectiveness and efficiency vary inversely with system complexity and with the difficulty of clarifying objectives: the more complicated an area is, and the harder its objectives are to operationalize, the more difficult it is to regulate and govern. Safety regulations, while often concerned with complex systems like airplanes, benefit from measurable, clear objectives and uniform subsystems. AI has emergent properties and is not just "a technology": it is interwoven with organizations, people, and the wider social context. Furthermore, objectives like "fairness" are not only difficult to grasp and classify, but also change their meaning case by case.

The inherent complexity of AI systems will continue to complicate regulation and governance, but with appropriate investment, monetary and otherwise, that complexity can be tackled successfully. However, given the considerable power imbalance between those who deploy AI systems and those on whom they are used, successful regulation may be difficult to create and enforce. As such, AI regulation is more a political and socio-economic problem than a technical one.