Howells & Twigg-Flesner on Interconnectivity and Liability: AI and the Internet of Things

Geraint Howells (University of Manchester) and Christian Twigg-Flesner (University of Warwick – School of Law) have posted “Interconnectivity and Liability: AI and the Internet of Things” (in Larry DiMatteo et al. (eds.), Artificial Intelligence: Global Perspectives on Law & Ethics, forthcoming) on SSRN. Here is the abstract:

In this paper, we focus on the question of liability in circumstances where an IoT system has not performed as expected and where this has resulted in loss or damage of some kind. We will argue that the combination of AI and the IoT raises several novel aspects concerning the basis for assessing responsibility and for allocating liability for loss or damage, and that this will necessitate the development of a more creative approach to liability than that generally followed in many legal systems. Most legal systems combine linear liability based on contractual relationships with fault-based or strict liability imposed on a wrongdoer in tort law. We seek to demonstrate that this approach is no longer sufficient to deal with the complex issues associated with the interaction of AI and the IoT, and to offer possible solutions. Our discussion will proceed as follows: first, we will explain the nature of an IoT system in general terms, drawing on case studies from both the consumer and commercial spheres to illustrate this. We will then focus on the role of AI in the operation of an IoT system. Second, we will analyze the particular issues that arise where an AI-driven IoT system malfunctions and causes loss or damage, and the specific legal questions this raises. Third, we will examine to what extent legal systems (particularly those of the UK and the EU) are currently able to address these questions, and identify aspects that require action, whether in the form of legislation or some other intervention. Finally, we will propose an alternative approach for addressing the liability challenges arising in this particular context.

Naughton on Facebook’s Decision to Uphold the Ban on Donald Trump and its Consequence in Social Media Censorship Regulations

James Naughton (Loyola University Chicago School of Law) has posted “Facebook’s Decision to Uphold the Ban on Donald Trump and its Consequence in Social Media Censorship Regulations” on SSRN. Here is the abstract:

This short comment addresses Facebook’s recent decision to ban President Trump from its platforms. It argues that, while the U.S. Supreme Court has not yet applied the First Amendment to the “vast democratic forums of the internet,” the time is ripe for guidance from the courts. This is especially true now that social media has become a main source of communication for governmental figures and entities.

Waldman on Outsourcing Privacy

Ari Ezra Waldman (Northeastern University) has posted “Outsourcing Privacy” (Notre Dame Law Review, Vol. 96, 2021) on SSRN. Here is the abstract:

An underappreciated part of the narrative of privacy managerialism—and the focus of this Essay—is the information industry’s increasing tendency to outsource privacy compliance responsibilities to technology vendors. In the last three years alone, the International Association of Privacy Professionals has identified more than 250 companies in the privacy technology vendor market. These companies market their products as tools to help companies comply with new privacy laws like the General Data Protection Regulation, with consent orders from the Federal Trade Commission, and with other privacy rules from around the world. They do so by building compliance templates, pre-completed assessment forms, and consent-monitoring tools, among many other things. In doing so, many of these companies are doing far more than helping their clients identify the data they have or answer data access requests; many of them are instantiating their own definitions and interpretations of complex privacy laws into the technologies they create, and doing so only with managerial values in mind. This undermines privacy law in four ways: it creates asymmetry between large technology companies and their smaller competitors, it makes privacy law underinclusive by limiting it to those requirements that can be written into code, it erodes expertise by outsourcing human work to artificial intelligence and automated systems, and it creates a “black box” that undermines accountability.