Kesan & Zhang on When Is A Cyber Incident Likely to Be Litigated and How Much Will It Cost? An Empirical Study 

Jay P. Kesan (University of Illinois College of Law) and Linfeng Zhang (University of Illinois Department of Mathematics) have posted “When Is A Cyber Incident Likely to Be Litigated and How Much Will It Cost? An Empirical Study” (Connecticut Insurance Law Journal, Forthcoming) on SSRN. Here is the abstract:

Numerous cyber incidents have shown that there are substantial legal risks associated with these events. However, empirical analysis of the legal aspects of cyber risk is largely missing from the existing literature. Based on a dataset of historical cyber incidents and cyber-related litigation cases, we provide one of the earliest quantitative studies on the likelihood of cyber incidents being litigated and the cost of settling a cyber-related case. Using regression models, we show that certain company and incident characteristics play an important role in determining the litigation probability and settlement costs, and the models proposed in the paper display good explanatory power. Our findings show that the lack of Article III standing is commonplace in cyber-related cases and that relying solely on the common law system makes it difficult for victims of malicious data breaches to sue and receive legal remedies. In addition, we demonstrate that our findings have valuable implications for enterprise risk management in terms of how the legal risk associated with different types of cyber risk should be properly addressed.

Almada on Automated Decision-Making as a Data Protection Issue

Marco Almada (European University Institute – Department of Law) has posted “Automated Decision-Making as a Data Protection Issue” on SSRN. Here is the abstract:

Artificial intelligence techniques have been used to automate various procedures in modern life, ranging from ludic applications to substantial decisions about the lives of individuals and groups. Given the variety of automated decision-making applications and the different ways in which decisions may harm humans, the law has struggled to provide adequate responses to automation. This paper examines the role of a specific branch of law, data protection law, in the regulation of artificial intelligence. Data protection law is applicable to automation scenarios that rely on data about natural persons, and it seeks to address risks to these persons through three approaches: allowing persons to exercise rights against specific automated decisions, disclosing information about the decision-making systems, and imposing design requirements on those systems. By exploring the potential and limits of these approaches, this paper presents a portrait of the relevance of data protection law for regulating AI.