Blass on Observing the Effects of Automating the Judicial System with Behavioral Equivalence

Joseph Blass (Northwestern University Pritzker School of Law; Northwestern University – Dept. Electrical Engineering & Computer Science) has posted “Observing the Effects of Automating the Judicial System with Behavioral Equivalence” (South Carolina Law Review, Vol. 72, No. 4, 2022) on SSRN. Here is the abstract:

Building on decades of work in Artificial Intelligence, legal scholars have begun to consider whether components of the judicial system could be replaced by computers. Much of the scholarship in AI and Law has focused on whether such automated systems could reproduce the reasoning and outcomes produced by the current system. This scholarly framing captures many aspects of judicial processes but overlooks how automated judicial decision-making would likely change the way participants in the legal system interact with it, and how those changes would affect societal interests outside that system that care about its processes.

This Article demonstrates how scholarship on legal automation has come to leave out perspectives external to the process of judicial decision-making. It analyzes the problem using behavioral equivalence, a Computer Science concept that assesses systems’ behaviors according to the observations of specific monitors of those systems. It introduces a framework to examine the various observers of the judicial process and the tradeoffs they may perceive when legal systems are automated. This framework will help scholars and policymakers more effectively anticipate the consequences of automating components of the legal system.
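
The Article's own framework is not reproduced here, but the underlying Computer Science idea is easy to sketch: two systems are behaviorally equivalent only relative to a particular observer, so systems that look identical to one monitor (say, one that sees only outcomes) may be distinguishable to another (one that also sees the process). A minimal, hypothetical Python illustration, with all names invented for this example rather than taken from the paper, might look like this:

```python
# Minimal sketch of observer-relative behavioral equivalence.
# Two hypothetical "judicial" processes produce identical outcomes but
# different interaction traces; whether they count as equivalent depends
# on what the observer is able to see.

def human_process(case):
    trace = ["hearing held", "arguments heard", "ruling issued"]
    outcome = "granted" if case["merit"] > 0.5 else "denied"
    return outcome, trace

def automated_process(case):
    trace = ["features extracted", "model scored", "ruling issued"]
    outcome = "granted" if case["merit"] > 0.5 else "denied"
    return outcome, trace

def equivalent_for(observer, cases, proc_a, proc_b):
    """True if the two processes are indistinguishable to `observer`,
    where `observer` projects each run onto what that monitor can see."""
    return all(observer(*proc_a(c)) == observer(*proc_b(c)) for c in cases)

outcome_observer = lambda outcome, trace: outcome           # sees results only
process_observer = lambda outcome, trace: (outcome, trace)  # sees the interaction too

cases = [{"merit": 0.3}, {"merit": 0.9}]
print(equivalent_for(outcome_observer, cases, human_process, automated_process))  # True
print(equivalent_for(process_observer, cases, human_process, automated_process))  # False
```

On this reading, an outcome-only monitor would call the two systems equivalent, while a monitor that also observes the process would not; this is the kind of observer-dependence the abstract points to.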

Lavi on Do Platforms Kill?

Michal Lavi (Hebrew University of Jerusalem – Faculty of Law) has posted “Do Platforms Kill?” (Harvard Journal of Law and Public Policy, Vol. 43, No. 2, 2020) on SSRN. Here is the abstract:

Terror kills, inciting words can kill, but what about online platforms? In recent years, social networks have turned into a new arena for incitement. Terror organizations operate active accounts on social networks. They incite, recruit, and plan terror attacks by using online platforms. These activities pose a serious threat to public safety and security. Online intermediaries such as Facebook, Twitter, and YouTube provide online platforms that make it easier for terrorists to meet and proliferate in ways that were not dreamed of before. Thus, terrorists are able to cluster, exchange ideas, and promote extremism and polarization. In such an environment, do platforms that host inciting content bear any liability? What about intermediaries operating internet platforms that direct extremist and unlawful content at susceptible users, who, in turn, engage in terrorist activities? Should intermediaries bear civil liability for algorithm-based recommendations on content, connections, and advertisements? Should algorithmic targeting enjoy the same protections as traditional speech? This Article analyzes intermediaries’ civil liability for terror attacks under the anti-terror statutes and other doctrines in tort law. It aims to contribute to the literature in several ways. First, it outlines the ways intermediaries aid terrorist activities, either willingly or unwittingly. By identifying the role online intermediaries play in terrorist activities, one may take the first step toward creating a legal policy that would mitigate the harm caused by terrorists’ incitement over the internet. Second, this Article outlines a minimum standard of civil liability that should be imposed on intermediaries for speech made by terrorists on their platforms. Third, it highlights the contradictions between intermediaries’ policies regarding harmful content and the technologies that create personalized experiences for users, which can sometimes recommend unlawful content and connections. This Article proposes the imposition of a duty on intermediaries that would incentivize them to avoid the creation of unreasonable risks caused by personalized algorithmic targeting of unlawful messages. This goal can be achieved by implementing effective measures at the design stage of a platform’s algorithmic code. Subsequently, this Article proposes remedies and sanctions under tort, criminal, and civil law while balancing freedom of speech, efficiency, and the promotion of innovation. The Article concludes with a discussion of complementary approaches that intermediaries may take to voluntarily mitigate terrorists’ harm.

Griffin on Artificial Intelligence and Liability in Health Care

Frank Griffin (University of Arkansas) has posted “Artificial Intelligence and Liability in Health Care” (31 Health Matrix: Journal of Law-Medicine 65-106 (2021)) on SSRN. Here is the abstract:

Artificial intelligence (AI) is revolutionizing medical care. Patients with problems ranging from Alzheimer’s disease to heart attacks to sepsis to diabetic eye problems are potentially benefiting from the inclusion of AI in their medical care. AI is likely to play an ever-expanding role in health care liability in the future. AI-enabled electronic health records are already playing an increasing role in medical malpractice cases. AI-enabled surgical robot lawsuits are also on the rise. Understanding the liability implications of AI in the health care system will help facilitate its incorporation and maximize the potential patient benefits. This paper discusses the unique legal implications of medical AI in existing products liability, medical malpractice, and other law.

Gerke, Babic, Evgeniou, and Cohen on The Need for a System View to Regulate AI/ML Software as Medical Device

Sara Gerke (Harvard University – Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics), Boris Babic, Theodoros Evgeniou (INSEAD), and I. Glenn Cohen (Harvard Law School) have posted “The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device” (NPJ Digit Med. 2020 Apr 7;3:53) on SSRN. Here is the abstract:

Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, which regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing for AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently proposed a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective, from a product view to a system view, is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA, which are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.