Law Commission of Ontario on The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada

The Law Commission of Ontario has posted “The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada” on SSRN. Here is the abstract:

Artificial intelligence (AI) and algorithms are often referred to as “weapons of math destruction.” Many systems are also credibly described as “a sophisticated form of racial profiling.” These views are widespread in many current discussions of AI and algorithms.
The Law Commission of Ontario (LCO) Issue Paper, The Rise and Fall of Algorithms in American Criminal Justice: Lessons for Canada, is the first of three LCO Issue Papers considering AI and algorithms in the Canadian justice system. The paper provides an important first look at the potential use and regulation of AI and algorithms in Canadian criminal proceedings. The paper identifies important legal, policy and practical issues and choices that Canadian policymakers and justice stakeholders should consider before these technologies are widely adopted in this country.

Levy, Chasalow & Riley on Algorithms and Decision-Making in the Public Sector

Karen Levy (Cornell University), Kyla Chasalow (University of Oxford), and Sarah Riley (Cornell University) have posted “Algorithms and Decision-Making in the Public Sector” (Annual Review of Law and Social Science, Vol. 17 (2021)) on SSRN. Here is the abstract:

This article surveys the use of algorithmic systems to support decision-making in the public sector. Governments adopt, procure, and use algorithmic systems to support their functions within several contexts—including criminal justice, education, and benefits provision—with important consequences for accountability, privacy, social inequity, and public participation in decision-making. We explore the social implications of municipal algorithmic systems across a variety of stages, including problem formulation, technology acquisition, deployment, and evaluation. We highlight several open questions that require further empirical research.

Ebers on Liability for AI and EU Consumer Law

Martin Ebers (Humboldt University of Berlin – Faculty of Law; University of Tartu, School of Law) has posted “Liability for Artificial Intelligence and EU Consumer Law” (Journal of Intellectual Property, Information Technology and Electronic Commerce Law) on SSRN. Here is the abstract:

The new Directives on Digital Contracts – the Digital Content and Services Directive (DCSD) 2019/770 and the Sale of Goods Directive (SGD) 2019/771 – are often seen as important steps in adapting European private law to the requirements of the digital economy. However, neither directive contains special rules for new technologies such as Artificial Intelligence (AI). In light of this issue, the following paper discusses whether existing EU consumer law is equipped to deal with situations in which AI systems are either used for internal purposes by companies or offered to consumers as the main subject matter of the contract. This analysis will reveal a number of gaps in current EU consumer law and briefly discuss upcoming legislation.

Johnson on Flexible Regulation for Artificial Intelligence

Walter G. Johnson (RegNet, Australian National University) has posted “Flexible Regulation for Dynamic Products? The Case of Applying Principles-Based Regulation to Medical Products Using Artificial Intelligence” (Law, Innovation and Technology 14(2)) on SSRN. Here is the abstract:

Emerging technologies including artificial intelligence (AI) enable novel products to have dynamic and even self-modifying designs, challenging approval-based products regulation. This article uses a proposed framework by the US Food and Drug Administration (FDA) to explore how flexible regulatory tools, specifically principles-based regulation, could be used to manage ‘dynamic’ products. It examines the appropriateness of principles-based approaches for managing the complexity and fragmentation found in the setting of dynamic products in terms of regulatory capacity and accountability, balancing flexibility and predictability, and the role of third parties. The article concludes that successfully deploying principles-based regulation for dynamic products will require taking serious lessons from the global financial crisis on managing complexity and fragmentation while placing equity at the centre of the framework.

Cohen on Lex Informatica to the Control Revolution

Julie E. Cohen (Georgetown University Law Center) has posted “From Lex Informatica to the Control Revolution” (Berkeley Technology Law Journal, 2022) on SSRN. Here is the abstract:

Legal scholarship on the encounter between networked digital technologies and law has focused principally on how legal and policy processes should respond to new technological developments and has spent much less time considering what that encounter might signify for the shape of legal institutions themselves. This essay focuses on the latter question. Within fields like technology studies, labor history, and economic sociology, there is a well-developed tradition of studying the ways that new information technologies and the “control revolution” they enabled—in brief, a quantum leap in the capacity for highly granular oversight and management—have elicited long-term, enduring changes in the structure and operation of economic organizations. I begin by considering some lessons of work in that tradition for law understood as a set of organizations constituted for the purpose of governance. Next, I turn the lens inward, offering some observations about techlaw scholarship that are essentially therapeutic. The disruptions of organizational change have affected scholars who teach, think, and write about techlaw in ways more profound than are commonly acknowledged and discussed. It seems fitting, in a symposium dedicated to Joel Reidenberg’s life and work, to use the process of grief as a device for exploring the arc of techlaw scholarship over its first quarter century. The fit is surprisingly good and the takeaways relatively clear: If, as I intend to suggest, the organizational forms that underpin our familiar legal institutions have been in the process of evolving out from under us, we still have choices to make about how legal institutions optimized for the information economy will be constituted. Finally, I identify two sets of important considerations that should inform the processes of organizational and institutional redesign.

Blass on Observing the Effects of Automating the Judicial System with Behavioral Equivalence

Joseph Blass (Northwestern University Pritzker School of Law; Northwestern University – Dept. of Electrical Engineering & Computer Science) has posted “Observing the Effects of Automating the Judicial System with Behavioral Equivalence” (South Carolina Law Review, Vol. 72, No. 4, 2022) on SSRN. Here is the abstract:

Building on decades of work in Artificial Intelligence, legal scholars have begun to consider whether components of the judicial system could be replaced by computers. Much of the scholarship in AI and Law has focused on whether such automated systems could reproduce the reasoning and outcomes produced by the current system. This scholarly framing captures many aspects of judicial processes, but overlooks how automated judicial decision-making likely would change how participants in the legal system interact with it, and how societal interests outside that system that care about its processes would be affected by those changes.

This Article demonstrates how scholarship on legal automation comes to leave out perspectives external to the process of judicial decision-making. It analyzes the problem using behavioral equivalence, a Computer Science concept that assesses systems’ behaviors according to the observations of specific monitors of those systems. It introduces a framework to examine the various observers of the judicial process and the tradeoffs they may perceive when legal systems are automated. This framework will help scholars and policymakers more effectively anticipate the consequences of automating components of the legal system.
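The core idea of behavioral equivalence is easy to see in code. The sketch below is a minimal, hypothetical illustration (the toy judges, observers, and case features are invented for this post, not drawn from the Article): two decision procedures can be indistinguishable to an observer who monitors only outcomes, yet distinguishable to an observer who also monitors the explanations offered.

```python
# Hypothetical sketch of behavioral equivalence: whether two systems count as
# "the same" depends on what a given observer can see of their behavior.

from dataclasses import dataclass

@dataclass
class Case:
    severity: int      # toy feature of a dispute
    precedent: bool    # toy feature of a dispute

def human_judge(case: Case) -> dict:
    # Rules on the case and offers a human-readable explanation.
    outcome = "liable" if case.severity > 5 and case.precedent else "not liable"
    return {"outcome": outcome, "explanation": "weighed severity against precedent"}

def automated_judge(case: Case) -> dict:
    # Reproduces the same outcomes, but offers no explanation.
    outcome = "liable" if case.severity > 5 and case.precedent else "not liable"
    return {"outcome": outcome, "explanation": None}

# Observers (monitors) are defined by what they can see of a ruling.
def outcome_observer(ruling: dict):
    return ruling["outcome"]

def process_observer(ruling: dict):
    return (ruling["outcome"], ruling["explanation"])

def behaviorally_equivalent(sys_a, sys_b, observer, cases) -> bool:
    """Two systems are equivalent *relative to an observer* if the observer
    cannot distinguish their behavior on any monitored input."""
    return all(observer(sys_a(c)) == observer(sys_b(c)) for c in cases)

cases = [Case(severity=s, precedent=p) for s in range(10) for p in (True, False)]

# Indistinguishable to a party who only sees who wins ...
print(behaviorally_equivalent(human_judge, automated_judge, outcome_observer, cases))  # True
# ... but distinguishable to an observer who also monitors the reasoning offered.
print(behaviorally_equivalent(human_judge, automated_judge, process_observer, cases))  # False
```

The toy example tracks the abstract’s point: whether automating a component of the judicial system changes anything depends on which observers of that system the analysis takes into account.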

Lavi on Do Platforms Kill?

Michal Lavi (Hebrew University of Jerusalem – Faculty of Law) has posted “Do Platforms Kill?” (Harvard Journal of Law and Public Policy, Vol. 43, No. 2, 2020) on SSRN. Here is the abstract:

Terror kills, inciting words can kill, but what about online platforms? In recent years, social networks have turned into a new arena for incitement. Terror organizations operate active accounts on social networks. They incite, recruit, and plan terror attacks by using online platforms. These activities pose a serious threat to public safety and security. Online intermediaries, such as Facebook, Twitter, YouTube, and others, provide online platforms that make it easier for terrorists to meet and proliferate in ways that were not dreamed of before. Thus, terrorists are able to cluster, exchange ideas, and promote extremism and polarization. In such an environment, do platforms that host inciting content bear any liability? What about intermediaries operating internet platforms that direct extremist and unlawful content at susceptible users, who, in turn, engage in terrorist activities? Should intermediaries bear civil liability for algorithm-based recommendations on content, connections, and advertisements? Should algorithmic targeting enjoy the same protections as traditional speech? This Article analyzes intermediaries’ civil liability for terror attacks under the anti-terror statutes and other doctrines in tort law. It aims to contribute to the literature in several ways. First, it outlines the way intermediaries aid terrorist activities either willingly or unwittingly. By identifying the role online intermediaries play in terrorist activities, one may lay down the first step towards creating a legal policy that would mitigate the harm caused by terrorists’ incitement over the internet. Second, this Article outlines a minimum standard of civil liability that should be imposed on intermediaries for speech made by terrorists on their platforms. Third, it highlights the contradictions between intermediaries’ policies regarding harmful content and the technologies that create personalized experiences for users, which can sometimes recommend unlawful content and connections. This Article proposes the imposition of a duty on intermediaries that would incentivize them to avoid the creation of unreasonable risks caused by personalized algorithmic targeting of unlawful messages. This goal can be achieved by implementing effective measures at the design stage of a platform’s algorithmic code. Subsequently, this Article proposes remedies and sanctions under tort, criminal, and civil law while balancing freedom of speech, efficiency, and the promotion of innovation. The Article concludes with a discussion of complementary approaches that intermediaries may take to voluntarily mitigate terrorists’ harm.

Griffin on Artificial Intelligence and Liability in Health Care

Frank Griffin (University of Arkansas) has posted “Artificial Intelligence and Liability in Health Care” (31 Health Matrix: Journal of Law-Medicine 65-106 (2021)) on SSRN. Here is the abstract:

Artificial intelligence (AI) is revolutionizing medical care. Patients with problems ranging from Alzheimer’s disease to heart attacks to sepsis to diabetic eye problems are potentially benefiting from the inclusion of AI in their medical care. AI is likely to play an ever-expanding role in health care liability in the future. AI-enabled electronic health records are already playing an increasing role in medical malpractice cases. AI-enabled surgical robot lawsuits are also on the rise. Understanding the liability implications of AI in the health care system will help facilitate its incorporation and maximize the potential patient benefits. This paper discusses the unique legal implications of medical AI in existing products liability, medical malpractice, and other law.

Gerke, Babic, Evgeniou, and Cohen on The Need for a System View to Regulate AI/ML Software as Medical Device

Sara Gerke (Harvard University – Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics), Boris Babic, Theodoros Evgeniou (INSEAD), and I. Glenn Cohen (Harvard Law School) have posted “The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device” (NPJ Digit Med. 2020 Apr 7;3:53) on SSRN. Here is the abstract:

Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, which regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing for AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently proposed a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA that are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.

Coglianese & Lampmann on Contracting for Algorithmic Accountability

Cary Coglianese (University of Pennsylvania Law School) and Erik Lampmann (University of Pennsylvania Law School) have posted “Contracting for Algorithmic Accountability” (Administrative Law Review Accord, Vol. 6, p. 175, 2021) on SSRN. Here is the abstract:

As local, state, and federal governments increase their reliance on artificial intelligence (AI) decision-making tools designed and operated by private contractors, so too do public concerns increase over the accountability and transparency of such AI tools. But current calls to respond to these concerns by banning governments from using AI will only deny society the benefits that prudent use of such technology can provide. In this Article, we argue that government agencies should pursue a more nuanced and effective approach to governing the governmental use of AI by structuring their procurement contracts for AI tools and services in ways that promote responsible use of algorithms. By contracting for algorithmic accountability, government agencies can act immediately, without any need for new legislation, to reassure the public that governmental use of machine-learning algorithms will be deployed responsibly. Furthermore, unlike with the adoption of legislation, a contracting approach to AI governance can be tailored to meet the needs of specific agencies and particular uses. Contracting can also provide a means for government to foster improved deployment of AI in the private sector, as vendors that serve government agencies may shift their practices more generally to foster responsible AI practices with their private sector clients. As a result, we argue that government procurement officers and agency officials should consider several key governance issues in their contract negotiations with AI vendors. Perhaps the most fundamental issue relates to vendors’ claims to trade secret protection—an issue that we show can be readily addressed during the procurement process. Government contracts can be designed to balance legitimate protection of proprietary information with the vital public need for transparency about the design and operation of algorithmic systems used by government agencies. We further urge consideration in government contracting of other key governance issues, including data privacy and security, the use of algorithmic impact statements or audits, and the role for public participation in the development of AI systems. In an era of increasing governmental reliance on artificial intelligence, public contracting can serve as an important and tractable governance strategy to promote the responsible use of algorithmic tools.