Lior on Innovating Liability: The Virtuous Cycle of Torts, Technology and Liability Insurance

Anat Lior (Drexel Law; Yale ISP) has posted “Innovating Liability: The Virtuous Cycle of Torts, Technology and Liability Insurance” (25 Yale J. L. & Tech. 448 (2023)) on SSRN. Here is the abstract:

Emerging technologies, such as artificial intelligence and quantum computing, are predicted to grow exponentially over the next decade. This growth should lead to a substantial economic impact on various commercial markets, but it will also lead to different types of harms. These may include physical harms, such as a chess robot breaking a child’s finger, or non-physical harms, such as excessive privacy breaches and cyberattacks enabled by quantum computing. While considering the safe integration of emerging technologies into our commercial stream, stakeholders often overlook the vital role of insurance. So far, scholars have identified different roles insurance holds, such as spreading and reducing risk. This Article identifies a new role insurance has in the context of emerging technologies—enabling safe and productive innovation.

The novelty of emerging technologies leads to difficulties in premium estimations and setting the terms of a liability policy to genuinely reflect the risks associated with an emerging technology. Despite this difficulty, insurance possesses the ability to enhance the integration of emerging technologies into daily commercial routines while mitigating the harms that may arise from this process. Throughout history, from the industrial revolution to outer space exploration, insurance has allowed innovative manufacturers to pursue breakthrough technologies while hedging their risks.

The intersection of torts, technology and liability insurance is perpetually developing as each field continuously fuels the others. Emerging technologies lead to new types of risks and losses, creating new liability rules, which in turn drive the purchase of liability insurance. Other times, tort law reacts slowly to harms caused by emerging technology, leading first to the purchase of liability insurance and only then to the formation of liability rules, which are influenced by the existence of these policies. Yet in other instances, the existence of liability rules and insurance helps facilitate the safe dissemination of emerging technologies into our stream of commerce. This virtuous cycle is a dominant one in the realm of liability law. However, to date, little has been written about the interplay between these three fields.

This Article challenges the notion that insurance is inadequate to cover emerging technologies given their novelty. It argues that insurance holds a vital underexplored role in advancing safe and healthy innovation and that, as a result, regulators should actively ensure its availability to both manufacturers and consumers. It aims to flesh out the influence torts, liability insurance and emerging technologies have on each other. Liability insurance allows consumers and manufacturers of emerging technologies to innovate while hedging their risks, thus acting as a catalyzing force of innovation itself.

Medill on Integrating Artificial Intelligence Tools into the Formation of Professional Identity

Colleen Medill (University of Nebraska at Lincoln – College of Law) has posted “Integrating Artificial Intelligence Tools into the Formation of Professional Identity” on SSRN. Here is the abstract:

My claim in this Article is that a lawyer’s personal use of artificial intelligence (AI) in the practice of law is now an essential component of a lawyer’s professional identity that must be intentionally developed as a law student before entering the practice of law. After demonstrating the strong connection between the use of AI tools in legal practice, the requirement of lawyer competence, and the formation of professional identity, the Article proposes four “best practices” principles for integrating AI tools with traditional lawyering skills exercises to assist students in the formation of professional identity. The Article concludes with an example that can be used in the first-year Property course.

Bystranowski & Tobia on Measuring Meta-Interpretation

Piotr Bystranowski (Interdisciplinary Centre for Ethics; Jagiellonian University) and Kevin Tobia (Georgetown University Law Center; Georgetown University – Department of Philosophy) have posted “Measuring Meta-Interpretation” (Journal of Institutional and Theoretical Economics (Forthcoming)) on SSRN. Here is the abstract:

American legal interpretation has taken an empirical turn. Courts and scholars use corpus linguistics, survey experiments, and machine learning to clarify legal texts’ meanings. We introduce these developments in “issue-level interpretation,” concerning interpretive theories’ application to legal language. Empirical methods also inform “meta-interpretative” debate: Which interpretive theory do interpreters use; which have they used; and which should they use? We demonstrate machine learning’s relevance to these meta-interpretive debates with insights provided by word embeddings that we trained on a corpus of over 1.3 million U.S. federal court decisions.
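The word-embedding methodology the abstract mentions can be pictured with a minimal sketch. The corpus location, preprocessing, and query term below are illustrative assumptions, not details from the article; the snippet simply shows how embeddings might be trained on a collection of court opinions with gensim’s Word2Vec and then queried for terms near an interpretive concept of interest.

```python
# Minimal sketch (assumptions: plain-text opinions in ./opinions/, gensim installed).
# Not the authors' pipeline; just an illustration of training word embeddings
# on a corpus of court decisions and querying the resulting vector space.
import glob
import re

from gensim.models import Word2Vec

def tokenize(text):
    # Lowercase word tokens; a real pipeline would handle citations, stopwords, etc.
    return re.findall(r"[a-z]+", text.lower())

# Each opinion file becomes one token stream.
corpus = [tokenize(open(path, encoding="utf-8").read())
          for path in glob.glob("opinions/*.txt")]

# Train embeddings; hyperparameters here are arbitrary placeholder values.
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=5, workers=4)

# Query the embedding space for terms nearest an interpretive concept of interest.
print(model.wv.most_similar("textualism", topn=10))
```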

Ariel Aaronson on Data Dysphoria: The Governance Challenge Posed by Large Learning Models

Susan Ariel Aaronson (George Washington University – Elliott School of International Affairs) has posted “Data Dysphoria: The Governance Challenge Posed by Large Learning Models” on SSRN. Here is the abstract:

Only 8 months have passed since ChatGPT and the large learning model underpinning it took the world by storm. This article focuses on the data supply chain (the data collected and then utilized to train large language models) and the governance challenges it presents to policymakers. These challenges include:

• How web scraping may affect individuals and firms that hold copyrights.
• How web scraping may affect individuals and groups who are supposed to be protected under privacy and personal data protection laws.
• How web scraping revealed the lack of protections for content creators and content providers on open-access websites.
• How the debate over open and closed source LLMs reveals the lack of clear and universal rules to ensure the quality and validity of datasets. As the US National Institute of Standards and Technology explained, many LLMs depend on “largescale datasets, which can lead to data quality and validity concerns. The difficulty of finding the ‘right’ data may lead AI actors to select datasets based more on accessibility and availability than on suitability… Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena that are being modeled, introducing downstream risks” (NIST 2023, 80). In short, these are problems of data quality and validity.

The author uses qualitative methods to examine these data governance challenges. In general, this report discusses only those governments that adopted specific steps (actions, policies, new regulations, etc.) to address web scraping, LLMs, or generative AI. The author acknowledges that these examples do not comprise a representative sample based on income, LLM expertise, and geographic diversity. However, the author uses these examples to show that while some policymakers are responsive to rising concerns, they do not seem to be looking at these issues systemically. A systemic approach has two components: First, policymakers recognize that these AI chatbots are a complex system with different sources of data that are linked to other systems designed, developed, owned, and controlled by different people and organizations. Data and algorithm production, deployment, and use are distributed among a wide range of actors who together produce the system’s outcomes and functionality. Hence, accountability is diffused and opaque (Cobbe et al. 2023). Secondly, as a report for the US National Academy of Sciences notes, the only way to govern such complex systems is to create “a governance ecosystem that cuts across sectors and disciplinary silos and solicits and addresses the concerns of many stakeholders.” This assessment is particularly true for LLMs—a global product with a global supply chain with numerous interdependencies among those who supply data, those who control data, and those who are data subjects or content creators (Cobbe et al. 2023).

In many countries, policymakers are trying to address these complex systems with policies designed to promote accountability and transparency and to mitigate risk. For example, some governments have proposed one-size-fits-all AI regulation to address the risks, business practices, and the technology. The EU AI Act, for instance, has been approved by the EU Parliament, but many people want to update it to meet the challenges of generative AI. They are calling for provisions to encourage transparency in the data supply chain and algorithms that could complement the regulation of digital services in the Digital Services Act. In short, they are pushing for a more systemic and coherent approach. In contrast, in 2019, Canada adopted procurement regulations, the Directive on Automated Decision-Making, to govern a wide range of AI systems procured by the Canadian government. The Directive requires that the data be relevant, accurate, up to date, and traceable, protected and accessed appropriately, and lawfully collected, used, retained, and disposed of. However, thus far Canadian policymakers have not linked learning from this directive to their approach to governing AI risk. As of August 2023, Canadian Parliamentarians are still reviewing the AI and Data Act (which says very little about the data supply chain and data governance and nothing about LLMs). It is, in short, disconnected from the governance of data.

Lobel on Behavioral Law & Policy of AI Trust

Orly Lobel (University of San Diego School of Law) has posted “Behavioral Law & Policy of AI Trust” on SSRN. Here is the abstract:

With the dazzling advances in artificial intelligence capabilities, regulatory policy should aim at spurring the right amount — and the correct kind — of AI trust. In my recent research on AI policy, including my new book The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future and my article, The Law of AI for Good, I aim to pivot policy debates about automation and artificial intelligence (AI) toward more rational and grounded analysis. Just as behavioral research first developed in relation to marketing and consumer behavior and only later came to be recognized as significant in policymaking, so should policymakers turn their attention to understanding the human biases that lead to irrational algorithmic aversion and algorithmic adoration. In this short essay, I argue that the emerging experimental literature on trust, and distrust, of AI can serve as a blueprint for policy research and interventions. We do not yet have a common language, or even shared taxonomy, to compare and evaluate the tradeoffs inherent to automation. I call this the human-AI trust gap, which I argue is a significant barrier to benefiting from automation opportunities. That is, whether we have too little trust or too much trust in algorithms, the human-AI trust gap is that we are missing a shared literature and methods to understand when trust is given and when trust is due. The existing research insights on human-machine trust should raise doubt about recent policy reforms, such as laws requiring real-time consumer notification about the use of automated processes. I argue that there may be inadvertent irrationality in some aspects of contemporary AI policy. Government entities should commit to improving AI and building rational social trust in these systems.