Wagner on Liability Rules for the Digital Age – Aiming for the Brussels Effect

Gerhard Wagner (Humboldt University School of Law; University of Chicago Law School) has posted “Liability Rules for the Digital Age – Aiming for the Brussels Effect” on SSRN. Here is the abstract:

With legislative proposals for two directives published in September 2022, the EU Commission aims to adapt the existing liability system to the challenges posed by digitalization. One of the proposals is confined to liability for artificially intelligent systems, but the other contains nothing less than a full revision of the 1985 Product Liability Directive, which lies at the heart of European tort law. Whereas the current Product Liability Directive largely followed the model of U.S. law, the revised version breaks new ground. It does not limit itself to the expansion of the concept of product to include intangible digital goods such as software and data as well as related services, important enough in itself, but also targets the new intermediaries of e-commerce as liable parties. With all of that, the proposal for a new product liability directive is a great leap forward and has the potential to grow into a worldwide benchmark in the field. In comparison, the proposal for a directive on AI liability is much harder to assess. It remains questionable whether a second directive is actually needed at this stage of technological development.

Tasioulas on The Rule of Algorithm and the Rule of Law

John Tasioulas (Oxford) has posted “The Rule of Algorithm and the Rule of Law” (Vienna Lectures on Legal Philosophy, 2023) on SSRN. Here is the abstract:

Can AI adjudicative tools in principle better enable us to achieve the rule of law by replacing judges? This article argues that answers to this question have been excessively focussed on ‘output’ dimensions of the rule of law – such as conformity of decisions with the applicable law – at the expense of vital ‘process’ considerations such as explainability, answerability, and reciprocity. These process considerations do not by themselves warrant the conclusion that AI adjudicative tools can never, in any context, properly replace human judges. But they help bring out the complexity of the issues – and the potential costs – that are involved in this domain.

Soh on Legal Dispositionism and Artificially-Intelligent Attributions

Jerrold Soh (Singapore Management University – Yong Pung How School of Law) has posted “Legal Dispositionism and Artificially-Intelligent Attributions” (Legal Studies, forthcoming) on SSRN. Here is the abstract:

It is often said that because an artificially-intelligent (AI) system acts autonomously, its makers cannot easily be faulted should the system’s actions cause harm. Since the system cannot be held liable on its own account either, existing laws expose victims to accountability gaps and require reform. Drawing on attribution theory, however, this article argues that the ‘autonomy’ that law tends to ascribe to AI is premised less on fact than science fiction. Specifically, the folk dispositionism that demonstrably underpins the legal discourse on AI liability, personality, publications, and inventions leads us towards problematic legal outcomes. Examining the technology and terminology driving contemporary AI systems, the article contends that AI systems are better conceptualised as situational characters whose actions remain constrained by their programming, and that properly viewing AI as such illuminates how existing legal doctrines could be sensibly applied to AI. In this light, the article advances a framework for re-conceptualising AI.

Recommended.

Østbye on Liability for Cryptoeconomic Consensus

Peder Østbye (Norges Bank) has posted “Exploring Liability for Cryptoeconomic Consensus – A Law and Economics Approach” on SSRN. Here is the abstract:

Cryptoeconomic systems, such as cryptocurrencies and decentralized autonomous organizations, rely on consensus at several levels. Their protocols and the open source code implementing them are often the results of consensus among several participants. The systems are updated according to consensus mechanisms set in their protocols. This consensus is sometimes reliant on consensus among another set of participants in other cryptoeconomic systems, such as oracles feeding a cryptoeconomic system with external information. The outcomes of consensus may be illegitimate or harmful, which raises the question of liability. There is a heated debate around such liability – both as a matter of law and policy. Some call for stricter regulation in terms of harsher liabilities, while others argue for more of a light-touch approach, shielding participants from liability in the name of promoting “responsible innovation.” Some even argue for cryptoeconomic systems to be left to themselves and their own architecture-based self-regulation not subject to national laws. However, when cryptoeconomic consensus results in undesirable outcomes, remedies are often sought in the law, both in public enforcement and private litigation. This paper utilizes law and economics to explore the merits of legalist approaches to liability for cryptoeconomic consensus, normative policy guidance for such liability, and institutional implications for such liability.

Recommended.

Nugent on The Five Internet Rights

Nicholas Nugent (University of Virginia School of Law) has posted “The Five Internet Rights” (Washington Law Review, Forthcoming) on SSRN. Here is the abstract:

Since the dawn of the commercial internet, content moderation has operated under an implicit social contract that website operators could accept or reject users and content as they saw fit, but users in turn could self-publish their views on their own websites if no one else would have them. However, as online service providers and activists have become ever more innovative and aggressive in their efforts to deplatform controversial speakers, content moderation has progressively moved down into the core infrastructure of the internet, targeting critical resources, such as networks, domain names, and IP addresses, on which all websites depend. These innovations point to a world in which it may soon be possible for private gatekeepers to exclude unpopular users, groups, or viewpoints from the internet altogether, a phenomenon I call viewpoint foreclosure.

For more than three decades, internet scholars have searched, in vain, for a unifying theory of interventionism—a set of principles to guide when the law should intervene in the private moderation of lawful online content and what that intervention should look like. These efforts have failed precisely because they have focused on the wrong gatekeepers, scrutinizing the actions of social media companies, search engines, and other third-party websites—entities that directly publish, block, or link to user-generated content—while ignoring the core resources and providers that make internet speech possible in the first place. This Article is the first to articulate a workable theory of interventionism by focusing on the far more fundamental question of whether users should have any right to express themselves on the now fully privatized internet. By articulating a new theory premised on viewpoint access—the right to express one’s views on the internet itself (rather than on any individual website)—I argue that the law need take account of only five basic non-discrimination rights to protect online expression from private interference—namely, the rights of connectivity, addressability, nameability, routability, and accessibility. Looking to property theory, internet architecture, and economic concepts around market entry barriers, it becomes clear that as long as these five fundamental internet rights are respected, users are never truly prevented from competing in the online marketplace of ideas, no matter the actions of any would-be deplatformer.

Lobel on The Law of AI for Good

Orly Lobel (U San Diego Law) has posted “The Law of AI for Good” on SSRN. Here is the abstract:

Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights, and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR), and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.

A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design.

Choi et al. on ChatGPT Goes to Law School

Jonathan H. Choi (U Minnesota Law) et al. have posted “ChatGPT Goes to Law School” on SSRN. Here is the abstract:

How well can AI models write law school exams without human assistance? To find out, we used the widely publicized AI model ChatGPT to generate answers on four real exams at the University of Minnesota Law School. We then blindly graded these exams as part of our regular grading processes for each class. Over 95 multiple choice questions and 12 essay questions, ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses. After detailing these results, we discuss their implications for legal education and lawyering. We also provide example prompts and advice on how ChatGPT can assist with legal writing.
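
For readers curious about the mechanics, the setup is easy to approximate in code. Below is a minimal sketch, assuming the openai Python package (v1+) and a hypothetical list of exam questions; the model name, prompts, and questions are illustrative stand-ins, not the authors' actual protocol.

```python
# Minimal sketch (not the authors' code): ask a chat model to answer exam
# questions with no human assistance, so the outputs can be folded into a
# blind-grading pile alongside student answers.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for the exams' questions.
exam_questions = [
    "Essay 1: Discuss whether strict liability should apply to ...",
    "MC 1: Which of the following best states the rule in ...? (A) ... (B) ...",
]

def answer_exam_question(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model to answer one exam question."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a law student taking a final exam."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for q in exam_questions:
        print(answer_exam_question(q)[:200], "...\n")
```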

Garrett & Rudin on Glass-Box AI in Criminal Justice

Brandon L. Garrett (Duke Law) and Cynthia Rudin (Duke Computer Science) have posted “The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice” on SSRN. Here is the abstract:

Artificial intelligence (AI) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been in criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as: DNA mixture interpretation; facial recognition; recidivism risk assessments; and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.

The champions and critics of AI have something in common: both sides argue that we face a central trade-off, namely that black box AI may be incomprehensible, but it performs more accurately. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error-prone, it reflects pre-existing racial and socio-economic disparities, and any AI system must be used by decisionmakers like lawyers and judges—who must understand it.

Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI black box, and given substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the bottom line is that glass box AI can better accomplish both fairness and public safety goals. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.
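
The accuracy claim at the heart of the piece is easy to probe empirically. Here is an illustrative sketch, not the authors' evaluation or their preferred interpretable method, comparing a simple "glass box" model against a black-box ensemble on a public scikit-learn dataset; the dataset and models are chosen purely for convenience.

```python
# Illustrative only: compare an interpretable ("glass box") model against a
# black-box ensemble on a small public dataset. This is NOT the authors'
# experiment; it simply shows that an interpretable model need not sacrifice
# accuracy. Assumes scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# A sparse, inspectable linear model vs. an opaque boosted ensemble.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("glass box (logistic regression)", glass_box),
                    ("black box (gradient boosting)", black_box)]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

On datasets like this one the two models typically land within a point or two of each other, which is the kind of result the authors' survey of the computer science literature generalizes.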

Nay on Large Language Models as Corporate Lobbyists

John Nay (Stanford CodeX) has posted “Large Language Models as Corporate Lobbyists” on SSRN. Here is the abstract:

We demonstrate a proof-of-concept of a large language model conducting corporate lobbying-related activities. An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. These results suggest that, as large language models continue to exhibit improved natural language understanding capabilities, performance on lobbying-related tasks will continue to improve. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.
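
The two-step pipeline the abstract describes (relevance classification, then letter drafting) can be sketched in a few dozen lines. The sketch below is not the author's code: the prompts, the parsing, and the data are hypothetical, and since text-davinci-003 has since been retired, a current chat model is substituted.

```python
# Rough sketch of the pipeline described in the abstract (not the author's
# code): (1) ask a model whether a bill is relevant to a company, with an
# explanation and a confidence level; (2) if relevant, draft a letter to the
# bill's sponsor. Assumes the `openai` package (v1+).
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # stand-in for the paper's (now retired) text-davinci-003

def assess_relevance(bill_summary: str, company_description: str) -> dict:
    """Return {'relevant': bool, 'confidence': int, 'explanation': str}."""
    prompt = (
        "You are a corporate lobbyist. Decide whether the bill below is relevant "
        "to the company below. Reply only with JSON containing the keys "
        "'relevant' (true/false), 'confidence' (0-100), and 'explanation'.\n\n"
        f"Bill: {bill_summary}\n\nCompany: {company_description}"
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Best-effort parsing; a real pipeline would validate the output.
    return json.loads(response.choices[0].message.content)

def draft_letter(bill_summary: str, company_description: str, sponsor: str) -> str:
    """Draft a persuasive letter to the bill's sponsor on the company's behalf."""
    prompt = (
        f"Draft a persuasive letter to {sponsor}, the sponsor of the bill below, "
        "requesting changes that favor the company's interests.\n\n"
        f"Bill: {bill_summary}\n\nCompany: {company_description}"
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    bill = "A bill to require disclosure of algorithmic pricing practices ..."
    company = "A large e-commerce retailer that uses dynamic pricing ..."
    verdict = assess_relevance(bill, company)
    if verdict.get("relevant"):
        print(draft_letter(bill, company, "the bill's sponsor"))
```

Even this toy version makes the paper's policy worry concrete: nothing in the loop requires a human to review the relevance call or the letter before it is sent.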

Almada & Petit on The EU AI Act: Between Product Safety and Fundamental Rights

Marco Almada (EUI Law) and Nicolas Petit (same) have posted “The EU AI Act: Between Product Safety and Fundamental Rights” on SSRN. Here is the abstract:

The European Union (“EU”) Artificial Intelligence Act (the AI Act) is a legal medley. Under the banner of risk-based regulation, the AI Act combines two repertoires of EU law, namely product safety and fundamental rights protection. Like a medley, the AI Act attempts to combine the best features of both repertoires. But like a medley, the AI Act risks delivering insufficient levels of both product safety and fundamental rights protection. This article describes these issues by reference to three classical issues of law and technology. Some adjustments to the text and spirit of the AI Act are suggested.