Goldman on Generative AI is Doomed (and Incorrectly Regulated)

Eric Goldman (Santa Clara University – School of Law) has posted “Generative AI is Doomed” on SSRN. Here is the abstract:

I delivered this talk as the 2024 Nies Lecture at Marquette University School of Law, Milwaukee, WI. The talk compares the recent proliferation of Generative AI with the Internet’s proliferation in the mid-1990s. In each case, it was clear that the technology would have revolutionary but uncertain impacts on society. However, the public sentiments toward the two innovations have differed radically. The Internet arrived during a period of widespread techno-optimism, creating a regulatory environment that fostered the Internet’s growth. Generative AI, in contrast, has arrived during widespread techno-pessimism and following decades of conditioning about the dangers of “AI.” The difference is consequential: The prevailing regulatory and legal responses to Generative AI will limit or even negate its benefits. If society hopes to achieve the full potential of Generative AI, we’ll need to adopt a new regulatory approach quickly.

Mazur & Thimmesch on Transforming Government with Augmented LLMs

Orly Mazur (SMU Law) and Adam B. Thimmesch (U Nebraska Law) have posted “Beyond ChatGPT: Transforming Government with Augmented LLMs” (Tennessee Law Review, Forthcoming) on SSRN. Here is the abstract:

The release of ChatGPT demonstrated the remarkable capabilities and the existing limitations of large language models (LLMs) and the natural language chatbots that they power. One area that is ripe for innovation using this new technology, but that has often been bypassed in mainstream discussions, is the public sector. This Article redirects attention towards this overlooked area, acknowledging the limitations of LLMs, while specifically exploring their potential to transform government operations.

The Article discusses the various technological advancements that allow for the development of tools far more refined than the general-use chatbots commonly available to the public. The Article then introduces a dual-category framework for proposing potential government AI applications: applications that improve external government operations and those that streamline internal operations. Using tax administration as a case study, the Article illustrates how generative AI, such as LLMs, can respond to well-known issues within the administration of law by substantially enhancing the quality of government communications, thereby improving operational efficiency and promoting equitable access to government services.

The Article makes several innovative, practical proposals. These include leveraging LLM-powered chatbots to manage interactions with non-government entities, strategically integrating LLMs into workplace training and customer service processes, and developing various AI tools to mitigate service disparities faced by marginalized communities. These recommendations underscore the promising potential that LLMs have in this area, despite their current shortcomings. Ultimately, however, the Article concludes that to fully harness the benefits of generative AI within the public sphere, a concerted, inclusive effort involving a broad spectrum of stakeholders is necessary. Such a collaborative effort holds the promise to redefine public service delivery in a manner that enhances the efficiency, effectiveness, and overall quality of government services.

Pasquale & Sun on Consent and Compensation: Resolving Generative AI’s Copyright Crisis

Frank Pasquale (Cornell Law School; Cornell Tech) and Haochen Sun (University of Hong Kong Law) have posted “Consent and Compensation: Resolving Generative AI’s Copyright Crisis” on SSRN. Here is the abstract:

Generative artificial intelligence (AI) has the potential to augment and democratize creativity. However, it is undermining the knowledge ecosystem that now sustains it. Generative AI may unfairly compete with creatives, displacing them in the market. Most AI firms are not compensating creative workers for composing the songs, drawing the images, and writing both the fiction and non-fiction books that their models need in order to function. AI thus threatens not only to undermine the livelihoods of authors, artists, and other creatives, but also to destabilize the very knowledge ecosystem it relies on.

Alarmed by these developments, many copyright owners have objected to the use of their works by AI providers. To recognize and empower their demands to stop non-consensual use of their works, we propose a streamlined opt-out mechanism that would require AI providers to remove objectors’ works from their databases once copyright infringement has been documented. Those who do not object still deserve compensation for the use of their work by AI providers. We thus also propose a levy on AI providers, to be distributed to the copyright owners whose work they use without a license. This scheme is designed to ensure creatives receive a fair share of the economic bounty arising out of their contributions to AI. Together these mechanisms of consent and compensation would result in a new grand bargain between copyright owners and AI firms, designed to ensure both thrive in the long-term.

Jacobi & Sag on We Are the AI Problem

Tonja Jacobi (Emory Law) and Matthew Sag (same) have posted “We Are the AI Problem” (Emory Law Journal Online) on SSRN. Here is the abstract:

In this Essay we note that some controversies surrounding AI are strikingly familiar and quotidian; they reflect existing cultural divides and obsessions of the moment. The recent flare-up over Google’s Gemini illustrates how many of the debates about AI primarily reflect social problems, rather than technological ones. We argue that, for those upset about AI wokeness gone wild, it is important to understand that, in many ways, the problem is us. Gemini’s un-whitewashing of history resulted in absurd creations, but the situation reflects some truths about our society: the underlying problem is society itself, not the new technology representing it. There are four important elements of the AI creation process that explain the “Black-Nazi problem” (for want of a better shorthand) and that also reveal broader problems about society. Understanding those aspects of the AI creation process reveals that AI’s foibles are a symptom of our ongoing struggle with the ramifications of past inequality and the difficulty of balancing inherently conflicting goals, such as aspirational diversity and historical accuracy. The Gemini storm in a teacup over “woke AI” gives us a window onto other intractable socio-technical problems we need to confront in AI.

Wills on Care for Chatbots

Peter Wills (Oxford) has posted “Care for Chatbots” (UBC Law Review 2024) on SSRN. Here is the abstract:

Individuals will rely on language models (LMs) like ChatGPT to make decisions. Sometimes, due to that reliance, they will get hurt, have their property damaged, or lose money. If the LM were a person, they might sue it. But LMs are not persons.

This paper analyses whom the individual could sue, and on what facts they can succeed, under the Hedley Byrne-inspired doctrine of negligence. The paper identifies a series of hurdles that conventional Canadian and English negligence doctrine poses and shows how they may be overcome. Such hurdles include identifying who is making a representation or providing a service when an LM generates a statement, determining whether that person can owe a duty of care based on text the LM reacts to, and identifying the proper analytical path for breach and causation.

To overcome such hurdles, the paper questions how courts should understand who “controls” a system. Should it be the person who designs the system, or the person who uses it? Or both? The paper suggests that, in answering this question, courts should prioritise social dimensions of control (for example, who understands how a system works, not merely what it does) over physical dimensions of control (such as on whose hardware a program is running) when assessing control and therefore responsibility.

The paper makes further contributions in assessing what it means (or should mean) for a person not only to act, but to react, via an LM. It identifies a doctrinal assumption that when one person reacts to another’s activity, the first person must know something about the second’s activity. LMs break that assumption because they allow the first person to react to information from another person without any human having knowledge of it. The paper thus reassesses what it means to have knowledge in light of these technological developments. It proposes redefining “knowledge” so as to accommodate duties of care to individuals when an LM provides individualised advice.

The paper then shows that there is a deep tension running through the breach and causation analyses in Anglo-Canadian negligence doctrine, relating to how to describe someone who follows an imprudent process in performing an act that is nonetheless ultimately justifiable. One option is to treat them as having breached the standard of care but to hold that the breach did not cause the injury; another is to treat them as not in breach at all. The answer to this question could significantly affect LM-based liability because it affects whether “using an LM” is itself treated as a breach of a standard of care.

Finally, the paper identifies alternative approaches to liability for software propounded in the literature and suggests that these approaches are not plainly superior to working within the existing framework that treats software as a tool used by a legal person.

Lobel on Humans-Out-of-the-Loop Law

Orly Lobel (U San Diego Law) has posted “Automation Rights: How to Rationally Design Humans-Out-Of-The-Loop Law” (U Chicago L Rev Online 2024) on SSRN. Here is the abstract:

This essay begins with the following puzzle: in sharp contrast to significant evidence demonstrating the effectiveness of AI-based automation in high-stakes spheres (health care, transportation, national security, finance, workplace safety, public administration, and more), the contemporary impulse to legally require a human in the loop grows stronger the higher the stakes of the activity or decision. Indeed, the legislation emerging in both the EU and the United States ironically showcases the assumption that, when it comes to AI, high stakes equal high risk of tackling those stakes through the most advanced technology. Moreover, while there are hundreds of bills, reports, and executive orders that seek to prohibit or restrain certain uses or applications of AI, there are virtually no equivalent frameworks, or even language, that would mandate automation when such a shift has been empirically shown to be the safest, or the most consistent, way of achieving agreed-upon goals or courses of action. This essay, written for the University of Chicago Symposium on How AI Will Change the Law, argues for the development of more robust, and more balanced, law that focuses not only on the risks but also on the potential that AI brings. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize, and at times mandate, transitions to AI-based automation. Automation rights, that is, the right to demand and the duty to deploy AI-based technology when it outperforms human-based action, should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that the law must soon embrace AI, accelerate deployment, and under certain circumstances prohibit human intervention, as a matter of fairness, welfare, and justice.

Strine on the External and Internal Governance of Corporate Use of Artificial Intelligence

Leo E. Strine, Jr. (Wachtell; U Penn Law) has posted “Using Experience Smartly to Ensure a Better Future: How the Hard-Earned Lessons of History Should Shape The External and Internal Governance of Corporate Use of Artificial Intelligence” on SSRN. Here is the abstract:

Artificial intelligence or “AI” has transformative potential. But that reality should not obscure the fact that our society has longstanding experience with the corporate development of novel technologies that pose the simultaneous potential to better human lives and to create massive harm. This article, prepared for the occasion of the 50th anniversary of the Journal of Corporation Law and for the Rome Conference on AI, Ethics, and the Future of Corporate Governance, looks backward at prior experience with corporate profit-seeking through the development and use of transformative technologies to suggest policy measures that might help ensure that the benefits to society of AI’s development by for-profit business entities far exceed its downsides.

Wang & Ke on Digital Corporate Law

Chen Wang (UC Berkeley Law) and Xu Ke (Renmin U) have posted “Toward Digital Corporate Law: Revisiting Corporate Law’s Responses to Technology” on SSRN. Here is the abstract:

This article explores the dynamic interplay between emerging technologies and corporate law, questioning whether these advancements necessitate a fundamental reshaping of core legal doctrines. It delves into specific areas like corporate formation, governance, and finance through a comparative lens, examining Chinese and U.S. laws and regulations. The article focuses on the capacity of modern frameworks of corporate law to address challenges posed by technologies like AI, particularly concerning the evolution of corporate agents’ fiduciary duties and the balance of power between shareholders and management. It proposes innovative approaches to developing future corporate law, stressing enhanced compatibility and adaptability with technological progress, such as contemplating data as a corporate asset and allowing for the issuance and storage of stocks in digital form. The article also explores balancing shareholder, stakeholder, and societal interests in this evolving landscape, including the use of AI in fulfilling corporations’ ESG responsibilities and the potential for fund managers to employ AI to make informed proxy voting decisions. By posing thought-provoking questions for future research, it aims to stimulate a nuanced dialogue on the critical intersection of law and technology, particularly in the context of the increasing digitization of corporate law and the potential for using software engineering techniques to improve legislating and rulemaking.

Yoo on Beyond Algorithmic Disclosure for AI

Christopher S. Yoo (U Penn Law) has posted “Beyond Algorithmic Disclosure For AI” (Columbia Science and Technology Law Review, forthcoming 2024) on SSRN. Here is the abstract:

One of the most commonly recommended policy interventions with respect to algorithms in general and artificial intelligence (“AI”) systems in particular is the need for greater transparency, often focusing on the disclosure of the variables employed by the algorithm and the weights given to those variables. This Essay argues that any meaningful transparency regime must provide information on other critical dimensions as well. For example, any transparency regime must also include key information about the data on which the algorithm was trained, including its source, scope, quality, and inner correlations, subject to constraints imposed by copyright, privacy, and cybersecurity law. Disclosures about prerelease testing also play a critical role in understanding an AI system’s robustness and its susceptibility to specification gaming. Finally, the fact that AI, like all complex systems, tends to exhibit emergent phenomena, such as proxy discrimination, interactions among multiple agents, the impact of adverse environments, and the well-known tendency of generative AI to hallucinate, makes ongoing post-release evaluation a critical component of any system of AI transparency.