Wang on Can ChatGPT Personalize Index Funds’ Voting Decisions?

Chen Wang (UC Berkeley – School of Law) has posted “Can ChatGPT Personalize Index Funds’ Voting Decisions?” on SSRN. Here is the abstract:

ChatGPT has risen rapidly to prominence due to its unique features and generalization ability. This article proposes using ChatGPT to assist small investment funds, particularly small passive funds, in making more accurate and informed proxy voting decisions.

Passive funds adopt a low-cost business model. Small passive funds lack financial incentives to make informed proxy voting decisions that align with their shareholders’ interests. This article examines the implications of passive funds for corporate governance and the issues associated with outsourcing voting decisions to proxy advisors. The article finds that passive funds underspend on investment stewardship and outsource their proxy voting decisions to proxy advisors, which could lead to biased or erroneous recommendations.

However, by leveraging advanced AI language models such as ChatGPT, small passive funds can improve their proxy voting accuracy and personalization, enabling them to better serve their shareholders and navigate the competitive market.

To test ChatGPT’s potential, this article conducted an experiment using GPT-4 in a zero-shot setting to generate detailed proxy voting guidelines and apply them to a real-world proxy statement. The model successfully identified conflicts of interest in the election of directors and generated comprehensive guidelines with weights for each variable. However, ChatGPT has some limitations, such as token limits, difficulty with long-range dependencies, and a likely ESG inclination.
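The abstract does not reproduce the experiment’s prompts or code, but the zero-shot setup it describes can be approximated in a few API calls. Below is a minimal sketch assuming the OpenAI Python client; the prompts, file path, and two-step structure are illustrative assumptions, not the article’s actual materials.

```python
# Minimal sketch of a zero-shot proxy-voting experiment, assuming the
# OpenAI Python client (`pip install openai`, OPENAI_API_KEY set).
# Prompts, model name, and file path are illustrative assumptions,
# not reproductions of the article's actual experiment.
from openai import OpenAI

client = OpenAI()

# Step 1: ask the model to draft weighted proxy-voting guidelines.
guidelines = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Draft proxy voting guidelines for a small passive index fund. "
            "List the variables you would weigh for director elections "
            "(e.g., independence, conflicts of interest, attendance) and "
            "assign each variable an explicit numeric weight."
        ),
    }],
).choices[0].message.content

# Step 2: apply the generated guidelines to a proxy statement, truncated
# here because of the token limits the abstract notes.
with open("proxy_statement_excerpt.txt") as f:
    statement = f.read()[:12000]

vote = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Apply these voting guidelines:\n" + guidelines},
        {"role": "user", "content": "Recommend votes, flagging any conflicts "
                                    "of interest in director elections:\n" + statement},
    ],
).choices[0].message.content
print(vote)
```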

To enhance its abilities, ChatGPT can be fine-tuned using high-quality, domain-specific datasets. However, investment funds may face challenges when outsourcing voting decisions to AI, such as data and algorithm biases, cybersecurity and privacy concerns, and regulatory uncertainties.
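For readers unfamiliar with the mechanics, the fine-tuning step the abstract envisions looks roughly like the following under OpenAI’s fine-tuning API. The dataset file and base model here are hypothetical; fine-tuning access differs by model, and GPT-4-class models have historically been restricted.

```python
# Sketch of fine-tuning on a domain-specific proxy-voting dataset,
# assuming the OpenAI fine-tuning API. The dataset file and base model
# are hypothetical; fine-tuning access varies by model.
from openai import OpenAI

client = OpenAI()

# Each JSONL line pairs a proxy-statement excerpt with a vetted voting
# decision, e.g. {"messages": [{"role": "user", "content": "..."},
#                              {"role": "assistant", "content": "FOR: ..."}]}
training_file = client.files.create(
    file=open("proxy_votes_train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # a base model with fine-tuning support
)
print(job.id, job.status)
```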

Siebecker on The Incompatibility of Artificial Intelligence and Citizens United

Michael R. Siebecker (U Denver Law) has posted “The Incompatibility of Artificial Intelligence and Citizens United” (Ohio State Law Journal, Vol. 83, No. 6, pp. 1211-1273, 2022) on SSRN. Here is the abstract:

In Citizens United v. FEC, the Supreme Court granted corporations essentially the same political speech rights as human beings. But does the growing prevalence of artificial intelligence (“AI”) in directing the content and dissemination of political communications call into question the jurisprudential soundness of such a commitment? Would continuing to construe the corporation as a constitutional rights bearer make much sense if AI entities could wholly own and operate business entities without any human oversight? Those questions seem particularly important, because in the new era of AI, the nature and practices of the modern corporation are quickly evolving. The magnitude of that evolution will undoubtedly affect some of the most important aspects of our shared social, economic, and political lives. To the extent our conception of the corporation changes fundamentally in the AI era, it seems essential to assess the enduring soundness of prior jurisprudential commitments regarding corporate rights that might no longer seem compatible with sustaining our democratic values. The dramatic and swift evolution of corporate practices in the age of AI provides a clarion call for revisiting the jurisprudential sensibility of imbuing corporations with full constitutional personhood in general and robust political speech rights in particular. For if corporations can use AI data mining and predictive analytics to manipulate political preferences and election outcomes for greater profits, the basic viability and legitimacy of our democratic processes hang in the balance. Moreover, if AI technology itself plays an increasingly important, if not controlling, role in determining the content of corporate political communication, granting corporations the same political speech rights as humans effectively surrenders the political realm to algorithmic entities. In the end, although AI could help corporations act more humanely, the very notion of a corporation heavily influenced or controlled by non-human entities creates the need to cabin at least somewhat the commitment to corporations as full constitutional rights bearers. In particular, with respect to corporate political activity, the growing prevalence of AI in managerial (and possibly ownership) positions makes granting corporations the same political speech rights as humans incompatible with maintaining human sovereignty.

Østbye on Liability for Cryptoeconomic Consensus

Peder Østbye (Norges Bank) has posted “Exploring Liability for Cryptoeconomic Consensus – A Law and Economics Approach” on SSRN. Here is the abstract:

Cryptoeconomic systems, such as cryptocurrencies and decentralized autonomous organizations, rely on consensus at several levels. Their protocols and the open source code implementing them are often the results of consensus among several participants. The systems are updated according to consensus mechanisms set in their protocols. This consensus is sometimes reliant on consensus among another set of participants in other cryptoeconomic systems, such as oracles feeding a cryptoeconomic system with external information. The outcomes of consensus may be illegitimate or harmful, which raises the question of liability. There is a heated debate around such liability – both as a matter of law and policy. Some call for stricter regulation in terms of harsher liabilities, while others argue for more of a light-touch approach, shielding participants from liability in the name of promoting “responsible innovation.” Some even argue for cryptoeconomic systems to be left to themselves and their own architecture-based self-regulation not subject to national laws. However, when cryptoeconomic consensus results in undesirable outcomes, remedies are often sought in the law, both in public enforcement and private litigation. This paper utilizes law and economics to explore the merits of legalist approaches to liability for cryptoeconomic consensus, normative policy guidance for such liability, and institutional implications for such liability.

Recommended.
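To make the layered-consensus point concrete: a system that settles a contract against an oracle price is relying on one consensus (the oracle reporters’) as an input to another (its own state update). The toy sketch below illustrates that structure only; it does not model any particular protocol discussed in the paper.

```python
# Toy illustration of layered cryptoeconomic consensus (no particular
# protocol): a system's state update depends on an oracle value that is
# itself the product of consensus among oracle reporters.
from statistics import median

def oracle_consensus(reports: list[float]) -> float:
    """One consensus layer: reporters submit values; take the median so
    a minority of faulty or dishonest reporters cannot move the result."""
    if not reports:
        raise ValueError("no oracle reports")
    return median(reports)

def settle_contract(reports: list[float], strike: float) -> str:
    """A second layer relies on the first: a contract settles against
    the oracle's agreed value. A harmful oracle outcome propagates here,
    which is where the liability question arises."""
    price = oracle_consensus(reports)
    return "pay out" if price >= strike else "no payout"

# One dishonest outlier report is outvoted by the honest majority.
print(settle_contract([99.8, 100.1, 100.2, 250.0], strike=100.0))
```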

Aoyagi & Ito on Competing DAOs

Jun Aoyagi (HKUST) and Yuki Ito (U Cal, Berkeley) have posted “Competing DAOs” on SSRN. Here is the abstract:

A decentralized autonomous organization (DAO) is an entity with no central control or ownership. A group of users discusses, proposes, and implements a new platform design with smart contracts on a blockchain, taking control away from a centralized platformer. We develop a model of platform competition with the DAO governance structure and analyze how strategic complementarity affects the development of DAOs. Compared to traditional competition between centralized platformers, a DAO introduces an additional layer of competition played by users. Since users are multi-homing, they propose a new platform design by internalizing interactions between platforms and create additional value, which is reflected in the price of a governance token. A platformer can extract this value by issuing a token but must relinquish control of her platform, losing potential fee revenue. Analyzing this tradeoff, we show that centralized platformers tend to become DAOs when strategic complementarity is strong, while an intermediate degree of strategic complementarity leads to the coexistence of a DAO and a traditional centralized platform.
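The tradeoff the abstract describes can be caricatured numerically. The toy comparison below is not the Aoyagi–Ito model; the payoff function and parameter values are invented purely to illustrate why stronger strategic complementarity tilts the platformer toward issuing a governance token.

```python
# Toy numerical caricature of the decentralization tradeoff described
# above -- NOT the Aoyagi-Ito model. All parameters are illustrative.
def platform_choice(complementarity: float,
                    fee_revenue: float = 1.0,
                    base_user_value: float = 0.6) -> str:
    """Choose DAO if the token value the platformer can extract
    (user-created value, amplified by cross-platform complementarity
    internalized by multi-homing users) exceeds the fee revenue forgone
    by relinquishing control."""
    token_value = base_user_value * (1.0 + complementarity)
    return "DAO" if token_value > fee_revenue else "centralized"

for c in (0.2, 0.7, 1.5):
    print(f"complementarity={c}: {platform_choice(c)}")
```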

Low, Schuster & Wan on The Company and Blockchain Technology

Kelvin F.K. Low (NUS – Faculty of Law), Edmund Schuster (London School of Economics – Law School), and Wai Yee Wan (City University of Hong Kong) have posted “The Company and Blockchain Technology” (Elgar Handbook on Corporate Liability, forthcoming) on SSRN. Here is the abstract:

Blockchain and distributed ledger technology (DLT) has generated much excitement over the past decade, with proclamations that it would disrupt everything from elections to finance. Unsurprisingly, the much-maligned corporate form is also considered ripe for disruption. The corporate form is certainly imperfect, and it is currently serviced by creaking legal infrastructure premised upon direct shareholdings, but are its problems ones of centralization/intermediation? What exactly are the limits of DLT? In this chapter, we propose to expose the ignorance behind the hype that the venerable corporation will either be revitalized by DLT or replaced by Decentralised Autonomous Organisations (DAOs). We will demonstrate that proponents of DLT disruption either overestimate the potential of the technology by taking at face value its claims of security without unpacking what said security entails (and what it does not), or lack awareness of the history of and market demand for intermediation as well as the complexities of modern corporations.

Martin & Parmar on What Firms Must Know Before Adopting AI

Kirsten Martin (Notre Dame) and Bidhan Parmar (U Virginia – Darden School of Business) have posted “What Firms Must Know Before Adopting AI: The Ethics of AI Transparency” on SSRN. Here is the abstract:

Firms have obligations to stakeholders that do not disappear when managers adopt AI decision systems. We introduce the concept of the AI knowledge gap – where AI provides limited information about its operations while stakeholder demands for information justifying firm decisions increase. We develop a framework of what firms must know about their AI model in the procurement process to ensure they understand how the model allows the firm to meet its existing obligations: the anticipated risks of using the AI decision system, how to prevent foreseeable risks, and how to plan for resilience. We argue there are no conditions under which it is ethical to unquestioningly adopt recommendations from a black-box AI program within an organization. According to this argument, adequate comprehension and knowledge of an AI model is not a negotiable design feature but a strategic and moral requirement.
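The framework reads naturally as a procurement-stage checklist. The sketch below is a hypothetical rendering of it as a data structure; the field names and the adequacy test are mine, not the authors’ taxonomy.

```python
# Hypothetical rendering of the procurement-stage knowledge the paper
# argues firms must have before adopting an AI decision system; field
# names and the adequacy test are illustrative, not the authors' own.
from dataclasses import dataclass

@dataclass
class AIProcurementRecord:
    model_purpose: str
    anticipated_risks: list[str]      # risks of using the decision system
    risk_preventions: dict[str, str]  # foreseeable risk -> mitigation
    resilience_plan: str              # what happens when the model fails
    explains_decisions: bool = False  # can outputs be justified to stakeholders?

    def adoption_defensible(self) -> bool:
        """Per the argument above, unquestioning adoption of a black box
        is never defensible: the firm must be able to justify decisions
        and must have mapped each anticipated risk to a mitigation."""
        return (self.explains_decisions
                and bool(self.anticipated_risks)
                and all(r in self.risk_preventions for r in self.anticipated_risks)
                and bool(self.resilience_plan))
```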

De Giovanni on Blockchain Technology Applications in Businesses and Organizations

Pietro De Giovanni (Luiss University) has posted “Blockchain Technology Applications in Businesses and Organizations” on SSRN. Here is the abstract:

Blockchain technology has the ability to disrupt industries and transform business models, since all intermediaries and stakeholders can now interact with little friction and at a fraction of current transaction costs. Using blockchain technology, firms can develop new applications and processes by pursuing transparency and control, low bureaucracy, trustless relationships, high standards of responsibility, and sustainability. As a result, businesses and organizations can successfully implement blockchain to grant transparency to consumers and end-users; remove challenges linked to pollution, fraud, human rights abuses, and other inefficiencies; and guarantee traceability of goods and services by univocally identifying the provenance, quantity, and quality of inputs along with their treatment and origin. Blockchain Technology Applications in Businesses and Organizations reveals the advantages that blockchain entails for firms by creating transparent and digital transactions, resolving conflicts and exceptions, and providing incentive-based mechanisms and smart contracts. This book seeks to create a clear understanding of blockchain’s applications so that business leaders can see and evaluate its real advantages. Blockchain is analyzed not from the typical perspective of financial tools using cryptocurrencies and bitcoin but from the perspective of the business advantages for businesses and organizations. Specifically, the book highlights the advantages of blockchain across different segments and industries by analyzing specific aspects like procurement, manufacturing, contracts, inventory, logistics, operations, sustainability, technology, and innovation. It is an essential reference source for managers, executives, IT specialists, students, operations managers, supply chain managers, project managers, technology managers, academicians, and researchers.
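The traceability claim rests on a simple mechanism: each custody record commits cryptographically to the record before it, so provenance cannot be rewritten without detection. Below is a minimal hash-chain sketch of that idea; it is illustrative only, not a production ledger design.

```python
# Minimal hash-chain sketch of the provenance traceability described
# above; illustrative only, not a production blockchain.
import hashlib
import json

def add_record(chain: list[dict], record: dict) -> None:
    """Append a custody record that commits to the previous entry's hash,
    so any later tampering with provenance breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

ledger: list[dict] = []
add_record(ledger, {"item": "coffee lot 17", "origin": "farm A", "grade": "AA"})
add_record(ledger, {"item": "coffee lot 17", "step": "roasting", "site": "plant B"})
print(ledger[1]["prev_hash"] == ledger[0]["hash"])  # True: provenance linked
```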

Cheong on Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability

Ben Chester Cheong (Singapore University of Social Sciences) has posted “Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability” on SSRN. Here is the abstract:

This article discusses some of the issues surrounding artificial intelligence systems and whether such systems should be granted legal personhood. The first part of the article discusses whether current artificial intelligence systems should be granted rights and obligations, akin to a legal person. The second part of the article deals with imposing liability on artificial intelligence beings by analogizing with incorporation and veil-piercing principles in company law. It examines this by considering that a future board may be replaced entirely by an artificial intelligence director managing the company. It also explores the possibility of disregarding the corporate veil to ascribe liability to such artificial intelligence beings and the ramifications of such an approach in the areas of fraud and crime.

Bruner on Artificially Intelligent Boards and the Future of Delaware Corporate Law

Christopher M. Bruner (University of Georgia School of Law) has posted “Artificially Intelligent Boards and the Future of Delaware Corporate Law” on SSRN. Here is the abstract:

The prospects for Artificial Intelligence (AI) to impact the development of Delaware corporate law are at once over- and under-stated. As a general matter, claims to the effect that AI systems might ultimately displace human directors not only exaggerate the foreseeable technological potential of these systems, but also tend to ignore doctrinal and institutional impediments intrinsic to Delaware’s competitive model – notably, heavy reliance on nuanced and context-specific applications of the fiduciary duty of loyalty by a true court of equity. At the same time, however, there are specific applications of AI systems that might not merely be accommodated by Delaware corporate law, but perhaps eventually required. Such an outcome would appear most plausible in the oversight context, where fiduciary loyalty has been interpreted to require good faith effort to adopt a reasonable compliance monitoring system, an approach driven by an implicit cost-benefit analysis that could lean decisively in favor of AI-based approaches in the foreseeable future.
This article discusses the prospects for AI to impact Delaware corporate law in both general and specific respects and evaluates their significance. Section II describes the current state of the technology and argues that AI systems are unlikely to develop to the point that they could displace the full range of functions performed by human boards in the foreseeable future. Section III, then, argues that even if the technology were to achieve more impressive results in the near-term than I anticipate, acceptance of non-human directors would likely be blunted by doctrinal and institutional structures that place equity at the very heart of Delaware corporate law. Section IV, however, suggests that there are nevertheless discrete areas within Delaware corporate law where reliance by human directors upon AI systems for assistance in board decision-making might not merely be accommodated, but eventually required. This appears particularly plausible in the oversight context, where fiduciary loyalty has become intrinsically linked with adoption of compliance monitoring systems that are themselves increasingly likely to incorporate AI technologies. Section V briefly concludes.
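To make the oversight point concrete: a compliance monitoring system that incorporates AI might, in its simplest form, be an anomaly detector that flags transactions for human review. The sketch below uses scikit-learn’s IsolationForest; the features, data, and threshold are hypothetical assumptions, not a reference to any system discussed in the article.

```python
# Illustrative sketch of an AI-assisted compliance monitoring system of
# the kind discussed above: an anomaly detector flags transactions for
# escalation to human review. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: transaction amount, counterparty-risk score (both hypothetical).
normal = rng.normal(loc=[100.0, 0.2], scale=[20.0, 0.05], size=(500, 2))
suspect = np.array([[950.0, 0.9]])  # an outlier the monitoring system should surface

monitor = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = monitor.predict(np.vstack([normal[:3], suspect]))  # -1 marks anomalies
print(flags)  # expect the last transaction to be flagged for review
```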

Reyes on Autonomous Corporate Personhood

Carla Reyes (Southern Methodist University – Dedman School of Law) has posted “Autonomous Corporate Personhood” (Washington Law Review, Forthcoming) on SSRN. Here is the abstract:

Currently, several states are considering changes to their business organization law to accommodate autonomous businesses—businesses operated entirely through computer code. Several international civil society groups are also actively developing new frameworks and a model law for enabling decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood in order to respond to AI’s increasing presence in society. Despite the fairly obvious overlap between the two sets of inquiries, the legal and policy discussions in the two areas only rarely intersect. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays in law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial people, whether entity, machine, or otherwise—requires approaching the issue through a systems lens to ensure law’s consideration of the varied socio-technical contexts in which artificial people exist.

To make these claims, this Article first establishes a baseline of terminology, emphasizes the importance of viewing AI as part of a socio-technical system, and reviews the existing market for autonomous corporations. Sections II and III then examine the existing debates around both artificially intelligent persons and corporate personhood, arguing that the socio-legal needs driving artificial personhood debates in both contexts include: protecting the rights of natural people, upholding social values, and creating a fiction for legal convenience. Sections II and III also explore the extent to which the theories from either set of literature fit the reality of autonomous businesses, illuminating gaps and using them to demonstrate that the law must consider the socio-technical context of AI systems and the socio-legal complexity of corporations to decide how autonomous businesses will interact with the world. Ultimately, the Article identifies links between both areas of legal personhood, leveraging those links to demonstrate the Article’s core claim: developing law for artificial systems in any context should use the systems nature of the artificial artifact to tie its legal treatment directly to the system’s socio-technical reality.