Wang on Can ChatGPT Personalize Index Funds’ Voting Decisions?

Chen Wang (UC Berkeley – School of Law) has posted “Can ChatGPT Personalize Index Funds’ Voting Decisions?” on SSRN. Here is the abstract:

ChatGPT has risen rapidly to prominence due to its unique features and generalization ability. This article proposes using ChatGPT to assist small investment funds, particularly small passive funds, in making more accurate and informed proxy voting decisions.

Passive funds adopt a low-cost business model. Small passive funds lack financial incentives to make informed proxy voting decisions that align with their shareholders’ interests. This article examines the implications of passive funds for corporate governance and the issues associated with outsourcing voting decisions to proxy advisors. The article finds that passive funds underspend on investment stewardship and outsource their proxy voting decisions to proxy advisors, which could lead to biased or erroneous recommendations.

However, by leveraging advanced AI language models such as ChatGPT, small passive funds can improve their proxy voting accuracy and personalization, enabling them to better serve their shareholders and navigate the competitive market.

To test ChatGPT’s potential, this article conducted an experiment using GPT-4 in a zero-shot setting to generate detailed proxy voting guidelines and apply them to a real-world proxy statement. The model successfully identified conflicts of interest in the election of directors and generated comprehensive guidelines with weights for each variable. However, ChatGPT has limitations, such as token limits, long-range dependency issues, and a likely ESG inclination.
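
For readers curious how such a zero-shot setup might look in practice, here is a minimal sketch against the OpenAI chat API. The prompt wording, the weighting instruction, and the function names are illustrative assumptions, not the paper’s actual experimental protocol:

```python
# Minimal sketch: a zero-shot GPT-4 call that drafts proxy voting
# guidelines, then applies them to an excerpt of a proxy statement.
# Prompts and weighting scheme are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUIDELINE_PROMPT = (
    "You act as a proxy voting adviser for a small passive index fund. "
    "Draft voting guidelines for director elections and assign a weight "
    "(summing to 100) to each variable you consider, e.g. independence, "
    "attendance, overboarding, and conflicts of interest."
)

def draft_guidelines() -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep output as deterministic as the API allows
        messages=[{"role": "user", "content": GUIDELINE_PROMPT}],
    )
    return resp.choices[0].message.content

def apply_guidelines(guidelines: str, proxy_excerpt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Apply these proxy voting guidelines:\n" + guidelines},
            {"role": "user",
             "content": "Recommend FOR or AGAINST, with reasons:\n" + proxy_excerpt},
        ],
    )
    return resp.choices[0].message.content
```

Note the token limits the abstract mentions: a full proxy statement generally has to be chunked into excerpts before it fits within the model’s context window.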

To enhance its abilities, ChatGPT can be fine-tuned using high-quality, domain-specific datasets. However, investment funds may face challenges when outsourcing voting decisions to AI, such as data and algorithm biases, cybersecurity and privacy concerns, and regulatory uncertainties.
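
As a concrete illustration of the fine-tuning path, here is a sketch of the kind of domain-specific training record a fund might assemble. The JSONL "messages" layout follows OpenAI’s fine-tuning format; the record’s content and the file name are invented for illustration:

```python
# Build one supervised fine-tuning example (JSON Lines format).
# The proposal text and the AGAINST label below are hypothetical.
import json

record = {
    "messages": [
        {"role": "system",
         "content": "You vote proxies for a small passive index fund."},
        {"role": "user",
         "content": "Proposal: elect director X, who also chairs the audit "
                    "committee of the company's largest supplier."},
        {"role": "assistant",
         "content": "AGAINST: potential conflict of interest through the "
                    "supplier relationship."},
    ]
}

# Append the record to a training file; many such labeled votes would be
# needed before fine-tuning on a high-quality dataset becomes worthwhile.
with open("proxy_votes.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```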

Cyphert & Martin on Developing a Liability Framework for Social Media Algorithmic Amplification

Amy Cyphert (West Virginia University – College of Law) and Jena Martin (same) have posted “‘A Change is Gonna Come’: Developing a Liability Framework for Social Media Algorithmic Amplification” (U.C. Irvine Law Review, Vol. 13 (2022)) on SSRN. Here is the abstract:

From the moment social media companies like Facebook were created, they have been largely immune to suit for the actions they take with respect to user content. This is thanks to Section 230 of the Communications Decency Act, 47 U.S.C. § 230, which offers broad immunity to sites for content posted by users. But seemingly the only thing a deeply divided legislature can agree on is that Section 230 must be amended, and soon. Once that immunity is altered, either by Congress or the courts, these companies may be liable for the decisions and actions of their algorithmic recommendation systems, artificial intelligence models that sometimes amplify the worst in our society, as Facebook whistleblower Frances Haugen explained to Congress in her testimony.

But what, exactly, will it look like to sue a company for the actions of an algorithm?

Whether through torts like defamation or under certain statutes, such as those aimed at curbing terrorism, the mechanics of bringing such a claim will surely occupy academics and practitioners in the wake of changes to Section 230. To that end, this Article is the first to examine how the issue of algorithmic amplification might be addressed by agency principles of direct and vicarious liability, specifically within the context of holding social media companies accountable. As such, this Article covers the basics of algorithmic recommendation systems, discussing them in layman’s terms and explaining why Section 230 reform may spur claims that have a profound impact on traditional tort law. The Article looks to sex trafficking claims made against social media companies—an area already exempted from Section 230’s shield—as an early model of how courts might address other claims against these companies. It also examines the potential hurdles, such as causation, that will remain even when Section 230 is amended. It concludes by offering certain policy considerations for both lawmakers and jurists.

Balkin on Free Speech Versus the First Amendment

Jack M. Balkin (Yale Law School) has posted “Free Speech Versus the First Amendment” (UCLA Law Review, Forthcoming) on SSRN. Here is the abstract:

The digital age has widened the gap between the judge-made doctrines of the First Amendment and the practical exercise of freedom of speech. Today speech is regulated not only by territorial governments but also by the owners of digital infrastructure — for example, broadband and cellular providers, caching services, app stores, search engines, and social media companies. This has made First Amendment law less central and the private governance of speech more central.

When the free speech interests of digital companies and their end-users conflict, the major beneficiaries of First Amendment rights are likely to be the former and not the latter. Digital companies will try to use the First Amendment to avoid government regulation, including regulation designed to protect the free speech and privacy interests of end-users.

In response, internet reformers on both the left and the right will attempt to de-constitutionalize internet regulation: They will offer legal theories designed to transform conflicts over online speech from First Amendment questions into technical, statutory and administrative questions. In the U.S., at least, de-constitutionalization is the most likely strategy for imposing public obligations on privately-owned digital companies. If successful, it will make the First Amendment even less important to online expression.

The speed and scale of digital speech have also transformed how speech is governed. To handle the enormous traffic, social media companies have developed algorithmic and administrative systems that do not view speech in terms of rights. Accompanying these changes in governance is a different way of thinking about speech. In place of the civil liberties model of individual speech rights that developed in the twentieth century, the emerging model views speech in hygienic, epidemiological, environmental, and probabilistic terms.

The rise of algorithmic decisionmaking and data science also affects how people think about free expression. Speech becomes less the circulation of ideas and opinions among autonomous individuals and more a collection of measurable data and network connections that companies and governments use to predict social behavior and nudge end-users. Conceived as a collection of data, speech is no longer special; it gets lumped together with other sources of measurable and analyzable data about human behavior that can be used to make predictions for influence and profit.

Meanwhile, the speed and scale of digital expression, the scarcity of audience attention, and social media’s facilitation of online propaganda and conspiracy theories have placed increasing pressure on the standard justifications for freedom of speech, including the pursuit of truth and the promotion of democracy. The gap between the values that justify freedom of speech and what the First Amendment actually protects grows ever wider.

In response, some scholars have argued that courts should change basic First Amendment doctrines about incitement, defamation, and false speech. But it is far more important to focus on regulating the new forms of informational capitalism that drive private speech governance and have had harmful effects on democracy around the globe.

The digital age has also undermined many professions and institutions for producing and disseminating knowledge. These professions and institutions are crucial to the health and vitality of the public sphere. Changing First Amendment doctrines will do little to fix them. Instead, the task of the next generation is to revive, reestablish and recreate professional and public-regarding institutions for knowledge production and dissemination that are appropriate to the digital age. That task will take many years to accomplish.

Recommended.

Issacharoff & McKenzie on Managerialism and its Discontents

Samuel Issacharoff (NYU Law) and Troy A. McKenzie (same) have posted “Managerialism and its Discontents” (Review of Litigation, Fall 2023) on SSRN. Here is the abstract:

Managerialism has rooted itself in the American system of civil litigation in the 40 years since the amendment of Rule 16 to recognize a new form of judicial authority, and since Judith Resnik gave the phenomenon the name that serves as its shorthand moniker. Time has not perfectly tamed the inherent tensions between the mantle judges had to adopt in the face of increasingly complex, high-stakes, and multi-jurisdictional disputes and their traditional role as detached adjudicators. One ready manifestation of that tension is the back-and-forth between fixed and discretionary practices in federal courts. This essay examines the gyrations between formal rules of application and those understood to be contextual, and it presents three approaches to the familiar rules/standards divide in federal procedure: formal managerialism, algorithmic managerialism, and structural managerialism. The first is readily exemplified by reforms to Social Security cases, which received a carve-out from the Federal Rules of Civil Procedure in 2022 and a set of formal rules tailored to their unique issues. Algorithmic managerialism hopes to harness the growing power of Artificial Intelligence to craft custom sets of discovery, motion, and other practice rules at the outset of litigation to maximize judicial economy. Lastly, structural managerialism addresses how courts choose the most efficient fora, from multidistrict litigation to bankruptcy, for resolving polycentric disputes, most notably mass torts. We conclude our review of these trends with a simple reflection: managerialism is not just an established feature of federal judicial practice; a new expansion may be on the horizon.

Siebecker on The Incompatibility of Artificial Intelligence and Citizens United

Michael R. Siebecker (U Denver Law) has posted “The Incompatibility of Artificial Intelligence and Citizens United” (Ohio State Law Journal, Vol. 83, No. 6, pp. 1211-1273, 2022) on SSRN. Here is the abstract:

In Citizens United v. FEC, the Supreme Court granted corporations essentially the same political speech rights as human beings. But does the growing prevalence of artificial intelligence (“AI”) in directing the content and dissemination of political communications call into question the jurisprudential soundness of such a commitment? Would continuing to construe the corporation as a constitutional rights bearer make much sense if AI entities could wholly own and operate business entities without any human oversight? Those questions seem particularly important, because in the new era of AI, the nature and practices of the modern corporation are quickly evolving. The magnitude of that evolution will undoubtedly affect some of the most important aspects of our shared social, economic, and political lives. To the extent our conception of the corporation changes fundamentally in the AI era, it seems essential to assess the enduring soundness of prior jurisprudential commitments regarding corporate rights that might no longer seem compatible with sustaining our democratic values. The dramatic and swift evolution of corporate practices in the age of AI provides a clarion call for revisiting the jurisprudential sensibility of imbuing corporations with full constitutional personhood in general and robust political speech rights in particular. For if corporations can use AI data mining and predictive analytics to manipulate political preferences and election outcomes for greater profits, the basic viability and legitimacy of our democratic processes hang in the balance. Moreover, if AI technology itself plays an increasingly important, if not controlling, role in determining the content of corporate political communication, granting corporations the same political speech rights as humans effectively surrenders the political realm to algorithmic entities. In the end, although AI could help corporations act more humanely, the very notion of a corporation heavily influenced or controlled by non-human entities creates the need to cabin at least somewhat the commitment to corporations as full constitutional rights bearers. In particular, with respect to corporate political activity, the growing prevalence of AI in managerial (and possibly ownership) positions makes granting corporations the same political speech rights as humans incompatible with maintaining human sovereignty.

Witt on The Digital Markets Act 

Anne Witt (EDHEC Business School – Department of Legal Sciences) has posted “The Digital Markets Act – Regulating the Wild West” (Common Market Law Review, Forthcoming 2023) on SSRN. Here is the abstract:

This contribution critically assesses the European Union’s Digital Markets Act (DMA). The DMA is the first comprehensive legal regime to regulate digital gatekeepers with the aim of making platform markets fairer and more contestable. To this end, the DMA establishes 22 per se conduct rules for designated platforms. It also precludes national gatekeeper regulation by EU Member States, thereby calling into question the legality of the pioneering German sec. 19a GWB. The analysis shows that the DMA’s rules are not as rigid as they may appear at first sight. While the DMA is more accepting of false positives than of false negatives, it contains several corrective mechanisms that could allow the Commission to fine-tune the rules to address the dangers of both over- and under-inclusiveness. A further positive is that the new regulation incorporates key concepts of the GDPR and requires coordination between the Commission and key EU data protection bodies. On the downside, the DMA does not contain any substantive principles for the assessment of gatekeeper acquisitions, leaving a worrying gap. While the DMA’s conduct rules outlaw specific leveraging strategies in digital ecosystems and may thereby indirectly address certain non-horizontal concerns arising from gatekeeper acquisitions, the fact remains that the European Union’s existing guidance on merger control is seriously out of date. The merger guidelines therefore urgently need updating to include (workable) theories of harm for concentrations in the digital economy.

Shope on GPT Performance on the Bar Exam in Taiwan

Mark Shope (National Yang Ming Chiao Tung University; Indiana University Robert H. McKinney School of Law) has posted “GPT Performance on the Bar Exam in Taiwan” on SSRN. Here is the abstract:

This paper reports the performance of the GPT-4 Model of ChatGPT Plus (“ChatGPT4”) on the multiple-choice section of the 2022 Lawyer’s Bar Exam in Taiwan. ChatGPT4 outperforms approximately half of human test-takers on the multiple-choice section with a score of 342. This score, however, would not advance a test-taker to the second and final essay portion of the exam. Therefore, this paper will not include an evaluation of ChatGPT4’s performance on the essay portion of the exam.
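
For a sense of how such an evaluation can be run, here is a minimal sketch of a multiple-choice scoring loop. The question file, its schema, and the one-point-per-item scoring are placeholder assumptions; the paper’s actual methodology (and the exam’s real scoring) may differ:

```python
# Score a chat model on a multiple-choice exam section.
# "exam.json" and its schema are hypothetical stand-ins for real exam data.
import json
from openai import OpenAI

client = OpenAI()

def ask(question: str, options: list[str]) -> str:
    prompt = (question + "\n" + "\n".join(options) +
              "\nAnswer with the letter of the correct option only.")
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()[:1]  # e.g. "A"

with open("exam.json", encoding="utf-8") as f:
    items = json.load(f)  # [{"question": ..., "options": [...], "answer": "A"}, ...]

correct = sum(1 for it in items
              if ask(it["question"], it["options"]) == it["answer"])
print(f"{correct}/{len(items)} correct")
```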

Gallese on the AI Act and the Right to Technical Interpretability

Chiara Gallese (University of Trieste – Department of Engineering) has posted “The AI Act Proposal: a New Right to Technical Interpretability?” on SSRN. Here is the abstract:

The debate about the concept of the so-called right to explanation in AI is the subject of a wealth of literature. It has focused, in the legal scholarship, on art. 22 GDPR and, in the technical scholarship, on techniques that help explain the output of a given model (explainable AI, or XAI). The purpose of this work is to investigate whether the new provisions introduced by the proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act), in combination with Convention 108+ and the GDPR, are enough to indicate the existence of a right to technical explainability in the EU legal framework and, if not, whether the EU should include it in its current legislation. This is a preliminary work submitted to the online event organised by the Information Society Law Center, and it will later be developed into a full paper.
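
To make "technical interpretability" concrete, here is a minimal sketch of one model-agnostic XAI technique, permutation importance, using scikit-learn. The toy dataset and model are assumptions chosen only to show the mechanics:

```python
# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out accuracy, a crude answer to "which inputs drove the
# model's output?" Dataset and model here are illustrative toys.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")  # top five most influential features
```

Whether output like this would satisfy a legal right to explanation is, of course, exactly the question the paper raises.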

Henderson, Li, Jurafsky, Hashimoto, Lemley & Liang on Foundation Models and Fair Use

Peter Henderson (Stanford University), Xuechen Li (same), Dan Jurafsky (same), Tatsunori Hashimoto (same), Mark A. Lemley (Stanford Law School), and Percy Liang (Stanford Computer Science) have posted “Foundation Models and Fair Use” on SSRN. Here is the abstract:

Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation. In the United States and several other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine. However, there is a caveat: If the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market for that data, fair use may no longer apply to the output of the model. In this work, we emphasize that fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use. First, we survey the potential risks of developing and deploying foundation models based on copyrighted content. We review relevant U.S. case law, drawing parallels to existing and potential applications for generating text, source code, and visual art. Experiments confirm that popular foundation models can generate content considerably similar to copyrighted material. Second, we discuss technical mitigations that can help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law. Lastly, we suggest that the law and technical mitigations should co-evolve. For example, coupled with other policy mechanisms, the law could more explicitly consider safe harbors when strong technical tools are used to mitigate infringement harms. This co-evolution may help strike a balance between intellectual property and innovation, which speaks to the original goal of fair use. But we emphasize that the strategies we describe here are not a panacea and more work is needed to develop policies that address the potential harms of foundation models.
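
To make the memorization concern concrete, here is a minimal sketch of a crude check for near-verbatim copying: the share of a generated text’s n-grams that also appear in a copyrighted source. The function names and the choice of 8-gram windows are assumptions; the paper’s experiments use more sophisticated measures:

```python
# Crude memorization check: fraction of the generated text's word n-grams
# that also occur in a source text. Near 1.0 suggests near-verbatim copying.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap(generated: str, source: str, n: int = 8) -> float:
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

# Example: overlap(model_output, copyrighted_text) ranges from 0.0 (no
# shared 8-grams) to 1.0 (every 8-gram of the output appears in the source).
```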