Bassini on Speech Without a Speaker: Constitutional Coverage for Generative AI Output?

Marco Bassini (Tilburg U Law) has posted “Speech Without a Speaker: Constitutional Coverage for Generative AI Output?” (European Constitutional Law Review, First View, pp. 1-37, https://doi.org/10.1017/S1574019625100771) on SSRN. Here is the abstract:

Generative AI systems’ output as speech – Constitutional coverage for AI speech in the absence of a (human) speaker – Right of individuals to receive information as a perspective for framing constitutional coverage of generative AI output – Implications of constitutional coverage for content policing and content moderation by private platforms – Trends in the interpretation of existing content moderation regimes and their applicability to generative AI systems

Dickinson on Section 230: A Juridical History

Gregory M. Dickinson (U Nebraska Lincoln College Law) has posted “Section 230: A Juridical History” (28 Stan. Tech. L. Rev. 1 (2025)) on SSRN. Here is the abstract:

Section 230 of the Communications Decency Act of 1996 is the most important law in the history of the internet. It is also one of the most flawed. Under Section 230, online entities are absolutely immune from lawsuits related to content authored by third parties. The law has been essential to the internet’s development over the last twenty years, but it has not kept pace with the times and is now a source of deep consternation to courts and legislatures. Lawmakers and legal scholars from across the political spectrum praise the law for what it has done, while criticizing its protection of bad-actor websites and obstruction of internet law reform.

Absent from the fray, however, has been the Supreme Court, which has never issued a decision interpreting Section 230. That is poised to change, as the Court now appears determined to peel back decades of lower court case law and interpret the statute afresh to account for the tremendous technological advances of the last two decades. Rather than offer a proposal for reform, of which there are plenty, this Article acts as a guidebook to reformers by examining how we got to where we are today. It identifies those interpretive steps and missteps by which courts constructed an immunity doctrine insufficiently resilient against technological change, with the aim of aiding lawmakers and scholars in crafting an immunity doctrine better situated to accommodate future innovation.

Kogan on Artificial Intelligence, Existential Risk, and the First Amendment

Ilan Kogan (Yale U Law) has posted “Artificial Intelligence, Existential Risk, and the First Amendment” (University of Pennsylvania Journal of Constitutional Law Vol. 27, March 2025) on SSRN. Here is the abstract:

In May 2023, hundreds of public figures signed a statement warning of the growing risk of human extinction from sophisticated artificial-intelligence systems. Yet, in many important cases, the outputs of sophisticated artificial-intelligence systems qualify as protected speech for First Amendment purposes. Regulators’ increasing focus on the potential for artificial intelligence to extinguish humanity is thus minimally actionable. Sophisticated artificial-intelligence systems are unlikely to present risk sufficient to justify regulation that subverts the Constitution. By limiting unnecessary regulation aimed at speculative risks, the First Amendment helps ensure that the United States will benefit from important technological advances in the twenty-first century.

Goodyear on Dignity and Deepfakes

Michael Goodyear (New York U Law) has posted “Dignity and Deepfakes” (Arizona State Law Journal, Forthcoming) on SSRN. Here is the abstract:

Today, we face a dangerous technosocial combination: AI-generated deepfakes and the Internet. Believable and accessible, these deepfakes have already spread sex, lies, and false advertisements across the Internet and targeted everyone from Taylor Swift to middle school students. Dissemination of deepfakes inflicts multifarious dignitary harms against their victims—especially women and LGBTQ+ persons—stripping them of control over their own identities, harming their reputations, and ostracizing them from society through shame.

Yet this is not the first time a new technology for capturing one’s likeness and a method for disseminating images threatened individuals’ dignity. In the late nineteenth century, the right of publicity emerged in response to a similar troubling technosocial combination: the portable camera and mass media. With no legal remedy for the capture and dissemination of one’s likeness to friend and foe alike, the right of publicity sought to protect both individuals’ dignitary and economic interests by curtailing the sharing of images without permission.

The right of publicity offers an apt historical analogy that should inform a dual approach to deepfakes. Promising anti-deepfake proposals should both counter dissemination and address the dignitary harms deepfakes inflict. Yet most proposed legal remedies are unviable because they fail one of these two prongs. Some proposals are limited in restricting dissemination because a federal law, Section 230, immunizes online platforms for their users’ actions, including the posting of deepfakes. Other claims that lie outside of Section 230, such as copyright and trademark infringement, are a conceptual mismatch for the dignitary harms of deepfake dissemination, limiting their utility.

This Article proposes that the right of publicity is not only a helpful historical analog but also offers a third path between these doctrinally and conceptually lacking proposals. Recognizing the right of publicity as intellectual property would exclude it from the liability shield of Section 230. If Section 230 did not apply, online platforms could be liable for hosting misappropriations of another’s right of publicity, including deepfakes. This would oblige online platforms, consistent with the First Amendment, to adopt notice-and-takedown frameworks to restrict deepfakes’ dissemination. Although the right of publicity has become unmoored from its dignitary purpose and is increasingly limited to commercial uses, now is the time to restore the right’s full original purpose to address both commercial and dignitary harms.

Caputo on ‘Quiet’ Enjoyment: Uncovering the Hidden History of the Right to Attention in Private and Public

Nicholas A. Caputo (Oxford Martin) has posted “‘Quiet’ Enjoyment: Uncovering the Hidden History of the Right to Attention in Private and Public” (Stanford Technology Law Review, forthcoming 2025) on SSRN. Here is the abstract:

Legal scholars have largely neglected attention as a subject of legal rights, even as attention has become one of the most valuable economic resources of the modern era. This Article argues that a right to attention has existed implicitly in American law since the early twentieth century, emerging in response to technological, social, and economic changes in that period that made attention both increasingly valuable and increasingly impinged upon, as America shifted toward knowledge work and leisure activities that demanded sustained focus.

By examining court decisions in private law doctrines around property and public law doctrines around speech that can only be explained by reference to an implicit right to attention, this Article begins to uncover the ways in which judges and lawmakers built out a set of legal protections that enabled people to invoke the law to protect their own attention while avoiding stifling the sometimes-disruptive conduct of others. In particular, I show that in private law, courts began recognizing “attentional nuisances,” nontrespassory invasions of land that caused not physical but only attentional harm, thereby creating a framework for protecting a person’s attention on her own land. In public spaces, the new right to attention came into conflict with also-emerging free speech rights, which seem to require the ability to attract the attention of others in order to express oneself to them. There, the Supreme Court sought a balance through the development of frameworks like time, place, or manner doctrine, which allowed governments to try to regulate attention-grabbing stimuli without directly regulating speech, and through the uneven development of listeners’ rights.

In closing, I argue that the right to attention developed in the early twentieth century provides a foundation upon which a modern right to attention addressed to the attention economy could be built, one both rooted in the experience of the past and capable of meeting the novel challenges presented by digital technology and the rise of artificial intelligence, which promise another epochal technological revolution like the one that gave rise to the right a century ago. Drawing out the right to attention buried in the caselaw gives scholars, lawmakers, and the public a set of tools they can use to decide how to adapt it to the demands of the present. The future of attention relies upon the lessons of its past, and recognizing explicitly the so-far hidden right to attention provides better ways of shaping its future.

Balkin on Moody v. NetChoice – The Supreme Court Meets the Free Speech Triangle

Jack M. Balkin (Yale U Law) has posted “Moody v. NetChoice – The Supreme Court Meets the Free Speech Triangle” on SSRN. Here is the abstract:

Moody v. NetChoice is the Supreme Court’s first attempt at applying the First Amendment to social media content regulation. Private infrastructure owners can act both as speakers and as the governors of other people’s speech. This requires a shift from the traditional dyadic model of speech regulation (government versus citizen) to a pluralist or triangular model in which both states and owners of private infrastructure govern end user speech.

Traditional First Amendment doctrine has problems dealing with this shift. The free speech triangle generates perpetual conflicts between the free speech interests of infrastructure companies and end users. Because First Amendment doctrine assumes that only governments regulate (and censor) speech, it has difficulty dealing with these conflicts, and it tends to conflate speech rights with property rights. As a result, to the extent that existing doctrine recognizes First Amendment rights, they will usually be the rights of large digital companies and not of end users.

Moody exemplifies these tendencies, granting social media companies a First Amendment right to govern their end users’ speech. The free speech interests of end users play little to no role in the Court’s analysis.

The best approach is to read Moody narrowly to apply to applications resembling social media feeds, but not to other kinds of digital platforms or to other services lower in the “tech stack.” This would allow governments to impose non-discrimination or common-carriage rules on other parts of the digital infrastructure, especially when their primary job is to ensure that digital traffic flows smoothly and efficiently.

Moody leaves untouched content-neutral structural regulations to ensure fair competition. For example, governments could require social media platforms to permit end users to subscribe to middleware services that would offer alternative content moderation and recommendation systems. Governments could also require interoperability between social media platforms. These kinds of reforms would allow end users to benefit from the network effects of global platforms but also offer them greater choice in how their speech is governed and regulated. They would lower barriers to entry for new companies that could provide competing content moderation and recommendation services. This would help counter the dominance of a tiny number of powerful global companies that decide who speaks online.

The Court assumed without deciding that states might impose disclosure and transparency rules on social media companies under compelled commercial speech doctrine. This is in tension with its holding that content moderation and recommendation systems involve editorial judgments like those in newspapers. Newspapers are normally free to make editorial judgments without having to justify themselves to the state. In fact, commercial speech doctrine is an imperfect proxy for the real issues of procedural fairness. What is really at stake is not whether end users are well informed; it is whether they are being governed arbitrarily.

Finally, Moody begins thinking about whether content produced by algorithms and artificial intelligence is protected by the First Amendment. The Court’s brief discussion shows that it understands the problem is important but that it currently lacks the tools to resolve it in a satisfactory way.

Goodman on Synthetic Content: Default to Distrust

Ellen P. Goodman (Rutgers Law) has posted “Synthetic Content: Default to Distrust” (Case Western Reserve Law Review, Forthcoming) on SSRN. Here is the abstract:

AI-generated or altered content (synthetic content) can cause economic, dignitary, and epistemic harms. To combat epistemic harms in particular, many jurisdictions have adopted or are considering mandatory synthetic content provenance or content labels. This article takes a skeptical view of such mandates on practical and conceptual grounds. Synthetic content disclosure mandates usually rest on two premises: (1) that synthetic content deceives or otherwise distorts public discourse, as compared with authentic content; and (2) that source disclosure effectively combats discourse harms. These premises are not well supported, and the disclosures themselves may distort understanding. Moreover, in a near-future world where a large portion of communications are partially or fully synthetic, it makes little sense to assume such content is rare. These problems of scaling, burden, and meaning all feed into the First Amendment infirmities of mandatory synthetic content disclosures. This is not to devalue the importance of content authentication or voluntary synthetic content disclosure. It is simply to challenge the wisdom of taking inherently contested sociotechnical calls (not unlike distinguishing what is true from false) and legally mandating that they be made.

An alternative to “defining in” the synthetic is to “define out” the authentic. It is to default to distrust in the authenticity of content. Rather than putting the onus on synthetic content creators and distributors to mark content as synthetic, a more scalable approach is to support those who want to authenticate content as human-made. Robust content authentication calls for quite different policy interventions than disclosure mandates. The latter draw from the regulatory heritage of media laws, such as electioneering and advertising disclosure laws. These force reluctant communicators to disclose what they would prefer to conceal. By contrast, the policies needed to support authentication would draw from the regulatory heritage of software law, such as the Digital Millennium Copyright Act’s requirement that distributors preserve content technical protection measures. Here, the goal is to preserve communications voluntarily made and protect them from downstream tampering. If and when most content is at least partially synthetic, those who want to flag their communications as authentic (or provide provenance information about content alterations) need assurance that those flags will convey through the content distribution chain to recipients.

Shaheen on Section 230’s Immunity for Generative Artificial Intelligence

Louis Shaheen (Ohio State U) has posted “Section 230’s Immunity for Generative Artificial Intelligence” on SSRN. Here is the abstract:

Section 230 of the Communications Decency Act of 1996 (47 U.S.C. § 230) was Congress’s attempt to shield online platforms from liability for third-party content posted on their websites. The law also protects online platforms serving as “Good Samaritans,” those that perform editorial functions on their websites to keep them free of obscene material. Consistent with an admonishment by Congress, the federal courts have interpreted Section 230 broadly and have granted immunity in a variety of cases.

Law review commentary has responded critically and offered potential reform recommendations. Commentators from law professors to law firms have critiqued Section 230 for extending broad immunity beyond its drafters’ intentions. Indeed, Section 230 affords website owners protection far greater than what the First Amendment guarantees. The criticisms are warranted, considering that those harmed by content posted on websites are left without recourse. For example, Section 230 has granted immunity in situations where a website allows content that sexually harasses and humiliates people. Although Congress has put forth legislation that would curb this immunity, it has not passed any. Thus, under Congress’s direction, courts have interpreted Section 230 broadly and granted immunity in ill-advised circumstances.

This Article deals with an emerging Section 230 frontier commonly discussed in legal commentary but not yet addressed by courts: generative artificial intelligence (AI). Currently, the closest analogs that courts have addressed are algorithms, like the one used by Facebook, that push content to a user’s feed based on that user’s input. Generative AI, when trained using third-party content on the internet, can produce text, images, and other media in response to a user’s input, similar to and yet different from these algorithms. The emergence of generative AI presents an unanswered question: does Section 230 protect the content that generative AI produces?

In four sections, this Article answers in the affirmative. First, it provides a background that traces Section 230’s legislative history. Second, it highlights White House and congressional activity surrounding generative AI. Third, it applies Section 230’s three elements to determine whether generative AI has Section 230 immunity. Fourth, it demonstrates that Section 230 immunity is too broad and proposes solutions, based on analogous laws, that would allow generative AI technology to develop while allowing those harmed by generative AI to petition courts for redress.

Ginsburg on Humanist Copyright

Jane C. Ginsburg (Columbia U Law) has posted “Humanist Copyright” (6 Journal of Free Speech Law, Forthcoming 2025) on SSRN. Here is the abstract:

This exploration of the role of authorship in copyright law proceeds in three parts: historical, doctrinal, and predictive. First, I will review the development of author-focused property rights in the pre-copyright regimes of printing privileges and in early Anglo-American copyright law through the 1909 U.S. Copyright Act. Second, I will analyze the extent to which the present U.S. copyright law does (and does not) honor human authorship. Finally, I will consider the potential responses of copyright law to the claims of proprietary rights in AI-generated outputs. I will explain why the humanist orientation of U.S. copyright law validates the position of the Copyright Office and the courts that the output of an AI system will not be a “work of authorship” unless human participation has determinatively caused the creation of the output.

Neumann et al. on Informational Justice in AI-Assisted Fact-Checking

Terrence Neumann (U Texas Austin) et al. have posted “Informational Justice in AI-Assisted Fact-Checking” on SSRN. Here is the abstract:

Faced with the scale of misinformation, fact-checking organizations are turning to algorithms to efficiently triage claims in need of verification. However, there is uncertainty regarding the appropriate “ground truth” for training and evaluating these algorithms given the varied factors that influence how claims are prioritized for checking. For instance, numerous fact-checking organizations prioritize checking claims that are most likely to impact the “general public,” while others prioritize claims that harm vulnerable demographic groups. To better understand the implications of these and other algorithmic design choices, we first extend and then apply the theoretical lens of informational justice to elucidate the often-competing interests of representation, participation, credibility, and the distribution of benefits and burdens among stakeholders affected by algorithms designed to assist with fact-checking. From our examination of an original dataset, we show that different definitions of claim prioritization lead to certain topics being systematically prioritized over others. Moreover, we show that even when using the same definition, data labelers interpret and apply it differently based on their perspectives and demographics. We conclude with a discussion of the theoretical and practical implications of these findings for fact-checking organizations, highlighting the risks of “off-the-shelf” algorithms and opportunities for technical approaches to informational justice.