Cyphert & Martin on Developing a Liability Framework for Social Media Algorithmic Amplification

Amy Cyphert (West Virginia University – College of Law) and Jena Martin (same) have posted “‘A Change is Gonna Come:’ Developing a Liability Framework for Social Media Algorithmic Amplification” (U.C. Irvine Law Review, Vol. 13 (2022)) on SSRN. Here is the abstract:

From the moment social media companies like Facebook were created, they have been largely immune to suit for the actions they take with respect to user content. This is thanks to Section 230 of the Communications Decency Act, 47 U.S.C. § 230, which offers broad immunity to sites for content posted by users. But seemingly the only thing a deeply divided legislature can agree on is that Section 230 must be amended, and soon. Once that immunity is altered, either by Congress or the courts, these companies may be liable for the decisions and actions of their algorithmic recommendation systems, artificial intelligence models that sometimes amplify the worst in our society, as Facebook whistleblower Frances Haugen explained to Congress in her testimony.

But what, exactly, will it look like to sue a company for the actions of an algorithm?

Whether through torts like defamation or under certain statutes, such as those aimed at curbing terrorism, the mechanics of bringing such a claim will surely occupy academics and practitioners in the wake of changes to Section 230. To that end, this Article is the first to examine how the issue of algorithmic amplification might be addressed by agency principles of direct and vicarious liability, specifically within the context of holding social media companies accountable. As such, this Article covers the basics of algorithmic recommendation systems, discussing them in layman’s terms and explaining why Section 230 reform may spur claims that have a profound impact on traditional tort law. The Article looks to sex trafficking claims made against social media companies—an area already exempted from Section 230’s shield—as an early model of how courts might address other claims against these companies. It also examines the potential hurdles, such as causation, that will remain even when Section 230 is amended. It concludes by offering certain policy considerations for both lawmakers and jurists.


Balkin on Free Speech Versus the First Amendment

Jack M. Balkin (Yale Law School) has posted “Free Speech Versus the First Amendment” (UCLA Law Review, Forthcoming) on SSRN. Here is the abstract:

The digital age has widened the gap between the judge-made doctrines of the First Amendment and the practical exercise of freedom of speech. Today speech is regulated not only by territorial governments but also by the owners of digital infrastructure — for example, broadband and cellular providers, caching services, app stores, search engines, and social media companies. This has made First Amendment law less central and the private governance of speech more central.

When the free speech interests of digital companies and their end-users conflict, the major beneficiaries of First Amendment rights are likely to be the former and not the latter. Digital companies will try to use the First Amendment to avoid government regulation, including regulation designed to protect the free speech and privacy interests of end-users.

In response, internet reformers on both the left and the right will attempt to de-constitutionalize internet regulation: They will offer legal theories designed to transform conflicts over online speech from First Amendment questions into technical, statutory and administrative questions. In the U.S., at least, de-constitutionalization is the most likely strategy for imposing public obligations on privately-owned digital companies. If successful, it will make the First Amendment even less important to online expression.

The speed and scale of digital speech have also transformed how speech is governed. To handle the enormous traffic, social media companies have developed algorithmic and administrative systems that do not view speech in terms of rights. Accompanying these changes in governance is a different way of thinking about speech. In place of the civil liberties model of individual speech rights that developed in the twentieth century, the emerging model views speech in hygienic, epidemiological, environmental, and probabilistic terms.

The rise of algorithmic decisionmaking and data science also affects how people think about free expression. Speech becomes less the circulation of ideas and opinions among autonomous individuals and more a collection of measurable data and network connections that companies and governments use to predict social behavior and nudge end-users. Conceived as a collection of data, speech is no longer special; it gets lumped together with other sources of measurable and analyzable data about human behavior that can be used to make predictions for influence and profit.

Meanwhile, the speed and scale of digital expression, the scarcity of audience attention, and social media’s facilitation of online propaganda and conspiracy theories have placed increasing pressure on the standard justifications for freedom of speech, including the pursuit of truth and the promotion of democracy. The gap between the values that justify freedom of speech and what the First Amendment actually protects grows ever wider.

In response, some scholars have argued that courts should change basic First Amendment doctrines about incitement, defamation, and false speech. But it is far more important to focus on regulating the new forms of informational capitalism that drive private speech governance and have had harmful effects on democracy around the globe.

The digital age has also undermined many professions and institutions for producing and disseminating knowledge. These professions and institutions are crucial to the health and vitality of the public sphere. Changing First Amendment doctrines will do little to fix them. Instead, the task of the next generation is to revive, reestablish and recreate professional and public-regarding institutions for knowledge production and dissemination that are appropriate to the digital age. That task will take many years to accomplish.

Recommended.

Goldman on The United States’ Approach to ‘Platform’ Regulation

Eric Goldman (Santa Clara University – School of Law) has posted “The United States’ Approach to ‘Platform’ Regulation” on SSRN. Here is the abstract:

This paper summarizes the United States’ legal framework governing Internet “platforms” that publish third-party content. It highlights three key features of U.S. law: the constitutional protections for free speech and press, the statutory immunity provided by 47 U.S.C. § 230 (“Section 230”), and the limits on state regulation of the Internet. It also discusses U.S. efforts to impose mandatory transparency obligations on Internet “platforms.”

G’sell on The Digital Services Act

Florence G’sell (Sciences Po; University of Lorraine) has posted “The Digital Services Act (DSA): A General Assessment” (in Antje von Ungern-Sternberg (ed.), Content Regulation in the European Union – The Digital Services Act (Trier 2023)) on SSRN. Here is the abstract:

Effective since November 16, 2022, the Digital Services Act (DSA) introduces an innovative and pragmatic regulatory approach, utilizing novel and ingenious mechanisms to update and complement the current rules governing online platforms while adapting to their present characteristics. This article presents and comments on the main features of the DSA, while highlighting the potential challenges that could arise during its implementation. The first section outlines the five key aspects of the DSA, including the asymmetric nature of the Regulation, which adjusts rules and obligations to suit the size and activities of regulated entities; the preservation of the exemption from liability established by the E-Commerce Directive, along with the inclusion of a new Good Samaritan clause; the creation of new obligations in content moderation to ensure the effective combating of objectionable content and the protection of users’ rights; the establishment of specific obligations to protect users and consumers and respond to crisis situations; and finally, the original provisions concerning the enforcement of the DSA. The second part of the article concentrates on identifying the potential challenges of implementing the DSA, focusing specifically on obstacles that could hinder the text’s effective application, potential difficulties arising from provisions related to managing systemic risks, and the complex adaptation of the DSA to emerging technologies. Ultimately, while the DSA is undoubtedly an innovative, necessary, and commendable initiative, its ability to address the most pressing issues of the contemporary internet will only become clear upon its practical implementation.

Cortez & Sage on The Disembodied First Amendment

Nathan Cortez (SMU – Dedman School of Law) and William M. Sage (Texas A&M University School of Law) have posted “The Disembodied First Amendment” (100 Washington University Law Review 707 (2023)) on SSRN. Here is the abstract:

First Amendment doctrine is becoming disembodied—increasingly detached from human speakers and listeners. Corporations claim that their speech rights limit government regulation of everything from product labeling to marketing to ordinary business licensing. Courts extend protections to commercial speech that were ordinarily extended only to core political and religious speech. And now, we are told, automated information generated for cryptocurrencies, robocalling, and social media bots is also protected speech under the Constitution. Where does it end? It begins, no doubt, with corporate and commercial speech. We show, however, that heightened protection for corporate and commercial speech is built on several “artifices”: dubious precedents, doctrines, assumptions, and theoretical grounds that have elevated corporate and commercial speech rights over the last century. This Article offers several ways to deconstruct these artifices, re-tether the First Amendment to natural speakers and listeners, and thus reclaim the individual, political, and social objectives of the First Amendment.

Guerra-Pujol on Truth Markets

F. E. Guerra-Pujol (Pontifical Catholic University of Puerto Rico; University of Central Florida) has posted “Truth Markets” on SSRN. Here is the abstract:

A growing chorus of legal scholars and policy makers has decried the proliferation of false information on the Internet (e.g., fake news, conspiracy theories, and the like) while at the same time downplaying the dangers of Internet censorship, including shadow bans, arbitrary or selective enforcement of content moderation policies, and other forms of Internet speech suppression. This Article proposes a simple alternative to censorship: a truth market.

Nugent on The Five Internet Rights

Nicholas Nugent (University of Virginia School of Law) has posted “The Five Internet Rights” (Washington Law Review, Forthcoming) on SSRN. Here is the abstract:

Since the dawn of the commercial internet, content moderation has operated under an implicit social contract that website operators could accept or reject users and content as they saw fit, but users in turn could self-publish their views on their own websites if no one else would have them. However, as online service providers and activists have become ever more innovative and aggressive in their efforts to deplatform controversial speakers, content moderation has progressively moved down into the core infrastructure of the internet, targeting critical resources, such as networks, domain names, and IP addresses, on which all websites depend. These innovations point to a world in which it may soon be possible for private gatekeepers to exclude unpopular users, groups, or viewpoints from the internet altogether, a phenomenon I call viewpoint foreclosure.

For more than three decades, internet scholars have searched, in vain, for a unifying theory of interventionism—a set of principles to guide when the law should intervene in the private moderation of lawful online content and what that intervention should look like. These efforts have failed precisely because they have focused on the wrong gatekeepers, scrutinizing the actions of social media companies, search engines, and other third-party websites—entities that directly publish, block, or link to user-generated content—while ignoring the core resources and providers that make internet speech possible in the first place. This Article is the first to articulate a workable theory of interventionism by focusing on the far more fundamental question of whether users should have any right to express themselves on the now fully privatized internet. By articulating a new theory premised on viewpoint access—the right to express one’s views on the internet itself (rather than on any individual website)—I argue that the law need take account of only five basic non-discrimination rights to protect online expression from private interference—namely, the rights of connectivity, addressability, nameability, routability, and accessibility. Looking to property theory, internet architecture, and economic concepts around market entry barriers, it becomes clear that as long as these five fundamental internet rights are respected, users are never truly prevented from competing in the online marketplace of ideas, no matter the actions of any would-be deplatformer.

Bloch-Wehba on The Rise, Fall, and Rise of Cyber Civil Libertarianism

Hannah Bloch-Wehba (Texas A&M University School of Law; Yale ISP) has posted “The Rise, Fall, and Rise of Cyber Civil Libertarianism” (in Feminist Cyberlaw (Meg Leta Jones and Amanda Levendowski, eds., Forthcoming)) on SSRN. Here is the abstract:

Using sexual speech as its focal point, this essay explores the ambiguous legacy of cyber civil liberties and the ascent of alternative paradigms for digital freedom. From its inception, cyberlaw was characterized by a moral panic over sexual speech, pornography, and the protection of children familiar to First Amendment scholars. Important civil libertarian victories recognized that sexual speech and pornography were constitutionally protected from state intervention. The civil libertarian paradigm saw government regulation as the primary threat to free speech online, the marketplace as the more appropriate mechanism for regulating expression, and courts as the rightful arbiters of these disputes.

But while civil libertarians successfully rolled back much regulatory intervention to enforce moral codes online, their successes came at a price: the legitimation of private power over speech. Though the civil libertarian tradition would theoretically protect sexual speech, it has in practice shifted the locus of power over speech from public to private hands. The result is a form of “market” ordering that is nominally private but that, in fact, reflects the entrenched power and influence of conservative cultural politics. In turn, this burgeoning private authority has prompted both political and cultural realignments (the “techlash”) and a broader turning away from the civil libertarian approach to speech.

Amid attacks on women’s health, privacy, equality, and autonomy, it is tempting to look to online platforms as guardians of these values and defenders of First Amendment traditions. Yet platforms have been—and continue to be—ambivalent defenders of sexual speech. Today, private speech enforcement is far broader than what the state could accomplish through direct regulation. But in a moment of challenge to sexual freedom and equality, cyber civil libertarianism might—with renewed attention to private power—yet find another foothold.

Lemert on Facebook’s Corporate Law Paradox

Abby Lemert (Yale Law School) has posted “Facebook’s Corporate Law Paradox” on SSRN. Here is the abstract:

In response to the digital harms created by Facebook’s platforms, lawmakers, the media, and academics repeatedly demand that the company stop putting “profits before people.” But these commentators have consistently overlooked the ways in which Delaware corporate law disincentivizes and even prohibits Facebook’s directors from prioritizing the public interest. Because Facebook experiences the majority of the harms it creates as negative externalities, Delaware’s unflinching commitment to shareholder primacy prevents Facebook’s directors from making unprofitable decisions to redress those harms. Even Facebook’s attempt to delegate decision-making authority to the independent Oversight Board verges on an unlawful abdication of corporate director fiduciary duties. Facebook’s experience casts doubt on the prospects for effective corporate self-regulation of content moderation, and more broadly, on the ability of existing corporate law to incentivize or even allow social media companies to meaningfully redress digital harms.

Goldman on Assuming Good Faith Online

Eric Goldman (Santa Clara University – School of Law) has posted “Assuming Good Faith Online” (30 Catholic U.J.L. & Tech (Forthcoming)) on SSRN. Here is the abstract:

Every internet service enabling user-generated content faces a dilemma of balancing good-faith and bad-faith activity. Without that balance, the service loses one of the internet’s signature features—users’ ability to engage with and learn from each other in pro-social and self-actualizing ways—and instead drives towards one of two suboptimal outcomes. Either it devolves into a cesspool of bad-faith activity or becomes a restrictive locked-down environment with limited expressive options for any user, even well-intentioned ones.

Striking this balance is one of the hardest challenges that internet services must navigate, and yet U.S. regulatory policy currently lets services prioritize the best interests of their audiences rather than defer to regulators’ paranoia about bad-faith actors. However, that regulatory deference is in constant jeopardy. Should it change, it will hurt the internet—and all of us.