Lemert on Facebook’s Corporate Law Paradox

Abby Lemert (Yale Law School) has posted “Facebook’s Corporate Law Paradox” on SSRN. Here is the abstract:

In response to the digital harms created by Facebook’s platforms, lawmakers, the media, and academics repeatedly demand that the company stop putting “profits before people.” But these commentators have consistently overlooked the ways in which Delaware corporate law disincentivizes and even prohibits Facebook’s directors from prioritizing the public interest. Because Facebook experiences the majority of the harms it creates as negative externalities, Delaware’s unflinching commitment to shareholder primacy prevents Facebook’s directors from making unprofitable decisions to redress those harms. Even Facebook’s attempt to delegate decision-making authority to the independent Oversight Board verges on an unlawful abdication of its directors’ fiduciary duties. Facebook’s experience casts doubt on the prospects for effective corporate self-regulation of content moderation and, more broadly, on the ability of existing corporate law to incentivize or even allow social media companies to meaningfully redress digital harms.

Goldman on Assuming Good Faith Online

Eric Goldman (Santa Clara University – School of Law) has posted “Assuming Good Faith Online” (30 Catholic U. J.L. & Tech. (forthcoming)) on SSRN. Here is the abstract:

Every internet service enabling user-generated content faces a dilemma of balancing good-faith and bad-faith activity. Without that balance, the service loses one of the internet’s signature features—users’ ability to engage with and learn from each other in pro-social and self-actualizing ways—and instead drives towards one of two suboptimal outcomes. Either it devolves into a cesspool of bad-faith activity or becomes a restrictive locked-down environment with limited expressive options for any user, even well-intentioned ones.

Striking this balance is one of the hardest challenges internet services must navigate, and yet U.S. regulatory policy currently lets services prioritize the best interests of their audiences rather than regulators’ paranoia about bad-faith actors. However, that regulatory deference is in constant jeopardy. Should it change, it will hurt the internet—and all of us.

Goldman on Zauderer and Compelled Editorial Transparency

Eric Goldman (Santa Clara University – School of Law) has posted “Zauderer and Compelled Editorial Transparency” (Iowa Law Review Online, Forthcoming) on SSRN. Here is the abstract:

A 1985 Supreme Court opinion, Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, holds the key to the Internet’s future. Zauderer provides a relaxed level of scrutiny for Constitutional challenges to some compelled commercial speech disclosure laws. Regulators throughout the country are adopting “transparency” laws to force Internet services to disclose information about their editorial operations or decisions when they publish third-party content, based on their assumption that Zauderer permits such compelled disclosures. This article explains why these transparency laws do not qualify for Zauderer’s relaxed scrutiny. Instead, given the inevitably censorial consequences of enacting and enforcing compelled editorial transparency laws, they should usually trigger strict scrutiny—just like outright speech restrictions do.

Recommended.

Huq on Militant Democracy Comes to the Metaverse

Aziz Z. Huq (University of Chicago – Law School) has posted “Militant Democracy Comes to the Metaverse” (Emory Law Journal, Vol. 72, Forthcoming) on SSRN. Here is the abstract:

Social media platforms such as Facebook, Twitter, Instagram, and Parler are an increasingly central plank of the democratic public sphere in the United States. The prevailing view of this platform-based public sphere has of late become increasingly dour and pessimistic. What was once seen as a “technology of liberation” has come to be understood as a channel and amplifier of “antisystem” forces in democracies. This is not the first time, however, that a private actor that operates as a necessary part of the democratic system has turned out to be a threat to the quality of democracy itself: The same was true for parties of the extreme left and extreme right in postwar Europe. The principal theoretical lens through which those earlier challenges were analyzed traveled under the label of “militant democracy,” a term coined by the émigré German political scientist Karl Loewenstein.

This essay uses the lens of militant democracy theory to think about the challenge posed by digital platforms to democracy today. It draws two main lessons. First, the digital platform/democracy problem is structurally similar to the challenge of antisystem parties that Loewenstein’s militant democracy theory was crafted to meet. Second, this insight opens an opportunity to mine the practical and theoretical space of militant democracy for insights into democracy’s contemporary challenge from social media. While I make no claim that effectual interventions today can be read off in some mechanical way from yesterday’s experience with anti-democratic parties, I do suggest that the debate on militant democracy has broad-brush lessons for contemporary debates. It illuminates, at least in general terms, the sorts of legal and reform strategies that are more likely to succeed, and those that are likely to fail, as pro-democracy moves with respect to digital platforms.

Ho on Countering Personalized Speech

Leon G. Ho (University of North Carolina Law) has posted “Countering Personalized Speech” (Northwestern Journal of Technology and Intellectual Property, Vol. 20, Issue 1, 2022) on SSRN. Here is the abstract:

Social media platforms use personalization algorithms to make content curation decisions for each end user. These “personalized instances of content curation” (“PICCs”) are essentially speech conveying a platform’s predictions on content relevance for each end user. Yet, PICCs are causing some of the worst problems on the internet. First, they facilitate the precipitous spread of mis- and disinformation by exploiting the very same biases and insecurities that drive end user engagement with such content in the first place. Second, they exacerbate social media addiction and related mental health harms by leveraging users’ affective needs to drive engagement to greater and greater heights. Lastly, they help erode end user privacy and autonomy as both sources and incentives for data collection.

As with any harmful speech, the solution is often counterspeech. Free speech jurisprudence considers counterspeech the most speech-protective weapon to combat false or harmful speech. Thus, to combat problematic PICCs, social media platforms, policymakers, and other stakeholders should embolden end users’ counterspeech capabilities in the digital public sphere.

One way to implement this solution is through platform-provided end user personalization tools. The prevailing end user personalization inputs prevent users from providing effective countermeasures against problematic PICCs, since on most, if not all, major social media platforms, these inputs confer limited ex post control over PICCs. To rectify this deficiency and empower end users, I make several proposals along key regulatory modalities to move end user personalization towards more robust ex ante capabilities that also filter by content type and characteristics, rather than just ad hoc filters on specific pieces of content and content creators.

Chen on How Equalitarian Regulation of Online Hate Speech Turns Authoritarian: A Chinese Perspective

Ge Chen (Durham Law School) has posted “How Equalitarian Regulation of Online Hate Speech Turns Authoritarian: A Chinese Perspective” (Journal of Media Law, Vol. 14, Issue 1, 2022) on SSRN. Here is the abstract:

This article reveals how the heterogeneous legal approaches to balancing online hate speech against equality rights in liberal democracies have informed China’s manipulative speech regulation. In an authoritarian constitutional order, the regulation of hate speech is politically relevant only because the hateful topics are related to regime-oriented concerns. The article elaborates on the infrastructure of an emerging authoritarian regulatory patchwork of online hate speech in the global context and identifies China’s unique approach of restricting political content under the aegis of protecting equality rights. Ultimately, both the regulation and non-regulation of online hate speech form a statist approach that deviates from the equality-protective paradigm of liberal democracies and serves to fend off open criticism of government policies and public discussion of topics that potentially contravene the mainstream political ideologies.

Spencer on The First Amendment and the Regulation of Speech Intermediaries

Shaun B. Spencer (University of Massachusetts School of Law – Dartmouth) has posted “The First Amendment and the Regulation of Speech Intermediaries” (Marquette Law Review, Forthcoming) on SSRN. Here is the abstract:

Calls to regulate social media platforms abound on both sides of the political spectrum. Some want to prevent platforms from deplatforming users or moderating content, while others want them to deplatform more users and moderate more content. Both types of regulation will draw First Amendment challenges. As Justices Thomas and Alito have observed, applying settled First Amendment doctrine to emerging regulation of social media platforms presents significant analytical challenges.

This Article aims to alleviate at least some of those challenges by isolating the role of the speech intermediary in First Amendment jurisprudence. Speech intermediaries complicate the analysis because they introduce speech interests that may conflict with the traditional speaker and listener interests that First Amendment doctrine evolved to protect. Clarifying the under-examined role of the speech intermediary can help inform the application of existing doctrine in the digital age. The goal of this Article is to articulate a taxonomy of speech intermediary functions that will help courts (1) focus on which intermediary functions are implicated by a given regulation and (2) evaluate how the mix of speaker, listener, and intermediary interests should affect whether that regulation survives a First Amendment challenge.

This Article proceeds as follows. First, it provides a taxonomy of the speech intermediary functions—conduit, curator, commentator, and collaborator—and identifies for each function the potential conflict or alignment between the intermediary’s speech interest and the speech interests of the speakers and listeners the intermediary serves. Next, it maps past First Amendment cases onto the taxonomy and describes how each intermediary’s function influenced the application of First Amendment doctrine. Finally, it illustrates how the taxonomy can help analyze First Amendment challenges to emerging regulation of contemporary speech intermediaries.

Recommended.

Dickinson on The Internet Immunity Escape Hatch

Gregory M. Dickinson (St. Thomas University – School of Law; Stanford Law School) has posted “The Internet Immunity Escape Hatch” (47 BYU L. Rev. 1435 (2022)) on SSRN. Here is the abstract:

Internet immunity doctrine is broken, and Congress is helpless. Under Section 230 of the Communications Decency Act of 1996, online entities are absolutely immune from lawsuits related to content authored by third parties. The law has been essential to the internet’s development over the last twenty years, but it has not kept pace with the times and is now deeply flawed. Democrats demand accountability for online misinformation. Republicans decry politically motivated censorship. And all have come together to criticize Section 230’s protection of bad-actor websites. The law’s defects have put it at the center of public debate, with more than two dozen bills introduced in Congress in the last year alone.

Despite widespread agreement on basic principles, however, legislative action is unlikely. Congress is deadlocked, unable to overcome political polarization and keep pace with technological change. Rather than add to the sizeable literature proposing changes to the law, this Article asks a different question—how to achieve meaningful reform despite a decades-old statute and a Congress unable to act. Even without fresh legislation, reform is possible via an unlikely source: the Section 230 internet immunity statute that is already on the books. Because of its extreme breadth, Section 230 grants significant interpretive authority to the state and federal courts charged with applying the statute. This Article shows how, without any change to the statute, courts could press forward with the very reforms on which Congress has been unable to act.

Dickinson on Big Tech’s Tightening Grip on Internet Speech

Gregory M. Dickinson (St. Thomas University – School of Law; Stanford Law School) has posted “Big Tech’s Tightening Grip on Internet Speech” (54 Ind. L. Rev. (forthcoming 2022)) on SSRN. Here is the abstract:

Online platforms have completely transformed American social life. They have democratized publication, overthrown old gatekeepers, and given ordinary Americans a fresh voice in politics. But the system is beginning to falter. Control over online speech lies in the hands of a select few—Facebook, Google, and Twitter—who moderate content for the entire nation. It is an impossible task. Americans cannot even agree among themselves what speech should be permitted. And, more importantly, platforms have their own interests at stake: Fringe theories and ugly name-calling drive away users. Moderation is good for business. But platform beautification has consequences for society’s unpopular members, whose unsightly voices are silenced in the process. With control over online speech so centralized, online outcasts are left with few avenues for expression.

Concentrated private control over important resources is an old problem. Last century, for example, saw the rise of railroads and telephone networks. To ensure access, such entities are treated as common carriers and required to provide equal service to all comers. Perhaps the same should be true for social media. This Essay responds to recent calls from Congress, the Supreme Court, and academia arguing that, like common carriers, online platforms should be required to carry all lawful content. The Essay studies users’ and platforms’ competing expressive interests, analyzes problematic trends in platforms’ censorship practices, and explores the costs of common-carrier regulation before ultimately proposing market expansion and segmentation as an alternate pathway to avoid the economic and social costs of common-carrier regulation.

Stylianou, Zingales, and Di Stefano on Is Facebook Keeping up with International Standards on Freedom of Expression? A Time-Series Analysis 2005-2020

Konstantinos Stylianou (University of Leeds – School of Law), Nicolo Zingales (FGV; Tilburg; Stanford Center for Internet and Society), and Stefania Di Stefano (Graduate Institute of International and Development Studies) have posted “Is Facebook Keeping up with International Standards on Freedom of Expression? A Time-Series Analysis 2005-2020” on SSRN. Here is the abstract:

Through an exhaustive tracking of the evolution of relevant documents, we assess the compatibility of Facebook’s content policies with applicable international standards on freedom of expression, covering not only Facebook’s current policies (as of late 2020) but also their full history, beginning at Facebook’s founding. The historical dimension allows us to observe how Facebook’s response has changed over time, how freedom of expression standards have evolved, and how emphasis has shifted to new areas of speech, issues, or groups, particularly online. Our research highlights areas where progress was made and areas where progress has been insufficient, and makes relevant recommendations. Our overall finding is that in virtually all areas of freedom of expression we tracked, Facebook was slow to develop content moderation policies that met international standards. While the international community was more proactive, it too missed opportunities to provide timely guidance in key areas.