Ho on Countering Personalized Speech

Leon G. Ho (University of North Carolina Law) has posted “Countering Personalized Speech” (Northwestern Journal of Technology and Intellectual Property, Vol. 20, Issue 1, 2022) on SSRN. Here is the abstract:

Social media platforms use personalization algorithms to make content curation decisions for each end user. These “personalized instances of content curation” (“PICCs”) are essentially speech conveying a platform’s predictions on content relevance for each end user. Yet, PICCs are causing some of the worst problems on the internet. First, they facilitate the precipitous spread of mis- and disinformation by exploiting the very same biases and insecurities that drive end user engagement with such content in the first place. Second, they exacerbate social media addiction and related mental health harms by leveraging users’ affective needs to drive engagement to greater and greater heights. Lastly, they help erode end user privacy and autonomy as both sources and incentives for data collection.

As with any harmful speech, the solution is often counterspeech. Free speech jurisprudence considers counterspeech the most speech-protective weapon to combat false or harmful speech. Thus, to combat problematic PICCs, social media platforms, policymakers, and other stakeholders should embolden end users’ counterspeech capabilities in the digital public sphere.

One way to implement this solution is through platform-provided end user personalization tools. The prevailing end user personalization inputs prevent users from mounting effective countermeasures against problematic PICCs, since on most, if not all, major social media platforms these inputs confer only limited ex post control over PICCs. To rectify this deficiency and empower end users, I make several proposals along key regulatory modalities to move end user personalization towards more robust ex ante capabilities that filter by content type and characteristics, rather than merely ad hoc filtering of specific pieces of content and content creators.

Chen on How Equalitarian Regulation of Online Hate Speech Turns Authoritarian: A Chinese Perspective

Ge Chen (Durham Law School) has posted “How Equalitarian Regulation of Online Hate Speech Turns Authoritarian: A Chinese Perspective” (Journal of Media Law, Vol. 14, Issue 1, 2022) on SSRN. Here is the abstract:

This article reveals how the heterogeneous legal approaches to balancing online hate speech against equality rights in liberal democracies have informed China in its manipulative speech regulation. In an authoritarian constitutional order, the regulation of hate speech is politically relevant only because the hateful topics are related to regime-oriented concerns. The article elaborates on the infrastructure of an emerging authoritarian regulatory patchwork of online hate speech in the global context and identifies China’s unique approach of restricting political content under the aegis of protecting equality rights. Ultimately, both the regulation and dis-regulation of online hate speech form a statist approach that deviates from the paradigm protective of equality rights in liberal democracies and serves to fend off open criticism of government policies and public discussion of topics that potentially contravene the mainstream political ideologies.

Spencer on The First Amendment and the Regulation of Speech Intermediaries

Shaun B. Spencer (University of Massachusetts School of Law – Dartmouth) has posted “The First Amendment and the Regulation of Speech Intermediaries” (Marquette Law Review, Forthcoming) on SSRN. Here is the abstract:

Calls to regulate social media platforms abound across the political spectrum. Some want to prevent platforms from deplatforming users or moderating content, while others want them to deplatform more users and moderate more content. Both types of regulation will draw First Amendment challenges. As Justices Thomas and Alito have observed, applying settled First Amendment doctrine to emerging regulation of social media platforms presents significant analytical challenges.

This Article aims to alleviate at least some of those challenges by isolating the role of the speech intermediary in First Amendment jurisprudence. Speech intermediaries complicate the analysis because they introduce speech interests that may conflict with the traditional speaker and listener interests that First Amendment doctrine evolved to protect. Clarifying the under-examined role of the speech intermediary can help inform the application of existing doctrine in the digital age. The goal of this Article is to articulate a taxonomy of speech intermediary functions that will help courts (1) focus on which intermediary functions are implicated by a given regulation and (2) evaluate how the mix of speaker, listener, and intermediary interests should affect whether that regulation survives a First Amendment challenge.

This Article proceeds as follows. First, it provides a taxonomy of the speech intermediary functions—conduit, curator, commentator, and collaborator—and identifies for each function the potential conflict or alignment between the intermediary’s speech interest and the speech interests of the speakers and listeners the intermediary serves. Next, it maps past First Amendment cases onto the taxonomy and describes how each intermediary’s function influenced the application of First Amendment doctrine. Finally, it illustrates how the taxonomy can help analyze First Amendment challenges to emerging regulation of contemporary speech intermediaries.

Recommended.

Dickinson on The Internet Immunity Escape Hatch

Gregory M. Dickinson (St. Thomas University – School of Law; Stanford Law School) has posted “The Internet Immunity Escape Hatch” (47 BYU L. Rev. 1435 (2022)) on SSRN. Here is the abstract:

Internet immunity doctrine is broken, and Congress is helpless. Under Section 230 of the Communications Decency Act of 1996, online entities are absolutely immune from lawsuits related to content authored by third parties. The law has been essential to the internet’s development over the last twenty years, but it has not kept pace with the times and is now deeply flawed. Democrats demand accountability for online misinformation. Republicans decry politically motivated censorship. And all have come together to criticize Section 230’s protection of bad-actor websites. The law’s defects have put it at the center of public debate, with more than two dozen bills introduced in Congress in the last year alone.

Despite widespread agreement on basic principles, however, legislative action is unlikely. Congress is deadlocked, unable to overcome political polarization and keep pace with technological change. Rather than add to the sizeable literature proposing changes to the law, this Article asks a different question—how to achieve meaningful reform despite a decades-old statute and a Congress unable to act. Even without fresh legislation, reform is possible via an unlikely source: the Section 230 internet immunity statute that is already on the books. Because of its extreme breadth, Section 230 grants significant interpretive authority to the state and federal courts charged with applying the statute. This Article shows how, without any change to the statute, courts could press forward with the very reforms on which Congress has been unable to act.

Dickinson on Big Tech’s Tightening Grip on Internet Speech

Gregory M. Dickinson (St. Thomas University – School of Law; Stanford Law School) has posted “Big Tech’s Tightening Grip on Internet Speech” (54 Ind. L. Rev. Forthcoming 2022) on SSRN. Here is the abstract:

Online platforms have completely transformed American social life. They have democratized publication, overthrown old gatekeepers, and given ordinary Americans a fresh voice in politics. But the system is beginning to falter. Control over online speech lies in the hands of a select few—Facebook, Google, and Twitter—who moderate content for the entire nation. It is an impossible task. Americans cannot even agree among themselves what speech should be permitted. And, more importantly, platforms have their own interests at stake: Fringe theories and ugly name-calling drive away users. Moderation is good for business. But platform beautification has consequences for society’s unpopular members, whose unsightly voices are silenced in the process. With control over online speech so centralized, online outcasts are left with few avenues for expression.

Concentrated private control over important resources is an old problem. Last century, for example, saw the rise of railroads and telephone networks. To ensure access, such entities are treated as common carriers and required to provide equal service to all comers. Perhaps the same should be true for social media. This Essay responds to recent calls from Congress, the Supreme Court, and academia arguing that, like common carriers, online platforms should be required to carry all lawful content. The Essay studies users’ and platforms’ competing expressive interests, analyzes problematic trends in platforms’ censorship practices, and explores the costs of common-carrier regulation before ultimately proposing market expansion and segmentation as an alternate pathway to avoid the economic and social costs of common-carrier regulation.

Stylianou, Zingales, and Di Stefano on Is Facebook Keeping up with International Standards on Freedom of Expression? A Time-Series Analysis 2005-2020

Konstantinos Stylianou (University of Leeds – School of Law), Nicolo Zingales (FGV; Tilburg; Stanford Center for Internet and Society), and Stefania Di Stefano (Graduate Institute of International and Development Studies) have posted “Is Facebook Keeping up with International Standards on Freedom of Expression? A Time-Series Analysis 2005-2020” on SSRN. Here is the abstract:

Through an exhaustive tracking of the evolution of relevant documents, we assess the compatibility of Facebook’s content policies with applicable international standards on freedom of expression, covering not only Facebook’s current policies (as of late 2020) but also their historical development since Facebook’s founding. The historical dimension allows us to observe how Facebook’s response has changed over time, how freedom of expression has evolved, and how emphasis has shifted to new areas of speech, issues, or groups, particularly online. Our research highlights areas where progress was made and areas where progress has been insufficient, and makes relevant recommendations. Our overall finding is that in virtually all areas of freedom of expression we tracked, Facebook was slow to develop content moderation policies that met international standards. While the international community was more proactive, it too missed opportunities to provide timely guidance on key areas.

Damjan on Algorithms and Fundamental Rights: The Case of Automated Online Filters

Matija Damjan (University of Ljubljana Law) has posted “Algorithms and Fundamental Rights: The Case of Automated Online Filters” (Journal of Liberty and International Affairs 2021) on SSRN. Here is the abstract:

The information that we see on the internet is increasingly tailored by the automated ranking and filtering algorithms used by online platforms, which significantly interfere with the exercise of fundamental rights online, particularly the freedom of expression and information. The EU’s regulation of the internet prohibits general monitoring obligations. The paper first analyses the case law of the Court of Justice of the European Union (CJEU), which has long resisted attempts to require internet intermediaries to use automated software filters to remove infringing user uploads. This is followed by an analysis of Article 17 of the Directive on Copyright in the Digital Single Market, which effectively requires online platforms to use automated filtering to ensure the unavailability of unauthorized copyrighted content. The Commission’s guidance and the Advocate General’s opinion in the annulment action are also discussed. The conclusion is that regulation of the filtering algorithms themselves will be necessary to prevent private censorship and protect fundamental rights online.

Bambauer, Masconale & Sepe on Cheap Friendship

Jane R. Bambauer (University of Arizona Law), Saura Masconale (University of Arizona Department of Political Economy and Moral Science; Center for the Philosophy of Freedom), and Simone M. Sepe (University of Arizona Law; University of Toulouse 1; ECGI; American College of Governance Counsel) have posted “Cheap Friendship” (54 UC Davis Law Review 2341 (2021)) on SSRN. Here is the abstract:

This Essay argues that the Internet law and policy community has misdiagnosed the causes of political polarization. Or, more precisely, it has missed a major contributing cause. The dominant theories focus on Big Tech (e.g., the filter bubbles that curate Internet content with self-interested goals at the expense of democratic functioning) and on faulty cognition (e.g., human tendencies to favor sensationalism and tribal dogmatism). Cheap speech, according to these dominant theories, provides the fuel and fodder.

We offer an explanation that is at once more banal and more resistant to policy interventions: cheap friendship.

Keats Citron on How To Fix Section 230

Danielle Keats Citron (University of Virginia School of Law) has posted “How To Fix Section 230” (Boston University Law Review, Forthcoming) on SSRN. Here is the abstract:

Section 230 is finally getting the clear-eyed attention that it deserves. No longer is it naive to suggest that we revisit the law that immunizes online platforms from liability for illegality that they enable. Today, the harm wrought by the current approach is undeniable. Time and practice have made clear that tech companies don’t have enough incentive to remove harmful content, especially if it generates likes, clicks, and shares. They earn a fortune in advertising fees from illegality like nonconsensual pornography with little risk to their reputations. Victims can’t sue the entities that have enabled and profited from their suffering. The question is how to fix Section 230. The legal shield enjoyed by online platforms needs preconditions. This essay proposes a reasonable steps approach born of more than twelve years of working with tech companies on content moderation policies and with victims of intimate privacy violations. In this essay, I lay out concrete suggestions for a reasonable steps approach, one that has synergies with international efforts.

Liu on Exporting the First Amendment through Trade: the Global ‘Constitutional Moment’ for Online Platform Liability

Han-Wei Liu (Monash University) has posted “Exporting the First Amendment through Trade: the Global ‘Constitutional Moment’ for Online Platform Liability” (Georgetown Journal of International Law, Vol. 53, No. 1, 2022) on SSRN. Here is the abstract:

In the recent United States-Mexico-Canada Agreement and the U.S.-Japan Digital Trade Agreement, the U.S. has adopted a new clause that mirrors Section 230 of the Communications Decency Act of 1996, shielding online intermediaries from liability for third-party content. For policymakers, the seemingly innocuous “Interactive Computer Services” title conceals a fundamental challenge in balancing free speech against competing interests in the digital age. This Article argues against globally normalizing this clause through its diffusion in trade deals. Internally, as the Biden Administration has offered a clean slate for discussing reforms to the controversial regime, it is unwise for U.S. trade negotiators to export the same clause in future negotiations. Externally, it is problematic for other partners to accept a clause born of American values deeply rooted in the First Amendment. Each country is entitled to realize the fundamental right of free speech through its own economic, social, and political pathways, towards an optimal balance—and rebalance—against other interests. The clause should be dropped from future trade negotiations while policymakers worldwide grapple with the challenges posed by online platforms and reconfigure their regulatory frameworks for the digital era.