Sun on The Right to Know Social Media Algorithms

Haochen Sun (The U Hong Kong Law) has posted “The Right to Know Social Media Algorithms” (18 Harvard Law & Policy Review 1 (2023)) on SSRN. Here is the abstract:

One of the most important legal issues in the age of social media is how to tackle algorithmic secrecy. Social media algorithms permeate society, yet most are developed and applied in a black-box manner with a range of serious social consequences. For example, the amplification of fake news by social media algorithms has caused tremendous harm to democratic governance and undermined pandemic relief measures. 

In addressing the problems of algorithmic secrecy, the legal protection of social media algorithms as trade secrets is a major obstacle. This article explores the possibility of recognizing a right to know algorithms as the legal basis for requiring proportionate disclosure of trade secrets pertaining to social media algorithms. This new legal right would promote algorithmic transparency in the public interest. 

The right to know, a civil liberty that enables citizens to obtain information held by the government and certain private entities, lends strong policy support to recognition of the right to know social media algorithms. As the article shows, this new right would function to protect democratic participation, public safety, and social equality, the three kinds of public interest that are of crucial importance in the algorithmic society. 

The article then discusses how this new legal right could prevail over the trade secret protection of social media algorithms, paving the way to a multi-stakeholder approach to regulating algorithmic secrecy. This new approach would empower the legislature, administration, and judiciary to determine how social media companies should effect proportionate disclosure of information on their algorithms. Its primary aim is to promote transparency of social media algorithms, to make them more intelligible, and to hold social media companies accountable should they fail to fulfil their disclosure responsibility.

Atkinson et al. on Intentionally Unintentional: GenAI Exceptionalism and the First Amendment

David Atkinson (The U Texas Austin) et al. have posted “Intentionally Unintentional: GenAI Exceptionalism and the First Amendment” (Forthcoming, First Amendment Law Review 2025) on SSRN. Here is the abstract:

This paper challenges the assumption that courts should grant outputs from large generative AI models, such as GPT-4 and Gemini, First Amendment protections. We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent, so there can be no speech to protect. Furthermore, if the model outputs are not speech, users cannot claim a First Amendment right to receive the outputs. We also argue that extending First Amendment rights to AI models would not serve the fundamental purposes of free speech, such as promoting a marketplace of ideas, facilitating self-governance, or fostering self-expression. In fact, granting First Amendment protections to AI models would be detrimental to society because it would hinder the government’s ability to regulate these powerful technologies effectively, potentially leading to the unchecked spread of misinformation and other harms.

Meher & Zhang on Two Types of Censorship

Shreyas Meher (U Texas at Dallas) and Pengfei Zhang (U Texas at Dallas, Cornell U) have posted “Two Types of Censorship” on SSRN. Here is the abstract:

Not all autocracies censor in the same way. Countries like China build a closed border and a policing workforce for their internet, whereas countries like Russia compete within their internet through pro-government messaging and takedown requests. This paper employs a data-driven approach to study the variety of censorship among autocratic countries. Internet controls are measured using panel data from Freedom House, V-Dem, OONI, and the Google Transparency Report. Using cluster analysis, an unsupervised learning technique, we group countries’ censorship behaviors based on multi-dimensional indicators of internet access, content restriction, and technological barriers. We discover two distinct types of censorship: the pervasive control regime (e.g., China) and the influence operation regime (e.g., Russia). The two types are supported by country-specific studies and are shown to predict a country’s content restriction strategies. We also show that differences in national IT capacity explain a country’s distinct censorship style: sending takedown requests is a cost-saving alternative to a state-run monitoring workforce. A one-unit increase in a country’s IT capacity leads to 9,206 fewer takedown requests and 11,398 more incidents of blocking the internet annually.
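
For readers unfamiliar with the clustering step the abstract describes, the following is a minimal illustrative sketch, not the authors’ code or data: it uses hypothetical indicator names and synthetic values to show how countries measured on multi-dimensional censorship indicators could be grouped into two regime types with an off-the-shelf unsupervised method (k=2 is assumed here purely for illustration).

```python
# Illustrative sketch only: synthetic values, not the paper's dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-country indicators:
# [internet access restrictions, content restrictions, technological barriers]
countries = ["A", "B", "C", "D", "E", "F"]
indicators = np.array([
    [0.9, 0.8, 0.9],   # heavy filtering and infrastructure-level controls
    [0.8, 0.9, 0.8],
    [0.2, 0.7, 0.1],   # open infrastructure, reliance on requests/messaging
    [0.3, 0.8, 0.2],
    [0.1, 0.6, 0.1],
    [0.9, 0.9, 0.8],
])

# Standardize so no single indicator dominates the distance metric.
X = StandardScaler().fit_transform(indicators)

# Two clusters, loosely mirroring the paper's "pervasive control" vs.
# "influence operation" distinction.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```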

Mills on A Contractual Approach to Social Media Governance

Gilad Mills (Harvard Law School) has posted “A Contractual Approach to Social Media Governance” (Yale Law & Policy Review, Vol. 42, Forthcoming) on SSRN. Here is the abstract:

The heated scholarly debate in recent years around social media governance has been dominated by a clear public law bias and has yielded a substantively incomplete analysis of the issues at hand. Captured by public law analogies that depict platforms as governors who perform legislative, administrative, and adjudicatory functions, scholars and policymakers have repeatedly turned to public law norms as the hook on which they hang proposed governance solutions. As a practical strategy, they either called to impose public law norms by way of regulatory intervention or, conversely, called on platforms to adopt them voluntarily. This approach to social media governance, however, has met with limited success, stymied by political deadlocks, constitutional constraints, and platforms’ commercial preferences. At the same time, private law has been broadly overlooked as a potentially superior source of governance norms for social media, while the potential role the judiciary could play in generating these norms has been seriously discounted or even ignored altogether.

This Article tackles this blind spot in the current scholarship and thinking, offering a novel, comprehensive contractual approach to social media governance. Applying relational contract theory to social media contracting, it lays out the normative underpinnings for subjecting platforms to contractual duties of fairness and diligence, from which, it argues, governance norms can and should be derived. A doctrinal analysis is also provided to equip courts and litigators with the practical tools for holding platforms liable when such contractual duties are breached. Finally, to mitigate concerns about judicial over-encroachment on platforms’ decision-making, the Article offers a pragmatic remedial approach that prefers equitable remedies to damages and adopts a deferential standard of review––a “platform judgment rule”––that would insulate platforms from judicial scrutiny so long as they uphold their “best-efforts” commitments to conduct informed, unbiased content moderation in good faith and to refrain from grossly misusing personal data.

Douek on The Meta Oversight Board and the Empty Promise of Legitimacy

Evelyn Douek (Stanford Law School) has posted “The Meta Oversight Board and the Empty Promise of Legitimacy” (Harvard Journal of Law & Technology, Vol. 37, 2024 Forthcoming) on SSRN. Here is the abstract:

The Meta Oversight Board is an audacious experiment in self-regulation by one of the world’s most powerful corporations, set up to oversee one of the largest systems of speech regulation in history. In the few years since its establishment, the Board has in some ways defied its many skeptics, by becoming a consistent and accepted feature of academic and public discourse about content moderation. It has also achieved meaningful independence from Meta, shed light on the otherwise completely opaque processes within the corporation, instantiated meaningful reforms to Meta’s content moderation systems, and provided an avenue for greater stakeholder engagement in content moderation decision-making. But the Board has also failed to live up to core aspects of its role, in ways that have gone underappreciated. The Board has consistently shied away from answering the hardest and most controversial questions that come before it—that is, the very questions it was set up to tackle—and has not provided meaningful yardsticks for quantifying its actual impact. Understanding why the Board eschews these questions, and why it has nevertheless managed to acquire a significant amount of institutional legitimacy, suggests important lessons about institutional incentives and the revealed preferences of stakeholders in content moderation governance. Ultimately, this Article argues, the current political environment incentivizes a kind of oversight that is formalistic and unmoored from substantive goals. This is a problem that plagues regulatory reform far beyond the Board itself, and shows that generalized calls for “more legitimate” content moderation governance are underspecified and may, as a result, incentivize poor outcomes.

Burk on Asemic Defamation, or, the Death of the AI Speaker

Dan L. Burk (UC Irvine Law) has posted “Asemic Defamation, or, the Death of the AI Speaker” (First Amendment Law Review, Vol. 22, 2024) on SSRN. Here is the abstract:

Large Language Model (“LLM”) systems have captured considerable popular, scholarly, and governmental notice. By analyzing vast troves of text, these machine learning systems construct a statistical model of relationships among words, and from that model they are able to generate syntactically sophisticated texts. However, LLMs are prone to “hallucinate,” which is to say that they routinely generate statements that are demonstrably false. Although couched in the language of credible factual statements, such LLM output may entirely diverge from known facts. When they concern particular individuals, such texts may be reputationally damaging if the contrived false statements they contain are derogatory.

Scholars have begun to analyze the prospects and implications of such AI defamation. However, most analyses to date begin from the premise that LLM texts constitute speech that is protected under constitutional guarantees of expressive freedom. This assumption is highly problematic, as LLM texts have no semantic content. LLMs are not designed, have no capability, and do not attempt to fit the truth values of their output to the real world. LLM texts appear to constitute an almost perfect example of what semiotics labels “asemic signification,” that is, symbols that have no meaning except for meaning imputed to them by a reader.

In this paper, I question whether asemic texts are properly the subject of First Amendment coverage. I consider both LLM texts and historical examples to examine the expressive status of asemic texts, recognizing that LLM texts may be the first instance of fully asemic texts. I suggest that attribution of meaning by listeners alone cannot credibly place such works within categories of protected speech. In the case of LLM outputs, there is neither a speaker, nor communication of any message, nor any meaning that is not supplied by the text recipient. I conclude that LLM texts cannot be considered protected speech, which vastly simplifies their status under defamation law.

Dickinson on Beyond Social Media Analogues

Gregory M. Dickinson (St. Thomas Law; Stanford) has posted “Beyond Social Media Analogues” (99 NYU Law Rev. Online (forthcoming 2024)) on SSRN. Here is the abstract:

The steady flow of social-media cases toward the Supreme Court shows a nation reworking its fundamental relationship with technology. The cases raise a host of questions ranging from difficult to impossible: how to nurture a vibrant public square when a few tech giants dominate the flow of information, how social media can be at the same time free from conformist groupthink and also protected against harmful disinformation campaigns, and how government and industry can cooperate on such problems without devolving toward censorship.

To such profound questions, this Essay offers a comparatively modest contribution—what not to do. Always the lawyer’s instinct is toward analogy, considering what has come before and how it reveals what should come next. Almost invariably, that is the right choice. The law’s cautious evolution protects society from disruptive change. But almost is not always, and, with social media, disruptive change is already upon us. Using social-media laws from Texas and Florida as a case study, this Essay shows how social media’s distinct features render it poorly suited to analysis by analogy and argues that courts should instead shift their attention toward crafting legal doctrines targeted to address social media’s unique ills.

Lemley, Henderson & Hashimoto on Liability for Harmful AI Speech (including hallucinations)

Mark A. Lemley (Stanford Law School), Peter Henderson (Stanford University), and Tatsunori Hashimoto (same) have posted “Where’s the Liability in Harmful AI Speech?” on SSRN. Here is the abstract:

Generative AI, in particular text-based “foundation models” (large models trained on a huge variety of information including the internet), can generate speech that could be problematic under a wide range of liability regimes. Machine learning practitioners regularly “red-team” models to identify and mitigate such problematic speech: from “hallucinations” falsely accusing people of serious misconduct to recipes for constructing an atomic bomb. A key question is whether these red-teamed behaviors actually present any liability risk for model creators and deployers under U.S. law, incentivizing investments in safety mechanisms. We examine three liability regimes, tying them to common examples of red-teamed model behaviors: defamation, speech integral to criminal conduct, and wrongful death. We find that any Section 230 immunity analysis or downstream liability analysis is intimately wrapped up in the technical details of algorithm design. And there are many roadblocks to truly finding models (and their associated parties) liable for generated speech. We argue that AI should not be categorically immune from liability in these scenarios and that as courts grapple with the already fine-grained complexities of platform algorithms, the technical details of generative AI loom above with thornier questions. Courts and policymakers should think carefully about what technical design incentives they create as they evaluate these issues.

Lemley, Henderson & Volokh on Freedom of Speech and AI Output

Mark A. Lemley (Stanford Law), Peter Henderson (same), and Eugene Volokh (UCLA Law) have posted “Freedom of Speech and AI Output” on SSRN. Here is the abstract:

Is the output of generative AI entitled to First Amendment protection? We’re inclined to say yes. Even though current AI programs are of course not people and do not themselves have constitutional rights, their speech may potentially be protected because of the rights of the programs’ creators. But beyond that, and likely more significantly, AI programs’ speech should be protected because of the rights of their users—both the users’ rights to listen and their rights to speak. In this short Article, we sketch the outlines of this analysis.

Khan on Framing Online Speech Governance As An Algorithmic Accountability Issue

Mehtab Khan (Yale Law School) has posted “Framing Online Speech Governance As An Algorithmic Accountability Issue” (99 Ind. L.J. Supp. (forthcoming 2023)) on SSRN. Here is the abstract:

Automated tools used in online speech governance are prone to large-scale errors, yet they are widely used. Legal and policy responses have largely focused on case-by-case evaluations of these errors, instead of an examination of the development process of the tools. Moreover, information on the internet is no longer simply generated by users, but also by sophisticated language tools like ChatGPT, which will pose a challenge to speech governance. Yet, legal and policy measures have not responded adequately to AI tools becoming more dynamic and impactful. In order to address the challenges posed by algorithmic content governance, I argue that there is a need to frame a regulatory approach that focuses on the tools used in both content moderation and content generation contexts—which can be done by viewing this technology through an algorithmic accountability lens. I provide an overview of the technical and normative features of these tools that helps frame their regulation as an algorithmic accountability issue. I do this in three steps: First, I discuss the lack of sufficient attention to AI tools in current regulatory approaches. Second, I highlight the shared features of both content moderation and content generation to offer insights about the interlinked and evolving landscape of online speech and AI governance. Third, I situate this discussion of speech governance within a broader framework of algorithmic accountability to guide future regulatory interventions.