Mailyn Fidler (Harvard University – Berkman Klein Center for Internet & Society) has posted “The New Editors: Refining First Amendment Protections for Internet Platforms” (Notre Dame Law School Journal on Emerging Technology, 2021 (Forthcoming)) to SSRN. Here is the abstract:
This Article envisions what it would look like to tailor First Amendment editorial privilege to the multifaceted nature of the Internet, just as courts have done with media in the offline world. It reviews the law of editorial judgment offline, where protections for editorial judgment are strong but not absolute, and its nascent application online. It then analyzes whether the diversity of Internet platforms and functions alters this application at all.
First Amendment editorial privilege, as applied to Internet platforms, is often treated by courts and platforms themselves as monolithic and equally applicable to all content moderation decisions. The privilege is asserted by all types of platforms, whether search engine or social media, and for all kinds of choices. But Section 230’s broad protections for Internet platforms have largely precluded the development of a robust body of First Amendment law specific to Internet platforms. With Section 230 reform a clear priority for Congress, Internet platforms will likely turn to First Amendment defenses to a greater extent in coming years, prompting the need to examine how the law of editorial privilege applies online.
I offer six concrete conclusions about how online platforms do or do not challenge the application of the law of editorial judgment. The features and functions of online platforms do not change the need to differentiate when a platform is occupying a speaker or non-speaker role, the application of longstanding First Amendment exceptions for low value speech to platforms, and the judiciary’s hesitancy to include market competitiveness in First Amendment analyses. These same features and functions require insisting that no distinction between wholesale and retail-level editorial judgments emerges in the online space, threaten to collapse the useful distinction between editing and advertising, and suggest user decisions should be given greater weight in determining speech-related damages.
Agnieszka McPeak (Gonzaga University School of Law) has posted “Platform Immunity Redefined” (William & Mary Law Review, Vol. 62, No. 5, 2021) on SSRN. Here is the abstract:
Section 230 of the Communications Decency Act (CDA) immunizes “interactive computer services” from most claims arising out of third-party content posted on the service. Passed in 1996, section 230 is a vital law for allowing free expression online, but it is ill-suited for addressing some of the harms that arise in the modern platform-based economy.
This Article proposes to redefine section 230 immunity for sharing economy platforms and online marketplaces by tying internet platform immunity to the economic relationship between the platform and the third party. It primarily focuses on one key flaw of section 230: its binary classification of online actors as either “interactive computer services” (who are immune under the statute) or “information content providers” (who are not immune). This binary classification, while perhaps adequate for the internet that existed in 1996, fails to account for the full range of economic activities in which modern platforms now engage.
This Article argues that courts applying section 230 should incorporate joint enterprise liability theory to better define the contours of platform immunity. A platform should lose immunity when there exists a common business purpose, specific pecuniary interest, and shared right of control in the underlying transaction giving rise to liability. Sharing economy platforms, such as Airbnb and Uber, and online marketplaces, such as Amazon, are primary examples of platforms that may function as joint enterprises. By using joint enterprise theory to redefine platform immunity, this Article seeks to promote greater fairness to tort victims while otherwise retaining section 230’s core free expression purpose.
Anne Klinefelter (University of North Carolina School of Law) and Sam Wrigley (University of Helsinki, Finland, Faculty of Law) have posted “Google LLC v. CNIL: The Location-Based Limits of the EU Right to Erasure and Lessons for U.S. Privacy Law” (North Carolina Journal of Law and Technology, Vol. 22, No. 4, 2021) on SSRN. Here is the abstract:
As the United States considers preemptive federal privacy law, the discussion can be enriched by a reassessment of the EU example as illustrated in a 2019 decision at the European Court of Justice. The General Data Protection Regulation that took effect in 2018 is often described as an important model for unifying and centralizing data protection law in order to provide consistent protections of rights. But the Google LLC v. CNIL decision highlights that the EU law did not in fact create a monolithic system without room for Member State variation.
This Article takes a close look at the way that the erasure right is articulated in the GDPR, examining how competing rights are balanced, how Member States’ different approaches to balancing rights are accommodated, and how related provisions in the law inform an understanding of the erasure provision in Article 17. The Article also examines the 2019 Google LLC v. CNIL decision, exploring the Court’s reasoning and the impact of the case on EU erasure rights and beyond.
This Article draws on these examinations of the erasure-related provisions of the GDPR and of the Google LLC v. CNIL decision to advance a better understanding of how the influential EU Regulation embraces the possibility of significant Member State variation and ongoing balancing of data protection with expression and information rights. Guiding principles of subsidiarity and proportionality that are foundational to the European Union, incorporated into the GDPR, and evident in the Google LLC v. CNIL decision provide the basis for this national deference and deferred balancing. Together, subsidiarity and proportionality principles caution against extensive consolidation of privacy law into a one-size-fits-all solution. The United States can learn from the European Union that a monolithic and inflexible federal law may not only be difficult to enact but also undesirable.
Pierre Larouche (Université de Montréal; Center on Regulation in Europe) has posted “Platforms, Disruptive Innovation and Competition on the Market” (CPI Antitrust Chronicle, February 2020) on SSRN. Here is the abstract:
This short piece aims to suggest that the concept of competition for the market is not sufficient to account for the competitive forces at play in the “digital economy”. It offers a richer concept of competition on the market, which differs from both competition in and for the market in that it assumes that market definition (in the business and hence also in the competition law sense) is itself a competitive parameter. The literature on disruptive innovation provides the best account of competition on the market.
At first glance, including competition on the market in the analysis points to error risks that might have been neglected so far. It provides a reminder that enforcement should proceed prudently in order to avoid Type I errors, since competition on the market seems to have had more impact than competition law enforcement in the major cases of this century such as Microsoft, Intel and potentially also Google Search (Shopping). It also offers a solid basis to address Type II-error concerns around merger control as regards the acquisitions made by platforms, such as Facebook/WhatsApp.
Haley Amster (Stanford Law School) and Brett Diehl (Stanford Law School) have posted “Against Geofences” (Stanford Law Review, Forthcoming) on SSRN. Here is the abstract:
Since roughly 2016, law enforcement has increasingly relied on a new tool when investigating a crime with no suspects: geofence warrants. These warrants operate differently from a typical digital location history search warrant, through which law enforcement requires a third-party company to provide the location history of a particular user’s device. […] This Note aims to begin filling that analytical void by putting forward the first thorough scholarly analysis of the constitutionality of a geofence warrant.
This Note proceeds in five parts. Part I is a technology primer, explaining the steps involved in geofence warrants: the initial data dump, the expansion, and the unmasking. Part II catalogs the burgeoning geofence litigation, analyzing the first few federal magistrate opinions on the issue before briefly profiling other pending litigation. Part III looks more closely at the initial data dump, identifying the difficulty law enforcement has in meeting probable cause and particularity requirements due to the inherent breadth of the search. This Part explores potential constitutional limits on geofence warrants through analogies to the search of many people located at the scene of a crime, digital checkpoints, and area warrants. In this Part, the Note answers the question of whether probable cause must be shown for each device included in a digital search, exploring relevant scholarship regarding cell tower dumps. This Part then explores the difficulty in achieving constitutional tailoring, analogizing to digital searches of multi-occupancy buildings, and considers potential particularized search protocols that could indeed meet constitutional requirements. Part IV examines geofence warrants’ expansion and unmasking steps. It first argues that geofence warrants are unconstitutional general warrants because of the discretion given to law enforcement in the warrant execution. It then considers whether the additional steps reach beyond the scope of the warrant, or constitute multiple searches encompassed under one warrant.
Noam Kolt (University of Toronto) has posted “Predicting Consumer Contracts” (Berkeley Technology Law Journal, Vol. 37, 2022 Forthcoming) on SSRN. Here is the abstract:
This Article empirically examines whether a computational language model can read and understand consumer contracts. Language models are able to perform a wide range of complex tasks by predicting the next word in a sequence. In the legal domain, language models can summarize laws, draft case documents, and translate legalese into plain English. However, the ability of language models to inform consumers of their contractual rights and obligations has not been explored in detail.
To showcase the opportunities and challenges of using language models to read consumer contracts, this Article studies the performance of GPT-3, a powerful language model released in June 2020. The case study employs a novel dataset comprising questions relating to the terms of service of popular U.S. websites. Although the results are not definitive, they offer several important insights. First, owing to its immense training data, the model can exploit subtle informational cues embedded in questions. Second, the model performed poorly on contractual provisions that favor the rights and interests of consumers, suggesting that it may contain an anti-consumer bias. Third, the model is brittle in unexpected ways. Performance was highly sensitive to the wording of questions, but surprisingly indifferent to variations in contractual language.
While language models could potentially empower consumers, they could also provide misleading legal advice and entrench harmful biases. Leveraging the benefits of language models in reading consumer contracts and confronting the challenges they pose requires a combination of engineering and governance. Policymakers, together with developers and users of language models, should begin exploring technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.