Green on The Contestation of Tech Ethics

Ben Green (University of Michigan; Harvard Berkman Klein Center for Internet & Society) has posted “The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action” on SSRN. Here is the abstract:

Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted increased public scrutiny of digital technologies and soul-searching among many of the people associated with their development. In response, the tech industry, academia, civil society, and governments have rapidly increased their attention to “ethics” in the design and use of digital technologies (“tech ethics”). Yet almost as quickly as ethics discourse has proliferated across the world of digital technologies, the limitations of these approaches have also become apparent: tech ethics is vague and toothless, is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with “ethics-washing”: promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior. By looking at how ethics has been taken up in both science and business in superficial and depoliticizing ways, I recast tech ethics as a terrain of contestation where the central fault line is not whether it is desirable to be ethical, but what “ethics” entails and who gets to define it. This framing highlights the significant limits of current approaches to tech ethics and the importance of studying the formulation and real-world effects of tech ethics. In order to identify and develop more rigorous strategies for reforming digital technologies and the social relations that they mediate, I describe a sociotechnical approach to tech ethics, one that reflexively applies many of tech ethics’ own lessons regarding digital technologies to tech ethics itself.

Szóka on Antitrust, Section 230 & the First Amendment

Berin Szóka (TechFreedom) has posted “Antitrust, Section 230 & the First Amendment” (CPI Antitrust Chronicle, May 2021) on SSRN. Here is the abstract:

The First Amendment allows antitrust action against media companies for their business practices, but not for their editorial judgments. Section 230 mirrors this distinction by protecting providers of interactive computer services from being “treated as the publisher” of content provided by others, including decisions to withdraw or refuse to publish that content (230(c)(1)), and by further protecting decisions made “in good faith” to take down content, regardless of who created it (230(c)(2)(A)). Section 230 provides a critical civil procedure shortcut: when providers of interactive computer services are sued for refusing to carry the speech of others, they need not endure the expense of litigating constitutional questions. Thus, changing Section 230 could dramatically increase litigation costs, but it would not ultimately create new legal liability for allegedly “biased” or “unfair” content moderation. Nor will the First Amendment permit new quasi-antitrust remedies that compel websites to carry content they find objectionable.

Reyes on Creating Cryptolaw for the Uniform Commercial Code

Carla Reyes (Southern Methodist University – Dedman School of Law) has posted “Creating Cryptolaw for the Uniform Commercial Code” (Washington and Lee Law Review, Forthcoming) on SSRN. Here is the abstract:

A contract generally binds only its parties. Security agreements, which create a security interest in specific personal property, stand out as a glaring exception to this rule. Under certain conditions, security interests not only bind the creditor and debtor, but also third-party creditors seeking to lend against the same collateral. To receive this extraordinary benefit, creditors must put the world on notice, usually by filing a financing statement with the state in which the debtor is located. Unfortunately, the Uniform Commercial Code (U.C.C.) Article 9 filing system fails to provide actual notice to interested parties and introduces the risk of heavy financial losses.

To solve this problem, this Article introduces a smart contract-based U.C.C.-1 form built using Lexon, an innovative new programming language that enables the development of smart contracts in English. The proposed “Lexon U.C.C. Financing Statement” does much more than merely replicate the financing statement in digital form; it also performs several U.C.C. rules so that, for the first time, the filing system works as intended. In demonstrating that such a system remains compatible with existing law, the Lexon U.C.C. Financing Statement also reveals important lessons about the interaction of technology and commercial law.

This Article brings cryptolaw to the U.C.C. in three sections. Section I examines the failure of the U.C.C. Article 9 filing system to achieve actual notice and argues that blockchain technology and smart contracts can help the system function as intended. Section II introduces the Lexon U.C.C. Financing Statement, demonstrating how the computer code implements U.C.C. provisions. Section II also examines the goals that influenced the design of the Lexon U.C.C. Financing Statement, discusses the new programming language used to build it, and argues that the prototype could be used now, under existing law. Section III proposes five innovations for the Article 9 filing system enabled by the Lexon U.C.C. Financing Statement. Section III then considers the broader implications of the project for commercial law, legal research around smart contracts, and the interplay between technology-neutral law and a lawyer’s increasingly important duty of technological competence. Ultimately, by providing the computer code needed to build the Lexon U.C.C. Financing Statement, this Article demonstrates not only that crypto-legal structures are possible, but that they can simplify the law and make it more accessible.
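To make the idea of a financing statement that “performs” U.C.C. rules concrete, here is a minimal sketch — not Reyes’s Lexon code, but a hypothetical Python model illustrating two real Article 9 rules such a digital U.C.C.-1 could enforce: the five-year effectiveness period of § 9-515(a) and, in simplified first-to-file form, the priority rule of § 9-322(a)(1). All class and party names are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class FinancingStatement:
    """Hypothetical, simplified model of a filed U.C.C.-1 financing statement."""
    debtor: str
    secured_party: str
    collateral: str
    filed_on: date

    def lapse_date(self) -> date:
        # U.C.C. § 9-515(a): a filed financing statement is effective for
        # five years after filing (leap-day edge cases ignored in this sketch).
        return self.filed_on.replace(year=self.filed_on.year + 5)

    def is_effective(self, on: date) -> bool:
        # Effective from filing until lapse, absent a continuation statement.
        return self.filed_on <= on < self.lapse_date()

def priority_order(statements):
    # U.C.C. § 9-322(a)(1), simplified to first-to-file: conflicting
    # perfected security interests rank by time of filing.
    return sorted(statements, key=lambda s: s.filed_on)

# Usage: two creditors file against the same collateral.
a = FinancingStatement("Debtor Co", "Bank A", "equipment", date(2021, 3, 1))
b = FinancingStatement("Debtor Co", "Bank B", "equipment", date(2021, 6, 1))
print([s.secured_party for s in priority_order([b, a])])  # Bank A filed first
```

A Lexon implementation would express the same logic in controlled natural language rather than Python, which is part of what makes the prototype legible to lawyers as well as machines.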

Bloch-Wehba on Content Moderation as Surveillance

Hannah Bloch-Wehba (Texas A&M University School of Law; Yale University – Yale Information Society Project) has posted “Content Moderation as Surveillance” (Berkeley Technology Law Journal, Vol. 36, 2022, Forthcoming) on SSRN. Here is the abstract:

Technology platforms are the new governments, and content moderation is the new law, or so goes a common refrain. And as platforms increasingly turn toward new, automated mechanisms of enforcing their rules, the apparent power of the private sector seems only to grow. Yet beneath the surface lies a web of complex relationships between public and private authorities that calls into question whether platforms truly possess such unilateral power. Law enforcement agencies are exerting influence over platform content rules, giving governments a louder voice in supposedly “private” decisions. At the same time, law enforcement avails itself of the affordances of social media in detecting, investigating, and preventing crime.

This Article, prepared for a symposium dedicated to Joel Reidenberg’s germinal article Lex Informatica, untangles the relationship between content moderation and surveillance. Building on Reidenberg’s fundamental insights regarding the relationships between rules imposed by legal regimes and those imposed by technological design, the Article first traces how content moderation rules intersect with law enforcement, including through formal demands for information, informal relationships between platforms and law enforcement agencies, and the impact of end-to-end encryption. Second, it critically assesses the degree to which government involvement in content moderation actually tempers platform power. Rather than effective oversight and checking of private power, it contends, the emergent arrangements between platforms and law enforcement institutions foster mutual embeddedness and the entrenchment of private authority within public governance.