Voss on Data Protection Issues for Smart Contracts

W. Gregory Voss (TBS Business School) has posted “Data Protection Issues for Smart Contracts” (Smart Contracts: Technological, Business and Legal Perspectives (Marcelo Corrales, Mark Fenwick & Stefan Wrbka, eds., 2021)) on SSRN. Here is the abstract:

Smart contracts offer promise for facilitating and streamlining transactions in many areas of business and government. However, they also may be subject to the provisions of relevant data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), if personal data is processed. Initially, this chapter discusses the data protection/data privacy distinction in the context of differing legal models. However, the focus of analysis is the GDPR, as the most significant and influential data protection legislation at this time, due in part to its omnibus nature and extraterritorial scope, and its application to smart contracts.

By their very nature, smart contracts raise difficulties for the classification of the various actors involved, which will have an impact on their responsibilities under the law and their potential liability for violations. The analysis in this chapter turns on the role of the data controller in the context of smart contracts, and the chapter reviews the definition of that term and of ‘joint controller’ in light of supervisory authority guidance. In doing so, the significance of the classification is highlighted, especially in the case of the GDPR.

Furthermore, certain rights granted to data subjects under the GDPR may be difficult to provide in the context of smart contracts, such as the right to be forgotten/right to erasure, the right to rectification, and the right not to be subject to a decision based solely on automated processing. This chapter addresses such issues, together with relevant supervisory authority advice, such as the use of encryption to render data nearly inaccessible, approximating as closely as possible the result of erasure. Along the way, the important distinction between anonymized data and personal data is explained, together with its practical implications, and requirements for data integrity and confidentiality (security) are detailed.

In addition, the GDPR requirement of privacy by design and by default must be respected when that legislation applies. Data protection principles such as purpose limitation and data minimisation in the case of smart contracts are also scrutinized in this chapter. Data protection and privacy must be considered when smart contracts are designed, and this chapter will help the reader understand the contours of such requirements. Even for jurisdictions outside of the European Union, privacy by design will be of interest as a best practice.

Finally, problems related to cross-border data transfers in the case of public blockchains are debated, before the chapter sets out key elements to allow for a GDPR-compliant blockchain and offers other concluding remarks.

Douek on The Siren Call of Content Moderation Formalism

Evelyn Douek (Harvard Law School) has posted “The Siren Call of Content Moderation Formalism” (New Technologies of Communication and The First Amendment (Bollinger & Stone eds., 2022)) on SSRN. Here is the abstract:

Systems of online content moderation governance are becoming some of the most elaborate and extensive bureaucracies in history, and they are deeply imperfect and need reform. Would-be reformers of content moderation systems are drawn to a highly rule-bound and formalistic vision of how these bureaucracies should operate, but the sprawling chaos of online speech is too vast, ever-changing, and varied to be brought into consistent compliance with rigid rules. This essay argues that the quest to make content moderation systems ever more formalistic will not fix public and regulatory concerns about the legitimacy and accountability of how platforms moderate content on their services. The largest social media platforms operate massive unelected, unaccountable, and increasingly complex bureaucracies that decide to act or not act on millions of pieces of content uploaded to their platforms every day. A formalistic model, invoking judicial-style norms of reasoning and precedent, is doomed to fail at this scale and complexity. As these governance systems mature, it is time to be content moderation realists about the task ahead.

Leiter on The Epistemology of the Internet and the Regulation of Speech in America

Brian Leiter (University of Chicago) has posted “The Epistemology of the Internet and the Regulation of Speech in America” (Georgetown Journal of Law & Public Policy, 2022) on SSRN. Here is the abstract:

The Internet is the epistemological crisis of the 21st century: it has fundamentally altered the social epistemology of societies with relative freedom to access it. Most of what we think we know about the world is due to reliance on epistemic authorities, individuals or institutions that tell us what we ought to believe about Newtonian mechanics, evolution by natural selection, climate change, resurrection from the dead, or the Holocaust. The most practically fruitful epistemic norm of modernity, empiricism, demands that knowledge be grounded in sensory experience, but almost no one who believes in evolution by natural selection or the reality of the Holocaust has any sensory evidence in support of those beliefs. Instead, we rely on epistemic authorities—biologists and historians, for example. Epistemic authority cannot be sustained by empiricist criteria, for obvious reasons: salient anecdotal evidence, the favorite tool of propagandists, appeals to ordinary faith in the senses, but is easily exploited given that most people understand neither the perils of induction nor the finer points of sampling and Bayesian inference. Sustaining epistemic authority depends, crucially, on social institutions that inculcate reliable second-order norms about whom to believe about what. The traditional media were crucial, in the age of mass democracy, in promulgating and sustaining such norms. The Internet has obliterated the intermediaries who made that possible (and, in the process, undermined the epistemic standing of experts), while even the traditional media in the U.S., thanks to the demise of the “Fairness Doctrine,” have contributed to the same phenomenon. I argue that this crisis cries out for changes in the regulation of speech in cyberspace—including liability for certain kinds of false speech, incitement, and hate speech—but also a restoration of a version of the Fairness Doctrine for the traditional media.