Brummer on Disclosure, Dapps and DeFi

Chris Brummer (Georgetown University Law Center; Institute of International Economic Law (IIEL)) has posted “Disclosure, Dapps and DeFi” (Stanford Journal of Blockchain Law and Policy, forthcoming) on SSRN. Here is the abstract:

Disclosure in decentralized finance is an area where founders’ and regulators’ interests can overlap in important ways. Market participants need to differentiate their dapps to compete and grow—just as regulators have long demanded transparency in order for people to know what they’re buying. But adapting disclosure frameworks popularized in the 1930s to today’s digital marketplace requires bridging decades of technological evolution and fundamentally alien assumptions about market infrastructure.

This white paper contributes to that work. It observes that DeFi presents novel policy questions for disclosure because much of the material information required to participate in an informed way is already available to technologically sophisticated actors on blockchains. This feature is relevant when contemplating how and for whom a disclosure system for DeFi should be modeled. Securities law, with its focus on institutional actors, calls for voluminous and often technical disclosures designed to be filed with authorities; by contrast, consumer protection frameworks rely on targeted, retail-friendly disclosures meant to be digested by everyday shoppers and end users.

Against this backdrop, this white paper offers a framework transposable to securities law but, given the information already accessible to technologically savvy actors, one that emphasizes the shorter, crisper disclosures typically associated with consumer protection law. It makes two key contributions. First, it highlights ambiguities in legacy disclosure obligations and offers a conceptual roadmap to help developers and regulators identify relevant disclosure issue areas and principles. Second, it introduces a series of crypto-native tools to modernize disclosure delivery in DeFi systems, among them “Disclosure NFTs,” “Disclosure DAOs,” and “Disclosure DIDs.” The white paper shows how these tools, if properly developed, could provide more functionality and security than the SEC’s EDGAR database and afford a new generation of developers and engineers a unique opportunity to reorient disclosure toward its original New Deal purpose: to be read.
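Purely by way of illustration (the abstract names “Disclosure NFTs” but does not specify an implementation), here is a minimal Python sketch of what such a record might look like: a token whose metadata commits to a disclosure document by hash, letting end users verify that the disclosure they are reading is the one the issuer actually published. All names, fields, and values here are hypothetical.

    # Hypothetical sketch of a "Disclosure NFT" record: a token whose
    # metadata commits to a disclosure document by hash, so anyone can
    # check that a retrieved disclosure matches what was minted.
    import hashlib
    import time
    from dataclasses import dataclass


    @dataclass(frozen=True)
    class DisclosureRecord:
        token_id: int
        issuer: str          # e.g. the dapp developer's address or DID
        document_uri: str    # where the human-readable disclosure lives
        document_hash: str   # SHA-256 commitment to the document's bytes
        issued_at: float     # Unix timestamp of issuance


    def mint_disclosure(token_id: int, issuer: str, document_uri: str,
                        document: bytes) -> DisclosureRecord:
        """Create a record committing to the disclosure's contents."""
        return DisclosureRecord(
            token_id=token_id,
            issuer=issuer,
            document_uri=document_uri,
            document_hash=hashlib.sha256(document).hexdigest(),
            issued_at=time.time(),
        )


    def verify_disclosure(record: DisclosureRecord, document: bytes) -> bool:
        """Check that a retrieved document matches the minted commitment."""
        return hashlib.sha256(document).hexdigest() == record.document_hash


    if __name__ == "__main__":
        text = b"Plain-language risk disclosure for a hypothetical dapp."
        record = mint_disclosure(1, "0xDeveloperAddress", "ipfs://...", text)
        print(verify_disclosure(record, text))                    # True
        print(verify_disclosure(record, b"tampered disclosure"))  # False

On an actual chain the record would live in a smart contract rather than a Python object, but the commit-and-verify pattern sketched here is the same.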

Hu on Individuals as Gatekeepers against Data Misuse

Ying Hu (National University of Singapore – Faculty of Law) has posted “Individuals as Gatekeepers against Data Misuse” (28 Michigan Technology Law Review 115 (2021)) on SSRN. Here is the abstract:

This article makes a case for treating individual data subjects as gatekeepers against misuse of personal data. Imposing gatekeeper responsibility on individuals is most useful where (a) the primary wrongdoers engage in data misuse intentionally or recklessly; (b) misuse of personal data is likely to lead to serious harm; and (c) one or more individuals are able to detect and prevent data misuse at a reasonable cost.

As gatekeepers, individuals should have a legal duty to take reasonable measures to prevent data misuse where they are aware of facts indicating that the person seeking personal data from them is highly likely to misuse it or to facilitate its misuse. Recognizing a legal duty to prevent data misuse provides a framework for determining the boundaries of appropriate behavior when dealing with personal data that people have legally acquired. It does not, however, abrogate the need to impose gatekeeping obligations on big technology companies.

Individuals should also owe a social duty to protect the personal data in their possession. Whether individuals have sufficient incentive to protect their personal data in a particular situation depends not only on the cost of the relevant security measures, but also on their expectations of the security decisions made by others who also possess that data. Even a privacy-conscious individual would have little incentive to invest in privacy-protective measures if he believes that his personal data is held by a sufficiently large number of persons who do not invest in such measures. On the flip side, an individual’s decision to protect his personal data generates positive externalities: it incentivizes others to invest in security measures. As such, promoting the norm of data security is likely to set off a self-reinforcing virtuous cycle that improves the level of data security in a given community.
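Hu’s externality argument can be made concrete with a toy threshold model (my illustration, not the article’s): each holder of the data invests in security only if enough other holders are expected to do so, so a community can settle at either a low-security or a high-security equilibrium, and pushing adoption past a tipping point sets off the virtuous cycle the abstract describes. The thresholds and starting rates below are arbitrary.

    # Toy best-response model of the point that one person's data-security
    # investment depends on expectations about other holders of the same
    # data. All numbers are arbitrary illustrations, not from the article.
    import random

    random.seed(0)

    N = 100
    # Each holder invests only if at least this fraction of holders is
    # expected to invest (heterogeneous thresholds across the community).
    thresholds = [random.uniform(0.1, 0.6) for _ in range(N)]


    def long_run_adoption(initial_rate: float, rounds: int = 50) -> float:
        """Iterate best responses: each holder invests iff the current
        adoption rate meets their personal threshold."""
        rate = initial_rate
        for _ in range(rounds):
            rate = sum(rate >= t for t in thresholds) / N
        return rate


    for start in (0.05, 0.15, 0.35):
        print(f"start {start:.2f} -> long-run adoption "
              f"{long_run_adoption(start):.2f}")

With these (arbitrary) numbers, starting adoption of 5% or 15% collapses to zero, while 35% cascades to universal investment: the self-reinforcing dynamic in miniature.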

Keats Citron on How To Fix Section 230

Danielle Keats Citron (University of Virginia School of Law) has posted “How To Fix Section 230” (Boston University Law Review, forthcoming) on SSRN. Here is the abstract:

Section 230 is finally getting the clear-eyed attention that it deserves. No longer is it naive to suggest that we revisit the law that immunizes online platforms from liability for illegality that they enable. Today, the harm wrought by the current approach is undeniable. Time and practice have made clear that tech companies don’t have enough incentive to remove harmful content, especially if it generates likes, clicks, and shares. They earn a fortune in advertising fees from illegality like nonconsensual pornography with little risk to their reputations. Victims can’t sue the entities that have enabled and profited from their suffering. The question is how to fix Section 230. The legal shield enjoyed by online platforms needs preconditions. This essay proposes a reasonable steps approach born of more than 12 years of working with tech companies on content moderation policies and with victims of intimate privacy violations. I lay out concrete suggestions for that approach, one that has synergies with international efforts.