Goforth on Regulation of Crypto: Who is the SEC Protecting?

Carol R. Goforth (University of Arkansas – School of Law) has posted “Regulation of Crypto: Who is the SEC Protecting?” (American Business Law Journal (2021 Forthcoming)) on SSRN. Here is the abstract:

Just months apart in 2020, two judges from the Southern District of New York wrote incredibly important opinions concerning the SEC’s approach to regulating the sales of cryptoassets. In both SEC v. Telegram and SEC v. Kik, the court concluded that the way in which a large, reputable social media company had gone about conducting its SAFT and crypto offerings violated federal law by amounting to an unregistered, non-exempt sale of securities. In neither case was fraud or other criminal conduct an issue; the sole problem identified by the courts was failure to register the sales. In reaching their conclusions, both courts collapsed a two-phase offering into a single, integrated scheme. The problem with this approach is that it appears to be an unnecessarily overbroad application of the SEC’s regulatory authority, falling outside the Commission’s mandate to protect investors and markets. The cost of this approach is that crypto entrepreneurs are being forced away from the U.S., and American investors are denied opportunities to participate in a potentially exciting and desirable technological revolution.

This Article examines the underlying regulatory framework, the SEC’s stated approach to crypto, and the rationale employed in these two recent opinions. It also considers the potential impact of recent amendments to the SEC’s rules defining the “integration doctrine,” which was explicitly relied upon in the Kik decision in order to treat the contractual sales and the eventual token sales as a single scheme that violated the federal securities laws. This Article makes suggestions for how crypto entrepreneurs and their advisors might structure future crypto offerings to avoid the pitfalls created by the Kik and Telegram opinions. This Article certainly advocates that courts limit the approach that is being urged by the SEC, but the suggestions offered here do not depend on a change in the law, merely a change in understanding what is required in order to conduct a compliant crypto offering.

Remolina & Findlay on The Paths to Digital Self-Determination – A Foundational Theoretical Framework

Nydia Remolina (Singapore Management University – Centre for AI & Data Governance) and Mark Findlay (Singapore Management University – School of Law; Centre for AI & Data Governance) have posted “The Paths to Digital Self-Determination – A Foundational Theoretical Framework” on SSRN. Here is the abstract:

A deluge of data is giving rise to new understandings and experiences of society and economy as our digital footprint grows steadily. Are data subjects able to determine themselves in this data-driven society? The emerging debates about autonomy and communal responsibility in the context of data access or protection highlight a pressing imperative to re-imagine the ‘self’ in the digital space. Empowerment, autonomy, sovereignty, and human centricity are all terms often associated with the notion of digital self-determination in current policy language. More and more academics, industry experts, policymakers, and regulators are now advocating self-determination in a data-driven world. The attitudes to self-determination range from alienating data as property through to broad considerations of communal access and enrichment. Digital self-determination is a complex notion to be viewed from different perspectives and in unique spaces, re-shaping what we understand as self-determination in the non-digital world. This paper explores the notion of digital self-determination by presenting a foundational theoretical framework based on pre-existing self-determination theories and exploring the implications of the digital society for the determination of the self. Only by better appreciating and critically framing the discussion of digital self-determination is it possible to engage in trustworthy data spaces and ensure ethical, human-centric approaches to living in a data-driven society.

French on Five Approaches to Insuring Cyber Risks

Christopher C. French (The Pennsylvania State University (University Park) – Penn State Law) has posted “Five Approaches to Insuring Cyber Risks” (81 Md. L. Rev. __ (Forthcoming)) on SSRN. Here is the abstract:

Cyber risks are some of the most dangerous risks of the twenty-first century. Many types of businesses, including retail stores, healthcare entities and financial institutions, as well as government entities, are the targets of cyber attacks. The simple reality is that no computer security system is completely safe. They all can be breached if the hackers are skilled enough and determined. Consequently, the worldwide damages caused by cyber attacks are predicted to reach $10.5 trillion by 2025. Insuring such risks is a monumental task.

The cyber insurance market currently is fragmented with hundreds of insurers selling their own cyber risk insurance policies that cover different types of cyber risks. This means the purchasers of cyber insurance must be experts in both insurance and cyber security in order to make a knowledgeable purchase. And, even knowledgeable purchasers of cyber insurance can only obtain limited coverage for cyber risks. This is because the insurance is sold on a named peril, as opposed to all-risk, basis and the policies contain numerous exclusions. Cyber policies also have relatively low policy limits in comparison to other lines of insurance and the enormity of the risks presented.

This Article explores ways the cyber insurance market could be improved. In doing so, it analyzes the current cyber insurance market, including the history of cyber insurance and the challenges that insuring cyber risks presents. The Article then offers five different approaches to insuring cyber risks moving forward that address many of the problems with the current cyber insurance market. Ultimately, the Article concludes that the fifth approach, the novel “all-risk private-public” approach, would be the best one.

Fox on Tokenised Assets in Private Law

David Fox (School of Law, University of Edinburgh) has posted “Tokenised Assets in Private Law” on SSRN. Here is the abstract:

Tokenisation is a recent development in distributed ledger technology (“DLT”) systems. Assets that exist off the ledger system, such as securities or rights to the delivery of a service, are represented as tokens issued on a DLT system. The tokens evidence the holders’ claims against the issuer of the assets. The system serves as a platform for trading the tokens and the off-ledger assets they represent.

Tokenisation presents many analytical difficulties for private law. My concern in this paper is to outline some of the problems – and possible solutions – of mapping traditional rules of property and transfer onto transactions with tokenised assets on a DLT system. Much depends on the legal analysis used to explain the transactions with tokens on the system, and on the possibility of recognising a legal link between the on-chain token and the off-chain asset that it represents. The paper develops the argument that digital transactions can be analysed as the expression of the parties’ intentions to make an assignment or novation of the rights represented by the assets.

Fan on The Public’s Right to Benefit from Privately Held Consumer Big Data

Mary D. Fan (University of Washington – School of Law) has posted “The Public’s Right to Benefit from Privately Held Consumer Big Data” (Forthcoming, 96 N.Y.U. L. Rev. __ (2021)) on SSRN. Here is the abstract:

The information that we reveal from interactions online and with electronic devices has massive value—for both private profit and public benefit such as improving health, safety, and even commute times. Who owns the lucrative big data that we generate through the everyday necessity of interacting with technology? Calls for legal regulation regarding how companies use our data have spurred laws and proposals framed by the predominant lens of individual privacy and the right to control and delete data about oneself. By focusing on individual control over droplets of personal data, the major consumer privacy regimes overlook the important question of rights in the big data ocean. This article is the first to frame a right of the public to benefit from our consumer big data and propose a model to realize the benefits of this common resource while reducing the harms of opening access.

In the absence of an overarching theoretical and legal framework for property rights in our data, businesses are using intellectual property protections for compiled data to wall off access. The result is that while companies may use the data to find ways to get us to click or buy more, access for nonprofit public interest purposes is denied or at the discretion of companies. The article proposes a model for sharing the benefits of our pooled personal information. Drawing on insights from property theory, regulatory advances, and open innovation, the article proposes protections to permit controlled access and use for public interest purposes while protecting against privacy and related harms.

Houser on Artificial Intelligence and The Struggle Between Good and Evil

Kimberly Houser (University of North Texas) has posted “Artificial Intelligence and The Struggle Between Good and Evil” (Washburn Law Journal, Vol. 60) on SSRN. Here is the abstract:

Numerous reports have described—in great detail—the real and potential harms of the widespread development and adoption of artificial intelligence by both government and private industry. However, artificial intelligence has also been shown to create faster, more accurate, and more equitable outcomes than humans in many situations. This seeming contradiction has led to dichotomous thinking describing artificial intelligence as either good or evil. Artificial intelligence, like all technological developments, is a tool: one that can be used—intentionally and, sometimes, unintentionally—for good or for harm. The gray area comes into play when there is good or neutral intent that leads to a harmful result. This Essay is designed to help policymakers, investors, scholars, and students understand the multifaceted nature of artificial intelligence and the key challenges it presents, and to caution against creating laws that prohibit AI programs outright rather than addressing the fundamental need to develop AI responsibly.

Miller & Mitchell on Dynamic Competition in Digital Markets: A Critical Analysis of the House Judiciary Committee’s Antitrust Report

Tracy Miller (Mercatus Center at George Mason University) and Trace Mitchell (NetChoice; George Mason University – Mercatus Center) have posted “Dynamic Competition in Digital Markets: A Critical Analysis of the House Judiciary Committee’s Antitrust Report” on SSRN. Here is the abstract:

This paper provides a critical analysis of the antitrust report from the Subcommittee on Antitrust, Commercial, and Administrative Law of the House Committee on the Judiciary, based on the consumer welfare standard, which has governed antitrust policy since the late 1970s. It also proposes a theoretical framework for refutation of the report’s allegations about anticompetitive conduct of the big four tech companies that we hope will be useful for future empirical work. Using this framework, we find that the report likely overstates the market power held by these tech companies and the extent to which their conduct is actually harmful to consumers. In addition, our framework leads us to hypothesize that the reforms advocated in the report may actually make consumers worse off by interfering with market dynamism and slowing innovation.