Trautman on The FTX Crypto Debacle

Lawrence J. Trautman (Prairie View A&M University – College of Business; Texas A&M University School of Law (By Courtesy)) has posted “The FTX Crypto Debacle: Largest Fraud Since Madoff?” on SSRN. Here is the abstract:

In her letter to Treasury Secretary Janet Yellen dated September 15, 2022, U.S. Senator Elizabeth Warren requests “the Treasury Department’s (Treasury’s) comprehensive review of the risks and opportunities presented by the proliferation of the digital asset market,” which “will highlight the economic danger of cryptocurrencies in several key areas, including the fraud risks they pose for investors.” Senator Warren warns, “It is crucial” that Treasury “create the analytical basis for very strong oversight of this sector of finance because cryptocurrency poses grave risks to investors and to the economy as a whole.”

Just weeks later, in November 2022, reports emerged that “In less than a week, the cryptocurrency billionaire Sam Bankman-Fried went from industry leader to industry villain, lost most of his fortune, saw his $32 billion company plunge into bankruptcy and became the target of investigations by the Securities and Exchange Commission and the Justice Department.” The demise of FTX and its many related crypto entities created contagion and collateral damage for other participants and investors in the cryptocurrency community. The U.S. bankruptcy proceedings of the many FTX-related entities, scattered across jurisdictions worldwide, will likely take years to sort out.

Shortly after the Chapter 11 filing, FTX’s new post-bankruptcy CEO, John J. Ray III, characterized the collapse of FTX as the result of “the absolute concentration of control in the hands of a very small group of grossly inexperienced and unsophisticated individuals who failed to implement virtually any of the systems or controls that are necessary for a company that is entrusted with other people’s money.”

In just a few years, Bitcoin and other cryptocurrencies have had a major societal impact, proving to be a unique payment-systems challenge for law enforcement, policy makers, and financial regulatory authorities worldwide. The rapid introduction and diffusion of technological changes, such as the blockchain underlying Bitcoin, have thus far continued to exceed the ability of law and regulation to keep pace. The story of FTX and its potential consequences for investors and the global financial system is the subject of this paper.

This paper proceeds in thirteen parts. First, the history and growth of cryptocurrencies is discussed. Second, crypto and national security risks are examined. Third, the failure of FTX is introduced. Fourth, the bankruptcy proceedings are addressed. Fifth, the collateral damage thus far to the crypto ecosystem is described. Sixth, the FTX demise is examined in terms of threshold questions that may help explain what has transpired and how productive policy may be crafted for the future. Seventh, the role of the SEC is explored. Eighth, the CFTC is discussed. Ninth, crypto and the Federal Reserve are considered. Tenth, the role of Congressional inquiries is featured. Eleventh, regulatory implications are explored. Twelfth, the failure of corporate governance is the focus. Thirteenth, prosecution and litigation are discussed. And last, I conclude.

Aoyagi & Ito on Competing DAOs

Jun Aoyagi (HKUST) and Yuki Ito (UC Berkeley) have posted “Competing DAOs” on SSRN. Here is the abstract:

A decentralized autonomous organization (DAO) is an entity with no central control and ownership. A group of users discuss, propose, and implement a new platform design with smart contracts on blockchain by taking control away from a centralized platformer. We develop a model of platform competition with the DAO governance structure and analyze how strategic complementarity affects the development of DAOs. Compared to traditional competition between centralized platformers, a DAO introduces an additional layer of competition played by users. Since users are multi-homing, they propose a new platform design by internalizing interactions between platforms and create additional value, which is reflected in the price of a governance token. A platformer can extract this value by issuing a token but must relinquish control of her platform, losing potential fee revenue. Analyzing this tradeoff, we show that centralized platforms tend to become DAOs when strategic complementarity is strong, while an intermediate degree of strategic complementarity leads to the coexistence of a DAO and a traditional centralized platform.

Slobogin on Predictive Policing in the United States

Christopher Slobogin (Vanderbilt U Law) has posted “Predictive Policing in the United States” (forthcoming in The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed.)) on SSRN. Here is the abstract:

This chapter, published in the book The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed., Thomson Reuters, 2022), describes police use of algorithms to identify “hot spots” and “hot people,” and then discusses how this practice should be regulated. Predictive policing algorithms should have to demonstrate a “hit rate” that justifies both the intrusion needed to acquire the information necessary to implement the algorithm and the action (e.g., surveillance, stop or arrest) that police seek to carry out based on the algorithm’s results. Further, for legality reasons, even a sufficient hit rate should not authorize action unless police have also observed risky conduct by the person the algorithm targets. Finally, the chapter discusses ways of dealing with the possible impact of racialized policing on the data fed into these algorithms.
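The abstract's core quantitative idea — that an algorithm's "hit rate" should be high enough to justify a given level of intrusion, and that even a sufficient hit rate should not alone authorize action — can be sketched in a few lines. This is purely illustrative and not from the chapter; the threshold numbers and function names are invented for the example:

```python
# Illustrative sketch (not from the chapter): a "hit rate" is the fraction of
# algorithm-flagged people or places where the predicted conduct is borne out.
# The proposal, loosely: more intrusive actions demand higher hit rates, and a
# hit rate alone never suffices without independently observed risky conduct.

def hit_rate(flagged: int, confirmed: int) -> float:
    """Fraction of algorithmic flags that were confirmed."""
    if flagged == 0:
        return 0.0
    return confirmed / flagged

# Hypothetical thresholds keyed to the intrusiveness of the contemplated action.
THRESHOLDS = {
    "surveillance": 0.10,  # least intrusive
    "stop": 0.30,
    "arrest": 0.50,        # most intrusive
}

def action_justified(action: str, flagged: int, confirmed: int,
                     risky_conduct_observed: bool) -> bool:
    """The hit rate must clear the action's threshold AND, per the chapter's
    legality point, police must also have observed risky conduct."""
    return (hit_rate(flagged, confirmed) >= THRESHOLDS[action]
            and risky_conduct_observed)

# Example: 200 flags with 70 confirmed is a 35% hit rate -- enough for a stop,
# but only alongside observed risky conduct, and never enough for an arrest.
print(action_justified("stop", 200, 70, risky_conduct_observed=True))    # True
print(action_justified("stop", 200, 70, risky_conduct_observed=False))   # False
print(action_justified("arrest", 200, 70, risky_conduct_observed=True))  # False
```

The point of the sketch is only the two-part structure of the test: a quantitative sufficiency check calibrated to intrusiveness, conjoined with an individualized-observation requirement.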

Crootof on AI and the Actual IHL Accountability Gap

Rebecca Crootof (U Richmond Law; Yale ISP) has posted “AI and the Actual IHL Accountability Gap” (in The Ethics of Automated Warfare and AI, Centre for Int’l Gov Innov 2022) on SSRN. Here is the abstract:

Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.

But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.

Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.

Kumar & Choudhury on Cognitive Moral Development in AI Robots

Shailendra Kumar (Sikkim University) and Sanghamitra Choudhury (University of Oxford) have posted “Cognitive Moral Development in AI Robots” on SSRN. Here is the abstract:

The widespread usage of artificial intelligence (AI) is prompting a number of ethical issues, including concerns about fairness, surveillance, transparency, neutrality, and human rights. This manuscript explores the possibility and means of cognitive moral development in AI bots, and while doing so, it floats a new concept for the characterization and development of artificially intelligent and ethical robotic machines. It proposes a classification of the order of evolution of ethics in AI bots, making use of Lawrence Kohlberg’s study of cognitive moral development in humans. The manuscript further suggests that by providing appropriate inputs to AI robots in accordance with the proposed concept, humans may assist in the development of an ideal robotic creature that is morally responsible.

Patel on Fraud on the Crypto Market

Menesh S. Patel (University of California, Davis – School of Law) has posted “Fraud on the Crypto Market” (Harvard Journal of Law & Technology, forthcoming 2023) on SSRN. Here is the abstract:

Crypto asset trading markets are booming. Traders in the United States presently can buy and sell hundreds of crypto assets on dozens of crypto exchanges, and this trading is expected to further intensify in the coming years. While investors now increasingly turn to crypto asset trading for portfolio appreciation and diversification, the popularization of secondary crypto asset trading risks significant investor harm through increased incidents of fraud. False or misleading statements by crypto asset sponsors or third parties have the prospect of financially impairing traders in crypto asset trading markets, including everyday traders who are ill-equipped to sustain significant investment losses.

As traders seek judicial redress for their fraud-related injuries, courts will be asked to make doctrinal determinations that will be pivotal to injured traders’ ability to recover. A primary issue that courts will need to confront is whether crypto asset traders can avail themselves of fraud on the market in connection with fraud claims asserted under SEC Rule 10b-5 or CFTC Rule 180.1. This Article addresses that question and has as its intended audience not just academics, but also courts, practitioners, and market participants.

The Article shows that as a doctrinal matter fraud on the market is available in securities or commodities fraud cases involving crypto assets that trade on crypto exchanges, especially in light of the Supreme Court’s decision in Halliburton II, which resolved that fraud on the market is predicated on just a generalized notion of market efficiency, rather than a strict financial economic notion of efficiency. Drawing on how courts apply the doctrine to fraud cases involving stock transactions, the Article articulates a framework for how fraud on the market should be applied to the crypto asset context and explores methodological issues relevant to the framework’s application in a given crypto asset case.

Kreiczer-Levy on Reclaiming Feudalism for the Technological Era

Shelly Kreiczer-Levy has posted “Reclaiming Feudalism for the Technological Era” (Cardozo Arts & Entertainment Law Journal, forthcoming 2023) on SSRN. Here is the abstract:

Personal property law has a blind spot when it comes to technological items, as it does not account for the long-term, unequal property collaboration that is required in operating these assets. I argue that we can learn from the intellectual legal history of feudalism about the vulnerabilities produced by property collaborations between unequal parties.

Owners of AI objects such as robots (e.g., autonomous vehicles, drones, and robot-chefs) have limited control over their property. Users own the physical product, but they have only a license to use the software. Under the terms of the license, the manufacturer retains control over many aspects of the object’s ongoing use. Although this structure is criticized in the literature, none of the critics points out the need to rethink the current structure of these rights. AI products have autonomous decision-making capabilities that make their actions hard to foresee and require periodic updates to secure their safety and quality. This Article is the first to offer a property model for technological property collaborations.

The inspiration for this property model lies in the historical form of feudal property. Feudalism is often evoked in the law and technology literature to warn us against the power that large corporations hold over users. While these concerns are valuable, I maintain that feudal property has important potential to identify and address the unique vulnerabilities in these property collaborations. First, the duties of users and manufacturers with respect to robots, as with feudal property, are not connected to the use and function of the asset. Second, the property can only be used with the collaboration of the manufacturer or lord.

Following this analysis, this Article offers two models for property collaborations in AI products: a moderate, connection model and a more radical, competition model. The connection model adopts the basic feudal concept of split ownership accompanied by a specifically tailored relational role and applies it to robots with the necessary changes. The competition model seeks to create a market where different manufacturers compete for the development value of the robot. The proposed models have several normative implications, including invalidating limitations on use, justifying a right to repair and data portability, and clarifying the copyright protection of AI-produced creative work.


Zingales & Renzetti on Digital Platform Ecosystems and Conglomerate Mergers: A Review of the Brazilian Experience

Nicolo Zingales (Getulio Vargas Foundation (FGV); Tilburg Law and Economics Center (TILEC); Stanford University – Stanford Law School Center for Internet and Society) and Bruno Renzetti (Yale University, Law School; University of Sao Paulo (USP), Faculty of Law (FD)) have posted “Digital Platform Ecosystems and Conglomerate Mergers: A Review of the Brazilian Experience” (World Competition 45 (4) (2022)) on SSRN. Here is the abstract:

This paper highlights some of the key challenges for the Brazilian merger control regime in dealing with mergers involving digital platform ecosystems (DPEs). After a quick introduction to DPEs, we illustrate how the conglomerate effects raised by such mergers remain largely unaddressed in the current landscape for merger control in Brazil. The paper is divided into four sections. First, we introduce the reader to the framework for merger control in Brazil. Second, we identify the possible theories of harm related to conglomerate mergers and elaborate on the way in which their application may be affected by the context of DPEs. Third, we review previous mergers involving DPEs in Brazil, aiming to identify the theories of harm employed (and those that could have been explored) in each case. Fourth and finally, we summarize our results and suggest adaptations to the current regime, advancing proposals for a more consistent and predictable analysis.

Lemert on Facebook’s Corporate Law Paradox

Abby Lemert (Yale Law School) has posted “Facebook’s Corporate Law Paradox” on SSRN. Here is the abstract:

In response to the digital harms created by Facebook’s platforms, lawmakers, the media, and academics repeatedly demand that the company stop putting “profits before people.” But these commentators have consistently overlooked the ways in which Delaware corporate law disincentivizes and even prohibits Facebook’s directors from prioritizing the public interest. Because Facebook experiences the majority of the harms it creates as negative externalities, Delaware’s unflinching commitment to shareholder primacy prevents Facebook’s directors from making unprofitable decisions to redress those harms. Even Facebook’s attempt to delegate decision-making authority to the independent Oversight Board verges on an unlawful abdication of corporate directors’ fiduciary duties. Facebook’s experience casts doubt on the prospects for effective corporate self-regulation of content moderation and, more broadly, on the ability of existing corporate law to incentivize or even allow social media companies to meaningfully redress digital harms.

Jabri on Algorithmic Policing

Ranae Jabri (National Bureau of Economic Research; Duke University) has posted “Algorithmic Policing” on SSRN. Here is the abstract:

Predictive policing algorithms are increasingly used by law enforcement agencies in the United States. These algorithms use past crime data to generate predictive policing boxes, specifically the highest-crime-risk areas where law enforcement is instructed to patrol every shift. I collect a novel dataset on predictive policing box locations, crime incidents, and arrests from a major urban jurisdiction where predictive policing is used. Using institutional features of the predictive policing policy, I isolate quasi-experimental variation to examine the causal impacts of algorithm-induced police presence. I find that algorithm-induced police presence decreases serious property and violent crime. At the same time, I also find disproportionate racial impacts on arrests for serious violent crimes as well as arrests in traffic incidents, i.e., lower-level offenses where police have discretion. These results highlight that using predictive policing to target neighborhoods can generate a tradeoff between crime prevention and equity.