Hacker on The European AI Liability Directives

Philipp Hacker (European University Viadrina Frankfurt (Oder) – European New School of Digital Studies) has posted “The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future” on SSRN. Here is the abstract:

The optimal liability framework for AI systems remains an unsolved problem across the globe. With ChatGPT and other large models taking the technology to the next level, solutions are urgently needed. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive (AILD) and a revision of the Product Liability Directive (PLD). They constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a “Brussels effect” in AI regulation, with significant consequences for the US and other countries.

Against this background, this paper makes three novel contributions. First, it examines the Commission proposals in detail and shows that, while taking steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure-of-evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article makes suggestions for amendments to the proposed AI liability framework, collected in a concise Annex at the end of the paper. I argue, inter alia, that the dichotomy between the fault-based AILD Proposal and the supposedly strict liability PLD Proposal is fictional and should be abandoned; that an EU framework for AI liability should comprise one fully harmonizing regulation instead of two insufficiently coordinated directives; and that the current proposals unjustifiably collapse fundamental distinctions between social and individual risk by equating high-risk AI systems in the AI Act with those under the liability framework.

Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. More specifically, I make four key proposals. Effective compensation should be ensured by combining truly strict liability for certain high-risk AI systems with general presumptions of defectiveness, fault and causality in cases involving SMEs or non-high-risk AI systems. The paper introduces a novel distinction between illegitimate- and legitimate-harm models to delineate strict liability’s scope. Truly strict liability should be reserved for high-risk AI systems that, from a social perspective, should not cause harm (illegitimate-harm models, e.g., autonomous vehicles or medical AI). Models meant to cause some unavoidable harm by ranking and rejecting individuals (legitimate-harm models, e.g., credit scoring or insurance scoring) may only face rebuttable presumptions of defectiveness and causality. General-purpose AI systems should be subjected to high-risk regulation, including liability for high-risk AI systems, only in the specific high-risk use cases for which they are deployed. Consumers, in general, ought to be liable based on regular fault.

Furthermore, innovation and legal certainty should be fostered through a comprehensive regime of safe harbours, defined quantitatively to the greatest extent possible. Moreover, trustworthy AI remains an important goal for AI regulation. Hence, the liability framework must specifically extend to non-discrimination cases and provide clear rules concerning explainability (XAI).

Finally, awareness of the climate effects of AI, and of digital technology more broadly, is rapidly growing in computer science. In diametrical opposition to this shift in discourse and understanding, however, EU legislators thoroughly neglect environmental sustainability in both the AI Act and the proposed liability regime. To counter this, I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).

Taskinsoy on Crypto Crashes

John Taskinsoy (Universiti Malaysia Sarawak) has posted “The Great Silent Crash of the 21st Century” on SSRN. Here is the abstract:

A mysterious creator under the alias Satoshi Nakamoto (a pseudonym) launched the world’s first successful cryptocurrency in early January 2009, a moment that was not only historic but one that cultivated a technology revolution and money’s evolution into a digital form. However, technical issues (inherent flaws) in the design of the Bitcoin blockchain, non-technical issues (political backlash, regulatory hurdles, and environmental hazards), plus the opaqueness surrounding the launch of Bitcoin have opened the door for an endless debate, incessant criticism, spurious claims, heated arguments, a plethora of articles, and a media frenzy contemplating what Bitcoin really is (crypto-asset, commodity, or investment vehicle) or is not (currency). On the technical side, the blockchain that made Bitcoin a household name fails miserably: high latency (8 minutes or more), low transaction throughput (7 per second), low scalability (mining, with proof-of-work validation based on consensus and cryptography), and high energy cost make Bitcoin unfit to compete with new, fast-scaling cryptocurrencies such as Solana, with its transaction speed of 50,000 per second, or XLM (Stellar), with its $0.00001 fee per transaction. Although the technical problems are not without solutions, the non-technical issues are not easy to resolve, because their resolution depends on politicians, lawmakers, regulators, and various government agencies (central banks, the Fed and the ECB in particular) who choose to run headlong into a backlash against Bitcoin and other cryptocurrencies. On Tuesday (November 8, 2022), prices of cryptocurrencies tanked on news of the industry-shaking collapse of FTX (the second-largest exchange after Binance); some even dubbed the event “Crypto’s Lehman moment”.

But erratic price movements are nothing new in the crypto industry, which has been on a roller-coaster since December of 2021: after Bitcoin’s price hit almost $68,000 and its market cap $1.24 trillion, jittery investors in a hurry began to cash out their hefty gains. The inability of FTX’s CEO Sam Bankman-Fried to handle his plan to sell his company (regarded as one of crypto’s “blue chip” companies) to the rival crypto exchange Binance set off a widespread selling panic; as a result, the cryptocurrency market shed a mind-boggling $236.7 billion ($81 billion of it by Bitcoin) in just two days (Tuesday and Wednesday), which by any standard was insanely bonkers.
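The throughput and energy figures the abstract cites trace back to Bitcoin’s proof-of-work validation. Here is a minimal Python sketch of the idea — a toy, not Bitcoin’s actual implementation, which applies double SHA-256 to a binary block header and adjusts difficulty dynamically — showing why finding a valid block is deliberately slow and computation-hungry:

```python
import hashlib
import time

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

start = time.time()
nonce, digest = mine("block 1: Alice pays Bob 1 BTC", difficulty=5)
print(f"nonce={nonce}, hash={digest[:16]}..., elapsed={time.time() - start:.1f}s")
```

Each extra hex digit of difficulty multiplies the expected work by 16; scaled across a global network of miners, that arithmetic is the source of both the latency and the energy cost the abstract criticizes.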

Bloch-Wehba on The Rise, Fall, and Rise of Cyber Civil Libertarianism

Hannah Bloch-Wehba (Texas A&M University School of Law; Yale ISP) has posted “The Rise, Fall, and Rise of Cyber Civil Libertarianism” in Feminist Cyberlaw (Meg Leta Jones and Amanda Levendowski, eds.) forthcoming. Here is the abstract:

Using sexual speech as its focal point, this essay explores the ambiguous legacy of cyber civil liberties and the ascent of alternative paradigms for digital freedom. From its inception, cyberlaw was characterized by a moral panic over sexual speech, pornography, and the protection of children familiar to First Amendment scholars. Important civil libertarian victories recognized that sexual speech and pornography were constitutionally protected from state intervention. The civil libertarian paradigm saw government regulation as the primary threat to free speech online, the marketplace as the more appropriate mechanism for regulating expression, and courts as the rightful arbiters of these disputes.

But while civil libertarians successfully rolled back much regulatory intervention to enforce moral codes online, their successes came at a price: the legitimation of private power over speech. Though the civil libertarian tradition would theoretically protect sexual speech, it has in practice shifted the locus of power over speech from public to private hands. The result is a form of “market” ordering that is nominally private but that, in fact, reflects the entrenched power and influence of conservative cultural politics. In turn, this burgeoning private authority has prompted both political and cultural realignments (the “techlash”) and a broader turning away from the civil libertarian approach to speech.

Amid attacks on women’s health, privacy, equality, and autonomy, it is tempting to look to online platforms as guardians of these values and defenders of First Amendment traditions. Yet platforms have been—and continue to be—ambivalent defenders of sexual speech. Today, private speech enforcement is far broader than what the state could accomplish through direct regulation. But in a moment of challenge to sexual freedom and equality, cyber civil libertarianism might—with renewed attention to private power—yet find another foothold.

Trautman on The FTX Crypto Debacle

Lawrence J. Trautman (Prairie View A&M University – College of Business; Texas A&M University School of Law (By Courtesy)) has posted “The FTX Crypto Debacle: Largest Fraud Since Madoff?” on SSRN. Here is the abstract:

In her letter to Treasury Secretary Janet Yellen dated September 15, 2022, U.S. Senator Elizabeth Warren requests “the Treasury Department’s (Treasury’s) comprehensive review of the risks and opportunities presented by the proliferation of the digital asset market,” which “will highlight the economic danger of cryptocurrencies in several key areas, including the fraud risks they pose for investors.” Senator Warren warns that it is crucial that Treasury “create the analytical basis for very strong oversight of this sector of finance because cryptocurrency poses grave risks to investors and to the economy as a whole.”

Just weeks later, in November 2022, reports emerge that “In less than a week, the cryptocurrency billionaire Sam Bankman-Fried went from industry leader to industry villain, lost most of his fortune, saw his $32 billion company plunge into bankruptcy and became the target of investigations by the Securities and Exchange Commission and the Justice Department.” The demise of FTX and its many related crypto entities created contagion and collateral damage for other participants and investors in the cryptocurrency community. The U.S. bankruptcy proceedings of the many FTX-related entities, scattered across many jurisdictions worldwide, will likely take years to sort out.

Shortly after the Chapter 11 filing, FTX’s new post-bankruptcy CEO, John J. Ray III, characterizes the collapse of FTX as the result of “the absolute concentration of control in the hands of a very small group of grossly inexperienced and unsophisticated individuals who failed to implement virtually any of the systems or controls that are necessary for a company that is entrusted with other people’s money.”

In just a few years, Bitcoin and other cryptocurrencies have had a major societal impact, proving to be a unique payment-systems challenge for law enforcement, policy makers, and financial regulatory authorities worldwide. The rapid introduction and diffusion of technological changes, such as the blockchain, Bitcoin’s cryptographic foundation, have thus far continued to exceed the ability of law and regulation to keep pace. The story of FTX and the potential consequences for investors and the global financial system is the subject of this paper.

This paper proceeds in thirteen parts. First, the history and growth of cryptocurrencies is discussed. Second, crypto and national security risks are examined. Third, the failure of FTX is introduced. Fourth, the bankruptcy is taken up. Fifth, the collateral damage thus far to the crypto ecosystem is described. Sixth, the FTX demise is examined in terms of threshold questions that may help explain what has transpired and how productive policy may be crafted for the future. Seventh, the role of the SEC is explored. Eighth, the CFTC is discussed. Ninth, crypto and the Federal Reserve are addressed. Tenth, the role of congressional inquiries is featured. Eleventh, regulatory implications are explored. Twelfth, the failure of corporate governance is the focus. Thirteenth, prosecution and litigation are discussed. And last, I conclude.

Aoyagi & Ito on Competing DAOs

Jun Aoyagi (HKUST) and Yuki Ito (U Cal, Berkeley) have posted “Competing DAOs” on SSRN. Here is the abstract:

A decentralized autonomous organization (DAO) is an entity with no central control or ownership. A group of users discuss, propose, and implement a new platform design with smart contracts on a blockchain, taking control away from a centralized platformer. We develop a model of platform competition with the DAO governance structure and analyze how strategic complementarity affects the development of DAOs. Compared to traditional competition between centralized platformers, a DAO introduces an additional layer of competition played by users. Since users are multi-homing, they propose a new platform design that internalizes interactions between platforms and creates additional value, which is reflected in the price of a governance token. A platformer can extract this value by issuing a token but must relinquish control of her platform, losing potential fee revenue. Analyzing this tradeoff, we show that centralized platformers tend to become DAOs when strategic complementarity is strong, while an intermediate degree of strategic complementarity leads to the coexistence of a DAO and a traditional centralized platform.
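A toy numerical rendering of the platformer’s tradeoff may help fix ideas. The functional forms below are assumptions for illustration only, not the paper’s model: the platformer compares the fee revenue she keeps under centralized control against the governance-token value she could extract as a DAO, which (per the abstract) grows with the extra value multi-homing users create when strategic complementarity is strong.

```python
def platform_mode(complementarity: float, fee_revenue: float = 1.0,
                  token_multiplier: float = 1.8) -> str:
    """Toy tradeoff (assumed linear form, not the paper's): decentralize if the
    extractable governance-token value exceeds the forgone fee revenue."""
    token_value = token_multiplier * complementarity
    return "DAO" if token_value > fee_revenue else "centralized"

for c in (0.2, 0.5, 0.9):
    print(f"complementarity={c}: {platform_mode(c)}")  # only c=0.9 yields "DAO"
```

In the paper’s richer setting, an intermediate degree of complementarity can support the coexistence of a DAO and a centralized platform; this one-line decision rule cannot capture that and is meant only to illustrate the direction of the tradeoff.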

Slobogin on Predictive Policing in the United States

Christopher Slobogin (Vanderbilt U Law) has posted “Predictive Policing in the United States” (forthcoming in The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed.)) on SSRN. Here is the abstract:

This chapter, published in the book The Algorithmic Transformation of the Criminal Justice System (Castro-Toledo ed., Thomson Reuters, 2022), describes police use of algorithms to identify “hot spots” and “hot people,” and then discusses how this practice should be regulated. Predictive policing algorithms should have to demonstrate a “hit rate” that justifies both the intrusion needed to acquire the information the algorithm requires and the action (e.g., surveillance, stop or arrest) that police seek to carry out based on the algorithm’s results. Further, for legality reasons, even a sufficient hit rate should not authorize action unless police have also observed risky conduct by the person the algorithm targets. Finally, the chapter discusses ways of dealing with the possible impact of racialized policing on the data fed into these algorithms.
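Slobogin’s core regulatory test is quantitative, so a small sketch may help. The numbers and thresholds below are hypothetical illustrations, not figures from the chapter:

```python
def hit_rate(true_positives: int, total_flagged: int) -> float:
    """Share of algorithm-flagged people or places where the predicted risk materialized."""
    return true_positives / total_flagged

# Hypothetical: an algorithm flags 200 locations as "hot spots"; crime later
# occurs at 30 of them.
rate = hit_rate(30, 200)  # 0.15

# The chapter's idea: the hit rate needed to justify police action should scale
# with the action's intrusiveness. These cutoffs are invented for illustration.
THRESHOLDS = {"surveillance": 0.10, "stop": 0.30, "arrest": 0.50}
justified = [action for action, cutoff in THRESHOLDS.items() if rate >= cutoff]
print(justified)  # ['surveillance']
```

Even then, on Slobogin’s account, a sufficient hit rate alone would not authorize action: police must also have observed risky conduct by the person the algorithm targets.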

Crootof on AI and the Actual IHL Accountability Gap

Rebecca Crootof (U Richmond Law; Yale ISP) has posted “AI and the Actual IHL Accountability Gap” (in The Ethics of Automated Warfare and AI, Centre for International Governance Innovation, 2022) on SSRN. Here is the abstract:

Article after article bemoans how new military technologies — including landmines, unmanned drones, cyberoperations, autonomous weapon systems and artificial intelligence (AI) — create new “accountability gaps” in armed conflict. Certainly, by introducing geographic, temporal and agency distance between a human’s decision and its effects, these technologies expand familiar sources of error and complicate causal analyses, making it more difficult to hold an individual or state accountable for unlawful harmful acts.

But in addition to raising these new accountability issues, novel military technologies are also making more salient the accountability chasm that already exists at the heart of international humanitarian law (IHL): the relative lack of legal accountability for unintended, “awful but lawful” civilian harm.

Technological developments often make older, infrequent or underreported problems more stark, pervasive or significant. While many proposals focus on regulating particular weapons technologies to address concerns about increased incidental harms or increased accidents, this is not a case of the law failing to keep up with technological development. Instead, technological developments have drawn attention to the accountability gap built into the structure of IHL. In doing so, AI and other new military technologies have highlighted the need for accountability mechanisms for all civilian harms.

Kumar & Choudhury on Cognitive Moral Development in AI Robots

Shailendra Kumar (Sikkim University) and Sanghamitra Choudhury (University of Oxford) have posted “Cognitive Moral Development in AI Robots” on SSRN. Here is the abstract:

The widespread usage of artificial intelligence (AI) is prompting a number of ethical issues, including concerns about fairness, surveillance, transparency, neutrality, and human rights. This manuscript explores the possibility and means of cognitive moral development in AI bots and, in doing so, floats a new concept for the characterization and development of artificially intelligent and ethical robotic machines. It proposes a classification of the order of evolution of ethics in AI bots, drawing on Lawrence Kohlberg’s work on cognitive moral development in humans. The manuscript further suggests that by providing appropriate inputs to AI robots in accordance with the proposed concept, humans may assist in the development of an ideal robotic creature that is morally responsible.

Patel on Fraud on the Crypto Market

Menesh S. Patel (University of California, Davis – School of Law) has posted “Fraud on the Crypto Market” (Harvard Journal of Law & Technology, forthcoming 2023) on SSRN. Here is the abstract:

Crypto asset trading markets are booming. Traders in the United States presently can buy and sell hundreds of crypto assets on dozens of crypto exchanges, and this trading is expected to intensify further in the coming years. While investors now increasingly turn to crypto asset trading for portfolio appreciation and diversification, the popularization of secondary crypto asset trading risks significant investor harm through an increased incidence of fraud. False or misleading statements by crypto asset sponsors or third parties have the prospect of financially impairing traders in crypto asset trading markets, including everyday traders who are ill-equipped to sustain significant investment losses.

As traders seek judicial redress for their fraud-related injuries, courts will be asked to make doctrinal determinations that will be pivotal to injured traders’ ability to recover. A primary issue that courts will need to confront is whether crypto asset traders can avail themselves of fraud on the market in connection with fraud claims asserted under SEC Rule 10b-5 or CFTC Rule 180.1. This Article addresses that question and has as its intended audience not just academics, but also courts, practitioners, and market participants.

The Article shows that, as a doctrinal matter, fraud on the market is available in securities or commodities fraud cases involving crypto assets that trade on crypto exchanges, especially in light of the Supreme Court’s decision in Halliburton II, which resolved that fraud on the market is predicated on just a generalized notion of market efficiency, rather than a strict financial-economic notion of efficiency. Drawing on how courts apply the doctrine to fraud cases involving stock transactions, the Article articulates a framework for how fraud on the market should be applied in the crypto asset context and explores methodological issues relevant to the framework’s application in a given crypto asset case.
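When courts assess the kind of generalized market efficiency Halliburton II contemplates, they commonly ask whether an asset’s price reacts promptly to new information, often via event studies. A crude Python sketch of that intuition, with made-up return data for a hypothetical crypto asset (none of this comes from the Article):

```python
import statistics

def reaction_ratio(news_day_returns: list[float], quiet_day_returns: list[float]) -> float:
    """Compare average absolute returns on news days with baseline volatility on
    quiet days; a ratio well above 1 suggests prices impound new information."""
    news_mean = statistics.mean(abs(r) for r in news_day_returns)
    baseline_sd = statistics.stdev(quiet_day_returns)
    return news_mean / baseline_sd

# Hypothetical daily returns for a crypto asset
news_days = [0.08, -0.11, 0.06, -0.09]                          # asset-specific news
quiet_days = [0.01, -0.02, 0.015, -0.01, 0.005, 0.02, -0.015]   # no news
print(round(reaction_ratio(news_days, quiet_days), 2))  # ~5.4
```

Real event studies also control for market-wide movement and test statistical significance; the point here is only that the efficiency showing is empirical and asset-specific, which is why the Article’s methodological discussion matters in any given crypto asset case.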

Kreiczer-Levy on Reclaiming Feudalism for the Technological Era

Shelly Kreiczer-Levy has posted “Reclaiming Feudalism for the Technological Era” (Cardozo Arts & Entertainment Law Journal, forthcoming 2023) on SSRN. Here is the abstract:

Personal property law has a blind spot when it comes to technological items: it does not account for the long-term, unequal property collaboration required to operate these assets. I argue that we can learn from the intellectual legal history of feudalism about the vulnerabilities produced by property collaborations between unequal parties.

Owners of robots as AI objects (e.g., autonomous vehicles, drones, and robot-chefs) have limited control over their property. Users own the physical product, but they have only a license to use the software. Under the terms of the license, the manufacturer retains control over many aspects of the object’s ongoing use. Although this structure is criticized in the literature, none of the critics points out the need to rethink the current structure of these rights. AI products have autonomous decision-making capabilities that make their actions hard to foresee and require periodic updates to secure their safety and quality. This Article is the first to offer a property model for technological property collaborations.

The inspiration for this property model lies in the historical form of feudal property. Feudalism is often evoked in the law and technology literature to warn us against the power that large corporations hold over users. While these concerns are valuable, I maintain that feudal property has important potential for identifying and addressing the unique vulnerabilities in these property collaborations. First, in both robots and feudal property, the duties of users and manufacturers are not connected to the use and function of the asset. Second, the property can be used only with the collaboration of the manufacturer or lord.

Following this analysis, this Article offers two models for property collaborations in AI products: a moderate, connection model and a more radical, competition model. The connection model adopts the basic feudal concept of split ownership accompanied by a specifically tailored relational role and applies it to robots with the necessary changes. The competition model seeks to create a market where different manufacturers compete for the development value of the robot. The proposed models have several normative implications, including invalidating limitations on use, justifying a right to repair and data portability, and clarifying the copyright protection of AI-produced creative work.

Recommended.