Puaschunder on Digital Inequality

Julia M. Puaschunder (Columbia University; New School for Social Research; Harvard University; The Situationist Project on Law and Mind Sciences) has posted “Digital Inequality: A Research Agenda” (Proceedings of the 28th RAIS, June 2022) on SSRN. Here is the abstract:

We live in the age of digitalization. Digital disruption is the advancement of our lifetimes. Never before in the history of humankind have human beings given up as much decision-making autonomy as today to a growing body of artificial intelligence (AI). Digitalization features a wave of self-learning entities that generate information from exponentially-growing big data sources encroaching on every aspect of our daily lives. Inequality is one of the most pressing concerns of our times. Ample evidence exists in economics, law and historical studies that multiple levels of inequality dominate current socio-dynamics, politics and living conditions around the world. Social inequality stretches from societal levels within nation states to global dimensions, as well as to intergenerational domains. While digitalization and inequality are predominant features of our times, hardly any information exists on the inequality inherent in digitalization. This paper breaks new ground by theoretically arguing that inequality is an overlooked by-product of innovative change, featuring concrete insights and applications in the digitalization domain. A multi-faceted analysis draws a contemporary account of digital inequality from behavioral economic, macroeconomic, comparative and legal economic perspectives. The paper aims to aid academics and practitioners in understanding the advantages, but also the potential inequalities, imbued in digitalization. It seeks to capture the Zeitgeist of the digitalization disruption and the unexpected inequalities stemming from innovative change, and may open eyes to a holistic understanding of our times: their advantageous innovation capacities, but also the potential societal, international and intertemporal unequal gains and losses from digitalization.

Feldman & Stein on AI Governance in the Financial Industry

Robin Feldman (UC Hastings Law) and Kara Stein (Public Company Accounting Oversight Board) have posted “AI Governance in the Financial Industry” (Stanford Journal of Law, Business, and Finance, Vol. 27, No. 1, 2022) on SSRN. Here is the abstract:

Legal regimes in the United States generally conceptualize obligations as attaching along one of two pathways: through the entity or the individual. Although these dual conceptualizations made sense in an ordinary pre-modern world, they no longer capture the financial system landscape, now that artificial intelligence has entered the scene. Neither person nor entity, artificial intelligence is an activity or a capacity, something that mediates relations between individuals and entities. And whether we like it or not, artificial intelligence has already reshaped financial markets. From Robinhood, to the Flash Crash, to Twitter’s Hash Crash, to the Knight Capital incident, each of these episodes foreshadows the potential for puzzling conundra and serious disruptions.

Little space exists in current legal and regulatory regimes to properly manage the actions of artificial intelligence in the financial space. Artificial intelligence does not “have intent” and therefore cannot form the scienter required in many securities law contexts. It also defies the approach commonly used in financial regulation of focusing on size or sophistication. Moreover, the activity of artificial intelligence is too diffuse, distributed, and ephemeral to effectively govern by aiming regulatory firepower at the artificial intelligence itself or even at the entities and individuals currently targeted in securities law. Even when the law deviates from the classic focus on entities and individuals, as it meanders through areas that implicate artificial intelligence, we lack a unifying theory for what we are doing and why.

To begin filling this void, we propose conceptualizing artificial intelligence as a type of skill or capacity—a superpower, if you will. Just as the power of flight opens new avenues for superheroes, so, too, does the power of artificial intelligence open new avenues for mere mortals. With the capacity of flight as its animating imagery, the article proposes what we would call “touchpoint regulation.” Specifically, we set out three forms of scaffolding—touchpoints, types of evil, and types of players—that provide the essential structure for any body of law society will need for governing artificial intelligence in the financial industry.

Khan & Hanna on The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability

Mehtab Khan (Yale Law School) and Alex Hanna (Distributed AI Research Institute) have posted “The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability” (19 Ohio St. Tech. L.J. (Forthcoming 2023)) on SSRN. Here is the abstract:

The datasets used to train and build AI technologies have received increased attention from the computer science and social science research communities, but less from legal scholarship. Both Large-Scale Language Datasets (LSLDs) and Large-Scale Computer Vision Datasets (LSCVDs) have been at the forefront of such discussions, due to recent controversies involving the use of facial recognition technologies and the use of publicly available text to train massive models that generate human-like text. Many of these datasets serve as “benchmarks” for developing models used in both academic and industry research, while others are used solely for training models. The process of developing LSLDs and LSCVDs is complex and contextual, involving dozens of decisions about what kinds of data to collect, label, and train a model on, as well as how to make the data available to other researchers. However, little attention has been paid to mapping and consolidating the legal issues that arise at different stages of this process: when the data is collected, when it is used to build and evaluate models and applications, and when it is distributed more widely.

In this article, we offer four main contributions. First, we describe what kinds of objects these datasets are, how many different kinds exist, what types of modalities they encompass, and why they are important. Second, we provide more clarity about the stages of dataset development – a process that has thus far been subsumed within broader discussions about bias and discrimination – and the subjects who may be susceptible to harms at each point of development. Third, we provide a matrix of both the stages of dataset development and the subjects of dataset development, which traces the connections between stages and subjects. Fourth, we use this analysis to identify some basic legal issues that arise at the various stages in order to foster a better understanding of the dilemmas and tensions that arise at every stage. We situate our discussion within the wider context of current debates and proposals related to algorithmic accountability. This paper fills an essential gap in comprehending the complicated landscape of legal issues connected to datasets and the gigantic AI models trained on them.
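The “matrix” of stages and subjects that the authors describe lends itself to a concrete rendering. The sketch below is only an illustration of the general idea, with hypothetical stage and subject labels of my own rather than the authors' actual categories (the abstract itself names only collection, model building and evaluation, and wider distribution):

```python
# Illustrative sketch only: the stage/subject labels and legal issues below are
# hypothetical examples, not the matrix proposed by Khan & Hanna.
from dataclasses import dataclass


@dataclass
class StageEntry:
    subjects: list[str]       # who may be exposed to harm at this stage
    legal_issues: list[str]   # example legal questions the stage raises


# A toy "stages x subjects" matrix for a large-scale dataset pipeline.
DATASET_MATRIX: dict[str, StageEntry] = {
    "collection": StageEntry(
        subjects=["content creators", "people depicted in images"],
        legal_issues=["copyright", "privacy and consent", "terms-of-service limits"],
    ),
    "curation_and_labeling": StageEntry(
        subjects=["data workers and annotators", "people described by labels"],
        legal_issues=["labor law", "defamatory or discriminatory labels"],
    ),
    "training_and_evaluation": StageEntry(
        subjects=["benchmark participants", "downstream users"],
        legal_issues=["bias and discrimination", "memorization of personal data"],
    ),
    "distribution": StageEntry(
        subjects=["dataset re-users", "original data subjects"],
        legal_issues=["licensing", "circulation after takedown requests"],
    ),
}

if __name__ == "__main__":
    for stage, entry in DATASET_MATRIX.items():
        print(f"{stage}: subjects={entry.subjects}, issues={entry.legal_issues}")
```

Even in this toy form, the point of the matrix comes through: each stage has its own set of affected subjects, so a single “bias” framing misses most of the legal exposure.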

Huq on Militant Democracy Comes to the Metaverse

Aziz Z. Huq (University of Chicago – Law School) has posted “Militant Democracy Comes to the Metaverse” (Emory Law Journal, Vol. 72, Forthcoming) on SSRN. Here is the abstract:

Social media platforms such as Facebook, Twitter, Instagram, and Parler are an increasingly central plank of the democratic public sphere in the United States. The prevailing view of this platform-based public sphere has of late become increasingly dour and pessimistic. What was once seen as a “technology of liberation” has come to be understood as a channel and amplifier of “antisystem” forces in democracies. This is not the first time, however, that a private actor that operates as a necessary part of the democratic system has turned out to be a threat to the quality of democracy itself: The same was true for parties of the extreme left and extreme right in postwar Europe. The principal theoretical lens through which those earlier challenges were analyzed traveled under the label of “militant democracy,” a term coined by the émigré German political scientist Karl Loewenstein.

This essay uses the lens of militant democracy theory to think about the challenge posed by digital platforms to democracy today. It draws two main lessons. First, the digital platform/democracy problem is structurally similar to the challenge of antisystem parties that Loewenstein’s militant democracy theory was crafted to meet. This insight leads, second, to an opportunity to explore the practical and theoretical space of militant democracy for insights into democracy’s contemporary challenge from social media. While I make no claim that effectual interventions today can be read off in some mechanical way from yesterday’s experience with anti-democratic parties, I do suggest that the debate on militant democracy holds broad-brush lessons for contemporary debates. This illuminates, at least in general terms, the sorts of legal and reform strategies that are more likely to succeed, and those that are likely to fail, as pro-democracy moves with respect to digital platforms.

Hutson & Winters on Algorithmic Disgorgement

Jevan Hutson and Ben Winters (Electronic Privacy Information Center) have posted “America’s Next ‘Stop Model!’: Algorithmic Disgorgement” on SSRN. Here is the abstract:

Beginning with its 2019 final order In the Matter of Cambridge Analytica, LLC, followed by a May 2021 decision and order In the Matter of Everalbum, Inc. in the context of facial recognition technology and affirmed by its March 2022 stipulated order in United States of America v. Kurbo, Inc. et al in the context of children’s privacy, the United States Federal Trade Commission now wields algorithmic disgorgement—effectively the destruction of algorithms and models built upon unfairly or deceptively sourced (i.e., ill-gotten) data—as a consumer protection tool in its ongoing, uphill battle against unfair and deceptive practices in an increasingly data-driven world. The thesis of this Article is that algorithmic disgorgement is (i) an essential tool for consumer protection enforcement to address the complex layering of unfairness and deception common in data-intensive products and businesses and (ii) worthy of express endorsement by lawmakers and immediate use by consumer protection law enforcement. To that end, the Article explains how the harms of algorithms built on and enhanced by ill-gotten data are layered and hard to trace, and therefore require an enforcement tool that is comprehensive and effective as a deterrent. This Article first traces the development of algorithmic disgorgement in the United States and then situates that development within historical and other current US consumer protection law enforcement mechanisms. From there, this Article reflects upon the need for and importance of algorithmic disgorgement and broader consumer protection enforcement for issues of unfairness and deception in AI, highlighting the significance of the Kurbo case involving a violation of a children’s privacy law, which has no corollary for adults in the U.S. Ultimately, this Article argues that (i) state and federal lawmakers should enshrine algorithmic disgorgement into law to insulate it from potential challenge and (ii) state and federal consumer protection law enforcement entities ought to wield algorithmic disgorgement more aggressively to remedy and deter unfair and deceptive practices.

Gervais on How Courts Can Define Humanness in the Age of Artificial Intelligence

Daniel J. Gervais (Vanderbilt University – Law School) has posted “Human as a Matter of Law: How Courts Can Define Humanness in the Age of Artificial Intelligence” on SSRN. Here is the abstract:

This Essay considers the ability of AI machines to perform intellectual functions long associated with human higher mental faculties as a form of sapience, a notion that more fruitfully describes their abilities than either intelligence or sentience. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics and neuroscience, the essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction, and making it correctly will likely become gradually more important, as humans become more like machines (cyborgs, cobots) and machines more like humans (neural networks, robots with biological material). The essay draws a line that separates human and machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.

Blanke on The CCPA, ‘Inferences Drawn,’ and Federal Preemption

Jordan Blanke (Mercer University) has posted “The CCPA, ‘Inferences Drawn,’ and Federal Preemption” (Richmond Journal of Law and Technology, Vol. 29, No. 1 (Forthcoming 2022)) on SSRN. Here is the abstract:

In 2018 California passed an extensive data privacy law. One of its most significant features was the inclusion of “inferences drawn” within its definition of “personal information.” The law was significantly strengthened in 2020 with the expansion of rights for California consumers, new obligations on businesses, including the incorporation of GDPR-like principles of data minimization, purpose limitation, and storage limitation, and the creation of an independent agency to enforce these laws. In 2022 the Attorney General of California issued an Opinion that provided for an extremely broad interpretation of “inferences drawn.” Thereafter the American Data Privacy Protection Act was introduced in Congress. It does not provide nearly the protection for inferences that California law does, but it threatens to preempt almost all of it. This article argues that, given the importance of California being able to finally regulate inferences drawn, any federal bill must either provide similar protection, exclude California law from preemption, or be opposed.

Packin & Smith on ESG, Crypto, And What Has The IRS Got To Do With It?

Nizan Geslevich Packin (Baruch College, Zicklin School of Business; CUNY Department of Law) and Sean Stein Smith (CUNY) have posted “ESG, Crypto, And What Has The IRS Got To Do With It?” (Stanford Journal of Blockchain Law & Policy, Forthcoming) on SSRN. Here is the abstract:

Regulation almost always lags behind innovation, and this is also the situation with many FinTech-based products and services, particularly those offered by crypto industry players. The crypto sector is new and innovative, marked not only by highly technical concepts but also by high levels of volatility and financial risk. In attempting to address the issues it raises in legal fields ranging from financial regulation, such as tax requirements, to environmental law, and specifically matters relating to climate change and energy waste, regulators often find themselves trying to apply existing legal frameworks rather than creating new, clear rules. Much has been written about the SEC’s regime of regulation by enforcement of the crypto industry, and the impact of this type of rulemaking on businesses and persons. However, other financial regulators adopting a similar style of rulemaking—such as the IRS—have received much less attention for the impact of their regulatory actions. As noted within this Article, prominent industry associations continue to push back against applying existing tax law and protocols to specific crypto activities. One notable example, relevant in the Environmental, Social and Governance (ESG) awareness era, concerns the unintended consequences of the IRS’s regulation by enforcement, given the impact that such rules have on the transition to greener energy. This arises in connection with proof-of-stake (PoS) consensus mechanisms—one of the two prominent transaction validation mechanisms. The PoS mechanism includes staking rewards – reward tokens earned for securing a PoS blockchain – that validators, also known as stakers, receive when they validate transactions. The IRS has asserted that cryptocurrencies are property for income taxation purposes, which means that every transaction results in a gain or loss equal to the difference between the purchase price of the crypto asset and its sale price.
In the case of the PoS mechanism’s staking rewards, the debate over whether the rewards should be classified as taxable income when received or when sold can also be framed as a choice between applying the existing tax code, word for word, to what many consider a new asset category, or eventually changing the code to reflect new economic models supported by some industry actors. This issue, which might seem minor, recently became the subject of a lawsuit that one validator brought against the IRS, arguing that staking rewards should be taxed at the time they are sold rather than when they are created, as the IRS has argued. Although it is just one court case with limited implications for other taxpayers, it is illustrative of the frustration felt by some market participants. In early October 2022, the case was dismissed by the court. Much like the Department of Justice several months earlier, the court found that Jarrett presented no case or controversy, as the claim was moot: no issue remained unsettled because the IRS had issued a full refund, including interest, as the taxpayers had initially requested. Ultimately, this ongoing debate highlights the following question: under a strict application of existing tax rules, staking rewards are taxable when received, but should that be the case? Practically speaking, following the dismissal of the Jarrett case, the immediate consequence is that taxpayers should assume that staking rewards are taxed as income at the time of receipt, unless explicitly excluded by the IRS in future tax guidance. That said, this Article argues that legal clarity is important and that regulation by enforcement is less equitable. Additionally, this Article argues that proper lawmaking is especially needed in this PoS-based staking situation, given the importance of incentivizing persons to use PoS mechanisms for environmental reasons, and the implications of financial regulation’s nudges for the behavior of persons and the promotion of ESG-based goals. Indeed, this clarity is needed because current ad hoc legal structures cannot keep pace with the crypto sector’s negative environmental impact and energy consumption, the same issues that prompted the Ethereum merge. It is clear, therefore, that the law needs additional behavioral incentives to rely on in supporting and facilitating the promotion of greener environmental goals.
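The timing question at the heart of Jarrett is easy to see with a toy calculation. The sketch below is purely illustrative (the numbers and function names are mine, and it is neither tax advice nor the authors' analysis); it simply contrasts treating staking rewards as income at receipt, per the IRS position described in the abstract, with recognizing them only at sale, as the Jarrett plaintiffs urged.

```python
# Toy illustration of the staking-reward timing debate; all figures are made up.

def income_if_taxed_at_receipt(tokens: float, price_at_receipt: float,
                               price_at_sale: float) -> tuple[float, float]:
    """IRS position described in the abstract: rewards are ordinary income when
    received; a later sale produces a capital gain or loss against that basis."""
    ordinary_income = tokens * price_at_receipt
    capital_gain_or_loss = tokens * (price_at_sale - price_at_receipt)
    return ordinary_income, capital_gain_or_loss


def income_if_taxed_only_at_sale(tokens: float, price_at_sale: float) -> float:
    """Position pressed by the Jarrett plaintiffs: newly created tokens are not
    income until sold, so the only amount recognized is the sale proceeds."""
    return tokens * price_at_sale


if __name__ == "__main__":
    # 10 reward tokens credited at $100 each, later sold at $60 each.
    income, gain = income_if_taxed_at_receipt(10, 100, 60)
    print(income, gain)   # 1000 of ordinary income plus a -400 capital loss
    print(income_if_taxed_only_at_sale(10, 60))   # 600 recognized only at sale
```

The contrast shows why validators care: under the receipt rule, tax attaches to value that may have evaporated by the time the tokens can actually be sold.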

Martin & Parmar on What Firms Must Know Before Adopting AI

Kirsten Martin (Notre Dame) and Bidhan Parmar (University of Virginia – Darden School of Business) have posted “What Firms Must Know Before Adopting AI: The Ethics of AI Transparency” on SSRN. Here is the abstract:

Firms have obligations to stakeholders that do not disappear when managers adopt AI decision systems. We introduce the concept of the AI knowledge gap – where AI provides limited information about its operations while stakeholder demands for information justifying firm decisions increase. We develop a framework of what firms must know about their AI model during the procurement process to ensure they understand how the model allows the firm to meet its existing obligations, including knowing the anticipated risks of using the AI decision system, how to prevent foreseeable risks, and how to plan for resilience. We argue there are no conditions under which it is ethical to unquestioningly adopt recommendations from a black-box AI program within an organization. On this argument, adequate comprehension and knowledge of an AI model is not a negotiable design feature but a strategic and moral requirement.
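For readers who want a concrete picture of what such a pre-adoption review might involve, here is a minimal sketch. The checklist fields and pass/fail logic are my own hypothetical rendering of the abstract's three elements (anticipated risks, prevention of foreseeable risks, a resilience plan) plus its black-box point; they are not the authors' framework.

```python
# Hypothetical pre-procurement checklist inspired by the abstract; the field
# names and decision logic are illustrative assumptions, not Martin & Parmar's model.
from dataclasses import dataclass


@dataclass
class AIProcurementReview:
    anticipated_risks_documented: bool   # do we know what can go wrong, and for whom?
    prevention_measures_in_place: bool   # are foreseeable risks mitigated before deployment?
    resilience_plan_exists: bool         # can we detect, pause, and remediate failures?
    model_is_black_box: bool             # is the vendor unable or unwilling to explain the model?

    def adoption_defensible(self) -> bool:
        """A firm that cannot answer these questions arguably cannot meet its
        existing stakeholder obligations, per the article's argument."""
        if self.model_is_black_box:
            # The abstract argues unquestioning adoption of a black box is never ethical.
            return False
        return (self.anticipated_risks_documented
                and self.prevention_measures_in_place
                and self.resilience_plan_exists)


print(AIProcurementReview(True, True, True, False).adoption_defensible())  # True
print(AIProcurementReview(True, True, True, True).adoption_defensible())   # False
```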

G’sell on AI Judges

Florence G’sell (Sciences Po Law School) has posted “AI Judges” (Larry A. DiMatteo, Cristina Poncibò & Michal Cannarsa (eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, Cambridge University Press, 2022) on SSRN. Here is the abstract:

The prospect of a “robot judge” gives rise to many fantasies and concerns. Some argue that only humans are endowed with the modes of thought, intuition and empathy that would be necessary to analyze or judge a case. As early as 1976, Joseph Weizenbaum, creator of Eliza, one of the very first conversational agents, strongly asserted that important decisions should not be left to machines, which are sorely lacking in human qualities such as compassion and wisdom. On the other hand, it could be argued today that the courts would be wrong to deprive themselves of the possibilities opened up by artificial intelligence tools, whose capabilities are expected to improve greatly in the future. In reality, the question of the use of AI in the judicial system should probably be asked in a nuanced way, without considering the dystopian and highly unlikely scenario of the “robot judge” portrayed by Trevor Noah in a famous episode of The Daily Show. Rather, the question is how courts can benefit from increasingly sophisticated machines. To what extent can these tools help them render justice? What is their contribution in terms of decision support? Can we seriously consider delegating to a machine the entire power to make a judicial decision?

This chapter proceeds as follows. Section 23.2 is devoted to the use of AI tools by the courts. It is divided into three subsections. Section 23.2.1 deals with the use of risk assessment tools, which are widespread in the United States but highly regulated in Europe, particularly in France. Section 23.2.2 presents the possibilities opened up by machine learning algorithms trained on databases of judicial decisions, which are able to anticipate court decisions or recommend solutions to judges. Section 23.2.3 considers the very unlikely eventuality of full automation of judicial decision-making.