Puaschunder on Digital Inequality

Julia M. Puaschunder (Columbia University; New School for Social Research; Harvard University; The Situationist Project on Law and Mind Sciences) has posted “Digital Inequality: A Research Agenda” (Proceedings of the 28th RAIS, June 2022) on SSRN. Here is the abstract:

We live in the age of digitalization. Digital disruption is the advancement of our lifetimes. Never before in the history of humankind have human beings given up as much decision-making autonomy as today to a growing body of artificial intelligence (AI). Digitalization features a wave of self-learning entities that generate information from exponentially growing big data sources encroaching on every aspect of our daily lives. Inequality is one of the most significant and pressing concerns of our times. Ample evidence exists in economics, law and historical studies that multiple levels of inequality dominate the current socio-dynamics, politics and living conditions around the world. Social inequality stretches from societal levels within nation states to global dimensions, as well as intergenerational inequality domains. While digitalization and inequality are predominant features of our times, hardly any information exists on the inequality inherent in digitalization. This paper breaks new ground in theoretically arguing that inequality is an overlooked by-product of innovative change, featuring concrete examples, insights and applications in the digitalization domain. A multi-faceted analysis will draw a contemporary account of digital inequality from behavioral economic, macroeconomic, comparative and legal economic perspectives. This paper aims to aid academics and practitioners in understanding the advantages but also the potential inequalities imbued in digitalization. It sets a historic landmark by capturing the Zeitgeist of our digital disruption, which heralds unexpected inequalities stemming from innovative change. The article may open eyes to understanding our times holistically, in their advantageous innovation capacities but also in the potentially unequal societal, international and intertemporal gains and losses stemming from digitalization.

Feldman & Stein on AI Governance in the Financial Industry

Robin Feldman (UC Hastings Law) and Kara Stein (Public Company Accounting Oversight Board) have posted “AI Governance in the Financial Industry” (Stanford Journal of Law, Business, and Finance, Vol. 27, No. 1, 2022) on SSRN. Here is the abstract:

Legal regimes in the United States generally conceptualize obligations as attaching along one of two pathways: through the entity or the individual. Although these dual conceptualizations made sense in an ordinary pre-modern world, they no longer capture the financial system landscape now that artificial intelligence has entered the scene. Neither person nor entity, artificial intelligence is an activity or a capacity, something that mediates relations between individuals and entities. And whether we like it or not, artificial intelligence has already reshaped financial markets. From Robinhood to the Flash Crash, Twitter’s Hash Crash, and the Knight Capital incident, each of these episodes foreshadows the potential for puzzling conundrums and serious disruptions.

Little space exists in current legal and regulatory regimes to properly manage the actions of artificial intelligence in the financial space. Artificial intelligence does not “have intent” and therefore cannot form the scienter required in many securities law contexts. It also defies the approach commonly used in financial regulation of focusing on size or sophistication. Moreover, the activity of artificial intelligence is too diffuse, distributed, and ephemeral to effectively govern by aiming regulatory firepower at the artificial intelligence itself or even at the entities and individuals currently targeted in securities law. Even when the law deviates from the classic focus on entities and individuals, as it meanders through areas that implicate artificial intelligence, we lack a unifying theory for what we are doing and why.

To begin filling this void, we propose conceptualizing artificial intelligence as a type of skill or capacity—a superpower, if you will. Just as the power of flight opens new avenues for superheroes, so, too, does the power of artificial intelligence open new avenues for mere mortals. With the capacity of flight as its animating imagery, the article proposes what we call “touchpoint regulation.” Specifically, we set out three forms of scaffolding—touchpoints, types of evil, and types of players—that provide the essential structure for any body of law society will need for governing artificial intelligence in the financial industry.