Silva et al. on Procuring Public-Sector AI: Guidance for Local Governments

Elise Silva (U Pittsburgh) et al. have posted “Procuring Public-Sector AI: Guidance for Local Governments” on SSRN. Here is the abstract:

In this white paper written for public sector employees, we discuss the unique governance challenges posed by procured artificial intelligence (AI) systems, and provide actionable guidance on first steps that governments can take today to manage these emerging risks. Our audience for this paper is U.S. local government employees involved in procurement, IT, innovation, and related departments. Our recommendations are informed by an empirical study of local governments’ procurement practices across the United States.

Cohen on Oligarchy, State, and Cryptopia

Julie E. Cohen (Georgetown U Law Center) has posted “Oligarchy, State, and Cryptopia” (94 Fordham L. Rev. (forthcoming)) on SSRN. Here is the abstract:

Theoretical accounts of power in networked digital environments typically do not give systematic attention to the phenomenon of oligarchy—to extreme concentrations of material wealth deployed to obtain and protect durable personal advantage. The biggest technology platform companies are dominated to a singular extent by a small group of very powerful and extremely wealthy men who have played uniquely influential roles in structuring technological development in particular ways that align with their personal beliefs and who now wield unprecedented informational, sociotechnical, and political power. Developing an account of oligarchy and, more specifically, of tech oligarchy within contemporary political economy therefore has become a project of considerable urgency. This essay undertakes that project.

As I will show, tech oligarchs’ power derives partly from legal entrepreneurship related to corporate governance and partly from the infrastructural character of the functions the largest technology platform firms now perform. It is transnational and multidimensional, producing a wide range of consequences that are impossible for millions (and sometimes billions) around the globe to avoid. And it is personal; tech oligarchs have never been required to trade increased scale for increased accountability.

This account of tech oligarchy has important implications for three large categories of hotly debated issues. First, it sheds new light on the much-remarked inability of nation states to govern giant global technology platform firms effectively using the traditional tools of economic regulation. Second, it illuminates an important difference between the way capitalists approach projects for regulatory capture and the way technology oligarchs approach them. The ordinary capture projects of most interest to tech oligarchs revolve around personal enrichment; the extraordinary experiment that the U.S. is now witnessing, however, also seeks to reconfigure the state and its constituent institutions in ways more amenable to oligarchic direction. Third, it counsels more careful attention to an array of other oligarchic projects—including especially dreams of space colonization and the quest to develop artificial general intelligence—that have struck many observers as fantastical. Through such projects, tech oligarchs are working to dismantle existing forms of social, economic, and political organization and define a human future that they alone determine.

Arun on The Silicon Valley Effect

Chinmayi Arun (Yale Law) has posted “The Silicon Valley Effect” on SSRN. Here is the abstract:

The most influential Artificial Intelligence (“AI”) companies are shaping AI’s legal order and regulatory discourse to protect their business interests and shift focus away from how their practices harm human beings. I call Big Tech’s influence on AI’s legal order the Silicon Valley Effect and argue that it is understudied and underestimated.

The major AI companies rely on global value chains and global markets. Capitalism drives them to exploit and experiment on vulnerable populations in permissive regulatory environments. Industry-influenced transnational legal orders – including domestic regulation and treaties – protect companies’ practices and products from regulation. Legal scholarship should account for how global informational capitalism drives the industry to influence the development of law transnationally.

Scholars who study technology’s political economy tend to advocate for localized regulation, and scholars who focus on technology’s global legal orders tend to focus on states. Focusing on isolated domestic remedies for transnational phenomena is a mistake, since it permits the industry to develop harmful products and practices elsewhere. Focusing exclusively on states’ transnational influence elides the industry’s significant influence on regulatory discourse, and on foreign and domestic policy.

As the AI industry accumulates power, it can overwhelm weakening state regulators in parts of the world that could initially resist its persuasive and material power. ‘Strong’ states like the US are one election away from vulnerability. To be resilient, they should stop relying solely on domestic regulation and develop transnationally harmonized legal orders to curtail the industry’s power and counteract the Silicon Valley Effect.

Dothan on Facing Up To Internet Giants

Shai Dothan (University of Copenhagen – iCourts – Centre of Excellence for International Courts) has posted “Facing Up To Internet Giants” (Duke Journal of Comparative & International Law, Forthcoming) on SSRN. Here is the abstract:

Mancur Olson claimed that concentrated interests win against diffuse interests even in advanced democracies. Multinational companies, for example, work well in unison to suit their interests. The rest of the public is not motivated or informed enough to resist them. In contrast, other scholars argued that diffuse interests may be able to fight back, but only when certain conditions prevail. One of the conditions for the success of diffuse interests is the intervention of national and international courts. Courts are able to fix problems affecting diffuse interests. Courts can also initiate deliberation that can indirectly empower diffuse interests by getting them informed. This paper investigates the jurisprudence of the European Court of Human Rights (ECHR) and the Court of Justice of the European Union (CJEU). It argues that these international courts help consumers, a diffuse interest group, to succeed in their struggle against internet companies, a concentrated interest group.

Whittaker on Corporate Capture of AI

Meredith Whittaker (NYU) has posted “The Steep Cost of Capture” on SSRN. Here is the abstract:

In considering how to tackle this onslaught of industrial AI, we must first recognize that the “advances” in AI celebrated over the past decade were not due to fundamental scientific breakthroughs in AI techniques. They were and are primarily the product of significantly concentrated data and compute resources that reside in the hands of a few large tech corporations. Modern AI is fundamentally dependent on corporate resources and business practices, and our increasing reliance on such AI cedes inordinate power over our lives and institutions to a handful of tech firms. It also gives these firms significant influence over both the direction of AI development and the academic institutions wishing to research it.

This means that tech firms are startlingly well positioned to shape what we do—and do not—know about AI and the business behind it, at the same time that their AI products are working to shape our lives and institutions.

Examining the history of the U.S. military’s influence over scientific research during the Cold War, we see parallels to the tech industry’s current influence over AI. This history also offers alarming examples of the way in which U.S. military dominance worked to shape academic knowledge production, and to punish those who dissented.

Today, the tech industry is facing mounting regulatory pressure, and is increasing its efforts to create tech-positive narratives and to silence and sideline critics in much the same way the U.S. military and its allies did in the past. Taken as a whole, we see that the tech industry’s dominance in AI research and knowledge production puts critical researchers and advocates within, and beyond, academia in a treacherous position. This threatens to deprive frontline communities, policymakers, and the public of vital knowledge about the costs and consequences of AI and the industry responsible for it—right at the time that this work is most needed.