Goldstein & Salib on Collaboration at the Brink: International Law for the AI Arms Race

Simon Goldstein (U Hong Kong) and Peter Salib (U Houston Law Center) have posted “Collaboration at the Brink: International Law for the AI Arms Race” on SSRN. Here is the abstract:

The US and China are locked in a high-stakes race to control the future of AI. This Essay begins by arguing that an AI race is irrational, posing serious risks to both countries. The Essay then critiques existing proposals for avoiding such a race. The existing proposals are too indexed on the international nonproliferation and disarmament agreements that worked for the last arms race, the race for nuclear supremacy. AI technology is quite different from nuclear technology in ways that make nonproliferation unstable.

The Essay then defends an alternative approach, backed by international law, in which the US and China would collaborate to develop powerful, safe AIs. The centerpiece of our proposal is the formation of a joint AI lab that would combine the best US and Chinese AI talent, supercharged by US and Chinese national investment. We argue that, compared to either an AI race or a nonproliferation equilibrium, the joint lab would be both a safer and a faster route to AI development. For these reasons, both AI safety advocates and AI accelerationists should endorse the joint lab approach.

Lim on Determinants of Socially Responsible AI Governance

Daryl Lim (Pennsylvania State U) has posted “Determinants of Socially Responsible AI Governance” (Duke Law & Technology Review | Vol. 25, No. 1, 2025) on SSRN. Here is the abstract:

The signing of the first international AI treaty by the United States, European Union, and other nations marks a pivotal step in establishing a global framework for AI governance, ensuring that AI systems respect human rights, democracy, and the rule of law. This article advances the concepts of justice, equity, and the rule of law as yardsticks of socially responsible AI, from development through deployment, to ensure that AI technologies do not exacerbate existing inequalities but actively promote fairness and inclusivity. Part I explores AI’s potential to improve access to justice for marginalized communities and small and medium-sized law firms while scrutinizing AI-related risks faced by judges, lawyers, and the communities they serve. Part II examines the structural biases in AI systems, focusing on how biased data and coding practices can entrench inequity and how intellectual property protections like trade secrets can limit transparency and undermine accountability in AI governance. Part III evaluates the normative impact of AI on traditional legal frameworks, offering a comparative analysis of governance models: the U.S. market-driven approach, the EU’s rights-based model, China’s command economy, and Singapore’s soft law framework. The analysis highlights how different systems balance innovation with safeguards, emphasizing that successful AI governance must integrate risk-based regulation and transparency without stifling technological advancement. Through these comparative insights, the article proposes a proactive governance framework incorporating transparency, equity audits, and tailored regulatory approaches. This forward-looking analysis offers legal scholars and policymakers a comprehensive roadmap for navigating AI’s transformative effects on justice, equity, and the rule of law.

Grgic on AI Diplomacy: Insights and Innovations from the Bilateral Navigator

Sinisa Grgic (Harvard U) has posted “AI Diplomacy: Insights and Innovations from the Bilateral Navigator” on SSRN. Here is the abstract:

“AI Diplomacy: Insights and Innovations from the Bilateral Navigator” presents a groundbreaking exploration of how artificial intelligence is fundamentally transforming international relations and diplomatic practice. Drawing on extensive experience in both technological innovation and diplomatic service, this comprehensive work examines the intersection of AI and diplomacy across strategic, operational, and ethical dimensions. The book introduces the Bilateral Navigator—an innovative AI-powered project analyzing relationships between all 193 UN member states—demonstrating how data-driven insights can democratize diplomatic analysis and enhance international cooperation. Through detailed case studies, practical applications, and theoretical frameworks, it addresses critical questions about algorithmic bias, privacy concerns, and the evolving role of human diplomats in an increasingly AI-augmented world. This pioneering work provides diplomats, policymakers, and scholars with both conceptual understanding and actionable strategies for navigating the new landscape of international relations. By balancing technological possibilities with diplomatic wisdom, “AI Diplomacy” offers a vision for how nations can harness AI’s potential while preserving the essential human elements that have always defined successful diplomacy.

Trautman on International Business, Terrorism, and the Impact of Rapid Technological Change

Lawrence J. Trautman (Prairie View A&M U College of Business) has posted “International Business, Terrorism, and the Impact of Rapid Technological Change” on SSRN. Here is the abstract:

As global conflict flourishes, technological advances have dramatically changed the economics of geopolitical conflict. During recent years, U.S. government agencies have invested heavily in facial recognition, fingerprint databases, investigative tools that provide for searching through gigabytes of text messages, email data, and similar files, and the unlocking of phones. Other significant technological developments are now on the horizon and promise additional disruption. Many of these technologies fall into the hands of multinational criminal organizations and are deployed against entities conducting international business. These are the issues and topics of this paper.

Wu on Techno-Federalism: How Regulatory Fragmentation Shapes the U.S.-China AI Race

Jason Jia-Xi Wu (Harvard U) has posted “Techno-Federalism: How Regulatory Fragmentation Shapes the U.S.-China AI Race” (17 Harv. Nat’l Sec. J. __ (forthcoming 2026)) on SSRN. Here is the abstract:

The U.S. and China are engaging in a regulatory arms race over artificial intelligence (AI). Yet, existing debates often overlook a critical factor shaping this AI race: federalism—the division of regulatory authority between federal and state governments. In the U.S., states lead in AI regulation, with the federal government taking a limited, backseat role. Key laws governing AI use and liability—contract, corporate, licensure, and tort law—fall within state jurisdiction. Similarly, in China, local authorities play a pivotal role in AI policy. Although China does not have a formal federalist system, it fosters decentralized innovation through local policy experimentation, reflecting a form of “federalism, Chinese style.” Despite opposing value systems, both countries are converging towards a fragmentary approach to AI governance.

What explains this convergence? The answer, I argue, lies in industry self-governance. In both countries, the tech industry is increasingly acting as a co-regulator of AI systems alongside traditional central and local authorities. As gatekeepers, suppliers, and beneficiaries of disruptive AI technologies, the tech industry imposes market discipline on regulators at both levels, often by exploiting jurisdictional differences and leveraging local protectionism to advance its interests. However, as national security takes center stage in this AI race, the tech industry is assuming both commercial and geopolitical roles, emerging as a third regulatory force that reshapes center-local relations.

This new paradigm reflects what I call “techno-federalism.” Combining “technocracy” with “federalism,” it describes how AI both disrupts and transforms traditional federalism by enabling industry self-governance. Techno-federalism departs from traditional federalism in three aspects. First, it does not originate from deliberate constitutional design. Rather, it emerges organically in response to AI’s rapidly evolving landscape, leading to blurred regulatory boundaries across different levels of government. Second, it is characterized by legal uncertainty over AI governance responsibilities, contrasting with traditional federalism’s clear power divisions. Third, it is shaped by the market norms of tech firms—platforms, developers, and data intermediaries—that operate under state and local law.

Techno-federalism challenges the dominant view that the U.S.-China AI race is merely a “battle of values” between liberal democracy and techno-autocracy. By highlighting the tripartite interplay between central, local, and market power, techno-federalism offers a more nuanced perspective, addressing the limits of conventional geostrategic approaches to AI engagement with China.

Marcus-Obiene on Towards a New International Framework for Data Governance: Proposing Data Embassy Status for Global Data Centres

Fernandez Marcus-Obiene (Independent) has posted “Towards a New International Framework for Data Governance: Proposing Data Embassy Status for Global Data Centres” on SSRN. Here is the abstract:

This article proposes a novel approach to data governance: data embassies. These extraterritorial entities would facilitate secure cross-border data transfers while addressing privacy, security, and national sovereignty concerns. By granting data embassies immunity from host country laws, the framework aims to enhance trust, reduce legal hurdles, and promote international cooperation. While challenges remain, such as concerns about accountability and varying privacy standards, data embassies offer a promising solution for the complexities of modern data governance. The framework could potentially reshape the future of the global digital economy by creating a more secure and efficient environment for data flows.

Mone et al. on Data Warfare and Creating a Global Legal and Regulatory Landscape: Challenges and Solutions

Varda Mone (Alliance U Law) et al. have posted “Data Warfare and Creating a Global Legal and Regulatory Landscape: Challenges and Solutions” (International Journal of Legal Information, 2024, doi:10.1017/jli.2024.22) on SSRN. Here is the abstract:

The world is witnessing an increase in cross-border data transfers and breaches orchestrated by State and non-State actors. Cross-border data transfers may lead to friction among States to localize or globalize data and to provide regulatory frameworks. “Data warfare” or information-war operations are often not covered under conventional rules; however, they are categorized as acts of espionage and subject to domestic regulations. As such, the operations are used to achieve a variety of objectives, including stealing sensitive information, spreading propaganda, and causing economic damage. Notable instances of the theft of sensitive information include the recent Bangladesh government website breach, exposing 50 million records, and the Unique Identification Authority of India (UIDAI) website hack. Regulating the “data war” under the existing principles of international law may be unsuccessful in creating robust international legal frameworks to address the associated challenges. These developments further accentuate the global divide between data-rich regions in the Global North, with strong data protection mechanisms (such as the GDPR and the California Privacy Rights Act), and regions in the Global South, where there is a lack of comprehensive data protection laws and regulatory regimes. This disparity underscores the urgent need for global cooperation for substantial international regulatory mechanisms. This article examines the complexities surrounding data warfare; it highlights the imperative need for establishing a robust global legal framework for data protection, delving into the concept of data war. It also acknowledges the growing influence of advanced technologies like data computing and mining and their ongoing threats to the fundamental rights of individuals associated with exposed personal data. 
The authors address the deficiencies in international legal provisions and advocate for a global regulatory approach to data protection as a critical means of safeguarding personal freedoms and countering the escalating threats in the digital age.

Giladi Shtub & Gal on Data Without Borders: International Effects of Data Flow Regulation

Tamar Giladi Shtub (U Haifa Law) and Michal Gal (U Haifa Law) have posted “Data Without Borders: International Effects of Data Flow Regulation” (Forthcoming, Vanderbilt Journal of Transnational Law (2025)) on SSRN. Here is the abstract:

Data has no inherent jurisdictional boundaries, and cross-border transfers of data and data-based information can significantly affect national and global welfare. Accordingly, local data flow regulation in one jurisdiction may create intended or unforeseen externalities in other jurisdictions. This article examines the complex challenges and implications of national regulation on data flows in an increasingly interconnected world. Given the pivotal role of data in our economies and societies, it is essential that governments recognize such externalities and take measures to ensure that an efficient balance is reached between the relevant considerations, including economic growth, privacy, and national security.

To illustrate such cross-border effects, we analyze two contrasting case studies: China’s data localization requirements and the European Union’s Data Act of 2023, which facilitates data sharing. Through these examples, we demonstrate how local regulation can create externalities that ripple across the global digital landscape. The analysis highlights the inadequacy of current international frameworks in addressing the complexities of data flows.

Our findings underscore the urgent need for increased international cooperation on data governance frameworks, as unilateral actions risk fragmenting the global digital landscape and limiting the welfare-enhancing potential of data synergies. We contend that countries, particularly the United States, are missing crucial opportunities by delaying engagement in shaping international data flow policies. By highlighting the complex interplay between local data flow policies and global effects, the article provides a foundation for governments to take a more proactive role in shaping welfare-enhancing frameworks for international data flows.

Park on Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

Sangchul Park (Seoul National U Law) has posted “Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework” (Washington International Law Journal, Volume 33, No. 2, pp. 216-269 (forthcoming)) on SSRN. Here is the abstract:

As debates on potential societal harm from artificial intelligence (AI) culminate in legislation and international norms, a global divide is emerging in both AI regulatory frameworks and international governance structures. In terms of local regulatory frameworks, the European Union (E.U.), Canada, and Brazil follow a “horizontal” or “lateral” approach that postulates the homogeneity of AI, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the United States (U.S.), the United Kingdom (U.K.), Israel, and Switzerland (and potentially China) have pursued a “context-specific” or “modular” approach, tailoring regulations to the specific use cases of AI systems. In terms of international governance structures, the United Nations is exploring a centralized AI governance framework to be overseen by a supranational body comparable to the International Atomic Energy Agency. However, the U.K. is spearheading, and the U.S. and several other countries have endorsed, a decentralized governance model, where AI safety institutes in each jurisdiction conduct evaluations of the safety of high-performance general-purpose models pursuant to interoperable standards. This paper argues for a context-specific approach alongside decentralized governance, to effectively address evolving risks in diverse mission-critical domains, while avoiding the social costs associated with one-size-fits-all approaches. However, to enhance the systematicity and interoperability of international norms and accelerate global harmonization, this paper proposes an alternative contextual, coherent, and commensurable (3C) framework.
To ensure contextuality, the framework (i) bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models; and (ii) categorizes these tasks based on their application and interaction with humans as follows: autonomous, discriminative (allocative, punitive, and cognitive), and generative AI. To ensure coherency, each category is assigned specific regulatory objectives replacing 2010s vintage “AI ethics.” To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.

Wasil et al. on Governing Dual-use Technologies: Case Studies of International Security Agreements and Lessons for AI Governance

Akash Wasil (Georgetown U) et al. have posted “Governing Dual-use Technologies: Case Studies of International Security Agreements and Lessons for AI Governance” on SSRN. Here is the abstract:

International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on those arrangements around dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.