Chen on The Algorithmic Curtain: Geopolitical Polarisation and the Fragmentation of Global AI Governance

Zihan Chen (Tsinghua U) has posted “The Algorithmic Curtain: Geopolitical Polarisation and the Fragmentation of Global AI Governance” on SSRN. Here is the abstract:

This article investigates the new ethical, legal, and geopolitical challenges that the rapid proliferation of artificial intelligence presents to international law. The central argument is that the current fragmentation of AI governance is not an incidental outcome, but a deliberate manifestation of competing visions for digital sovereignty. The analysis examines several core dimensions of this phenomenon, including the rise of geopolitical “walled gardens” driven by regional restrictions, creating an “algorithmic curtain.” It also analyzes the emergence of three distinct and competing governance models led by the European Union, the United States, and China, each rooted in different legal philosophies and strategic priorities. Furthermore, the article explores the profound sociological consequences of this divergence, such as the exacerbation of the global “North-South” AI divide and the erosion of a universal digital commons. Drawing a historical analogy to the commercialization of outer space, the analysis shows how differing approaches to data governance, national security, and innovation are erecting this algorithmic curtain, challenging the universality of human rights and hindering global cooperation. The article concludes by proposing a polycentric governance architecture focused on interoperability and harmonization of baseline standards to mitigate the most severe consequences of this geopolitical division.

Asay on Artificial Creators

Clark D. Asay (Brigham Young U J. Reuben Clark Law) has posted “Artificial Creators” (2 George Washington Journal of Law and Technology (forthcoming 2026)) on SSRN. Here is the abstract:

Artificial intelligence systems cannot be inventors or authors under current U.S. law. On that point, the U.S. Patent and Trademark Office and the U.S. Copyright Office agree. Yet beyond that, the two regimes sharply diverge. The USPTO has adopted a more flexible approach to AI-assisted invention, permitting extensive AI involvement so long as a human being can be said to have conceived of the claimed invention. The Copyright Office, by contrast, has taken a far more restrictive stance, effectively denying registration to works whose expressive elements are generated by AI—even where humans engage in detailed, iterative prompting and exercise some amount of creative direction.

This Essay explores the reasons for that divergence and questions whether it is justified. While copyright’s idea–expression dichotomy and independent creation requirement may appear to provide some justification for copyright law’s more restrictive approach, those doctrines do not compel the Copyright Office’s denial of copyright registration in AI-assisted works. Indeed, copyright law has long accommodated technologically mediated creativity—from photography to film—by focusing on human control and creative contribution rather than the mechanics of execution.

Drawing on patent law’s conception requirement, as well as copyright doctrines governing joint authorship and derivative works, this Essay argues that copyrightability standards should move more in patent law’s direction. Where a human meaningfully conceives of and directs the realization of a work—even if AI performs substantial expressive tasks—copyright law should recognize authorship at least to the extent of the human’s creative contribution. Failing to do so risks undermining copyright’s incentive structure and distorting the future development of creative industries in an era where AI assistance is increasingly ubiquitous.

Zhang et al. on Balancing Data-Driven Competition and Privacy Protection: A Duopoly Analysis of AI-Powered Digital Assistants

Xiong Zhang (Beijing Jiaotong U) et al. have posted “Balancing Data-Driven Competition and Privacy Protection: A Duopoly Analysis of AI-Powered Digital Assistants” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is rapidly empowering smart products, enhancing both work efficiency and quality of life. However, these improvements rely heavily on the continuous collection and processing of user data, raising significant concerns about privacy. In response, many countries have enacted regulations to protect personal data and consumer privacy. This study examines how privacy protection influences market competition in AI-powered digital assistant markets. We develop a stylized analytical model of a duopoly where firms differ in their ability to collect and monetize consumer data. The results reveal that stronger AI capabilities amplify the profitability of data-intensive firms, while data-light firms can strategically strengthen privacy protection to remain competitive, thereby generating mutual profit gains and enhancing consumer surplus as well as overall social welfare. These findings contribute to the theoretical understanding of data-driven competition and digital privacy management, while offering actionable insights for firms seeking to balance innovation, consumer trust, and regulatory compliance in smart product markets.
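
The abstract does not reproduce the formal model, but a minimal sketch of the kind of duopoly setup it describes (all notation below is assumed for illustration, not drawn from the paper) might give firm i the profit function

    \pi_i = (p_i - c)\, D_i(p_1, p_2, x_1, x_2) + \theta\, \alpha_i\, D_i(\cdot) - \phi(x_i),

where p_i is firm i's price, D_i its demand, \alpha_i its ability to collect and monetize user data, \theta the level of AI capability scaling the per-user value of that data, and \phi(x_i) the cost of offering privacy-protection level x_i, with demand increasing in a firm's own x_i as privacy-sensitive consumers switch toward it. In a setup like this, the abstract's headline result that stronger AI amplifies data-intensive firms' profitability corresponds to the cross-partial \partial^2 \pi_i / \partial\theta\, \partial\alpha_i > 0, while the data-light firm's counter-strategy is to raise x_i.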

Shucha on Getting Started with GenAI in Legal Practice

Bonnie J. Shucha (U Wisconsin Law) has posted “Getting Started with GenAI in Legal Practice” (97 Wis. Law. 29 (2024)) on SSRN. Here is the abstract:

This article offers advice for approaching generative artificial intelligence (GenAI) in legal practice, examines types of GenAI tools and key policy considerations, and provides a step-by-step approach to building competence.

Raymond on Our AI, Ourselves: Illuminating the Human Fears Animating Early Regulatory Responses to the Use of Generative AI in the Practice of Law

Margaret Raymond (U Wisconsin Law) has posted “Our AI, Ourselves: Illuminating the Human Fears Animating Early Regulatory Responses to the Use of Generative AI in the Practice of Law” (15 St. Mary’s Journal on Legal Malpractice & Ethics 221 (2025)) on SSRN. Here is the abstract:

Generative artificial intelligence is changing the way lawyers work, and with those changes have come questions and concerns about how it should be regulated. Those questions and concerns, particularly on the individual level, are driven by fears about the implications of the use of generative AI. This Article identifies and explores the fears that drive these regulatory responses: fear of exposing judicial fallibility, anxiety over AI replacing human lawyers, and concerns about missing out on AI’s potential benefits. Ultimately, effective regulation of the use of generative AI in legal practice needs to be attentive to the fears and hopes surrounding generative AI in law. Only by understanding the very human anxieties regarding generative AI can the profession craft effective regulatory models that address the integration of AI in legal practice.

Mei et al. on The Illusory Normativity of Rights-Based AI Regulation

Yiyang Mei (Emory U) and Matthew Sag (Emory U Law) have posted “The Illusory Normativity of Rights-Based AI Regulation” on SSRN. Here is the abstract:

Whether and how to regulate AI is now a central question of governance. Across academic, policy, and international legal circles, the European Union is widely treated as the normative leader in this space. Its regulatory framework, anchored in the General Data Protection Regulation, the Digital Services and Markets Acts, and the AI Act, is often portrayed as a principled model grounded in fundamental rights. This Article challenges that assumption. We argue that the rights-based narrative surrounding EU AI regulation mischaracterizes the logic of its institutional design. While rights language pervades EU legal instruments, its function is managerial, not foundational. These rights operate as tools of administrative ordering, used to mitigate technological disruption, manage geopolitical risk, and preserve systemic balance, rather than as expressions of moral autonomy or democratic consent. Drawing on comparative institutional analysis, we situate EU AI governance within a longer tradition of legal ordering shaped by the need to coordinate power across fragmented jurisdictions. We contrast this approach with the American model, which reflects a different regulatory logic rooted in decentralized authority, sectoral pluralism, and a constitutional preference for innovation and individual autonomy. Through case studies in five key domains—data privacy, cybersecurity, healthcare, labor, and disinformation—we show that EU regulation is not meaningfully rights-driven, as is often claimed. It is instead structured around the containment of institutional risk. Our aim is not to endorse the American model but to reject the presumption that the EU approach reflects a normative ideal that other nations should uncritically adopt. The EU model is best understood as a historically contingent response to its own political conditions, not a template for others to blindly follow.

Lyons on The Litigation Solution: Why Courts, Not Code Mandates, Should Address AI Discrimination

Daniel Lyons (Boston College Law) has posted “The Litigation Solution: Why Courts, Not Code Mandates, Should Address AI Discrimination” on SSRN. Here is the abstract:

As artificial intelligence systems increasingly influence decisionmaking in high-stakes sectors, policymakers have focused on regulating model design to combat algorithmic bias. Drawing on examples from the European Union’s AI Act and recent state legislation, this Article critiques the emerging “fairness by design” paradigm. It argues that design mandates rest on a flawed premise: that bias can be objectively defined and mitigated ex ante without compromising competing values such as accuracy, privacy, or innovation. In reality, efforts to engineer fairness through prescriptive regulation risk distorting markets, entrenching incumbents, and stifling technological advancement. Moreover, the opaque, evolving nature of AI systems—especially generative models—makes it difficult to anticipate or eliminate future biases through design alone, often creating tradeoffs that regulators are ill-equipped to manage.

Rather than regulating AI inputs, the Article advocates for a litigation-first approach that focuses on AI outputs and leverages existing antidiscrimination law to address harms as they arise. By applying traditional disparate treatment and disparate impact frameworks to AI-assisted decisions, courts can assess when biased outcomes rise to the level of unlawful discrimination—without prematurely constraining innovation or imposing rigid mandates. This model mirrors America’s historical preference for permissive innovation, allowing technology to evolve while holding bad actors accountable under general principles of law. The result is a more flexible, targeted regulatory regime that fosters AI development while safeguarding civil rights.

Klonowska et al. on Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI

Klaudia Klonowska (T.M.C. Asser Institute) and Taylor Kate Woodcock (T.M.C. Asser Institute) have posted “Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI” (Forthcoming in Boutin B., Woodcock T. K. & Soltanzadeh S. (eds.), Decision at the Edge: Interdisciplinary Dilemmas in Military Artificial Intelligence, Asser Press (2025)) on SSRN. Here is the abstract:

The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision making in the conduct of war? As AI becomes increasingly embedded in targeting processes, legal and ethical debates often ask who performs better: humans or machines? In this Chapter, we unpack and critique the prevalence of comparisons between humans and AI systems, including in analyses of the fulfilment of legal obligations under International Humanitarian Law (IHL). We challenge this binary framing by highlighting misleading assumptions that neglect how the use of AI results in complex human-machine interactions that transform targeting practices. We unpack what is meant by ‘better performance’, demonstrating how prevailing metrics for speed and accuracy can create misleading expectations around the use of AI given the realities of warfare. We conclude that holistic but granular attention must be paid to the landscape of human-machine interactions to understand how the use of AI impacts compliance with IHL targeting obligations.

Feher et al. on Is AI Trained on Public Money? Evidence from US Data Centers

Adam Feher (U Lausanne) et al. have posted “Is AI Trained on Public Money? Evidence from US Data Centers” on SSRN. Here is the abstract:

Rapid data center growth has raised concerns about rising energy demand and its effects. Leveraging a novel dataset of U.S. data center energy loads, utility prices, and establishment-level outcomes, we quantify local spillover effects on electricity prices, firm performance, and emissions. Using an instrumental-variables continuous difference-in-differences design that exploits exogenous variation in data center location attractiveness, we find no local spillovers over 2010–2024. A regional model calibrated to the empirical null suggests that shocks larger than those observed through 2024 could still result in noticeable increases in household utility bills if not offset by regulation or external supply.
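
For readers who want a concrete picture of the estimator, here is a rough sketch of an instrumented continuous difference-in-differences, implemented as two-way fixed-effects 2SLS in Python with the linearmodels package. This is an illustration only, not the authors' code or specification; the file name, variable names, and panel structure are all hypothetical.

    # Sketch: continuous-treatment DiD with an instrument, as TWFE 2SLS.
    # Hypothetical panel of regions x years; not the paper's actual data.
    import pandas as pd
    from linearmodels.iv import IV2SLS

    df = pd.read_csv("utility_panel.csv")  # columns (assumed): region, year,
                                           # log_price, dc_load_mw, attractiveness

    # Absorb region and year fixed effects by double-demeaning each variable
    for col in ["log_price", "dc_load_mw", "attractiveness"]:
        df[col + "_dm"] = (
            df[col]
            - df.groupby("region")[col].transform("mean")
            - df.groupby("year")[col].transform("mean")
            + df[col].mean()
        )

    # 2SLS: instrument the continuous treatment (data-center load) with the
    # exogenous location-attractiveness shifter
    res = IV2SLS(
        dependent=df["log_price_dm"],
        exog=None,
        endog=df["dc_load_mw_dm"],
        instruments=df["attractiveness_dm"],
    ).fit(cov_type="clustered", clusters=df["region"])
    print(res.summary)

In a framework like this, the paper's empirical null would show up as a coefficient on the instrumented load variable that is statistically indistinguishable from zero.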

Fan et al. on Novel Corporate Governance Structures

Jennifer S. Fan (Loyola Law Los Angeles) and Xuan-Thao Nguyen (U Washington Law) have posted “Novel Corporate Governance Structures” (38 Harvard Journal of Law & Technology, No. 4 (2025)) on SSRN. Here is the abstract:

Artificial Intelligence (“AI”) startups have taken center stage, disrupting conventional industries at an unprecedented pace with their groundbreaking innovations. Hailed by many as the most significant technological advancement of our era, AI has had a profound societal impact that has garnered heightened public and governmental scrutiny. The spotlight has recently fallen on OpenAI, the creator of ChatGPT, which weathered a tumultuous period marked by the ouster and subsequent rehiring of CEO Sam Altman, a board reconfiguration, and Altman’s later return to the board. Concerns over AI safety were offered as the rationale for OpenAI’s tandem nonprofit/for-profit corporate governance structure, which led to board friction, a management coup, and the defection of its superalignment team. Similarly, concerns over AI safety also underscore the creation of the corporate structures at Anthropic and xAI.

This Article explores the innovative corporate governance models that have emerged from leading AI startups like OpenAI, Anthropic, and xAI, assessing their long-term viability as these companies race against one another to build AI foundation models. Ultimately, it proposes a path forward for improved governance in AI startups by advocating for an amendment to corporate law requiring a board-level AI Safety Committee at such companies.