Long on The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion

Sean Norick Long (Georgetown U Law Center) has posted “The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion” on SSRN. Here is the abstract:

A US federal judge recently reasoned that a pricing algorithm learns “no different” from an attorney. This comparison is flawed in its immediate context, but it poses a greater danger: entrenching a mental model that blinds antitrust enforcement to the emergent threat of autonomous algorithmic collusion, where AI agents coordinate without human instruction. To prove collusion, courts cannot look directly into the human mind for intent, so they rely on an indirect proxy: evidence of observable communication between competitors. This paper argues the proxy is obsolete for AI agents, because their initial design and behavioral patterns are directly observable, offering a new basis to rule out independent action. In its place, I propose a two-part Mirror Test: an ex ante Design Test examines initial conditions for collusive bias, while an ex post Pattern Test detects coordinated pricing patterns inconsistent with independent action. This test can be implemented through agency guidance rather than new legislation, protecting the competitive process while giving companies predictable standards for compliance.

Massarotto on Algorithmic Remedies for Google’s Data Monopoly

Giovanna Massarotto (U Pennsylvania) has posted “Algorithmic Remedies for Google’s Data Monopoly” on SSRN. Here is the abstract:

Algorithms and data are the building blocks of the digital economy. From Google’s search engine to Meta’s Instagram and OpenAI’s ChatGPT, all “Big Tech” companies rely on algorithms to collect and process vast amounts of data that power their services and AI models. While algorithms themselves can be efficient and impartial tools, Google’s strategic use of them, combined with exclusionary practices, has landed the company in federal court for monopolizing critical digital markets. On September 2, 2025, a judge required Google to grant rivals access to its data to address the company’s monopolization of critical digital markets that rely on data. Another judge is expected to impose remedies on Google in a separate antitrust proceeding, remedies that could encompass data-sharing measures, including data facilities. This remedy would de facto regulate data-driven markets and influence the future of the emerging AI industry.

However, such data-sharing obligations in antitrust law create a classic resource allocation problem: who gets access, and how can courts ensure that access is fair and non-discriminatory? This article demonstrates that this legal challenge mirrors a problem computer science solved decades ago: ensuring multiple parties can use a shared resource without conflict. Drawing on those algorithmic solutions, it then proposes a framework with systems that operate like a digital ‘take-a-number’ machine or a formal voting process to manage data distribution efficiently and fairly.
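To make the ‘take-a-number’ analogy concrete, here is a minimal sketch (this editor's own illustration, not from the article; the class and method names are invented) of a ticket-based allocator that grants access in strict arrival order, the property that makes such schemes non-discriminatory:

```python
import itertools
from collections import deque

class TicketDispenser:
    """Illustrative 'take-a-number' allocator: requesters are served
    strictly in the order they arrive (FIFO), so no requester can be
    favored or starved."""

    def __init__(self):
        self._next_ticket = itertools.count()  # monotonically increasing numbers
        self._queue = deque()

    def take_number(self, requester: str) -> int:
        """Hand out the next ticket and enqueue the requester."""
        ticket = next(self._next_ticket)
        self._queue.append((ticket, requester))
        return ticket

    def serve_next(self):
        """Grant access to the lowest outstanding ticket, or None if empty."""
        if not self._queue:
            return None
        return self._queue.popleft()
```

The article's other proposed mechanism, a formal voting process, could be sketched analogously, with requests granted by approval vote rather than by arrival order.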

This article makes three important contributions to the existing scholarship in this field. First, it explains how data-sharing remedies can be designed and implemented, whether to address specific anticompetitive conduct or as part of broader regulatory frameworks. Second, it develops a comprehensive framework with three algorithmic approaches for resource allocation, translating computer science solutions into legal mechanisms. Third, this framework is applied to Google’s ongoing monopolization cases, guiding data-sharing remedies and promoting competition in AI and other data-driven markets.

Dornis & Lucchi on Generative AI and the Scope of EU Copyright Law: A Doctrinal Analysis in Light of C-250/25

Tim W. Dornis (Leibniz U Hannover) and Nicola Lucchi (Universitat Pompeu Fabra Law) have posted “Generative AI and the Scope of EU Copyright Law: A Doctrinal Analysis in Light of C-250/25” (IIC (International Review of Intellectual Property and Competition Law), vol. 56, issue 10, forthcoming November 2025) on SSRN. Here is the abstract:

This article offers a doctrinal analysis of the copyright implications raised by Like Company v. Google Ireland (C-250/25), the first case to bring generative AI before the CJEU. It examines whether the training and output of systems like Gemini infringe exclusive rights under EU copyright law. We argue that AI model training may involve acts of reproduction under Article 2 of the InfoSoc Directive, while the dissemination of AI-generated outputs, especially through public interfaces, may trigger the right of communication to the public under Article 3. Particular concerns arise when protected content is recognisably reproduced or when AI outputs serve as functional substitutes for original works, thereby affecting the normal exploitation of those works. While not a formal infringement criterion, such functional substitution is relevant in assessing the application of exceptions and compliance with the three-step test. The paper also challenges the applicability of the text and data mining exception to generative uses, highlighting its incompatibility with the limitations imposed by the three-step test. Ultimately, the analysis supports a technologically neutral, rights-based interpretation that safeguards the economic viability of creative production in the algorithmic age.

Deng on As AI Regulations and Price-Fixing Allegations Pick Up, New Research on Algorithmic Collusion Offers Insights for Executives and Attorneys

Ai Deng (Berkeley Research Group) has posted “As AI Regulations and Price-Fixing Allegations Pick Up, New Research on Algorithmic Collusion Offers Insights for Executives and Attorneys” (BRG ThinkSet, Spring, 2025) on SSRN. Here is the abstract:

This is a two-part series on the topic of algorithmic collusion. In Part One, I delve into how algorithms influence pricing, the feasibility of algorithmic collusion, and the impact of algorithmic design on whether a pricing algorithm sets supracompetitive prices. In Part Two, I explore the closely related subject of third-party pricing algorithms, which have attracted significant attention. Throughout these articles, I draw lessons for executives and attorneys from the latest academic research.

Hine et al. on The Impact of Modern Big Tech Antitrust on Digital Sovereignty

Emmie Hine (Yale U Digital Ethics Center) et al. have posted “The Impact of Modern Big Tech Antitrust on Digital Sovereignty” on SSRN. Here is the abstract:

This article examines the history of antitrust cases against Big Tech companies in the United States. It highlights a shift in the attitudes of enforcers away from the Chicago and post-Chicago schools of antitrust thought, which are informed by economic analysis, towards New Brandeisian thinking, which emphasizes structural concerns and a broader conception of consumer welfare; the latter, however, has yet to catch on in courtrooms. By contrasting the US’s antitrust strategy with those of the European Union and China, we argue that antitrust enforcement may hinder economic and technological competitiveness in the short term, but may have long-term benefits. Regarding global digital sovereignty, increased US enforcement would likely not impact the country’s global competitiveness, as the US still presents a more favorable regulatory environment than the EU, and targeted economic measures prevent Chinese companies from being competitive in the US. New legislation may help address the complexities of modern digital markets so that the US can maintain its competitive edge in technology while enhancing consumer welfare.

Chaiehloudj on Musk v. OpenAI: Antitrust and the Boundaries of Strategic Litigation in the AI Sector

Walid Chaiehloudj (U Côte d’Azur) has posted “Musk v. OpenAI: Antitrust and the Boundaries of Strategic Litigation in the AI Sector” (European Competition and Regulatory Law Review (CoRe), forthcoming) on SSRN. Here is the abstract:

This paper analyzes the recent decision in Musk v. Altman (N.D. Cal., March 2025), in which the United States District Court denied a preliminary injunction sought by Elon Musk and his company xAI against OpenAI and Microsoft. The plaintiffs alleged that OpenAI and Microsoft had entered into an unlawful group boycott by pressuring investors not to fund competing AI companies, in violation of Section 1 of the Sherman Act. The court rejected the claim on both procedural and substantive grounds, notably finding that Musk lacked standing, and that the evidence presented, consisting mainly of media articles, was insufficient to establish a plausible antitrust violation or irreparable harm.

Beyond its procedural lessons, Musk v. Altman illustrates the intensifying global battle for dominance in AI markets and the legal complexities accompanying it. The court’s decision ultimately favors a model of competition based on innovation rather than speculative or strategic litigation.

Casey on Generative AI’s Duty to Deal Dilemma

Alex Casey (Harvard Law) has posted “Generative AI’s Duty to Deal Dilemma” on SSRN. Here is the abstract:

Competition in markets surrounding the development of Generative AI products is currently high across various vertically interconnected markets. However, some scholars project that these markets might be trending towards worrisome concentration and bottlenecks which harm valuable innovation investment incentives. This paper contends that the Supreme Court’s unilateral duty to deal doctrine is perhaps the best answer to any anticompetitive concerns that may arise, provided it is properly construed and readily enforced. The paper addresses common criticisms of duty to deal doctrine, both in general and in generative AI markets specifically. It argues that investment incentives and ex post efficiencies can be preserved, all without compromising on the goal of disruptive, dynamic innovation at the frontier of AI technological research. Moreover, administrability concerns with the doctrine are perhaps overestimated and tolerable, particularly given the lack of promising alternatives. Finally, the paper demonstrates how the antitrust refusal to deal doctrine can and should be used to resolve hypothetical, but realistic, market strategies which could be adopted by current AI market frontrunners in the near future. The paper then proposes specific applications of the doctrine in the projected long-term scenario where a large portion of these vertically related AI markets become encapsulated within ecosystems controlled by one or two firms. In that grave scenario, the proposals highlight how application of the duty to deal doctrine can offer a means to work towards an open-access, open-competition, and open-source landscape which better promotes consumer welfare through continued innovation.

Kim et al. on AI Pricing Behavior Under Regulatory Variation

Jeong Yeol Kim (KDI Public Policy and Management) et al. have posted “AI Pricing Behavior Under Regulatory Variation” on SSRN. Here is the abstract:

This study experimentally examines how generative AI agents adjust pricing under four regulatory environments: no regulation; fixed detection (constant penalty probability above a threshold); linear detection (penalty probability increases with price); and periodic detection (monitoring at fixed intervals). Without regulation, AI agents choose near-monopoly prices. All regulations reduce prices, but do not induce competitive outcomes. Fixed and linear detection produce lower and more stable supra-competitive prices, while periodic detection leads to strategic evasion and higher prices. These findings suggest that AI agents adapt to enforcement structures, maintaining supra-competitive pricing even under regimes designed to deter monopolistic outcomes.
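The incentive logic behind these findings can be sketched in a toy model (this editor's own illustration, not the study's experimental setup; the demand curve, penalty size, detection probabilities, and price threshold are all invented parameters): a price setter facing linear demand maximizes expected profit net of the expected penalty each regime imposes, and under periodic monitoring it can evade by complying only in the periods it knows are checked.

```python
# Toy model: linear demand, per-period profit p * (1 - p), so the
# unregulated monopoly price is 0.5. FINE, the detection probabilities,
# and the 0.35 price threshold are hypothetical illustration values.
FINE = 0.3

def expected_profit(p, regime):
    profit = p * (1.0 - p)
    if regime == "fixed":      # constant detection probability above a threshold
        profit -= 0.4 * FINE if p > 0.35 else 0.0
    elif regime == "linear":   # detection probability rises with price
        profit -= p * FINE
    return profit              # regime "none": no penalty

def best_price(regime, grid_size=1000):
    """Profit-maximizing price on a discrete price grid."""
    grid = [i / grid_size for i in range(grid_size + 1)]
    return max(grid, key=lambda p: expected_profit(p, regime))

def best_price_periodic(monitored):
    """Strategic evasion: comply only when this period is monitored."""
    return best_price("fixed" if monitored else "none")
```

In this sketch the unregulated agent prices at the monopoly level (0.5); fixed and linear detection push the optimum down to a still supra-competitive 0.35; and under periodic monitoring with one checked period in four, the cycle-average price (0.35 + 3 × 0.5) / 4 ≈ 0.46 sits above the constant-detection price, echoing the evasion finding.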

Bietti on Data is Infrastructure

Elettra Bietti (Northeastern U Law) has posted “Data is Infrastructure” (Theoretical Inquiries in Law, forthcoming 2024) on SSRN. Here is the abstract:

Data is a contextual phenomenon. It reflects the social and material context from which it is derived and in which it is generated. It embeds the purposes, assumptions and rationales of those who produce, collect, use, share and monetize it. In the AI and digital platform economy, data’s role is primarily infrastructural. Its core uses are internal to companies. Data only rarely serves as a medium of exchange or commodity, and more frequently serves to profile users, train models, produce predictions, bundle and extend product capabilities which in turn are sold to advertisers and other customers. Insofar as they focus on the former, many technical, economic and legal attempts at defining data have inspired reductive policy efforts that include data protection, data ownership and limited data sharing remedies. This paper argues that understanding data as part of infrastructural pipelines can have significant conceptual and policy implications, and can redirect the way privacy, property and antitrust experts understand and govern data. This argument becomes more salient as market actors and regulators grapple with the catalyzing effects of neural networks and generative AI models on digital markets. In antitrust and competition law especially, regulators are consciously adopting a view of data as an infrastructural input into AI and other digital markets. Treating data as an input over which certain firms have competitive advantages can have significant implications for nascent AI markets, and yet the views in antitrust remain too narrow. Understanding data infrastructurally means viewing it not only as a critical input but also as inseparable from other material digital resources such as protocols, algorithms, semiconductors, and platform interfaces; as having important collective functions; and as calling for public interest regulation. 
Understanding data as infrastructure can move us past limited legal efforts and remedial solutions such as data separations, data sharing, and individual controls, and help reorient how data is produced, stored and managed toward public uses.

Lianos on Synthetic Futures and Competition Law: Towards the Emergence of Precautionary Principle-Minded Approaches

Ioannis Lianos (U College London Laws) has posted “Synthetic Futures and Competition Law: Towards the Emergence of Precautionary Principle-Minded Approaches” (Theoretical Inquiries in Law, forthcoming) on SSRN. Here is the abstract:

The study presents an in-depth analysis of the challenges faced by competition law enforcement in light of the rapid advancements in AI, quantum computing, and synthetic biology. It delves into the various approaches that competition law institutions, such as competition agencies and courts, can adopt to address the uncertainties surrounding the competitive impact of corporate strategies and conduct in developing and applying these new General Purpose Technologies. The study focuses on the four key features of this “coming wave”: asymmetry, hyper-evolution, omni-use, and autonomy, all interconnected with the rise of complex systems that contribute to uncertainty. It explores the limitations, in such situations, of the Ordinary Risk Management (ORM) approach typically followed in competition law, which is based on the expected utility framework. The study advocates for the application of the precautionary principle as a more accurate description of the approach taken by competition authorities in this context and a more normatively adequate option for regulating threats of harm in complex systems and integrating responsible innovation concerns. Moreover, the study extensively examines how the precautionary principle can be seamlessly integrated into the design of competition law institutions and the substance of competition law, discussing the various containment tools used by competition authorities to address uncertainty.