Sean Norick Long (Georgetown U Law Center) has posted “The Mirror Test for AI agents: A path to regulate autonomous algorithmic collusion” on SSRN. Here is the abstract:
A US federal judge recently reasoned that a pricing algorithm learns "no different" from an attorney. This comparison is flawed in its immediate context, but it poses a greater danger: entrenching a mental model that blinds antitrust enforcement to the emergent threat of autonomous algorithmic collusion, where AI agents coordinate without human instruction. To prove collusion, courts cannot look directly into the human mind for intent, so they rely on an indirect proxy: evidence of observable communication between competitors. This paper argues the proxy is obsolete for AI agents, because their initial design and behavioral patterns are directly observable, offering a new basis to rule out independent action. In its place, I propose a two-part Mirror Test: an ex ante Design Test examines initial conditions for collusive bias, while an ex post Pattern Test detects coordinated pricing patterns inconsistent with independent action. This test can be implemented through agency guidance rather than new legislation, protecting the competitive process while giving companies predictable standards for compliance.
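To make the ex post Pattern Test concrete, here is a minimal sketch of the kind of statistical screen it suggests: comparing observed co-movement in two sellers' price changes against a null of independent repricing. The permutation test, correlation statistic, and 1% threshold below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500  # number of repricing periods observed

# Two hypothetical worlds: sellers whose price changes are independent,
# and sellers whose price changes load on a shared coordination signal.
independent = rng.normal(size=(2, T))
common = rng.normal(size=T)
coordinated = 0.8 * common + 0.2 * rng.normal(size=(2, T))

def pattern_test(price_changes, n_null=2000, alpha=0.01):
    """Permutation screen: is the observed co-movement of two sellers'
    price changes plausible under a null of independent action?"""
    a, b = price_changes
    observed = abs(np.corrcoef(a, b)[0, 1])
    # Shuffling one series destroys any coordination while preserving
    # each seller's marginal pricing behavior.
    null = np.array([
        abs(np.corrcoef(a, rng.permutation(b))[0, 1])
        for _ in range(n_null)
    ])
    p_value = (null >= observed).mean()
    return observed, p_value, p_value < alpha

for label, series in [("independent", independent), ("coordinated", coordinated)]:
    corr, p, flagged = pattern_test(series)
    print(f"{label:>11}: |corr|={corr:.3f}  p={p:.4f}  flagged={flagged}")
```

The appeal of a permutation null here is that it asks the same question the legal standard asks: how likely is this pricing pattern if each agent had in fact acted independently?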
