Cheng & Nowag on Algorithmic Predation and Exclusion

Thomas K. Cheng (The University of Hong Kong – Faculty of Law) and Julian Nowag (Lund University – Faculty of Law; Oxford Centre for Competition Law and Policy) have posted “Algorithmic Predation and Exclusion” (LundLawCompWP 1/2022) on SSRN. Here is the abstract:

The debate about the implications of algorithms for competition law enforcement has so far focused on multi-firm conduct in general and collusion in particular. The implications of algorithms for abuse of dominance have been largely neglected. This article seeks to fill the gap in the existing literature by exploring how increasingly precise individualized targeting by algorithms can facilitate a range of abuses of dominance, including predatory pricing, rebates, and tying and bundling. The ability to target disparate groups of consumers with different prices helps a predator to minimize the losses it sustains during predation and to maximize its ability to recoup those losses. This changes how recoupment should be understood and ascertained, and may even undermine the rationale for requiring proof of a likelihood of recoupment under US antitrust law. This increased ability to price discriminate also enhances a dominant firm’s ability to offer exclusionary rebates. Finally, algorithms allow dominant firms to target their tying and bundling practices at loyal customers, thereby avoiding the risk of alienating marginal customers with an unwelcome tie. This renders tying and bundling more feasible and effective for dominant firms.

Edwards on Transparency and Accountability of Algorithmic Regulation

Ernesto Edwards (National University of Rosario) has posted “How to Stop Minority Report from Becoming a Reality: Transparency and Accountability of Algorithmic Regulation” on SSRN. Here is the abstract:

In this essay I aim to illuminate the importance of transparency and accountability in algorithmic regulation, a highly topical legal issue with important consequences, given the rapid recent development of Machine Learning algorithms. Building on prior studies and current literature, such as the work of Citron, Crootof, Pasquale, and Zarsky, I intend to develop a proposal that bridges that knowledge with the work of Daniel Kahneman in order to deepen the legal question at hand with the notions of blinders and biases. I will argue that, if left unattended or attended to improperly, Machine Learning algorithms will produce more harm than good because of these blinders and biases. After linking the aforementioned ideas, I will focus on the transparency and accountability of algorithmic regulation and its ties to technological due process. The findings will illustrate the present need for a human element, best exemplified by the concept of cyborg justice, and the public policy challenges it entails. Finally, I will propose what could be done in this area in the future.