Deng & Hernandez on Algorithmic Pricing in Horizontal Merger Review

Ai Deng (Johns Hopkins University; Charles River Associates) and Cristián Hernández (NERA Economic Consulting) have posted “Algorithmic Pricing in Horizontal Merger Review: An Initial Assessment” on SSRN. Here is the abstract:

While the possibility of algorithmic price discrimination and algorithmic collusion has been extensively discussed in the global antitrust community in recent years, there has been much more limited discussion in the context of mergers. In this article, we aim to fill this gap by discussing some potential implications of algorithmic pricing for market definition, unilateral effects, coordinated effects, and remedies. Specifically, we discuss the following topics and related questions:

– Market definition. How to deal with algorithm-enhanced market/customer segmentation and how to identify relevant antitrust markets when prices are set by a “black-box” algorithm.

– Unilateral effects. How to use merging parties’ pricing algorithms to conduct merger simulations and why there are important antitrust issues related to integrating merging parties’ pricing algorithms and their data.

– Coordinated effects. What some of the recent scholarship tells us about potential coordinated effects in a merger context.

– Remedies. Why data compatibility and collusion risk are important considerations when “divesting” merging parties’ pricing algorithms.

Almada & Dymitruk on Data Protection and Judicial Automation

Marco Almada (EUI) and Maria Dymitruk (University of Wroclaw) have posted “Data Protection and Judicial Automation” (Eleni Kosta and Ronald E. Leenes (eds), Research Handbook on EU Data Protection (Edward Elgar)) on SSRN. Here is the abstract:

The words “judicial automation” evoke a broad range of images, from time-saving tools to decision-aiding tools and even quixotic ideas of robot judges. As the development of artificial intelligence technologies expands the range of possible automation, it also raises questions about the extent to which automation is admissible in judicial contexts and the safeguards required for its safe use. This chapter argues that these applications raise specific challenges for data protection law, as the use of personal data for judicial automation requires the adoption of safeguards against risks to the right to a fair trial. The chapter discusses current and proposed uses of judicial automation, identifying how they use personal data in their operation and the issues that arise from this use, such as algorithmic biases and system opacity. By connecting these issues to the safeguards required for automated decision-making and data protection by design, the chapter shows how data protection law may contribute to a fair trial in contexts of judicial automation and highlights open research questions at the interface between procedural rights and data protection.