Colangelo on European Proposal for a Data Act – A First Assessment

Giuseppe Colangelo (University of Basilicata; Stanford Law School; LUISS) has posted “European Proposal for a Data Act – A First Assessment” (CERRE Evaluation Paper 2022) on SSRN. Here is the abstract:

On 23 February 2022, the European Commission unveiled its proposal for a Data Act (DA). As declared in the Impact Assessment, the DA complements two other major instruments shaping the European single market for data, namely the Data Governance Act and the Digital Markets Act (DMA), and is a key pillar of the European Strategy for Data, in which the Commission announced the establishment of EU-wide common, interoperable data spaces in strategic sectors to overcome legal and technical barriers to data sharing.

To contribute to the current policy debate, the paper provides a first assessment of the tabled DA and suggests possible improvements for the ongoing legislative negotiations.

Li on Affinity-Based Algorithmic Pricing: A Dilemma for EU Data Protection Law

Zihao Li (University of Glasgow) has posted “Affinity-Based Algorithmic Pricing: A Dilemma for EU Data Protection Law” (Computer Law & Security Review, Volume 46, 2022) on SSRN. Here is the abstract:

The emergence of big data and machine learning has allowed sellers and online platforms to tailor pricing for customers in real time, but as many legal scholars have pointed out, personalised pricing poses a threat to the fundamental values of privacy and non-discrimination, raising legal and ethical concerns. However, most of those studies neglect affinity-based algorithmic pricing, which may bypass the General Data Protection Regulation (GDPR). This paper evaluates current data protection law in Europe against online algorithmic pricing. The first contribution of the paper is to introduce and clarify the term “online algorithmic pricing” in the context of data protection legal studies, as well as a new taxonomy of online algorithmic pricing based on the types of data processed. In doing so, the paper finds that affinity data is hard to classify as personal data. Therefore, affinity-based algorithmic pricing is highly likely to circumvent the GDPR. The second contribution of the paper is to point out that even where some types of online algorithmic pricing are covered by the GDPR, the data rights the GDPR provides struggle to offer substantial help. The key finding of this paper is that the GDPR fails to apply to affinity-based algorithmic pricing, yet the latter can still lead to privacy invasion. Therefore, four potential resolutions are proposed, relating to group privacy, the remit of data protection law, ex-ante measures in data protection, and a more comprehensive regulatory approach.

Schrepel & Groza on The Adoption of Computational Antitrust by Agencies: 2021 Report

Thibault Schrepel (University Paris 1 Panthéon-Sorbonne; VU University Amsterdam; Stanford University’s Codex Center; Sciences Po) and Teodora Groza (Sciences Po Law School) have posted “The Adoption of Computational Antitrust by Agencies: 2021 Report” (2 Stanford Computational Antitrust, 78 (2022)) on SSRN. Here is the abstract:

In the first quarter of 2022, the Stanford Computational Antitrust project team invited the partnering antitrust agencies to share their advances in implementing computational tools. Here are the results of the survey.

Yoo & Keung on The Political Dynamics of Legislative Reform: Potential Drivers of the Next Communications Statute

Christopher S. Yoo (University of Pennsylvania) and Tiffany Keung (University of Pennsylvania Carey Law School) have posted “The Political Dynamics of Legislative Reform: Potential Drivers of the Next Communications Statute” (Berkeley Technology Law Journal, Forthcoming) on SSRN. Here is the abstract:

Although most studies of major communications reform legislation focus on the merits of their substantive provisions, analyzing the political dynamics that led to the enactment of such legislation can yield important insights. An examination of the tradeoffs that led the major industry segments to support the Telecommunications Act of 1996 provides a useful illustration of the political bargain that it embodies. Application of a similar analysis to the current context identifies seven components that could form the basis for the next communications statute: universal service, pole attachments, privacy, intermediary immunity, net neutrality, spectrum policy, and antitrust reform. Determining how these components might fit together requires an assessment of areas in which industry interests overlap and diverge as well as aspects of the political environment that can make passage of reform legislation more difficult.

Recommended.

Nachbar on Qualitative Market Definition

Thomas Nachbar (University of Virginia School of Law) has posted “Qualitative Market Definition” (Virginia Law Review, Vol. 109, 2023) on SSRN. Here is the abstract:

Modern antitrust law has come under intense criticism in recent years, with a bipartisan chorus of complaints about the power of technology and internet platforms such as Google, Amazon, Facebook, and Apple. A fundamental issue in these debates is how to define the “market” for the purposes of antitrust law. But market definition is highly contentious. The Supreme Court case that launched modern market definition has lent its name to an economic blunder, the “Cellophane fallacy,” and the Justices in 2018’s Ohio v. American Express disagreed with each other so strongly that the dissent described the majority’s approach as not only “wrong” but “economic nonsense.” Partially in response to the controversy in American Express, recent judicial, legislative, and regulatory proposals have even suggested doing away with market definition in some antitrust cases.

The root problem, this Article shows, is that modern market definition has been treated in antitrust as a matter of quantitative economics, with markets defined by economic formulas (such as the Lerner Index) lacking a connection to widely held social understandings of competition. Antitrust law needs to augment these quantitative approaches by explicitly acknowledging qualitative aspects of markets, including the normative visions of competition they represent. When more fully considered, the Lerner Index itself represents a vision of competition, but it is a vision that no society would want to pursue.

Paying attention to the normative meaning underlying quantitative measures is hardly radical; such qualitative factors have been part of market definition since its origin. The Cellophane fallacy itself was originally advanced not as a point about economics but as one about the content of antitrust law. This Article argues that market definition is necessarily normative and describes an approach for including qualitative criteria in market definition so that market definition accurately reflects the types of competition antitrust law seeks to protect.
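
A brief editorial gloss, since the Lerner Index carries so much weight in Nachbar’s argument: it measures market power as the proportional markup of price over marginal cost,

L = (P − MC) / P,

so L is zero under perfect competition (where P = MC) and rises toward one as price pulls away from marginal cost. Nachbar’s point, as the abstract indicates, is that treating this ratio as the definition of competition is itself a normative choice rather than a neutral measurement.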

Unekbas on Competition, Privacy, and Justifications: Invoking Privacy to Justify Abusive Conduct under Article 102 TFEU

Selcukhan Unekbas (European University Institute – Department of Law) has posted “Competition, Privacy, and Justifications: Invoking Privacy to Justify Abusive Conduct under Article 102 TFEU” (Journal of Law, Market & Innovation, forthcoming) on SSRN. Here is the abstract:

This Article aims to delineate the extent to which potentially anticompetitive behavior that simultaneously improves user privacy is cognizable as an efficiency or objective justification within the context of unilateral conduct cases in European competition law. After mapping the existing literature, it moves on to discuss whether the decisional guidance of the European Commission, as well as the case law of the Union Courts, allows the invocation of privacy as proper grounds to mount a defense against a finding of abuse. In order to concretize the theoretical discussions, the Article focuses on two recent and highly relevant developments: Apple’s App Tracking Transparency initiative and Google’s unveiling of the Privacy Sandbox. It finds that the state of the law pertaining to the second stage of an abuse case is underdeveloped and in need of clarification. Nevertheless, considering the recent developments surrounding European competition law in general, and the digital transformation in particular, both efficiencies and objective justifications are likely to find room for application in the digital economy. Whereas efficiencies must be evaluated within the context of substantive symmetry, legal coherence, and economic considerations in a manner that caters to consumer choice, objective justifications may give rise to unintended consequences resulting from judicial and legislative developments. Overall, the case law provides valuable insights into the implementation of efficiency arguments and objective justifications, but the concepts nonetheless need further analysis vis-à-vis the latest jurisprudence and legislative developments. In that regard, the Article highlights several points of potential contention in the near future.

Williams on Algorithmic Price Gouging

Spencer Williams (Golden Gate University School of Law) has posted “Algorithmic Price Gouging” (CUP Research Handbook on Artificial Intelligence & the Law) on SSRN. Here is the abstract:

This chapter examines the intersection of dynamic pricing algorithms and U.S. price gouging laws. Recently, an increasing number of companies, particularly online retailers and technologically enabled service providers, have implemented pricing algorithms that dynamically update prices in real time using data such as competitor prices, market supply quantities, and consumer preferences. While these dynamic pricing algorithms generally increase economic efficiency, they also result in massive price spikes for essential goods and services during emergencies such as the COVID-19 pandemic. State price gouging laws traditionally regulate price spikes by limiting price increases during emergencies. Drafted largely before the advent of algorithmic pricing and the shift to digital commerce, these laws rely on assumptions, such as human-directed pricing and localized markets for goods and services, that are inapplicable to companies engaging in algorithmic price gouging. Further, the applicability of state price gouging laws to online retailers selling across state lines remains unclear. The current patchwork of state regulation therefore insufficiently mitigates algorithmic price gouging. The chapter first discusses dynamic pricing algorithms and their relationship to price gouging. The chapter then explores state price gouging laws and identifies key shortfalls with respect to algorithmic price gouging, concluding with the need for a federal solution.
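
To make the mechanism concrete, here is a deliberately minimal sketch, mine rather than the chapter’s, of how a naive demand-responsive pricing rule spikes during a supply shock and how a statutory-style percentage cap would constrain it. The function names and the 10% cap are illustrative assumptions, loosely patterned on the percentage-increase ceilings some state statutes use.

```python
# Toy illustration of dynamic pricing vs. a price-gouging cap.
# All names and numbers are hypothetical, chosen for clarity.

def dynamic_price(base_price: float, demand_index: float, supply_index: float) -> float:
    """Naive dynamic pricing: scale the base price by demand relative to supply."""
    return base_price * max(demand_index / supply_index, 1.0)

def capped_price(base_price: float, demand_index: float, supply_index: float,
                 emergency: bool, cap: float = 0.10) -> float:
    """Statutory-style rule: during a declared emergency, limit the increase
    over the pre-emergency base price to `cap` (here, 10%)."""
    price = dynamic_price(base_price, demand_index, supply_index)
    return min(price, base_price * (1 + cap)) if emergency else price

# Normal conditions: demand and supply balanced, so price stays at base.
print(dynamic_price(10.0, 1.0, 1.0))       # 10.0
# Emergency: demand triples while supply halves -> a 6x spike uncapped...
print(dynamic_price(10.0, 3.0, 0.5))       # 60.0
# ...but only a 10% increase once the cap binds.
print(capped_price(10.0, 3.0, 0.5, True))  # 11.0
```

The sketch also surfaces the chapter’s jurisdictional worry: nothing in the uncapped rule knows which state’s emergency declaration, base period, or percentage ceiling applies, which is precisely the gap a patchwork of state laws leaves for sellers pricing across state lines.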

Carugati on The Implementation of the Digital Markets Act with National Antitrust Laws

Christophe Carugati (Université Paris II) has posted “The Implementation of the Digital Markets Act with National Antitrust Laws” on SSRN. Here is the abstract:

The Commission’s December 2020 proposal for a Digital Markets Act (DMA) reached a compromise text with the Council and the Parliament on March 24, 2022. While the text, which will impose ex-ante obligations and prohibition rules on large online platforms acting as “gatekeepers” before any wrongdoing occurs, is due to enter into force in October 2022, the same platforms are already under investigation in Germany under a DMA-like competition law that also imposes prohibition rules ex-ante. Other countries in Europe, including Italy, are considering following Germany and implementing new competition rules adapted to the digital economy. How should the DMA be implemented alongside national competition laws? This question is crucial because inconsistency will inevitably hamper the effectiveness of both the DMA and national competition laws. The paper addresses this question by studying the DMA and the German implementation framework. Section I explains how legislators envisage the implementation of the DMA alongside national competition laws. Section II then considers the implementation of DMA-like national competition rules, focusing the analysis on Germany, which already enforced its new legislation against Google in January 2022. Section III designs a cooperation model between the DMA and national competition laws. Section IV concludes.

White on The Dead Hand of Cellophane and the Federal Google and Facebook Antitrust Cases: Market Delineation Will Be Crucial

Lawrence J. White (NYU Stern School of Business) has posted “The Dead Hand of Cellophane and the Federal Google and Facebook Antitrust Cases: Market Delineation Will Be Crucial” (The Antitrust Bulletin, Forthcoming) on SSRN. Here is the abstract:

The DOJ and FTC monopolization cases against Google and Facebook, respectively, represent the most important federal non-merger antitrust initiatives since (at least) the 1990s. As in any monopolization case, market delineation will be a central feature of both cases – as it was in the du Pont Cellophane case of 65 years ago. Without a delineated market, how can one determine whether a company has engaged in monopolization? Unfortunately, there is currently no accepted market delineation paradigm that can help the courts address this issue in monopolization cases. And this void generally cannot be filled by the market delineation paradigm embedded in the DOJ-FTC “Horizontal Merger Guidelines”: although that paradigm has had almost 40 years of usage and is now well established and accepted for merger analysis, it generally has no applicability to market delineation in monopolization cases. This article expands on this argument and shows the potential difficulties that are likely to arise in this area of market delineation and the consequent problems for both cases. It also points the way toward a paradigm that offers a sensible approach to dealing with these difficulties.
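
A one-line way to see the problem White flags, using the standard monopoly pricing condition (my gloss, not the article’s): a profit-maximizing firm sets price where the markup equals the inverse of the demand elasticity it faces,

(P − MC) / P = 1 / |ε|,

which means that at the prevailing (monopoly) price, demand is already elastic and substitutes already look attractive. Measuring substitution at current prices, as the merger guidelines’ hypothetical-monopolist test does, therefore tends to make the market look broad and the defendant’s power look small in a monopolization case; that is the dead hand of Cellophane in the title.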

Azzutti on AI-Driven Market Manipulation and Limits of the EU Law Enforcement Regime to Credible Deterrence

Alessio Azzutti (Institute of Law & Economics – University of Hamburg) has posted “AI-Driven Market Manipulation and Limits of the EU Law Enforcement Regime to Credible Deterrence” on SSRN. Here is the abstract:

As in many other sectors of EU economies, ‘artificial intelligence’ (AI) has entered the scene of the financial services industry as a game-changer. Trading on capital markets is undoubtedly one of the most promising AI application domains. A growing number of financial market players have in fact been adopting AI tools in the realm of algorithmic trading. While AI trading is expected to deliver several efficiency gains, it can also bring unprecedented risks due to the technical specificities, and related additional uncertainties, of certain ‘machine learning’ methods.
With a focus on new and emerging risks of AI-driven market manipulation, this study critically assesses the ability of the EU anti-manipulation law and enforcement regime to achieve credible deterrence. It argues that AI trading is currently left operating within a (quasi-)lawless market environment with the ultimate risk of jeopardising EU capital markets’ integrity and stability. It shows how ‘deterrence theory’ can serve as a normative framework to think of innovative solutions for fixing the many shortcomings of the current EU legal framework in the fight against AI-driven market manipulation.
In concluding, this study suggests improving the existing EU anti-manipulation law and enforcement regime through a number of policy proposals: namely, (i) an improved, ‘harm-centric’ definition of manipulation; (ii) an improved, ‘multi-layered’ liability regime for AI-driven manipulation; and (iii) a novel, ‘hybrid’ public-private enforcement institutional architecture through the introduction of market manipulation ‘bounty-hunters’.