Gal & Rubinfeld on Algorithms, AI and Mergers

Michal Gal (University of Haifa – Faculty of Law) and Daniel L. Rubinfeld (UC Berkeley Law; NBER; NYU Law) have posted “Algorithms, AI and Mergers” (Antitrust Law Journal, 2023) on SSRN. Here is the abstract:

Algorithms, especially those based on artificial intelligence, play an increasingly important role in our economy. They are used by market participants to make pricing, output, quality, and inventory decisions; to predict market entry, expansion, and exit; and to predict regulatory moves. In a growing number of jurisdictions, algorithms are also used by regulators to detect and analyze anti-competitive conduct. This game-changing switch to (semi-)automated decision-making has the potential to reshape market dynamics. While the effect of algorithms on coordination between competitors has been a focus of attention, and scholarly work on their effects on unilateral conduct is beginning to accumulate, merger control issues have been undertreated. Accordingly, this article focuses on such issues.

The article identifies six main functions of algorithms that may affect market dynamics: collection and ordering of data; improving the ability to use existing data; reducing the need for data, for instance by generating synthetic data; monitoring; predicting, to determine how different types of conduct, including mergers, are likely to affect market conditions; and decision-making.

The article demonstrates how such algorithms can exacerbate anti-competitive conduct with respect to both unilateral and coordinated effects. Towards this end, seven scenarios are explored: collusion, oligopolistic coordination, high unilateral prices, price discrimination, predation, selective pricing (in which a buyer offers a higher price to some suppliers in an aggressive bid for an input), and reducing the interoperability of datasets. For each scenario, we analyze how the market conditions necessary for such conduct are affected by algorithms.

These findings are then translated into merger policy. Algorithms are shown to affect substantive as well as institutional features of merger control. Algorithms also challenge some of the assumptions that are ingrained in merger control, suggesting that a more informed approach to some algorithm-related mergers is appropriate.

Hacker on Sustainable AI Regulation

Philipp Hacker (European New School of Digital Studies) has posted “Sustainable AI Regulation” on SSRN. Here is the abstract:

Current proposals for AI regulation, in the EU and beyond, aim to spur AI that is trustworthy and accountable. What is missing, however, is a robust regulatory discourse and roadmap to make AI, and technology more broadly, environmentally sustainable. This paper aims to take first steps to fill this gap.

In computer science, AI and technology more generally are increasingly recognized as important contributors to climate change. And with good reason: Current estimates show that information and communication technology (ICT) contributes up to 3.9% of global greenhouse gas (GHG) emissions, compared to roughly 2.5% for global air travel. The carbon footprint of machine learning more specifically has skyrocketed in recent years. Water consumption is another crucial factor. Regarding both energy and water, AI training is particularly resource intensive, and even more so with large generative AI models, such as ChatGPT or GPT-4.

However, questions of climate change and sustainability remain a significant blind spot in AI regulation. This paper will therefore explore two key dimensions: legal instruments to make AI greener, and methods to render AI regulation itself more sustainable. Concerning the former, transparency mechanisms, such as the disclosure of the GHG footprint under Article 11 EU AI Act, could be a first step. However, given the well-known limitations of disclosure, regulation needs to go beyond transparency. Hence, in this paper, I propose a mix of co-regulation strategies; sustainability by design; restrictions on training data; and consumption caps.

Within sustainability by design strategies, one important mechanism could be what I term “sustainability impact assessments”. Crucially, during the modelling phase, developers should compare different AI model types (e.g., linear regression versus neural networks) not only regarding their performance but also their estimated GHG footprint. Already, effective tools exist to measure the GHG impact of such models. Simply put, if two model types exhibit similar performance, the developers would be obliged, under such a provision, to choose the more sustainable model for further development and deployment. In this way, the current fixation on performance measures may be complemented by climate change mitigation strategies. Importantly, pre-trained models, such as large AI models, may in the long run be more energy-efficient despite their high upfront training costs. However, ironically, planned regulation might thwart these efforts. Pre-trained models, such as ChatGPT, are significantly disincentivized by the EU AI Act and the EU AI liability directives. Hence, regulatory endeavors should urgently be updated to better reflect the sustainability challenges AI raises.
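
To make the proposed “sustainability impact assessment” concrete, here is a minimal sketch of such a model-selection step in Python. It uses the open-source codecarbon library as one example of the existing tools for estimating training emissions that the abstract mentions; the candidate models, the dataset, and the one-percentage-point performance tolerance are illustrative assumptions of mine, not specifications from the paper.

```python
# Hedged sketch: compare candidate model types on accuracy AND estimated
# training emissions, then prefer the greener model when accuracy is similar.
# The tolerance and model choices below are illustrative assumptions.
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def assess(name, model):
    """Train a candidate model while metering estimated CO2-equivalent."""
    tracker = EmissionsTracker(project_name=name, log_level="error")
    tracker.start()
    model.fit(X_tr, y_tr)
    kg_co2 = tracker.stop()  # estimated kg CO2-eq emitted by the training run
    return model.score(X_te, y_te), kg_co2

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "neural_network": MLPClassifier(hidden_layer_sizes=(256, 256)),
}
results = {name: assess(name, m) for name, m in candidates.items()}

# Illustrative decision rule: among models within one percentage point of the
# best accuracy, choose the one with the lowest estimated footprint.
best_acc = max(acc for acc, _ in results.values())
eligible = {n: r for n, r in results.items() if r[0] >= best_acc - 0.01}
chosen = min(eligible, key=lambda n: eligible[n][1])
print(results, "->", chosen)
```

The point of the decision rule is that the footprint becomes a tie-breaker whenever performance is comparable, which is roughly the obligation the paper envisions for developers.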

This regulatory toolkit may then, in a second step, serve as a blueprint for other information technologies and infrastructures facing significant sustainability challenges due to their high GHG emissions, for example: blockchain (e.g., bitcoin); Metaverse applications; and data centers. The regulatory toolbox described above, from transparency to sustainability assessments and hard consumption caps, can and must be flexibly adapted to these other areas of technology law.

The final dimension consists of efforts to render AI regulation, and by implication the law itself, more sustainable. Certain rights we have come to take for granted, such as the right to erasure (Article 17 GDPR), may have to be limited due to sustainability considerations. Imagine that a large AI model was trained on supposedly anonymized medical data and is used for cancer detection. Given new re-identification techniques, one data subject exercises her right to erasure. Not only may her data point have to be deleted from the training data, but the entire AI model may have to be re-trained, entailing significant GHG emissions. In my view, the subjective right to erasure, in such situations, has to be balanced against the collective interest in mitigating climate change. Here, I draw on the growing literature on data externalities and third-party effects of processing. The paper formulates guidelines to strike this balance equitably, discusses specific use cases, and identifies doctrinal legal methods for incorporating such a “sustainability limitation” into existing (e.g., Art. 17(3) GDPR) and future law (e.g., AI Act). Ultimately, law, computer science, and sustainability studies need to team up to effectively address the dual large-scale transformations of digitization and sustainability.
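
The retraining cost at the heart of this balancing is easy to see in code. The sketch below uses an illustrative synthetic dataset and model rather than the paper’s medical example: deleting the record itself is trivial, but absent machine-unlearning techniques, honoring the erasure inside the model means repeating the entire training run, and that cost recurs for every granted request.

```python
# Hedged sketch: naive "unlearning" = retrain from scratch without the
# erased record. Dataset and model are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 30))
y = (X[:, 0] + rng.normal(scale=0.5, size=100_000) > 0).astype(int)

model = SGDClassifier().fit(X, y)  # original training run

erased = 12_345  # index of the data subject's record (hypothetical)
X_kept = np.delete(X, erased, axis=0)
y_kept = np.delete(y, erased)

# The deletion above is cheap; this full retraining pass is the step whose
# energy use and emissions the paper weighs against the individual right.
model_after_erasure = SGDClassifier().fit(X_kept, y_kept)
```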

Diamantis et al. on Forms of Disclosure: The Path to Automated Data Privacy Audits

Mihailis Diamantis (U Iowa Law), Maaz Bin Musa (U Iowa), Lucas Ausberger (same), and Rishab Nithyanand (same) have posted “Forms of Disclosure: The Path to Automated Data Privacy Audits” (62 Harv. J.L. & Tech., forthcoming) on SSRN. Here is the abstract:

The weakest link in privacy enforcement today is detection. For years, agencies and activists sounded the alarm about unregulated, opaque mechanisms that organizations employ to harvest, process, and sell online user data. Some state legislatures have responded in recent years by passing legislation to protect privacy rights. Federal legislation may not be far off. But privacy rights are meaningless without effective enforcement, and enforcement is blind without detection.

New techniques for uncovering privacy violations hold promise. Historically, detecting violations would have required access to data brokers’ books. Unsurprisingly, such access was not forthcoming.

Researchers now have tools that can carry out what this Article calls “closed book privacy audits,” detecting privacy violations without targets’ cooperation. For example, by selectively feeding fictitious personal data to online platforms and measuring its impact on the web experience, closed book privacy audits can track corporate use (and misuse) of personal information across the data ecosystem. Automated closed book privacy audits could uncork the detection bottleneck, empowering private and public enforcers.
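
For a sense of what “selectively feeding fictitious personal data” can look like in practice, here is a minimal honeytoken-style sketch. The auditor domain, platform names, and mailbox-checking step are hypothetical; this illustrates the general technique, not the authors’ actual tooling.

```python
# Hedged sketch: seed each platform with a unique fictitious identity, so any
# later appearance of that identity elsewhere attributes the data flow.
import secrets

PLATFORMS = ["platform-a.example", "platform-b.example", "platform-c.example"]

def mint_identity(platform: str) -> dict:
    """Create a unique, traceable persona used only on one platform."""
    tag = secrets.token_hex(4)
    return {
        "platform": platform,
        "email": f"audit-{tag}@auditor.example",  # unique alias per platform
        "name": f"Alex {tag.upper()}",
    }

ledger = {p: mint_identity(p) for p in PLATFORMS}

def attribute_leak(observed_email: str) -> str | None:
    """If a seeded alias surfaces (e.g., in unsolicited mail or an ad
    profile), return the platform that originally received it."""
    for platform, identity in ledger.items():
        if identity["email"] == observed_email:
            return platform
    return None

# Suppose the audit mailbox later receives marketing mail to one alias:
print(attribute_leak(ledger["platform-b.example"]["email"]))  # platform-b.example
```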

There is one hitch… Privacy audits require both data to test and benchmarks to test it against. Crisp evaluative benchmarks have remained elusive. Emerging privacy laws require corporations to disclose how they collect and use personal information. The laws do not mandate any particular form of disclosure. Through an original empirical study of privacy disclosures by California data brokers, this Article documents the result: a widely variable mishmash of opaque representations that are impossible to audit using a consistent procedure. We argue that the law should mandate uniform privacy disclosures in a machine-readable format. Regulators could borrow from standardized disclosure frameworks used by other regulatory bodies (e.g., the United States Securities and Exchange Commission) to simultaneously improve disclosure clarity and facilitate low-cost detection of violations through closed book audits.
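
To see why uniform, machine-readable disclosures would matter for automation, consider this sketch of a hypothetical disclosure record and a validator. The field names and schema are invented for illustration; they do not reflect any format proposed in the Article, by the SEC, or by any regulator.

```python
# Hedged sketch: a hypothetical standardized disclosure format. A consistent,
# machine-readable schema is what would let closed book audits compare
# observed data flows against stated practices automatically.
import json

REQUIRED_FIELDS = {"broker", "categories_collected", "purposes",
                   "third_party_recipients"}

disclosure = json.loads("""
{
  "broker": "Example Data Co.",
  "categories_collected": ["precise_geolocation", "browsing_history"],
  "purposes": ["targeted_advertising"],
  "third_party_recipients": ["ad-exchange.example"]
}
""")

def validate(record: dict) -> list[str]:
    """Return any uniform fields the filing is missing; empty list = passes."""
    return sorted(REQUIRED_FIELDS - record.keys())

print(validate(disclosure))  # -> [] (all required fields present)
```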

Swire et al. on Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures

Peter Swire (Georgia Institute of Technology – Scheller College of Business; Georgia Tech School of Cybersecurity and Privacy; Cross-Border Data Forum) and others have posted “Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures” on SSRN. Here is the abstract:

This paper continues the research program begun in “The Effects of Data Localization on Cybersecurity – Organizational Effects” (“Effects”). This paper supplements Effects by organizing the risks to cybersecurity by the techniques, tactics, and procedures (“TTPs”) of threat actors and defenders. To categorize the TTPs, we rely on two authoritative approaches: the widely known MITRE ATT&CK Framework and the 2019 guidelines on “The State of the Art” for cybersecurity supported by the European Union Agency for Cybersecurity (“ENISA”).

Drawing on these two approaches, the paper shows how localization laws disrupt the defenders’ ability to determine “The Who and the What” of an attack. Details about “who” is attacking often require access to personal data. Similarly, as an attacker moves through a defender’s system, tracking “what” the attacker does often involves account names or other personal data. Threat hunting and privilege escalation are two essential defensive measures that are likely to be especially hard hit by limits on data transfer.

Similarly, localization laws can result in “Risks From Knowing Less Than the Attacker.” An essential part of good cyber defense is for the defenders to test the system through “red teaming,” including penetration (“pen”) testing. With localization, attackers can hop across borders to find holes in system defenses; defenders, however, are prohibited from using information gathered in one locality to jump to another locality. Localization thus limits defenders’ ability to test flaws in their systems effectively.

Part II of the paper examines the tension between the European Union’s regulatory requirements for cybersecurity and data protection. Part III examines the MITRE ATT&CK Framework and ENISA guidelines, and how they identify relevant TTPs of a cybersecurity defense system.

Part IV supplements Part III by providing a quantitative model that illustrates the effects of data localization under plausible assumptions. In the model, halving the number of IP addresses available to a defender would more than double the likely time until a new attack is detected.
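
The abstract does not reproduce the Part IV model, but a toy Monte Carlo simulation shows how a super-linear slowdown of this kind can arise. The assumption below, that detection requires correlating two log events, each visible to the analyst only with probability equal to the defender’s share of observable IP addresses, is mine; it is chosen to illustrate the direction of the result, not to reproduce the paper’s actual model.

```python
# Hedged toy model (our assumption, not the paper's): detecting an attack
# requires correlating two log events, and localization makes each event
# visible only with probability p. Detection then succeeds per attempt with
# probability p**2, so halving visibility roughly quadruples expected
# detection time, i.e., it "more than doubles."
import random

def mean_detection_time(p: float, trials: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        attempts = 1
        # keep attacking until both correlated events fall in the visible set
        while not (rng.random() < p and rng.random() < p):
            attempts += 1
        total += attempts
    return total / trials

for p in (0.5, 0.25):  # halving the share of visible IP addresses
    print(f"visible fraction {p}: ~{mean_detection_time(p):.1f} attempts to detect")
# visible fraction 0.5: ~4 attempts; visible fraction 0.25: ~16 attempts
```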

Part V extends the analysis to the cybersecurity approaches now being considered under the proposed European Union Cybersecurity Standard. That standard, written in the name of cybersecurity, would create serious risks for cybersecurity, including by undermining state-of-the-art defensive measures such as threat hunting, privilege escalation, and pen testing.

Part VI offers conclusions. The U.S., Europe, and other nations face incessant and sophisticated cyber-attacks. In the face of these threats, imagine that policymakers were considering a law that would degrade threat intelligence, leave systems open to privilege escalation, and bar effective pen testing and other red teaming. Such a proposed law would deserve great skepticism. As documented in this paper’s research, however, data localization laws appear to have such effects. This paper adds to the finding in Effects that “until and unless proponents of localization address these concerns, scholars, policymakers, and practitioners have strong reason to expect significant cybersecurity harms from hard localization requirements.”

Khan on Framing Online Speech Governance As An Algorithmic Accountability Issue

Mehtab Khan (Yale Law School) has posted “Framing Online Speech Governance As An Algorithmic Accountability Issue” (99 Ind. L.J. Supp. (forthcoming 2023)) on SSRN. Here is the abstract:

Automated tools used in online speech governance are prone to large-scale errors, yet they are widely used. Legal and policy responses have largely focused on case-by-case evaluations of these errors, rather than on an examination of the development process of the tools. Moreover, information on the internet is no longer generated only by users, but also by sophisticated language tools like ChatGPT, which will pose a challenge to speech governance. Yet legal and policy measures have not responded adequately to AI tools becoming more dynamic and impactful. In order to address the challenges posed by algorithmic content governance, I argue that there is a need to frame a regulatory approach that focuses on the tools used in both content moderation and content generation contexts—which can be done by viewing this technology through an algorithmic accountability lens. I provide an overview of the technical and normative features of these tools that help us frame their regulation as an algorithmic accountability issue. I do this in three steps: First, I discuss the lack of sufficient attention to AI tools in current regulatory approaches. Second, I highlight the shared features of content moderation and content generation to offer insights about the interlinked and evolving landscape of online speech and AI governance. Third, I situate this discussion of speech governance within a broader framework of algorithmic accountability to guide future regulatory interventions.

Brescia on What’s a Lawyer For?: Artificial Intelligence and Third-Wave Lawyering

Raymond H. Brescia (Albany Law) has posted “What’s a Lawyer For?: Artificial Intelligence and Third-Wave Lawyering” (FSU Law Review, Forthcoming) on SSRN. Here is the abstract:

The American legal profession is at a critical inflection point, one that will likely result in dramatic changes in the ways in which consumers access legal guidance and the manner in which lawyers and others deliver it. Chat-enabled artificial intelligence, algorithmic decision-making, digitization, and commoditization threaten existing practices within the legal profession as it is currently constituted by making legal services and information easier to deliver, less expensive to provide, and less difficult for consumers to access. New technologies could lower the cost of legal services generally and make many forms of legal information easier to disseminate and, as a result, more widely distributed. Because of this, more consumers are likely to gain access to some type or form of legal assistance, even if it does not mean they will necessarily receive the direct services of a lawyer. It likely also means that the traditional methods by which legal services have been delivered will become obsolete in at least some contexts, and with them, many traditional legal services jobs and careers as well. This will, of course, have a dramatic impact on what lawyers do, who delivers services that look like legal services, what law students learn, and what law schools teach. Much could easily be lost as guidance to address legal problems is digitized, commoditized, and delivered in accessible and affordable ways, just not by lawyers.

At critical inflection points in the American legal profession’s history, it has responded to demands from within and outside the profession to address the ways in which it was not serving its appropriate functions in society and was failing to uphold what should be its values. At one of the more significant of these inflection points, which occurred at the turn of the 19th to the 20th century, the profession went through dramatic change: it moved from what I call the profession’s “first wave,” when a loosely organized bar made up almost exclusively of white men of Northern European descent faced few barriers to entry to the profession, to its “second wave,” when the profession erected significant barriers to entry and institutionalized them in an effort to maintain greater control over the practice of law. I argue here that we are on the cusp of what may be a new “wave”—a third wave—in which technology reshapes the practice of law and the ways in which consumers access legal assistance.

But to change for the sake of change alone is not a good enough reason to applaud the coming disruptions in the delivery of legal services due to new technologies. Any profession promotes a set of professional values and serves a particular role in society. The legal profession, like any profession, should serve its appropriate role in society; it should fulfill its purpose. A critical question for the profession, and for society at large, is whether new technologies undermine that role or advance it. What is lost and what is gained with respect to the values the profession is supposed to uphold and the functions it is supposed to fill when new technologies displace traditional modes of delivering legal services? To answer these questions, one must first conduct an assessment of the values and functions of the American legal profession.

Once such an assessment is complete, one can embark upon a broader effort, one that reviews the ways in which new technologies are being deployed, and will be deployed in the future, and calibrates such uses in ways that advance a purpose-driven legal services model in a technology-enhanced legal ecosystem. What I hope to accomplish in this essay is to lay out the parameters of the debate around the coming disruptions to the delivery of legal services due to emerging technologies and to identify the considerations that should go into any assessment of the proper role that the legal profession should play in the wake of this current inflection point.

Kim on Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law

Pauline Kim (Wash. U. St. Louis Law) has posted “Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law” (Oxford Handbook of the Law of Work (Davidov, Langille & Lester eds., 2024)) on SSRN. Here is the abstract:

Employers are increasingly relying on algorithms and AI to manage their workforces, using automated systems to recruit, screen, select, supervise, discipline, and even terminate employees. This chapter explores the effects of these systems on the rights of workers in standard work relationships, who are presumptively protected by labor laws. It examines how these new technological tools affect fundamental worker interests and how existing law applies, focusing primarily on two particular concerns as examples—nondiscrimination and privacy. Although current law provides some protections, legal doctrine has largely developed with human managers in mind, and as a result, fails to fully apprehend the risks posed by algorithmic tools. Thus, while anti-discrimination law prohibits discrimination by workplace algorithms, the existing framework has a number of gaps and uncertainties when applied to these systems. Similarly, traditional protections for employee privacy are ill-equipped to address the sheer volume and granularity of worker data that can now be collected, and the ability of computational techniques to extract new insights and infer sensitive information from that data. More generally, the expansion of algorithmic management affects other fundamental worker interests because it tends to increase employer power vis-à-vis labor. This chapter concludes by briefly considering the role that data protection laws might play in addressing the risks of algorithmic management.

Travis on The Freedom of Influencing

Hannibal Travis (FIU Law) has posted “The Freedom of Influencing” (77 Miami L. Rev. 388 (2023)) on SSRN. Here is the abstract:

Social media stars and the Federal Trade Commission (“FTC”) Act are clashing. Influencer marketing is a preferred way for entertainers, pundits, and everyday people to monetize their audiences and popularity. Manufacturers, service providers, retailers, and advertising agencies leverage influencers to reach into millions or even billions of consumer devices, capturing minutes or seconds of the market’s fleeting attention. FTC enforcement actions and private lawsuits have targeted influencers for failing to disclose the nature of a sponsorship relationship with a manufacturer, marketer, or service provider. Such a failure to disclose payments prominently is very common in Hollywood films and on radio and television, however. The Code of Federal Regulations, FTC notices, and press releases contain exemptions tailored to such legacy media. This Article addresses whether the disparate treatment of social media influencers and certain legacy media formats may amount to a content-based regulation of speech that violates the freedom of speech. Drawing on intellectual property law, consumer law, and securities law precedents, it argues that the more intense focus on disclosures by social media influencers infringes the freedom of influencing. It is irrational and discriminatory to impose greater obligations on influencers who are paid to mention or use products or services than on legacy media formats whose actors or directors mention or use similar products or services.

Mazzurco on Content Moderation Regulation as Legal Role-Scripting

Sari Mazzurco (Yale ISP; SMU Dedman) has posted “Content Moderation Regulation as Legal Role-Scripting” (Indiana Law Journal, Forthcoming) on SSRN. Here is the abstract:

Lawmakers and scholars concerned with content moderation regulation typically appeal to “analogies” to justify or undermine different forms of regulation. The logic goes: law should afford individuals due process rights against speech platforms because speech platforms are “like” speech governors as a matter of objective reality. Other common analogies include common carriers, publishers, distributors, shopping malls, and bookstores.

Commentators attempt to invoke social roles to understand what the content moderation relationship is, what behaviors are “right” and “wrong” within it, and how law should police behavioral deviations. But they do so without relying on foundational sociology theory that explains what social roles are, what they do, and how they come to be. Without this theoretical foundation, the discourse incompletely portrays the project of content moderation regulation. Content moderation regulations do not simply “take” speech platforms’ role as it currently exists. They will also “make” speech platforms’ role, by expressing that speech platforms should be speech governors, common carriers, publishers, or something else, based on how lawmakers choose to regulate.

This Article is the first to introduce role theory into the content moderation discourse. Content moderation regulations are poised to define the basic contours of what it means to be a “speech platform” because the role remains unsettled. Earlier, the Communications Decency Act failed to articulate coherent roles within the content moderation relationship. But current content moderation regulatory reforms — including the PACT Act in Congress, state platform-common carriage laws, and the Supreme Court’s decision in Gonzalez v. Google — have a renewed opportunity to script social roles for speech platforms and individuals. Foregrounding these reforms’ role scripts directs attention to urgent questions about whether they are likely to produce a desirable content moderation relationship and an online speech ecosystem that meets the public’s needs.

Campos & Laurent on A Definition of General-Purpose AI Systems

Simeon Campos (SaferAI) and Romain Laurent (same) have posted “A Definition of General-Purpose AI Systems: Mitigating Risks from the Most Generally Capable Models” on SSRN. Here is the abstract:

The European Union (EU) is currently going through the legislative process on the EU AI Act – the first bill intended to regulate Artificial Intelligence (AI) comprehensively in a major jurisdiction. The bill includes provisions to manage risks of generally capable AIs classified as “General Purpose AI Systems” (GPAIS). We believe that this crucial aspect of the act could be improved by focusing the definition more on the most generally capable systems, which bring very specific risks. The Future of Life Institute (FLI) proposed a definition of GPAIS to better target these models, a significant step in the right direction. Expanding on FLI’s proposal, this paper introduces a new definition of GPAIS, which serves to clearly differentiate between narrow and general systems, and cannot be easily exploited by GPAIS providers who may wish to avoid new regulatory constraints.

This paper consists of two sections. The first section discusses the specific risks of GPAIS, including unpredictability, adaptability, and the potential for emergent capabilities. The second section presents the new definition of GPAIS, and explains the changes made and how they address the risks presented in the first section. The EU AI Act could set a global standard for AI-related risk management. The aim of this document is to help inform AI Act draft reviews and improve the ability to mitigate risks from the most generally capable models to protect stakeholders in the EU and globally.