Colangelo & Mezzanotte on Colluding Through Smart Technologies

Giuseppe Colangelo (University of Basilicata, Department of Mathematics, Computer Science and Economics; Stanford Law School; LUISS) and Francesco Mezzanotte (Roma Tre University) have posted “Colluding Through Smart Technologies: Understanding Agreements in the Age of Algorithms” on SSRN. Here is the abstract:

By affecting business strategies and consumer behavior, the wide-scale use of algorithms, prediction machines, and blockchain technology is challenging the suitability of several legal rules and notions designed to deal with human intervention. In the specific sector of antitrust law, the question arises whether the traditional doctrines sanctioning anticompetitive cartels are adequate to tackle coordinated practices which, in the absence of an explicit “meeting of the minds” among their participants, may be facilitated by algorithmic processes adopted, and eventually shared, by market actors. The main concern in these cases, discussed at both the regulatory and academic levels, derives from the general observation that while the traditional concept of a collusive agreement requires some form of mutual understanding among the parties, firms’ decision-making is nowadays increasingly transferred to digital tools. Moving from these premises, the paper investigates the impact that the rules applicable to the conclusion of (smart) contracts may have, from an antitrust law perspective, on the detection and regulation of anticompetitive practices.

Budish on AI’s Risky Business: Embracing Ambiguity in Managing the Risks of AI

Ryan Budish (Harvard, Berkman Klein Center) has posted “AI’s Risky Business: Embracing Ambiguity in Managing the Risks of AI” (16 J. Bus. & Tech. L. 259 (2021)) on SSRN. Here is the abstract:

There are over 160 different sets of artificial intelligence (AI) governance principles from public and private organizations alike. These principles aspire to enhance AI’s transformative potential and limit its negative consequences. Increasingly, these principles and strategies have invoked the language of “risk management” as a mechanism for articulating concrete guardrails around AI technologies. Unfortunately, what “risk management” means in practice is largely undefined and poorly understood. In fact, there are two very different approaches to how we measure risk. One approach emphasizes quantification and certainty. The other approach eschews the false certainty of quantification and instead embraces the inherently qualitative (and correspondingly imprecise) measures of risk expressed through social and political dialogue across stakeholders. This paper argues that the emerging field of AI governance should embrace a more responsive, inclusive, and qualitative approach that is better tailored to the inherent uncertainties and dynamism of AI technology and its societal impacts. And yet this paper also describes how doing so will be difficult because computer science and digital technologies (and, by extension, efforts to govern those technologies) inherently push toward certainty and the elimination of ambiguity. This paper draws upon experiences from other scientific fields that have long had to grapple with how best to manage the risks of new technologies to show how qualitative approaches to risk may be better tailored to the challenges of emerging technologies like AI, despite the potential tradeoffs of unpredictability and uncertainty.

Ranchordas on Experimental lawmaking in the EU: Regulatory Sandboxes

Sofia Ranchordas (University of Groningen, Faculty of Law; LUISS) has posted “Experimental lawmaking in the EU: Regulatory Sandboxes” (EU Law Live) on SSRN. Here is the abstract:

Regulatory sandboxes, experimental clauses, and experimental regulations are relatively unknown terms in EU law. The term ‘experimental lawmaking’ is elusive, and it is unclear how experimental laws and regulations fit within existing EU law frameworks. Regulatory sandboxes are a leading and recent example of experimental lawmaking that started at the national level and is now slowly making its way into the EU law toolbox.

Regulatory sandboxes are experimental legal regimes that waive or modify national regulatory requirements (or their implementation), or provide bespoke guidance, on a temporary basis and for a limited number of actors, in order to support businesses in their innovation endeavors. A regulatory sandbox offers a safe testbed for innovative products and services without putting the whole system at risk. Sandboxing thus aims to promote the advancement of technology, new policy solutions through collaborative regulation, and novel compliance initiatives between innovators and regulators. After a brief experience of national implementation in the financial, energy, healthcare, telecommunications, and data protection sectors, the EU has embraced the potential of regulatory sandboxes in its AI Regulation Proposal. Nevertheless, there are still many unknowns in the world of EU experimental lawmaking. The definition, modus operandi, and regulatory implications of experimental regulations and regulatory sandboxes, as well as their design and methodology, will determine whether this experimental approach to law and regulation will indeed be successful and help advance responsible innovation in the EU. In this contribution, I draw upon recent scholarship and national experiences with regulatory sandboxes to shed light on the legal nature, innovative potential, and methodology of this instrument.

Recommended.

Balkin on To Reform Social Media, Reform Informational Capitalism

Jack M. Balkin (Yale Law) has posted “To Reform Social Media, Reform Informational Capitalism” (in Social Media, Freedom of Speech and the Future of Our Democracy; Lee Bollinger and Geoffrey R. Stone, eds., Forthcoming) on SSRN. Here is the abstract:

Calls for altering First Amendment protections to deal with problems caused by social media are often misdirected. The problem is not First Amendment doctrines that protect harmful or false speech. The problem is the health of the digital public sphere: in particular, whether the digital public sphere, as currently constituted, adequately protects the values of political democracy, cultural democracy, and the growth and spread of knowledge. Instead of tinkering with First Amendment doctrines at the margins, we should focus on the industrial organization of digital media and the current business models of social media companies.

Only a handful of social media companies currently dominate online discourse. In addition, the business models of social media companies give them incentives to act irresponsibly and amplify false and harmful content. The goals of social media regulation should therefore be twofold. The first goal should be to ensure a more diverse ecology of social media so that no single company’s construction or governance of the digital public sphere dominates. The second goal should be to give social media companies — or at least the largest and most powerful ones — incentives to become trusted and trustworthy organizations for facilitating, organizing, and curating public discourse. Competition law, consumer protection, and privacy reforms are needed to create a more diverse and pluralistic industry and to discourage business practices that undermine the digital public sphere.

Given these goals, the focus should not be on First Amendment doctrines of content regulation, but on digital business models. To the extent that First Amendment doctrine requires any changes, one should aim at relatively recent decisions concerning commercial speech, data privacy, and telecommunications law that might make it harder for Congress to regulate digital businesses.

Hacker & Passoth on Varieties of AI Explanations Under the Law: From the GDPR to the AIA, and Beyond

Philipp Hacker (European University Viadrina Frankfurt (Oder) – European New School of Digital Studies) and Jan-Hendrik Passoth (same) have posted “Varieties of AI Explanations Under the Law: From the GDPR to the AIA, and Beyond” on SSRN. Here is the abstract:

The quest to explain the output of artificial intelligence systems has clearly moved from a merely technical endeavor to one of significant legal and political relevance. In this paper, we provide an overview of legal obligations to explain AI and evaluate current policy proposals. In doing so, we distinguish between different functional varieties of AI explanations, such as multiple forms of enabling, technical, and protective transparency, and show how different legal areas engage with and mandate such different types of explanations to varying degrees. Starting with the rights-enabling framework of the GDPR, we proceed to uncover technical and protective forms of explanations owed under contract, tort, and banking law. Moreover, we discuss what the recent EU proposal for an Artificial Intelligence Act means for explainable AI, and review the proposal’s strengths and limitations in this respect. Finally, from a policy perspective, we advocate for moving beyond mere explainability towards a more encompassing framework for trustworthy and responsible AI that includes actionable explanations, values-in-design and co-design methodologies, interactions with algorithmic fairness, and quality benchmarking.