Budish on AI’s Risky Business: Embracing Ambiguity in Managing the Risks of AI

Ryan Budish (Harvard, Berkman Klein Center) has posted “AI’s Risky Business: Embracing Ambiguity in Managing the Risks of AI” (16 J. Bus. & Tech. L. 259 (2021)) on SSRN. Here is the abstract:

There are over 160 different sets of artificial intelligence (AI) governance principles from public and private organizations alike. These principles aspire to enhance AI’s transformative potential and limit its negative consequences. Increasingly, these principles and strategies have invoked the language of “risk management” as a mechanism for articulating concrete guardrails around AI technologies. Unfortunately, what “risk management” means in practice is largely undefined and poorly understood. In fact, there are two very different approaches to how we measure risk. One approach emphasizes quantification and certainty. The other approach eschews the false certainty of quantification and instead embraces the inherently qualitative (and correspondingly imprecise) measures of risk expressed through social and political dialogue across stakeholders. This paper argues that the emerging field of AI governance should embrace a more responsive, inclusive, and qualitative approach that is better tailored to the inherent uncertainties and dynamism of AI technology and its societal impacts. And yet this paper also describes how doing so will be difficult because computer science and digital technologies (and, by extension, efforts to govern those technologies) inherently push toward certainty and the elimination of ambiguity. This paper draws upon experiences from other scientific fields that have long had to grapple with how best to manage the risks of new technologies to show how qualitative approaches to risk may be better tailored to the challenges of emerging technologies like AI, despite the potential tradeoffs of unpredictability and uncertainty.

Ranchordas on Experimental lawmaking in the EU: Regulatory Sandboxes

Sofia Ranchordas (University of Groningen, Faculty of Law; LUISS) has posted “Experimental lawmaking in the EU: Regulatory Sandboxes” (EU Law Live) on SSRN. Here is the abstract:

Regulatory sandboxes, experimental clauses, and experimental regulations are relatively unknown terms in EU law. The term ‘experimental lawmaking’ is elusive and it is unclear how experimental laws and regulations fit within existing EU law frameworks. Regulatory sandboxes are a leading and recent example of experimental lawmaking that started at the national level and is now slowly making its way into the EU law toolbox.

Regulatory sandboxes are experimental legal regimes which waive or modify national regulatory requirements (or their implementation), or provide bespoke guidance, on a temporary basis and for a limited number of actors in order to support businesses in their innovation endeavors. A regulatory sandbox offers a safe testbed for innovative products and services without putting the whole system at risk. Sandboxing thus aims to promote the advancement of technology, new policy solutions through collaborative regulation, and novel compliance initiatives between innovators and regulators. After a brief experience of national implementation in the financial, energy, healthcare, telecommunications, and data protection sectors, the EU has embraced the potential of regulatory sandboxes in its AI Regulation Proposal. Nevertheless, there are still many unknowns in the world of EU experimental lawmaking. The definition, modus operandi, and regulatory implications, as well as the design and methodology, of experimental regulations and regulatory sandboxes will determine whether this experimental approach to law and regulation will indeed be successful and help advance responsible innovation in the EU. In this contribution, I draw upon recent scholarship and national experiences with regulatory sandboxes to shed light on the legal nature, innovative potential, and methodology of this instrument.

Recommended.

Balkin on To Reform Social Media, Reform Informational Capitalism

Jack M. Balkin (Yale Law) has posted “To Reform Social Media, Reform Informational Capitalism” (in Social Media, Freedom of Speech and the Future of Our Democracy; Lee Bollinger and Geoffrey R. Stone, eds., Forthcoming) on SSRN. Here is the abstract:

Calls for altering First Amendment protections to deal with problems caused by social media are often misdirected. The problem is not First Amendment doctrines that protect harmful or false speech. The problem is the health of the digital public sphere: in particular, whether the digital public sphere, as currently constituted, adequately protects the values of political democracy, cultural democracy, and the growth and spread of knowledge. Instead of tinkering with First Amendment doctrines at the margins, we should focus on the industrial organization of digital media and the current business models of social media companies.

Only a handful of social media companies currently dominate online discourse. In addition, the business models of social media companies give them incentives to act irresponsibly and amplify false and harmful content. The goals of social media regulation should therefore be twofold. The first goal should be to ensure a more diverse ecology of social media so that no single company’s construction or governance of the digital public sphere dominates. The second goal should be to give social media companies — or at least the largest and most powerful ones — incentives to become trusted and trustworthy organizations for facilitating, organizing, and curating public discourse. Competition law, consumer protection, and privacy reforms are needed to create a more diverse and pluralistic industry and to discourage business practices that undermine the digital public sphere.

Given these goals, the focus should not be on First Amendment doctrines of content regulation, but on digital business models. To the extent that First Amendment doctrine requires any changes, reform should target relatively recent decisions concerning commercial speech, data privacy, and telecommunications law that might make it harder for Congress to regulate digital businesses.

Hacker & Passoth on Varieties of AI Explanations Under the Law: From the GDPR to the AIA, and Beyond

Philipp Hacker (European University Viadrina Frankfurt (Oder) – European New School of Digital Studies) and Jan-Hendrik Passoth (same) have posted “Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond” on SSRN. Here is the abstract:

The quest to explain the output of artificial intelligence systems has clearly moved from a merely technical endeavor to one of high legal and political relevance. In this paper, we provide an overview of legal obligations to explain AI and evaluate current policy proposals. In doing so, we distinguish between different functional varieties of AI explanations – such as multiple forms of enabling, technical, and protective transparency – and show how different legal areas engage with and mandate such different types of explanations to varying degrees. Starting with the rights-enabling framework of the GDPR, we proceed to uncover technical and protective forms of explanations owed under contract, tort, and banking law. Moreover, we discuss what the recent EU proposal for an Artificial Intelligence Act means for explainable AI, and review the proposal’s strengths and limitations in this respect. Finally, from a policy perspective, we advocate moving beyond mere explainability towards a more encompassing framework for trustworthy and responsible AI that includes actionable explanations, values-in-design and co-design methodologies, interactions with algorithmic fairness, and quality benchmarking.

Chander on Artificial Intelligence and Trade

Anupam Chander (Georgetown University Law Center) has posted “Artificial Intelligence and Trade” (in Big Data and Global Trade Law 115-127 (Mira Burri ed., Cambridge: Cambridge University Press 2021)) on SSRN. Here is the abstract:

Artificial Intelligence is already powering trade today. It is crossing borders, learning, making decisions, and operating cyber-physical systems. It underlies many of the services offered today – from customer service chatbots to customer relations software to business processes. The chapter considers AI regulation from the perspective of international trade law. It argues that foreign AI should be regulated by governments – indeed that AI must be ‘locally responsible’. The chapter refutes arguments that trade law should not apply to AI and shows how the WTO agreements might apply to AI using two hypothetical cases. The analysis reveals how the WTO agreements leave room for governments to insist on locally responsible AI, while at the same time promoting international trade powered by AI.

Di Porto et al. on A Computational Analysis of the Debate on Informational Duties in the Digital Services and the Digital Markets Acts

Fabiana Di Porto (University of Salento; LUISS; Hebrew University) et al. have posted “Talking at Cross Purposes? A Computational Analysis of the Debate on Informational Duties in the Digital Services and the Digital Markets Acts” on SSRN. Here is the abstract:

In the latest Commission proposals, the Digital Markets Act (DMA) and Digital Services Act (DSA), ex ante informational obligations for online intermediaries, platforms, and ‘gatekeepers’ figure prominently. Some are new, others are already state of the art for many operators. Because the efficacy of these duties is widely questioned, one wonders how they are implemented in the normative proposals. The question has received little attention in the literature. To fill this void, the paper investigates whether there was any agreement among the stakeholders who participated in the consultation over the DSA and DMA proposals. We do so by using NLP techniques to analyze whether key terms of transparency are used in the same way by different stakeholders. We find significant differences in the use of terms like ‘simple’ or ‘meaningful’ in the position papers that informed the drafting of the two proposals. These findings are informative for both rule-makers and legal scholars, and may explain why informational duties so often fail to reach their goal.

Recommended.
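
The abstract does not spell out the authors’ NLP pipeline, but a minimal sketch may convey the basic idea of measuring whether stakeholder groups use a key transparency term differently: compare the words that co-occur with the term in each group’s position papers. Everything below (the mini-corpora, the context_profile function, the window size) is hypothetical illustration, not the paper’s actual method.

    # Minimal sketch (not the authors' actual pipeline): compare how two
    # stakeholder groups use a key transparency term by looking at the
    # words that co-occur with it. Corpora below are hypothetical.
    import re
    from collections import Counter

    def context_profile(docs, term, window=3):
        """Count words appearing within `window` tokens of `term`."""
        profile = Counter()
        for doc in docs:
            tokens = re.findall(r"[a-z']+", doc.lower())
            for i, tok in enumerate(tokens):
                if tok == term:
                    lo = max(0, i - window)
                    neighbors = tokens[lo:i] + tokens[i + 1:i + 1 + window]
                    profile.update(neighbors)
        return profile

    # Hypothetical position-paper snippets from two stakeholder groups.
    platforms = ["Keep disclosures simple and short for users."]
    consumers = ["Simple but meaningful notices protect consumers."]

    for name, docs in (("platforms", platforms), ("consumer groups", consumers)):
        print(name, context_profile(docs, "simple").most_common(5))

Even on toy inputs like these the profiles diverge: one group pairs ‘simple’ with ‘short’, the other with ‘meaningful’, which is the kind of cross-purpose usage the paper documents at scale.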

Orbach on Mandated Neutrality, Platforms, and Ecosystems

Barak Orbach (University of Arizona) has posted “Mandated Neutrality, Platforms, and Ecosystems” (in Research Handbook on Abuse of Dominance and Monopolization (Pinar Akman et al. eds., Edward Elgar, forthcoming 2022)) on SSRN. Here is the abstract:

This chapter explores and assesses the conceptual foundations of mandated neutrality standards (MNS) prescriptions, such as ‘platform neutrality’ and bans on ‘self-preferencing’. MNS prescriptions require dominant digital intermediaries to deal with all interested parties on fair and equal terms. Specifically, MNS prescriptions require dominant digital ecosystems to treat rivals as they treat their own subsidiaries and units, and treat all trade partners alike, regardless of the attributes of the trade relations. Extreme forms of MNS prescriptions seek to break up digital ecosystems and outlaw business models that integrate platforms and other lines of business. The stated rationale of MNS prescriptions is that antitrust enforcement must preserve fairness in the marketplace. Inquiries into the intellectual foundations of MNS prescriptions, however, tend to frustrate serious antitrust thinkers. They conflate basic concepts, such as ‘fairness’ and ‘competition’, and ‘opportunism’ and ‘anticompetitive conduct’. They perceive low prices, convenience, and efficiencies as predatory tactics, and fail to articulate practical neutrality standards.

Schmitz & Martinez on ODR in the United States

Amy J. Schmitz (Ohio State University Moritz College of Law) and Janet Martinez (Stanford Law School) have posted “ODR and Innovation in the United States” (in Online Dispute Resolution: Theory and Practice: A Treatise on Technology and Dispute Resolution (Wahab, Katsh & Rainey eds., 2021)) on SSRN. Here is the abstract:

Technology is revolutionizing the Alternative Dispute Resolution (ADR) field, especially in the wake of Covid-19. Despite long-held assumptions that increasing understanding, building empathy, and crafting resolution are only possible in person, effective ways have emerged for resolving the burgeoning number of disputes that arise online. Technology has become the “fourth party” through the growing field of online dispute resolution (ODR), which includes the use of technology and computer-mediated communication (CMC) in negotiation, mediation, arbitration, and other dispute resolution processes. ODR is infiltrating every area of dispute resolution, from courts (small claims, tax, landlord/tenant, family, and more) to the blockchain. Furthermore, innovation in the field continues to grow as institutionalization expands in the U.S. legal tech market. Nonetheless, it is questionable whether this expansion has sufficiently considered sound and ethical dispute system design. This chapter in a new treatise on ODR explores ODR’s recent development in the U.S., analyzes the providers that self-identified as providing “ODR” to the National Center for Technology and Dispute Resolution (NCTDR) in the U.S., and proposes closer attention to dispute system design. Moreover, the chapter invites further innovation and research in the ODR field to advance access to justice.

Cheng & Nowag on Algorithmic Predation and Exclusion

Thomas K. Cheng (The University of Hong Kong – Faculty of Law) and Julian Nowag (Lund University – Faculty of Law; Oxford Centre for Competition Law and Policy) have posted “Algorithmic Predation and Exclusion” (LundLawCompWP 1/2022) on SSRN. Here is the abstract:

The debate about the implications of algorithms for competition law enforcement has so far focused on multi-firm conduct in general and collusion in particular. The implications of algorithms for abuse of dominance have been largely neglected. This article seeks to fill the gap in the existing literature by exploring how increasingly precise individualized targeting by algorithms can facilitate a range of abuses of dominance, including predatory pricing, rebates, and tying and bundling. The ability to target disparate groups of consumers with different prices helps a predator to minimize the losses it sustains during predation and maximize its ability to recoup those losses. This changes how recoupment should be understood and ascertained, and may even undermine the rationale for requiring proof of a likelihood of recoupment under US antitrust law. This increased ability to price discriminate also enhances a dominant firm’s ability to offer exclusionary rebates. Finally, algorithms allow dominant firms to target their tying and bundling practices to loyal customers, hence avoiding the risk of alienating marginal customers with an unwelcome tie. This renders tying and bundling more feasible and effective for dominant firms.
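
The recoupment point lends itself to a back-of-the-envelope illustration. The numbers below are hypothetical, not drawn from the article; they simply show why a predator that can algorithmically confine its below-cost prices to contested customers sacrifices far less profit, and so has far less to recoup, than one that must cut prices across the board.

    # Hypothetical, stylized numbers (not from the article): comparing the
    # profit a predator sacrifices under a uniform price cut versus an
    # algorithmically targeted cut aimed only at contested customers.
    NORMAL_PRICE = 12.0      # pre-predation price
    PREDATORY_PRICE = 8.0    # below-cost price charged during predation
    CUSTOMERS = 1000         # total customers served per period
    CONTESTED = 200          # customers the entrant could plausibly win

    def sacrifice(n_discounted):
        """Per-period margin forgone by discounting n_discounted customers."""
        return (NORMAL_PRICE - PREDATORY_PRICE) * n_discounted

    uniform = sacrifice(CUSTOMERS)    # classic predation: everyone gets the cut
    targeted = sacrifice(CONTESTED)   # algorithmic: only contested customers do

    print(f"uniform cut:  {uniform:,.0f} forgone per period")
    print(f"targeted cut: {targeted:,.0f} forgone per period")
    print(f"recoupment burden reduced by {1 - targeted / uniform:.0%}")

On these stylized figures the per-period sacrifice falls from 4,000 to 800, an 80% reduction in the recoupment burden, which is the sense in which the authors argue targeting changes how recoupment should be understood and ascertained.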

Edwards on Transparency and Accountability of Algorithmic Regulation

Ernesto Edwards (National University of Rosario) has posted “How to Stop Minority Report from Becoming a Reality: Transparency and Accountability of Algorithmic Regulation” on SSRN. Here is the abstract:

In this essay I aim to illuminate the importance of transparency and accountability in algorithmic regulation, a highly topical legal issue with significant consequences, given the rapid recent development of Machine Learning algorithms. Building on prior studies and on current literature, such as Citron, Crootof, Pasquale, and Zarsky, I intend to develop a proposal that bridges said knowledge with that of Daniel Kahneman in order to amplify the legal question at hand with the notions of blinders and biases. I will argue that if left unattended, or if improperly attended, Machine Learning algorithms will produce more harm than good due to these blinders and biases. After linking the aforementioned ideas, I will focus on the transparency and accountability of algorithmic regulation, and its ties to technological due process. The findings will illustrate the present need for a human element, better exemplified by the concept of cyborg justice, and the public policy challenges it entails. In the end, I will propose what could be done in the future in this area.