Thomaz et al. on Ethics for AI in Business

Felipe Thomaz (University of Oxford – Saïd Business School) et al. have posted “Ethics for AI in Business” on SSRN. Here is the abstract:

Digital transformation and the fourth industrial revolution are increasingly impacting all aspects of everyday life, including business management. The use of AI comes with substantial advantages in the short, medium and long term, but also creates substantial new risks that must be appropriately addressed. These risks can hinder innovation and progress if they constitute an obstacle to the adoption of AI technologies. An ethical adoption of AI can encourage progress and innovation and therefore benefit both business and society at large. In this work we aim to provide a clear and understandable managerial framework for stakeholders to adopt AI safely within their activities, a set of strategies for ethical risk identification and mitigation, and emphasise the leadership role that firms can have in this process.

Maas on Aligning AI Regulation to Sociotechnical Change

Matthijs M. Maas (University of Cambridge) has posted “Aligning AI Regulation to Sociotechnical Change” on SSRN. Here is the abstract:

How do we regulate a changing technology, with changing uses, in a changing world? This chapter argues that while existing (inter)national AI governance approaches are important, they are often siloed. Technology-centric approaches focus on individual AI applications; law-centric approaches emphasize AI’s effects on pre-existing legal fields or doctrines. This chapter argues that to foster a more systematic, functional and effective AI regulatory ecosystem, policy actors should instead complement these approaches with a regulatory perspective that emphasizes how, when, and why AI applications enable patterns of ‘sociotechnical change’. Drawing on theories from the emerging field of ‘TechLaw’, it explores how this perspective can provide informed, more nuanced, and actionable perspectives on AI regulation.

A focus on sociotechnical change can help analyze when and why AI applications actually do create a meaningful rationale for new regulation — and how they are consequently best approached as targets for regulatory intervention, considering not just the technology, but also six distinct ‘problem logics’ that appear around AI issues across domains. The chapter concludes by briefly reviewing concrete institutional and regulatory actions that can draw on this approach in order to improve the regulatory triage, tailoring, timing & responsiveness, and design of AI policy.

Wagner & Eidenmueller on Digital Dispute Resolution

Gerhard Wagner (Humboldt University School of Law) & Horst Eidenmueller (University of Oxford – Faculty of Law) have posted “Digital Dispute Resolution” on SSRN. Here is the abstract:

This essay identifies and analyses key developments and regulatory challenges of “Digital Dispute Resolution”. We discuss digital enforcement and smart contracts, internal complaint handling mechanisms, external online dispute resolution, and courts in a digital world. Dispute resolution innovations originate primarily in the private sector. New service providers have high-powered incentives and face fewer institutional restrictions than the courts. We demonstrate that with smart contracts, digital enforcement, and internal complaint handling, a new era of dispute resolution by contract, without a neutral third party, dawns. This development takes the idea of a “privatization of dispute resolution” to its extreme. It promises huge efficiency gains for the disputing parties. At the same time, risks of an extremely unequal distribution of these gains, to the detriment of less vigilant parties, and of undermining the rule of law loom large. The key regulatory challenge will be to control the enormous power of large, sophisticated commercial actors, especially platforms. We suggest regulatory tools to address this problem.