Verhulst on The AI Localism Canvas

Stefaan Verhulst (NYU), Andrew Young (NYU), and Mona Sloane (NYU) have posted “The AI Localism Canvas” on SSRN. Here is the abstract:

The proliferation of artificial intelligence (AI) technologies continues to illuminate challenges and opportunities for policymakers – particularly in cities (Allam/Dhunny 2019; Kirwan/Zhiyong 2020). As the world continues to urbanize, cities grow in their importance as hubs of innovation, culture, politics and commerce. More recently, they have also grown in significance as innovators in the governance of AI and AI-related concerns. Prominent examples of how cities are taking the lead in AI governance include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Cities have also seen an uptick in new laws and policies, such as San Francisco’s ban on facial recognition technology or New York City’s push for regulating the sale of automated hiring systems. The same applies to new oversight initiatives and organizational roles focused on AI, such as New York City’s Algorithms Management and Policy Officer, and numerous local AI Ethics initiatives in various institutes, universities and other educational centers.

Considered together, all of these initiatives and developments add up to an emerging paradigm of governance localism, marked by a shift toward cities and other local jurisdictions in order to address a wide range of environmental, economic and societal challenges (Davoudi/Madanipour 2015). This article examines this field of AI Localism – a global move toward innovative governance of AI at the subnational level. The piece introduces the current state of play in the field, provides several examples of AI governance innovation at the local level, and presents an “AI Localism Canvas” as a framework to help scholars and policymakers identify, categorize, and assess the different areas of AI Localism within a city or region.

Glaze et al. on AI for Adjudication in the Social Security Administration

Kurt Glaze (US Gov – SSA), Daniel E. Ho (Stanford Law School), Gerald K. Ray (SSA), and Christine Tsang (Stanford Law School) have posted “Artificial Intelligence for Adjudication: The Social Security Administration and AI Governance” (Oxford University Press, Handbook on AI Governance, Forthcoming) on SSRN. Here is the abstract:

Despite widespread skepticism of data analytics and artificial intelligence (AI) in adjudication, the Social Security Administration (SSA) pioneered pathbreaking AI tools that became embedded in multiple levels of its adjudicatory process. How did this happen? What lessons can we draw from the SSA experience for AI in government?

We first discuss how early strategic investments by the SSA in data infrastructure, policy, and personnel laid the groundwork for AI. Second, we document how SSA overcame a wide range of organizational barriers to develop some of the most advanced use cases in adjudication. Third, we spell out important lessons for AI innovation and governance in the public sector. We highlight the importance of leadership to overcome organizational barriers, “blended expertise” spanning technical and domain knowledge, operational data, early piloting, and continuous evaluation. AI should not be conceived of as a one-off IT product, but rather as part of continuous improvement. AI governance is quality assurance.

Huffman & Schmidt-Kessen on Gig Platforms as Hub-and-Spoke Arrangements and Algorithmic Pricing: A Comparative EU-US Antitrust Analysis

Max Huffman (Indiana University Robert H. McKinney School of Law) and Maria José Schmidt-Kessen (Central European University (CEU) – Department of Legal Studies) have posted “Gig Platforms as Hub-and-Spoke Arrangements and Algorithmic Pricing: A Comparative EU-US Antitrust Analysis” on SSRN. Here is the abstract:

Gig platforms are a modern economy enterprise structure characterized by a firm matching service providers with consumers – prominent examples include ride-sharing platforms, like Uber; delivery platforms, like Wolt; and lodging rental platforms, like Airbnb. Like all online platforms, gig platforms are data-driven business models that employ and develop algorithms and AI tools that learn from user behavior and adapt to make interactions increasingly efficient. In contrast to other online platforms, such as advertising exchanges or online marketplaces for goods, gig platforms enable users to sell their labor or services to other users via the platform.

Scholarship has shown that contracts between enterprises and their service providers, who by necessity operate as independent enterprises, are best analyzed as agreements implicating Art. 101 TFEU and Section 1 of the Sherman Act. Currently, the dominant legal treatment of service providers on platforms including Uber (ride-sharing) and Wolt (food delivery) is as contractors rather than employees. We employ here the lens of a hub-and-spoke arrangement, with the platform as the hub and the service providers as the spokes, and the algorithmically established price terms representing a collection of parallel vertical agreements. We then engage in a comparative study of the legal implications under antitrust law in the US and the EU of hub-and-spoke arrangements.

The chapter proceeds to outline the hub-and-spoke structure of the service provider-platform agreements in a gig economy enterprise, including the universal agreement to abide by prices set by algorithm in contracting for services. It covers various design options for pricing algorithms that platforms can use to coordinate transactions between their users. Next, the chapter considers the EU caselaw on hub-and-spoke arrangements, analyzing authorities from across the EU, and identifies the probable treatment of the gig economy agreements in the light of these authorities. The chapter then conducts a similar analysis of leading recent authorities in the US and likewise identifies the most probable treatment under US law. In the conclusion, the chapter compares and explains the likely legal treatment of an algorithmically defined hub-and-spoke agreement and suggests areas for change.

Cheong on Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability

Ben Chester Cheong (Singapore University of Social Sciences) has posted “Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability” on SSRN. Here is the abstract:

This article discusses some of the issues surrounding artificial intelligence systems and whether artificial intelligence systems should be granted legal personhood. The first part of the article discusses whether current artificial intelligence systems should be granted rights and obligations, akin to a legal person. The second part of the article deals with imposing liability on artificial intelligence beings by analogizing with incorporation and veil-piercing principles in company law. It examines this by considering that a future board may be replaced entirely by an artificial intelligence director managing the company. It also explores the possibility of disregarding the corporate veil to ascribe liability to such an artificial intelligence being and the ramifications of such an approach in the areas of fraud and crime.

Eidenmueller on Why Personalized Law?

Horst Eidenmueller (University of Oxford Law; ECGI) has posted “Why Personalized Law?” (U. Chi. L. Rev. Online, Forthcoming) on SSRN. Here is the abstract:

Big data and advances in Artificial Intelligence (AI) have made it possible to personalize legal rules. In this essay, I investigate the question of whether laws should be personalized. Omri Ben-Shahar and Ariel Porat argue that personalized law could be a “precision tool” to achieve whatever goal the lawmaker wants to achieve. This argument is not convincing. The most “natural” fit and best normative justification for a personalized law program is welfarism/utilitarianism. This is because personalized law and welfarism/utilitarianism are both based on normative individualism. But welfarism/utilitarianism is a highly problematic social philosophy. Against this background, it becomes clear why personalized law should only have a limited role to play in lawmaking. The focus of state action should not be the design and running of a personalized law program. Rather, it should be on controlling “wild personalization” by powerful private actors.