Verhulst on The AI Localism Canvas

Stefaan Verhulst (NYU), Andrew Young (NYU), and Mona Sloane (NYU) have posted “The AI Localism Canvas” on SSRN. Here is the abstract:

The proliferation of artificial intelligence (AI) technologies continues to illuminate challenges and opportunities for policymakers – particularly in cities (Allam/Dhunny 2019; Kirwan/Zhiyong 2020). As the world continues to urbanize, cities grow in their importance as hubs of innovation, culture, politics and commerce. More recently, they have also grown in significance as innovators in the governance of AI and AI-related concerns. Prominent examples of how cities are taking the lead in AI governance include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Cities have also seen an uptick in new laws and policies, such as San Francisco’s ban on facial recognition technology or New York City’s push to regulate the sale of automated hiring systems. The same applies to new oversight initiatives and organizational roles focused on AI, such as New York City’s Algorithms Management and Policy Officer, and numerous local AI ethics initiatives in institutes, universities and other educational centers.

Considered together, these initiatives and developments add up to an emerging paradigm of governance localism, marked by a shift toward cities and other local jurisdictions to address a wide range of environmental, economic and societal challenges (Davoudi/Madanipour 2015). This article examines this field of AI Localism – a global move toward innovative governance of AI at the subnational level. The piece introduces the current state of play in the field, provides several examples of AI governance innovation at the local level, and presents an “AI Localism Canvas,” a framework to help scholars and decision-makers identify, categorize and assess instances of AI Localism specific to a city or region.

Glaze et al. on AI for Adjudication in the Social Security Administration

Kurt Glaze (US Gov – SSA), Daniel E. Ho (Stanford Law School), Gerald K. Ray (SSA), and Christine Tsang (Stanford Law School) have posted “Artificial Intelligence for Adjudication: The Social Security Administration and AI Governance” (Oxford University Press, Handbook on AI Governance (Forthcoming)) on SSRN. Here is the abstract:

Despite widespread skepticism of data analytics and artificial intelligence (AI) in adjudication, the Social Security Administration (SSA) pioneered pathbreaking AI tools that became embedded in multiple levels of its adjudicatory process. How did this happen? What lessons can we draw from the SSA experience for AI in government?

We first discuss how early strategic investments by the SSA in data infrastructure, policy, and personnel laid the groundwork for AI. Second, we document how SSA overcame a wide range of organizational barriers to develop some of the most advanced use cases in adjudication. Third, we spell out important lessons for AI innovation and governance in the public sector. We highlight the importance of leadership to overcome organizational barriers, “blended expertise” spanning technical and domain knowledge, operational data, early piloting, and continuous evaluation. AI should not be conceived of as a one-off IT product, but rather as part of continuous improvement. AI governance is quality assurance.

Huffman & Schmidt-Kessen on Gig Platforms as Hub-and-Spoke Arrangements and Algorithmic Pricing: A Comparative EU-US Antitrust Analysis

Max Huffman (Indiana University Robert H. McKinney School of Law) and Maria José Schmidt-Kessen (Central European University (CEU) – Department of Legal Studies) have posted “Gig Platforms as Hub-and-Spoke Arrangements and Algorithmic Pricing: A Comparative EU-US Antitrust Analysis” on SSRN. Here is the abstract:

Gig platforms are a modern economy enterprise structure characterized by a firm matching service providers with consumers – prominent examples include ride-sharing platforms, like Uber; delivery platforms, like Wolt; and lodging rental platforms, like Airbnb. Like all online platforms, gig platforms are data-driven businesses that employ and develop algorithms and AI tools that learn from user behavior and adapt to make interactions increasingly efficient. In contrast to other online platforms, such as advertising exchanges or online marketplaces for goods, gig platforms enable users to sell their labor or services to other users via the platform.

Scholarship has shown that an enterprise’s contracts with its service providers, who are then by necessity operating as independent enterprises, are best analyzed as agreements implicating Art. 101 TFEU and Section 1 of the Sherman Act. Currently, the dominant legal treatment of service providers on platforms including Uber (ride-sharing) and Wolt (food delivery) is as contractors rather than employees. We employ here the lens of a hub-and-spoke arrangement, with the platform as the hub and the service providers as the spokes, and the algorithmically established price terms representing a collection of parallel vertical agreements. We then engage in a comparative study of the legal implications of hub-and-spoke arrangements under antitrust law in the US and the EU.

The chapter proceeds to outline the hub-and-spoke structure of the service provider-platform agreements in a gig economy enterprise, including the universal agreement to abide by prices set by algorithm in contracting for services. It covers various design options for pricing algorithms that platforms can use to coordinate transactions between their users. Next, the chapter considers the EU caselaw on hub-and-spoke arrangements, analyzing authorities from across the EU, and identifies the probable treatment of the gig economy agreements in light of these authorities. The chapter then conducts a similar analysis of leading recent US authorities and likewise identifies the most probable treatment under US law. In the conclusion, the chapter compares and explains the likely legal treatment of an algorithmically defined hub-and-spoke agreement and suggests areas for change.
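
The hub-and-spoke framing can be made concrete with a small sketch. The following Python snippet is purely illustrative and not drawn from the chapter; the function name and pricing formula are hypothetical. It shows the structural point: the platform (hub) computes a single algorithmic price, and every service provider (spoke) transacts at that price rather than setting or negotiating its own.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    provider_id: str  # a "spoke": an independent service provider

def surge_price(base_fare: float, open_requests: int, active_providers: int) -> float:
    """Hypothetical hub-set price: scale the base fare by a demand/supply
    ratio, with the multiplier capped to a fixed band of 1.0 to 2.0."""
    if active_providers == 0:
        return round(base_fare * 2.0, 2)  # apply the cap when no supply is available
    multiplier = min(2.0, max(1.0, open_requests / active_providers))
    return round(base_fare * multiplier, 2)

# The hub computes one price term, and every spoke's contract incorporates it:
# the parallel vertical agreements all share the same algorithmic price.
providers = [Provider("p1"), Provider("p2"), Provider("p3")]
fare = surge_price(base_fare=10.0, open_requests=6, active_providers=len(providers))
print(f"All {len(providers)} providers transact at {fare}")  # prints 20.0
```

The feature that matters for the antitrust analysis is visible in the last lines: no individual provider sets an independent price, which is what invites the comparison to a hub coordinating its spokes.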

Cheong on Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability

Ben Chester Cheong (Singapore University of Social Sciences) has posted “Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability” on SSRN. Here is the abstract:

This article discusses some of the issues surrounding artificial intelligence systems and whether artificial intelligence systems should be granted legal personhood. The first part of the article discusses whether current artificial intelligence systems should be granted rights and obligations akin to those of a legal person. The second part of the article deals with imposing liability on artificial intelligence beings by analogizing from incorporation and veil-piercing principles in company law. It examines this by considering a future board that is replaced entirely by an artificial intelligence director managing the company. It also explores the possibility of disregarding the corporate veil to ascribe liability to such an artificial intelligence being, and the ramifications of such an approach in the areas of fraud and crime.

Eidenmueller on Why Personalized Law?

Horst Eidenmueller (University of Oxford Law; ECGI) has posted “Why Personalized Law?” (U. Chi. L. Rev. Online (Forthcoming)) on SSRN. Here is the abstract:

Big data and advances in Artificial Intelligence (AI) have made it possible to personalize legal rules. In this essay, I investigate the question of whether laws should be personalized. Omri Ben-Shahar and Ariel Porat argue that personalized law could be a “precision tool” to achieve whatever goal the lawmaker wants to achieve. This argument is not convincing. The most “natural” fit and best normative justification for a personalized law program is welfarism/utilitarianism. This is because personalized law and welfarism/utilitarianism are both based on normative individualism. But welfarism/utilitarianism is a highly problematic social philosophy. Against this background, it becomes clear why personalized law should only have a limited role to play in lawmaking. The focus of state action should not be the design and running of a personalized law program. Rather, it should be on controlling “wild personalization” by powerful private actors.

Voss on Data Protection Issues for Smart Contracts

W. Gregory Voss (TBS Business School) has posted “Data Protection Issues for Smart Contracts” (Smart Contracts: Technological, Business and Legal Perspectives (Marcelo Corrales, Mark Fenwick & Stefan Wrbka, eds., 2021)) on SSRN. Here is the abstract:

Smart contracts offer promise for facilitating and streamlining transactions in many areas of business and government. However, they may also be subject to the provisions of relevant data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), if personal data is processed. Initially, this chapter discusses the data protection/data privacy distinction in the context of differing legal models. The focus of the analysis, however, is the GDPR, as the most significant and influential data protection legislation at this time, owing in part to its omnibus nature and extraterritorial scope, and its application to smart contracts.

By their very nature, smart contracts raise difficulties for the classification of the various actors involved, which has an impact on their responsibilities under the law and their potential liability for violations. The analysis in this chapter turns on the role of the data controller in the context of smart contracts, and the chapter reviews the definition of that term and of ‘joint controller’ in light of supervisory authority guidance. In doing so, the significance of the classification is highlighted, especially in the case of the GDPR.

Furthermore, certain rights granted to data subjects under the GDPR may be difficult to provide in the context of smart contracts, such as the right to be forgotten/right to erasure, the right to rectification, and the right not to be subject to a decision based solely on automated processing. This chapter addresses such issues, together with relevant supervisory authority advice, such as the use of encryption to render data nearly inaccessible, approximating as closely as possible the result of erasure. Along the way, the important distinction between anonymized data and personal data is explained, together with its practical implications, and requirements for data integrity and confidentiality (security) are detailed.
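
The encryption technique mentioned in the abstract is often described as “crypto-shredding”: personal data is written only in encrypted form, and an erasure request is honored by destroying the decryption key rather than the (possibly immutable) record. The following Python sketch is illustrative only, not an implementation prescribed by the chapter; it uses the cryptography library’s Fernet interface, and the data and variable names are hypothetical.

```python
# Illustrative "crypto-shredding" sketch: store personal data only as
# ciphertext (e.g., on an immutable ledger) and keep the key off-chain.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# The controller holds the key off-chain; the ciphertext may live on-chain.
key = Fernet.generate_key()
ledger_record = Fernet(key).encrypt(b"name=Jane Doe; email=jane@example.org")

# Honoring an erasure request: delete the key, not the immutable record.
key = None  # in practice: securely wipe every copy of the key

# Without the key the record cannot be decrypted, so the personal data is
# "nearly inaccessible" even though the ciphertext itself persists.
print(ledger_record[:16], "... remains on the ledger, but is unreadable")
```

Whether destroying the key fully satisfies the right to erasure is precisely the kind of question on which, per the abstract, supervisory authority advice bears.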

In addition, the GDPR requirement of privacy by design and by default must be respected when that legislation applies. Data protection principles such as purpose limitation and data minimisation in the case of smart contracts are also scrutinized in this chapter. Data protection and privacy must be considered when smart contracts are designed, and this chapter will help the reader understand the contours of that requirement. Even for jurisdictions outside of the European Union, privacy by design will be of interest as a best practice.

Finally, problems related to cross-border data transfers in the case of public blockchains are discussed, before the chapter sets out key elements of a GDPR-compliant blockchain and offers concluding remarks.

Douek on The Siren Call of Content Moderation Formalism

Evelyn Douek (Harvard Law School) has posted “The Siren Call of Content Moderation Formalism” (New Technologies of Communication and The First Amendment (Bollinger & Stone eds., 2022)) on SSRN. Here is the abstract:

Systems of online content moderation governance are becoming some of the most elaborate and extensive bureaucracies in history, and they are deeply imperfect and need reform. Would-be reformers of content moderation systems are drawn to a highly rule-bound and formalistic vision of how these bureaucracies should operate, but the sprawling chaos of online speech is too vast, ever-changing, and varied to be brought into consistent compliance with rigid rules. This essay argues that the quest to make content moderation systems ever more formalistic will not fix public and regulatory concerns about the legitimacy and accountability of how platforms moderate content on their services. The largest social media platforms operate massive unelected, unaccountable, and increasingly complex bureaucracies that decide to act or not act on millions of pieces of content uploaded to their platforms every day. A formalistic model, invoking judicial-style norms of reasoning and precedent, is doomed to fail at this scale and complexity. As these governance systems mature, it is time to be content moderation realists about the task ahead.

Leiter on The Epistemology of the Internet and the Regulation of Speech in America

Brian Leiter (University of Chicago) has posted “The Epistemology of the Internet and the Regulation of Speech in America” (Georgetown Journal of Law & Public Policy, 2022) on SSRN. Here is the abstract:

The Internet is the epistemological crisis of the 21st century: it has fundamentally altered the social epistemology of societies with relative freedom to access it. Most of what we think we know about the world is due to reliance on epistemic authorities, individuals or institutions that tell us what we ought to believe about Newtonian mechanics, evolution by natural selection, climate change, resurrection from the dead, or the Holocaust. The most practically fruitful epistemic norm of modernity, empiricism, demands that knowledge be grounded in sensory experience, but almost no one who believes in evolution by natural selection or the reality of the Holocaust has any sensory evidence in support of those beliefs. Instead, we rely on epistemic authorities—biologists and historians, for example. Epistemic authority cannot be sustained by empiricist criteria, for obvious reasons: salient anecdotal evidence, the favorite tool of propagandists, appeals to ordinary faith in the senses, but is easily exploited given that most people understand neither the perils of induction nor the finer points of sampling and Bayesian inference. Sustaining epistemic authority depends, crucially, on social institutions that inculcate reliable second-order norms about whom to believe about what. The traditional media were crucial, in the age of mass democracy, in promulgating and sustaining such norms. The Internet has obliterated the intermediaries who made that possible (and, in the process, undermined the epistemic standing of experts), while even the traditional media in the U.S., thanks to the demise of the “Fairness Doctrine,” have contributed to the same phenomenon. I argue that this crisis cries out for changes in the regulation of speech in cyberspace, including liability for certain kinds of false speech, incitement, and hate speech, but also for a restoration of a version of the Fairness Doctrine for the traditional media.

Pretelli on Internet Platform Users as Weaker Parties

Ilaria Pretelli (Swiss Institute of Comparative Law; University of Urbino) has posted “A Humanist Approach to Private International Law and the Internet: A Focus on Platform Users as Weaker Parties” (Yearbook of Private International Law, Volume 22 (2020/2021), pp. 201-243) on SSRN. Here is the abstract:

The apps and platforms that we use on a daily basis have increased the effective enjoyment of many fundamental rights enshrined in our constitutions and universal declarations. These were drafted to guarantee a fairer distribution of the benefits of human progress among the population. The present article argues that a humanist approach to private international law can bring just solutions to disputes arising from digital interactions. It analyses cases where platform users are pitted against a digital platform and cases where platform users are pitted against each other. For the first set of cases, an enhanced protection of digital platform users, as weaker parties, points to an expansion of the principle of favor laesi in tortious liability and to a restriction of the operation of party autonomy by clickwrapping, in consideration of the gross inequality of bargaining power that also exists in business-to-platform contracts. In the second set of cases, reliable guidance is offered by the principles of effectiveness and of protection of vulnerable parties. Exploiting the global reach of the internet to improve the situation of crowdworkers worldwide is also considered as a task to which the ILO should seriously commit. In line with the most recent achievements in human rights due diligence, protection clauses pointing to destination-based labour standards would be a welcome step forward. The principle of effectiveness justifies the enforcement of court decisions in cyberspace, which has become a political and juridical necessity.

Schwartz on The Data Privacy Law of Brexit

Paul M. Schwartz (University of California, Berkeley – School of Law) has posted “The Data Privacy Law of Brexit: Theories of Preference Change” (Theoretical Inquiries in Law, Vol. 22.2:111, 2021) on SSRN. Here is the abstract:

Upon Brexit, the United Kingdom chose to follow the path of EU data protection and remain tied to the requirements of the General Data Protection Regulation (GDPR). It even enacted the GDPR into its domestic law. This Article evaluates five models relating to preference change, demonstrating how they identify different dimensions of Brexit while providing a rich explanation of why a legal system may or may not reject an established transnational legal order. While market forces and a “Brussels Effect” played the most significant role in the decision of the UK government to accept the GDPR, important nonmarket factors were also present in this choice. This Article’s models of preference change are also useful in thinking about the likely extent of the UK’s future divergence from EU data protection.