Lubin on The Prohibition on Extraterritorial Enforcement Jurisdiction in the Datasphere

Asaf Lubin (Indiana University Maurer School of Law; Berkman Klein Center for Internet & Society; Yale University – Information Society Project; Federmann Cybersecurity Center, Hebrew University of Jerusalem Faculty of Law) has posted “The Prohibition on Extraterritorial Enforcement Jurisdiction in the Datasphere” (Handbook on Extraterritoriality in International Law (Austen L. Parrish and Cedric Ryngaert eds., forthcoming, 2022)) on SSRN. Here is the abstract:

The omnipresent and ever-fluid nature of the datasphere complicates the work of our cyber constables. Our conventional understanding of a sovereign’s right to exclude others—the prohibition on extraterritorial enforcement jurisdiction that was reaffirmed in the famous Lotus case—may start to feel somewhat anachronistic in the face of new emerging technologies for remote searches and seizures. Modern law enforcement agencies are further bolstered by a data ecosystem which centers around powerful corporate intermediaries who may, on occasion, be coopted or coerced to collaborate in incidents of extraterritorial enforcement overreach.

Consider, for example, the following non-exhaustive list of cyber enforcement activities. Which of these techniques might you deem tolerable when employed against a target abroad without the consent or knowledge of the foreign state? Which of these might you consider to be crossing a threshold, and what factual and legal factors might influence your determination?

(1) Data scraping from social media platforms, other websites, and open-access databases located on servers abroad to import information.
(2) Subverting the command-and-control server of an anonymized botnet operating from one of the corners of the “dark web.”
(3) Electronically tracing and restoring cryptocurrency payments that were paid to a foreign criminal cyber gang involved in a crippling ransomware attack.
(4) Compelling a domestically registered company to release certain data concerning a national involved in a domestic crime, where the data is stored abroad.

In this chapter I explore each of these four scenarios. Each scenario ties to a different aspect of the datasphere which frays at the edges of traditional doctrine. These four aspects are: (1) consent, (2) anonymization, (3) piracy, and (4) data un-territoriality. For each of these aspects I try to demonstrate how jurisdictional rules may evolve, as a matter of lex ferenda, to better balance territorial integrity and cyber stability. My analysis thus attempts to provide a preliminary taxonomy of certain categories of cyber policing activity that could serve as a roadmap for future rule-prescribers and rule-appliers. Given the rise in cybercrime in recent years, the paper ultimately challenges the normative validity and factual sustainability of the current doctrinal tradeoffs between external sovereignty and cyber stability.

Pohle & Thiel on Digital Sovereignty

Julia Pohle (WZB Berlin Social Science Center) and Thorsten Thiel (same) have posted “Digital Sovereignty” (in Herlo et al. (eds.), Practicing Sovereignty: Digital Involvement in Times of Crises (2021)) on SSRN. Here is the abstract:

Over the last decade, digital sovereignty has become a central element in policy discourses on digital issues. Although it has become popular in both centralized/authoritarian and democratic countries alike, the concept remains highly contested. After investigating the challenges to sovereignty apparently posed by the digital transformation, this essay retraces how sovereignty has re-emerged as a key category with regard to the digital. By systematizing the various normative claims to digital sovereignty, it then goes on to show how, today, the concept is understood more as a discursive practice in politics and policy than as a legal or organizational concept.

Chatziathanasiou on ‘Hungry Judges’ Should not Motivate the Use of ‘Artificial Intelligence’ in Law

Konstantin Chatziathanasiou (Institute for International and Comparative Public Law, University of Münster; MPI for Research on Collective Goods) has posted “Beware the Lure of Narratives: ‘Hungry Judges’ Should not Motivate the Use of ‘Artificial Intelligence’ in Law” (German Law Journal, forthcoming) on SSRN. Here is the abstract:

The ‘hungry judge’ effect, as presented by a famous study, is a common point of reference to underline human bias in judicial decision-making. This is particularly pronounced in the literature on ‘artificial intelligence’ (AI) in law. Here, the effect is invoked to counter concerns about bias in automated decision-aids and to motivate their use. However, the validity of the ‘hungry judge’ effect is doubtful. In our context, this is problematic for, at least, two reasons. First, shaky evidence leads to a misconstruction of the problem that may warrant an AI intervention. Second, painting the justice system worse than it actually is, is a dangerous argumentative strategy as it undermines institutional trust. Against this background, this article revisits the original ‘hungry judge’ study and argues that it cannot be relied on as an argument in the AI discourse or beyond. The case of ‘hungry judges’ demonstrates the lure of narratives, the dangers of ‘problem gerrymandering’, and ultimately the need for a careful reception of social science.

Mökander et al. on A Guide to the Role of Auditing in the Proposed European AI Regulation

Jakob Mökander (Oxford Internet Institute) et al. have posted “Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation” on SSRN. Here is the abstract:

The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.

Raizonville & Lambin on Algorithmic Explainability and Obfuscation under Regulatory Audits

Adrien Raizonville (Institut Polytechnique de Paris) and Xavier Lambin (ESSEC Business School) have posted “Algorithmic Explainability and Obfuscation under Regulatory Audits” on SSRN. Here is the abstract:

The best-performing and most popular algorithms are often the least explainable. In parallel, there is growing concern and evidence that sophisticated algorithms may engage, autonomously, in profit-maximizing but welfare-reducing strategies. Drawing on the literature on self-regulation, we model a regulator who seeks to encourage firms’ compliance to socially desirable strategies through the threat of (costly and imperfect) audits. Firms may invest in explainability to better understand their algorithms and reduce their cost of compliance. We find that, when audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator’s detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed.

Taeihagh on Governance of AI

Araz Taeihagh (NUS) has posted “Governance of Artificial Intelligence” (Policy and Society, 40:2) on SSRN. Here is the abstract:

The rapid developments in Artificial Intelligence (AI) and the intensification in the adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics and the like pose serious challenges to governments as they must manage the scale and speed of socio-technical transitions occurring. While there is considerable literature emerging on various aspects of AI, governance of AI is a significantly underdeveloped area. The new applications of AI offer opportunities for increasing economic efficiency and quality of life, but they also generate unexpected and unintended consequences and pose new forms of risks that need to be addressed. To enhance the benefits from AI while minimising the adverse risks, governments worldwide need to understand better the scope and depth of the risks posed and develop regulatory and governance processes and structures to address these challenges. This introductory article unpacks AI and describes why the governance of AI should be gaining far more attention given the myriad of challenges it presents. It then summarises the special issue articles and highlights their key contributions. This special issue introduces the multifaceted challenges of governance of AI, including emerging governance approaches to AI, policy capacity building, exploring legal and regulatory challenges of AI and robotics, and outstanding issues and gaps that need attention. The special issue showcases the state-of-the-art in the governance of AI, aiming to enable researchers and practitioners to appreciate the challenges and complexities of AI governance and highlight future avenues for exploration.

van der Donk on What Exactly Is a Social Media Platform? A Study of the Equivalents of Social Media Platforms in European Law

Berdien van der Donk (University of Copenhagen Law) has posted “What Exactly Is a Social Media Platform? A Study of the Equivalents of Social Media Platforms in European Law” on SSRN. Here is the abstract:

What exactly is a social media platform? Can it be compared to a public park, a stadium, an electricity company, or perhaps to something non-existing in the physical world around us? The question of who gets to decide what can be posted on social media platforms is closely intertwined with the question of what social media platforms are and how these platforms and their content should be regulated. However, a consensus on the answer to these questions does not exist.

This article contributes to the discussion on the qualification and regulation of social media platforms. It starts by clarifying the terminological inconsistencies regarding public utilities, services of general interest, universal services, and essential facilities in European law. The author continues with a literature review to summarise the current debate on the offline equivalent of social media platforms. It will show that, overarchingly, two different debates exist: on the one hand, whether platforms can be regulated as public utilities, and on the other hand, whether platforms can be compared to either a private space, a public space, or a public sphere. Subsequently, an in-depth analysis is carried out.

The author concludes, firstly, that under European law, a social media platform cannot be an essential facility as these platforms simply do not fulfil the requirements. Secondly, social media platforms should not be regulated as a service of general interest, because of their worldwide application. They could, however, be regulated as universal services, but not without extensive justification. Thirdly, since they are privately owned, social media platforms are not public places. The author argues that a social media platform is more suited to be compared to a privately owned, freely accessible place (e.g., a stadium) than a public sphere, as social media platforms do not significantly differ from existing private undertakings open to the general public.

Cachon, Dizdarer & Tsoukalas on Decentralized or Centralized Control of Online Service Platforms: Who Should Set Prices?

Gerard P. Cachon (UPenn Wharton), Tolga Dizdarer (UPenn Wharton), and Gerry Tsoukalas (UPenn Wharton) have posted “Decentralized or Centralized Control of Online Service Platforms: Who Should Set Prices?” on SSRN. Here is the abstract:

Online service platforms that enable customers to connect with a large population of independent servers have been successfully developed in many sectors, including transportation, lodging, and delivery, among others. We ask a basic, yet fundamentally important, question – who should set the prices on the platform? The platform or the servers? In addition to regulatory implications for the classification of the workers on the platform as either employees or contractors, this choice influences the degree of competition among servers, and in turn determines both the amount of supply available and the overall attractiveness of the platform to consumers. We find that when the platform uses a simple commission contract to earn revenue, the price delegation decision depends on the importance of regulating competition among the large population of servers relative to the value of allowing servers to tailor their prices to their privately known costs. The same tradeoff exists in fully disintermediated platforms, such as those enabled with blockchain technology. However, merely adding appropriate linear quantity discounts or surcharges to the basic commission contract maximizes the platform’s revenue and allows all participants to enjoy the benefits of both centralized and decentralized control of prices.

Alston on Norms, Institutions and Digital Veils of Ignorance

Eric Alston (University of Colorado) has posted “Norms, Institutions and Digital Veils of Ignorance – Do Network Protocols Need Trust Anyway?” on SSRN. Here is the abstract:

In larger groups, social rules reduce individuals’ uncertainty regarding the choices other individual group members might make. But uncertainty varies as to the extent to which it is knowable and quantifiable ex-ante. Therefore, different classes of social rules deal with the future uncertainty of individuals’ conduct in structurally distinct ways, with institutions and norms being the hallmark example of this distinction. Institutions, through their costly definition and enforcement by a known organization, require specific delineation of behavior and penalties ex-ante, meaning they of necessity confront “known unknowns” (risks), or the conduct of members of an organization that can be predicted ex-ante. Norms, in contrast, are only effective in shaping behavior if sufficiently shared within a community. This makes the application of norms automatic in expectation to an individual ordering their conduct given potential norms. This makes norms apply to ex-ante known and unknown situations alike, relative to the precision that the articulation of institutions requires with respect to human behavior. Although digital governance carries the benefits (and costs) of considerable institutional “completeness”, governance by protocol is nonetheless incomplete in the face of the complex set of exogenous shocks and human actions that a given digital networked organization will experience. This means digital institutions need to mimic the adaptability of institutions more generally, through the institutional mechanisms of flexibility detailed in this analysis, considered with respect to their specific application to distributed blockchain and centralized networks alike.
More generally, though, the fact that norms can serve as a complementary gap-filler in contexts where institutions do not reach suggests that digital organization designers cannot avoid simultaneous consideration of the human community of network users that will define the norms that become crucial in periods of true uncertainty for any organization.

Recommended.

Colangelo & Mezzanotte on Colluding Through Smart Technologies

Giuseppe Colangelo (University of Basilicata, Department of Mathematics, Computer Science and Economics; Stanford Law School; LUISS) and Francesco Mezzanotte (Roma Tre University) have posted “Colluding Through Smart Technologies: Understanding Agreements in the Age of Algorithms” on SSRN. Here is the abstract:

By affecting business strategies and consumers’ behavior, the wide-scale use of algorithms, prediction machines and blockchain technology is currently challenging the suitability of several legal rules and notions which have been designed to deal with human intervention. In the specific sector of antitrust law, the question is arising on the adequacy of the traditional doctrines sanctioning anticompetitive cartels to tackle coordinated practices which, in the absence of an explicit “meeting of the minds” of their participants, may be facilitated by algorithmic processes adopted, and eventually shared, by market actors. The main concern in these cases, discussed both at regulatory and academic level, derives from the general observation that while the traditional concept of collusive agreement requires some form of mutual understanding among parties, nowadays decision-making of firms is increasingly transferred to digitalized tools. Moving on from these premises, the paper investigates the impact that the rules applicable to the conclusion of (smart) contracts may have, from an antitrust law perspective, in the detection and regulation of anticompetitive practices.