Levin & Maas on How Could We Tell When Artificial General Intelligence is a ‘Manhattan Project’ Away

John-Clark Levin (University of Cambridge) and Matthijs M. Maas (Centre for the Study of Existential Risk, University of Cambridge; King’s College, Cambridge; University of Copenhagen – CECS Centre for European and Comparative Legal Studies) have posted “Roadmap to a Roadmap: How Could We Tell When AGI is a ‘Manhattan Project’ Away?” to SSRN. Here is the abstract:

This paper argues that at a certain point in research toward AGI, the problem may become well-enough theorized that a clear roadmap exists for achieving it, such that a Manhattan Project-like effort could greatly shorten the time to completion. If state actors perceive that this threshold has been crossed, their incentives around openness and international cooperation may shift rather suddenly, with serious implications for AI risks and the stability of international AI governance regimes. The paper characterizes how such a ‘runway’ period would be qualitatively different from preceding stages of AI research, and accordingly proposes a research program aimed at assessing how close the field of AI is to such a threshold—that is, it calls for the formulation of a ‘roadmap to the roadmap.’

Gutierrez, Marchant, Carden, Hoffner & Kearl on the Soft Law Governance of Artificial Intelligence

Carlos Ignacio Gutierrez, Gary E. Marchant, Alec Carden, Kaylee Hoffner, and Alexander Kearl (all of Arizona State University Sandra Day O’Connor College of Law) have posted “Preliminary Results of a Global Database on the Soft Law Governance of Artificial Intelligence” to SSRN. Here is the abstract:

Soft law programs create substantive expectations that are not directly enforceable by government. All kinds of organizations apply soft law to regulate the development or use of methods and applications of artificial intelligence (AI), yet limited scholarship has been devoted to studying their prevalence. This article describes the methodology and preliminary results of a project that compiled a global database of AI soft law programs. It uncovers information on the types of organizations that create such programs and examines how they are enforced, their origin and jurisdiction, their influence, and the themes of their text. As developers and users of soft law, stakeholders (private sector, governments, and civil society) face a dearth of information about their options for governing AI. The objective of this work is to make available an analysis and compilation of programs that facilitates the creation of effective soft law programs.

Clifford, Richardson & Witzleb on Artificial Intelligence and Sensitive Inferences

Damian Clifford (Australian National University College of Law), Megan Richardson (University of Melbourne Law School), and Normann Witzleb (Monash University Law School) have posted “Artificial Intelligence and Sensitive Inferences: New Challenges for Data Protection Laws” (in Mark Findlay, Jolyon Ford, Josephine Seah and Dilan Thampapillai (eds.), Regulatory Insights on Artificial Intelligence: Research for Policy (Edward Elgar, 2021)) to SSRN. Here is the abstract:

Data protection laws are under strain to respond to continuing advances in information and communications technologies, now including AI technologies. How strictly they regulate the handling of personal information and its effects on human identity varies between jurisdictions, despite efforts to achieve international harmonisation. One such area of disparity between existing data protection laws is the question of whether some types of data, designated ‘sensitive’ or ‘special’, should be subject to stricter legal or practical protection. In this article, we consider the basis on which some categories of data are accorded enhanced protection as sensitive (or special) in modern data protection regimes, and why the categories themselves may vary between jurisdictions. The blurring of the boundaries between ‘ordinary’ personal data and these sensitive categories, through the potential to draw inferences from intensive data processing facilitated by developments in artificial intelligence (more specifically, machine learning), raises important new questions for policymakers.

Albert, Delano, Penney, Rigot & Kumar on Evaluating Physical Testing of Adversarial Machine Learning

Kendra Albert (Harvard Law School), Maggie Delano (Swarthmore College Engineering Department), Jon Penney (Citizen Lab, University of Toronto; Harvard University – Berkman Klein Center for Internet & Society; Harvard Law School), Afsaneh Rigot (ARTICLE 19), and Ram Shankar Siva Kumar (Microsoft Corporation; Harvard University – Berkman Klein Center for Internet & Society) have posted “Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning” to SSRN. Here is the abstract:

This paper critically assesses the adequacy and representativeness of physical-domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as “real world.” Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, then critique the physical-domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.

Voss on the CCPA and GDPR

W. Gregory Voss (Toulouse Business School) has posted “The CCPA and the GDPR Are Not the Same: Why You Should Understand Both” to SSRN. Here is the abstract:

This article gives a comparative view of two major pieces of data privacy legislation from California and the EU, respectively: the CCPA and the GDPR. While there are similarities between the two, there are differences as well, which create challenges for compliance. For example, both instruments have extraterritorial effect; however, only the GDPR is truly omnibus legislation, given the CCPA’s carveouts for areas covered by federal legislation and its thresholds for application. This article thus aims to provide certain elements to be taken into consideration in evaluating legislation on both sides of the Atlantic.

Scholz on Indivisibilities in Technology Regulation

Lauren Henry Scholz (Florida State University – College of Law) has posted “Indivisibilities in Technology Regulation” to SSRN. Here is the abstract:

Lee Fennell’s “Slices and Lumps: Division and Aggregation in Law and Life” reveals the benefits of isolating configurations in legal analysis. A key characteristic of configurations — or “lumps” — whether found or created, is that they are indivisible. To say a lump is indivisible is not to say that it is literally impossible to divide, but rather “that it is considerably less valuable when divided, or that it is expensive (perhaps prohibitively so) to divide successfully.”

This Essay will extend Fennell’s approach to indivisibilities to the context of technology regulation. Fennell discusses at least two types of indivisibilities in the book. I will call these indivisibilities of fact and indivisibilities of law. Indivisibilities of fact are facts about the world that make it difficult to divide up a resource in ways other than predetermined lumps. Indivisibilities of law are outcomes at law that are relatively “all-or-nothing.” Indivisibilities of both types are at play in current issues in technology regulation.

With respect to indivisibilities of fact, this Essay will discuss the example of indivisibility of privacy regulation. Some argue that piecemeal, sector-specific privacy regulation is the same as no regulation at all due to realities of the technosocial environment. This comes down to a debate about the degree to which the level of consumer privacy (a fact about the world) is indivisible. With respect to indivisibilities of law, this Essay will discuss the example of consent in the law of adhesion contracts in the digital age. Whether there is consent is a binary distinction, with major implications at law. Some consumer advocates have argued that consent should be segmented into meaningful consent and less meaningful consent. But, perhaps, the concept of consent is indivisible. Whether or not consent can be understood as divisible (a characteristic of the law) has major implications for this area of law and policy.

Recommended.