Lehr & Stocker on The Growing Complexity of Digital Economies over the GenAI Waterfall: Challenges and Policy Implications

William Lehr (Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL)) and Volker Stocker (Technische U Berlin (TU Berlin)) have posted “The Growing Complexity of Digital Economies over the GenAI Waterfall: Challenges and Policy Implications” on SSRN. Here is the abstract:

The GenAI genie is out of the bottle. AI, and its vanguard GenAI, is a change agent that profoundly impacts the global transition to a digital economy. GenAI is already percolating through businesses and tasks, transforming how we create, innovate, and consume content (information), products, and services. As it is deployed more widely across all layers and components of value chains, it brings new affordances that have transformed (or are slated to transform) nearly all conceivable social and economic contexts. Changes will affect online and offline worlds directly and indirectly. Users of AI models, tools, and services have been interacting with GenAI for some time already, but the indirect effects of GenAI are inherently less obvious and harder to assess, especially at this early stage. End users are often unaware of how the products and services they use are produced, and even for domain experts, assessing the social and economic impact of ICTs has always proved difficult. Those challenges will only grow more acute with GenAI because its ability to operate in the background (a direct result of automation) means that many of those affected by GenAI will be unaware that—or how—GenAI is already impacting them.

In this article, we examine emerging policy challenges in two interrelated areas: the growing complexity of technical and business relationships in AI-driven digital ecosystems and changing concerns about asymmetric information and transparency. While GenAI should be viewed as part of a broader trajectory of ICT-based automation, our aim is to highlight how and why GenAI-related policy challenges differ. Although we cannot predict the post-waterfall future with any precision, it is clear that GenAI will be part of the landscape and will be a tool policymakers will need to address future challenges. That makes two requirements for future policymaking clear: we need a much better and more capable multi-stakeholder measurement ecosystem, and we need to strengthen policymakers’ human multidisciplinary institutional capacity.

Stazi on Creativity, Authorship and AI

Andrea Stazi (U San Raffaele Roma) has posted “Creativity, Authorship and AI” on SSRN. Here is the abstract:

With AI, the relationship between technology and IP is more complex than ever.

The David Guetta example, where AI was used to create lyrics and a voice in the style of Eminem, illustrates a new model of creativity.

This model involves an iterative, dynamic process of 1. conception, 2. prompting, 3. generation, 4. refining, and 5. deployment, where a human plays a crucial role.

However, regulatory approaches around the world are diverging – think of the USPTO guidelines, which limit protection to human works, versus UK protection of computer-generated works – and questions arise about how to protect creativity on the one hand and investments on the other.

To craft a balanced IP policy framework, we must carefully reconsider the key features of authorship, the interplay of idea and expression, the essence of creativity and the proper way to protect investments.

Rethinking copyright from this perspective can incentivize both the development and creative application of AI while upholding the fundamental principles of copyright for human authors and promoting broad access to AI-assisted works.

Ríos on Can What an AI Produces be Understood and Unraveled

Mauro D. Ríos (The Internet Society) has posted “Can What an AI Produces be Understood and Unraveled” on SSRN. Here is the abstract:

Over the past few decades, AI has radically transformed industries as diverse as medicine and finance, providing solutions with high levels of efficiency and accuracy that were previously unattainable (Revolutionizing healthcare, 2023).

However, the sophistication of these models, which include deep neural networks with millions of parameters and sophisticated mechanisms for producing results, has led to the perception that they operate as a “black box” whose internal logic is inaccessible to human understanding (Hyperight, 2024).

Far from being an intrinsic feature of AI, this opacity is the result of both the volume and heterogeneity of training data and the lack of adequate methodologies to record and unravel each phase of the internal calculation (Stop Explaining Black Box Models, 2022).

To overcome these myths and reveal the “why” and “how” of AI decisions, various interpretability and auditing techniques have been developed. In addition, relevance propagation methodologies, such as Layer-Wise Relevance Propagation (LRP), make it possible to track, layer by layer, the influence of each “digital neuron” on an AI’s final decision (Montavon et al., 2019).

While these tools offer an unprecedented level of visibility, their practical application involves addressing challenges of scale and computational cost. Exhaustive logging of execution traces and parameters during training demands distributed computing infrastructures and storage systems designed for metadata versioning (Unfooling Perturbation-Based Post Hoc Explainers, 2022).

A comprehensive understanding of AI processes requires not only the use of advanced interpretability techniques, but also the establishment of governance frameworks and structured documentation. Reports from organizations such as the Centre for International Governance Innovation (CIGI) underline the need for accountability policies that require detailed records of each phase of the AI model lifecycle, from data selection to production (Explainable AI Policy, 2023). Without these mechanisms, the aspiration to full interpretability will remain limited by practical and organizational barriers: not because we cannot know why an AI does what it does, but because we have failed to implement the appropriate mechanisms to know, thereby compromising transparency and trust in critical AI applications.

Knowing what an AI does and why is therefore within our reach, but it requires instruments, time, and resources, and in each case we must decide whether that effort is justified or accept that we will be selective about when to pursue it.
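
The Ríos abstract above describes Layer-Wise Relevance Propagation as a way to track, layer by layer, how much each neuron contributes to a model’s output. Below is a minimal, hypothetical NumPy sketch of LRP’s epsilon rule on a tiny fully connected ReLU network; the network, weights, and layer sizes are illustrative assumptions and are not drawn from the paper.

```python
import numpy as np

# Illustrative sketch of Layer-wise Relevance Propagation (epsilon rule)
# for a small fully connected ReLU network. All weights and sizes are
# hypothetical placeholders, not taken from the paper discussed above.

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 8)) * 0.5, np.zeros(8)),   # input -> hidden
          (rng.standard_normal((8, 3)) * 0.5, np.zeros(3))]   # hidden -> output


def forward(x):
    """Run the network, keeping each layer's input activations for LRP."""
    activations = [x]
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
        activations.append(x)
    return activations


def lrp(x, target, eps=1e-6):
    """Redistribute the target output score back to the inputs, layer by layer."""
    activations = forward(x)
    relevance = np.zeros_like(activations[-1])
    relevance[target] = activations[-1][target]    # start from the chosen output neuron
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser avoids division by ~0
        s = relevance / z
        relevance = a * (s @ W.T)                  # relevance attributed to the previous layer
    return relevance


x = np.array([1.0, -0.5, 0.3, 2.0])
scores = forward(x)[-1]
print("input relevances:", lrp(x, target=int(scores.argmax())))
```

The key design idea is conservation: the output score is redistributed backwards, with each layer’s relevance split across the neurons that fed it in proportion to their contribution to the pre-activation, so the final vector can be read as a per-input attribution for that decision.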

Levantino on Assessing the Risks of Emotion Recognition Technology in Domestic Security Settings: What Safeguards against the Rise of ‘Emotional Dominance’?

Francesco Paolo Levantino (Scuola Superiore Sant’Anna di Pisa) has posted “Assessing the Risks of Emotion Recognition Technology in Domestic Security Settings: What Safeguards against the Rise of ‘Emotional Dominance’?” on SSRN. Here is the abstract:

In light of the growing interest in biometric technologies among public authorities, civil society, and international organisations, this chapter focusses on some risks associated with the use of Emotion Recognition Technology (ERT) by Law Enforcement Agencies (LEAs). In fact, despite significant attention being directed towards Facial Recognition Technology (FRT) and its uses for analogous purposes, ERT has received comparatively limited scrutiny. The chapter argues that this imbalance is reflected in the European Union’s AI Act, which inadequately addresses the potential risks associated with ERT – including its capacity to generate forms of ‘emotional dominance’. By contextualising ERT within the broader category of biometric systems, the discussion highlights the distinctive characteristics of ERT, while detailing its similarities and differences with FRT and biometric categorisation systems. The analysis shows that, beyond the intuitive perception of ERT as more intrusive due to its focus on extracting emotional states from bodily cues, its contested scientific foundations raise substantial concerns regarding its deployment in the security sector. Before some concluding remarks, the chapter draws historical parallels between the military origins of some biometric technologies and their adaptation for law enforcement, illustrating how the integration of military technologies into LEAs’ practices can affect their relationship with fundamental rights and freedoms in democratic societies. Also, it specifies how the AI Act classifies the use of ERT in law enforcement and highlights the core issues it identifies in this respect. The chapter emphasises the need for more robust regulatory frameworks to protect against the interferences posed by ERT in the context of law enforcement, asserting that existing protections under international and European human rights law should serve as a litmus test for the deployment of modern technologies by LEAs.

Hine et al. on The Impact of Modern Big Tech Antitrust on Digital Sovereignty

Emmie Hine (Yale U Digital Ethics Center) et al. have posted “The Impact of Modern Big Tech Antitrust on Digital Sovereignty” on SSRN. Here is the abstract:

This article examines the history of antitrust cases against Big Tech companies in the United States. It highlights a shift in the attitudes of enforcers away from the Chicago and post-Chicago schools of antitrust thought, which are informed by economic analysis, towards New Brandeisian thinking, which emphasizes structural concerns and broader notions of consumer welfare; that thinking, however, has yet to catch on in courtrooms. By contrasting the US’s antitrust strategy with those of the European Union and China, we argue that antitrust enforcement may hinder economic and technological competitiveness in the short term but may have long-term benefits. Regarding global digital sovereignty, increased US enforcement likely would not impact its global competitiveness, as the US still presents a more favorable regulatory environment than the EU, and targeted economic measures prevent Chinese companies from being competitive in the US. New legislation may help address the complexities of modern digital markets so that the US can maintain its competitive edge in technology while enhancing consumer welfare.

Chaiehloudj on Musk v. OpenAI: Antitrust and the Boundaries of Strategic Litigation in the AI Sector

Walid Chaiehloudj (U Côte d’Azur) has posted “Musk v. OpenAI: Antitrust and the Boundaries of Strategic Litigation in the AI Sector” (European Competition and Regulatory Law Review (CoRe), forthcoming) on SSRN. Here is the abstract:

This paper analyzes the recent decision in Musk v. Altman (N.D. Cal., March 2025), in which the United States District Court denied a preliminary injunction sought by Elon Musk and his company xAI against OpenAI and Microsoft. The plaintiffs alleged that OpenAI and Microsoft had entered into an unlawful group boycott by pressuring investors not to fund competing AI companies, in violation of Section 1 of the Sherman Act. The court rejected the claim on both procedural and substantive grounds, notably finding that Musk lacked standing and that the evidence presented, consisting mainly of media articles, was insufficient to establish a plausible antitrust violation or irreparable harm.

Beyond its procedural lessons, Musk v. Altman illustrates the intensifying global battle for dominance in AI markets and the legal complexities accompanying it. The court’s decision ultimately favors a model of competition based on innovation rather than speculative or strategic litigation.

Bonadio & Felisberto on Copyrightability of AI Outputs: The US Copyright Office’s Perspective

Enrico Bonadio (City U London) and Honor Felisberto (U Lausanne Law) have posted “Copyrightability of AI Outputs: The US Copyright Office’s Perspective” (European Intellectual Property Review) on SSRN. Here is the abstract:

In August 2023, the United States Copyright Office (USCO) published a Notice of Inquiry (NOI) and request for comments on the intersection between Artificial Intelligence (AI) and copyright. The USCO had earlier announced it would issue a Report in several Parts analysing the comments received. On July 31, 2024, the first Part of the Report, on the topic of digital replicas, was published. The second Part of the Report, available since January 29, 2025, addresses the copyrightability of outputs generated by AI systems. This short note offers a summary of the latter, more precisely of the USCO’s recommendations.

Takhshid on Virtual Dignitary Torts

Zahra Takhshid (U Denver Sturm College Law) has posted “Virtual Dignitary Torts” (The Journal of Tort Law forthcoming in Volume 18 Issue 1, 2025) on SSRN. Here is the abstract:

The emergence of the metaverse and spatial computing, which have enabled immersive digital interactions, raises complex legal questions. This work examines the feasibility of addressing dignitary torts, such as battery and intentional infliction of emotional distress, committed via avatars. The particular challenge for tort law is the nonphysical nature of self-representations in these virtual spaces. Drawing from the historical evolution of several dignitary torts, such as the law of battery and emotional harm, this article argues that the key to recognizing such harms is appreciating how these torts have expanded from protecting the physical body to protecting a broader concept of the “self.” In doing so, tort law has demonstrated both its willingness and capacity to recognize new forms of wrongs without sacrificing its core principles. Accordingly, this essay lays the groundwork for recognizing harms in virtual spaces and offers several initial considerations for a dignitary tort liability regime and the extension of the self in extended reality spaces. Bridging the gap between evolving technology and traditional tort law is a must in a world where virtual interactions carry increasingly real consequences.

Raso on Interoperable AI Regulation

Jennifer Raso (McGill U Law) has posted “Interoperable AI Regulation” (Forthcoming in the Canadian Journal of Law and Technology) on SSRN. Here is the abstract:

This article explores “interoperability” as a new goal in AI regulation in Canada and beyond. Drawing on sociotechnical, computer science, and digital government literatures, it traces interoperability’s conceptual genealogy to reveal an underlying politics that prioritizes harmony over discord and consistency over plurality. This politics, the article argues, is in tension with the distinct role of statutory law (as opposed to regulation) in a democratic society. Legislation is not simply a technology through which one achieves the smooth operation of governance. Rather, legislation is better understood as a “boundary object”: an information system through which members of different communities make sense of, and communicate about, complex phenomena. This sense-making includes and even requires disagreement, the management and resolution of which is a vital function of law and, indeed, of any information system.

Lee & Souther on Beyond Bias: AI as a Proxy Advisor

Choonsik Lee (U Rhode Island) and Matthew E. Souther (U South Carolina Darla Moore Business) have posted “Beyond Bias: AI as a Proxy Advisor” on SSRN. Here is the abstract:

After documenting a trend towards increasingly subjective proxy advisor voting guidelines, we evaluate the use of artificial intelligence as an unbiased proxy advisor for shareholder proposals. Using ISS guidelines, our AI model produces voting recommendations that match ISS in 79% of proposals and better predicts shareholder support than ISS recommendations alone. Disagreements between AI and ISS are more likely when firms disclose hiring a third-party governance consultant, suggesting these consultants (often the proxy advisor itself) may influence recommendations. These findings offer insight into proxy advisor conflicts of interest and demonstrate AI’s potential to improve transparency and objectivity in voting decisions.