Park on Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

Sangchul Park (Seoul National U Law) has posted “Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework” (Washington International Law Journal, Volume 33, No. 2, pp. 216-269 (forthcoming)) on SSRN. Here is the abstract:

As debates on potential societal harm from artificial intelligence (AI) culminate in legislation and international norms, a global divide is emerging in both AI regulatory frameworks and international governance structures. In terms of local regulatory frameworks, the European Union (E.U.), Canada, and Brazil follow a “horizontal” or “lateral” approach that postulates the homogeneity of AI, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the United States (U.S.), the United Kingdom (U.K.), Israel, and Switzerland (and potentially China) have pursued a “context-specific” or “modular” approach, tailoring regulations to the specific use cases of AI systems. In terms of international governance structures, the United Nations is exploring a centralized AI governance framework to be overseen by a superlative body comparable to the International Atomic Energy Agency. However, the U.K. is spearheading, and the U.S. and several other countries have endorsed, a decentralized governance model, where AI safety institutes in each jurisdiction conduct evaluations of the safety of high-performance general-purpose models pursuant to interoperable standards. This paper argues for a context-specific approach alongside decentralized governance, to effectively address evolving risks in diverse mission-critical domains, while avoiding social costs associated with one-size-fits-all approaches. However, to enhance the systematicity and interoperability of international norms and accelerate global harmonization, this paper proposes an alternative contextual, coherent, and commensurable (3C) framework. To ensure contextuality, the framework (i) bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models; and (ii) categorizes these tasks based on their application and interaction with humans as follows: autonomous, discriminative (allocative, punitive, and cognitive), and generative AI. To ensure coherency, each category is assigned specific regulatory objectives replacing 2010s vintage “AI ethics.” To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.

Nemitz on Culture, Democracy, the Rule of Law and the New Vienna School of Critical Practice of AI

Paul Nemitz (European Commission) has posted “Culture, Democracy, the Rule of Law and the New Vienna School of Critical Practice of AI” on SSRN. Here is the abstract:

From a European perspective, Austria is the hotbed of a new democratic, critical practice of AI. In contrast to the Frankfurt School and its critical theory, this new Vienna School is not primarily concerned with theory, but with the practice of shaping the new digital world, with its power imbalances and new risks for fundamental rights of people, democracy and the rule of law. The practice to be shaped is the practice of technology and business models of AI and the digital.

Jacques & Flynn on Protecting Human Creativity in AI-Generated Music with the Introduction of an AI-Royalty Fund

Sabine Jacques (U Liverpool) and Mathew Flynn (U Liverpool) have posted “Protecting Human Creativity in AI-Generated Music with the Introduction of an AI-Royalty Fund” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is posited to revolutionise the creative industries, prompting global calls for legislative intervention to ensure human creativity remains at the centre of the copyright system. As AI systems gain prowess in analysing and generating content, they promise new levels of creativity and innovation at an accelerated pace and reduced cost compared to human production. Alongside these benefits come concerns of displacement, particularly in fields like music, where AI-generated music could supplant human-authored creative endeavours. Suggestions ranging from taxation to levies have been proposed to address this challenge. This paper, however, advocates for a novel perspective: evolving copyright law not only to compensate creators for income lost to technological disruption but also to foster sustainability aligned with the principles of the Council of Europe’s European Social Charter. The proposed ‘AI-Royalty Fund’ represents a better approach to this dilemma. Such a fund would acknowledge the intrinsic value of music and support a sustainable and inclusive creative industry ecosystem. Essential to this vision is the role of a national collective entrusted with administering the fund to ensure equitable distribution, uphold the interests of human authors in an AI-driven landscape, contribute to regional and local growth plans, and foster cultural diversity and innovation. In essence, as AI redefines the boundaries of creativity, adapting the copyright paradigm becomes imperative to preserving the livelihoods of human creators while promoting a resilient and sustainable creative economy.

Grossman et al. on Does the LLMperor Have New Clothes? Some Thoughts on the Use of LLMs in eDiscovery

Maura R. Grossman (U Waterloo David R. Cheriton School of Computer Science) et al. have posted “Does the LLMperor Have New Clothes? Some Thoughts on the Use of LLMs in eDiscovery” on SSRN. Here is the abstract:

Is generative artificial intelligence (Gen AI)—specifically the use of large language models (LLMs)—the answer to eDiscovery? Widespread lavish praise for the application of LLMs to the eDiscovery task generally fails to state precisely how LLMs contribute to the core task of identifying substantially all documents responsive to a discovery request in a legal dispute. This article argues that the efficacy of LLMs for this purpose will not be established until well-defined, reproducible protocols for their use are shown to be effective through benchmark testing and through application-specific validation.