Torrance & Tomlinson on Organic Websites: Certification of AI-Generated or Human-Written Content on the Internet

Andrew W. Torrance (U Kansas Law) and Bill Tomlinson (U California) have posted “Organic Websites: Certification of AI-Generated or Human-Written Content on the Internet” (Chicago-Kent Journal of Intellectual Property (Forthcoming)) on SSRN. Here is the abstract:

This paper proposes the development of a certification system analogous to the standards used in organic food labeling, designed to distinguish websites based on the proportion of human-written versus AI-generated content. In an era where AI plays an increasingly prominent role in content creation, this system would provide transparency for consumers and uphold fair competition in digital markets. The certification would allow website creators to present verifiable evidence of their content’s provenance, ranging from entirely human-made, to a mix of human and AI contributions, to fully AI-generated content. The paper explores the legal and policy frameworks necessary for implementing such a system, drawing on principles from trademark law, unfair advertising, and competition law. It also considers the potential administrative structures for the certification process, whether through private organizations, as seen with the Forestry Sustainability Council and the Sustainable Seafood Group, or under federal oversight, possibly by the Department of Commerce. By examining consumer preferences and the ethical implications of AI in content creation, this paper argues for a certification system that aligns with public expectations and enhances trust in digital information. The proposed system seeks to foster an environment where consumers are informed about the provenance of website content, thereby supporting informed decision-making and maintaining a level playing field in the online marketplace.

Posner & Saran on Judge AI: Assessing Large Language Models in Judicial Decision-Making

Eric A. Posner (U Chicago Law) and Shivam Saran (U Chicago Law) have posted “Judge AI: Assessing Large Language Models in Judicial Decision-Making” on SSRN. Here is the abstract:

Can large language models (LLMs) replace human judges? By replicating a prior 2 × 2 factorial experiment conducted on 31 U.S. federal judges, we evaluate the legal reasoning of OpenAI’s GPT-4o. The experiment involves a simulated appeal in an international war crimes case, with two altered variables: the degree to which the defendant is sympathetically portrayed and the consistency of the lower court’s decision with precedent. We find that GPT-4o is strongly affected by precedent but not by sympathy, similar to the students who were subjects in the same experiment but the opposite of the professional judges, who were influenced by sympathy. We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.

Lobel on The Future of Work in the Era of AI

Orly Lobel (U San Diego Law) has posted “The Future of Work in the Era of AI” (Indiana Law Journal, Vol. 100, No. 1 (2024)) on SSRN. Here is the abstract:

Artificial intelligence (AI) is revolutionizing both work itself and the processes of employment—hiring, recruitment, evaluation, compensation, performance analysis, retention, and job mobility. This Essay, based upon the 2024 Indiana Law Journal annual William R. Stewart Lecture, examines the effects of AI on work and argues for a holistic approach that harnesses the benefits of automation while addressing the inevitable systemic changes that AI is rapidly bringing to the labor market. The Essay examines two industries in which AI is already changing labor market demands: trucking and the performing arts. The Essay argues that while automation can often increase efficiency and productivity, as well as accuracy and fairness in the labor market, the rapid acceleration of AI integration will inevitably bring significant disruptions. Policymakers should separately address the emergence of new forms of inequities, the necessity for reskilling, and the need to establish more robust economic security safeguards that are not dependent on full-time, continuous employment for all. The Essay thus considers how a more equitable tax framework, publicly funded reskilling programs, safety nets like Universal Basic Income (UBI), and a proactive reimagining of work can help displaced workers adapt, thrive, and contribute to the evolving economy.

Allen et al. on Governing Intelligence: Singapore’s Evolving AI Governance Framework

Jason G Allen (Singapore Management U Centre Digital Law) et al. have posted “Governing Intelligence: Singapore’s Evolving AI Governance Framework” on SSRN. Here is the abstract:

This paper provides an outline analysis of the evolving governance framework for Artificial Intelligence (AI) in Singapore. Across the Singapore government, AI solutions are being adopted in line with Singapore’s “Smart Nation Initiative” to leverage technology to make impactful changes to the nation and the economy. In tandem, Singaporean authorities have assiduously released a growing number of governance documents, which we analyse together to chart the city-state’s approach to AI governance in international comparison. Characteristics of Singapore’s AI governance approach include an emphasis on consensus-building between stakeholders (particularly government and industry but also citizens) and voluntary or “quasi” regulation, lately with an emphasis on promulgating standards (AI Standards, n.d.) and audit-like frameworks. Singaporean regulators have also been early movers (globally, and especially in the region) in the promulgation of normative instruments on AI governance, including developing the world’s first AI Governance Testing Framework and Toolkit, AI Verify. The Singapore approach may be compelling for other jurisdictions in the region and around the world with an interest in a collaborative, balanced, and consensual approach to governing AI outside of strict regulatory mechanisms. However, any jurisdiction adopting aspects of its evolving model would have to duly account for relevant differences in social and institutional conditions.

Torrance & Tomlinson on Decentral Intelligence Agency: The Law and Autonomous Artificial Intelligence

Andrew W. Torrance (U Kansas Law) and Bill Tomlinson (U California) have posted “Decentral Intelligence Agency: The Law and Autonomous Artificial Intelligence” (Touro Law Review (to appear)) on SSRN. Here is the abstract:

Artificial intelligence is rapidly gaining autonomy across a range of domains, such as business, education, social relationships, and warfare. This article examines the legal and policy implications of autonomous AI agents, a rapidly evolving technology that challenges existing regulatory frameworks. Drawing from tort, agency, property, contract, privacy, human rights, and constitutional law, we propose a comprehensive approach to govern these increasingly independent entities. Our analysis begins with a historical perspective, tracing both the evolution of autonomous computational systems and of legal responses to such technologies. We then conduct a comparative study of AI governance across jurisdictions, highlighting regulatory gaps and best practices. Central to our discussion is the challenge of defining AI autonomy and agency in legal terms. We explore the concept of “legal personhood for AI” and its potential ramifications for liability, responsibility, and other legal issues. Through case studies and hypothetical scenarios, we illustrate the practical challenges of applying current laws to AI agents. These examples inform our proposals for legal reforms, including the creation of new AI-specific legal categories and the establishment of specialized regulatory bodies. The article also addresses the ethical dimensions of AI deployment, discussing issues of bias, privacy, and societal impact. This article aims to offer a roadmap for policymakers, legal practitioners, and technologists navigating the future of AI regulation. By balancing innovation with accountability, we seek to foster a legal environment that promotes responsible AI development while safeguarding societal interests.

Schäfer on AI, IP, and Competition Policy: Adjusting Policy Levers to a new GPT

Quentin B. Schäfer (U Strathclyde Law) has posted “AI, IP, and Competition Policy: Adjusting Policy Levers to a new GPT” (Abbott and T Schrepel (eds), Artificial Intelligence and Competition Policy (Concurrences 2024)) on SSRN. Here is the abstract:

This chapter contributes to the emerging debate on the intersection between IP as a set of legal norms and AI as a novel technology. The issue of the regulation of IP as applied to AI is highly topical in light of the rapid progress and increasing impact of AI as a technology on our lives and the numerous debates surrounding IP protection for AI tools and outputs. The chapter examines the suitability of different policy levers to improve and maintain incentives to invent and invest in AI. It argues that the fundamental technological and economic uncertainty surrounding AI renders anticipatory doctrinal adjustments in IP law, such as AI inventorship, undesirable. Rather, scholarship should focus on the design of policy levers capable of flexibly accommodating a variety of different technological and market outcomes, in particular on the IP-Competition Interface. It also proposes a research agenda for the IP-Competition Interface in relation to AI, focusing on obligations to share access to closed systems and IP rights.

Metikoš & Ausloos on The Right to an Explanation in Practice: Insights from Case Law for the GDPR and the AI Act

Ljubiša Metikoš (U Amsterdam Institute Information Law (IViR)) and Jef Ausloos (U Amsterdam Institute Information Law (IViR)) have posted “The Right to an Explanation in Practice: Insights from Case Law for the GDPR and the AI Act” (Forthcoming in Law, Innovation, and Technology 17.2 (October 2025)) on SSRN. Here is the abstract:

The right to an explanation under the GDPR has been much discussed in legal-doctrinal scholarship. This paper expands upon this academic discourse by providing insights into what questions the application of the right to an explanation has raised in legal practice. By looking at cases brought before various judicial bodies and data protection authorities across the European Union, we discuss questions regarding the scope, content, and balancing exercise of the right to an explanation. We argue, moreover, that these questions also raise important interpretative issues regarding the right to an explanation under the AI Act. Similar to the GDPR, the AI Act’s right to an explanation leaves many legal questions unanswered. Therefore, the insights from the already established case law under the GDPR can help us better understand how the AI Act’s right to an explanation should be applied in practice.

Marcus-Obiene on Towards a New International Framework for Data Governance: Proposing Data Embassy Status for Global Data Centres

Fernandez Marcus-Obiene (Independent) has posted “Towards a New International Framework for Data Governance: Proposing Data Embassy Status for Global Data Centres” on SSRN. Here is the abstract:

This article proposes a novel approach to data governance: data embassies. These extraterritorial entities would facilitate secure cross-border data transfers while addressing privacy, security, and national sovereignty concerns. By granting data embassies immunity from host country laws, the framework aims to enhance trust, reduce legal hurdles, and promote international cooperation. While challenges remain, such as concerns about accountability and varying privacy standards, data embassies offer a promising solution for the complexities of modern data governance. The framework could potentially reshape the future of the global digital economy by creating a more secure and efficient environment for data flows.

Weinzierl et al. on How Risky is my AI System? A Method for Transparent Classification of AI System Descriptions by Regulated AI Risk Categories

Sven Weinzierl (Friedrich Alexander U Erlangen Nuremberg) et al. have posted “How Risky is my AI System? A Method for Transparent Classification of AI System Descriptions by Regulated AI Risk Categories” (Proceedings of the 45th International Conference on Information Systems) on SSRN. Here is the abstract:

Risk-based artificial intelligence (AI) regulations define risk categories for AI-enabled systems. The operators of such systems must determine the risk category applicable to their AI systems. This requires detailed knowledge of the classification rules defined in the regulations. Only a few supporting tools have been developed to facilitate the task of risk classification. This paper presents a novel method that describes all the necessary steps to develop such a tool. To demonstrate and evaluate the method, it is instantiated for the European Union’s AI Act. The evaluation shows i) that the classification model achieves promising performance in predicting the risk categories for AI systems, ii) that users can effectively use the web application to carry out a risk classification, and iii) that users find SHAP text plots integrated into the web application helpful for understanding the reasons for a classification prediction.
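The classification task the abstract describes—mapping an AI system description onto a regulation’s risk tiers—can be illustrated with a minimal sketch. Note this is purely an assumption-laden toy: the keyword rules and example descriptions below are invented for illustration, and the paper’s actual approach is a trained text-classification model with SHAP-based explanations, not a rule list.

```python
# Toy sketch: map a free-text AI system description to one of the EU AI Act's
# four risk tiers. The keyword lists are invented for illustration only; the
# paper's method uses a learned classifier with SHAP text plots instead.

RISK_RULES = [
    # Checked in order of severity; the first match wins.
    ("prohibited", ["social scoring", "subliminal manipulation"]),
    ("high", ["biometric identification", "credit scoring", "recruitment"]),
    ("limited", ["chatbot", "deepfake", "emotion recognition"]),
]

def classify_risk(description: str) -> str:
    """Return the most severe risk tier whose keywords appear in the text."""
    text = description.lower()
    for tier, keywords in RISK_RULES:
        if any(kw in text for kw in keywords):
            return tier
    return "minimal"  # default tier when no rule matches

print(classify_risk("A chatbot that answers customer questions"))     # limited
print(classify_risk("An AI tool ranking candidates in recruitment"))  # high
print(classify_risk("A spam filter for internal email"))              # minimal
```

Even this toy version shows why the paper’s transparency goal matters: an operator needs to see *which* feature of the description triggered a tier, which is what the SHAP text plots in the authors’ web application provide for a learned model.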

Schrepel & Groza on Computing ‘Innovation Competition’

Thibault Schrepel (Vrije Universiteit Amsterdam) and Teodora Groza (Sciences Po Paris) have posted “Computing ‘Innovation Competition’” on SSRN. Here is the abstract:

The digitalization of markets is shifting competitive dynamics away from price-based strategies toward ‘innovation competition,’ where companies compete through new technologies. While traditional antitrust frameworks often struggle to capture the complexities of innovation-driven markets, we show that ‘computational antitrust’ provides opportunities for improvement. Drawing on a review of the latest literature, case law, and cutting-edge computational methods, we conclude with an overview of current and potential solutions to give innovation a central role in antitrust analysis.