Tschider & Ho on Artificial Intelligence and Intellectual Property in Healthcare Technologies

Charlotte Tschider (Loyola U Chicago Law) and Cynthia M. Ho (Loyola U Chicago Law) have posted “Artificial Intelligence and Intellectual Property in Healthcare Technologies” (Ch. 11 in Research Handbook on Health, AI and the Law (Barry Solaiman & I. Glenn Cohen eds., Edward Elgar), https://doi.org/10.4337/9781802205657.00018) on SSRN. Here is the abstract:

Artificial intelligence (AI) healthcare technologies involve a wide variety of AI innovations that could potentially qualify for intellectual property (IP) protection, corresponding to multiple forms of protection. In addition, protection for AI raises novel issues that may require modifying existing laws. This chapter examines how current IP law applies to human-generated AI creations and policy issues that should be considered as organisations and countries re-examine IP policy. After this brief introduction, section 2 provides an introduction to IP, section 3 details AI in healthcare to better understand IP issues, and section 4 addresses issues AI owners will likely encounter in IP strategy. Finally, section 5 addresses policy issues for lawmakers to consider.

Dou et al. on The Solution to Copyright Abuse in the Era of Artificial Intelligence in China

Wu Dou (Guangxi Normal U) et al. have posted “The Solution to Copyright Abuse in the Era of Artificial Intelligence in China” on SSRN. Here is the abstract:

The era of artificial intelligence is a “double-edged sword” for copyright: it benefits authors and society, but if the rights it confers are not restrained, a crisis of copyright abuse can easily arise. In the age of artificial intelligence, copyright abuse is evident across different fields, and opinions about it diverge. We therefore need to move from phenomenon to essence, identify the specific harms, trace their causes through the human sciences, economics, and legal theory, and explore appropriate legal regulation. At the same time, we should build a social regulatory system with law at its core, supplemented by comprehensive governance that is technology-led and ethics-guided. In terms of law, this means improving the legal framework for imposing implied-permission restrictions; in terms of technology, using blockchain to monitor copyright abuse and using big data and filtering tools to optimize platform governance; and in terms of ethics, establishing a community of shared ethical values and improving methods of ethical risk governance in practice.

Atkinson et al. on Intentionally Unintentional: GenAI Exceptionalism and the First Amendment

David Atkinson (The U Texas Austin) et al. have posted “Intentionally Unintentional: GenAI Exceptionalism and the First Amendment” (Forthcoming, First Amendment Law Review 2025) on SSRN. Here is the abstract:

This paper challenges the assumption that courts should grant outputs from large generative AI models, such as GPT-4 and Gemini, First Amendment protections. We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent, so there can be no speech to protect. Furthermore, if the model outputs are not speech, users cannot claim a First Amendment right to receive the outputs. We also argue that extending First Amendment rights to AI models would not serve the fundamental purposes of free speech, such as promoting a marketplace of ideas, facilitating self-governance, or fostering self-expression. In fact, granting First Amendment protections to AI models would be detrimental to society because it would hinder the government’s ability to regulate these powerful technologies effectively, potentially leading to the unchecked spread of misinformation and other harms.

Crouch on Using Intellectual Property to Regulate Artificial Intelligence

Dennis David Crouch (U Missouri Law) has posted “Using Intellectual Property to Regulate Artificial Intelligence” (89:3 Missouri Law Review 1 (2024)) on SSRN. Here is the abstract:

This Article examines the complex relationship between intellectual property (“IP”) rights and the regulation of artificial intelligence (“AI”). It advances two primary claims: First, while IP plays a role in guiding innovative behaviors in AI development, it does not serve as an effective mechanism for direct regulation of AI. This claim is based on the observation that IP rights, such as patents and copyrights, are primarily designed to incentivize innovation and protect creative works, while lacking the levers necessary to address the broader societal implications of AI technology. The narrow focus of IP rights on rewarding creators makes them ill-suited for managing the more complex ethical, safety, and societal challenges posed by AI systems. Furthermore, it contends that relying on IP for AI regulation could lead to unintended consequences, such as stifling important research or exacerbating existing power imbalances in the tech industry. 

The Article’s second primary claim is that the relationship between IP rights and AI regulation can be pernicious, as IP rights may hinder AI regulation and development in several ways. This analysis is done largely through the lens of copyright and trade secrecy. The Article analyzes how copyright law impacts AI development, particularly regarding the use of copyrighted works for training AI models and the protection of AI-generated outputs. The discussion also examines the tension between trade secret protection and the regulatory goals of transparency and explainability in AI systems. 

Ultimately, the Article concludes that IP should play a supporting role in AI governance rather than serve as the primary legal and regulatory lever.

Cohen et al. on Provisioning Digital Tools and Systems for Government Use

Julie E. Cohen (Georgetown U Law Center) et al. have posted “Provisioning Digital Tools and Systems for Government Use” (Redesigning the Governance Stack Project at Georgetown Law) on SSRN. Here is the abstract:

This document is part of a larger project aimed at reinventing the administrative state for effective governance of the digital, information-driven economy. It explores how the administrative state can more effectively equip itself with digital tools and systems that align with and improve government’s ability to serve public values.  Established approaches to digital provisioning fail in many important respects. Among others, they introduce thorny coordination problems while doing little to ensure design for broader public values; they cause obsolete and/or poorly conceived requirements to cascade through the development process for new tools and systems; they magnify the potential for technology-driven lock-in and vendor capture at scale; and they are unacceptably opaque to policymakers and the public. We trace some of these dysfunctions to the private-sector preference that underpins federal govtech provisioning and others to a top-down mode of development in which “solutions” are decreed at the outset rather than after consultation and conversation. The paper recommends a series of changes to the current policy landscape for govtech provisioning to correct these dysfunctions. One important recommendation involves rethinking the traditional “make vs. buy” dichotomy in public procurement and the underlying presumptions that have animated the dichotomy. Recentering public values and outcomes in govtech development also requires measures for ensuring the interoperability and transparency of govtech tools and systems. Another important recommendation involves reenvisioning processes for govtech development and implementation.

Coan & Surden on Artificial Intelligence and Constitutional Interpretation

Andrew Coan (U Arizona) and Harry Surden (U Colorado Law) have posted “Artificial Intelligence and Constitutional Interpretation” on SSRN. Here is the abstract:

This Article examines the potential use of large language models (LLMs) like ChatGPT in constitutional interpretation. LLMs are extremely powerful tools, with significant potential to improve the quality and efficiency of constitutional analysis. But their outputs are highly sensitive to variations in prompts and counterarguments, illustrating the importance of human framing choices. As a result, using LLMs for constitutional interpretation implicates substantially the same theoretical issues that confront human interpreters. Two key implications emerge: First, it is crucial to attend carefully to particular use cases and institutional contexts. Relatedly, judges and lawyers must develop “AI literacy” to use LLMs responsibly. Second, there is no avoiding the burdens of judgment. For any given task, LLMs may be better or worse than humans, but the choice of whether and how to use them is itself a judgment requiring normative justification.

Pasquale & Kiriakos on Contesting the Inevitability of Scoring: The Value(s) of Narrative in Consumer Credit Allocation

Frank Pasquale (Cornell Law) and Mathieu Kiriakos (U Sherbrooke) have posted “Contesting the Inevitability of Scoring: The Value(s) of Narrative in Consumer Credit Allocation” (Algorithmic Transformations of Power: Between Trust, Conflict, and Uncertainty, edited by C. Burchard and I. Spiecker (Nomos, forthcoming 2025)) on SSRN. Here is the abstract:

When firms allocate credit to consumers, credit scoring often seems both inevitable (how else could the decision be made?) and desirable (how else could the decision be objective and fair?). We challenge both assumptions, after exploring the power asymmetries generated by scoring. Evaluations of narrative accounts of creditworthiness are plausible in at least some scenarios, despite the volume of credit applications. Moreover, these alternative paths to credit reflect normative values (such as intelligibility and fair consideration) that are just as compelling as the objectivity and fairness attributed to scoring. 

One of these values is trust. While quantitative assessments of reliability based on third-party data are designed to enable “trustless” transactions, qualitative accounts of creditworthiness depend on evaluators’ trusting the accounts of creditworthiness offered by those applying for credit. What this shift potentially loses in efficiency it has the potential to gain in mutual understanding, the alleviation of alienation, and opportunities for redemption. It also represents a democratization of power in financial relationships, requiring those with funds to lend to do a bit more to understand at least some of those applying for credit on their own terms, rather than forcing applicants into Procrustean beds of data analytics.

Lévesque on Brief on Bill C-27 (Canada AI and Data Act)

Maroussia Lévesque (Harvard U) has posted “Brief on Bill C-27 (Canada AI and Data Act)” on SSRN. Here is the abstract:

With a rich and diverse AI ecosystem and a longstanding tradition of rights-promoting policies, Canada has what it takes to lead innovative AI regulation. Section 1 recommends that the best path forward is a reset on AIDA, with broader upstream consultations to inform the drafting. Should this Committee move forward with clause-by-clause review of the current bill, section 2 proposes some amendments to improve the current version.

Smith on Law and Technological Innovations: Three Reasons to Pause

Michael L. Smith (St. Mary’s U Law) has posted “Law and Technological Innovations: Three Reasons to Pause” (12 Belmont Law Review (Forthcoming 2025)) on SSRN. Here is the abstract:

Faced with optimistic accounts of technological innovations, businesses, law firms, and governments come under pressure to rush into adopting these technologies and to enjoy the increased efficiency, reduced costs, and other benefits that are promised. This essay sets forth reasons to pause before adopting such technologies. First, new technology is often contrasted with unrealistically dire portrayals of the status quo, which leads to exaggerated accounts of how beneficial the new technology will be. Second, overconfidence in technological fixes, as well as tendencies against revisiting and critiquing traditional ways of doing things, may lead to an entrenchment of harmful systems. Third, the institutional incentives and pressures in which technology is employed may affect how that technology is used—leading to unanticipated consequences for those who only consider how technology functions in non-legal settings.

While I urge reasons to pause, I do not counsel wholesale rejection of technological innovation. Those considering adopting new technologies should, at the outset, demand transparency from those who manufacture and market technology, particularly the avoidance of imprecise terminology. Developing policies in advance to review and audit new technology may also ensure that those adopting it get what they pay for, and may help mitigate unanticipated harmful consequences. Finally, contracts with those offering new technology should have frequent renewal opportunities built in to allow those adopting the technology to demand action or back out of adopting the technology should promised benefits never materialize.

Pasquale on Review of High Wire: How China Regulates Big Tech and Governs its Economy

Frank Pasquale (Cornell Law) has posted “Review of High Wire: How China Regulates Big Tech and Governs its Economy” (Regulation and Governance, Volume 18 (forthcoming, 2024)) on SSRN. Here is the abstract:

High Wire should prove a vital starting point for both experts and non-experts seeking to understand the nature of Chinese technology regulation. Building from a focus on technology platforms, Zhang has also developed a more general theory of economic regulation. I expect to see future studies of Chinese economic governance apply and build upon her framework.