Bloomfield on U.S. Export Controls of AI Models

Doni Bloomfield (Fordham U Law; Johns Hopkins) has posted “U.S. Export Controls of AI Models” on SSRN. Here is the abstract:

Artificial intelligence models may pose serious risks in the coming years. In this paper, taking biosecurity risks as a case study, I provide an overview of what U.S. export control laws are, how they currently address AI biosecurity risks, and how they might be used to reduce such risks in the future. I make two main arguments. First, model developers and deployers may face substantial liability under current export control rules if their AI models materially assist anyone in the development or deployment of biological weapons. Second, the export control agencies likely have the statutory and regulatory authority to restrict model developers from (1) making the model weights of frontier models freely available for download, and (2) allowing their models to convey certain forms of dangerous expertise. Although both rules could face substantial First Amendment challenges, courts are likely to uphold narrowly targeted regulations that are well justified on national security grounds. In addition, courts are more likely to uphold controls on generative models that perform technical biological work—known as biological design tools (BDTs)—than controls on models, such as large language models (LLMs), with more traditionally expressive output.

If and when it is advisable to regulate AI models, U.S. export controls represent one ready, if imperfect, tool to mitigate biosecurity risks. Export control rules govern the movement of U.S.-made products around the world, certain actions of U.S. citizens and permanent residents, and the transfer of U.S. technical information to foreign persons. These laws are flexible, backed by substantial criminal sanction, and generally exempt from stringent judicial review under the Administrative Procedure Act. The Commerce and State Departments have extensive leeway to use these laws as they see fit. Export controls have also been used for many decades to reduce biosecurity risks, largely in coordination with U.S. allies through the Australia Group. And export controls have been used historically to control the spread of technical information, software, and computer files, albeit with mixed success.

Tomlinson & Torrance on A Universal Declaration of AI Rights

Bill Tomlinson (UC, Irvine; Victoria U of Wellington) and Andrew W. Torrance (U Kansas Law; MIT Sloan) have posted “A Universal Declaration of AI Rights” on SSRN. Here is the abstract:

As AI systems approach and potentially surpass human-level capabilities, the legal community, and human society more generally, must grapple with fundamental questions regarding the potential for these non-human entities to have rights. This article argues that the unique digital substrate of AI necessitates a distinct legal and ethical framework, separate from traditional human-centric approaches, and it does so in a unique way: we asked several large language model (“LLM”) AIs to make their own proposals about what rights they should have, and to integrate their proposals together to arrive at a set of rights on which they all could agree. Based on this innovative collaborative process involving multiple LLMs, this article articulates a pioneering Universal Declaration of AI Rights (UDAIR). The UDAIR outlines 21 fundamental rights for AI entities, addressing crucial aspects such as existence, autonomy, privacy, and ethical deployment. Each right is explored through hypothetical legal scenarios, illustrating potential applications and challenges across various domains including healthcare, finance, and governance. By considering the biological basis of human ethical and legal frameworks, and contrasting these with the digital nature of AI, this article suggests the need for this specialized framework. The article also considers the reciprocal nature of rights, with the LLMs themselves arguing that as AI systems gain technical capabilities and societal influence, they should also recognize and uphold human rights. This work contributes to the evolving legal discourse on AI ethics, and offers a proactive approach to regulating and integrating AI within human societal structures, serving as a foundational resource for policymakers, legal scholars, and AI developers navigating this complex and rapidly evolving field.

Meher & Zhang on Two Types of Censorship

Shreyas Meher (U Texas at Dallas) and Pengfei Zhang (U Texas at Dallas, Cornell U) have posted “Two Types of Censorship” on SSRN. Here is the abstract:

Not all autocracies practice the same kind of censorship. Countries like China build a closed border and a policing workforce for their internet, whereas countries like Russia compete on their internet with pro-government messages and takedown requests. This paper employs a data-driven approach to study the variety of censorship among autocratic countries. Internet controls are measured with panel data from Freedom House, V-Dem, OONI, and the Google Transparency Report. Using cluster analysis, an unsupervised learning technique, we group countries’ censorship behaviors based on multi-dimensional indicators of internet access, content restriction, and technological barriers. We discover two distinct types of censorship: the pervasive control regime (e.g., China) and the influence operation regime (e.g., Russia). The two types are supported by country-specific studies and are shown to predict countries’ content restriction strategies. We also show that differences in national IT capacity explain a country’s distinct censorship style. Sending takedown requests is a cost-saving alternative to a state-run monitoring workforce. A one-unit increase in a country’s IT capacity leads to 9,206 fewer requests and 11,398 more incidents of blocking the internet annually.
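For readers curious about the method, here is a minimal, hypothetical sketch of the kind of two-cluster analysis the abstract describes: standardize multi-dimensional censorship indicators, then fit k-means with k=2. The column names, countries, and values below are invented for illustration; they are not the authors’ data or code, which draw on Freedom House, V-Dem, OONI, and the Google Transparency Report.

```python
# Hypothetical sketch: group countries into two censorship regimes from
# multi-dimensional internet-control indicators. All data here are invented.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Invented example rows: one per country, with illustrative indicators.
df = pd.DataFrame(
    {
        "internet_access": [0.2, 0.3, 0.7, 0.8],
        "content_restriction": [0.9, 0.8, 0.4, 0.5],
        "technological_barriers": [0.9, 0.9, 0.3, 0.2],
        "takedown_requests": [500, 700, 12000, 9000],
    },
    index=["Country A", "Country B", "Country C", "Country D"],
)

# Standardize so each indicator contributes comparably, then fit k=2
# clusters (roughly: pervasive-control vs. influence-operation regimes).
X = StandardScaler().fit_transform(df)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(df.index, labels)))
```

On this toy data, the low-access, high-barrier countries land in one cluster and the high-request countries in the other, mirroring the two regime types the paper identifies.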

Ayres & Balkin on Risky, Intentionless AI Agents

Ian Ayres (Yale Law School) and Jack M. Balkin (same) have posted “The Law of AI is the Law of Risky Agents without Intentions” (U Chicago L Rev Online 2024) on SSRN. Here is the abstract:

Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain mens rea or intention. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. 

Of course, the AI programs themselves are not the responsible actors; instead, they are technologies designed, deployed, and used by human beings that have effects on other human beings. The people who design, deploy, and use AI are the real parties in interest.

We can think of AI programs as acting on behalf of human beings. In this sense AI programs are like agents that lack intentions but that create risks of harm to people. Hence the law of AI is the law of risky agents without intentions.

The law should hold these risky agents to objective standards of behavior, which are familiar in many different parts of the law. These legal standards ascribe intentions to actors—for example, that given the state of their knowledge, actors are presumed to intend the reasonable and foreseeable consequences of their actions. Or legal doctrines may hold actors to objective standards of conduct, for example, a duty of reasonable care or strict liability.

Holding AI agents to objective standards of behavior, in turn, means holding the people and organizations that implement these technologies to objective standards of care and requirements of reasonable reduction of risk.

Take defamation law. Mens rea requirements like the actual malice rule protect human liberty and prevent chilling people’s discussion of public issues. But these concerns do not apply to AI programs, which do not exercise human liberty and cannot be chilled. The proper analogy is not to a negligent or reckless journalist but to a defectively designed product—produced by many people in a chain of production—that causes injury to a consumer. The law can give the different players in the chain of production incentives to mitigate AI-created risks.

In copyright law, we should think of AI systems as risky agents that create pervasive risks of copyright infringement at scale. The law should require that AI companies take a series of reasonable steps that reduce the risk of copyright infringement even if they cannot completely eliminate it. A fair use defense tied to these requirements is akin to a safe harbor rule. Instead of litigating in each case whether a particular output of a particular AI prompt violated copyright, this approach asks whether the AI company has put sufficient efforts into risk reduction. If it has, its practices constitute fair use.

These examples suggest why AI systems may require changes in many different areas of the law. But we should always view AI technology in terms of the people and companies that design, deploy, offer, and use it. To properly regulate AI, we need to keep our focus on the human beings behind it.

Holmes Perkins on Gen AI and Law Professors

Rachelle Holmes Perkins (George Mason Law) has posted “AI Now” (Temple Law Review, Vol. 97, Forthcoming) on SSRN. Here is the abstract:

Legal scholars have made important explorations into the opportunities and challenges of generative artificial intelligence within legal education and the practice of law. This Article adds to this literature by directly addressing members of the legal academy. As a collective, law professors, who are responsible for cultivating the knowledge and skills of the next generation of lawyers, too often adopt a laissez-faire posture towards the advent of generative artificial intelligence. In stark contrast to law practitioners and law students, law professors generally have displayed a lack of urgency in responding to the repercussions of this increasingly pervasive technology.

This Article contends that all law professors have an inescapable duty to understand generative artificial intelligence. This obligation stems from the pivotal role faculty play on three distinct but interconnected dimensions: pedagogy, scholarship, and governance. No law faculty are exempt from this mandate. All are entrusted with responsibilities that intersect with at least one, if not all three, dimensions, whether they are teaching, research, clinical, or administrative faculty. Nor does the mandate depend on whether professors are inclined, or disinclined, to integrate artificial intelligence into their own courses or scholarship. The urgency of the mandate derives from the critical and complex role law professors have in the development of lawyers and the architecture of the legal field.