Peng on Autonomous Vehicle Standards under the TBT Agreement

Shin-yi Peng (National Tsing Hua University) has posted “Autonomous Vehicle Standards under the TBT Agreement: Disrupting the Boundaries?”, a chapter in Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (Cambridge University Press, 2021), on SSRN. Here is the abstract:

Products that incorporate AI will require the development of a range of new standards. This chapter uses the case of connected and autonomous vehicle (CAV) standards as a window onto how this “disruptive innovation” may alter the boundaries of international trade agreements. Amid the transition to a driverless future, the transformative nature of disruptive innovation makes the interpretation and application of trade rules challenging. The chapter offers a critical assessment of two systemic issues: the goods/services boundary and the public/private sector boundary. Looking to the future, regulations governing CAVs will become increasingly complex as vehicle automation evolves toward Levels 3–5. The author argues that disruptive technologies have a more fundamental and structural impact on existing trade disciplines.

Reyes on Autonomous Corporate Personhood

Carla Reyes (Southern Methodist University – Dedman School of Law) has posted “Autonomous Corporate Personhood” (Washington Law Review, Forthcoming) on SSRN. Here is the abstract:

Currently, several states are considering changes to their business organization law to accommodate autonomous businesses—businesses operated entirely through computer code. Several international civil society groups are also actively developing new frameworks and a model law to enable decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood in order to respond to AI’s increasing presence in society. Despite the obvious overlap between the two sets of inquiries, the legal and policy discussions only rarely intersect. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays in law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial people, whether entity, machine, or otherwise—requires approaching the issue through a systems lens to ensure that the law accounts for the varied socio-technical contexts in which artificial people exist.

To make these claims, this Article first establishes a baseline of terminology, emphasizes the importance of viewing AI as part of a socio-technical system, and reviews the existing market for autonomous corporations. Sections II and III then examine the existing debates around both artificially intelligent persons and corporate personhood, arguing that the socio-legal needs driving artificial personhood debates in both contexts include protecting the rights of natural people, upholding social values, and creating a fiction for legal convenience. These Sections then explore the extent to which the theories from either body of literature fit the reality of autonomous businesses, illuminating gaps and using those gaps to demonstrate that the law must consider the socio-technical context of AI systems and the socio-legal complexity of corporations in deciding how autonomous businesses will interact with the world. Ultimately, the Article identifies links between the two areas of legal personhood and leverages those links to demonstrate its core claim: law developed for artificial systems in any context should use the systems nature of the artificial artifact to tie its legal treatment directly to the system’s socio-technical reality.

Bloch-Wehba on Transparency’s AI Problem

Hannah Bloch-Wehba (Texas A&M University School of Law; Yale Information Society Project) has posted “Transparency’s AI Problem” on SSRN. Here is the abstract:

A consensus seems to be emerging that algorithmic governance is too opaque and ought to be made more accountable and transparent. But algorithmic governance underscores the limited capacity of transparency law—the Freedom of Information Act and its state equivalents—to promote accountability. Drawing on the critical literature on “open government,” this Essay shows that algorithmic governance reflects and amplifies systemic weaknesses in the transparency regime, including privatization, secrecy, private-sector co-optation, and reactive disclosure. These deficiencies highlight the urgent need to reorient transparency and accountability law toward meaningful public engagement in ongoing oversight. That shift requires rethinking FOIA’s core commitment to public disclosure of agency records and exploring alternative ways to empower the public and shed light on decisionmaking. The Essay argues that new approaches to transparency and accountability for algorithmic governance should be independent of private vendors and ought to adequately represent the interests of affected individuals and communities. These considerations, of vital importance for the oversight of automated systems, also hold broader lessons for efforts to recraft open government obligations in the public interest.