Widen on Autonomous Vehicles, Moral Hazards & the ‘AV Problem’

William H. Widen (Miami Law) has posted “Autonomous Vehicles, Moral Hazards & the ‘AV Problem’” on SSRN. Here is the abstract:

The autonomous vehicle (“AV”) industry faces the following ethical question: “How do we know when our AV technology is safe enough to deploy at scale?” The search for an answer to this question is the “AV Problem.” This essay examines that question through the lens of the July 15, 2021 filing on Form S-4 with the Securities and Exchange Commission in the going-public transaction for Aurora Innovation, Inc.

The filing reveals that successful implementation of Aurora’s business plan in the long term depends on the truth of the following proposition: A vehicle controlled by a machine driver is safer than a vehicle controlled by a human driver (the “Safety Proposition”).

In a material omission for which securities law liability may attach, the S-4 fails to state Aurora’s position on deployment: will Aurora delay deployment until it believes the Safety Proposition is true to a reasonable certainty, or will it deploy at scale earlier in the hope that increased current losses will be offset by anticipated future safety gains?

The Safety Proposition is a statement about physical probability which is either true or false. For success, AV companies need the public to believe the Safety Proposition, yet belief is not the same as truth. The difference between truth and belief creates tension in the S-4 because the filing fosters a belief in the Safety Proposition while at the same time making clear there is insufficient evidence to support the truth of the Safety Proposition.

A moral hazard results when financial pressures push for early deployment of AV systems before evidence shows that the Safety Proposition is true to a reasonable certainty. This problem is analyzed by comparison with the famous trolley problem in ethics and by consideration of corporate governance techniques which an AV company might use to ensure the integrity of its decision process for deployment. The AV industry works to promote belief in the Safety Proposition in the hope that the public will accept that AV technology has benefits, thus avoiding the need to confront the truth of the Safety Proposition directly. This hinders a meaningful public debate about the merits and timing of deployment of AV technology, raising the question of whether there is a place for meaningful government regulation.

Recommended.

Lehr on Smart Contracts, Real-Virtual World Convergence and Economic Implications

William Lehr (MIT) has posted “Smart Contracts, Real-Virtual World Convergence and Economic Implications” on SSRN. Here is the abstract:

Smart Contracts (SCs) are usually defined as contracts that are instantiated in computer-executable code and that automatically execute all or parts of an agreement with the assistance of blockchain’s distributed trust technology. This is principally a technical description and results in an overly narrow focus. The goal of this paper is to provide an overview of the rapidly evolving multidisciplinary literature on Smart Contracts and to offer a synthesis perspective on their economic implications. This necessitates casting a wider net that ties SCs to the literature on the economics of AI and to the earlier Industrial Organization literature, in order to support speculation about the role of SCs in the evolution of AI and the organization of economic activity. Accomplishing this goal builds on a repurposing of the Internet hourglass model that puts SCs at the narrow waist between the real (non-digital) and virtual (digital) realms, serving as the connecting glue or portal by which AIs may play a larger role in controlling the organization of economic activity.