Jonathan Gingerich (King’s College London – The Dickson Poon School of Law) has posted “Is Spotify Bad for Democracy? Artificial Intelligence, Cultural Democracy, and Law” (Yale Journal of Law & Technology, forthcoming) on SSRN. Here is the abstract:
Much scholarly attention has recently been devoted to ways in which artificial intelligence (AI) might weaken formal political democracy, but little attention has been devoted to the effect of AI on “cultural democracy”—that is, democratic control over the forms of life, aesthetic values, and conceptions of the good that circulate in a society. This work is the first to consider in detail the dangers that AI-driven cultural recommendations pose to cultural democracy. This Article argues that AI threatens to weaken cultural democracy by undermining individuals’ direct and spontaneous engagement with a diverse range of cultural materials. It further contends that United States law, in its present form, is ill-equipped to address these challenges, and suggests several strategies for better regulating culture-mediating AI. Finally, it argues that while such regulations might run afoul of contemporary First Amendment doctrine, the most normatively attractive interpretation of the First Amendment not only allows but encourages such interventions.
Jeffrey J. Rachlinski (Cornell Law School) & Andrew J. Wistrich (California Central District Court) have posted “Judging Autonomous Vehicles” on SSRN. Here is the abstract:
The introduction of any new technology challenges judges to determine how it fits into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough. The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges, as this technology will alter the foundation of the largest source of civil liability in the United States. Although regulatory agencies will determine when and how autonomous cars may be placed into service, judges will likely play a central role in defining the standards of liability for them. How will judges treat this new technology? People commonly exhibit biases against innovations, such as a naturalness bias, in which people disfavor injuries arising from artificial sources. In this paper we present data from 933 trial judges showing that judges exhibit bias against self-driving vehicles. They both assigned more liability to a self-driving vehicle than they would to a human-driven vehicle and treated injuries caused by a self-driving vehicle as more serious than injuries caused by a human-driven vehicle.
W. Nicholson Price II (University of Michigan Law School) has posted “Problematic Interactions between AI and Health Privacy” (Utah Law Review, forthcoming) on SSRN. Here is the abstract:
The interaction of artificial intelligence (AI) and health privacy is a two-way street. Both directions are problematic. This Essay makes two main points. First, the advent of artificial intelligence weakens the legal protections for health privacy by rendering deidentification less reliable and by inferring health information from unprotected data sources. Second, the legal rules that protect health privacy nonetheless detrimentally impact the development of AI used in the health system by introducing multiple sources of bias: collection and sharing of data by a small set of entities, the process of data collection while following privacy rules, and the use of non-health data to infer health information. The result is an unfortunate anti-synergy: privacy protections are weak and illusory, but rules meant to protect privacy hinder other socially valuable goals. The state of affairs creates biases in health AI, privileges commercial research over academic research, and is ill-suited to either improve health care or protect patients. The health system deeply needs a new bargain between patients and the health system about the uses of patient data.