Orly Lobel (U San Diego Law) has posted “Do We Need to Know What Is Artificial? Unpacking Disclosure & Generating Trust in an Era of Algorithmic Action” (Presented at “Dynamics of Generative AI” symposium, March 22, 2024) on SSRN. Here is the abstract:
Should users have the right to know when they are chatting with a bot? Should companies providing generative AI applications be obliged to mark generated products as AI-generated, or to alert users of generative chats that the responder is “merely an LLM” (a Large Language Model)? Should citizens or consumers—patients, job applicants, tenants, students—have the right to know when a decision affecting them was made by an automated system? Should art lovers, or online browsers, have the right to know that they are viewing an AI-generated image?
As automation accelerates and AI is deployed in every sector, the question of knowing what is artificial becomes relevant in all aspects of our lives. This essay, written for the 2024 Network Law Review Symposium on the Dynamics of Generative AI, aims to unpack the question—which is in fact a set of complex questions—and to provide a richer context and analysis than the often default, absolute answer: yes! We, the public, must always have the right to know what is artificial and what is not. The question is more complicated and layered than it may initially seem. The answer, in turn, is not as easy as some recent regulatory initiatives suggest in their resolute yes. The answer, instead, depends on the goals of information disclosure. Is disclosure a deontological or dignitarian good, and, in turn, a right in and of itself? Or does disclosure serve the utilitarian purpose of supporting the goals of the human-machine interaction, for example, ensuring accuracy, safety, or unbiased decision-making? Does disclosure increase trust in the system, process, and results? Or does disclosure under certain circumstances hinder those very goals, for example, if knowing that a decision was made by a bot reduces the user’s trust and increases the likelihood that the user will disregard the recommendation (e.g., a recommendation from an AI radiology or insulin bolus system, or an AI landing device in aviation)?
The essay presents a range of contexts and regulatory requirements centered on the right to know about AI involvement. It then suggests a set of reasons for disclosing artificiality: dignity; control; trust, including accuracy, consistency, safety, and fairness; authenticity; ownership/attribution; and aesthetic/experiential value. The essay further presents recent behavioral literature on AI rationality, algorithmic aversion, and algorithmic adoration to suggest a more robust framework within which questions about disclosure rights, and their effective timing, should be answered. It then shows that labeling and marking AI-generated images is a distinct inquiry from disclosing AI-generated decisions. In each of these contexts, the answers should be based on empirical evidence about how disclosures affect perception, rationality, behavior, and the measurable goals of these deployed technologies.
