Lobel on Behavioral Law & Policy of AI Trust

Orly Lobel (University of San Diego School of Law) has posted “Behavioral Law & Policy of AI Trust” on SSRN. Here is the abstract:

With the dazzling advances in artificial intelligence capabilities, regulatory policy should aim at spurring the right amount, and the right kind, of AI trust. In my recent research on AI policy, including my new book The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future and my article The Law of AI for Good, I aim to pivot policy debates about automation and artificial intelligence (AI) toward more rational and grounded analysis. Just as behavioral research first developed in relation to marketing and consumer behavior and only later came to be recognized as significant in policymaking, so too should policymakers turn their attention to understanding the human biases that lead to irrational algorithmic aversion and algorithmic adoration.

In this short essay, I argue that the emerging experimental literature on trust, and distrust, of AI can serve as a blueprint for policy research and interventions. We do not yet have a common language, or even a shared taxonomy, with which to compare and evaluate the tradeoffs inherent in automation. I call this the human-AI trust gap, and I argue that it is a significant barrier to benefiting from the opportunities automation presents. That is, whether we have too little or too much trust in algorithms, the human-AI trust gap means we lack a shared literature and shared methods for understanding when trust is given and when trust is due.

These research insights on human-machine trust should raise doubts about recent policy reforms, such as laws requiring real-time consumer notification about the use of automated processes. I argue that there may be inadvertent irrationality in some aspects of contemporary AI policy. Government entities should commit to improving AI and to building rational social trust in these systems.