Chen, Stremitzer & Tobia on Having Your Day in Robot Court

Benjamin Minhao Chen (The University of Hong Kong – Faculty of Law), Alexander Stremitzer (ETH Zurich), and Kevin Tobia (Georgetown University Law Center; Georgetown University – Department of Philosophy) have posted “Having Your Day in Robot Court” on SSRN. Here is the abstract:

Should machines be judges? Some balk at this possibility, holding that ordinary citizens would see a robot-led legal proceeding as procedurally unfair: To have your “day in court” is to have a human hear and adjudicate your claims. Two original experiments assess whether laypeople share this intuition. We discover that laypeople do, in fact, see human judges as fairer than artificially intelligent (“AI”) robot judges: All else equal, there is a perceived human-AI “fairness gap.” However, it is also possible to eliminate the fairness gap. The perceived advantage of human judges over AI judges is related to perceptions of the accuracy and comprehensiveness of the decision, rather than to “softer” and more distinctively human factors. Moreover, the study reveals that laypeople are amenable to “algorithm offsetting.” Adding an AI hearing and increasing the AI’s interpretability reduce the perceived human-AI fairness gap. Ultimately, the results support a common challenge to robot judges: there is a concerning human-AI fairness gap. Yet the results also indicate that the strongest version of this challenge, that human judges have inimitable procedural fairness advantages, is not reflected in the views of laypeople. In some circumstances, people see a day in robot court as no less fair than a day in human court.

Normann & Sternberg on Hybrid Collusion: Algorithmic Pricing in Human-Computer Laboratory Markets

Hans-Theo Normann (Heinrich Heine University Düsseldorf – Department of Economics; Max Planck Institute for Research on Collective Goods) and Martin Sternberg (Max Planck Institute for Research on Collective Goods, Bonn) have posted “Hybrid Collusion: Algorithmic Pricing in Human-Computer Laboratory Markets” on SSRN. Here is the abstract:

We investigate collusive pricing in laboratory markets when human players interact with an algorithm. We compare the degree of (tacit) collusion in markets where only humans interact to markets in which one firm delegates its pricing decisions to an algorithm. We further vary whether participants know about the presence of the algorithm. We find that three-firm markets involving an algorithmic player are significantly more collusive than human-only markets. Firms employing an algorithm earn significantly less profit than their rivals. For four-firm markets, we find no significant differences. (Un)certainty about the actual presence of an algorithm does not significantly affect collusion.

Burri & Trusilo on Ethical Artificial Intelligence

Thomas Burri (University of St. Gallen) and Daniel Trusilo (University of St. Gallen) have posted “Ethical Artificial Intelligence: An Approach to Evaluating Disembodied Autonomous Systems” (In: Rain Liivoja and Ann Valjataga (eds), Autonomous Cyber Capabilities in International Law, forthcoming 2021) on SSRN. Here is the abstract:

Building on our prior work on the practical evaluation of autonomous robotic systems, this chapter discusses how an existing framework can be extended to autonomous cyber systems. It is hoped that such a framework can proactively inform pragmatic discussions of ethical and regulatory norms. The issues raised by autonomous systems in the physical and cyber realms are distinct; however, discussions about the norms and laws governing these two related manifestations of autonomy can and should inform one another. This paper therefore emphasizes the factors that distinguish autonomous systems in cyberspace, which we label disembodied autonomous systems, from systems that exist physically, which we label embodied autonomous systems. Highlighting these distinguishing factors informs the extension of our assessment tool to software systems, bringing us into the legal and ethical discussions of autonomy in cyberspace.

Goldman on Content Moderation Remedies

Eric Goldman (Santa Clara University – School of Law) has posted “Content Moderation Remedies” (Michigan Technology Law Review, forthcoming) on SSRN. Here is the abstract:

This Article addresses a critical but underexplored topic of “platform” law: if a user’s online content or actions violate a service’s rules, what should happen next? The longstanding expectation is that Internet services should remove violative content or accounts from their services, and many laws mandate that result. However, Internet services have a wide range of other options, which I call “remedies,” that they can use to redress content or accounts that violate the applicable rules. The Article describes dozens of remedies that Internet services have actually imposed. It then provides a normative framework to help Internet services and regulators navigate these remedial options to address the many difficult tradeoffs involved in content moderation. By moving past the binary remove-or-not framework that dominates the current discourse about content moderation, this Article helps improve the efficacy of content moderation, promote free expression, foster competition among Internet services, and strengthen their community-building functions.