Kemper on Kafkaesque AI

Carolin Kemper (German Research Institute for Public Administration) has posted “Kafkaesque AI? Legal Decision-Making in the Era of Machine Learning” (University of San Francisco Intellectual Property and Technology Law Journal, Vol. 24, No. 2, 2020) on SSRN. Here is the abstract:

Artificial Intelligence (“AI”) is already being employed to make critical legal decisions in many countries all over the world. The use of AI in decision-making is a widely debated issue due to allegations of bias, opacity, and lack of accountability. For many, algorithmic decision-making seems obscure, inscrutable, or virtually dystopic. Like in Kafka’s The Trial, the decision-makers are anonymous and cannot be challenged in a discursive manner. This article addresses the question of how AI technology can be used for legal decision-making and decision-support without appearing Kafkaesque.

First, two types of machine learning algorithms are outlined: both Decision Trees and Artificial Neural Networks are commonly used in decision-making software. The real-world use of those technologies is illustrated with a few examples. Three types of use-cases are identified, depending on how directly humans are influenced by the decision. To establish criteria for evaluating the use of AI in decision-making, machine ethics, the theory of procedural justice, the rule of law, and the principles of due process are consulted. Subsequently, transparency, fairness, accountability, the right to be heard and the right to notice, as well as dignity and respect are discussed. Furthermore, possible safeguards and potential solutions to tackle existing problems are presented. In conclusion, AI rendering decisions on humans does not have to be Kafkaesque. Many solutions and approaches offer possibilities not only to ameliorate the downsides of current AI technologies, but to enrich and enhance the legal system.

Salib on Machine Learning and Class Certification

Peter Salib (Harvard Law School) has posted “Machine Learning and Class Certification” on SSRN. Here is the abstract:

Class actions are supposed to allow plaintiffs to recover for their high-merit, low-dollar claims. But current law leaves many such plaintiffs out in the cold. To be certified, classes seeking damages must show that, at trial, “common” questions (those for which a single answer will help resolve all class members’ claims) will predominate over “individual” ones (those that must be answered separately as to each member). Currently, many putative class actions in important doctrinal areas—mass torts, consumer fraud, employment discrimination, and more—are regarded as uncertifiable for lack of predominance. As a result, even plaintiffs with valid claims in these areas have little or no access to justice. This state of affairs is exacerbated by a line of Supreme Court cases beginning with Wal-Mart Stores, Inc. v. Dukes. There, the Court disapproved of certain statistical methods for answering individual questions and achieving the predominance of common ones.

This Article proposes a first-of-its-kind solution: A.I. class actions. Advanced machine learning algorithms could be trained to mimic the decisions of a jury in a particular case. Then, those algorithms would expeditiously resolve the case’s individual questions. As a result, common questions would predominate at trial, facilitating certification for innumerable currently-uncertifiable classes. This Article lays out the A.I. class action proposal in detail. It argues that the proposal is feasible; the necessary elements are precedented in both complex litigation and computer science. The Article also argues that A.I. class actions would survive scrutiny under Wal-Mart, though other statistical methods have not. To demonstrate this, the Article develops a new, comprehensive theory of the higher-order values animating Wal-Mart and its progeny. It shows that these cases are best explained as approving statistical proof only if it can deliver accurate answers at the level of individual plaintiffs. Machine learning can deliver such accuracy in spades.

McBride on The Inevitable Conflict between Contract Law and Free Speech in Cyberspace

Nicholas McBride (University of Cambridge – Faculty of Law) has posted “‘All Watched Over By Machines of Loving Grace’? The Inevitable Conflict between Contract Law and Free Speech in Cyberspace” (in Davies and Raczynska, eds., Contents of Commercial Contracts: Terms Affecting Freedoms (Hart Publishing, 2020)) on SSRN. Here is the abstract:

This paper discusses why it is inevitable that contract law will be used as a means of censoring speech on the Internet, and why contract law allows itself to be used to limit freedom of speech.

Recommended. For an economic theory of why the state trades off public with private lawmaking in this context, see “Optimal Social Media Content Moderation and Platform Immunities”.