G’sell on AI Judges

Florence G’sell (Sciences Po Law) has posted “AI Judges” (in Larry A. DiMatteo, Cristina Poncibò & Michel Cannarsa (eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, Cambridge University Press, 2022) on SSRN. Here is the abstract:

The prospect of a “robot judge” gives rise to many fantasies and concerns. Some argue that only humans are endowed with the modes of thought, intuition and empathy that are necessary to analyze or judge a case. As early as 1976, Joseph Weizenbaum, creator of Eliza, one of the very first conversational agents, strongly asserted that important decisions should not be left to machines, which sorely lack human qualities such as compassion and wisdom. On the other hand, it could be argued today that the courts would be wrong to deprive themselves of the possibilities opened up by artificial intelligence tools, whose capabilities are expected to improve greatly in the future. In reality, the question of the use of AI in the judicial system should probably be asked in a nuanced way, without dwelling on the dystopian and highly unlikely scenario of the “robot judge” portrayed by Trevor Noah in a famous episode of The Daily Show. Rather, the question is how courts can benefit from increasingly sophisticated machines. To what extent can these tools help them render justice? What is their contribution in terms of decision support? Can we seriously consider delegating to a machine the entire power to make a judicial decision?

This chapter proceeds as follows. Section 23.2 is devoted to the use of AI tools by the courts. It is divided into three subsections. Section 23.2.1 deals with the use of risk assessment tools, which are widespread in the United States but highly regulated in Europe, particularly in France. Section 23.2.2 presents the possibilities opened up by machine learning algorithms trained on databases of judicial decisions, which are able to anticipate court decisions or recommend solutions to judges. Section 23.2.3 considers the very unlikely eventuality of full automation of judicial decision making.