Coupette & Hartung on Creating a Culture of Constructive Criticism in Computational Legal Studies

Corinna Coupette (Max Planck Institute for Informatics) and Dirk Hartung (Bucerius Law School; Stanford Codex Center) have posted “Sharing and Caring: Creating a Culture of Constructive Criticism in Computational Legal Studies” on SSRN. Here is the abstract:

We introduce seven foundational principles for creating a culture of constructive criticism in computational legal studies. Beginning by challenging the current perception of papers as the primary scholarly output, we call for a more comprehensive interpretation of publications. We then suggest making these publications computationally reproducible, releasing all of the data and all of the code all of the time, on time, and in the most functional form possible. Subsequently, we invite constructive criticism in all phases of the publication life cycle. We posit that our proposals will help our field mature, and we float the idea of marking this maturity by the creation of a modern flagship publication outlet for computational legal studies.

Mowbray, Chung & Greenleaf on Explainable AI (XAI) in Rules as Code (RaC): The DataLex approach

Andrew Mowbray (University of Technology Sydney, Faculty of Law), Philip Chung (University of New South Wales (UNSW Sydney), Faculty of Law and Justice), and Graham Greenleaf (University of New South Wales (UNSW Sydney), Faculty of Law) have posted “Explainable AI (XAI) in Rules as Code (RaC): The DataLex Approach” on SSRN. Here is the abstract:

The need for explainability in implementations of ‘Rules as Code’ (RaC) has similarities to the concept of ‘Explainable AI’ (XAI). Explainability is also necessary to avoid RaC being controlled or monopolised by governments and big business. We identify the following desirable features of ‘explainability’ relevant to RaC: transparency (in various forms); traceability; availability; sustainability; links to legal sources; and accountability. Where RaC applications are used to develop automated decision-making systems, some forms of explainability are increasingly likely to be required by law. We then assess how AustLII’s DataLex environment implements ‘explainability’ when used to develop RaC: in open software and codebases; in development and maintenance methodologies; and in explanatory features when codebases are executed. All of these XAI aspects of DataLex’s RaC are consistent with keeping legislation in the public domain no matter how it is encoded.