Waddington on Rules As Code: Drawing Out the Logic of Legislation for Drafters and Computers

Matthew Waddington (Legislative Drafting Office, States of Jersey) has posted “Rules As Code: Drawing Out the Logic of Legislation for Drafters and Computers” (Modern Legislative Drafting – A Research Companion, Constantin Stefanou (ed.) (Routledge) (Forthcoming)) on SSRN. Here is the abstract:

This chapter outlines developments in the digitisation of legislative drafts, looking at “computational law” and the spectre of artificial intelligence (“AI”), but focussing mainly on “Rules as Code” (or “RaC”). The concept of RaC presented here is not one that claims to be able to digitise all of the law, or even all aspects of a piece of legislation. Nor is it one that claims computers should interpret the substantive terms used in legislation, or fill in implied concepts, as opposed to interpreting terms like “if”, “and”, “or”, “not”, “means”, “includes”, “must” and “may” that drafters are already trying to use in a disciplined way.
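To make that distinction concrete, here is a minimal sketch of what a RaC encoding in this narrow sense might look like. The statute, section number, and predicates are invented for illustration; the point is only that the computer evaluates the drafter's connectives ("if", "and", "not") while the substantive terms remain opaque inputs for a human interpreter:

```python
# Hypothetical RaC sketch: encode only the logical structure of an
# invented eligibility provision. Substantive terms ("ordinarily
# resident", "dependent child", "disqualified") are left as opaque
# boolean inputs to be determined by a human on the facts.

def eligible_for_benefit(is_ordinarily_resident: bool,
                         has_dependent_child: bool,
                         is_disqualified: bool) -> bool:
    """Invented s.3(1): a person is eligible if the person is
    ordinarily resident AND has a dependent child, UNLESS the
    person is disqualified."""
    return (is_ordinarily_resident
            and has_dependent_child
            and not is_disqualified)

# The machine interprets "and"/"not"; it does not interpret what
# "ordinarily resident" means.
print(eligible_for_benefit(is_ordinarily_resident=True,
                           has_dependent_child=True,
                           is_disqualified=False))  # True
```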

The chapter examines what can be drawn from the increasingly systematic approach among legislative drafters in Commonwealth countries to handling different key components of legislation. The shift from “shall” to “must” (and “is”) has more clearly exposed differences between constitutive provisions (such as definitions, or rules about whether a notice or application is valid), provisions taking effect by operation of law (such as establishing a statutory body corporate), and normative provisions (such as imposing an obligation, or creating an offence). Basic deontic logic symbols can be used to illustrate the way drafters limit normative terms (other than in offences) to the basic building blocks of “must”, “must not” and “may”. Drafters use those building blocks to create what others label as “rights” and “powers”, and they do so in ways that mean modern legislative drafting in the Commonwealth may be able to avoid many of the problems and complications that have beset those who have tried to formalise or digitise law. In particular, it may avoid some of the difficulties that arise when, before attempting the digital capture of the elements of legislation, one tries to apply deontic logic to law in general, to systematise legal expressions such as rights and privileges in the fashion attempted by Hohfeld, to formalise fundamental legal concepts as Sartor does, or to pin down a large range of concepts as in LegalRuleML. Those broader issues may need to be tackled in the longer term, but in the short term a simplified approach could produce results and help drafters grasp the insights this approach can offer.
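For readers unfamiliar with the notation, the three building blocks correspond to the standard operators of basic deontic logic. This is the usual textbook formulation, not notation taken from the chapter itself:

```latex
\[
\begin{aligned}
  O\,p &\qquad \text{``must'': $p$ is obligatory} \\
  O\,\neg p &\qquad \text{``must not'': $p$ is forbidden, often abbreviated } F\,p \\
  P\,p \;\equiv\; \neg O\,\neg p &\qquad \text{``may'': $p$ is permitted, i.e.\ there is no obligation to refrain from $p$}
\end{aligned}
\]
```

On this standard reading, “must not” and “may” are both definable from the single obligation operator, which is part of why a drafting discipline built on these three terms lends itself to formalisation.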

Blasimme on Machine Learning in Paediatrics and the Child’s Right to an Open Future

Alessandro Blasimme (ETH Zurich) has posted “Machine Learning in Paediatrics and the Child’s Right to an Open Future” on SSRN. Here is the abstract:

Machine Learning (ML)-driven diagnostic systems for mental and behavioural paediatric conditions can have profound implications for child development, children’s image of themselves and their prospects for social integration.

The use of machine learning (ML) in biomedical research, clinical practice and public health is set to radically transform medicine. The ethical challenges associated with this transformation are particularly salient in the case of vulnerable or dependent patients. One relatively neglected ethical issue in this space is the extent to which the clinical implementation of ML-based predictive analytics is bound to erode what philosopher Joel Feinberg has defined as children’s right to an open future.

An ethical analysis of how the unprecedented predictive power of ML diagnostic systems can affect a child’s right to an open future has not yet been undertaken. In this paper, I illustrate the right to an open future and explain its relevance in relation to diagnostic uses of ML in paediatric medicine, with a particular focus on Attention-Deficit/Hyperactivity Disorder and autism.

ML-based diagnostic tools focused on brain imaging run the risk of objectifying mental and behavioural conditions as brain abnormalities, even though the neuropathological mechanisms underlying such abnormalities are far from clear.

Gains in automating psychiatric diagnosis have to be weighed against the risk that ML-driven diagnoses may undermine a child’s capacity to maintain a sense of self-worth and to integrate socially.