Blasimme on Machine Learning in Paediatrics and the Child’s Right to An Open Future

Alessandro Blasimme (ETH Zurich) has posted “Machine Learning in Paediatrics and the Child’s Right to An Open Future” on SSRN. Here is the abstract:

Machine Learning (ML)-driven diagnostic systems for mental and behavioural paediatric conditions can have profound implications for child development, children’s image of themselves and their prospects for social integration.

The use of machine learning (ML) in biomedical research, clinical practice and public health is set to radically transform medicine. Ethical challenges associated with this transformation are particularly salient in the case of vulnerable or dependent patients. One relatively neglected ethical issue in this space is the extent to which the clinical implementation of ML-based predictive analytics is bound to erode what philosopher Joel Feinberg has defined as children’s right to an open future.

An ethical analysis of how the unprecedented predictive power of ML diagnostic systems can affect a child’s right to an open future has not yet been undertaken. In this paper, I illustrate the right to an open future and explain its relevance in relation to diagnostic uses of ML in paediatric medicine, with a particular focus on Attention-Deficit/Hyperactivity Disorder and autism.

ML-based diagnostic tools focused on brain imaging run the risk of objectifying mental and behavioural conditions as brain abnormalities, even though the neuropathological mechanisms causing such abnormalities at the level of the brain are far from clear.

Gains in automating psychiatric diagnosis have to be weighed against the risks that ML-driven diagnoses may affect a child’s capacity to uphold a sense of self-worth and social integration.

Tschider on Prescribing Exploitation

Charlotte Tschider (Loyola University Chicago School of Law) has posted “Prescribing Exploitation” (Maryland Law Review, Forthcoming 2023) on SSRN. Here is the abstract:

Patients are increasingly reliant temporarily, if not indefinitely, on connected medical devices and wearables, many of which use artificial intelligence (AI) infrastructures and physical housing that directly interacts with the human body. The automated systems that drive the infrastructures of medical devices and wearables, especially those using complex AI, often use dynamically inscrutable algorithms that may render discriminatory effects that alter paths of treatment and other aspects of patient welfare.

Previous contributions to the literature, however, have not explored how AI technologies animate exploitation of medical technology users. Although all commercial relationships may exploit users to some degree, some forms of health data exploitation exceed the bounds of normative acceptability. The factors that illustrate excessive exploitation that should require some legal intervention include: 1) existence of a fiduciary relationship or approximation of such a relationship, 2) a technology-user relationship that does not involve the expertise of the fiduciary, 3) existence of a critical health event or health status requiring use of a medical device, 4) ubiquitous sensitive data collection essential to AI functionality, 5) lack of reasonably similar analog technology alternatives, and 6) compulsory reliance on a medical device.

This paper makes three key contributions to existing literature. First, this paper establishes the existence of a type of exploitation that is not only exacerbated by technology but creates additional risk by its use. Second, this paper illustrates the need for cross-disciplinary engagement across privacy scholarship and AI ethical goals that typically involve representative data collection for fairness and safety. This paper then illustrates how a modern information fiduciary model can neutralize patient exploitation risk when exploitation exceeds normative bounds of community acceptability.

Recommended.

Price on Distributed Governance of Medical AI

W. Nicholson Price II (University of Michigan Law School) has posted “Distributed Governance of Medical AI” (25 SMU Sci. & Tech. L. Rev. (Forthcoming 2022)) on SSRN. Here is the abstract:

Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work, and it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference for patient care. To make the situation even more complicated, AI is unlikely to go through the centralized review and validation process that other medical technologies undergo, like drugs and most medical devices. Even if it did go through those centralized processes, ensuring high-quality performance across a wide variety of settings, including poorly resourced settings, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, where quality evaluation and oversight take place in the settings of application—but with policy assistance in developing capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk.

Forti on The Deployment of Artificial Intelligence Tools in the Health Sector

Mirko Forti (Scuola Superiore Sant’Anna di Pisa – School of Law) has posted “The Deployment of Artificial Intelligence Tools in the Health Sector: Privacy Concerns and Regulatory Answers within the GDPR” on SSRN. Here is the abstract:

This article examines the privacy and data protection implications of the deployment of machine learning algorithms in the medical sector. Researchers and physicians are developing advanced algorithms to forecast possible developments of illnesses or disease statuses, basing their analysis on the processing of a wide range of data sets. Predictive medicine aims to maximize the effectiveness of disease treatment by taking into account individual variability in genes, environment, and lifestyle. These kinds of predictions could eventually anticipate a patient’s possible health conditions years, and potentially decades, into the future and become a vital instrument in the future development of diagnostic medicine. However, the current European data protection legal framework may be incompatible with inherent features of artificial intelligence algorithms and their constant need for data and information. This article proposes possible new approaches and normative solutions to this dilemma.

Johnson on Flexible Regulation for Artificial Intelligence

Walter G. Johnson (RegNet, Australian National University) has posted “Flexible Regulation for Dynamic Products? The Case of Applying Principles-Based Regulation to Medical Products Using Artificial Intelligence” (Law, Innovation and Technology 14(2)) on SSRN. Here is the abstract:

Emerging technologies including artificial intelligence (AI) enable novel products to have dynamic and even self-modifying designs, challenging approval-based products regulation. This article uses a proposed framework by the US Food and Drug Administration (FDA) to explore how flexible regulatory tools, specifically principles-based regulation, could be used to manage ‘dynamic’ products. It examines the appropriateness of principles-based approaches for managing the complexity and fragmentation found in the setting of dynamic products in terms of regulatory capacity and accountability, balancing flexibility and predictability, and the role of third parties. The article concludes that successfully deploying principles-based regulation for dynamic products will require taking serious lessons from the global financial crisis on managing complexity and fragmentation while placing equity at the centre of the framework.

Griffin on Artificial Intelligence and Liability in Health Care

Frank Griffin (University of Arkansas) has posted “Artificial Intelligence and Liability in Health Care” (31 Health Matrix: Journal of Law-Medicine 65-106 (2021)) on SSRN. Here is the abstract:

Artificial intelligence (AI) is revolutionizing medical care. Patients with problems ranging from Alzheimer’s disease to heart attacks to sepsis to diabetic eye problems are potentially benefiting from the inclusion of AI in their medical care. AI is likely to play an ever-expanding role in health care liability in the future. AI-enabled electronic health records are already playing an increasing role in medical malpractice cases. AI-enabled surgical robot lawsuits are also on the rise. Understanding the liability implications of AI in the health care system will help facilitate its incorporation and maximize the potential patient benefits. This paper discusses the unique legal implications of medical AI in existing products liability, medical malpractice, and other law.

Gerke, Babic, Evgeniou, and Cohen on The Need for a System View to Regulate AI/ML Software as Medical Device

Sara Gerke (Harvard University – Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics), Boris Babic, Theodoros Evgeniou (INSEAD), and I. Glenn Cohen (Harvard Law School) have posted “The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device” (NPJ Digit Med. 2020 Apr 7;3:53) on SSRN. Here is the abstract:

Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, which regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing for AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently proposed a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA who are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.

Bambauer on Cybersecurity for Idiots

Derek E. Bambauer (University of Arizona – James E. Rogers College of Law) has posted “Cybersecurity for Idiots” (106 Minnesota Law Review Headnotes __ (2021 Forthcoming)) on SSRN. Here is the abstract:

Cybersecurity remains a critical issue facing regulators, particularly with the advent of the Internet of Things. General-purpose security regulators such as the Federal Trade Commission continually struggle with limited resources and information in their oversight. This Essay contends that a new approach to cybersecurity modeled on the negligence per se doctrine in tort law will significantly improve cybersecurity and reduce regulatory burdens. It introduces a taxonomy of regulators based upon the scope of their oversight and the pace of technological change in industries within their purview. Then, the Essay describes negligence per se for cybersecurity, which establishes a floor for security precautions that draws upon extant security standards. By focusing on the worst offenders, this framework improves notice to regulated entities, reduces information asymmetries, and traverses objections from legal scholars about the cost and efficacy of cybersecurity mandates. The Essay concludes by offering an emerging case study for its approach: regulation of quasi-medical devices by the Food and Drug Administration. As consumer devices increasingly offer functionality for both medical and non-medical purposes, the FDA will partly transition to a general-purpose regulator of information technology, and the negligence per se model can help the agency balance security precautions with promoting innovation.

Colonna on Artificial Intelligence in the Internet of Health Things

Liane Colonna (Stockholm University – Faculty of Law) has posted “Artificial Intelligence in the Internet of Health Things: Is the Solution to AI Privacy More AI?” on SSRN. Here is the abstract:

The emerging power of Artificial Intelligence (AI), driven by the exponential growth in computer processing and the digitization of things, has the capacity to bring unfathomable benefits to society. In particular, AI promises to reinvent modern healthcare through devices that can predict, comprehend, learn, and act in astonishing and novel ways. While AI has an enormous potential to produce societal benefits, it will not be a sustainable technology without developing solutions to safeguard privacy while processing ever-growing sets of sensitive data.

This paper considers the tension that exists between privacy and AI and examines how AI and privacy can coexist, enjoying the advantages that each can bring. Rejecting the idea that AI means the end of privacy, and taking a technoprogressive stance, the paper seeks to explore how AI can be actively used to protect individual privacy. It contributes to the literature by reconfiguring AI not as a source of threats and challenges, but rather as a phenomenon that has the potential to empower individuals to protect their privacy.

The first part of the paper sets forward a brief taxonomy of AI and clarifies its role in the Internet of Health Things (IoHT). It then addresses privacy concerns that arise in this context. Next, the paper shifts towards a discussion of Data Protection by Design, exploring how AI can be utilized to meet this standard and in turn preserve individual privacy and data protection rights in the IoHT. Finally, the paper presents a case study of how some are actively using AI to preserve privacy in the IoHT.

Schwarcz on Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data

Daniel Schwarcz (University of Minnesota Law School) has posted “Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data” (Houston Journal of Health Law and Policy, 2021) on SSRN. Here is the abstract:

Insurers and employers often have financial incentives to discriminate against people who are relatively likely to experience future healthcare costs. Numerous federal and state laws nonetheless seek to restrict such health-based discrimination. Examples include the Pregnancy Discrimination Act (PDA), the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), and the Genetic Information Nondiscrimination Act (GINA). But this Essay argues that these laws are incapable of reliably preventing health-based discrimination when employers or insurers rely on machine-learning AIs to inform their decision-making. At bottom, this is because machine-learning AIs are inherently structured to identify and rely upon proxies for traits that directly predict whatever “target variable” they are programmed to maximize. Because the future health status of employees and insureds is in fact directly predictive of innumerable facially neutral goals for employers and insurers respectively, machine-learning AIs will tend to produce similar results as intentional discrimination based on health-related factors. Although laws like the Affordable Care Act (ACA) can avoid this outcome by prohibiting all forms of discrimination that are not pre-approved, this approach is not broadly applicable. Complicating the issue even further, virtually all technical strategies for developing “fair algorithms” are not workable when it comes to health-based proxy discrimination, because health information is generally private and hence cannot be used to correct unwanted biases. The Essay nonetheless closes by suggesting a new strategy for combatting health-based proxy discrimination by AI: limiting firms’ capacity to program their AIs using target variables that have a strong possible link to health-related factors.
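The proxy mechanism Schwarcz describes can be made concrete with a small sketch. The example below is my own illustration, not drawn from the Essay; the feature and variable names (pharmacy_visits, projected_cost, and so on) are hypothetical. A model is fit to a facially neutral target, projected annual cost, without ever seeing health status, yet its predictions sort people by health anyway because an innocuous input correlates with it.

```python
# Illustrative sketch only: a model optimizing a facially neutral target
# variable reconstructs health status from a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

# Latent health status (1 = likely to incur future care costs); never given to the model.
poor_health = rng.binomial(1, 0.2, n)

# Facially neutral inputs: pharmacy visits correlate with health status, tenure does not.
pharmacy_visits = rng.poisson(2 + 6 * poor_health)
tenure_years = rng.uniform(0, 20, n)

# The "target variable" the firm maximizes against: projected annual cost, driven by health.
projected_cost = 1_000 + 8_000 * poor_health + rng.normal(0, 500, n)

X = np.column_stack([pharmacy_visits, tenure_years])
model = LinearRegression().fit(X, projected_cost)
scores = model.predict(X)

# Health status was never an input, yet the scores separate the two groups,
# reproducing the effect of intentional health-based discrimination.
print("mean predicted cost, healthy:    ", round(scores[poor_health == 0].mean()))
print("mean predicted cost, poor health:", round(scores[poor_health == 1].mean()))
```

On the abstract’s account, the remedy operates at the level of the target variable (here, projected_cost) rather than the inputs, since the health information needed to debias inputs is generally private.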

Recommended.