Tschider on Prescribing Exploitation

Charlotte Tschider (Loyola University Chicago School of Law) has posted “Prescribing Exploitation” (Maryland Law Review, Forthcoming 2023) on SSRN. Here is the abstract:

Patients are increasingly reliant, temporarily if not indefinitely, on connected medical devices and wearables, many of which use artificial intelligence (AI) infrastructures and physical housings that directly interact with the human body. The automated systems that drive the infrastructures of medical devices and wearables, especially those using complex AI, often use dynamically inscrutable algorithms that may render discriminatory effects that alter paths of treatment and other aspects of patient welfare.

Previous contributions to the literature, however, have not explored how AI technologies animate exploitation of medical technology users. Although all commercial relationships may exploit users to some degree, some forms of health data exploitation exceed the bounds of normative acceptability. The factors that illustrate excessive exploitation that should require some legal intervention include: 1) existence of a fiduciary relationship or approximation of such a relationship, 2) a technology-user relationship that does not involve the expertise of the fiduciary, 3) existence of a critical health event or health status requiring use of a medical device, 4) ubiquitous sensitive data collection essential to AI functionality, 5) lack of reasonably similar analog technology alternatives, and 6) compulsory reliance on a medical device.

This paper makes three key contributions to the existing literature. First, this paper establishes the existence of a type of exploitation that is not only exacerbated by technology but creates additional risk by its use. Second, this paper illustrates the need for cross-disciplinary engagement across privacy scholarship and AI ethical goals that typically involve representative data collection for fairness and safety. Third, this paper illustrates how a modern information fiduciary model can neutralize patient exploitation risk when exploitation exceeds normative bounds of community acceptability.


Hod et al. on Data Science Meets Law in the Classroom

Shlomi Hod (Boston University), Karni Chagal-Feferkorn (University of Ottawa Common Law Section), Niva Elkin-Koren (Tel-Aviv University – Faculty of Law), and Avigdor Gal (Technion-Israel Institute of Technology) have posted “Data Science Meets Law” (65 Communications of the ACM, 2022) on SSRN. Here is the abstract:

Engaging lawyers and data scientists in multi-disciplinary dialogue may result in the design of better AI systems. Combining the joint input of these two disciplines as early as possible in the life cycle of AI systems may help in properly embedding human values in these systems and in minimizing their risks of unintended harms.

Lawyers and data scientists, however, often think of the other discipline as “speaking a different language,” and facilitating dialogue between them is not always trivial.

This paper describes a “hands-on” course taught to both law and data science students at academic institutions in the U.S., Europe, and Israel. The unique format of the course, which is based on students working in small mixed groups, enables meaningful dialogue between the disciplines and is intended to contribute to the design of “responsible AI” systems.

In the paper we share the pedagogical principles that guided us, as well as insights on how to foster multi-disciplinary dialogue between law and data science students.