Gallegos on AI Accountability & Auditability In Clinical Decision Software Act Of 2025 (AACDSA)

Lucas Gallegos (Independent) has posted “AI Accountability & Auditability In Clinical Decision Software Act Of 2025 (AACDSA)” on SSRN. Here is the abstract:

The AI Clinical Safety Act of 2025 establishes the first comprehensive federal framework for transparency, auditing, and accountability in artificial intelligence and machine-learning systems used for clinical decision-making in the United States. The Act mandates public explainability summaries, independent risk-scaled audits, and real-time drift monitoring for high-risk tools. Hospitals and vendors must log and report clinician overrides and adverse events, while vendors meeting rigorous audit and transparency standards are eligible for expedited FDA review and liability protection. The Act includes strong anti-gaming provisions, randomized spot audits, whistle-blower rewards, and a five-year sunset with GAO evaluation. By combining transparency, robust independent oversight, and carefully structured incentives, the AI Clinical Safety Act aims to safeguard patients, prevent bias and drift in clinical AI, and foster responsible innovation. This bill provides a statutory template for building trust and safety as AI systems become foundational in modern medicine.

Ziegler on AI-Generated Denials: Medical Necessity in Medicare Advantage Today

Emma Ziegler (Columbia Law) has posted “AI-Generated Denials: Medical Necessity in Medicare Advantage Today” on SSRN. Here is the abstract:

Medicare Advantage insurers hold vast power over access to care for Medicare beneficiaries enrolled in their plans. Among other things, these insurers make the all-important determination as to whether care is “medically necessary” and thus warrants coverage under Medicare. Recently, these insurers have turned to artificial intelligence to help with these determinations. This trend has had concerning results, exacerbating both inaccuracy and nontransparency in the coverage determination process. This Note overviews the current state of medical necessity determinations and the role Medicare Advantage insurers play in them. Taking an outcomes-focused approach, it argues that the government must demand greater transparency from these insurers and enhance access to the appeals process for beneficiaries. Such reforms are an important first step in ensuring beneficiaries have access to the care they are entitled to.

Omeonga wa Kayembe on Clinical Affordance as a Framework for Barriers, Transitions, and Policy: A Use Case on AI and NLP Integration in Psychiatry

Naomi Omeonga wa Kayembe (U Nantes Law and Political Science) has posted “Clinical Affordance as a Framework for Barriers, Transitions, and Policy: A Use Case on AI and NLP Integration in Psychiatry” on SSRN. Here is the abstract:

The integration of Artificial Intelligence (AI) and Natural Language Processing (NLP) in psychiatry has significantly progressed, evolving from early feasibility studies to sophisticated transformer-based models capable of automating clinical assessments, symptom detection, and treatment monitoring. While these technologies hold promise for enhancing psychiatric care, their adoption remains limited due to translational barriers related to the accessibility and acceptability of digital health solutions.

This narrative review synthesizes foundational NLP contributions from the 2010s alongside recent advancements in AI-driven psychiatry, emphasizing both technical scalability and regulatory considerations. To systematize the variables influencing AI adoption in care practice, we introduce Clinical Affordance, a conceptual framework that evaluates the integration potential of AI tools through two interdependent dimensions: accessibility (practical and organizational fit) and acceptability (normative expectations).

Drawing from a selective literature review, we identify the main translational constraints affecting NLP deployment in psychiatry. These challenges, ranging from EHR system fragmentation to the burden of explainability mandates and uneven usage patterns, are analyzed through the lens of Clinical Affordance, with emphasis on their implications for clinical implementation. We further argue that the transition from clinical decision support systems (AI-CDSS) to autonomous medical treatment (AI-Treatment) is central to understanding risk allocation and liability in AI-assisted psychiatry. Finally, we assess how the COVID-19 pandemic impacted public trust in AI-driven mental health solutions, particularly in relation to surveillance and ethical governance.

The article concludes with policy recommendations aimed at reinforcing Clinical Affordance through outcome-based regulation, differentiated accountability, and data governance. By bridging technical innovation with contextual viability, the Clinical Affordance framework supports the sustainable integration of AI and NLP into psychiatric practice and offers a generalizable model for evaluating other digital health technologies.

Park & Cohen on The Regulation of Polygenic Risk Scores

Jin Park (Harvard Medical) and I. Glenn Cohen (Harvard Law) have posted “The Regulation of Polygenic Risk Scores” (Harvard Journal of Law & Technology) on SSRN. Here is the abstract:

Polygenic risk scores (“PRSs”) provide genome-wide estimates of disease risk by aggregating the effects of thousands of genetic variants across the genome. These scores are the subject of immense scientific interest as research tools and, more recently, as clinical instruments that may allow physicians to stratify populations based on underlying genetic predisposition, or to tailor therapeutic interventions based on patients’ needs and likelihood of benefit. While their status as research tools has long been recognized, these scores are now undergoing clinical trials, increasing the evidence base for their use in clinical settings. These scores have also entered the consumer market, prompting industry experts to call for greater regulatory oversight. However, in part due to the speed of these developments, the legal literature has failed to comprehensively assess the nature of these scores and whether they differ fundamentally from previous forms of genetic scoring that have been regulated by the complex (yet familiar) regulatory regime for genetic testing.

This Article fills this gap in the literature by comparing the state-of-the-art methodological tools used to generate these scores with familiar forms of genetic testing (e.g., IVDs and LDTs). We identify four dimensions that make PRSs distinct from previous genetic testing regimes: (1) the underlying method of assessing genetic risk; (2) an evolving evidence base; (3) a lack of consensus on methodology; and (4) the diversity of device functions to which PRSs may apply. Taking these insights in concert, this Article also offers several principles for regulatory design as it relates to PRSs.

These principles include the need for a unified approach across all devices that incorporate PRSs, the value of adopting a risk-based framework, and the lessons to be drawn from AI/ML regulation. Ultimately, while the existing risk-based device framework will serve as a stopgap for the most clinically impactful use cases (and those that pose the most risk to patients and the public), PRSs and other novel technologies may evince the need for updates to the authorities granted to the existing regulatory regime to balance scientific innovation with the public interest.

Gerke on The Need for ‘Nutrition Facts Labels’ and ‘Front-Of-Package Nutrition Labeling’ For Artificial Intelligence/Machine Learning-Based Medical Devices – Lessons Learned From Food Labeling

Sara Gerke (U Illinois College Law) has posted “The Need for ‘Nutrition Facts Labels’ and ‘Front-Of-Package Nutrition Labeling’ For Artificial Intelligence/Machine Learning-Based Medical Devices – Lessons Learned From Food Labeling” (Emory Law Journal, Vol. 74, Forthcoming 2025) on SSRN. Here is the abstract:

Medical AI is rapidly transforming healthcare. The U.S. Food and Drug Administration (FDA) has already authorized the marketing of over 1,000 AI/ML-based medical devices, and many more products are in the development pipeline. Despite this rapid development, however, the regulatory framework for AI/ML-based medical devices could be improved. This Article focuses on the labeling of AI/ML-based medical devices, a crucial topic that needs more attention in the legal literature and from regulators like the FDA. The current lack of labeling standards tailored explicitly to AI/ML-based medical devices is an obstacle to transparency in the use of such devices. It prevents users from receiving essential information necessary for the safe use of many AI/ML-based medical devices, such as race/ethnicity and gender breakdowns of the training data used. To ensure transparency and protect patients’ health, the FDA must develop labeling standards for AI/ML-based medical devices as quickly as possible.

This Article argues that valuable lessons can be learned from food labeling and applied to labeling for AI/ML-based medical devices. In particular, it argues that regulators like the FDA need to develop not only “nutrition facts labels,” called here “AI Facts labels,” for AI/ML-based medical devices, but also a “front-of-package (FOP) nutrition labeling system,” called here the “FOP AI labeling system.” The use of FOP AI labels as a complement to AI Facts labels can further users’ literacy by providing at-a-glance, easy-to-understand information about an AI/ML-based medical device, enabling users to make better-informed decisions about its use. This Article is the first to establish a connection between FOP nutrition labeling systems and their promise for AI/ML-based medical devices and to make concrete suggestions on what such a system could look like. It also makes additional concrete proposals on other aspects of labeling for AI/ML-based medical devices, including the development of an innovative, user-friendly app based on the FOP AI labeling system as well as labeling requirements for AI/ML-generated content.

Kop et al. on A Brief Quantum Medicine Policy Guide

Mauritz Kop (Stanford Law School Center for Internet and Society) et al. have posted “A Brief Quantum Medicine Policy Guide” (Harvard Law School, Petrie-Flom Center’s Bill of Health, Dec. 6, 2024, https://petrieflom.law.harvard.edu/2024/12/06/a-brief-quantum-medicine-policy-guide/) on SSRN. Here is the abstract:

This brief healthcare policy guide explores how the convergence of quantum technology (QT) and artificial intelligence (AI) could revolutionize precision medicine, offering hyper-personalized treatments and innovative solutions to longstanding healthcare challenges. Second-generation (2G) quantum technologies leverage quantum mechanical phenomena like superposition and entanglement to solve problems beyond the reach of classical methods. By integrating quantum and classical computing, “quantum-classical hybrids” can improve drug discovery, optimize healthcare operations, enhance medical imaging, and facilitate personalized medicine design.

The article describes 2G quantum technology healthcare use cases, categorized by quantum domain. Potential applications include using quantum simulations to model complex biological systems, accelerating drug development by predicting drug-protein interactions, and employing quantum dots for targeted gene and drug delivery, which can help treat diseases like Alzheimer’s, Parkinson’s, and certain cancers. Quantum sensors can enable real-time health monitoring with exceptional precision, while quantum cryptography provides robust data protection methods—essential for safeguarding patient information under regulations like HIPAA and GDPR.

However, these breakthroughs also raise ethical, legal, socio-economic, and policy (ELSPI) concerns. Drawing lessons from AI, nanotechnology, genetics, and nuclear technology governance, policymakers must ensure responsible oversight. Neither the European Union nor the United States currently has dedicated regulations for quantum healthcare devices, though both rely on existing frameworks like the EU’s Medical Device Regulation, the EU AI Act, Federal Trade Commission (FTC) regulations, and the FDA’s regulatory categories. To manage these complexities, a combination of ex-ante, ex-durante, and ex-post regulatory approaches, as well as international standard-setting, adaptive guidelines, and multidisciplinary collaboration, is recommended. The article offers quantum-specific considerations for medical device regulatory oversight and proposes 10 guiding principles for healthcare policymakers.

By promoting quantum literacy, anticipating societal impacts, fostering global cooperation, and implementing principles-based, future-oriented regulation, we can harness quantum’s transformative potential in medicine while maintaining public trust and safety.

Luan et al. on Algorithmic Bias and Physician Liability

Shujie Luan (Johns Hopkins U Carey Business) et al. have posted “Algorithmic Bias and Physician Liability” on SSRN. Here is the abstract:

With the growing use of artificial intelligence (AI) in clinical decision-making, concerns about bias—manifested as differences in algorithmic accuracy across patient groups—have intensified. In response, the U.S. Centers for Medicare and Medicaid Services (CMS) has introduced a liability rule that penalizes healthcare providers who rely on biased algorithms that result in erroneous decisions. This paper examines the impact of this anti-bias liability rule on an AI firm’s development decision as well as a healthcare provider’s decision to use AI. The AI firm develops an algorithm that serves two patient groups, where achieving the same level of accuracy for the disadvantaged group is more costly. The provider then decides whether and how to use AI to make treatment decisions, balancing the reduction in clinical uncertainty against the risk of incurring anti-bias liability. We find the liability rule may induce biased use of AI: The provider may underuse AI overall and disproportionately disregard AI’s recommendations for disadvantaged patients. Interestingly, the effect of liability on AI use is non-monotone: as liability increases, the provider is first less likely to use AI for disadvantaged patients, but then more likely to rely on it. Furthermore, mandating equal algorithmic accuracy across patient groups may inadvertently harm all patients, in part because such mandates may lead to overusing AI for disadvantaged patients.

Oliva on Regulating Healthcare Coverage Algorithms

Jennifer D. Oliva (Indiana U Maurer Law) has posted “Regulating Healthcare Coverage Algorithms” (Indiana Law Journal, Forthcoming) on SSRN. Here is the abstract:

Healthcare insurers utilize algorithms to generate treatment coverage determinations. Insurers use such algorithms to decide whether a particular health intervention is “medically necessary” and, therefore, covered by the plan. Assuming that criterion is satisfied, insurers further deploy these algorithms to determine the breadth and scope of covered services (e.g., the number of days that a patient is entitled to hospital-level care after a “medically necessary” surgery). Unlike clinical algorithms used by healthcare institutions and providers to diagnose and treat patients, coverage algorithms are unregulated and, therefore, not evaluated for safety and effectiveness by the FDA before they go to market. In addition, coverage algorithm manufacturers—many of whom are the very health insurance companies that use them to make coverage decisions—take the view that their products are “proprietary” and not subject to public disclosure. Consequently, coverage algorithms are immunized from external validation for safety and effectiveness by peer review.

Like clinical algorithms, coverage algorithms hold promise for more cost-effective and improved healthcare delivery and outcomes. Unfortunately, health insurers often rely on them to generate ever-higher profits by improperly denying patient claims and delaying patient care. Insurance plan reliance on coverage algorithms designed to maximize profits by denying or delaying medically necessary treatment at the expense of patient health and well-being is unlawful. It is also a lucrative strategy. Such use of coverage algorithms (1) saves the insurance plan money up front by relieving its medical staff from having to engage in the time- and resource-intensive, patient-specific claims evaluation process and (2) is likely to save the plan money over the longer run when used strategically because the claims denial appeals process generally takes several years. Simply stated, when a patient is projected to die within a few years, the insurer is motivated to rely on the algorithm to deny that patient medically necessary care, force the patient to appeal that decision, and anticipate that the patient will die before the conclusion of the appeals process so that the claim is never paid. As this scenario makes obvious, health plan reliance on profit-driven coverage algorithms to deny and delay treatment disparately impacts the health of patients who have medically complex needs and, therefore, tend to utilize high-cost healthcare resources at high rates, such as Medicare and Medicaid beneficiaries and individuals with chronic or terminal conditions and other debilitating disabilities. As one investigative reporter put it, “[o]lder patients who spent their lives paying into Medicare, and are now facing amputation, fast-spreading cancers, and other devastating diagnoses, are left to pay for their care themselves or get by without it.”

Susser et al. on Synthetic Health Data: Real Ethical Promise and Peril

Daniel Susser (Cornell U) et al. have posted “Synthetic Health Data: Real Ethical Promise and Peril” (Hastings Center Report, vol. 54, no. 5, 2024, doi:10.1002/hast.4911) on SSRN. Here is the abstract:

Researchers and practitioners are increasingly using machine-generated synthetic data as a tool for advancing health science and practice, by expanding access to health data while—potentially—mitigating privacy and related ethical concerns around data sharing. While using synthetic data in this way holds promise, we argue that it also raises significant ethical, legal, and policy concerns, including persistent privacy and security problems, accuracy and reliability issues, worries about fairness and bias, and new regulatory challenges. The virtue of synthetic data is often understood to be its detachment from the data subjects whose measurement data is used to generate it. However, we argue that addressing the ethical issues synthetic data raises might require bringing data subjects back into the picture, finding ways that researchers and data subjects can be more meaningfully engaged in the construction and evaluation of datasets and in the creation of institutional safeguards that promote responsible use.

Sessa et al. on Identifying Bias in Data Collection: A Case Study on Drugs Distribution

Claudia Sessa (Carlo Cattaneo LIUC U) et al. have posted “Identifying Bias in Data Collection: A Case Study on Drugs Distribution” on SSRN. Here is the abstract:

A critical aspect of modern healthcare involves recognizing and addressing pharmaceutical needs. Predictive models serve as valuable decision-making tools in the healthcare sector, helping to proactively prevent supply chain failures. However, training these models on real historical data so that they reliably reflect actual demand is a delicate process. An effective model, capable of reliably estimating the quantity of drugs to be distributed in relation to patients’ needs, must be both accurate and inherently fair. Our study endeavors to bridge legal perspectives on fairness with practical assessments of algorithmic fairness, specifically in the context of predicting the drugs to be distributed in a given reference area. We provide an in-depth overview of the Italian National Healthcare Service, emphasizing its regulatory role in drug dispensation and its inherent challenges. Furthermore, we delve into fundamental principles of bias research, encompassing legal and statistical viewpoints. In addition, we present a comprehensive Exploratory Data Analysis using real-world data to highlight challenges encountered in the initial modeling phase. Our analysis unveils the presence of crucial missing data fields and disparities in medication utilization between genders, potentially indicative of social bias. These findings contribute to an in-depth understanding of patient populations with respect to drug collection. Importantly, our study promotes a comprehensive approach that incorporates legal considerations and technical elements to improve the fairness and efficacy of predictive models in healthcare.