Wang et al. on Artificial Intelligence “Law(s)” in China: Retrospect and Prospect

Wayne Wei Wang (The U Hong Kong Law) et al. have posted “Artificial Intelligence ‘Law(s)’ in China: Retrospect and Prospect” on SSRN. Here is the abstract:

This paper examines China’s evolving AI regulation, focusing on the interplay between fragmented laws, technical standards, and sectoral governance frameworks. Case studies on autonomous driving and financial AI demonstrate how adaptive regulatory models balance innovation with risk management via pilot projects, stringent data protection, and iterative policy evolution. These models transition from localized experiments to national standards, managing risks through data governance and public safety measures. Analyzing legislative proposals like the Model Artificial Intelligence Law (MAIL) and the Artificial Intelligence Law of P.R. China (Scholarly Draft Proposal), this paper contrasts MAIL’s centralized, precautionary framework with the Scholarly Draft’s flexible, tiered system that promotes innovation through differentiated risk management. This reflects the tension between central regulatory control and sector-specific governance in aligning rapid technological advancement with coherent legislative oversight. The paper argues that a phased legislative strategy emphasizing flexibility, cross-sectoral consistency, and proactive engagement with emerging technologies is essential for China to sustain global competitiveness while ensuring ethical and safe AI development. By integrating local experimentation, sectoral adaptation, and incremental national standardization, it advocates for balancing regulatory oversight with technological innovation. Ultimately, the findings reflect China’s efforts to craft a resilient legal framework that mitigates AI risks while fostering sustained and responsible innovation.

Oliva on Regulating Healthcare Coverage Algorithms

Jennifer D. Oliva (Indiana U Maurer Law) has posted “Regulating Healthcare Coverage Algorithms” (Indiana Law Journal, Forthcoming) on SSRN. Here is the abstract:

Healthcare insurers utilize algorithms to generate treatment coverage determinations. Insurers use such algorithms to decide whether a particular health intervention is “medically necessary” and, therefore, covered by the plan. Assuming that criterion is satisfied, insurers further deploy these algorithms to determine the breadth and scope of covered services (e.g., the number of days that a patient is entitled to hospital-level care after a “medically necessary” surgery). Unlike clinical algorithms used by healthcare institutions and providers to diagnose and treat patients, coverage algorithms are unregulated, and, therefore, not evaluated for safety and effectiveness by the FDA before they go to market. In addition, coverage algorithm manufacturers—many of whom are the very health insurance companies that use them to make coverage decisions—take the view that their products are “proprietary” and not subject to public disclosure. Consequently, coverage algorithms are immunized from external validation for safety and effectiveness by peer review.

Like clinical algorithms, coverage algorithms hold promise for more cost-effective and improved healthcare delivery and outcomes. Unfortunately, health insurers often rely on them to generate ever-higher profits by improperly denying patient claims and delaying patient care. Insurance plan reliance on coverage algorithms designed to maximize profits by denying or delaying medically necessary treatment at the expense of patient health and well-being is unlawful. It is also a lucrative strategy. Such use of coverage algorithms (1) saves the insurance plan money up front by relieving its medical staff from having to engage in the time- and resource-intensive, patient-specific claims evaluation process and (2) is likely to save the plan money over the longer run when used strategically because the claims denial appeals process generally takes several years. Simply stated, when a patient is projected to die within a few years, the insurer is motivated to rely on the algorithm to deny that patient medically necessary care, force the patient to appeal that decision, and anticipate that the patient will die before the conclusion of the appeals process so that the claim is never paid. As this scenario makes obvious, health plan reliance on profit-driven coverage algorithms to deny and delay treatment disparately impacts the health of patients who have medically complex needs and, therefore, tend to utilize high-cost health care resources at high rates, such as Medicare and Medicaid beneficiaries and individuals with chronic or terminal conditions and other debilitating disabilities. As one investigative reporter put it, “[o]lder patients who spent their lives paying into Medicare, and are now facing amputation, fast-spreading cancers, and other devastating diagnoses, are left to pay for their care themselves or get by without it.”

Heydari et al. on Putting Police Body-Worn Camera Footage to Work: A Civil Liberties Evaluation of Truleo’s AI Analytics Platform

Farhang Heydari (Vanderbilt Law) et al. have posted “Putting Police Body-Worn Camera Footage to Work: A Civil Liberties Evaluation of Truleo’s AI Analytics Platform” on SSRN. Here is the abstract:

This report summarizes findings from a civil liberties evaluation of Truleo, an AI-powered analytics platform designed to automate the review of police body-worn camera (BWC) footage. It includes a summary of how Truleo’s platform works, policy choices made by the company, and our assessment of safeguards and risks of the platform from a civil liberties perspective. The report also offers a series of recommendations for policymakers considering the adoption of Truleo or similar technologies. These include the necessity for independent testing of claimed benefits, democratic authorization for deployment, and ongoing transparency and public input around the platform’s design and operation. Importantly, the report argues that BWC footage should be treated as “civic data” owned by the public, not the police, to enable wider access and use for purposes such as research, oversight, and the exploration of alternative public safety approaches.

Generalizing beyond Truleo, we note that despite their cost, explosive growth, and the incredible amount of personal data they capture, BWCs are significantly underregulated by law, with many critical policy choices left to the law enforcement agencies that use the technology. As a result, the use of the technology has shifted away from its original impetus—to improve outcomes for members of the public interacting with the police and to provide transparency and accountability when things went wrong—and increasingly toward use as an investigative tool. But we view BWC footage as the largest collection of data on policing in existence, and one that has been woefully underutilized as a tool for evaluating and improving policing, thus leaving much of the value of our nation’s investment in BWCs untapped. Given this gap, there is great potential in AI technologies, like Truleo, that can rebalance the scales by automating the review of this footage. Although we see great potential in a platform like Truleo’s, we worry that its full potential will never be achieved so long as police retain sole control of BWC footage. Accordingly, we emphasize the need for proactive policymaking by legislators to ensure that emerging AI analytics technologies serve the public interest and help realize the full potential of the significant public investment in BWCs.

Fagan on Reducing Proxy Discrimination

Frank Fagan (South Texas College of Law Houston) has posted “Reducing Proxy Discrimination” (Journal of Law & Technology at Texas (forthcoming 2025)) on SSRN. Here is the abstract:

Law protects people from discrimination. Algorithms, however, can easily circumvent the appearance of discrimination through the artful use of proxy variables. For instance, a lending algorithm may appear to satisfy a legal standard by ignoring race, but the same algorithm might deny loan applicants on the basis of having attended a particular high school, a variable that may closely correlate with race. An algorithm that assesses work performance and recommends promotions may ignore sex, but the same algorithm might penalize employees who take, on average, more paternity leave, a variable that may closely correlate with sex. The abuse of proxies cuts across political views on affirmative action. For example, an admissions committee might technically ignore race consistent with recent changes to Equal Protection rules, but the same committee might consider variables that are highly correlated to race, such as zip code, high school, and the income of parents, in order to achieve a university’s diversity goals.

Today, there is no clear legal test for regulating the use of variables that proxy for race and other protected classes and classifications. This Article develops such a test. Decision tools that use proxies are narrowly tailored when they exhibit the weakest total proxy power. The test is necessarily comparative. Thus, if two algorithms predict loan repayment or university academic performance with identical accuracy rates, but one uses zip code and the other does not, then the second algorithm can be said to have deployed a more equitable means for achieving the same result as the first algorithm. Scenarios in which two algorithms produce comparable and non-identical results present a greater challenge. This Article suggests that lawmakers can develop caps to permissible proxy power over time, as courts and algorithm builders learn more about the power of variables. Finally, the Article considers who should bear the burden of producing less discriminatory alternatives and suggests plaintiffs remain in the best position to keep defendants honest—so long as testing data is made available.
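
The comparative structure of the proposed test lends itself to a simple illustration. The sketch below is a toy example, using synthetic data and a scikit-learn logistic regression that are assumptions of this post rather than anything from the Article: the same model class is fit twice, once with a candidate proxy variable (zip code) and once without, and the held-out accuracies are compared. If accuracy is essentially unchanged, the proxy-free model is the less discriminatory alternative achieving the same result.

```python
# Toy sketch of the comparative test: fit the same model class twice, once
# with a candidate proxy variable (zip code) and once without, then compare
# held-out accuracy. Data, features, and model are illustrative assumptions,
# not the Article's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

income = rng.normal(50, 15, n)          # legitimate predictor of repayment
zip_code = rng.integers(0, 10, n)       # candidate proxy variable
# In this synthetic example, repayment depends on income only, so zip code
# carries no predictive signal of its own.
repaid = (income + rng.normal(0, 10, n) > 50).astype(int)


def held_out_accuracy(X, y):
    """Train on 70% of the data and report accuracy on the held-out 30%."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))


acc_with_proxy = held_out_accuracy(np.column_stack([income, zip_code]), repaid)
acc_without_proxy = held_out_accuracy(income.reshape(-1, 1), repaid)

print(f"accuracy with zip code:    {acc_with_proxy:.3f}")
print(f"accuracy without zip code: {acc_without_proxy:.3f}")
# Comparable accuracy means the proxy-free model is a less discriminatory
# alternative achieving the same result, so the proxy-laden model would not
# be narrowly tailored under the proposed test.
```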

Uuk et al. on Effective Mitigations for Systemic Risks from General-Purpose AI

Risto Uuk (Future Life Institute) et al. have posted “Effective Mitigations for Systemic Risks from General-Purpose AI” on SSRN. Here is the abstract:

The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed frameworks for risk mitigation, but has left gaps in our understanding of the perceived effectiveness of measures for mitigating systemic risks. Our study addresses this gap by evaluating how experts perceive different mitigations that aim to reduce the systemic risks of general-purpose AI models. We surveyed 76 experts whose expertise spans AI safety; critical infrastructure; democratic processes; chemical, biological, radiological, and nuclear risks (CBRN); and discrimination and bias. Among 27 mitigations identified through a literature review, we find that domain experts perceive a broad range of risk mitigation measures as both effective in reducing various systemic risks and technically feasible. In particular, three mitigation measures stand out: safety incident reports and security information sharing, third-party pre-deployment model audits, and pre-deployment risk assessments. These measures show the highest expert agreement ratings (>60%) across all four risk areas and are the most frequently selected in experts’ preferred combinations of measures (>40%). The surveyed experts highlighted that external scrutiny, proactive evaluation, and transparency are key principles for effective mitigation of systemic risks. We provide policy recommendations for implementing the most promising measures, incorporating the qualitative contributions from experts. These insights should inform regulatory frameworks and industry practices for mitigating the systemic risks associated with general-purpose AI.

Lubin on Technology and the Law of Jus Ante Bellum

Asaf Lubin (Indiana U Maurer Law) has posted “Technology and the Law of Jus Ante Bellum” (26(1) Chicago Journal of International Law (forthcoming, 2025)) on SSRN. Here is the abstract:

The temporal boundaries of international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between Jus Ad Bellum and Jus In Bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and cyber offensive capabilities, only further underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding into their software and user interfaces policy rationales, legal interpretations, and value judgments. Once these technologies are deployed in battle, those choices have the potential to precondition warfighters and set in motion violations of international humanitarian law (IHL).

This article highlights glaring inadequacies in how the U.N. Charter, IHL, and International Criminal Law (ICL) currently regulate peacetime military preparations, particularly those involving disruptive technologies. The article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for Jus Ante Bellum (just preparation for war) as a new limb in the Just War Theory model. By reimagining international law’s temporalities, Jus Ante Bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.

Hrdy on Trade Secrecy Meets Generative AI

Camilla Alexandra Hrdy (Rutgers) has posted “Trade Secrecy Meets Generative AI” (“Disrupting AI” Symposium Issue of the Chicago-Kent Law Review, Forthcoming 2025) on SSRN. Here is the abstract:

Generative AI models like ChatGPT raise novel issues for trade secret law. This Essay identifies three major developments and explains how the law will likely respond based on analogies to past technologies and past case law. 

First, widespread use of generative AI poses new risks to companies’ existing trade secrets. For example, trade secret owners’ own employees might inadvertently share trade secrets with a generative AI tool like ChatGPT, which might disseminate this information to competitors or third parties. I argue this new disclosure risk, at the margins, raises the bar for keeping trade secrets. But companies will likely adapt their risk management strategies, as they did in the face of prior information-distribution technologies, such as the internet. 

Second, generative AI will add to the universe of information that can be protected under trade secret law. Trade secret law will be available even for information that is not protected by patent and copyright law. Patent and copyright law have human creator requirements. But trade secret law has no human creator requirement. Therefore, purely AI-generated outputs that do not qualify for patent or copyright protection can be protected as trade secrets. 

Third, companies that develop valuable new generative AI tools will be able to rely on trade secrecy to protect that technology, even when other forms of IP are unavailing. Trade secret law, especially when supplemented by restrictive contractual “terms of use,” can protect various types of information related to generative AI, including information that does not qualify for copyright or patent protection. 

Even though generative AI models will initially benefit from a combination of trade secrecy and contract protection, the models are highly vulnerable to “reverse engineering.” For example, OpenAI, the maker of ChatGPT, recently accused the makers of the new AI model, “DeepSeek,” of engaging in “knowledge distillation” to develop their competing system—using the larger, more complex, and more expensive ChatGPT model to build a smaller, simpler, and cheaper one. Trade secret law, although it generally permits reverse engineering, may or may not condone this conduct. Courts might construe these activities as a violation of contract law, since knowledge distillation seems to violate OpenAI’s contractual terms of use, but courts may also view these activities as a violation of federal and state trade secret law. In software cases, courts have held that using cutting-edge techniques like data scraping to access trade secrets constitutes acquisition by “improper means,” and thus misappropriation, especially when contractual terms of use explicitly prohibit this conduct. The makers of DeepSeek claim they independently developed their model, but if this is not true, trade secret law could provide an avenue for legal liability.
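
For readers unfamiliar with the technique, “knowledge distillation” generally refers to training a smaller “student” model to imitate the output distribution of a larger “teacher” model. The sketch below is a minimal, generic illustration of that training loop in PyTorch; the models, sizes, and random inputs are toy placeholders, and nothing here describes how DeepSeek or OpenAI’s systems were actually built.

```python
# Generic, minimal sketch of knowledge distillation: a small "student"
# network is trained to match the softened output distribution of a larger
# "teacher" network. Models, sizes, and inputs are toy placeholders, not a
# description of any real company's system.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(200):
    x = torch.randn(64, 32)                 # stand-in for real input data
    with torch.no_grad():
        teacher_logits = teacher(x)         # query the larger model's outputs
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```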

Ginsburg & Austin on Regulating Deepfakes at Home and Abroad

Jane C. Ginsburg (Columbia U Law) and Graeme W. Austin (Victoria U Wellington) have posted “Regulating Deepfakes at Home and Abroad” on SSRN. Here is the abstract:

AI technology enables the creation of “deepfakes”—known in legal documents as “digital replicas”—capable of simulating the visual and vocal appearance of real people, living or dead. AI programs can also generate musical compositions in the style of well-known composers or performers, as well as video sequences. What may be good fun in private may become pernicious, offensive, and even dangerous, if widely disseminated over social media or through commercial channels. But, at least in the U.S., legal protections for performers and ordinary individuals against digital replicas are, at best, scanty. The first part of this Essay reviews existing protections against the creation and dissemination of deepfakes under U.S. copyright and trademark laws as well as representative State right of publicity laws. Our brief survey supports the conclusion of the U.S. Copyright Office that “new federal legislation is urgently needed” because “existing laws fail to provide fully adequate protection.” These failures appear plainer still once consideration extends to the capacity of these doctrines to reach foreign violations. The second part of this Essay’s analysis will show how the currently pending U.S. legislation may, and may not, provide performers and ordinary individuals with enforceable rights against the use of their voices and visual likenesses in digital replicas. Given the few material barriers to cross-border dissemination of deepfakes, any evaluation of the strength of the protections afforded by a new U.S. intellectual property right should consider its international scope, particularly in light of recent Supreme Court caselaw restricting the territorial reach of U.S. intellectual property protections.

Almenar et al. on The Protection of AI-Based Space Systems from a Data-Driven Governance Perspective

Roser Almenar (U Valencia Law) et al. have posted “The Protection of AI-Based Space Systems from a Data-Driven Governance Perspective” (75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024) on SSRN. Here is the abstract:

Space infrastructures have long represented the pinnacle of technological and engineering achievements. This complexity has been further amplified by the advent of the new space race, where private actors are taking the lead, alongside states, in deploying thousands of satellites in outer space. The outer space environment of 2040 will look very different from today. Spacecraft will need to maneuver more frequently to avoid potential collisions and to be more conscious of their surroundings. Indeed, as the frequency of events and the number of space objects rises, decision-making tasks will increasingly challenge human operators, especially as physical and temporal margins diminish. Such complexity is compounded by the synergy of space technologies and Artificial Intelligence (AI), which is revolutionizing the functioning of space systems.

The forward trajectory clarifies the significance that AI in outer space will retain in the years ahead. The Corpus Juris Spatialis finds itself at a crossroads, faced with the challenge of withstanding the technological advances catalyzed by the impending integration of AI into all facets of space missions. Given the ubiquitous nature of AI, its implementation will invariably pose multifaceted legal challenges across diverse aspects of International Space Law. The acquired autonomy of space assets prompts crucial questions regarding the legal standards applicable to AI in outer space, and how these autonomous space systems should be protected against hostile interference.

The main purpose of this paper, presented by the Space Law and Policy Project Group of the Space Generation Advisory Council (SGAC), is to examine the pivotal legal dimensions stemming from the automation of space-based applications from a ‘data-driven governance’ standpoint. The increase in production and acquisition of space data will only augment the sophistication of AI systems, making it essential that their data assets be reliable, accurate, and consistent to safeguard the long-term success of AI technologies in space missions. The paper aims to address the overarching legal challenges posed by the integration of AI into outer space operations, specifically on cybersecurity, intellectual property, and data governance, which are critical for safeguarding autonomous systems. By examining the various nuances of these domains, it seeks to contribute to a comprehensive understanding of the legal landscape of the current AI-space pairing. Ultimately, the conclusion will offer a set of recommendations to pave the way for a secure, ethical evolution of autonomous space systems in the near future.

Sundararajan on How Corporate Boards Must Approach AI Governance

Arun Sundararajan (New York U (NYU) Leonard N. Stern Business) has posted “How Corporate Boards Must Approach AI Governance” on SSRN. Here is the abstract:

As the landscape of artificial intelligence (AI) and generative AI evolves rapidly, AI oversight by corporate boards is essential for managing AI exposure and complying with new AI laws. Competitive pressure to stay ahead in the AI race is inducing CEOs to embrace innovation aggressively, making board oversight especially critical. I present a framework for corporate boards that identifies some key AI governance dimensions and provides guidelines for assessing their organizational risk and regulatory likelihood. The dual lenses of risk and regulation can simultaneously aid a board in prioritizing governance aspects to pay attention to and in choosing a robust oversight strategy. Mapping the risk-regulation matrix shapes appropriate recommended oversight strategies, ranging from proactive self-regulation and compliance monitoring to more passive wait-and-watch strategies. I provide a structured way to navigate the evolving regulatory and governance landscape while unshackling boards from the subjectivity and imprecision of terms like “responsible” or “ethical” AI, leading to oversight that aligns with a company’s unique risk profile and industry-specific regulatory context, while recognizing that AI governance touches a range of topics, from technology, intellectual property and sustainability to audit, measurement and risk assessment.
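
Although the framework is qualitative, the dual-lens mapping the abstract describes can be pictured as a simple lookup. The sketch below is a hypothetical rendering of such a risk-regulation matrix in code: only the two strategies the abstract names (proactive self-regulation with compliance monitoring, and passive wait-and-watch) come from the text, while the intermediate cell assignments and the example dimension are illustrative assumptions.

```python
# Hypothetical sketch of a risk-regulation matrix: map a governance
# dimension's (organizational risk, regulatory likelihood) assessment to an
# oversight strategy. Only the two corner strategies are named in the
# abstract; the intermediate cells are illustrative assumptions.
from enum import Enum


class Level(Enum):
    LOW = "low"
    HIGH = "high"


OVERSIGHT_MATRIX = {
    (Level.HIGH, Level.HIGH): "proactive self-regulation and compliance monitoring",
    (Level.HIGH, Level.LOW): "proactive self-regulation",   # assumed intermediate cell
    (Level.LOW, Level.HIGH): "compliance monitoring",       # assumed intermediate cell
    (Level.LOW, Level.LOW): "passive wait-and-watch",
}


def recommended_oversight(risk: Level, regulatory_likelihood: Level) -> str:
    """Return the oversight strategy for one AI governance dimension."""
    return OVERSIGHT_MATRIX[(risk, regulatory_likelihood)]


# Example: a hypothetical governance dimension judged high organizational
# risk and high regulatory likelihood.
print(recommended_oversight(Level.HIGH, Level.HIGH))
```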