Haynes on Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance

Maria De Lourdes Haynes (American U Dubai) has posted “Governing at a Distance: The EU AI Act and GDPR as Pillars of Global Privacy and Corporate Governance” on SSRN. Here is the abstract:

The European Artificial Intelligence Act (AI Act) constitutes a landmark regulatory framework governing artificial intelligence technologies, with core principles grounded in transparency, accountability, and risk mitigation. While designed to foster innovation and safeguard fundamental rights, the Act poses considerable implementation challenges. Organisations must navigate complex compliance obligations imposed on various actors across the value chain. These requirements entail rigorous reporting, auditing, monitoring and governance mechanisms, placing increased demands on corporate governance structures.

A defining feature of the AI Act is its extraterritorial scope, mirroring the reach of the General Data Protection Regulation (GDPR). The AI Act applies not only to entities established within the European Union but also to non-EU businesses operating or placing AI products on the EU market. Its extensive provisions, covering authorised representatives and specific duties for actors across the AI value chain, are expected to incentivise non-EU jurisdictions and corporations to align their AI development and deployment practices with EU standards. Non-compliance may lead to hefty fines and exposure to reputational damage along with an erosion of consumer trust.

The AI Act is poised to emerge as a global benchmark for AI regulation. Board-level governance bodies must reconcile innovation and business objectives with regulatory imperatives, address liability risks, and embed AI literacy into strategic management and decision-making. As the regulatory framework evolves, it reinforces the necessity of integrating multidisciplinary legal, ethical, and strategic considerations into managerial and corporate governance frameworks to navigate this dynamic environment effectively and mitigate emerging risks.

Nugent on Generative Cybersecurity

Nicholas Nugent (U Tennessee) has posted “Generative Cybersecurity” on SSRN. Here is the abstract:

Cybersecurity is experiencing a sea change, and AI is to blame. Bots, which now outnumber human users, prowl networks day and night, using deep learning to discover vulnerabilities and threatening to make all software essentially transparent. The number of skilled human hackers alive in the world no longer poses a meaningful constraint on the amount of damage that can be done, as even the least experienced “script kiddie” can outsource his dark arts to hundreds of self-executing AI agents, each independently working to worm its way into a target’s system. And the age of real-time deepfakes is now upon us, as scammers personally converse with the victims of their social engineering schemes while powerful hardware dynamically swaps their faces and voices with those of impersonated relatives or coworkers.

At the same time, traditional legal doctrines are showing their age. Firms have few legal options to stop bots from continually probing their systems for vulnerabilities, as courts long ago hollowed out the tort of cyber-trespass. The federal Computer Fraud and Abuse Act punishes hackers who use AI to break into protected computers just as surely as it punishes traditional hacking. But the 1986 statute is showing its age, its language poorly suited to situations in which adversaries trick lawful AI systems into voluntarily spilling their secrets without ever crossing the access barrier—the problem of “adversarial AI.” And wire fraud, theft, and right-of-publicity laws map awkwardly, if they map at all, onto certain elements of deepfake scams.

Existing liability frameworks compound the problem, making it difficult to hold AI companies accountable when bad actors use their tools to harm others. Negligence doctrines typically insulate vendors from secondary liability where products admit of substantial lawful uses or where intervening criminality breaks the chain of proximate causation. And firms that deploy defensive AI systems to fight fire with fire may likewise find themselves without a backstop if those systems fail, or unexpectedly wreak havoc on others, given tort law’s reluctance to apply product liability rules to software.

Despite a growing literature on legal issues related to artificial intelligence and a separate body of cybersecurity scholarship, the legal academy has not yet treated AI-driven cybersecurity as a distinct, system-level field of inquiry. Where scholars or policymakers acknowledge that a particular AI use case challenges a traditional rule, they tend to offer ad hoc fixes (or none at all). As a result, cybersecurity law risks falling behind in a rapidly evolving threat environment, leaving firms and individuals without adequate remedies.

This Article tackles the problem head-on, offering the first system-level treatment of the “AI problem” facing cybersecurity and, by extension, cybersecurity law. It provides a comprehensive taxonomy of the ways AI intersects with cybersecurity. That taxonomy organizes the field around three primary roles: using AI as a tool for malicious cyber-activity (“AI as Threat”), attacking AI systems (“AI as Target”), and leveraging AI’s defensive capabilities (“AI as Shield”). It builds out detailed subcategories grounded in specific technologies, operations, and injuries, and draws on the computer science literature and real-world incidents to show that each distinct threat is real rather than theoretical.

Not limited to technical description, the Article systematically identifies the existing laws and doctrines that apply to each distinct use case and exposes the structural gaps AI has created. It then advances an integrated reform agenda designed to realign cybersecurity law to a landscape defined by autonomous, learning systems. The Article proposes five core shifts: rethinking the doctrine of electronic trespass, decentering intrusion as a necessary element in hacking offenses, protecting individual likeness per se, establishing artificial duties of care, and recalibrating negligence doctrine for agentic systems. Taken together, these reforms would move cybersecurity law beyond its human- and intrusion-era origins and toward a design suited to the new reality of machine-mediated threats and security.

Morley et al. on Closing the AI Benefits Gap: Systems Design for Population Health Equity

Jessica Morley (Yale U Digital Ethics Center) and Luciano Floridi (Yale U Digital Ethics Center) have posted “Closing the AI Benefits Gap: Systems Design for Population Health Equity” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is currently failing to live up to its potential. Its champions promise that it will make healthcare more effective, efficient, and equitable, thereby improving population health. However, these benefits are not consistently materialising. Examples of AI working effectively at scale remain limited, and even when implementation succeeds, group or population-level improvements in outcomes are often not discernible. Drawing on the 2024 Global Health in the Age of AI symposium, we argue that this benefits gap stems from two fundamental problems. First, AI is being built on inadequate foundations. Second, AI has been tasked with optimising individual health, a function incapable of improving population outcomes. The benefits gap cannot, therefore, be closed through ad hoc policy interventions designed to address specific implementation barriers. Instead, AI must first be assigned a new population-level function, then robust foundations must be built through systems design to support it. Crucially, both the function and the foundations must be co-created by those most affected by health inequities, working together with frontline health workers, public health practitioners, AI developers, and governance bodies. Only by taking this approach will it be possible to realise AI’s population health potential and avoid a disillusionment-driven healthcare-specific AI winter.

Lim et al. on Introduction to Inclusive Innovation in the Age of AI and Big Data

Daryl Lim (Pennsylvania State U) and Peter K. Yu (Texas A&M U Law) have posted “Introduction to Inclusive Innovation in the Age of AI and Big Data” (INCLUSIVE INNOVATION IN THE AGE OF AI AND BIG DATA, Daryl Lim and Peter K. Yu, eds., Oxford University Press, 2026, Forthcoming) on SSRN. Here is the abstract:

As artificial intelligence and big data analytics reshape economies and societies, the promise of innovation is increasingly shadowed by concerns over inclusion, equity, and global justice. Inclusive Innovation in the Age of AI and Big Data brings together established and emerging voices from across the world to critically examine issues lying at the intersection of innovation, intellectual property, and inequality in the age of AI and big data.

Featuring empirical studies, legal analyses, policy critiques, interdisciplinary perspectives, and global insights, this accessible, interdisciplinary, and open-access volume underscores the tremendous impact gender, race, and other socioeconomic factors have on innovation and intellectual property ecosystems. It also explores structural barriers in these ecosystems, diversity initiatives in the patent area, metrics for measuring inclusivity and diversity in innovation, changes brought about by AI and big data, and the evolution of the global innovation and intellectual property systems.

This introductory chapter begins by identifying three core questions in the emerging debate on inclusive innovation: Innovation by whom? Innovation for whom? And innovation to what end? The chapter then discusses the interrelationship between inclusive innovation and intellectual property, new equity concerns raised by AI-driven innovation, and the multiple pathways to promote inclusive innovation. The chapter continues by outlining the structure of the volume, which is organized into five thematic parts: (1) innovation gaps and demographics; (2) disparities in the patent system; (3) initiatives to promote inclusive innovation; (4) AI technology and equitable development; and (5) AI-driven innovation and global challenges.

Lavi et al. on Seeing is Believing? Deepfakes in Financial Markets

Michal Lavi (The Hadar Jabotinsky Center Interdisciplinary Research Financial Markets) and Hadar Yoana Jabotinsky (The Hadar Jabotinsky Center Interdisciplinary Research Financial Markets) have posted “Seeing is Believing? Deepfakes in Financial Markets” (44 Cardozo Arts & Ent. L.J. 55 (2026)) on SSRN. Here is the abstract:

“We let a genie out of the bottle when we developed nuclear weapons… AI is somewhat similar; it’s part way out of the bottle.”

-Warren Buffett, at his annual shareholder meeting.

An AI-powered tool recently mimicked Warren Buffett’s image and voice so convincingly that even his own family could have been deceived. This striking example highlights the transformative potential of voice cloning and deepfakes. This innovative technology leverages artificial intelligence (AI) to create hyper-realistic audio and video content. By blurring the boundaries between authenticity and synthetic creation, deepfakes make it possible to fabricate moments that never occurred. Recent advancements in AI and user-friendly software have made deepfakes more accessible and contributed to their proliferation, enabling even individuals with minimal technical skills to produce compelling fakes at little to no cost.

While deepfakes can be used positively and offer promising applications, such as restoring voices, animating art, or enhancing online shopping, they also have a dark side. Deepfakes have been weaponized to spread misinformation, create fake pornography, and disseminate fake news. Although research often focuses on deepfakes in social media, targeted scams using deepfakes are a growing concern. These scams often involve fabricated evidence, identity theft, or highly convincing impersonations executed with alarming precision, aimed at facilitating financial fraud.

Deepfakes pose significant threats to personal security, national security, financial stability, and democracy. Addressing their harmful effects is urgent. This Article asks how policymakers should govern the use of this technology and confront and mitigate its harmful effects in the context of financial markets. Rejecting a one-size-fits-all regulatory framework, it advocates for tailored strategies. For social media deepfakes, the focus should be on balancing free speech with improved content moderation. For targeted scams, new security standards and verification mechanisms are imperative.

Contributing to the legal scholarship, this Article provides a comprehensive overview of the deepfake phenomenon, detailing its motivations, harms, and societal impacts. It emphasizes the overlooked yet pressing issue of deepfake-driven financial scams, analyzing the unique challenges these targeted distortions of reality pose. The Article critiques existing legislative efforts, arguing they are ill-suited to address narrow, targeted scams. Finally, it proposes tailored, context-specific solutions to mitigate the dangers posed by this technology. The Article concludes by underscoring that as the line between real and fake continues to blur, our legal, organizational and ethical frameworks must evolve to safeguard truth.

Hartzog et al. on How AI Destroys Institutions

Woodrow Hartzog (Boston U Law) and Jessica M. Silbey (Boston U Law) have posted “How AI Destroys Institutions” (77 UC Law Journal (forthcoming 2026)) on SSRN. Here is the abstract:

Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo. This happens through the machinations of interpersonal relationships within those institutions, which broaden perspectives and strengthen shared commitment to civic goals.

Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.

Shah et al. on Robust AI Personalization Controls: The Human Context Protocol

Anand Shah (Massachusetts Institute Technology (MIT)) et al. have posted “Robust AI Personalization Controls: The Human Context Protocol” on SSRN. Here is the abstract:

Personalization underpins the modern digital economy. Today, personalization is largely implemented through provider-managed infrastructure that infers user preferences from behavioral data, with limited portability or user control. However, large language models (LLMs) are increasingly being used to perform tasks on users’ behalf. The age of LLMs for the first time provides a path to a more controllable and interpretable personalization paradigm, grounded in user-expressed natural language preferences and context. We propose the Human Context Protocol (HCP), a user-centric approach to representing and sharing personal preferences across AI systems. HCP treats preferences as a portable, user-governed layer in the personalization stack, enabling interoperability, scoped access, and revocation. Along with a working prototype to ground discussion, we consider adoption dynamics and market incentives, examine high-stakes use cases, and outline novel paths via the HCP towards trustworthy personalization in the human-AI economy.
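
The abstract describes HCP’s preference layer only at a high level; as a purely illustrative sketch (not the authors’ protocol, whose schema and API the abstract does not specify), the hypothetical Python snippet below models what a portable, user-governed preference store with scoped access and revocation might look like:

```python
# Hypothetical illustration only: the HCP abstract describes a portable,
# user-governed preference layer with scoped access and revocation, but
# gives no schema or API. All names below are invented for this sketch.
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class PreferenceStore:
    # Natural-language preferences keyed by topic, e.g. "diet" -> "vegetarian".
    preferences: Dict[str, str] = field(default_factory=dict)
    # Which topics each AI agent has been granted access to.
    grants: Dict[str, Set[str]] = field(default_factory=dict)

    def set_preference(self, topic: str, statement: str) -> None:
        """The user records a preference in their own words."""
        self.preferences[topic] = statement

    def grant(self, agent_id: str, topics: Set[str]) -> None:
        """The user grants an agent scoped access to selected topics."""
        self.grants.setdefault(agent_id, set()).update(topics)

    def revoke(self, agent_id: str) -> None:
        """The user revokes all access previously granted to an agent."""
        self.grants.pop(agent_id, None)

    def read(self, agent_id: str) -> Dict[str, str]:
        """An agent sees only the topics within its granted scope."""
        allowed = self.grants.get(agent_id, set())
        return {t: s for t, s in self.preferences.items() if t in allowed}


# Example: a shopping assistant sees dietary preferences until revoked.
store = PreferenceStore()
store.set_preference("diet", "I am vegetarian and avoid peanuts.")
store.set_preference("budget", "Prefer options under $50.")
store.grant("shopping-assistant", {"diet"})
print(store.read("shopping-assistant"))  # only the "diet" preference
store.revoke("shopping-assistant")
print(store.read("shopping-assistant"))  # empty after revocation
```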

Alonso et al. on AI And Copyright “Hallucinations”: Does the Text and Data Mining Exception Really Support Generative AI Training?

Eduardo Alonso (City U London) and Nicola Lucchi (Universitat Pompeu Fabra Law) have posted “AI And Copyright ‘Hallucinations’: Does the Text and Data Mining Exception Really Support Generative AI Training?” (European Intellectual Property Review, 2025, volume 47, issue 9, pp. 515-526) on SSRN. Here is the abstract:

This article critically challenges the widespread – and, it is argued, conceptually flawed – assumption that Articles 3 and 4 of the CDSM Directive provide a lawful basis for training generative AI systems on copyright-protected content. The article describes this misinterpretation as a form of legal “hallucination”, underscoring its disconnect from the Directive’s textual, technical, and normative foundations. Designed to enable automated analytical extraction for scientific or informational purposes, the TDM exceptions do not encompass the large-scale reproduction, internalisation, and expressive re-use of works characteristic of GenAI training. Article 3 is limited to non-commercial research; Article 4’s opt-out mechanism, based on non-standardised signals, exacerbates uncertainty without ensuring transparency or fair compensation. This misclassification not only undermines core copyright incentives but also distorts the scope of EU exceptions, placing the framework in tension with the three-step test and international norms. The article argues that applying TDM rules to GenAI training introduces structural imbalances, both doctrinal and distributive, that risk entrenching platform asymmetries, weakening authorial agency, and threatening cultural diversity. Rather than relying on strained legal interpretations, a forward-looking response requires bespoke legal reforms that preserve normative coherence while addressing the specific challenges posed by synthetic content creation.

Neill et al. on A Framework for Applying Copyright Law to the Training of Textual Generative Artificial Intelligence

Arthur H. Neill (New Media Rights) et al. have posted “A Framework for Applying Copyright Law to the Training of Textual Generative Artificial Intelligence” (32 Texas Intellectual Property Law Journal 225 (2024)) on SSRN. Here is the abstract:

The rise in the popularity of consumer-facing generative artificial intelligence (“GenAI”) has created considerable confusion and consternation among some copyright owners. Copyright owners argue that GenAI’s ability to automatically generate works is made possible by large-scale direct infringement by OpenAI, Microsoft, and other major GenAI developers. This article explores the application of copyright law to the training of OpenAI’s ChatGPT, specifically focusing on the legal issues surrounding the unauthorized use of copyrighted textual works in the GenAI training process.

The large language models (“LLMs”) that drive ChatGPT and similar GenAI can summarize written works, generate movie scripts, write poetry, and compose stories nearly instantaneously. LLMs can only function in this way due to the use of vast, diverse training datasets comprised of billions of websites and expansive repositories of books. These datasets are processed to derive the functionality and syntax of language, allowing the LLMs to generate new works.

This article discusses the recent lawsuits launched by high-profile authors and copyright owners against OpenAI and Microsoft, claiming direct, vicarious, and derivative infringement. Authors such as George R.R. Martin, Sarah Silverman, and Christopher Golden, and professional organizations such as the Authors Guild, contended their works were infringed upon to turn OpenAI into an $80 billion company.

In considering the merits of these lawsuits, we discuss the curation and content of training datasets used in the known iterations of ChatGPT, and characterize the protectability of the different works the datasets included. We then explore whether the transitory nature of OpenAI’s training process means it uses only acceptable, non-infringing copies, and how that undermines claims of direct infringement.

The article then looks at the applicability of current fair use precedent to textual GenAI and the various types of works used in training datasets. To do so, we apply settled caselaw and leading decisions to discuss OpenAI’s use of copyrighted works regarding purpose and character, nature of the original work, the amount and substantiality of the works used, and the impact on the market value of the works by ChatGPT. We pay special attention to other innovative technologies that rely on a fair use defense to draw analogies and comparisons to GenAI.

Finally, this article considers the policy and legislation of other countries and their approach to ChatGPT and copyright. In doing so, it takes policy considerations into account to argue that a finding of fair use is necessary to maintain international competitiveness and to prevent an erosion of fair use in sectors outside of GenAI. The article concludes that there is substantial support for arguments that GenAI training involves only transitory, non-actionable copying, and is also permissible under fair use.

Mazumder on Human-AI Collaboration with ChatGPT: A Systematic Review of Implications for Finance, Law, and Healthcare

Pristly Turjo Mazumder (Georgia State U) has posted “Human-AI Collaboration with ChatGPT: A Systematic Review of Implications for Finance, Law, and Healthcare” on SSRN. Here is the abstract:

ChatGPT is rapidly shaping high-stakes sectors including education, healthcare, finance, law, and business. This paper combines a systematic review with practical research to examine ChatGPT and large language models (LLMs) in high-stakes sectors. Evidence shows ChatGPT enhances adaptive learning, academic writing, and clinical decision support, while our finance case study highlights its potential for anti-money laundering (AML) compliance and regulatory reporting. At the same time, challenges such as hallucinations, bias, privacy risks, and plagiarism persist, raising concerns over reliability and accountability. Ethical and regulatory gaps, spanning data protection, intellectual property, and transparency, further complicate adoption. To address these issues, we propose a human-AI collaboration framework built on domain-specific fine-tuning, expert oversight, and policy safeguards. Our findings underscore that ChatGPT holds significant promise for advancing innovation and national interest in critical industries, but responsible integration requires clear guidelines, rigorous validation, and continuous governance.