Waller et al. on Explaining the Algorithm Does Not Explain the Decision: Unpacking Accountabilities in Organisational Decision Making

Paul Waller (U Bradford) et al. have posted “Explaining the Algorithm Does Not Explain the Decision: Unpacking Accountabilities in Organisational Decision Making” on SSRN. Here is the abstract:

An organisational decision-making process has many component parts (with or without the involvement of a computer-based algorithm). Technical discussions, such as those on the transparency and explainability of algorithm-supported decisions, often omit many of these parts and thus address issues of accountability, liability and explanations for decisions in too narrow a sense. Many choices are made in the construction and operation of an organisational decision-making process, particularly if an algorithm-based model is used as part of it. This may lead to a chain of accountabilities of persons who may be required to explain or justify choices made at any point.

This paper unpacks the general architecture of organisational decision making and examines the location and role of one or more algorithmic components that may feature within it. It identifies the design choices involved in constructing a decision-making process and the corresponding responsibilities and accountabilities. Within those accountabilities, it explores the differences between functional reasons, explanations and justifications, together with the actors who may be responsible for providing them. A case study of a public sector algorithmic decision-making system illustrates how the architecture helps unpack the key issues to interrogate. Crucially, the architecture makes a clear distinction between the generation of a prediction by an algorithmic process and the execution of an organisation’s decision-making policy. In “automated decision-making”, it is the execution of an organisation’s decision-making policy that is automated. A computerised algorithm may or may not provide input to it.

Kaminski on Voices In, Voices Out: Impacted Stakeholders and the Governance of AI

Margot E. Kaminski (U Colorado Law) has posted “Voices In, Voices Out: Impacted Stakeholders and the Governance of AI” (71 UCLA Law Review Discourse 176 (2024)) on SSRN. Here is the abstract:

This Essay addresses reasons for impacted stakeholder involvement in AI governance, ranging from democratic accountability norms to principles of regulatory design. It evaluates several recent examples of both soft and hard law, noting a range of examples of impacted stakeholder participation. It closes with a critique: none of these laws adequately contemplates how to craft transparency and provide expertise so as to meaningfully empower impacted stakeholders.

Kaal on The Future of Law – Dynamic Web3 Governance

Wulf A. Kaal (U St. Thomas Law (Minnesota)) has posted “The Future of Law – Dynamic Web3 Governance” on SSRN. Here is the abstract:

This article proposes a novel web3 governance model using Weighted Directed Acyclic Graphs (WDAGs) and validation pools with reputation staking, combined with a federated communications protocol, to address the negative externalities of continuous legal growth. The traditional methods of legal garbage removal, such as sunset provisions and periodic legal reviews, are hindered by inefficiencies, political manipulation, and resource demands. In contrast, the WDAG system enables a dynamic, self-enforcing, and community-driven approach to legal remediation, ensuring the continuous relevance, efficiency, and adaptability of legal frameworks in a decentralized environment. This model utilizes real-time data analysis and community input to organically adjust legal norms, minimizing political resistance and unintended consequences while preserving legal history and promoting transparency.

The integration of the WDAG framework into web3 governance aligns with the needs of a rapidly evolving technological and social landscape. By facilitating a more responsive and equitable legal system, the WDAG model supports economic growth, fosters democratic engagement, and ensures that legal rules and regulations remain aligned with contemporary societal values. This approach represents a significant advancement in legal governance, demonstrating how emerging technologies can create sustainable and adaptive legal frameworks that better reflect community consensus and adapt to technological and societal changes.
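The abstract describes the WDAG mechanism only at a high level. As a purely hypothetical illustration, not drawn from the article, the Python sketch below shows one way a weighted DAG of legal norms with reputation-staked validation votes could be represented; all names (Norm, ValidationPool, vote, outcome) and parameters are invented for this sketch.

from dataclasses import dataclass, field

@dataclass
class Norm:
    # A legal rule; depends_on maps prerequisite norm ids to edge weights.
    norm_id: str
    text: str
    depends_on: dict = field(default_factory=dict)

@dataclass
class ValidationPool:
    # Collects reputation-weighted votes on whether a norm stays in force.
    stakes: dict = field(default_factory=dict)  # voter -> (staked reputation, keep?)

    def vote(self, voter, reputation, keep):
        self.stakes[voter] = (reputation, keep)

    def outcome(self):
        # Reputation-weighted majority decides retention.
        keep = sum(r for r, k in self.stakes.values() if k)
        drop = sum(r for r, k in self.stakes.values() if not k)
        return keep >= drop

# Usage: a norm flagged for community review loses its validation vote.
filing = Norm("n2", "Paper filing requirement", depends_on={"n1": 0.3})
pool = ValidationPool()
pool.vote("alice", 12.0, keep=False)
pool.vote("bob", 5.0, keep=True)
print(filing.norm_id, "retained:", pool.outcome())  # n2 retained: False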

Hiltbrand on Guarding The News Media’s Intellectual Property in the Age of Generative AI

Olivia S. Hiltbrand (Ohio State U) has posted “Guarding The News Media’s Intellectual Property in the Age of Generative AI” (28 Stanford Technology Law Review –– (forthcoming 2025)) on SSRN. Here is the abstract:

Technology has posed threats to various creative industries in the past, and as generative artificial intelligence (AI) becomes more widespread, these issues have resurfaced. Not only are there concerns that generative AI might diminish creators’ work opportunities, but there are also intellectual property concerns about the data companies use to train generative AI products. For journalists, whose work is copyrighted but often publicly available, this is a problem, especially because facts underlying news reports are not copyrightable. Given the press’s unique societal role, any distortion of news through generative AI could have implications far beyond copyright infringement.

Giladi Shtub & Gal on Data Without Borders: International Effects of Data Flow Regulation

Tamar Giladi Shtub (U Haifa Law) and Michal Gal (U Haifa Law) have posted “Data Without Borders: International Effects of Data Flow Regulation” (Forthcoming, Vanderbilt Journal of Transnational Law (2025)) on SSRN. Here is the abstract:

Data has no inherent jurisdictional boundaries, and cross-border transfers of data and data-based information can significantly affect national and global welfare. Accordingly, local data flow regulation in one jurisdiction may create intended or unforeseen externalities in other jurisdictions. This article examines the complex challenges and implications of national regulation of data flows in an increasingly interconnected world. Given the pivotal role of data in our economies and societies, it is essential that governments recognize such externalities and take measures to ensure that an efficient balance is reached between the relevant considerations, including economic growth, privacy, and national security.

To illustrate such cross-border effects, we analyze two contrasting case studies: China’s data localization requirements and the European Union’s Data Act of 2023, which facilitates data sharing. Through these examples, we demonstrate how local regulation can create externalities that ripple across the global digital landscape. The analysis highlights the inadequacy of current international frameworks in addressing the complexities of data flows.

Our findings underscore the urgent need for increased international cooperation on data governance frameworks, as unilateral actions risk fragmenting the global digital landscape and limiting the welfare-enhancing potential of data synergies. We contend that countries, particularly the United States, are missing crucial opportunities by delaying engagement in shaping international data flow policies. By highlighting the complex interplay between local data flow policies and global effects, the article provides a foundation for governments to take a more proactive role in shaping welfare-enhancing frameworks for international data flows.

Whitaker on Who Owns AI?

Amy Whitaker (New York U) has posted “Who Owns AI?” on SSRN. Here is the abstract:

While artificial intelligence (AI) stands to transform artistic practice and creative industries, little has been theorized about who owns AI for creative work. Lawsuits brought against AI companies such as OpenAI and Meta under copyright law invite novel reconsideration of the value of creative work. This paper synthesizes across copyright, hybrid practice, and cooperative governance to work toward collective ownership and decision-making. This paper adds to research in arts entrepreneurship because copyright and shared value are so vital to the livelihood of working artists, including writers, filmmakers, and others in the creative industries. Sarah Silverman’s lawsuit against OpenAI is used as the main case study. The conceptual framework of material and machine, one and many, offers a lens onto value creation and shared ownership of AI. The framework includes a reinterpretation of the fourth factor of fair use under U.S. copyright law to refocus on the doctrinal language of value. AI uses the entirety of creative work in a way that is overlooked because of the small scale of one whole work relative to the overall size of the AI model. Yet a theory of value for creative work gives it dignity in its smallness, the way that one vote still has dignity in a national election of millions. As we navigate these frontiers of AI, experimental models pioneered by artists may be instructive far outside the arts.

Seng on Electronic Evidence

Daniel Kiat Boon Seng (Director) has posted “Electronic Evidence” on SSRN. Here is the abstract:

In any modern society, information of probative value is increasingly being produced, processed, transmitted and stored in digital devices and storage facilities. When admitted in a court of law, such information takes the form of electronic evidence. To address the peculiar issues associated with the admissibility of electronic evidence in Singapore, the rules of hearsay and authentication require that electronic evidence receive special treatment. In Singapore, the rules of evidence enacted in 2012 that provide for the authentication of electronic evidence seek to strike a balance between facilitating the admission of uncontested electronic evidence and providing for the authentication of contested electronic evidence. In addition, existing rules of evidence in Singapore are well placed to deal with the admission of electronic and digital signatures in evidence as well as the proof of copies of digital evidence. However, it remains to be seen what new legal measures are required to address the issues of digitally altered or generated electronic evidence.

Logan on Deepfakes in Interrogations

Wayne A. Logan (Florida State U College Law) has posted “Deepfakes in Interrogations” on SSRN. Here is the abstract:

In recent years, academics, policymakers, and others have expressed concern over police use of artificial intelligence, in areas such as predictive policing and facial recognition. One area not receiving attention is the interrogation of suspects. This article addresses that gap, focusing on the inevitable coming use by police of AI-generated deepfakes to secure confessions, such as by creating and presenting to suspects a highly realistic still photo or video falsely indicating their presence at a crime scene, or an equally convincing audio recording of an associate or witness implicating them in a crime.

Police authority to lie in interrogations dates back to Frazier v. Cupp (1969), where the Supreme Court condoned a police lie to a suspect that an associate had implicated him in a crime, holding that the deceit did not render the resulting confession involuntary for due process purposes, while positing that an innocent individual would not falsely confess. Building upon the now-recognized reality that innocents do indeed confess, and research demonstrating the coercive impact of police use of the “false evidence ploy” (FEP) in securing confessions, scholars have urged a general ban on its use. Courts, while often expressing dismay over police resort to FEPs, typically conclude that they do not violate due process, but at times have held otherwise, expressing particular concern over police presentation of fabricated physical evidence to suspects (versus orally relating its existence, as in Frazier).

While sympathetic to a ban on police deceit in interrogations more generally, this Article singles out deepfakes for specific concern, based on their unprecedented verisimilitude, the demonstrated inability of the public to identify their falsity, and the common belief that police are not permitted to lie about evidence, much less fabricate it. Ultimately, the article makes the case for reconsideration of Frazier, based on research findings of the past fifty years, as well as the many major changes to the criminal legal system since 1969, especially the significantly increased pressure felt by defendants to plead guilty (very often on the basis of confessions, rendering them more susceptible to FEPs).

Beyond doctrine, a ban will have important functional benefits. These include providing ex ante guidance to police and judges alike, who lack clarity on the parameters of the notoriously indeterminate due process voluntariness standard. More broadly, a ban will serve as an important bulwark against the deleterious wave of disinformation now sowing distrust in governmental actors and institutions. If deepfakes are condoned in interrogations, it is not hard to imagine that judges, jurors, witnesses, and the public will be skeptical of police, as well as of the reliability of evidence in criminal cases, undermining a cornerstone of the nation’s constitutional democracy.

Atkinson on Unfair Learning: GenAI Exceptionalism and Copyright Law

David Atkinson (U Texas Austin) has posted “Unfair Learning: GenAI Exceptionalism and Copyright Law” on SSRN. Here is the abstract:

This paper challenges the argument that generative artificial intelligence (GenAI) is entitled to broad immunity due to a fair use defense under copyright law for reproducing copyrighted works without authorization. It examines fair use legal arguments and eight distinct substantive arguments, contending that every legal and substantive argument favoring fair use for GenAI applies equally, if not more so, to humans. Therefore, granting GenAI exceptional privileges in this domain is legally and logically inconsistent with withholding broad fair use exemptions from individual humans. The solution is to take a circumspect view of any fair use claim for mass copyright reproduction by any entity and focus on the first principles of whether permitting such exceptionalism for GenAI promotes science and the arts.

Almada on Two Dogmas of Technology-neutral Regulation

Marco Almada (European U Institute) has posted “Two Dogmas of Technology-neutral Regulation” on SSRN. Here is the abstract:

Technology neutrality is a popular concept in regulation. Laws targeted at new technologies, such as the European Union’s new Artificial Intelligence Act, are often designed as technology-neutral regulations, and scholars and stakeholders often praise (or criticize) instruments in light of their (lack of) neutrality towards different technical arrangements. Yet those assessments rely on various, and potentially conflicting, understandings of what technology neutrality is and what it entails for regulation. This article argues that the ambiguity surrounding “technology neutrality” and related concepts follows, in no small part, from two unquestioned assumptions permeating debates on the topic: that technology neutrality is a simple concept and that it is always an effective form of regulation. Based on a narrative review of scholarly literatures, legal instruments, and policy documents, this article unpacks those two dogmas, making the case that technology neutrality involves complex institutional choices that might or might not be adequate in certain contexts. As a result, technology neutrality should not be taken as a default assumption for regulation; instead, its suitability should be examined on a case-by-case basis.