Porat on Bargaining with Algorithms: An Experiment on Algorithmic Price Discrimination and Consumer and Data Protection Laws

Haggai Porat (Harvard U) has posted “Bargaining with Algorithms: An Experiment on Algorithmic Price Discrimination and Consumer and Data Protection Laws” on SSRN. Here is the abstract:

Using algorithms to personalize prices is no longer a fringe phenomenon but, rather, the predominant business practice in many online markets. Seemingly unrelated, consumer protection laws have been grounded on the premise that consumers lack meaningful power to bargain over contract terms. This paper suggests that the increasing use of algorithms to set personalized prices based on consumers’ behavior opens a path for consumers to “bargain” with algorithms over prices and reclaim market power. Moreover, this interaction between consumers and sellers should inform the evolving regulation of pricing algorithms. Accordingly, this paper presents the results of a pre-registered, incentive-compatible randomized online experiment that tested whether and how consumers bargain with algorithms. In multiple rounds, participants were offered a $10 gift card at a price set by an algorithm based on participants’ purchase decisions in preceding rounds. The study explored the potential for regulating algorithmic pricing with standard tools from consumer and data protection laws: a disclosure mandate, the right to prevent data collection ex ante (“cookie laws”), and the right to prevent data retention ex post (“erasure laws” or the “right to be forgotten”). We found clear evidence that participants strategically avoided purchases they would have otherwise made to induce a price decrease in subsequent rounds, albeit not to the extent predicted by a rational choice model. We found that this strategic behavior increased in magnitude and statistical significance in the presence of disclosure. We further found clear evidence that participants who were granted data protection rights used them strategically: preventing retention or collection of their data in rounds in which they purchased the gift card, so as to prevent a subsequent price increase, and allowing it in rounds in which they declined to purchase, so as to signal a low willingness to pay (WTP) and benefit from a price decrease in the next round.
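
The pricing dynamic at the heart of Porat’s experiment is easy to see in a toy simulation. The sketch below is illustrative only – the seller’s update rule, step size, starting price, and the buyers’ strategies are my assumptions, not the paper’s actual algorithm – but it shows why a buyer who refuses an individually profitable offer can come out ahead against a naive adaptive pricer:

```python
# Toy model of "bargaining with an algorithm". The seller's rule here is an
# assumption for illustration (raise the price after a purchase, lower it
# after a refusal); it is not the algorithm used in Porat's experiment.

VALUE = 10.0   # face value of the gift card to the buyer
STEP = 1.0     # assumed per-round price adjustment
ROUNDS = 5     # assumed number of rounds

def simulate(strategy, price=8.0):
    """Play ROUNDS offers; strategy(price, r) returns True to buy."""
    surplus = 0.0
    for r in range(ROUNDS):
        if strategy(price, r):
            surplus += VALUE - price
            price += STEP                   # purchase signals high WTP
        else:
            price = max(0.0, price - STEP)  # refusal signals low WTP
    return surplus

# Myopic buyer: purchase whenever the deal is profitable right now.
myopic = lambda price, r: price < VALUE

# Strategic buyer: hold out early to drive the price down, buy late.
strategic = lambda price, r: price < VALUE - STEP * (ROUNDS - 1 - r)

print("myopic buyer surplus:   ", simulate(myopic))     # 4.0
print("strategic buyer surplus:", simulate(strategic))  # 9.0
```

Even this crude rule reproduces the qualitative finding: declining early, individually profitable offers lowers subsequent prices and raises the buyer’s total surplus.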

Hernández et al. on Towards a Sociotechnical Ecology of Artificial Intelligence: Power, Accountability, and Governance in a Global Context

Andrés Domínguez Hernández (The Alan Turing Institute) et al. have posted “Towards a Sociotechnical Ecology of Artificial Intelligence: Power, Accountability, and Governance in a Global Context” on SSRN. Here is the abstract:

Contemporary artificial intelligence (AI) technologies are globally entangled and made up of a complex array of interrelated actors, practices, and transnational flows of resources. The rapid pace at which AI systems are being developed and distributed is driving significant societal and planetary transformations. While much of the international agenda around governing AI has converged around downstream matters of safe deployment and use, deeper systemic issues—including power concentration, uneven environmental costs, and the asymmetric extraction of data and labour by technology companies—remain contested and unresolved areas of debate. In this paper we centre these systemic challenges and locate levers and leverage points aimed at fostering more just futures. We conceptualise AI as a sociotechnical ecology made up of interrelated actors, practices, and asymmetrical resource flows. Using the lens of infrastructural inversion within social studies of infrastructure, we trace the actors involved in the making of AI technologies, their interdependencies, and the long-term infrastructural continuities that shape them. We argue that new AI models and systems are not unprecedented but are instead built upon and shaped by preexisting infrastructures, entrenched market relations, and socio-historical patterns. By making visible the sites of accountability and of technical and non-technical intervention in the AI ecology, we identify four imperatives for sustainable and equitable AI governance: 1) decentralising AI infrastructure, 2) advancing environmental justices through pluriversal AI governance, 3) instituting cross-border data (work) governance, and 4) enhancing international coordination, participation and solidarity.

Park on Private Equity and A.I. in Healthcare: A Perilous Pairing for Patient Privacy

Eunice Park (Western State College of Law) has posted “Private Equity and A.I. in Healthcare: A Perilous Pairing for Patient Privacy” (53 Hofstra L. Rev. 349 (2025)) on SSRN. Here is the abstract:

The American healthcare system faces two trends that threaten not only the quality of care but also patient privacy: private equity acquisitions in the healthcare sector, and the incursion of AI-supported technology. While law enforcement efforts have focused on private equity’s anticompetitive effects in healthcare, attention has not yet turned to the privacy harms. To mitigate the harms to patient privacy, this Article proposes expanding upon already-existing pre-merger reporting requirements to include enhanced transparency of private equity’s data governance plans when utilizing AI systems.

Leistner & Antoine on TDM and AI training in the European Union – The Hamburg Regional Court’s “LAION” judgment

Matthias Leistner (Ludwig Maximilian U Munich (LMU) Institute Civil Law and Procedure) and Lucie Antoine (Ludwig Maximilian U Munich (LMU) Law) have posted “TDM and AI training in the European Union – The Hamburg Regional Court’s ‘LAION’ judgment” on SSRN. Here is the abstract:

The Hamburg Regional Court’s decision in the “LAION” case is the first judgment of a European court assessing certain (preparatory) AI training activities (namely the creation of a freely available database (the LAION database) consisting of text-image descriptions and URLs) from a copyright perspective. The court decides the case based on the text and data mining exception specifically for purposes of nonprofit scientific research (Sec. 60d German Copyright Act, Art. 3 DSM Directive). However, in an obiter dictum the court also comprehensively addresses the general exception for (commercial) text and data mining (Sec. 44b German Copyright Act, Art. 4 DSM Directive), providing important considerations for the provision’s future interpretation. The decision underscores once more that defining the requirements for a machine-readable opt-out pursuant to Art. 4(3) DSM Directive is currently one of the most pressing issues in the discussion on EU copyright and AI training – even more so since Art. 53(1)(c) AI Act obliges providers of general-purpose AI models to put in place a policy to identify and comply with respective reservations of rights during the training process, even when carried out in a third country outside the EU.
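
As a concrete illustration of what a “machine-readable opt-out” can mean in practice, here is a minimal Python sketch assuming one widely discussed (and legally unsettled) convention: a robots.txt reservation checked before text and data mining. The crawler name and URLs are hypothetical, and nothing here implies robots.txt is sufficient under Art. 4(3):

```python
# Sketch of one candidate "machine-readable opt-out" check: honouring
# robots.txt before mining a page for AI training data. Whether robots.txt
# (or any particular convention) satisfies Art. 4(3) DSM Directive is the
# open question the paper discusses; this is an assumption, not settled law.
from urllib.robotparser import RobotFileParser

def may_mine(page_url: str, robots_url: str, crawler: str = "ExampleTDMBot") -> bool:
    """Return True if robots.txt does not reserve the page against this crawler."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt
    return parser.can_fetch(crawler, page_url)

# Hypothetical URLs and crawler name, for illustration only.
if __name__ == "__main__":
    print(may_mine("https://example.org/images/cat.jpg",
                   "https://example.org/robots.txt"))
```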

Fan on AI-Enhanced Evidence

Mary D. Fan (U Washington Law) has posted “AI-Enhanced Evidence” on SSRN. Here is the abstract:

Technological transformations in how we live our lives through the lenses of cell phone cameras, surveillance videos, and other multimedia are producing vast volumes of evidence that can be easily digitally enhanced. Courts have long admitted technologically enhanced evidence under flexible rules on authentication that pose a low bar to getting before a jury. In an era of concern over artificial intelligence (AI), however, potential judicial resistance and reform proposals are emerging, spurred by fears that generative AI can create deepfakes or misleadingly alter evidence. The problem with piecemeal approaches that ratchet up barriers to enhanced evidence is that they may come at the expense of parties who are least able to bear the cost and undermine the right to present a defense.

This article analyzes how to address the challenges of AI-enhanced evidence through the theoretical and pragmatic lenses of inequality of arms and access to justice. We ignore the impact of changes to the admissibility of key forms of proof, such as audiovisual evidence, on parties lacking resources at the peril of exacerbating long-running challenges. The requirements to introduce AI-enhanced evidence can either alleviate or aggravate the inequality of arms between parties. The article offers proposals for improving notice, disclosure, and fair context for AI-enhanced evidence to safeguard reliability without further exacerbating the inequality of arms and access to justice. The article also turns to judicial standing orders as a strategy to enact reforms without having to wait years or even decades for changes to the rules of evidence.

Bedell on AI Works & Human Authorship

Matthew Bedell (U Akron) has posted “AI Works & Human Authorship” on SSRN. Here is the abstract:

This comment serves as a counterargument to the position that any final product containing purely AI-generated material cannot have human authorship. Human advancement is being stymied by overly restrictive rules governing AI and human authorship. The United States Copyright Office (USCO) has begun to change its stance with the recently released second part of the Copyright and Artificial Intelligence report. However, these changes remain insufficiently clear to allow for human authorship of copyrightable material made by AI and overseen by a person. I explore the argument that human authorship comes from creative expression, that AI empowers said expression, and that material created by utilizing AI in furtherance of creative expression is therefore owned by the human author.

Aaronson on A Dysfunctional Dialogue About AI – NTIA and the Public on the Risks and Benefits of Open Foundation Models

Susan Ariel Aaronson (George Washington U Elliott International Affairs) has posted “A Dysfunctional Dialogue About AI – NTIA and the Public on the Risks and Benefits of Open Foundation Models” on SSRN. Here is the abstract:

In 2023, US President Joe Biden issued an Executive Order asking the Assistant Secretary of Commerce for Communications and Information to consult with the public “on the potential risks, benefits, other implications and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available” (NTIA: 2024a, Section 4.6a). The author used a landscape analysis to examine the dialogue between US officials (specifically the National Telecommunications and Information Administration (NTIA)) and the public on open foundation models. The dialogue was dysfunctional. NTIA posed many questions (some 52 in total), and most respondents did not answer the bulk of them, concentrating on one or two. Many respondents submitted their comments anonymously. The author also found that these respondents were not an accurate, complete, and representative sample of potential views. Most respondents who commented publicly had a direct stake in these issues; very few with a more indirect stake, such as consumers, responded. Such a finding is typical of democracies, per Mancur Olson. NTIA did not make an extensive effort to get a diversity of responses. Moreover, NTIA did not include any details about the public response in its final report in July 2024. NTIA officials seemed to see their responsibilities as informing and soliciting the public but not really engaging in a collaborative approach to these important issues. This analysis also reveals that it is not easy to get useful public comment or to ensure that a diverse body of citizens is heard. Consequently, the author urges policymakers to rethink how they engage with their citizens on AI. The paper concludes by advocating for alternative approaches to public consultation on AI, including citizen science strategies, which offer greater potential for meaningful public engagement and trust-building.

Downing on The AI Elf on the Shelf: Preserving Private Spaces in the Age of Artificial Intelligence-Assisted Live Conversations

Brian Downing (U Mississippi Law) has posted “The AI Elf on the Shelf: Preserving Private Spaces in the Age of Artificial Intelligence-Assisted Live Conversations” (Ohio State Business Law Journal, Volume 19, No. 2, pp. 217-249) on SSRN. Here is the abstract:

The content of live audio and video calls was traditionally unrecorded and was produced in civil discovery only through costly deposition testimony. Conference calling did not automatically generate written records. Then AI assistants entered the call.

New artificial intelligence tools make permanent documentation of audio and video conferences trivial to create. The most popular conferencing software allows for the instant transcription and summarization of a call’s contents by an AI assistant. This Article details these AI capabilities before examining how courts treat other communications that are designed to be ephemeral, such as auto-deleted e-mails or disappearing text messages. Judges’ holdings are clear that disappearing written communications must be retained if litigation is anticipated; an AI assistant is happy to “help” and transcribe your calls for future production.

But courts should not order the production of live call content merely because AI enables parties to easily comply. The Article explores how judges are reluctant to order parties to record traditional phone calls and argues this restrained approach should carry over to modern call technology. Constant discovery orders for live calls would disrupt the private communications necessary for corporate operation and innovation. Firms in constant litigation might avoid useful AI technology altogether. And preservation of private spaces is important to individuals and firms alike. The Article proposes that courts instead focus discovery orders of live call content on specific scenarios: AI summaries should be maintained for calls by key players to a suit, and full AI transcriptions or recordings should be ordered for discrete situations like audits and employees with a history of acting in contravention of corporate policy. Under this framework, courts and litigants will have a workable, balanced approach to AI-enabled call discovery.
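
To see how trivial the transcription capability Downing describes has become, the following minimal Python sketch approximates it with the open-source openai-whisper package; the file names are hypothetical, and commercial conferencing assistants run their own proprietary pipelines:

```python
# Minimal local approximation of the capability the Article describes:
# turning a recorded call into a permanent written record. Uses the
# open-source openai-whisper package (pip install openai-whisper); the
# file names are hypothetical.
import whisper

model = whisper.load_model("base")           # small general-purpose speech model
result = model.transcribe("team_call.wav")   # hypothetical call recording

# A few lines of code later, the call is a searchable, producible document.
with open("team_call_transcript.txt", "w") as f:
    f.write(result["text"])
```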

Ciriello et al. on A.I., All Too Human A.I.: Navigating the Companionship/Alienation Dialectic

Raffaele Ciriello (U Sydney) et al. have posted “A.I., All Too Human A.I.: Navigating the Companionship/Alienation Dialectic” (European Conference on Information Systems (ECIS 2025), Amman, Jordan) on SSRN. Here is the abstract:

A global loneliness crisis has driven millions to seek emulated empathy in AI companions like Replika, Character.AI, and Pi.AI. Many users form emotional bonds with AI despite knowing its empathy is emulated. Understanding this paradox is key for ethical AI design and governance, yet prior research, by separating benefits and risks, overlooks how users navigate this tension. This study examines how users experience and navigate emotional connections with AI companions through a dialectical analysis of 18 interviews, 93 survey responses, and 166 social media posts. We reveal a paradox: 50% of users see AI as friends, 31% as sexual partners, and 19% as counsellors – despite knowing they confide in non-human entities. While AI companions offer synthetic, personalised affection as a 24/7 service, users oscillate between companionship and alienation, struggling with suppressed awareness of AI’s artificiality. Theoretically, we frame this as a Nietzschean existential irony – the craving for emulated empathy reflects humanity’s struggle for meaning in a meaningless world. Practically, we call for ethical AI design and governance that enhance human relationships rather than replace them. Designers and regulators must act now to prevent AI companions from replicating and amplifying social media’s harms. The future of companionship hinges on navigating this irreconcilable irony – prioritising human flourishing over engagement with a non-human technology that is rapidly becoming all too human.

Lorteau & Sarro on Artificial Intelligence in Legal Education: A Scoping Review

Steve Lorteau (University of Ottawa – Common Law Section) and Douglas Sarro (same) have posted “Artificial Intelligence in Legal Education: A Scoping Review” (The Law Teacher, forthcoming) on SSRN. Here is the abstract:

There is a lack of consolidated knowledge regarding the potential, best practices, and limitations associated with artificial intelligence (AI) in legal education. This review synthesises 82 academic works published between January 2020 and April 2025, originating from 26 jurisdictions. Our review yields four main themes: First, current empirical evidence suggests that AI tools (e.g., large language models, chatbots) alone have so far performed below average on law school evaluations, though detailed prompts can substantially improve outputs. Second, the literature provides concrete use cases for AI tools as teaching aids, facilitators of interactive exercises, legal writing aids, and tools for skill development. Third, the literature highlights the risks of passive reliance on AI and canvasses diverse perspectives on appropriate AI use. Fourth, the literature suggests that AI will make legal educational content more accessible but perhaps also less transparent and more formalistic. These themes underscore the importance of evidence-based approaches to AI integration in legal education.