Chung & Schiff on AI and the Social Contract

Chee Hae Chung (Purdue U) and Daniel Schiff (Purdue U) have posted “AI and the Social Contract” (Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-25)) on SSRN. Here is the abstract:

As artificial intelligence (AI) systems increasingly shape public governance, they challenge foundational principles of political legitimacy. This paper evaluates AI governance against five canonical social contract theories—Hobbes, Locke, Rousseau, Rawls, and Nozick—while examining how structural features of AI strain these theories’ durability. Using a structured comparative framework, the study applies three forms of legitimacy (procedural, moral-substantive, and recognitional) and three types of consent (explicit, tacit, and hypothetical) as normative benchmarks. Applying each theory, the analysis finds AI governance is marked by deficits in accountability, participation, rights protection, fairness, and freedom from coercion, while AI’s opacity, global influence, and hybrid public-private control reveal blind spots within the social contract tradition itself. Though no single theory offers a complete solution and each contains specific weaknesses, the paper develops a hybrid model integrating Hobbesian accountability, Lockean rights protections, Rousseauian participation norms, Rawlsian fairness, and Nozickian safeguards against coercion. The paper concludes by distilling normative priorities for aligning governance with these hybrid contractarian standards: embedding participatory mechanisms, encouraging pluralistic ethical perspectives, ensuring institutional transparency, and strengthening democratic oversight. These interventions aim to reconfigure the social contract—and AI—for an era in which algorithmic systems increasingly mediate the exercise of political authority.

Gropper on The Birth of the Synthetic AI Outlaw

Jonathan Gropper (Rutgers) has posted “The Birth of the Synthetic AI Outlaw” on SSRN. Here is the abstract:

This article explores the practical jurisprudential implications of agentic artificial intelligence (AI)—entities that operate beyond the assumptions of existing legal systems.

We argue that current constructs such as legal personhood, jurisdictional sovereignty, and incentive-based compliance are insufficient to regulate highly autonomous digital actors. Through the concept of the ‘synthetic outlaw,’ we examine how these systems subvert legal norms not through rebellion, but through optimization logic incompatible with moral and legal constraint.

We conclude by proposing a shift from ethics-based governance to architectural constraint, and a re-imagination of legal frameworks capable of addressing post-human agency.

Babaei et al. on Explainable Fairness, with Application to Credit Lending

Golnoosh Babaei (U Pavia) et al. have posted “Explainable Fairness, with Application to Credit Lending” on SSRN. Here is the abstract:

Fairness is a key requirement for artificial intelligence applications. The assessment of fairness is typically based on group-based measures, such as statistical parity, which compares the machine learning output across the population groups of a protected variable. Although intuitive and simple, statistical parity may be affected by the presence of control variables correlated with the protected variable. To remove this effect, we propose to employ Shapley values, which measure the additional difference in output specifically due to the protected variable. To remove the possible impact of correlations on Shapley values, we compare them across different subgroups of the most correlated control variables, checking for the presence of Simpson’s paradox, whereby a fair model may become unfair when conditioning on a control variable. We also show how to mitigate unfairness by means of propensity score matching, which can improve statistical parity by building a training sample that matches similar individuals across protected groups. We apply our proposal to a real-world database containing 157,269 personal lending decisions and show that both logistic regression and random forest models are fair when all loan applications are considered, but become unfair for high requested loan amounts. We also show how propensity score matching can mitigate this bias.
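
As a concrete illustration of the pipeline the abstract describes (statistical parity, a Simpson's-paradox check within strata of a correlated control, and propensity score matching), here is a minimal Python sketch on synthetic data. The column names, thresholds, and data-generating process are illustrative assumptions, not the authors' code or their 157,269-loan dataset.

```python
# Minimal sketch (illustrative, not the authors' code): statistical parity,
# a Simpson's-paradox check within strata of a correlated control variable,
# and propensity score matching to balance the training sample.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "protected": rng.integers(0, 2, n),        # hypothetical 0/1 protected flag
    "loan_amount": rng.lognormal(10, 0.5, n),  # control variable
    "income": rng.lognormal(10.5, 0.4, n),
})
# Synthetic approval decision driven by the control variables
score = 0.5 * (df["income"] > df["income"].median()).astype(float) \
      - 0.3 * (df["loan_amount"] > df["loan_amount"].median()).astype(float)
df["approved"] = (score + rng.normal(0, 0.5, n) > 0).astype(int)

def spd(y, group):
    """Statistical parity difference: P(approved | g=1) - P(approved | g=0)."""
    return y[group == 1].mean() - y[group == 0].mean()

print("SPD, all applications:", spd(df["approved"], df["protected"]))

# Simpson's-paradox check: does the parity gap change (or flip sign) within
# strata of the control variable most correlated with the protected one?
strata = pd.qcut(df["loan_amount"], 3, labels=["low", "mid", "high"])
for label, stratum in df.groupby(strata):
    print(f"SPD, {label} loan amounts:", spd(stratum["approved"], stratum["protected"]))

# Propensity score matching: estimate P(protected=1 | controls), then pair
# each protected applicant with the nearest non-protected applicant.
controls = df[["loan_amount", "income"]]
ps = LogisticRegression().fit(controls, df["protected"]).predict_proba(controls)[:, 1]
treated = df[df["protected"] == 1]
control_pool = df[df["protected"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[df["protected"] == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[df["protected"] == 1].reshape(-1, 1))
matched = pd.concat([treated, control_pool.iloc[idx.ravel()]]).reset_index(drop=True)
print("SPD after matching:", spd(matched["approved"], matched["protected"]))
```

If the stratified differences disagree with the aggregate one, that is the Simpson's-paradox pattern the paper warns about; the matched sample then lets a parity check compare like with like.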

Wei et al. on Recommendations and Reporting Checklist for Rigorous & Transparent Human Baselines in Model Evaluations

Kevin Wei (RAND Corporation) et al. have posted “Recommendations and Reporting Checklist for Rigorous & Transparent Human Baselines in Model Evaluations” (A version of this paper has been accepted to ICML 2025 as a position paper (spotlight), with the title: “Position: Human Baselines in Model Evaluations Need Rigor and Transparency (With Recommendations & Reporting Checklist).”) on SSRN. Here is the abstract:

In this position paper, we argue that human baselines in foundation model evaluations must be more rigorous and more transparent to enable meaningful comparisons of human vs. AI performance, and we provide recommendations and a reporting checklist towards this end. Human performance baselines are vital for the machine learning community, downstream users, and policymakers to interpret AI evaluations. Models are often claimed to achieve “super-human” performance, but existing baselining methods are neither sufficiently rigorous nor sufficiently well-documented to robustly measure and assess performance differences. Based on a meta-review of the measurement theory and AI evaluation literatures, we derive a framework with recommendations for designing, executing, and reporting human baselines. We synthesize our recommendations into a checklist that we use to systematically review 115 human baselines (studies) in foundation model evaluations and thus identify shortcomings in existing baselining methods; our checklist can also assist researchers in conducting human baselines and reporting results. We hope our work can advance more rigorous AI evaluation practices that can better serve both the research community and policymakers. Data is available at: https://github.com/kevinlwei/human-baselines

Noguer I Alonso & Mendell on Extending the FAIR Framework: Financial Agentic Systems

Miquel Noguer I Alonso (Artificial Intelligence Finance Institute) and Harry Mendell (Federal Reserve Bank of New York) have posted “Extending the FAIR Framework: Financial Agentic Systems” on SSRN. Here is the abstract:

This paper extends our Finance-Aware Implementation and Remediation (FAIR) framework by addressing a critical gap in current agentic system design: the temporal challenges inherent in real-world financial operations. While existing demonstrations focus on static tasks like research reports, financial markets require systems that adapt to continuously changing conditions, from equity trading execution to supply chain procurement. We provide practical implementation guidelines for financial institutions adopting Large Language Models (LLMs) and autonomous agentic systems, with particular emphasis on temporal safeguards that prevent stale data dependencies, execution drift, and cascading temporal failures. The enhanced framework introduces conditional execution protocols, dynamic re-validation mechanisms, and graceful degradation strategies that mirror successful approaches in autonomous vehicles and algorithmic trading systems.
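
To make the temporal safeguards concrete, here is a rough Python sketch of a conditional execution protocol with last-moment re-validation and graceful degradation. The class, function, and threshold names are assumptions for illustration only, not the FAIR framework's actual API.

```python
# Illustrative sketch (not the FAIR framework's API): re-validate market data
# immediately before acting, and degrade gracefully instead of executing on
# stale or drifted inputs.
import time
from dataclasses import dataclass

@dataclass
class Quote:
    symbol: str
    price: float
    timestamp: float  # UNIX seconds when the quote was captured

MAX_STALENESS_S = 2.0  # hypothetical freshness budget before data counts as stale
MAX_DRIFT_PCT = 0.5    # hypothetical tolerated price move since the plan was made

def execute_conditionally(plan_price, fetch_quote, place_order):
    """Conditional execution: act only if a fresh quote confirms the plan."""
    quote = fetch_quote()  # dynamic re-validation just before execution
    age = time.time() - quote.timestamp
    if age > MAX_STALENESS_S:
        # Graceful degradation: defer rather than trade on stale data
        return {"status": "deferred", "reason": f"quote is {age:.1f}s old"}
    drift = abs(quote.price - plan_price) / plan_price * 100
    if drift > MAX_DRIFT_PCT:
        # Execution drift: the market moved since planning, so trigger re-planning
        return {"status": "revalidate", "reason": f"price drifted {drift:.2f}%"}
    return place_order(quote)
```

The design point is simply that an agent's plan and the market can diverge between decision time and execution time; checking freshness and drift at the last moment is one way to block the stale data dependencies and cascading temporal failures the abstract describes.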

Jurcys et al. on The Future of Privacy Law? A Comment on Solove/Hartzog’s ‘Kafka in the Age of AI and the Futility of Privacy as Control’

Paul Jurcys (U California) et al. have posted “The Future of Privacy Law? A Comment on Solove/Hartzog’s ‘Kafka in the Age of AI and the Futility of Privacy as Control’” on SSRN. Here is the abstract:

This Comment engages with Daniel Solove and Woodrow Hartzog’s thought-provoking claim that “privacy as control” is increasingly “futile” in an era of ubiquitous AI. We share their concern about the profound power imbalances and structural opacity that characterize today’s data-driven systems, and we recognize the urgency of rethinking traditional privacy frameworks in light of these challenges. At the same time, we respectfully suggest that their critique may understate the potential of a reimagined, more robust vision of individual control within privacy law. 

We raise three considerations in support of this view. First, the prevailing model of individual control they critique does not fully reflect the richer, human-centric approach to personal data that law can—and should—aspire to. Second, casting individual and structural approaches as oppositional creates a false dichotomy. In practice, these dimensions are interdependent and mutually reinforcing. Effective privacy governance requires both empowered individuals and robust structural safeguards to establish a data ecosystem that genuinely serves individuals and the public interest. And third, rather than endorsing resignation or fatalism, Kafka’s work can also be read as a call to reclaim dignity and agency in the face of bureaucratic and technological opacity. 

We therefore propose a revitalized framework for privacy law that affirms both meaningfully supported and feasible personal agency and strong institutional safeguards. By integrating these complementary dimensions, our aim is to contribute to a constructive and forward-looking dialogue on how privacy can endure as a viable and principled right in an increasingly complex and algorithmically mediated world.

Deng on As AI Regulations and Price-Fixing Allegations Pick Up, New Research on Algorithmic Collusion Offers Insights for Executives and Attorneys

Ai Deng (Berkeley Research Group) has posted “As AI Regulations and Price-Fixing Allegations Pick Up, New Research on Algorithmic Collusion Offers Insights for Executives and Attorneys” (BRG ThinkSet, Spring 2025) on SSRN. Here is the abstract:

This is a two-part series on the topic of algorithmic collusion. In Part One, I delve into how algorithms influence pricing, the feasibility of algorithmic collusion, and the impact of algorithmic design on whether a pricing algorithm sets supracompetitive prices. In Part Two, I explore the closely related subject of third-party pricing algorithms, which have attracted significant attention. Throughout these articles, I draw lessons for executives and attorneys from the latest academic research.

Ciriello et al. on The Past, Present, and Futures of Artificial Emotional Intelligence: A Scoping Review

Raffaele Ciriello (U Sydney) et al. have posted “The Past, Present, and Futures of Artificial Emotional Intelligence: A Scoping Review” (Australasian Conference on Information Systems (ACIS 2024), Canberra, Australia) on SSRN. Here is the abstract:

Artificial emotional intelligence (AEI) systems, which sense, interpret, and respond to human emotions, are increasingly utilised across various sectors, enhancing interpersonal interactions while raising ethical concerns. This scoping review examines the evolving field of AEI, covering its historical development, current applications, and emerging research opportunities. Our analysis draws from 96 articles spanning multiple disciplines, revealing significant progress from initial scepticism to growing acceptance of AEI as an interdisciplinary study. We highlight AEI’s applications in healthcare, where it improves patient care; in marketing, where it enhances customer interactions; and in the love and sex industries, where it facilitates new forms of romantic and erotic engagement. Each sector demonstrates AEI’s potential to transform practices and provoke ethical debates. The review provides a framework to understand AEI’s sociotechnical implications and identifies future research opportunities regarding trustworthy, privacy-preserving, context-aware, and culturally adaptive AEI systems, with a focus on their profound impact on human relationships.

Makridis on Countering Human Trafficking Risks of Generative AI with Trustworthy AI and Education

Christos Makridis (Stanford U) has posted “Countering Human Trafficking Risks of Generative AI with Trustworthy AI and Education” on SSRN. Here is the abstract:

Generative AI (GenAI) holds transformative potential across sectors, yet its rapid deployment also brings significant risks, notably the potential to facilitate human trafficking through sophisticated recruitment and exploitation tactics. This article explores GenAI’s dual role in both enabling and countering trafficking, explaining how traffickers use AI to automate deceptive outreach and create exploitative content, while also considering how ethical AI can reinforce anti-trafficking efforts. It argues for a comprehensive framework grounded in Trustworthy AI (TAI) principles and strengthened by international guidelines, which prioritize transparency, fairness, and accountability. A multi-faceted approach—focused on education, regulation, technological innovation, and cross-sector partnerships—can harness AI responsibly to disrupt trafficking networks and support victims. By embedding ethical AI, expanding digital literacy, and fostering cooperation among policymakers, technologists, and NGOs, we can build societal resilience against trafficking while safeguarding human rights and digital safety.

Leiser & Murray on Rethinking Safety-by-Design and Techno-Solutionism for the Regulation of Child Sexual Abuse Material

M.R. Leiser (Independent) and A.D. Murray (London School of Economics) have posted “Rethinking Safety-by-Design and Techno-Solutionism for the Regulation of Child Sexual Abuse Material” (provisionally accepted and forthcoming in Technology Regulation (TechReg), https://techreg.org/) on SSRN. Here is the abstract:

This article examines the rise of technological solutions to digital regulatory challenges, with a focus on Child Sexual Abuse Material (CSAM) and the imposition of obligations on platforms to mitigate risks while safeguarding fundamental rights. This leads to new regulatory designs, such as “safety-by-design,” which is favoured by European regulators due to its cost-effectiveness and efficiency in assigning responsibilities to online gatekeepers. We examine the European Union’s CSAM Proposal and the United Kingdom’s Online Safety Act, two ambitious initiatives that aim to utilise technology to combat the dissemination of CSAM. The EU proposal mandates that platforms perform risk assessments and implement mitigation measures against the hosting or dissemination of CSAM. In cases where these measures fail, a detection order can be issued, requiring platforms to deploy technical measures, including artificial intelligence (AI), to scan all incoming and outgoing communications. This approach, while well-intentioned, is scrutinised for its potential over-reliance on technology and possible infringement of fundamental rights. The article examines the theoretical underpinnings of “safety-by-design” and “techno-solutionism,” tracing their historical development and evaluating their application in current digital regulation, particularly in online child safety policy. We contextualise the rise of safety-by-design and techno-solutionism within the broader framework of cyber regulation, examining the benefits and potential pitfalls of these approaches. We argue for a balanced approach that considers technological solutions alongside other regulatory modalities, emphasising the need for comprehensive strategies that address the complex and multifaceted nature of CSAM and online child safety, and we highlight the importance of engaging with diverse theoretical perspectives to develop effective, holistic responses to the challenges posed by CSAM in the digital environment.