Solove on Privacy in Authoritarian Times: Surveillance Capitalism and Government Surveillance

Daniel J. Solove (George Washington U Law) has posted “Privacy in Authoritarian Times: Surveillance Capitalism and Government Surveillance” on SSRN. Here is the abstract:

As the United States and much of the world face a resurgence of authoritarianism, the critical importance of privacy cannot be overstated. Privacy serves as a fundamental safeguard against the overreach of authoritarian governments.

Authoritarian power is greatly enhanced in today’s era of pervasive surveillance and relentless data collection. We are living in the age of “surveillance capitalism.” There are vast digital dossiers about every person assembled by thousands of corporations and readily available for the government to access.

In the coming years, both the federal government and some state governments may intensify surveillance and data collection efforts, targeting immigrants, punishing those involved in seeking or providing abortion services, and cracking down on gender-affirming healthcare. Personal data could also be weaponized against critics and others who resist these efforts. These campaigns may be bolstered by vigilante groups, using personal data to dox, threaten, and harm individuals they oppose—echoing historical instances where ordinary citizens actively aided totalitarian regimes in identifying and punishing dissenters or perceived “undesirables.”

In this Article, I contend that privacy protections must be significantly heightened to respond to growing threats of authoritarianism. Major regulatory interventions are necessary to prevent government surveillance from being used in inimical ways. But reforming Fourth Amendment jurisprudence and government surveillance alone will not protect against many authoritarian invasions of privacy, especially given the oligarchical character of the current strain of authoritarianism.

To adequately regulate government surveillance, it is essential to also regulate surveillance capitalism. Government surveillance and surveillance capitalism are two sides of the same coin. It is impossible to protect privacy from authoritarianism without addressing consumer privacy.

This Article proposes regulatory measures that should be taken to address government surveillance and surveillance capitalism – on both sides of the coin – to guard against authoritarianism. Federal lower court judges have some leeway to strengthen Fourth Amendment and other constitutional protections as well as consumer privacy protections. State court judges can interpret their state’s constitutions in ways that diverge from U.S. Supreme Court interpretations of the federal Constitution. State legislators can enact a wide array of measures to limit government surveillance by their states and others, as well as to rein in surveillance capitalism, minimize the data available to authoritarian regimes, regulate data brokers, incentivize the creation of less privacy-invasive surveillance technologies, and curtail the increasing government-industrial collusion. There is no silver bullet, but these measures across the entire landscape of privacy law can make a meaningful difference.

Gerke on The Need for ‘Nutrition Facts Labels’ and ‘Front-Of-Package Nutrition Labeling’ For Artificial Intelligence/Machine Learning-Based Medical Devices – Lessons Learned From Food Labeling

Sara Gerke (U Illinois College Law) has posted “The Need for ‘Nutrition Facts Labels’ and ‘Front-Of-Package Nutrition Labeling’ For Artificial Intelligence/Machine Learning-Based Medical Devices – Lessons Learned From Food Labeling” (Forthcoming, Emory Law Journal (Vol. 74, 2025)) on SSRN. Here is the abstract:

Medical AI is rapidly transforming healthcare. The U.S. Food and Drug Administration (FDA) has already authorized the marketing of over 1000 AI/ML-based medical devices, and many more products are in the development pipeline. However, despite this rapid development, the regulatory framework for AI/ML-based medical devices could be improved. This Article focuses on the labeling for AI/ML-based medical devices, a crucial topic that needs more attention in the legal literature and from regulators like the FDA. The current lack of labeling standards tailored explicitly to AI/ML-based medical devices is an obstacle to transparency in the use of such devices. It prevents users from receiving essential information about many AI/ML-based medical devices necessary for their safe use, such as race/ethnicity and gender breakdowns of the training data used. To ensure transparency and protect patients’ health, the FDA must develop labeling standards for AI/ML-based medical devices as quickly as possible.

This Article argues that valuable lessons can be learned from food labeling and applied to labeling for AI/ML-based medical devices. In particular, it argues that there is not only a need for regulators like the FDA to develop “nutrition facts labels,” called here “AI Facts labels,” for AI/ML-based medical devices, but also a “front-of-package (FOP) nutrition labeling system,” called here “FOP AI labeling system.” The use of FOP AI labels as a complement to AI Facts labels can further users’ literacy by providing at-a-glance, easy-to-understand information about the AI/ML-based medical device and enable them to make better-informed decisions about its use. This Article is the first to establish a connection between FOP nutrition labeling systems and their promise for AI/ML-based medical devices and to make concrete suggestions on what such a system could look like. It also makes additional concrete proposals on other aspects of labeling for AI/ML-based medical devices, including the development of an innovative, user-friendly app based on the FOP AI labeling system as well as labeling requirements for AI/ML-generated content.
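
The two-tier structure the Article proposes is easy to picture as a data object: a detailed AI Facts label plus an at-a-glance FOP summary derived from it. The sketch below is a hypothetical illustration built only from details mentioned in the abstract (such as demographic breakdowns of the training data); the field names and the flagging rule are my own assumptions, not the Article’s specification.

    # Hypothetical sketch of a two-tier AI labeling scheme. Field names and
    # the flagging rule are illustrative assumptions, not the Article's
    # actual proposal.
    from dataclasses import dataclass, field

    @dataclass
    class AIFactsLabel:
        """Detailed 'nutrition facts'-style label for an AI/ML medical device."""
        device_name: str
        intended_use: str
        training_data_race_ethnicity: dict[str, float]  # share of training data
        training_data_gender: dict[str, float]
        known_limitations: list[str] = field(default_factory=list)

    def front_of_package_summary(label: AIFactsLabel) -> str:
        """At-a-glance FOP summary: flag thin demographic coverage."""
        thin = [g for g, share in label.training_data_race_ethnicity.items()
                if share < 0.05]
        flag = ("CAUTION: limited data for " + ", ".join(thin)) if thin else "OK"
        return f"{label.device_name}: {flag}"

    label = AIFactsLabel(
        device_name="ExampleDerm AI",
        intended_use="skin-lesion triage",
        training_data_race_ethnicity={"White": 0.80, "Black": 0.04, "Asian": 0.16},
        training_data_gender={"female": 0.55, "male": 0.45},
    )
    print(front_of_package_summary(label))
    # ExampleDerm AI: CAUTION: limited data for Black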

Cofone & Khern-am-nuai on The Overstated Cost of AI Fairness in Criminal Justice

Ignacio Cofone (Oxford U Law) and Warut Khern-am-nuai (McGill U Desautels Management) have posted “The Overstated Cost of AI Fairness in Criminal Justice” (Indiana Law Journal (forthcoming 2025)) on SSRN. Here is the abstract:

The dominant critique of algorithmic fairness in AI decision-making, particularly in criminal justice, is that increasing fairness reduces the accuracy of predictions, thereby imposing a cost on society. This article challenges that assumption by empirically analyzing the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, a widely used and widely discussed risk assessment tool in the U.S. criminal justice system.

This Essay makes two contributions. First, it demonstrates that widely used AI models do more than replicate existing biases—they exacerbate them. Using causal inference methods, we show that racial bias not only is present in the COMPAS training dataset but is also worsened by AI models such as COMPAS. This finding has implications for legal scholarship and policymaking, as it (a) challenges the assumption that AI can offer an objective or neutral improvement over human decision-making and (b) provides counterevidence to the idea that AI merely mirrors pre-existing human biases.

Second, this Essay reframes the debate over the cost of fairness in algorithmic decision-making for criminal justice. It shows that applying fairness constraints does not necessarily lead to a cost in terms of lost predictive accuracy regarding recidivism. AI systems operationalize concepts such as risk by making implicit and often flawed normative choices about what to predict and how to predict it. The claim that fair AI models decrease accuracy assumes that the model’s prediction is an optimal baseline. Fairness constraints, rather, can correct distortions introduced by biased outcome variables—which magnify systemic racial disparities in rearrest data rather than reflect actual risk. In some cases, interventions can introduce algorithmic fairness without imposing the cost often presumed in policy discussions.

These findings are consequential beyond criminal justice. Similar dynamics exist in AI-driven decision-making in lending, hiring, and housing, where biased outcome variables reinforce systemic inequalities beyond the choices of proxies. By providing empirical evidence that fairness constraints can improve rather than undermine decision-making, this article advances the conversation on how law and policy should approach AI bias, particularly when algorithmic decisions affect fundamental rights.
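
To see how a fairness constraint can improve rather than reduce accuracy when the outcome variable is biased, consider a toy simulation. Everything in it – the bias mechanism, the thresholds, the numbers – is an illustrative assumption of mine, not the authors’ COMPAS analysis; it merely shows the logical possibility the Essay argues for.

    # Toy simulation (illustrative assumptions only, not the authors' COMPAS
    # analysis): the measured score over-records risk for group B, so a single
    # threshold fit to measured scores is miscalibrated. Per-group thresholds
    # that equalize flag rates (a demographic-parity-style constraint) are
    # *more* accurate against the true outcome.
    import random
    random.seed(0)

    N = 20000
    true_risk = [random.random() for _ in range(N)]      # latent true risk
    group     = [i % 2 for i in range(N)]                # 0 = A, 1 = B
    # Measured score inherits an over-policing bias against group B.
    score = [r + 0.2 * g + random.gauss(0, 0.1)
             for r, g in zip(true_risk, group)]
    truth = [r > 0.5 for r in true_risk]                 # ground-truth outcome

    def accuracy(preds):
        return sum(p == t for p, t in zip(preds, truth)) / N

    # Unconstrained rule: one threshold applied to the biased scores.
    unconstrained = [s > 0.6 for s in score]
    # "Fair" rule: per-group thresholds chosen to equalize flag rates
    # (0.5 for A, 0.7 for B, offsetting the +0.2 bias in B's scores).
    fair = [s > (0.7 if g else 0.5) for s, g in zip(score, group)]

    print(f"unconstrained accuracy:        {accuracy(unconstrained):.3f}")
    print(f"fairness-constrained accuracy: {accuracy(fair):.3f}")

Because the distortion lives in the measured outcome rather than in the underlying risk, the constraint removes a miscalibration instead of trading accuracy away, which is the structure of the Essay’s argument.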

Arun on The Silicon Valley Effect

Chinmayi Arun (Yale Law) has posted “The Silicon Valley Effect” on SSRN. Here is the abstract:

The most influential Artificial Intelligence (“AI”) companies are shaping AI’s legal order and regulatory discourse to protect their business interests and shift focus away from how their practices harm human beings. I call Big Tech’s influence on AI’s legal order the Silicon Valley Effect and argue that it is understudied and underestimated.

The major AI companies rely on global value chains and global markets. Capitalism drives them to exploit and experiment on vulnerable populations in permissive regulatory environments. Industry-influenced transnational legal orders – including domestic regulation and treaties – protect companies’ practices and products from regulation. Legal scholarship should account for how global informational capitalism drives the industry to influence the development of law transnationally.

Scholars who study technology’s political economy tend to advocate for localized regulation, and scholars who focus on technology’s global legal orders tend to focus on states. Focusing on isolated domestic remedies for transnational phenomena is a mistake, since it permits the industry to develop harmful products and practices elsewhere. Focusing exclusively on states’ transnational influence elides the industry’s significant influence on regulatory discourse, and on foreign and domestic policy.

As the AI industry accumulates power, it can overwhelm weakening state regulators in parts of the world that could initially resist its persuasive and material power. ‘Strong’ states like the US are one election away from vulnerability. To be resilient, they should stop relying solely on domestic regulation and develop transnationally harmonized legal orders to curtail the industry’s power and counteract the Silicon Valley Effect.

Swoboda et al. on Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis

Torben Swoboda (KU Leuven) et al. have posted “Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis” on SSRN. Here is the abstract:

Concerns about artificial intelligence (AI) and its potential existential risks have garnered significant attention, with figures like Geoffrey Hinton and Demis Hassabis advocating for robust safeguards against catastrophic outcomes. Prominent scholars, such as Nick Bostrom and Max Tegmark, have further advanced the discourse by exploring the long-term impacts of superintelligent AI. However, this existential risk narrative faces criticism, particularly in popular media, where scholars like Timnit Gebru, Melanie Mitchell, and Nick Clegg argue, among other things, that it distracts from pressing current issues. Despite extensive media coverage, skepticism toward the existential risk discourse has received limited rigorous treatment in academic literature. Addressing this imbalance, this paper reconstructs and evaluates three common arguments against the existential risk perspective: the Distraction Argument, the Argument from Human Frailty, and the Checkpoints for Intervention Argument. By systematically reconstructing and assessing these arguments, the paper aims to provide a foundation for more balanced academic discourse and further research on AI.

Deffains & Fluet on Decision Making Algorithms: Product Liability and the Challenges of AI

Bruno Deffains (U Paris II Panthéon-Assas) and Claude Fluet (U Laval) have posted “Decision Making Algorithms: Product Liability and the Challenges of AI” on SSRN. Here is the abstract:

The question of AI liability (e.g., for robots, autonomous systems, or decision-making devices) has been widely discussed in recent years. The issue is how to adapt non-contractual civil liability rules, and in particular producer liability legislation, to the challenges posed by the risk of harm caused by AI applications, centering on notions such as fault-based liability vs. strict liability vs. liability for defective products. The purpose of this paper is to discuss the lessons that can be drawn from the canonical Law & Economics model of producer liability, insofar as it can be applied to decision-making AI applications. We extend the canonical model by relating the risk of harm facing the users of an application to the risk of decision-making errors. Investments in safety, e.g. through better design and software, reduce the risk of decision-making errors. The cost of improving safety is shared by all users of the product.
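
For readers unfamiliar with the canonical model, a minimal sketch of the standard unilateral-care setup it builds on (notation mine, not the authors’) runs as follows, with the paper’s extension reading p(x) as the probability of a decision-making error:

    % Minimal sketch of the canonical producer-liability (unilateral care)
    % model; x, p(x), and h are my notation, not the authors'.
    % x    : per-user safety investment (better design, software)
    % p(x) : probability of a decision-making error, with p'(x) < 0
    % h    : expected harm when an error occurs
    \[
      \min_{x}\; C(x) = x + p(x)\,h
      \qquad\Longrightarrow\qquad
      1 + p'(x^{*})\,h = 0 .
    \]
    % Under strict liability the producer internalizes p(x)h and chooses x*
    % on its own; under negligence it does so only if the due-care standard
    % is set at x*.

The abstract’s observation that the cost of improving safety is shared by all users corresponds to x entering the per-user expected cost C(x) directly.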

Silbey on How Theories of Art Can Inform Debates About AI

Jessica M. Silbey (Boston U Law) has posted “How Theories of Art Can Inform Debates About AI” (74 Emory Law Journal __ (forthcoming 2025, Issue 5)) on SSRN. Here is the abstract:

Debates about artificial intelligence (AI) tend to swing between the optimistic and the apocalyptic. I propose a less binary approach that frames conversations about AI from the perspectives of theories of art and creativity. Whether we agree that AI is artificial or intelligent, whether it should be constrained or liberated, we cannot deny its influence on literary, artistic, and innovative production. AI may be described as simply a new tool to produce art and science, like the camera or the microscope, or it may transform art and science, the way the internet transformed global communication. Either way, these debates about AI concern its relationship to treasured human activity, and thus, this Essay asserts, they have something to learn from philosophies of art and aesthetics. Copyright law may be the most obvious legal regime to address some of AI’s effects on creative practices, but copyright cannot and should not solve the problems AI raises for artists and authors. Better automation (improved technology) or more precise laws (targeting harms) inadequately address the problems generative AI poses. Instead, art history and aesthetic theory – with their attention to literature, painting, poetry, music, or any other art form (or “artifice”) – provide better frameworks for thinking about the challenges and opportunities of generative AI because of their focus on struggles over our common humanity.

Malgieri & Rebrean on Vulnerability in the EU AI Act: building an interpretation

Gianclaudio Malgieri (Universiteit Leiden) and Maria-Lucia Rebrean (Universiteit Leiden) have posted “Vulnerability in the EU AI Act: building an interpretation” on SSRN. Here is the abstract:

Vulnerability, a key concept in EU law, remains ambiguous in the Artificial Intelligence Act (AIA), complicating its legal application. This paper offers a comprehensive interpretation of vulnerability, exploring its historical context in EU law and analysing its references in the AIA. It proposes a tripartite definition encompassing identity, context, and power imbalances, arguing that vulnerability is a fluid condition shaped by AI deployment and fundamental rights. Aligning with Article 7(2) of the AIA, this interpretation advocates for a broader, adaptive understanding of vulnerability across EU digital regulations, enhancing legal protections for vulnerable individuals in the digital age.

Kop et al. on A Brief Quantum Medicine Policy Guide

Mauritz Kop (Stanford Law Center for Internet and Society) et al. have posted “A Brief Quantum Medicine Policy Guide” (Harvard Law School, Petrie-Flom Center Bill of Health, Dec. 6, 2024, https://petrieflom.law.harvard.edu/2024/12/06/a-brief-quantum-medicine-policy-guide/) on SSRN. Here is the abstract:

This brief healthcare policy guide explores how the convergence of quantum technology (QT) and artificial intelligence (AI) could revolutionize precision medicine, offering hyper-personalized treatments and innovative solutions to longstanding healthcare challenges. Second-generation (2G) quantum technologies leverage quantum mechanical phenomena like superposition and entanglement to solve problems beyond the reach of classical methods. By integrating quantum and classical computing, “quantum-classical hybrids” can improve drug discovery, optimize healthcare operations, enhance medical imaging, and facilitate personalized medicine design.

The article describes 2G quantum technology healthcare use cases, categorized by quantum domain. Potential applications include using quantum simulations to model complex biological systems, accelerating drug development by predicting drug-protein interactions, and employing quantum dots for targeted gene and drug delivery, which can help treat diseases like Alzheimer’s, Parkinson’s, and certain cancers. Quantum sensors can enable real-time health monitoring with exceptional precision, while quantum cryptography provides robust data protection methods—essential for safeguarding patient information under regulations like HIPAA and GDPR.

However, these breakthroughs also raise ethical, legal, socio-economic, and policy (ELSPI) concerns. Drawing lessons from AI, nanotechnology, genetics, and nuclear technology governance, policymakers must ensure responsible oversight. Neither the European Union nor the United States currently has dedicated regulations for quantum healthcare devices, though both rely on existing frameworks like the EU’s Medical Device Regulation, the EU AI Act, the Federal Trade Commission (FTC) Regulations, and the FDA regulatory categories. To manage these complexities, a combination of ex-ante, ex-durante, and ex-post regulatory approaches, as well as international standard-setting, adaptive guidelines, and multidisciplinary collaboration, is recommended. The article offers quantum-specific considerations in medical device regulatory oversight and proposes 10 guiding principles for healthcare policy makers.

By promoting quantum literacy, anticipating societal impacts, fostering global cooperation, and implementing principles-based, future-oriented regulation, we can harness quantum’s transformative potential in medicine while maintaining public trust and safety.

Kaal on Artificial Intelligence: The Final Frontier

Wulf A. Kaal (U St. Thomas Law (Minnesota)) has posted “Artificial Intelligence: The Final Frontier” on SSRN. Here is the abstract:

Contemporary Artificial Intelligence (“AI”) systems, particularly Large Language Models (“LLMs”), face an imminent shortage of high-quality, human-generated textual data, a phenomenon often termed “data exhaustion”. This article examines the limitations of existing centralized data-annotation frameworks, highlighting critical issues such as bias, high computational overhead, and insufficiently adaptive infrastructures. Current market participants – including Scale AI, Appen, CloudFactory, and others – excel at rapidly scaling annotation services yet struggle with ethical sourcing, privacy compliance, and equitable compensation. In addition, legal and regulatory concerns, exemplified by stringent mandates such as the General Data Protection Regulation (“GDPR”), constrain the free flow of data essential for advanced AI research. As a corrective measure, decentralized data production paradigms are proposed, including the adoption of smart contracts, token-based incentives, and participatory governance through Decentralized Autonomous Organizations (“DAOs”). While existing decentralized initiatives – SingularityNET, Fetch.ai, Ocean Protocol, Numeraire, and DcentAI – offer incremental innovations in reputation management and stakeholder engagement, they fail to fully address the nuanced requirements of large-scale “Mechanical Turk”-style data creation. In contrast, the author proposes a Weighted Directed Acyclic Graph (“WDAG”) governance model, which provides a multi-dimensional reputation framework, facilitating real-time validation of data contributions, adaptive ethical and legal compliance, and collaborative oversight by diverse community members. Findings suggest that such WDAG-centric systems can more effectively maintain data quality, ensure ethical alignment, and incentivize broad participation, thereby mitigating the looming data shortage and expanding AI’s societal benefits. Ultimately, successful implementation requires coordinated efforts among policymakers, industry practitioners, and civil society actors to sustain both the technological and ethical integrity of AI research. By integrating WDAG-based governance with emerging decentralized solutions, the AI community may realize a more equitable, scalable, and future-ready paradigm for data provisioning.
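
The abstract does not specify the WDAG mechanics, but the general idea of a reputation-weighted validation graph can be sketched in a few lines. Everything below – class names, the scoring rule, the numbers – is a hypothetical stand-in for the article’s multi-dimensional framework, not its actual model.

    # Hypothetical sketch of a reputation-weighted validation graph: each
    # validation is a directed edge from a validator to a contribution (a
    # trivially acyclic, weighted graph), and a contribution's score is a
    # mean of validators' scores weighted by edge weight times validator
    # reputation. All names and rules here are illustrative assumptions.
    from collections import defaultdict

    class WDAGReputation:
        def __init__(self):
            # edges[validator] = list of (contribution, weight, score)
            self.edges = defaultdict(list)
            self.reputation = defaultdict(lambda: 1.0)  # seed reputation

        def validate(self, validator, contribution, weight, score):
            """Record that `validator` scored `contribution` (0.0 to 1.0)."""
            self.edges[validator].append((contribution, weight, score))

        def score(self, contribution):
            """Reputation-weighted mean score for one contribution."""
            num = den = 0.0
            for validator, checks in self.edges.items():
                for contrib, weight, s in checks:
                    if contrib == contribution:
                        w = weight * self.reputation[validator]
                        num += w * s
                        den += w
            return num / den if den else 0.0

    # Example: two validators of unequal reputation score the same batch.
    rep = WDAGReputation()
    rep.reputation["alice"] = 2.0   # long-standing, trusted validator
    rep.validate("alice", "batch-17", weight=1.0, score=0.9)
    rep.validate("bob", "batch-17", weight=1.0, score=0.4)
    print(round(rep.score("batch-17"), 3))  # 0.733: alice's view dominates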