Mihet et al. on Is It AI or Data That Drives Market Power?

Roxana Mihet (Swiss Finance Institute, HEC Lausanne) et al. have posted “Is It AI or Data That Drives Market Power?” on SSRN. Here is the abstract:

Artificial intelligence (AI) is transforming productivity and market structure, yet the roots of firm dominance in the modern economy remain unclear. Is market power driven by AI capabilities, access to data, or the interaction between them? We develop a dynamic model in which firms learn from data using AI, but face informational entropy: without sufficient AI, raw data has diminishing or even negative returns. The model predicts two key dynamics: (1) improvements in AI disproportionately benefit data-rich firms, reinforcing concentration; and (2) access to processed data substitutes for compute, allowing low-AI firms to compete and reducing concentration. We test these predictions using novel data from 2000–2023 and two exogenous shocks—the 2006 launch of Amazon Web Services (AWS) and the 2017 introduction of transformer-based architectures. The results confirm both mechanisms: compute access enhances the advantage of data-intensive firms, while access to processed data closes the performance gap between AI leaders and laggards. Our findings suggest that regulating data usability—not just AI models—is essential to preserving competition in the modern economy.

Dooling on Ghostwriting the Government

Bridget C.E. Dooling (The Ohio State U) has posted “Ghostwriting the Government” (109 Marq. L. Rev. (forthcoming 2026)) on SSRN. Here is the abstract:

Ghostwriting is when a writer prepares materials to be issued under someone else’s name. It is very common and sometimes unseemly, but why? Ghostwriting describes a politician’s use of a speechwriter, a student’s purchase of a term paper, or a tongue-twisted admirer asking a poet to craft a love letter on his behalf. It is also what happens inside organizations every day: staff draft documents for others “up the chain” to sign. Lots of people in institutions ghostwrite, but we don’t tend to call it that. We don’t call it anything, really; it’s just writing. But when legislators rely on staff and lobbyists to draft bills, when an agency head relies on staff or contractors to write a rule, and when a judge relies on her clerk for a draft opinion, the benefits of ghostwriting come into tension with the duties of government decisionmakers. This Article argues that when a government decisionmaker has a duty to reason, ghostwriting can violate that duty. A critique based on duty enhances our ability to assess governmental ghostwriting, and it comes just in time. In the quest for government efficiency, generative AI looms large. If it doesn’t matter who writes what, so long as someone “signs off” at the end, why not hand governmental drafting over to the algorithm?

Park & Cohen on The Regulation of Polygenic Risk Scores

Jin Park (Harvard Medical School) and I. Glenn Cohen (Harvard Law) have posted “The Regulation of Polygenic Risk Scores” (Harvard Journal of Law & Technology) on SSRN. Here is the abstract:

Polygenic risk scores (“PRSs”) provide genome-wide estimates of disease risk by aggregating the effects of thousands of genetic variants across the genome. These scores are the subject of immense scientific interest as research tools and, more recently, as clinical instruments that may allow physicians to stratify populations based on underlying genetic predisposition, or to tailor therapeutic interventions based on their needs and likelihood of benefit. While their status as research tools has long been recognized, these scores are now undergoing clinical trials, increasing the evidence base for their use in clinical settings. These scores have also entered the consumer market, prompting industry experts to call for greater regulatory oversight. However, in part due to the speed of these developments, the legal literature has failed to comprehensively assess the nature of these scores, and whether they differ fundamentally from previous forms of genetic scoring which have been regulated by the complex (yet familiar) regulatory regime for genetic testing.

This Article fills this gap in the literature by comparing the state-of-the-art methodological tools used to generate these scores with familiar forms of genetic testing (e.g., IVDs and LDTs). We identify four dimensions that make PRSs distinct from previous genetic testing regimes: (1) the underlying method of assessing genetic risk; (2) an evolving evidence base; (3) a lack of consensus on methodology; and (4) the diversity of device functions to which PRSs may apply. Taking these insights in concert, this Article also offers several principles for regulatory design as it relates to PRSs.

These principles include the need for a unified approach across all devices that incorporate PRSs, the value of taking a risk-based framework, and drawing lessons from AI/ML regulation. Ultimately, while the existing risk-based device framework will serve as a stopgap for the most clinically impactful use cases (and those that pose the most risk to patients and the public), PRSs and other novel technologies may evince the need for updates to the authorities granted to the existing regulatory regime to balance scientific innovation with the public interest.

Lim on Determinants of Socially Responsible AI Governance

Daryl Lim (Pennsylvania State U) has posted “Determinants of Socially Responsible AI Governance” (Duke Law & Technology Review, Vol. 25, No. 1, 2025) on SSRN. Here is the abstract:

The signing of the first international AI treaty by the United States, European Union, and other nations marks a pivotal step in establishing a global framework for AI governance, ensuring that AI systems respect human rights, democracy, and the rule of law. This article advances the concepts of justice, equity, and the rule of law as yardsticks of socially responsible AI, from development through deployment, to ensure that AI technologies do not exacerbate existing inequalities but actively promote fairness and inclusivity. Part I explores AI’s potential to improve access to justice for marginalized communities and small and medium-sized law firms while scrutinizing the AI-related risks that judges, lawyers, and the communities they serve face. Part II examines the structural biases in AI systems, focusing on how biased data and coding practices can entrench inequity and how intellectual property protections like trade secrets can limit transparency and undermine accountability in AI governance. Part III evaluates the normative impact of AI on traditional legal frameworks, offering a comparative analysis of governance models: the U.S. market-driven approach, the EU’s rights-based model, China’s command economy, and Singapore’s soft law framework. The analysis highlights how different systems balance innovation with safeguards, emphasizing that successful AI governance must integrate risk-based regulation and transparency without stifling technological advancement. Through these comparative insights, the article proposes a proactive governance framework incorporating transparency, equity audits, and tailored regulatory approaches. This forward-looking analysis offers legal scholars and policymakers a comprehensive roadmap for navigating AI’s transformative effects on justice, equity, and the rule of law.

Krause on DeepSeek and FinTech: The Democratization of AI and Its Global Implications

David Krause (Marquette U) has posted “DeepSeek and FinTech: The Democratization of AI and Its Global Implications” on SSRN. Here is the abstract:

DeepSeek, a Chinese AI company, has introduced a new paradigm in FinTech by making high-performing AI models accessible at significantly lower costs. This paper explores how DeepSeek’s cost-efficient and open-source approach is disrupting traditional AI development, lowering barriers to entry for startups, and fostering competition in financial services. The research highlights the transformative applications of democratized AI in lending, investment management, insurance, and payments, as well as the systemic challenges it presents, including regulatory concerns, ethical dilemmas, and geopolitical implications. Additionally, the paper examines the potential impact of DeepSeek on AI efficiency, energy consumption, and market dynamics. While DeepSeek’s success offers opportunities for financial inclusion and innovation, it also raises concerns about security, data governance, and the shifting balance of technological power. The findings underscore the need for global regulatory coordination and strategic adaptation in the FinTech sector.

G’sell on Digital Authoritarianism: from state control to algorithmic despotism

Florence G’sell (Stanford U) has posted “Digital Authoritarianism: from state control to algorithmic despotism” on SSRN. Here is the abstract:

In an era where digital technology serves as both a tool for liberation and a threat to democracy, the term “digital authoritarianism” has emerged to describe the strategies employed by authoritarian regimes to exert control in the digital sphere. This chapter explores the defining characteristics of digital authoritarianism as exemplified by countries such as China and Russia, identifying three primary pillars: information control, mass surveillance, and the creation of a fragmented, isolated Internet. Furthermore, this chapter emphasizes that digital authoritarian practices are not confined to authoritarian regimes. Democratic governments and technologically advanced private corporations, especially the dominant tech companies shaping the modern Internet, are also capable of adopting authoritarian tactics. Finally, the chapter argues that the technology itself—through the omnipotence of code in cyberspace—may inherently foster a form of digital authoritarianism.

Price II & Freilich on Data as Policy

W. Nicholson Price II (U Michigan Law) and Janet Freilich (Boston U Law) have posted “Data as Policy” (66 Boston College Law Review (forthcoming 2025)) on SSRN. Here is the abstract:

A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool both for positive regulation and deregulation in contexts ranging from healthcare discrimination to automated legal practice to smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility of autocratic abuse.

We build on the broad literature on regulation by introducing data as a regulatory tool. We also join, and diverge from, the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox.

Merane & Stremitzer on Automated Private Enforcement: Evidence from the Google Fonts Case

Jakob Merane (ETH Zurich) and Alexander Stremitzer (ETH Zurich) have posted “Automated Private Enforcement: Evidence from the Google Fonts Case” on SSRN. Here is the abstract:

Plaintiffs often have little incentive to detect and enforce small claims, which reduces defendants’ incentives to comply. With advances in artificial intelligence, can automated private enforcement increase compliance? The Google Fonts Case offers a unique opportunity to explore this question. After a German court ruled that the dynamic embedding of Google Fonts violated the GDPR, an entrepreneurial lawyer in Austria used automated tools to detect violations and threaten website operators with lawsuits. Drawing on a comprehensive sample of 1,517,429 websites across 32 European countries over a two-year period, we use a difference-in-differences approach to show a significant compliance effect in Austria. Within three months, non-compliance dropped by 22.7 percentage points, a nearly 50% reduction. These findings suggest that automated private enforcement can be highly disruptive, pressuring policymakers to recalibrate legal rules.
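To make the reported magnitudes concrete, here is a minimal sketch of the difference-in-differences arithmetic behind a result like this. The 22.7 percentage-point drop and the roughly 50% relative reduction are from the abstract; all rates in the example are hypothetical, chosen only to be consistent with those two figures (a 22.7 pp drop that halves non-compliance implies a baseline near 45%), and the function name is illustrative, not the authors’ code.

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: change in the treated group
    minus the contemporaneous change in the control group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical non-compliance rates consistent with the reported effect:
# Austria (treated) falls from 45.4% to 22.7%; other countries stay flat.
effect = did_estimate(0.454, 0.227, 0.450, 0.450)

print(f"DiD effect: {effect:+.3f}")                # -0.227, i.e. 22.7 pp drop
print(f"Relative reduction: {-effect / 0.454:.0%}")  # ~50% of the baseline
```

The control-group subtraction is what distinguishes the estimate from a simple before/after comparison: any Europe-wide trend in compliance is netted out, leaving the Austria-specific change.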

Krook on When Autonomy Breaks: The Hidden Existential Risk of AI

Joshua Krook (U Antwerp Law) has posted “When Autonomy Breaks: The Hidden Existential Risk of AI” on SSRN. Here is the abstract:

AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing humanity’s extinction. However, I argue, in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, creativity, social care or even leadership.

What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time, and that these skills are innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making and even social care in an AGI world. The biggest threat to humanity is therefore not that machines will become more like humans, but that humans will become more like machines.

Kogan on Artificial Intelligence, Existential Risk, and the First Amendment

Ilan Kogan (Yale U Law) has posted “Artificial Intelligence, Existential Risk, and the First Amendment” (University of Pennsylvania Journal of Constitutional Law, Vol. 27, March 2025) on SSRN. Here is the abstract:

In May 2023, hundreds of public figures signed a statement warning of the growing risk of human extinction from sophisticated artificial-intelligence systems. Yet, in many important cases, the outputs of sophisticated artificial-intelligence systems qualify as protected speech for First Amendment purposes. Regulators’ increasing focus on the potential for artificial intelligence to extinguish humanity is thus minimally actionable. Sophisticated artificial-intelligence systems are unlikely to present sufficient risk such that their regulation may subvert the Constitution. By limiting unnecessary regulation aimed at speculative risks, the First Amendment helps ensure that the United States will benefit from important technological advances in the twenty-first century.