Azzutti on AI-Driven Market Manipulation and Limits of the EU Law Enforcement Regime to Credible Deterrence

Alessio Azzutti (Institute of Law & Economics – University of Hamburg) has posted “AI-Driven Market Manipulation and Limits of the EU Law Enforcement Regime to Credible Deterrence” on SSRN. Here is the abstract:

As in many other sectors of EU economies, ‘artificial intelligence’ (AI) has entered the scene of the financial services industry as a game-changer. Trading on capital markets is undoubtedly one of the most promising AI application domains. A growing number of financial market players have in fact been adopting AI tools within the realm of algorithmic trading. While AI trading is expected to deliver several efficiency gains, it can also bring unprecedented risks due to the technical specificities and added uncertainties of certain ‘machine learning’ methods.
With a focus on new and emerging risks of AI-driven market manipulation, this study critically assesses the ability of the EU anti-manipulation law and enforcement regime to achieve credible deterrence. It argues that AI trading is currently left operating within a (quasi-)lawless market environment, with the ultimate risk of jeopardising EU capital markets’ integrity and stability. It shows how ‘deterrence theory’ can serve as a normative framework for devising innovative solutions to the many shortcomings of the current EU legal framework in the fight against AI-driven market manipulation.
In concluding, this study suggests improving the existing EU anti-manipulation law and enforcement regime through a number of policy proposals: (i) an improved, ‘harm-centric’ definition of manipulation; (ii) an improved, ‘multi-layered’ liability regime for AI-driven manipulation; and (iii) a novel, ‘hybrid’ public-private enforcement institutional architecture built on the introduction of market manipulation ‘bounty-hunters’.

Tang on Privatizing Copyright

Xiyin Tang (UCLA School of Law; Yale Law School) has posted “Privatizing Copyright” (Michigan Law Review, Forthcoming) on SSRN. Here is the abstract:

Much has been written, and much is understood, about how and why digital platforms regulate free expression on the Internet. Much less has been written, and even less is understood, about how and why digital platforms regulate creative expression on the Internet: expression that makes use of others’ copyrighted content. While § 512 of the Digital Millennium Copyright Act regulates user-generated content incorporating copyrighted works, just as § 230 of the Communications Decency Act regulates other user speech on the Internet, it is in fact rarely used by the largest Internet platforms, Facebook and YouTube. Instead, as this Article details, creative speech on those platforms is governed by a series of highly confidential licensing agreements entered into with large copyright holders.

Yet despite the dominance of private contracting in ordering how millions of pieces of digital content are made and distributed on a daily basis, little is known, and far less has been written, about just what the new rules governing creative expression are. This is, of course, by design: these license agreements contain strict confidentiality clauses that prohibit public disclosure. This Article, however, pieces together clues from publicly available court filings, news reporting, and leaked documents. The picture it reveals is a world where the substantive law of copyright is being quietly rewritten: by removing the First Amendment safeguard of fair use; by inserting a new moral right for works that Congress had deemed, in the Copyright Act, ineligible for moral rights protection; and, through other small provisions in the numerous agreements digital platforms negotiate with rightsholders, by influencing and reshaping administrative, common, and statutory copyright law. Further still, recent changes, or lobbied-for changes, to copyright’s statutory law seek either to enshrine the primacy of such private contracting or to remove copyright rule-making processes from government oversight altogether, shielding copyright’s public law from independent considerations of public policy and from public scrutiny.

Changing copyright’s public law to enshrine the primacy of such private ordering insulates the new rules of copyright from the democratic process, from public participation in, and from public oversight of, the laws that shape our daily lives. Creative expression on the Internet now finds itself at a curious precipice: a seeming glut of low-cost, or free, content, much of which is created directly by, and distributed to, users—yet increasingly regulated by an opaque network of rules created by a select few private parties. An understanding of the Internet’s democratizing potential for creativity is incomplete without a concomitant understanding of how the new private rules of copyright may shape, and harm, that creativity.

Aguiar et al. on Facebook Shadow Profiles

Luis Aguiar (University of Zurich – Department of Business Administration) et al. have posted “Facebook Shadow Profiles” on SSRN. Here is the abstract:

Data is often at the core of digital products and services, especially when related to online advertising. This has made data protection and privacy a major policy concern. When surfing the web, consumers leave digital traces that can be used to build user profiles and infer preferences. We quantify the extent to which Facebook can track web behavior outside of its own platform. The network of engagement buttons, placed on third-party websites, lets Facebook follow users as they browse the web. Tracking users outside its core platform enables Facebook to build shadow profiles. For a representative sample of US internet users, 52 percent of websites visited, accounting for 40 percent of browsing time, employ Facebook’s tracking technology. Small differences between Facebook users and non-users are largely explained by differing user activity. The extent of shadow profiling Facebook may engage in is similar on privacy-sensitive domains and across user demographics, documenting the possibility of indiscriminate tracking.
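
The headline figures here are simple shares computed over a browsing log. As a rough illustration of the measurement, and not the authors' code, here is a minimal Python sketch that computes both shares from a hypothetical clickstream; the domains, durations, and tracker flags are invented for the example.

```python
# Minimal sketch (not the study's code): estimating what share of a user's
# browsing is exposed to an embedded third-party tracker, from a hypothetical
# clickstream log. Domain names and numbers are illustrative only.

# Each record: (domain, seconds_spent, embeds_fb_tracker)
browsing_log = [
    ("news.example.com",    120, True),
    ("shop.example.net",    300, True),
    ("blog.example.org",     60, False),
    ("health.example.info",  90, False),
    ("video.example.tv",    240, True),
]

tracked = [r for r in browsing_log if r[2]]

share_of_sites = len({r[0] for r in tracked}) / len({r[0] for r in browsing_log})
share_of_time = sum(r[1] for r in tracked) / sum(r[1] for r in browsing_log)

print(f"Share of visited domains with tracker: {share_of_sites:.0%}")
print(f"Share of browsing time with tracker:   {share_of_time:.0%}")
```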

Ashley & Bruninghaus on Computer Models for Legal Prediction

Kevin Ashley (University of Pittsburgh – School of Law) and Stefanie Bruninghaus (same) have posted “Computer Models for Legal Prediction” (Jurimetrics, Vol. 46, p. 309, 2006) on SSRN. Here is the abstract:

Computerized algorithms for predicting the outcomes of legal problems can extract and present information from particular databases of cases to guide the legal analysis of new problems. They can have practical value despite the limitations that make reliance on predictions risky for other real-world purposes such as estimating settlement values. An algorithm’s ability to generate reasonable legal arguments also is important. In this article, computerized prediction algorithms are compared not only in terms of accuracy, but also in terms of their ability to explain predictions and to integrate predictions and arguments. Our approach, the Issue-Based Prediction algorithm, is a program that tests hypotheses about how issues in a new case will be decided. It attempts to explain away counterexamples inconsistent with a hypothesis, while apprising users of the counterexamples and making explanatory arguments based on them.
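
The abstract's description of Issue-Based Prediction, testing an outcome hypothesis against retrieved precedents and trying to explain away counterexamples, can be made concrete with a toy sketch. Everything below (the factor-based case representation, the data, and the deliberately crude "explain away" test) is invented for illustration; the article's actual algorithm is considerably richer.

```python
# Toy sketch of hypothesis-testing prediction over precedents, loosely inspired
# by the issue-based approach the abstract describes. Not the IBP algorithm
# itself: cases, factors, and the distinguishing test are hypothetical.

from collections import Counter

# Hypothetical precedents: (name, factors present, winner on the issue)
precedents = [
    ("Case A", {"security-measures", "disclosure"}, "defendant"),
    ("Case B", {"security-measures"},               "plaintiff"),
    ("Case C", {"security-measures"},               "plaintiff"),
    ("Case D", {"security-measures", "disclosure"}, "defendant"),
]

def predict_issue(new_factors):
    # Retrieve precedents sharing at least one factor with the new case.
    relevant = [p for p in precedents if p[1] & new_factors]
    if not relevant:
        return "abstain", []
    tally = Counter(winner for _, _, winner in relevant)
    hypothesis, _ = tally.most_common(1)[0]
    # Counterexamples: relevant precedents decided against the hypothesis.
    counterexamples = [p for p in relevant if p[2] != hypothesis]
    # Try to "explain away" each counterexample: here, by checking whether it
    # can be distinguished on a factor the new case lacks (a crude stand-in).
    unexplained = [p for p in counterexamples if p[1] <= new_factors]
    if unexplained:
        # Hypothesis not safely confirmed; apprise the user of the cases.
        return "abstain", unexplained
    return hypothesis, counterexamples

outcome, cited = predict_issue({"security-measures"})
print(outcome, [name for name, _, _ in cited])
```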

Chagal-Feferkorn & Elkin-Koren on LEX AI: Revisiting Private Ordering by Design

Karni Chagal-Feferkorn (University of Ottawa Common Law Section) and Niva Elkin-Koren (Tel-Aviv University – Faculty of Law) have posted “LEX AI: Revisiting Private Ordering by Design” (Berkeley Technology Law Journal, Vol. 36) on SSRN. Here is the abstract:

In his seminal paper from 1997, Professor Joel R. Reidenberg articulated a novel governance strategy known as “Lex-Informatica.” Under the principles of Lex-Informatica, norms are no longer shaped by leaders, legislators, or judges, but rather by technological capabilities and design choices that grant users the flexibility to shape their own online experience based on their preferences. A quarter century later, a “second generation” of online governance systems has emerged, making use of artificial intelligence: “Lex-AI”.

The literature on governance by AI often focuses on governance of AI, seeking to render AI decision-making more compatible with principles of fairness, due process, and accountability. Scholars have also focused on who is governing behavior by using AI. Missing from these discussions is an inquiry into how norms are generated and enforced through the proliferation of AI. Ultimately, in order to govern AI and fully understand its social implications, we must first ascertain what is lost in translation as we shift to AI in deciding legal matters.

This paper explores how Lex AI governs and the implications of shifting from governance by a set of legal norms to the governance of human behavior and social relations by data-driven algorithms.

We argue that Lex AI is a sui generis type of governance, one which deserves scrutiny by regulators and policymakers. Lex AI bypasses autonomous choice, as it is often based on personalization that is conducted for the user and not by the user. As such, it does not neatly fit the definition of private ordering: the setting of social norms by the parties involved in the regulated activity. When viewed as a distinct type of collective action mediated by algorithms, Lex AI may enable the efficient collection of granular information on the preferences, needs, and interests of members of society, but it also raises new types of challenges. Path dependency, coupled with reduced opportunities for users to signal their true preferences or to take part in deliberation over the applicable norms, may render Lex AI a less efficient and less legitimate form of governance.

Shaping Lex AI to enhance social welfare may require a fresh way of thinking about these challenges and the public interventions that might address them.

Congiu, Sabatino & Sapi on The Impact of Privacy Regulation on Web Traffic: Evidence From the GDPR

Raffaele Congiu, Lorien Sabatino, and Geza Sapi (European Commission; University of Düsseldorf) have posted “The Impact of Privacy Regulation on Web Traffic: Evidence From the GDPR” on SSRN. Here is the abstract:

We use traffic data from around 5,000 web domains in Europe and the United States to investigate the effect of the European Union’s General Data Protection Regulation (GDPR) on website visits and user behaviour. We document an overall traffic reduction of approximately 15% in the long run and find a measurable reduction in engagement with websites. Traffic from direct visits, organic search, email marketing, social media links, display ads, and referrals dropped significantly, but paid search traffic – mainly Google search ads – was barely affected. We observe an inverted U-shaped relationship between website size and the change in visits due to privacy regulation: the smallest and largest websites lost visitors, while medium-sized ones were less affected. Our results are consistent with the view that users care about privacy and may defer visits in response to website data handling policies. Privacy regulation can impact market structure and may increase dependence on large advertising service providers. Enforcement matters as well: the effects were amplified considerably in the long run, following the first significant fine, issued eight months after the GDPR’s entry into force.
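
The abstract reports results rather than methods, but a standard way to estimate this kind of effect is a difference-in-differences regression comparing GDPR-exposed and unexposed domains before and after entry into force. The sketch below is a generic illustration on simulated data, not the paper's specification; the panel structure, dates, and the assumed -15% effect are invented.

```python
# Generic difference-in-differences sketch on simulated data, illustrating how
# a GDPR traffic effect of the kind reported might be estimated. NOT the
# paper's specification; all numbers here are made up.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_domains, n_periods = 200, 24  # e.g., a weekly panel around May 2018

rows = []
for d in range(n_domains):
    treated = d < n_domains // 2          # EU-facing domains
    base = rng.normal(10, 1)              # domain fixed effect (log visits)
    for t in range(n_periods):
        post = t >= n_periods // 2        # after GDPR entry into force
        effect = -0.15 if (treated and post) else 0.0  # assumed true effect
        rows.append({
            "log_visits": base + effect + rng.normal(0, 0.1),
            "treated": int(treated),
            "post": int(post),
        })

df = pd.DataFrame(rows)
model = smf.ols("log_visits ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # recovers roughly -0.15
```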

Recommended.

Solove on The Limitations of Privacy Rights

Daniel J. Solove (George Washington University Law School) has posted “The Limitations of Privacy Rights” (98 Notre Dame Law Review, forthcoming 2023) on SSRN. Here is the abstract:

Individual privacy rights are often at the heart of information privacy and data protection laws. The most comprehensive set of rights, from the European Union’s General Data Protection Regulation (GDPR), includes the right to access, right to rectification (correction), right to erasure, right to restriction, right to data portability, right to object, and right to not be subject to automated decisions. Privacy laws around the world include many of these rights in various forms.

In this article, I contend that although rights are an important component of privacy regulation, rights are often asked to do far more work than they are capable of doing. Rights can only give individuals a small amount of power. Ultimately, rights are at most capable of being a supporting actor, a small component of a much larger architecture. I advance three reasons why rights cannot serve as the bulwark of privacy protection. First, rights put too much onus on individuals when many privacy problems are systemic. Second, individuals lack the time and expertise to make difficult decisions about privacy, and rights cannot practically be exercised at scale with the number of organizations that process people’s data. Third, privacy cannot be protected by focusing solely on the atomistic individual. The personal data of many people is interrelated, and people’s decisions about their own data have implications for the privacy of other people.

The main goal of privacy rights is to provide individuals with control over their personal data. However, effective privacy protection involves not just facilitating individual control, but also bringing the collection, processing, and transfer of personal data under control. Privacy rights are not designed to achieve the latter goal, and they fail at the former.

After discussing these overarching reasons why rights are insufficient for the oversized role they currently play in privacy regulation, I discuss the common privacy rights and why each falls short of providing significant privacy protection. For each right, I propose broader structural measures that can achieve its underlying goals in a more systematic, rigorous, and less haphazard way.

Recommended.

Nabilou on Probabilistic Settlement Finality in Proof-of-Work Blockchains: Legal Considerations

Hossein Nabilou (University of Amsterdam, Amsterdam Law School; UNIDROIT) has posted “Probabilistic Settlement Finality in Proof-of-Work Blockchains: Legal Considerations” on SSRN. Here is the abstract:

The concept of settlement finality sits at the heart of any type of commercial transaction, whether the transaction is in physical or electronic form or is mediated by fiat currencies or cryptocurrencies. Transaction finality refers to the exact moment in time when proprietary interests in the object or medium of transaction pass from one party to the counterparty and the obligations of the parties to a transaction are discharged in an unconditional and irrevocable manner, i.e., in a way that cannot be reversed even by subsequent legal defenses or actions against the counterparty. Given the benefits of finality in terms of legal certainty and its potential systemic implications, legal systems throughout the globe have devised mechanisms to determine the exact moment of the finality of a transaction and the settlement of obligations conducted using fiat currencies as a medium of exchange. However, as transactions involving cryptocurrencies fall beyond the scope of such rules, they introduce new challenges in determining the exact moment of finality in on-chain cryptocurrency transactions. This complexity arises because the finality of transactions in cryptocurrencies that rely on proof-of-work (PoW) consensus algorithms is probabilistic. Probabilistic finality makes the determination of the exact moment of operational finality nearly impossible.

After discussing the mechanisms of settlement of contractual obligations in the traditional sale of goods as well as in payment and settlement systems – which, rather than relying on the concept of operational finality, rely upon the concept of legal finality – the paper argues that even in traditional payment and settlement systems the determination of operational settlement finality is nearly impossible. This is because no transaction, even a transaction involving a cash payment, can be operationally deemed irrevocable, as it remains prone to hacks or unwinding by electronic means or mere brute force. The paper suggests that the concept of finality is inherently a legal concept and that, as is the case in conventional finance, the moment of finality in PoW blockchains should rely on the conceptual separation of operational finality from legal finality. However, given the decentralized nature of cryptocurrencies, defining the moment of finality in PoW blockchains, which may require a minimum level of institutional infrastructure and centralization to support the credibility of finality, may face insurmountable challenges.
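
For readers who want to see why PoW finality is only ever probabilistic, the canonical quantification is the double-spend calculation in section 11 of the Bitcoin whitepaper (Nakamoto, 2008): the probability that an attacker controlling a share q of hash power ever reorganizes a transaction buried under z confirmations. The Python rendering below is a generic illustration, not drawn from the paper, which treats the legal rather than the numerical side of finality.

```python
# Sketch of the Bitcoin-whitepaper calculation behind "probabilistic finality":
# the chance an attacker with hash-power share q ever catches up with a
# transaction buried under z confirmations.

from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Nakamoto (2008), section 11: probability of a successful reorg."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins with certainty
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

for z in (0, 1, 6, 12):
    print(f"q=10%, z={z:2d}: P(reorg) = {attacker_success(0.10, z):.6f}")
```

The output makes the paper's premise visible: the probability decays geometrically with z but never reaches zero, so any "moment of finality" must be stipulated legally rather than observed operationally.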

Grotto & Dempsey on Vulnerability Disclosure and Management for AI/ML Systems

AJ Grotto (Stanford University – Freeman Spogli Institute for International Studies) and James Dempsey (University of California, Berkeley – School of Law; Stanford Freeman Spogli) have posted “Vulnerability Disclosure and Management for AI/ML Systems: A Working Paper with Policy Recommendations” on SSRN. Here is the abstract:

Artificial intelligence systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attacks involving evasion, data poisoning, model replication, and the exploitation of traditional software flaws to deceive, manipulate, or compromise them and render them ineffective. Yet too many organizations adopting AI/ML systems are oblivious to their vulnerabilities. Applying the cybersecurity policies of vulnerability disclosure and management to AI/ML can heighten appreciation of these technologies’ vulnerabilities in real-world contexts and inform strategies for managing the cybersecurity risk associated with AI/ML systems. Federal policies and programs to improve cybersecurity should expressly address the unique vulnerabilities of AI-based systems, and policies and structures under development for AI governance should expressly include a cybersecurity component.
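
Of the attack classes listed, evasion is the easiest to make concrete. The sketch below implements one canonical evasion attack, the fast gradient sign method (FGSM) of Goodfellow et al. (2014), against a toy logistic-regression model; it is a generic illustration, not an example drawn from the working paper, and the model and data are invented.

```python
# Minimal numpy sketch of a well-known evasion attack, the fast gradient sign
# method (FGSM), against a toy logistic regression classifier.

import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=8), 0.1      # a "trained" toy model's parameters
x = rng.normal(size=8)              # a benign input
y = 1.0                             # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the *input* x:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: step in the direction that increases the loss, bounded by epsilon.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
# A small, bounded perturbation can flip the model's decision -- the kind of
# vulnerability that traditional patch-oriented disclosure does not capture.
```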

Mika & Thelisson on Application of Swiss Private Law by AI

Grzegorz P. Mika (AI Transparency Institute) and Eva Thelisson (same) have posted “Application of Swiss Private Law by AI” on SSRN. Here is the abstract:

The current debate on the deployment of Artificial Intelligence (AI) in the judicial process raises the question of how humans apply the law and whether AI can be of use in this process. Like other legal systems, Swiss private law provides explicit rules and guiding prescriptions for its own application, which judges must apply in the decision-making process. These rules, for instance, mandate giving effect to the meaning, as opposed to the mere wording, of any statute. They also prescribe how to rule in the absence of a statute. They command good faith as well as equity to construe, and sometimes to limit or deny, the rights and duties at stake. Good faith in particular governs the rules and principles for interpreting contracts and other expressions of intent in private law. Equity is meant to serve as guidance in applying openly or broadly formulated statutes. AI would also have to observe these rules and principles of application of the law. This article assesses whether AI systems could comply with these rules of judicial ruling.