Azzutti on AI-Driven Market Manipulation and Limits of the EU Law Enforcement Regime to Credible Deterrence

Alessio Azzutti (Institute of Law & Economics – University of Hamburg) has posted “AI-Driven Market Manipulation and Limits of the EU Law Enforcement Regime to Credible Deterrence” on SSRN. Here is the abstract:

As in many other sectors of EU economies, ‘artificial intelligence’ (AI) has entered the scene of the financial services industry as a game-changer. Trading on capital markets is undoubtedly one of the most promising AI application domains. A growing number of financial market players have in fact been adopting AI tools as part of their algorithmic trading operations. While AI trading is expected to deliver several efficiency gains, it can also bring unprecedented risks due to the technical specificities and related additional uncertainties of particular ‘machine learning’ methods.
With a focus on new and emerging risks of AI-driven market manipulation, this study critically assesses the ability of the EU anti-manipulation law and enforcement regime to achieve credible deterrence. It argues that AI trading is currently left operating within a (quasi-)lawless market environment with the ultimate risk of jeopardising EU capital markets’ integrity and stability. It shows how ‘deterrence theory’ can serve as a normative framework to think of innovative solutions for fixing the many shortcomings of the current EU legal framework in the fight against AI-driven market manipulation.
In conclusion, this study suggests a number of policy proposals to improve the existing EU anti-manipulation law and enforcement regime: namely, (i) an improved, ‘harm-centric’ definition of manipulation; (ii) an improved, ‘multi-layered’ liability regime for AI-driven manipulation; and (iii) a novel, ‘hybrid’ public-private enforcement institutional architecture through the introduction of market manipulation ‘bounty-hunters’.

Tang on Privatizing Copyright

Xiyin Tang (UCLA School of Law; Yale Law School) has posted “Privatizing Copyright” (Michigan Law Review, Forthcoming) on SSRN. Here is the abstract:

Much has been written, and much is understood, about how and why digital platforms regulate free expression on the Internet. Much less has been written, and even less is understood, about how and why digital platforms regulate creative expression on the Internet: expression that makes use of others’ copyrighted content. While § 512 of the Digital Millennium Copyright Act regulates user-generated content incorporating copyrighted works, just as § 230 of the Communications Decency Act regulates other user speech on the Internet, it is in fact rarely used by the largest Internet platforms, Facebook and YouTube. Instead, as this Article details, creative speech on those platforms is governed by a series of highly confidential licensing agreements entered into with large copyright holders.

Yet despite the dominance of private contracting in ordering how millions of pieces of digital content are made and distributed on a daily basis, little is known, and far less has been written, about just what the new rules governing creative expression are. This is, of course, by design: these license agreements contain strict confidentiality clauses that prohibit public disclosure. This Article, however, pieces together clues from publicly available court filings, news reporting, and leaked documents. The picture it reveals is a world where the substantive law of copyright is being quietly rewritten: by removing the First Amendment safeguard of fair use; by inserting a new moral right for works that Congress had deemed, in the Copyright Act, ineligible for moral rights protection; and, through other small provisions in the numerous agreements digital platforms negotiate with rightsholders, by influencing and reshaping administrative, common, and statutory copyright law. Further still, recent and lobbied-for changes to copyright’s statutory law seek to either enshrine the primacy of such private contracting or altogether remove copyright rule-making processes from government oversight, shielding copyright’s public law from independent considerations of public policy and public scrutiny.

Changing copyright’s public law to enshrine the primacy of such private ordering insulates the new rules of copyright from the democratic process, from public participation in, and from public oversight of, the laws that shape our daily lives. Creative expression on the Internet now finds itself at a curious precipice: a seeming glut of low-cost, or free, content, much of which is created directly by, and distributed to, users—yet increasingly regulated by an opaque network of rules created by a select few private parties. An understanding of the Internet’s democratizing potential for creativity is incomplete without a concomitant understanding of how the new private rules of copyright may shape, and harm, that creativity.

Aguiar et al. on Facebook Shadow Profiles

Luis Aguiar (University of Zurich – Department of Business Administration) et al. have posted “Facebook Shadow Profiles” on SSRN. Here is the abstract:

Data is often at the core of digital products and services, especially when related to online advertising. This has made data protection and privacy a major policy concern. When surfing the web, consumers leave digital traces that can be used to build user profiles and infer preferences. We quantify the extent to which Facebook can track web behavior outside of its own platform. The network of engagement buttons, placed on third-party websites, lets Facebook follow users as they browse the web. Tracking users outside its core platform enables Facebook to build shadow profiles. For a representative sample of US internet users, 52 percent of websites visited, accounting for 40 percent of browsing time, employ Facebook’s tracking technology. Small differences between Facebook users and non-users are largely explained by differing user activity. The extent of shadow profiling Facebook may engage in is similar on privacy-sensitive domains and across user demographics, documenting the possibility of indiscriminate tracking.

Ashley & Brüninghaus on Computer Models for Legal Prediction

Kevin Ashley (University of Pittsburgh – School of Law) and Stefanie Brüninghaus (same) have posted “Computer Models for Legal Prediction” (Jurimetrics, Vol. 46, p. 309, 2006) on SSRN. Here is the abstract:

Computerized algorithms for predicting the outcomes of legal problems can extract and present information from particular databases of cases to guide the legal analysis of new problems. They can have practical value despite the limitations that make reliance on predictions risky for other real-world purposes, such as estimating settlement values. An algorithm’s ability to generate reasonable legal arguments is also important. In this article, computerized prediction algorithms are compared not only in terms of accuracy, but also in terms of their ability to explain predictions and to integrate predictions and arguments. Our approach, the Issue-Based Prediction algorithm, is a program that tests hypotheses about how issues in a new case will be decided. It attempts to explain away counterexamples inconsistent with a hypothesis, while apprising users of the counterexamples and making explanatory arguments based on them.