Alessio Azzutti (Institute of Law & Economics – University of Hamburg), Wolf-Georg Ringe (University of Hamburg – Institute of Law & Economics, University of Oxford – Faculty of Law, European Corporate Governance Institute (ECGI)), and H. Siegfried Stiehl (University of Hamburg – Department of Informatics) have posted “Machine Learning, Market Manipulation and Collusion on Capital Markets: Why the ‘Black Box’ matters” on SSRN. Here is the abstract:
This paper offers a novel perspective on the implications of increasingly autonomous and “black box” algorithms, within the domain of algorithmic trading, for the integrity of capital markets. Artificial intelligence (AI), and particularly its subfield of machine learning (ML), has gained immense popularity among the general public and achieved tremendous success in many real-life applications, delivering vast efficiency gains. In the financial trading domain, ML can augment human capabilities in price prediction, dynamic portfolio optimization, and other financial decision-making tasks. Moreover, thanks to constant progress in ML technology, increasingly capable and autonomous agents, to which operational tasks and even decision-making can be delegated, are now more than mere imagination, opening up the possibility of (truly) autonomous trading agents in the near future.
Given these spectacular developments, this paper argues that such autonomous algorithmic traders may pose significant risks to market integrity, independently of their human developers, owing to the self-learning capabilities offered by state-of-the-art and innovative ML methods. Using the proprietary trading industry as a case study, we explore emerging challenges to the application of established market abuse laws in the event of algorithmic market abuse, taking an interdisciplinary stance across financial regulation, law & economics, and computational finance. Specifically, our analysis focuses on two emerging market abuse risks posed by autonomous algorithms: market manipulation and “tacit” collusion. We assess how likely they are to arise on global capital markets and evaluate the related social harm as forms of market failure.
With these new risks in mind, this paper questions the adequacy of existing regulatory frameworks and enforcement mechanisms, as well as of current legal rules on the governance of algorithmic trading, to cope with increasingly autonomous and ubiquitous algorithmic trading systems. It shows how the “black box” nature of specific ML-powered algorithmic trading strategies can subvert existing market abuse laws, which rest on traditional liability concepts and tests (such as “intent” and “causation”). In conclusion, to address the shortcomings of the present legal framework, we develop a number of guiding principles to assist legal and policy reform in the spirit of promoting and safeguarding market integrity and safety.