Chaffee on Schrödinger’s Hacker: Insider Trading and Data Breaches

Eric C. Chaffee (University of Toledo – College of Law) has posted “Schrödinger’s Hacker: Insider Trading and Data Breaches” (Tennessee Journal of Law and Policy, Vol. 15, No. 1 (Autumn 2020)) on SSRN. Here is the abstract:

The current legal framework governing insider trading is a rich fabric of interwoven stories constructed on a loom of law and regulation. Despite securities law at times gaining a reputation for being cumbersome and onerous, the stories underlying insider trading regulation are usually vibrant and engaging.

The current story regarding the intersection of hacking and insider trading is that of Oleksandr Dorozhko, a Ukrainian hacker held liable for violating section 10(b) and Rule 10b-5. Importantly, because that story ended in an unopposed motion for summary judgment against Dorozhko, the law remains unclear as to whether a hacker who merely exploits a weakness in software to obtain material nonpublic information has violated section 10(b) and Rule 10b-5.

Hacking has become a ubiquitous concern in our society. Oleksandr Dorozhko’s tale illustrates that federal securities regulation is currently inadequate to deal with the securities issues associated with hacking, which poses a threat to the stability of securities markets in the United States. This Essay offers a proposed rule to remedy this problem that gives the SEC the ability to clarify how insider trading regulation can and should address hacking.

Kop on Why Democratic Countries Should Form a Strategic Tech Alliance

Mauritz Kop (Stanford Law School; AIRecht) has posted “Democratic Countries Should Form a Strategic Tech Alliance” (Stanford – Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2021) on SSRN. Here is the abstract:

China’s relentless advance in Artificial Intelligence (AI) and quantum computing has engendered a significant amount of anxiety about the future of America’s technological supremacy. The resulting debate centers on the impact of China’s digital rise on the economy, security, employment, and the profitability of American companies. Absent in these predominantly economic disquiets is what should be a deeper, existential concern: What are the effects of authoritarian regimes exporting their values into our society through their technology? This essay addresses this question by examining how democratic countries can or should respond, and what can be done to influence the outcome.

The essay argues that democratic countries should form a global, broadly scoped Strategic Tech Alliance, built on mutual economic interests and common moral, social and legal norms, technological interoperability standards, legal principles and constitutional values. Such an Alliance would be committed to safeguarding democratic norms, as enshrined in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). The US, the EU and its democratic allies should join forces with countries that share our digital DNA, institute fair reciprocal trading conditions, and establish a global responsible technology governance framework that actively pursues democratic freedoms, human rights and the rule of law.

Currently, two dominant tech blocs exist that have incompatible political systems: the US and China. The competition for AI and quantum ascendancy is a battle between ideologies: liberal democracy mixed with free market capitalism versus authoritarianism blended with surveillance capitalism. Europe stands in the middle, championing a legal-ethical approach to tech governance.

The essay discusses the political feasibility of transatlantic cooperation and examines arguments against the formation of a democratic, value-based Strategic Tech Alliance that would set global technology standards. It then weighs the advantages of establishing an Alliance that aims to win the race for democratic technological supremacy against the disadvantages, unintended consequences, and the harms of doing nothing.

Further, the essay approaches the identified challenges in light of the ‘democracy versus authoritarianism’ discussion from other, sociocritical perspectives, and inquires whether we are democratic enough ourselves.

The essay maintains that technology is shaping our everyday lives, and that the way in which we design and utilize our technology influences nearly every aspect of the society we live in. Technology is never neutral. The essay argues that regulating emerging technology is an unending endeavour that follows the lifespan of the technology and its implementation. In addition, it considers how democratic countries should construct regulatory solutions tailored to the exponential pace of sustainable innovation in the Fourth Industrial Revolution (4IR).

The essay concludes that to prevent authoritarianism from gaining ground, governments should do three things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability & conformity standards for key 4IR technologies such as AI, quantum and Virtual Reality (VR), and (3) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.

Eidelson on Patterned Inequality, Compounding Injustice, and Algorithmic Prediction

Benjamin Eidelson (Harvard Law School) has posted “Patterned Inequality, Compounding Injustice, and Algorithmic Prediction” (The American Journal of Law and Equality (Forthcoming)) on SSRN. Here is the abstract:

If whatever counts as merit for some purpose is unevenly distributed, a decision procedure that accurately sorts people on that basis will “pick up” and reproduce the pre-existing pattern in ways that more random, less merit-tracking procedures would not. This dynamic is an important cause for concern about the use of predictive models to allocate goods and opportunities. In this article, I distinguish two different objections that give voice to that concern in different ways. First, decision procedures may contribute to future social injustice and other social ills by sustaining or aggravating patterns that undermine equality of status and opportunity. Second, the same decision procedures may wrong particular individuals by compounding prior injustices that explain those persons’ predicted or actual characteristics. I argue for the importance of the first idea and raise doubts about the second. In normative assessments and legal regulation of algorithmic decisionmaking, as in our thinking about anti-discrimination norms more broadly, a central concern ought to be the prospect of entrenching harmful and unjust patterns—quite apart from any personal wrong done to the individuals about whom predictions are made.