Zingales & Renzetti on Digital Platform Ecosystems and Conglomerate Mergers: A Review of the Brazilian Experience

Nicolo Zingales (Getulio Vargas Foundation (FGV); Tilburg Law and Economics Center (TILEC); Stanford University – Stanford Law School Center for Internet and Society) and Bruno Renzetti (Yale University, Law School; University of Sao Paulo (USP), Faculty of Law (FD)) have posted “Digital Platform Ecosystems and Conglomerate Mergers: A Review of the Brazilian Experience” (World Competition 45 (4) (2022)) on SSRN. Here is the abstract:

This paper highlights some of the key challenges for the Brazilian merger control regime in dealing with mergers involving digital platform ecosystems (DPEs). After a quick introduction to DPEs, we illustrate how the conglomerate effects raised by such mergers remain largely unaddressed in the current landscape for merger control in Brazil. The paper is divided into four sections. First, we introduce the reader to the framework for merger control in Brazil. Second, we identify the possible theories of harm related to conglomerate mergers, and elaborate on the way in which their application may be affected by the context of DPEs. Third, we conduct a review of previous mergers involving DPEs in Brazil, aiming to identify the theories of harm employed (and those that could have been explored) in each case. Fourth and finally, we summarize our results and suggest adaptations to the current regime, advancing proposals for a more consistent and predictable analysis.

Lemert on Facebook’s Corporate Law Paradox

Abby Lemert (Yale Law School) has posted “Facebook’s Corporate Law Paradox” on SSRN. Here is the abstract:

In response to the digital harms created by Facebook’s platforms, lawmakers, the media, and academics repeatedly demand that the company stop putting “profits before people.” But these commentators have consistently overlooked the ways in which Delaware corporate law disincentivizes and even prohibits Facebook’s directors from prioritizing the public interest. Because Facebook experiences the majority of the harms it creates as negative externalities, Delaware’s unflinching commitment to shareholder primacy prevents Facebook’s directors from making unprofitable decisions to redress those harms. Even Facebook’s attempt to delegate decision-making authority to the independent Oversight Board verges on an unlawful abdication of corporate director fiduciary duties. Facebook’s experience casts doubt on the prospects for effective corporate self-regulation of content moderation, and more broadly, on the ability of existing corporate law to incentivize or even allow social media companies to meaningfully redress digital harms.

Jabri on Algorithmic Policing

Ranae Jabri (National Bureau of Economic Research; Duke University) has posted “Algorithmic Policing” on SSRN. Here is the abstract:

Predictive policing algorithms are increasingly used by law enforcement agencies in the United States. These algorithms use past crime data to generate predictive policing boxes, specifically the highest-crime-risk areas where law enforcement is instructed to patrol every shift. I collect a novel dataset on predictive policing box locations, crime incidents, and arrests from a major urban jurisdiction where predictive policing is used. Using institutional features of the predictive policing policy, I isolate quasi-experimental variation to examine the causal impacts of algorithm-induced police presence. I find that algorithm-induced police presence decreases serious property and violent crime. At the same time, I also find disproportionate racial impacts on arrests for serious violent crimes as well as arrests in traffic incidents, i.e., lower-level offenses where police have discretion. These results highlight that using predictive policing to target neighborhoods can generate a tradeoff between crime prevention and equity.

Mökander et al. on The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act

Jakob Mökander (Oxford Internet Institute) et al. have posted “The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?” (Minds and Machines 2022) on SSRN. Here is the abstract:

On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).

Low, Schuster & Wan on The Company and Blockchain Technology

Kelvin F.K. Low (NUS – Faculty of Law), Edmund Schuster (London School of Economics – Law School), and Wai Yee Wan (City University of Hong Kong) have posted “The Company and Blockchain Technology” (Elgar Handbook on Corporate Liability, forthcoming) on SSRN. Here is the abstract:

Blockchain and distributed ledger technology (DLT) has generated much excitement over the past decade, with proclamations that it would disrupt everything from elections to finance. Unsurprisingly, the much-maligned corporate form is also considered ripe for disruption. While the corporate form is certainly imperfect, and is currently serviced by creaking legal infrastructure premised upon direct shareholdings, are its problems ones of centralization/intermediation? What exactly are the limits of DLT? In this chapter, we propose to expose the ignorance behind the hype that the venerable corporation will either be revitalized by DLT or replaced by Decentralised Autonomous Organisations (DAOs). We will demonstrate that proponents of DLT disruption either overestimate the potential of the technology by taking at face value its claims of security without unpacking what said security entails (and what it does not) or lack awareness of the history of and market demand for intermediation as well as the complexities of modern corporations.

Wang & Buckley on The Coming Central Bank Digital Currency Revolution and the E-CNY

Heng Wang (Singapore Management University – Yong Pung How School of Law; University of New South Wales (UNSW) – Faculty of Law & Justice) and Ross P. Buckley (University of New South Wales (UNSW) – Faculty of Law & Justice) have posted “The Coming Central Bank Digital Currency Revolution and the E-CNY” on SSRN. Here is the abstract:

The only central bank money individuals and businesses have today is cash. Everything else they use as money is commercial bank promises. Central bank digital currencies (CBDC) will likely change all this by putting central bank money into everyone’s hands. China is a front runner in this revolution, and its CBDC, the e-CNY, may well in time profoundly affect the international economic order. This article analyses the major considerations around the e-CNY, its ramifications in particular for trade, and its possible challenges.

Tan on Transnational Transactions on Cryptoasset Exchanges: A Conflict of Laws Perspective

Shao Wei Tan (National University of Singapore) has posted “Transnational Transactions on Cryptoasset Exchanges: A Conflict of Laws Perspective” (Singapore Journal of Legal Studies, Sep 2022, pp 384-422) on SSRN. Here is the abstract:

Cryptoassets, now in the mainstream with significant retail and institutional ownership, can be purchased on cryptoasset exchanges online from around the world. Correspondingly, disputes involving transnational cryptoasset transactions – which have already begun to crop up in the US – are likely to become increasingly common in Singapore given its status as a global financial hub. The problem, however, is that there is no global consensus on how to determine the applicable law for transnational transactions on cryptoasset exchanges. This lack of consensus engenders unnecessary uncertainty as to the disputing parties’ rights and obligations, which in turn has significant implications for issuers, potential investors, regulators, and even the entire financial system. Drawing lessons from the shortcomings of existing conflict of laws solutions in other jurisdictions, this article proposes a conflict of laws solution to this problem for the Singapore courts. The solution entails (1) recognising that the problem should be dealt with using a choice-of-law approach, (2) creating a new category of issues, ‘market issues’, under which such issues may be collectively characterised, and (3) choosing only the lex mercatus for issues characterised as market issues.

Paul on The Politics of Regulating Artificial Intelligence Technologies

Regine Paul (University of Bergen) has posted “The Politics of Regulating Artificial Intelligence Technologies: A Competition State Perspective” (Handbook on Public Policy and Artificial Intelligence, edited by Regine Paul, Emma Carmel and Jennifer Cobbe (Elgar, forthcoming)) on SSRN. Here is the abstract:

This chapter introduces and critically evaluates alternative conceptualizations of public regulation of AITs in what is still a nascent field of research. As is often the case in new regulatory domains, there is a tendency both to reinvent the wheel – by disregarding insights from neighboring policy domains (e.g., nanotechnology or aviation) – and to create research silos – by failing to link up and systematize existing accounts in the wider context of regulatory scholarship. The aim of this chapter is to counter both tendencies: first, by offering a systematic review of existing social science publications on AIT regulation; and second, by situating this review in the larger research landscape on (technology) regulation. This makes it possible to problematize the relative dominance of narrow and rather apolitical concepts of AI regulation in parts of the literature so far. In line with the aims of this Handbook (Paul 2022), I outline a critical political economy perspective that helps expose the politics of regulating AITs beyond applied ethics or “rational” risk-based interventions. Throughout the chapter, I use illustrative examples from my own primary research (documents and semi-structured expert interviews) on how the EU Commission narrates and seeks to enact its proposed AI Act.

Baker & Shortland on The Government Behind Insurance Governance: Lessons for Ransomware

Tom Baker (University of Pennsylvania Carey Law School) and Anja Shortland (King’s College, London) have posted “The Government Behind Insurance Governance: Lessons for Ransomware” (Regulation and Governance, forthcoming) on SSRN. Here is the abstract:

The insurance as governance literature focuses on the ability of private enterprises to collectively regulate, pool, and distribute risks. This paper analyzes how governments support insurance markets to maintain insurability and limit risks to society. We propose a new conceptual framework grouping government interventions into three dimensions: regulation of risky activity, public investment in risk reduction, and co-insurance. We apply this framework to six case studies, describing insurance markets’ reliance on public support in more analytically precise terms. We analyze how mature insurance markets overcame insurability challenges akin to those currently presented by extortive cybercrime. Private governance struggled when markets grew too big for informal coordination or when (tail) risks escalated. Government interventions vary widely. Some governments prioritize supporting economic activity while others concentrate on containing risks. Governments also choose between risk reduction and ex post socialization of losses. We apply these insights to the market for ransomware insurance, discussing the merits and potential hazards of current proposals for government intervention.

Pisanelli on Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring

Elena Pisanelli (European University Institute) has posted “A New Turning Point for Women: Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring” on SSRN. Here is the abstract:

This paper studies whether firms’ adoption of AI has a causal effect on their probability of hiring female managers, using data on the 500 largest firms by revenue in Europe and the US and a staggered difference-in-differences approach. Despite the concerns about AI fairness raised in the existing literature, I find that firms’ use of AI causes, on average, a 40% relative increase in the hiring of female managers. This result is best explained by one specific type of AI, assessment software. I show that the use of such software is correlated with a reduction in firms being sued for gender discrimination in hiring.