Paul on The Politics of Regulating Artificial Intelligence Technologies

Regine Paul (U Bergen) has posted “The Politics of Regulating Artificial Intelligence Technologies: A Competition State Perspective” (Handbook on Public Policy and Artificial Intelligence, edited by Regine Paul, Emma Carmel and Jennifer Cobbe (Elgar, forthcoming)) on SSRN. Here is the abstract:

This chapter introduces and critically evaluates alternative conceptualizations of public regulation of artificial intelligence technologies (AITs) in what is still a nascent field of research. As is often the case in new regulatory domains, there is a tendency both to re-invent the wheel, by disregarding insights from neighboring policy domains (e.g. nanotechnology or aviation), and to create research silos, by failing to link up and systematize existing accounts within the wider context of regulatory scholarship. The aim of this chapter is to counter both tendencies: first, by offering a systematic review of existing social science publications on AIT regulation; second, by situating this review in the larger research landscape on (technology) regulation. This opens space for problematizing the relative dominance of narrow and rather apolitical concepts of AI regulation in parts of the literature so far. In line with the aims of this Handbook (Paul 2022), I outline a critical political economy perspective that helps expose the politics of regulating AITs beyond applied ethics or “rational” risk-based interventions. Throughout the chapter, I use illustrative examples from my own primary research (documents and semi-structured expert interviews) on how the EU Commission narrates and seeks to enact its proposed AI Act.

Baker & Shortland on The Government Behind Insurance Governance: Lessons for Ransomware

Tom Baker (University of Pennsylvania Carey Law School) and Anja Shortland (King’s College London) have posted “The Government Behind Insurance Governance: Lessons for Ransomware” (Regulation & Governance, forthcoming) on SSRN. Here is the abstract:

The insurance as governance literature focuses on the ability of private enterprises to collectively regulate, pool, and distribute risks. This paper analyzes how governments support insurance markets to maintain insurability and limit risks to society. We propose a new conceptual framework grouping government interventions into three dimensions: regulation of risky activity, public investment in risk reduction, and co-insurance. We apply this framework to six case studies, describing insurance markets’ reliance on public support in more analytically precise terms. We analyze how mature insurance markets overcame insurability challenges akin to those currently presented by extortive cybercrime. Private governance struggled when markets grew too big for informal coordination or when (tail) risks escalated. Government interventions vary widely. Some governments prioritize supporting economic activity while others concentrate on containing risks. Governments also choose between risk reduction and ex post socialization of losses. We apply these insights to the market for ransomware insurance, discussing the merits and potential hazards of current proposals for government intervention.

Pisanelli on Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring

Elena Pisanelli (European University Institute) has posted “A New Turning Point for Women: Artificial Intelligence as a Tool for Reducing Gender Discrimination in Hiring” on SSRN. Here is the abstract:

This paper studies whether firms’ adoption of AI has a causal effect on their probability of hiring female managers, using data on the 500 largest firms by revenue in Europe and the US and a staggered difference-in-differences approach. Despite the concerns about AI fairness raised in the existing literature, I find that firms’ use of AI causes, on average, a relative increase of 40% in the hiring of female managers. This result is best explained by one specific type of AI, assessment software. I show that the use of such software is correlated with a reduction in firms being sued for gender discrimination in hiring.
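
To make the empirical design concrete, here is a minimal sketch of a staggered difference-in-differences estimate using two-way fixed effects, in the spirit of the approach the abstract describes. Everything below is an assumption for illustration: the data file, column names, and specification are hypothetical, not Pisanelli’s data or code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: one row per firm per year, with an
# indicator for whether the firm has adopted AI by that year and the
# share of newly hired managers who are women.
df = pd.read_csv("firm_panel.csv")  # columns: firm, year, ai_adopted, female_mgr_share

# Two-way fixed effects regression: firm and year fixed effects absorb
# time-invariant firm characteristics and common shocks. With staggered
# adoption, the coefficient on ai_adopted is the simple TWFE DiD estimate
# (modern staggered-DiD estimators refine this when treatment effects
# vary with adoption timing).
model = smf.ols(
    "female_mgr_share ~ ai_adopted + C(firm) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})

print(model.params["ai_adopted"])  # estimated effect of AI adoption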

Goldman on Assuming Good Faith Online

Eric Goldman (Santa Clara University – School of Law) has posted “Assuming Good Faith Online” (30 Catholic U.J.L. & Tech., forthcoming) on SSRN. Here is the abstract:

Every internet service enabling user-generated content faces a dilemma of balancing good-faith and bad-faith activity. Without that balance, the service loses one of the internet’s signature features—users’ ability to engage with and learn from each other in pro-social and self-actualizing ways—and instead drives towards one of two suboptimal outcomes. Either it devolves into a cesspool of bad-faith activity or becomes a restrictive locked-down environment with limited expressive options for any user, even well-intentioned ones.

Striking this balance is one of the hardest challenges that internet services must navigate, and yet U.S. regulatory policy currently lets services prioritize the best interests of their audiences rather than regulators’ paranoia about bad-faith actors. However, that regulatory deference is in constant jeopardy. Should it change, it will hurt the internet—and all of us.

Porat on Behavior-Based Price Discrimination and Consumer Protection in the Age of Algorithms

Haggai Porat (Harvard Law School; Tel Aviv University School of Economics) has posted “Behavior-Based Price Discrimination and Consumer Protection in the Age of Algorithms” on SSRN. Here is the abstract:

The legal literature on price discrimination focuses primarily on consumers’ immutable features, as when higher interest rates are offered to Black borrowers and higher prices to women at car dealerships. This paper examines a different type of discriminatory pricing practice: behavior-based pricing (BBP), where prices are set based on consumers’ behavior, most prominently their prior purchasing decisions. The increased use of artificial intelligence and machine learning algorithms to set prices has facilitated the growing penetration of BBP in various markets. Unlike race-based and sex-based discrimination, with BBP, consumers can strategically adjust their behavior to influence the prices they will be offered in the future. Sellers, in turn, can adjust prices in early periods to influence consumers’ purchasing decisions so as to increase the informational value of these decisions and thereby maximize profits. This paper analyzes possible legal responses to BBP and arrives at three surprising policy implications. First, when non-BBP discrimination is efficient but has potentially problematic distributional implications, BBP can either increase or decrease efficiency. Second, even if BBP is desirable, mandating its disclosure may reduce overall welfare even though disclosure would reduce informational asymmetry in the market. Third, a right to be forgotten (a right to erasure) may be desirable even though it increases informational asymmetry.
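
To see the BBP mechanism concretely, consider a stylized two-period monopoly example of the kind standard in the behavior-based pricing literature; this illustration is added here for exposition and is not drawn from the paper. A consumer’s valuation v is uniform on [0, 1]. In period 1 the seller posts price p_1; purchasing reveals v ≥ p_1, while not purchasing reveals v < p_1. Facing a myopic consumer, the seller’s optimal second-period prices condition on that history:

\[
p_2^{B} = \arg\max_{p}\; p \Pr(v \ge p \mid v \ge p_1) = \max\{p_1, \tfrac{1}{2}\},
\qquad
p_2^{N} = \arg\max_{p}\; p \Pr(v \ge p \mid v < p_1) = \frac{p_1}{2}.
\]

Past buyers are charged more than non-buyers, so a strategic consumer may delay a profitable period-1 purchase to secure the later discount, and the seller in turn distorts p_1 to manage what the first-period decision reveals. This feedback between behavior, information, and prices is the informational asymmetry the abstract’s disclosure and right-to-erasure implications turn on.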

Gunkel on Should Robots Have Standing?

David J. Gunkel (Northern Illinois University) has posted “Should Robots Have Standing? From Robot Rights to Robot Rites” (Frontiers in Artificial Intelligence and Applications, IOS Press, forthcoming) on SSRN. Here is the abstract:

“Robot” designates something that does not quite fit the standard way of organizing beings into the mutually exclusive categories of “person” or “thing.” The figure of the robot interrupts this fundamental organizing schema, resisting efforts at both reification and personification. Consequently, what is seen reflected in the face or faceplate of the robot is the fact that the existing moral and legal ontology—the way that we make sense of and organize our world—is already broken or at least straining against its own limitations. What is needed in response to this problem is a significantly reformulated moral and legal ontology that can scale to the unique challenges of the 21st century and beyond.

Ranchordas on Smart Cities, Artificial Intelligence and Public Law

Sofia Ranchordas (U Groningen Law; LUISS) has posted “Smart Cities, Artificial Intelligence and Public Law: An Unchained Melody” on SSRN. Here is the abstract:

Governments and citizens are by definition in an unequal relationship. Public law has sought to address this power asymmetry with different legal principles and instruments. However, in the context of smart cities, the inequality between public authorities and citizens is growing, particularly for vulnerable citizens. This paper explains this phenomenon in light of the dissonance between the rationale, principles and instruments of public law, on the one hand, and the practical implementation of AI in smart cities, on the other. It argues, first, that public law overlooks the fact that smart cities are complex phenomena that pose novel and distinct legal problems. Smart cities are strategies, products, narratives, and processes that reshape the relationship between governments and citizens, often excluding citizens who are not deemed ‘smart’. Second, smart urban solutions tend to be primarily predictive, seeking to anticipate, for example, crime, traffic congestion or pollution. By contrast, public law principles and tools remain reactive or responsive, failing to regulate potential harms caused by predictive systems. In addition, public law remains focused on the need to constrain human discretion and individual flaws rather than systemic errors and datafication systems that place citizens into novel categories. This paper discusses the dissonance between public law and smart urban solutions, presenting the smart city as a corporate narrative which, in its attempts to optimise citizenship, inevitably excludes thousands of citizens.