Gipson Rankin on Atuahene’s “Stategraft” and the Implications of Unregulated Artificial Intelligence

Sonia Gipson Rankin (University of New Mexico – School of Law) has posted “The MiDAS Touch: Atuahene’s ‘Stategraft’ and the Implications of Unregulated Artificial Intelligence” on SSRN. Here is the abstract:

Professor Bernadette Atuahene’s article, Corruption 2.0, develops the new theoretical conception of “stategraft,” which provides a term for a disturbing practice by state agents. Professor Atuahene observes that when state agents transfer property from persons to the state in violation of the state’s own laws or basic human rights, the practice sits at the intersection of illegal behavior and public profit. Although these measures can be quantified in many other examples of state corruption, the criminality of state practice often goes undetected, and it is compounded when the state uses artificial intelligence to illegally extract resources from people. This essay applies stategraft to an algorithm implemented in Michigan that falsely accused unemployment benefit recipients of fraud and illegally took their resources.

The software, the Michigan Integrated Data Automated System (“MiDAS”), was designed to detect unemployment fraud and automatically charged people with misrepresentation. The agency erroneously charged over 37,000 people, seizing their tax refunds and garnishing their wages. It took years for the state to repay them, often only after disastrous fallout from years spent trying to clear their records and reclaim their money.

This essay examines the MiDAS situation using the elements of Atuahene’s stategraft as its framework. It shows how Michigan violated its own state laws and basic human rights, and how this unfettered use of artificial intelligence can be seen as a corrupt state practice.

Kaminski on Regulating the Risks of AI

Margot Kaminski (U Colorado Law School; Yale ISP; Silicon Flatirons Center for Law, Technology, and Entrepreneurship) has posted “Regulating the Risks of AI” (Boston University Law Review, Vol. 103, forthcoming 2023) on SSRN. Here is the abstract:

Companies and governments now use Artificial Intelligence (AI) in a wide range of settings. But using AI leads to well-known risks—that is, not yet realized but potentially catastrophic future harms that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems.

This Article observes that constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Moreover, there are at least four models for risk regulation, each with divergent goals and methods. Emerging conflicts over AI risk regulation illustrate the tensions that arise when regulators employ one model of risk regulation while stakeholders call for another.

This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulties of non-quantifiable harms, and the dearth of mechanisms for public or stakeholder input).

Selbst & Barocas on Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law

Andrew D. Selbst (UCLA School of Law) and Solon Barocas (Microsoft Research; Cornell University) have posted “Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law” (171 University of Pennsylvania Law Review, forthcoming). Here is the abstract:

The Federal Trade Commission has indicated that it intends to regulate discriminatory AI products and services. This is a welcome development, but its true significance has not been appreciated to date. This Article argues that the FTC’s flexible authority to regulate ‘unfair or deceptive acts or practices’ offers several distinct advantages over traditional discrimination law when applied to AI. The Commission can reach a wider range of commercial domains, a larger set of possible actors, a more diverse set of harms, and a broader set of business practices than are currently covered or recognized by discrimination law. For example, while most discrimination laws can address neither vendors that sell discriminatory software to decision-makers nor consumer products that work less well for certain demographic groups than others, the Commission could address both. The Commission’s investigative and enforcement powers can also overcome many of the practical and legal challenges that have limited plaintiffs’ ability to successfully seek remedies under discrimination law. The Article demonstrates that the FTC has the existing authority to address the harms of discriminatory AI and offers a method for the Commission to tackle the problem, based on its existing approach to data security.

Gursoy, Kennedy & Kakadiaris on A Critical Assessment of the Algorithmic Accountability Act of 2022

Furkan Gursoy (University of Houston), Ryan Kennedy (same), and Ioannis Kakadiaris (same) have posted “A Critical Assessment of the Algorithmic Accountability Act of 2022” on SSRN. Here is the abstract:

On February 3, 2022, a group of US lawmakers introduced the Algorithmic Accountability Act of 2022. The legislation proposed by Democrats requires sizable entities to conduct impact assessments for automated decision systems deployed for a set of critical decisions. The bill comes at an opportune moment because algorithms have an increasingly substantial influence on human lives, and topics such as transparency, explainability, fairness, privacy, and security are receiving growing attention from researchers and practitioners. This article examines the bill by (i) developing a critical summary and (ii) identifying and presenting ten ambiguities and potential shortcomings for debate. This study paves the way for further discussions to shape algorithmic accountability regulations.

Grafenstein on the Various Draft Data Acts of the EU

Max Grafenstein (Humboldt Institute for Internet and Society, Berlin University of the Arts) has posted “Reconciling Conflicting Interests in Data through Data Governance. An Analytical Framework (and a Brief Discussion of the Data Governance Act Draft, the Data Act Draft, the AI Regulation Draft, as well as the GDPR)” on SSRN. Here is the abstract:

In the current European debate on how to tap the potential of data-driven innovation, data governance is seen to play a key role. However, if one tries to understand what the discussants actually mean by the term data governance, one quickly gets lost in a semantic labyrinth with abrupt dead ends: either the concrete meaning remains unclear or, when an explicit definition is given, it hardly describes the challenges that this article considers essential, at least within the highly regulated EU Single Market. The terminological and conceptual ambiguity makes it difficult to adequately describe certain challenges for data governance and to compare corresponding solution mechanisms in terms of their conditions for success. This article therefore critically examines and further develops elements of data governance concepts currently discussed in the Information Systems literature to better capture challenges for data governance, with particular respect to data-driven innovation and conflicting interests, especially those protected by legal rights. To reach this aim, the article elaborates a refined data governance framework that reflects practical experience and theoretical considerations, particularly from the fields of data protection and the regulation of innovation. Against this background, the outlook briefly assesses the most relevant current draft laws of the EU Commission, namely the Data Governance Act, the Data Act, and the AI Regulation (especially the last, in relation to the General Data Protection Regulation).

Malgieri & Pasquale on Ex Ante Accountability for AI

Gianclaudio Malgieri (EDHEC; Vrije Universiteit Brussel Law) and Frank A. Pasquale (Brooklyn Law School) have posted “From Transparency to Justification: Toward Ex Ante Accountability for AI” on SSRN. Here is the abstract:

At present, policymakers tend to presume that AI used by firms is legal, and only investigate and regulate when there is suspicion of wrongdoing. What if the presumption were flipped? That is, what if a firm had to demonstrate that its AI met clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before it was deployed? This paper proposes a system of “unlawfulness by default” for AI systems, an ex ante model in which some AI developers bear the burden of proving that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes. The EU’s GDPR and proposed AI Act tend toward a sustainable environment of AI systems. However, they are still too lenient, and the sanction for non-conformity with the Regulation is monetary, not a prohibition. This paper proposes a pre-approval model in which some AI developers, before launching their systems onto the market, must perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that these systems are high-risk, an approval request (to a strict regulatory authority, like a Data Protection Agency) should follow. In other words, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proof to justify why the AI is not illegitimate (and thus not unfair, not discriminatory, and not inaccurate). Such a standard may not seem administrable now, given the widespread and rapid use of AI at firms of all sizes. But such requirements could be applied, at first, to the largest firms’ most troubling practices, and only gradually (if at all) to smaller firms and less menacing practices.

Stepanian on European Artificial Intelligence Act: Should Russia Implement the Same?

Armen Stepanian (Moscow State Law Academy) has posted “European Artificial Intelligence Act: Should Russia Implement the Same?” (8 Kutafin Law Review 2022) on SSRN. Here is the abstract:

The proposal for a European Union Regulation establishing harmonized rules for artificial intelligence (the Artificial Intelligence Act) is under consideration. The structure and features of this proposed regulatory legal act of the integration organization are analyzed. The EU AI Act’s scope is analyzed and shown to be wider than its current Russian counterpart. The act will contain harmonized rules for placing AI systems on the market and for their operation and use; bans on certain artificial intelligence practices; special requirements for high-risk AI systems and obligations for the operators of such systems; harmonized transparency rules for AI systems designed to interact with individuals, emotion recognition systems, biometric categorization systems, and AI systems used to create or manage image, audio, or video content; and market surveillance and supervision rules. The provisions of the Act and the features of the proposed institutions and norms are considered, including extraterritoriality (which, as with the GDPR before it, raised many questions), the risk-oriented approach (based both on self-certification and on definite criteria for high-risk systems), object, scope, and definitions. Key concerns, grounded in case law, about countering possible discrimination are expressed. The author draws conclusions about the advisability of applying (or not applying) these institutions and rules in Russia.

Yap & Lim on A Legal Framework for Artificial Intelligence Fairness Reporting

Jia Qing Yap (National University of Singapore – Faculty of Law) and Ernest Lim (same) have posted “A Legal Framework for Artificial Intelligence Fairness Reporting” (81 Cambridge Law Journal, Forthcoming 2022) on SSRN. Here is the abstract:

A clear understanding of artificial intelligence (AI) usage risks and of how they are being addressed is needed, which requires proper and adequate corporate disclosure. We advance a legal framework for AI Fairness Reporting to which companies can and should adhere on a comply-or-explain basis. We analyse the sources of unfairness arising from different aspects of AI models and from disparities in the performance of machine learning systems. We evaluate how the machine learning literature has sought to address the problem of unfairness through the use of different fairness metrics. We then put forward a nuanced and viable framework for AI Fairness Reporting comprising: (a) disclosure of all machine learning model usage; (b) disclosure of the fairness metrics used and the ensuing trade-offs; (c) disclosure of the de-biasing methods used; and (d) release of datasets for public inspection or for third-party audit. We then apply this reporting framework to two case studies.
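By way of illustration only (this example is not drawn from the paper), the kinds of fairness metrics the authors would have companies disclose can be computed directly from a model’s predictions. The Python sketch below uses hypothetical decisions, a hypothetical protected-attribute column, and assumed group labels to compute a demographic parity difference and equalized-odds gaps of the sort a fairness report might include.

import numpy as np

# Illustrative sketch only; the data, group labels, and choice of metrics are
# hypothetical assumptions, not part of Yap & Lim's reporting framework itself.

def demographic_parity_difference(y_pred, group):
    # Gap in positive-prediction (selection) rates across groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gaps(y_true, y_pred, group):
    # Gaps in true-positive and false-positive rates across groups.
    tprs, fprs = [], []
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        tprs.append(yp[yt == 1].mean())  # true positive rate for group g
        fprs.append(yp[yt == 0].mean())  # false positive rate for group g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Hypothetical decisions: 1 = favourable outcome; "A"/"B" is a protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))  # selection-rate gap
print(equalized_odds_gaps(y_true, y_pred, group))    # (TPR gap, FPR gap)

Disclosing such figures alongside the trade-offs between them (item (b) of the proposed framework) would let readers see, for example, that narrowing the selection-rate gap can widen the error-rate gaps, which is the kind of tension the authors argue should be reported rather than left opaque.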

Joh on The Unexpected Consequences of Automation in Policing

Elizabeth E. Joh (UC Davis – School of Law) has posted “The Unexpected Consequences of Automation in Policing” (SMU Law Review, forthcoming 2022) on SSRN. Here is the abstract:

This essay has two aims. First, it explains how automated decisionmaking can produce unexpected results. This is a problem long understood in the field of industrial organization. To identify such effects in policing is no easy task. The police are a notoriously difficult institution to study. They are insular, dislike outsiders, and especially dislike critical outsiders. Fortunately, we have the benefit of a decade’s worth of experimentation in the police use of automated decisionmaking, and the resulting political backlash against some of these uses. As a result, some large urban police departments have undergone external investigations to see whether tools like predictive policing or individual criminal risk assessments are biased or ineffective or simply too costly in light of their benefits. One of these recent reports, on the use of acoustic gunshot detection software in Chicago, provides a window into one type of police automation.

This leads to the article’s second observation. Automation is not just a set of tools that the police use; it changes the environment of policing in unexpected ways. There are now some widely known criticisms of the increasing use of automated tools in policing, but they focus primarily on the flaws of the technologies used. The training data in facial recognition algorithms may be biased along lines of race, gender, and ethnicity. Risk assessments for gun violence may in truth be poor guides for police intervention. These claims are singularly technology-focused. Accordingly, errors and inefficiencies merit technological improvements. Even calls for bans on technologies like facial recognition are responses to the technology itself. As Chicago’s experience with acoustic gunshot detection technology demonstrates, however, automation serves not just as a tool for the police but also leads to changes in police behavior. These changes in police conduct are documented in a 2021 report from the Chicago Office of Inspector General. And they are noteworthy. If automation unexpectedly changes police behaviors, these changes have implications for how we understand policing through the lens of inequality and unaccountability.

Mazzini & Scalzo on The Proposal for the Artificial Intelligence Act: Considerations around Some Key Concepts

Gabriele Mazzini (European Commission) and Salvatore Scalzo (same) have posted “The Proposal for the Artificial Intelligence Act: Considerations around Some Key Concepts” on SSRN. Here is the abstract:

The proposal for the Artificial Intelligence (“AI”) Act has broken new ground in many respects. Most visibly, the proposal introduces the first comprehensive draft regulatory framework for AI in the EU and, for the time being, on a global level. In addition, the proposal contains several innovative approaches linked to the specificities of its subject matter and to the fact that it has to interact as smoothly as possible with a very wide range of existing legal frameworks in the EU.

A number of important choices were therefore made to ensure that the AI Act could meet quite unprecedented challenges. The paper aims to briefly outline some of those choices, in the hope of facilitating understanding of the overall logic of the proposal, and it is structured as follows.

After some introductory statements, section II explains the classification of AI systems as products. Section III delves into the essential features of the so-called New Legislative Framework (NLF), a well-known and well-tested type of EU legislation that constitutes the fundamental regulatory model of the AI Act. This section also highlights certain adaptations made to the NLF tools in order to take into account certain specificities of AI systems. Having clarified the philosophy behind the AI Act and its core architecture, section IV briefly discusses how that architecture has been shaped by a number of important points of contact (at times real “interlocks”) between the AI Act and other existing or proposed EU legal acts beyond the realm of NLF product legislation.