Zofia Bednarz (University of New South Wales (UNSW)) & Kayleen Manwaring (UNSW Law & Justice) have posted “Risky Business: Legal Implications of Emerging Technologies Affecting Consumers of Financial Services” on SSRN. Here is the abstract:
Artificial intelligence (AI)-driven Big Data analytics are becoming a core capability for financial institutions, giving rise to promises of profits and increased efficiency for both new FinTech firms and incumbent institutions. This, however, may come at a cost to consumers. This chapter analyses the challenges to the legal and regulatory framework applicable to the provision of financial services to consumers brought about by the use of AI and Big Data tools by financial services firms. We discuss harms to consumers potentially arising in the form of discrimination, privacy breaches, digital manipulation, and financial exclusion, and argue that policymakers and regulators must deliver a fit-for-purpose legal and regulatory framework that allows both financial firms and consumers to reap the benefits of the technological revolution.
Rashida Richardson (Northeastern University School of Law) & Amba Kak (New York University (NYU)) have posted “Suspect Development Systems: Databasing Marginality and Enforcing Discipline” (University of Michigan Journal of Law Reform, Vol. 55, Forthcoming). Here is the abstract:
Algorithmic accountability law — focused on the regulation of data-driven systems like artificial intelligence (AI) or automated decision-making (ADM) tools — is the subject of lively policy debates, heated advocacy, and mainstream media attention. Concerns have moved beyond data protection and individual due process to encompass a broader range of group-level harms, such as discrimination and modes of democratic participation. While this is a welcome and long overdue shift, this discourse has ignored systems, like databases, that are viewed as technically ‘rudimentary’ and often siloed from regulatory scrutiny and public attention. Additionally, burgeoning regulatory proposals like algorithmic impact assessments are not structured to surface important yet often overlooked social, organizational, and political economy contexts that are critical to evaluating the practical functions and outcomes of technological systems.
This article presents a new categorical lens and analytical framework that aims to address and overcome these limitations. “Suspect Development Systems” (SDS) refers to: (1) information technologies used by government and private actors, (2) to manage vague or often immeasurable social risk based on presumed or real social conditions (e.g. violence, corruption, substance abuse), (3) that subject targeted individuals or groups to greater suspicion, differential treatment, and more punitive and exclusionary outcomes. This frame includes some of the most recent and egregious examples of data-driven tools (such as predictive policing or risk assessments) but, critically, it is also inclusive of a broader range of database systems that are currently at the margins of technology policy discourse. By examining the use of various criminal intelligence databases in India, the United Kingdom, and the United States, we develop a framework of five categories of features (technical, legal, political economy, organizational, and social) that together and separately influence how these technologies function in practice, the ways they are used, and the outcomes they produce. We then apply this analytical framework to welfare system databases, universal or ID number databases, and citizenship databases to demonstrate the value of this framework in both identifying and evaluating emergent or under-examined technologies in other sensitive social domains.
Suspect Development Systems is an intervention in legal scholarship and practice, as it provides a much-needed definitional and analytical framework for understanding an ever-evolving ecosystem of technologies embedded and employed in modern governance. Our analysis also helps redirect attention toward important yet often under-examined contexts, conditions, and consequences that are pertinent to the development of meaningful legislative or regulatory interventions in the field of algorithmic accountability. The cross-jurisdictional evidence put forth in this Article illuminates the value of examining commonalities between the Global North and South to inform our understanding of how seemingly disparate technologies and contexts are in fact coaxial, which is the basis for building more global solidarity.
Mirko Forti (Scuola Superiore Sant’Anna di Pisa – School of Law) has posted “The Deployment of Artificial Intelligence Tools in the Health Sector: Privacy Concerns and Regulatory Answers within the GDPR” on SSRN. Here is the abstract:
This article examines the privacy and data protection implications of the deployment of machine learning algorithms in the medical sector. Researchers and physicians are developing advanced algorithms to forecast possible developments of illnesses or disease statuses, basing their analysis on the processing of a wide range of data sets. Predictive medicine aims to maximize the effectiveness of disease treatment by taking into account individual variability in genes, environment, and lifestyle. These kinds of predictions could eventually anticipate a patient’s possible health conditions years, and potentially decades, into the future, and become a vital instrument in the development of diagnostic medicine. However, the current European data protection legal framework may be incompatible with inherent features of artificial intelligence algorithms and their constant need for data and information. This article proposes possible new approaches and normative solutions to this dilemma.
Christine Chambers Goodman (Pepperdine University – Rick J. Caruso School of Law) has posted “Just-AIed: An Essay on Just Applications of Artificial Intelligence in Education” (West Virginia Law Review, Vol. 123, No. 3, 2021) on SSRN. Here is the abstract:
This Essay addresses the intersections between AI and justice in the context of K-12 education. Part II provides some foundational context, defining the scope of AI and some aspects of justice that especially apply to educational access and opportunity. Part III sketches an answer to the question: What should we do to promote AI for justice? Part IV considers the potential pitfalls of augmenting AI in K-12 education. The final part makes a bold call to action.
Andrew D. Selbst (UCLA School of Law) has posted “An Institutional View of Algorithmic Impact Assessments” (35 Harvard Journal of Law & Technology (Forthcoming)) on SSRN. Here is the abstract:
Scholars and advocates have proposed algorithmic impact assessments (AIAs) as a regulatory strategy for addressing and correcting algorithmic harms. An AIA-based regulatory framework would require the creator of an algorithmic system to assess its potential socially harmful impacts before implementation and to create documentation that can later be used for accountability and future policy development. In practice, an impact assessment framework relies on the expertise and information that only the creators of the project have access to. It is therefore inevitable that technology firms will have a degree of practical discretion in the assessment, and willing cooperation from firms is necessary to make the regulation work. But a regime that relies on good-faith partnership from the private sector also has strong potential to be undermined by the incentives and institutional logics of the private sector. This Article argues that for AIA regulation to be effective, it must anticipate the ways that such a regulation will be filtered through the private sector institutional environment.
This Article combines insights from governance, organizational theory, and computer science to analyze how future AIA regulations will be implemented on the ground. Institutional logics, such as liability avoidance and the profit motive, will render the first goal—early consideration of social impacts—difficult in the short term. But AIAs can still be beneficial. The second goal—documentation to support future policy learning—does not require full compliance to be successful, and over time, there is reason to believe that AIAs can be part of a broader cultural shift toward accountability within the technology industry. This will lead to greater buy-in and less need for enforcement of documentation requirements.
Given the challenges and the reliance on participation, AIAs must work in synergy with how the field operates rather than in tension with it. For this reason, the Article argues that it is also crucial that regulators understand the technology industry itself, including the technology, the organizational culture, and emerging documentation standards. This Article demonstrates how emerging research within the field of algorithmic accountability can also inform the shape of AIA regulation. By looking at the different stages of development and so-called “pause points,” regulators can identify the points at which firms can export information. Looking at AI ethics research can show what social impacts the field thinks are important and where it might miss issues that policymakers care about. Overall, understanding the industry can make the AIA documentation requirements themselves more legible to technology firms, easing the path for a future AIA mandate to be successful on the ground.