Ong & Loo on Gauging the Acceptance of Contact Tracing Technology

Ee-Ing Ong (Singapore Management University School of Law, Singapore Management University – Centre for AI & Data Governance) and Wee Ling Loo (Singapore Management University School of Law, Singapore Management University – Centre for AI & Data Governance) have posted “Gauging the Acceptance of Contact Tracing Technology: An Empirical Study of Singapore Residents’ Concerns and Trust in Information Sharing” (Regulatory Insights on Artificial Intelligence: Research for Policy 2021) on SSRN. Here is the abstract:

In response to the COVID-19 pandemic, governments began implementing various forms of contact tracing technology. Singapore’s implementation of its contact tracing technology, TraceTogether, however, was met with significant concern by its population, with regard to privacy and data security. This concern did not fit with the general perception that Singaporeans have a high level of trust in their government. We explore this disconnect, using responses to our survey (conducted pre-COVID-19) in which we asked participants about their level of concern with the government and business collecting certain categories of personal data. The results show that respondents had less concern with the government as compared to a business collecting most forms of personal data. Nonetheless, they still had a moderately high level of concern about sharing such data with the government. We further found that income, education and perceived self-exposure to AI are associated with higher levels of concern with the government collecting personal data relevant to contact tracing, namely health history, location and social network friends’ information. This has implications for Singapore residents’ trust in government collecting data and hence the success of such projects, not just for contact tracing purposes but for other government-related data collection undertakings.

Seah on “Nose to Glass: Looking In to Get Beyond”

Joseph Seah (Singapore Management University) has posted “Nose to Glass: Looking In to Get Beyond” (Navigating the Broader Impacts of AI Research Workshop at NeurIPS 2020) on SSRN. Here is the abstract:

Brought into the public discourse through investigative work by journalists and scholars, awareness of algorithmic harms is at an all-time high. An increasing amount of research has been dedicated to enhancing responsible artificial intelligence (AI), with the goal of addressing, alleviating, and eventually mitigating the harms brought on by the rollout of algorithmic systems. Nonetheless, implementation of such tools remains low. Given this gap, this paper offers a modest proposal: that the field, particularly researchers concerned with responsible research and innovation, may stand to gain from supporting and prioritising more ethnographic work. This embedded work can flesh out implementation frictions and reveal organisational and institutional norms that existing work on responsible artificial intelligence has not yet been able to offer. In turn, this can contribute to more insights about the anticipation of risks and mitigation of harm. This paper reviews similar empirical work typically found elsewhere, commonly in science and technology studies and safety science research, and lays out challenges of this form of inquiry.

Recommended.

Chen on Interpreting Linear Beta Coefficients Alongside Feature Importance in Machine Learning

James Ming Chen (Michigan State University – College of Law) has posted “Linear Beta Coefficients Alongside Feature Importance in Machine Learning” on SSRN. Here is the abstract:

Machine-learning regression models lack the interpretability of their conventional linear counterparts. Tree- and forest-based models offer feature importances, a vector of probabilities indicating the impact of each predictive variable on a model’s results. This brief note describes how to interpret the beta coefficients of the corresponding linear model so that they may be compared directly to feature importances in machine learning.
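
The core idea is to put the two sets of numbers on a common footing. As a rough illustration of that general idea (not the paper's own code; the synthetic dataset and the normalization choice are assumptions made for this sketch), one can standardize the predictors, fit a linear model to obtain beta coefficients, and rescale their absolute values so that, like tree-based feature importances, they sum to one:

```python
# Illustrative sketch only: comparing standardized linear "beta" coefficients
# with random-forest feature importances on the same (synthetic) data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Made-up regression problem for demonstration purposes.
X, y = make_regression(n_samples=500, n_features=5, n_informative=5,
                       noise=10.0, random_state=0)

# Random forest: feature_importances_ is a non-negative vector summing to 1.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = forest.feature_importances_

# Linear model on standardized predictors: coefficients become "beta" weights.
X_std = StandardScaler().fit_transform(X)
betas = LinearRegression().fit(X_std, y).coef_

# One way to place betas on the same scale as importances: take absolute
# values and normalize so they also sum to 1.
beta_share = np.abs(betas) / np.abs(betas).sum()

for i, (imp, share) in enumerate(zip(importances, beta_share)):
    print(f"feature {i}: forest importance = {imp:.3f}, |beta| share = {share:.3f}")
```

Note that normalizing absolute values discards the sign of the linear coefficients, which is part of the interpretive information the linear model retains and tree-based importances lack.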

Wachter, Mittelstadt & Russell on Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law

Sandra Wachter (University of Oxford – Oxford Internet Institute), Brent Mittelstadt (University of Oxford – Oxford Internet Institute), and Chris Russell (Amazon Web Services, Inc.) have posted “Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law” (West Virginia Law Review, Forthcoming) on SSRN. Here is the abstract:

Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality. In this paper we make three contributions. First, we assess the compatibility of fairness metrics used in machine learning against the aims and purpose of EU non-discrimination law. We show that the fundamental aim of the law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to ‘level the playing field’ and achieve substantive rather than merely formal equality. Based on this, we then propose a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of non-discrimination law. Specifically, we distinguish between ‘bias preserving’ and ‘bias transforming’ fairness metrics. Our classification system is intended to bridge the gap between non-discrimination law and decisions around how to measure fairness in machine learning and AI in practice. Finally, we show that the legal need for justification in cases of indirect discrimination can impose additional obligations on developers, deployers, and users that choose to use bias preserving fairness metrics when making decisions about individuals because they can give rise to prima facie discrimination. To achieve substantive equality in practice, and thus meet the aims of the law, we instead recommend using bias transforming metrics. To conclude, we provide concrete recommendations including a user-friendly checklist for choosing the most appropriate fairness metric for uses of machine learning and AI under EU non-discrimination law.
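
For readers unfamiliar with what such fairness metrics look like in practice, the toy sketch below computes one widely discussed group-fairness measure, the demographic parity difference, from hypothetical model decisions. It does not reproduce the paper's classification of metrics as "bias preserving" or "bias transforming"; the arrays are made-up example data.

```python
# Illustrative sketch only: one common group-fairness metric computed from
# hypothetical predictions and a hypothetical protected attribute.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions (toy data)
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected-group membership (toy data)

rate_a = y_pred[group == 0].mean()   # positive-decision rate, group 0
rate_b = y_pred[group == 1].mean()   # positive-decision rate, group 1

# Demographic parity difference: the gap in positive-decision rates between groups.
print(f"group 0 rate: {rate_a:.2f}, group 1 rate: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```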

Rozenshtein on Cost-Benefit Analysis and the Digital Fourth Amendment

Alan Z. Rozenshtein (University of Minnesota Law School) has posted “Cost-Benefit Analysis and the Digital Fourth Amendment” (40 Criminal Justice Ethics (forthcoming 2021)) (reviewing Ric Simmons, Smart Surveillance: How to Interpret the Fourth Amendment in the Twenty-First Century (2019)) on SSRN. Here is the abstract:

In “Smart Surveillance,” Ric Simmons argues for the application of cost-benefit analysis (CBA) to digital surveillance. This review argues that, although Simmons is right to look to CBA as a tool for applying the Fourth Amendment to new technology, his faith in the courts as the main practitioners of surveillance CBA is misguided. Across a variety of dimensions of institutional competence, the political branches, not the courts, are best placed to make surveillance policy under conditions of technological change.