Keats Citron on the Surveilled Student

Danielle Keats Citron (U Virginia Law) has posted “The Surveilled Student” (Stanford Law Review, v. 76) on SSRN. Here is the abstract:

We live in a golden age of student surveillance. Some surveillance is old school: video cameras, school resource officers, and tip lines. Old-school surveillance, which is largely cabined in time and location, is now paired with new-school surveillance, which extends monitoring far beyond school hours and hallways. School-provided laptops have corporate software installed that does two things: first, it blocks “objectionable” material and informs administrators about the content that students tried to access; second, it scans students’ online activities wherever (home) and whenever (weekends) they occur. If inclined, teachers and school resource officers can watch students’ searches, browsing, emails, chats, photos, calendar invites, geolocation, and more in real time. Companies continuously monitor students’ laptop activity in the name of safety. Student surveillance runs 24 hours a day, seven days a week, 365 days a year. There is no reprieve.
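
To make the pattern concrete: below is a minimal Python sketch of the block-and-alert design the abstract describes. Every name, keyword, and field here is invented for illustration; real monitoring products are proprietary and far more elaborate.

```python
# Illustrative sketch of the "block, log, and alert" pattern described above.
# All names, keywords, and fields are invented; no vendor's product is shown.
from dataclasses import dataclass
from datetime import datetime, timezone

FLAGGED_TERMS = {"self-harm", "weapons"}  # hypothetical category list

@dataclass
class Alert:
    student_id: str
    url: str
    matched_term: str
    timestamp: str

def check_request(student_id: str, url: str, alerts: list[Alert]) -> bool:
    """Return True if the request is allowed; otherwise record an alert."""
    for term in FLAGGED_TERMS:
        if term in url.lower():
            alerts.append(Alert(student_id, url, term,
                                datetime.now(timezone.utc).isoformat()))
            return False  # blocked, and administrators are informed
    return True

alerts: list[Alert] = []
print(check_request("s-042", "https://example.com/self-harm-forum", alerts))
print(alerts)  # the attempt itself becomes a record reviewable by the school
```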

This essay does what many school districts and companies refuse to do in the open—provide a clear-eyed analysis of the costs and benefits of student surveillance. My assessment is limited to what investigative journalists, advocacy groups, and researchers have discovered about opaque corporate practices and what companies themselves reveal. What we know is too little—the lack of transparency is part of the problem. Lawmakers and the public need a full view of the stakes, so they can have a meaningful say.

Children’s safety is a paramount value. The question remains whether student surveillance protects students from self-harm, violence, and cyberbullying, as companies claim. School administrators say that the monitoring services make them “feel” safer and better informed. But feeling isn’t fact. Continuous and indiscriminate monitoring of students’ online activities is “security theater.” From what we know, students may be less safe and less well-off. Companies claim that their algorithms detect suicidal ideation, bullying, and impending violence, and that content moderators alert school officials and law enforcement so they can prevent harm. Proof of concept is scant; from what we do know, alerts from surveillance companies most often create a chain reaction of discipline for minor infractions. Serious punishment, like suspension, is disproportionately meted out to Black students, female and male. Monitoring systems “out” LGBTQ+ students to teachers and parents (who may be unsupportive or worse). These costs are mostly borne by students from disadvantaged backgrounds—a blow to equal opportunity.

Dragnet-style surveillance exacts profound costs to what I describe as student intimate privacy. Student intimate privacy is essential for children’s self-development and self-expression. Unlike in most other periods of their lives, students experience tremendous personal growth and development. Students’ job is learning, listening, reading, speaking, exploring, and befriending. It is figuring out who they are and who they want to become. Schools play a central role in all of that—their job is preparing and cultivating an engaged citizenry. Student surveillance diminishes that potential. It harms students as listeners because filtering software blocks sources of knowledge, including news stories and resources for sexual health; it harms students as speakers because it creates an atmosphere of fear and intimidation that results in self-censorship and conformity.

Schools justify their contracts with surveillance companies by pointing to a federal law designed to prevent students from accessing obscene material, a law that by its own terms rejects continuous tracking of students’ online activities. Congress must step in to clear up the confusion. Lawmakers should provide incentives to schools to ensure that surveillance technologies work and that they minimize intrusions on student intimate privacy, free expression, and equal opportunity to the greatest extent possible. Reforms providing vigorous protection for students’ intimate privacy are crucial to students’ free expression and schools’ role in cultivating democratic citizens.

Selbst, Venkatasubramanian & Kumar on Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies

Andrew D. Selbst (UCLA Law), Suresh Venkatasubramanian (University of Utah), and I. Elizabeth Kumar (Brown University) have posted “Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies” (85 Ohio State Law Journal (forthcoming 2024)) on SSRN. Here is the abstract:

Technologies do not just come about. They are designed, and those design choices affect everything the technology touches. Yet unless a legal question directly implicates the technological design, courts are not likely to interrogate it. In this Article, we use examples from machine learning to demonstrate that design choices matter even in cases where the legal questions do not involve technology directly. We start by describing formal “abstraction,” a fundamental design technique in computer science that treats systems and subsystems as defined entirely by their inputs, outputs, and the relationship that transforms inputs to outputs. We show how this technique leads the resulting technologies effectively to make claims about responsibility and knowability that compete with courts’ own determinations. We further show that these claims are rendered invisible over time. Thus, we argue that courts must unearth—or deconstruct—the original design choices in order to understand the legal claims in a given case—even in cases that do not on their face appear to be about technological design.
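
The authors’ point about abstraction is easy to see in miniature. In the hedged Python sketch below (all names and weights invented), a caller sees only the input-output contract; the design choices buried inside the function are invisible at the interface, which is exactly where a court’s inquiry often stops.

```python
# Hypothetical sketch of formal abstraction: the interface is just
# inputs and outputs; the design choices inside are hidden from callers.
def risk_score(applicant: dict) -> float:
    """Abstraction boundary: features go in, a score comes out. Nothing else
    about the system is visible from this signature."""
    # Hidden design choices: which features matter, how they are weighted,
    # and where the training data came from were all decided here, then
    # forgotten once the interface became the only thing anyone inspects.
    weights = {"income": 0.4, "age": 0.1, "zip_code": 0.5}  # arbitrary choices
    return sum(w * applicant.get(k, 0.0) for k, w in weights.items())

# Looking only at the interface, one never learns that "zip_code", a possible
# proxy for protected characteristics, carries the largest weight.
print(risk_score({"income": 1.0, "age": 1.0, "zip_code": 1.0}))
```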

There is, of course, a reasonable concern that courts are not capable of making judgments about technological design, or are not the best venue to make them. While we agree that courts are not the optimal front-line regulators of technology, we argue that they cannot avoid these questions as technologies begin to show up in every type of case—a phenomenon that will only grow with time. But besides being forced to consider technology, courts are actually capable of doing so when motivated. We demonstrate that in certain cases that clearly tee up technological design, such as products liability, copyright retransmission, and the functionality doctrines of intellectual property, courts have no problem diving in and questioning the design choices, asking what could have been and what should have been. Where courts can perform this analysis in one arena, they can do so in another. Finally, through extended hypotheticals in the areas of negligence, discrimination, and criminal justice, we demonstrate how courts can effectively deconstruct technological design.

Hofmann on Automated Decision-Making in EU Public Law

Herwig C.H. Hofmann (Université du Luxembourg Law) has posted “Automated Decision-Making (ADM) in EU Public Law” on SSRN. Here is the abstract:

Decision-making in EU public law and the implementation of policies are increasingly supported by automation. The understanding of automation’s effects on rights and procedures, and of how to ensure the accountability of such automated decision-making (ADM) in EU public law, is evolving, shaped by a developing legislative framework, interpretation by the CJEU, and developing case law of courts in the EU and the Member States.

Automated decision-making (ADM) is based on software supporting, or replacing, elements of human decision-making in the implementation of EU law. ADM systems are deployed in an increasing number of policy areas. Improved availability of information, advanced computational power, and advanced forms of programming that use fast-evolving technologies to process such information produce benefits for decision-making. But integrating technological solutions into decision-making procedures also risks introducing potential dysfunctionalities, diminishing individual rights, and reducing accountability.

A key feature of the integration of ADM technologies into various phases of decision-making is its profound effect on the procedures through which public policies in the EU are delivered. This has the potential to improve decisional quality and efficiency. But it can equally influence the realisation of key procedural values of public law in the EU. Shaping procedures on the basis of technical specifications, without clear orientation towards values and rights in EU law, risks a growing disconnect between real-life procedures and the central values and principles of democratic societies operating under the rule of law. That disconnect can in turn result in increasing de-legitimisation of the exercise of public powers.

This chapter looks at central questions that the use of ADM in public decision-making procedures within the scope of EU law raises in terms of public law. It first analyses the role and the origin of information as a source of decision-making in EU public law, and the automated processing thereof. It then looks at the central values and fundamental rights touched by ADM. Third, the chapter asks what requirements must govern the technical design of ADM systems and their relation to the databases used as sources for information searches and analysis.

Zhu & Ma on The Chinese Path to Generative AI Governance

Surong Zhu (Beijing Foreign Studies University) and Guoyang Ma (Beijing Jiaotong University) have posted “The Chinese Path to Generative AI Governance” on SSRN. Here is the abstract:

The emergence of generative AI has brought significant development opportunities for the AI industry, but it has also triggered legal issues such as data leakage and technology abuse. How to ensure the sound and positive development of generative AI technology has therefore become a focus of attention for countries worldwide. China has taken the lead in legislative measures, introducing in July 2023 the world’s first departmental regulation dedicated to generative AI, the Interim Administrative Measures for Generative Artificial Intelligence Services. The Interim Measures are a product of the vertical, iterative legislative model, upholding a governance attitude of encouraging development while drawing a bottom line, and adopting a governance strategy of inclusiveness and prudence and of categorization and grading, complemented by regulatory means of segmented governance and clear responsibilities. At the same time, the EU and the US have addressed the risks posed by generative AI by taking existing legal norms as the cornerstone and adopting law enforcement and legislative measures. This article provides a detailed description of China’s regulatory ideas and specific methods. It compares China’s views with those of the EU and the US, and comments on the innovations and shortcomings of China’s program.

Purves & Jenkins on A Machine Learning Evaluation Framework for Place-based Algorithmic Patrol Management

Duncan Purves (U Florida) and Ryan Jenkins (Cal Poly) have posted “A Machine Learning Evaluation Framework for Place-based Algorithmic Patrol Management” on SSRN. Here is the abstract:

American law enforcement agencies are increasingly adopting data-driven technologies to combat crime, with the market for such technologies projected to grow significantly in the coming years. One prevalent approach, place-based algorithmic patrol management (PAPM), analyzes data on past crimes to optimize police patrols. These systems promise several benefits, including efficient resource allocation, reduced bias, and increased transparency. However, the adoption of these technologies has raised ethical and social concerns, particularly around privacy, bias, and community impact. This report aims to provide a comprehensive framework, including many concrete recommendations, for the ethical and responsible development and deployment of PAPM systems. Targeting developers, law enforcement agencies, policymakers, and community advocates, the recommendations emphasize collaboration among these stakeholders to address the complex challenges presented by PAPM. We suggest that failure to meet the proposed ethical guidelines might make the use of such technologies unacceptable. This report has been supported by National Science Foundation awards #1917707 and #1917712 and the Center for Advancing Safety of Machine Intelligence (CASMI).
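
For a rough sense of the mechanics behind PAPM, here is a deliberately toy Python sketch: rank grid cells by a recency-weighted count of past incidents. The decay weighting, the data, and the function names are invented; no vendor’s actual method is represented.

```python
# Toy sketch of place-based patrol scoring (not any vendor's actual method):
# rank grid cells by a recency-weighted count of past incidents.
from collections import defaultdict

def patrol_priorities(incidents, decay=0.9, top_k=3):
    """incidents: list of (cell_id, days_ago) pairs. Returns the top_k cells
    by score, where older incidents contribute exponentially less."""
    scores = defaultdict(float)
    for cell, days_ago in incidents:
        scores[cell] += decay ** days_ago
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Historical bias travels straight through: if cell "A1" was over-patrolled
# in the past, it generates more recorded incidents and ranks higher now.
print(patrol_priorities([("A1", 0), ("A1", 3), ("B2", 1), ("C3", 30)]))
```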

Demkova on The EU’s Artificial Intelligence Laboratory and Fundamental Rights

Simona Demkova (Leiden Law School) has posted “The EU’s Artificial Intelligence Laboratory and Fundamental Rights” (in Melanie Fink (ed), Redressing Fundamental Rights Violations by the EU: The Promise of the ‘Complete System of Remedies’ (CUP 2024)) on SSRN. Here is the abstract:

This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically through EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, Section 2 sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. In light of the fundamental rights concerns posed by these uses, Section 3 examines the possibilities for accessing remedies by considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the areas demanding further clarification in order to fill the remedial gaps (Section 4).

Evans on Some Economic Aspects of Artificial Intelligence Technologies and Their Expected Social Value

David S. Evans (Berkeley Research Group) has posted “Some Economic Aspects of Artificial Intelligence Technologies and Their Expected Social Value” (Forthcoming, CPI TechREG Chronicle, September 2023) on SSRN. Here is the abstract:

Artificial intelligence is a general-purpose technology that will result in disruptive innovation across the economy for many decades to come. AI deserves the superlatives often associated with it because it can create enormous social value. That is clear from considering health care alone. Early evidence indicates that AI can dramatically improve the accuracy of diagnoses, such as for breast cancer. The deployment of AI as an internet-based technology could help billions of people who lack access to essential medical services. Artificial intelligence has come into its own at an important juncture in human history. Birth rates have been falling sharply for a long time, and are below replacement levels, in many parts of the world, including the EU, the US, and China. The populations of countries such as Spain, Japan, and, most recently, China are declining. AI technologies, which can substitute for human brains, can alleviate the social cost of declining populations. Any discussion of the importance of AI comes with “buts” and the need for laws and regulations. There is no doubt about that. The design of public policy, however, must account for the impact of too little, misguided, or too much regulation on the long-run social value of artificial intelligence.

Botero Arcila on Chat GPT & the European Liability Regime for Large Language Models

Beatriz Botero Arcila (Sciences Po Law School; Harvard Berkman Klein) has posted “Is it a Platform? Is it a Search Engine? It’s Chat GPT! The European Liability Regime for Large Language Models” (Journal of Free Speech Law, Vol. 3, Issue 2, 2023) on SSRN. Here is the abstract:

ChatGPT and other AI large language models (LLMs) raise many of the regulatory and ethical challenges familiar to AI and social media scholars: They have been found to confidently invent information and present it as fact. They can be tricked into providing dangerous information even when they have been trained not to answer such questions. Their ability to mimic a personalized conversation can be very persuasive, which creates important disinformation and fraud risks. They reproduce various societal biases because they are trained on data from the internet that embodies such biases, for example on issues related to gender and traditional work roles. Thus, like other AI systems, LLMs risk sustaining or enhancing discrimination, perpetuating bias, and promoting the growth of corporate surveillance, while being technically and legally opaque. Like social media, LLMs pose risks associated with the production and dissemination of information online that raise the same kinds of concern over the quality and content of online conversations and public debate. All these compounded risks threaten to distort political debate, affect democracy, and even endanger public safety. Additionally, OpenAI reported an estimated 100 million active users of ChatGPT in January 2023, which makes the potential for a vast and systemic impact of these risks considerable.

LLMs are also expected to have great potential. They will transform a variety of industries, freeing up professionals’ time to focus on different substantive matters. They may also improve access to various services by facilitating the production of personalized content, for example for medical patients or students. Consequently, one of the critical policy questions LLMs pose is how to regulate them so that some of these risks are mitigated while still encouraging innovation and allowing their benefits to be realized.

This Essay examines this question, with a focus on the liability regime for LLMs for speech and informational harms and risks in the European Union. Even though the AI Act has introduced some risk-mitigation obligations for these tools, we are still at least two years away from those obligations becoming mandatory. That is too long to wait. However, because many of the risks these systems raise are risks to the information ecosystem, this Essay argues that they can and should be addressed, at the outset, with current content moderation law. The Essay thus proposes an interpretation of the newly enacted Digital Services Act that could apply to these tools when they are released on the market in a way that strongly resembles other intermediaries covered by content moderation laws, such as search engines.

Babic & Cohen on The Algorithmic Explainability ‘Bait and Switch’

Boris Babic (U Toronto) and I. Glenn Cohen (Harvard Law) have posted “The Algorithmic Explainability ‘Bait and Switch’” (Minnesota Law Review, Vol. 108, 2023) on SSRN. Here is the abstract:

Explainability in artificial intelligence and machine learning (“AI/ML”) is emerging as a leading area of academic research and a topic of significant regulatory concern. Indeed, a near-consensus exists in favor of explainable AI/ML among academics, governments, and civil society groups. In this project, we challenge this prevailing trend. We argue that for explainability to be a moral requirement — and even more so for it to be a legal requirement — it should satisfy certain desiderata which it currently does not, and possibly cannot. In particular, we argue that the currently prevailing approaches to explainable AI/ML are (1) incapable of guiding our action and planning, (2) incapable of making transparent the actual reasons underlying an automated decision, and (3) incapable of underwriting normative (moral/legal) judgments, such as blame and resentment. This stems from the post hoc nature of the explanations offered by prevailing explainability algorithms. As we explain, these algorithms are “insincere-by-design,” so to speak. And this renders them of very little value to legislators or policymakers who are interested in (the laudable goal of) transparency in automated decision-making. There is, however, an alternative — interpretable AI/ML — which we distinguish from explainable AI/ML. Interpretable AI/ML can be useful where it is appropriate, but it involves real trade-offs in algorithmic performance, and in some instances (in medicine and elsewhere) adopting an interpretable AI/ML system may mean adopting a less accurate one. We argue that it is better to face those trade-offs head on than to embrace the fool’s gold of explainable AI/ML.
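
The distinction the authors press can be seen in miniature below: a hedged Python sketch of a LIME-style local surrogate (the post hoc approach they criticize) next to a plain linear model whose fitted weights are its actual reasons (an interpretable alternative). The data and models are invented caricatures, not any production explainability algorithm.

```python
# Invented caricature of "post hoc explanation" vs. "interpretable model".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

def black_box(x):
    """Opaque model: callers see only outputs, not the decision logic."""
    return (np.sin(x[:, 0]) + 0.5 * x[:, 1] > 0).astype(float)

# Post hoc route: perturb one input, fit a local linear surrogate to the
# black box's answers, and report the surrogate's coefficients as "reasons".
x0 = np.array([[0.1, 0.2]])
perturbed = x0 + rng.normal(scale=0.1, size=(500, 2))
coef, *_ = np.linalg.lstsq(np.c_[np.ones(500), perturbed],
                           black_box(perturbed), rcond=None)
print("surrogate 'reasons':", coef[1:])  # plausible, but not the model's logic

# Interpretable route: a plain linear model whose fitted weights ARE the
# decision logic, possibly at some cost in accuracy.
w, *_ = np.linalg.lstsq(np.c_[np.ones(200), X], black_box(X), rcond=None)
print("interpretable weights:", w[1:])
```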