Williams on Algorithmic Price Gouging

Spencer Williams (Golden Gate University School of Law) has posted “Algorithmic Price Gouging” (CUP Research Handbook on Artificial Intelligence & the Law) on SSRN. Here is the abstract:

This chapter examines the intersection of dynamic pricing algorithms and U.S. price gouging laws. Recently, an increasing number of companies, particularly online retailers and technologically enabled service providers, have implemented pricing algorithms that dynamically update prices in real time using data such as competitor prices, market supply quantities, and consumer preferences. While these dynamic pricing algorithms generally increase economic efficiency, they also result in massive price spikes for essential goods and services during emergencies such as the COVID-19 pandemic. State price gouging laws traditionally regulate price spikes by limiting price increases during emergencies. Drafted largely before the advent of algorithmic pricing and the shift to digital commerce, these laws rely on assumptions, such as human-directed pricing and localized markets for goods and services, that are inapplicable to companies engaging in algorithmic price gouging. Further, the applicability of state price gouging laws to online retailers selling across state lines remains unclear. The current patchwork of state regulation therefore insufficiently mitigates algorithmic price gouging. The chapter first discusses dynamic pricing algorithms and their relationship to price gouging. The chapter then explores state price gouging laws and identifies key shortfalls with respect to algorithmic price gouging, concluding with the need for a federal solution.

Haber & Harel Ben-Shahar on Algorithmic Parenting

Eldar Haber (University of Haifa – Faculty of Law) and Tammy Harel Ben-Shahar (same) have posted “Algorithmic Parenting” (32 Fordham Intell. Prop. Media & Ent. L.J. 1 (2021)) on SSRN. Here is the abstract:

Growing up in today’s world involves an increasing amount of interaction with technology. The rise in availability, accessibility, and use of the internet, along with social norms that encourage internet connection, makes it nearly impossible for children to avoid online engagement. The internet undoubtedly benefits children socially and academically, and mastering technological tools at a young age is indispensable for opening doors to valuable opportunities. However, the internet is risky for children in myriad ways. Parents and lawmakers are especially concerned with the tension between the important advantages technology bestows on children and the risks it poses to them.

New technological developments in artificial intelligence are beginning to alter the ways parents might choose to safeguard their children from online risks. Recently, emerging AI-based devices and services can automatically detect when a child’s online behavior indicates that their well-being might be compromised or when they are engaging in inappropriate online communication. This technology can notify parents or immediately block harmful content in extreme cases. Referred to as algorithmic parenting in this Article, this new form of parental control has the potential to cheaply and effectively protect children against digital harms. If designed properly, algorithmic parenting would also ensure children’s liberties by neither excessively infringing their privacy nor limiting their freedom of speech and access to information.

This Article offers a balanced solution to the parenting dilemma that allows parents and children to maintain a relationship grounded in trust and respect, while simultaneously providing a safety net in extreme cases of risk. In doing so, it addresses the following questions: What laws should govern platforms with respect to algorithms and data aggregation? Who, if anyone, should be liable when risky behavior goes undetected? Perhaps most fundamentally, relative to the physical world, do parents have a duty to protect their children from online harm? Finally, assuming that algorithmic parenting is a beneficial measure for protecting children from online risks, should legislators and policymakers use laws and regulations to encourage or even mandate the use of such algorithms to protect children? This Article offers a taxonomy of current online threats to children, an examination of the potential shift toward algorithmic parenting, and a regulatory toolkit to guide policymakers in making such a transition.

Ranchordas on Empathy in the Digital Administrative State

Sofia Ranchordas (University of Groningen, Faculty of Law; LUISS) has posted “Empathy in the Digital Administrative State” (Duke Law Journal, Forthcoming) on SSRN. Here is the abstract:

Humans make mistakes. Humans make mistakes especially while filling out tax returns, benefit applications, and other government forms, which are often tainted with complex language, requirements, and short deadlines. However, the unique human feature of forgiving these mistakes is disappearing with the digitalization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens’ needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (for example, disabled individuals, seniors, and underrepresented minorities). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of not being able to exercise their rights because they cannot engage with digital bureaucracy.

This Article argues that empathy, which in this context is the ability to relate to others and understand a situation from multiple perspectives, is a key value of administrative law deserving of legal protection in the digital administrative state. Empathy can contribute to the advancement of procedural due process, the promotion of equal treatment, and the legitimacy of automation. The concept of administrative empathy does not aim to create arrays of exceptions, nor imbue law with emotions and individualized justice. Instead, this concept suggests avenues for humanizing digital government and automated decision-making through a more complete understanding of citizens’ needs. This Article explores the role of empathy in the digital administrative state at two levels: First, it argues that empathy can be a partial response to some of the shortcomings of digital bureaucracy. At this level, administrative empathy acknowledges that citizens have different skills and needs, and this requires the redesign of pre-filled application forms, government platforms, algorithms, as well as assistance. Second, empathy should also operate ex post as a humanizing measure which can help ensure that administrative mistakes made in good faith can be forgiven under limited circumstances, and vulnerable individuals are given second chances to exercise their rights.

Drawing on comparative examples of empathic measures employed in the United States, the Netherlands, Estonia, and France, this Article’s contribution is twofold: first, it offers an interdisciplinary reflection on the role of empathy in administrative law and public administration for the digital age, and second, it operationalizes the concept of administrative empathy. These goals combine to advance the position of vulnerable citizens in the administrative state.

Mayson on Personalized Law

Sandra G. Mayson (University of Pennsylvania Carey Law School) has posted “But What Is Personalized Law?” (University of Chicago Law Review Online, 2021) on SSRN. Here is the abstract:

In Personalized Law: Different Rules for Different People, Omri Ben-Shahar and Ariel Porat undertake to ground a burgeoning field of legal thought. The book imagines and thoughtfully assesses an array of personalized legal rules, including individualized speed limits, fines calibrated to income, and medical disclosure requirements responsive to individual health profiles. Throughout, though, the conceptual parameters of “personalized law” remain elusive. It is clear that personalized law involves more data, more machine learning, and more direct communication to individuals. But it is not clear how deep these changes go. Are they incremental—just today’s law with better tech—or do they represent a change in the very nature of legal rules, as the authors claim?

This Essay aims to help clarify the concept of “personalized law” by starting from the nature of legal rules. Drawing on the scholarship of Frederick Schauer, it argues that all rules must generalize in some way, then offers a taxonomy of different forms of generalization. With the framework in place, it becomes clear that personalized law entails two shifts: (1) from rules that prescribe specific conduct toward rules that prescribe a social outcome, like a risk or efficiency target, and (2) toward greater ex ante specification of what rules require of individuals. Each shift has different practical and normative implications. Lastly, the Essay raises two potential costs of such shifts that Personalized Law does not address at length: the likely disparate racial impact of social-outcome rules driven by big data, and the possible loss of communal experience in shared-rule compliance.