Packin on Financial Inclusion Gone Wrong: Securities Trading for Children

Nizan Geslevich Packin (City University of NY, Baruch College, Zicklin School of Business; City University of New York (CUNY) – Department of Law) has posted “Financial Inclusion Gone Wrong: Securities Trading For Children” on SSRN. Here is the abstract:

For the majority of Americans, money is the primary source of anxiety. It takes an especially heavy toll on parents and younger adults: more than three-quarters of parents, millennials, and Gen Xers frequently experience emotional stress about money, and almost 90 percent of Americans believe that nothing could make them happier than knowing that their finances are in order. Looking for ways to help Americans deal with this source of anxiety, some believe that teaching children about financial investments at a young age can increase financial literacy and lower people’s money-related anxiety levels.

There are many different ways to increase consumers’ financial literacy starting at a young age. Yet despite the resources offered by the public and private sectors, and the clear need for caregivers to focus on this issue, studies show that most people do very little, if anything, to increase their financial literacy. In recent years, having identified the educational market opportunity and the potential of the children’s financial-literacy niche, FinTech companies and even traditional financial institutions have started offering financial services for children, reassured by research showing that by age six, children are already veteran consumers of smart-device content.

This new market’s clientele is valuable for two reasons. First, having more customers is always a good thing. Second, these new customers will eventually mature into more traditional adult customers, and presumably they will continue using the services with which they are familiar. And while there are legal challenges associated with children not only entering into investment contracts but also doing so online, this new market will continue to grow, because offering financial services to children is becoming more socially acceptable for members of Gen Z and Gen Alpha than ever before, especially given society’s newly adopted paradigm for describing, understanding, and shaping children’s rights, domestic relationships and custodial status, and even digital purchasing power.

But although digital financial apps can help educate children about the value of money, the importance of investing, and even the risks of trading, the trend of offering financial services directly to children should concern anyone focused on consumer protection and financial regulation. Among those are the SEC, its new Chair, and other public officials tasked with regulating these issues, who have raised concerns about the growing interest of financial service providers in younger users, an interest that became more apparent during the January 2021 Robinhood/GameStop stock controversy. Likewise, FINRA, which enables investors and firms to participate in the market with confidence by safeguarding its integrity, announced that it is seeking public feedback on the gamification used by financial service providers, having identified “risks associated with app-based platforms and ‘game-like’ features that are meant to influence customers.”

Using ethical reasoning and behavioral economics tools, this Essay explores several important issues as it suggests regulating FinTech companies’ and other financial institutions’ offerings of digital financial services to children. First, digital gaming is well-known to be addictive for children. Second, gamifying investing makes it feel less serious, not more serious, contravening the very notion that early education will help young adults understand the seriousness of money. Moreover, there is a connection between gamifying and gambling that is especially relevant in connection with gamifying investing. Third, children’s financial choices are more susceptible to the influence of outside, interested (and uninterested) parties. Lastly, parents are already struggling to keep up with supervising their children’s online activities. Enabling children to use digital financial apps will require much more effort on their parents’ part, as there are many things that parents need to be on the lookout for.

Gervais on The Human Cause

Daniel J. Gervais (Vanderbilt University – Law School) has posted “The Human Cause” on SSRN. Here is the abstract:

This paper argues that, although AI machines are increasingly able to produce outputs that facially qualify for copyright or patent protection, such outputs should not be protected by law when they have no identifiable human cause, that is, when the autonomy of the machine is such that it breaks the causal link between the output and one or more human creators or inventors. As a species, we should normatively seek to preserve incentives for human creativity and inventiveness, as these have been hallmarks of the higher mental faculties often used to define humanness. The paper also discusses situations where humans and machines work together and how courts can apply the proposed approach.

Recommended.

Ziaja on Algorithm Assisted Decision Making and Environmental Law

Sonya Ziaja (University of Baltimore – School of Law) has posted “How Algorithm Assisted Decision Making Is Influencing Environmental Law and Climate Adaptation” (Ecology Law Quarterly, Forthcoming (48:3, 2021)) on SSRN. Here is the abstract:

Algorithm-based decision tools in environmental law appear policy neutral but embody bias and hidden values that affect equity and democracy. In effect, algorithm-based tools are new fora for law and policymaking, distinct from legislatures and courts. In turn, these tools influence the development and implementation of environmental law and regulation. As a practical matter, there is a pressing need to understand how these automated decision-making tools interact with and influence law and policy. This Article begins this timely and critical discussion.

Though algorithmic decision-making has been critiqued in other domains, like policing and housing policy, environmental and energy policy may be more dependent on algorithmic tools because of climate change. Expectations of climatic stationarity—for example how frequently or severely a coastal area floods or how many days of extreme heat an energy system needs to anticipate—are no longer valid. Algorithm-based tools are needed to make sense of possible future scenarios in an unstable climate. However, dependence on these tools brings with it a conflict between technocracy (and the need to rapidly adapt and respond to climate change) and democratic participation, which is fundamental to equity. This Article discusses sources of that tension within environmental, algorithm-based tools and offers a pathway forward to integrate values of equity and democratic participation into these tools.

After introducing the problem of water and energy adaptation to climate change, this Article synthesizes prior multidisciplinary work on algorithmic decision making and modeling-informed governance—bringing together the works of early climate scientists and contemporary leaders in algorithmic decision making. From this synthesis, this Article presents a framework for analyzing how well these tools integrate principles of equity, including procedural and substantive fairness—both of which are essential to democracy. The framework evaluates how the tools handle uncertainty, transparency, and stakeholder collaboration across two attributes. The first attribute has to do with the model itself—specifically, how, and whether, existing law and policy are incorporated into these tools. These social parameters can be incorporated as inputs to the model or in the structure of the model, which determines its logic. The second attribute has to do with the modeling process—how, and whether, stakeholders and end-users collaborated in the model’s development.

This Article then applies this framework and compares two algorithm-assisted, decision-making tools currently in use for adapting water and energy systems to climate change. The first tool is called “INFORM.” It is used to allocate water quantity and flow on the Sacramento River, while taking climate and weather into account. The second tool is called “RESOLVE.” It is used by energy utility regulators in California to evaluate scenarios for energy generation. Although the development of both tools involved collaborative processes, there are meaningful distinctions in the history of their development and use. The comparisons indicate that how law and policy are incorporated into the underlying code of models influences the development and regulation of climate adaptation, while inclusiveness and collaboration during the model’s development influences the model’s perceived usefulness and adoption. Both conclusions have implications for equity and accessibility of environmental, natural resource, and energy planning.

Langvardt on Platform Speech Governance and the First Amendment: A User-Centered Approach

Kyle Langvardt (University of Nebraska at Lincoln – College of Law) has posted “Platform Speech Governance and the First Amendment: A User-Centered Approach” (Lawfare’s Digital Social Contract Paper Series 2020) on SSRN. Here is the abstract:

How should the First Amendment apply to laws that tell giant platforms like Facebook or Twitter how to police third-party content? On one view, content moderation is a form of constitutionally protected “speech” in itself, much as a newspaper’s editorial choices are speech. But this view leads to an absurd result in which the First Amendment’s free speech guarantee becomes a mandate for a small number of corporate heads to rule public discourse. This paper therefore offers an alternative: When a law regulates the dominant platforms’ content policies, the law’s downstream effects on the speech of users should determine whether it violates the First Amendment.

This kind of analysis will require significant legal innovation. The dominant platforms today host virality-driven environments whose internal dynamics undermine First Amendment law’s traditional understanding that public discourse can mostly regulate itself. The First Amendment’s high-level purposes will have to translate differently to these spaces, with doctrinal details that often bear little resemblance to the black-letter law that applies in more traditional settings.

At worst, we may find ourselves faced with the question of how much the First Amendment’s traditional guarantees must be watered down to account for the new and dangerous physics of ad-driven viral discourse. But more optimistically, the First Amendment could become a spur for regulators to develop and implement new content-neutral measures for mitigating speech-related harm. These measures might create a new, slower model of online speech—one that is less prone to manipulation and frenzy, less needful of censorship, and therefore more hospitable to the true freedom of speech.

Chawla on Pegasus Spyware – ‘A Privacy Killer’

Ajay Chawla (Delhi High Court) has posted “Pegasus Spyware – ‘A Privacy Killer’” on SSRN. Here is the abstract:

The recent Pegasus Project revelations that about half a lakh (50,000) people across the world, including several in India, were targeted for cyber surveillance have firmly put the spotlight on the Pegasus spyware, which is widely understood to be the most sophisticated smartphone attack tool. The revelations also mark the first time that a malicious remote jailbreak exploit was detected on an iPhone.

Pegasus is spyware (a Trojan/script) that can be installed remotely on devices running Apple’s iOS and Google’s Android operating systems. It is developed and marketed by the Israeli technology firm NSO Group. NSO Group sells Pegasus to “vetted governments” for “lawful interception,” which is understood to mean combating terrorism and organized crime, as the firm claims, but there are suspicions that it is put to other uses.

Pegasus is modular malware that can initiate total surveillance of the targeted device, according to a report by the digital security company Kaspersky. It installs the modules needed to read the user’s messages and mail, listen to calls, send back the browser history, and more, which basically means taking control of nearly all aspects of the victim’s digital life. It can even listen in on encrypted audio and read encrypted text files on the device, which puts all of the device’s data up for grabs.

Since Pegasus hacks into the operating system, every activity on the phone can be monitored whenever the phone is switched on. It is as if someone were watching your phone activity over your shoulder. Pegasus operators can remotely record audio and video from the phone, extract messages, use GPS for location tracking, and recover passwords and authentication keys, all without the user noticing. Only when a device is sent for forensic screening, and experts examine the transfer of data to and from the phone, can a potential attack be confirmed. The most troubling fact of all is that, because Pegasus exploits zero-day vulnerabilities, there is little that can be done about such breaches unless operating system developers proactively ship an update aimed at protecting users from high-tech malware like Pegasus.
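As an editorial aside, the abstract’s point about forensic screening can be made concrete with a minimal, hypothetical sketch: an exported network log from a device is checked against a published list of indicator-of-compromise (IOC) domains. The file name, column names, and domains below are illustrative placeholders only, not real Pegasus indicators or any vendor’s actual tooling.

```python
import csv

# Hypothetical indicator-of-compromise (IOC) domains -- placeholders only,
# not real Pegasus infrastructure.
IOC_DOMAINS = {
    "example-bad-domain.net",
    "update-push-service.info",
}

def screen_log(path: str) -> list[dict]:
    """Return rows of an exported network log whose destination matches an IOC."""
    hits = []
    with open(path, newline="") as f:
        # Assumes a CSV export with columns: timestamp, domain, bytes_out.
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in IOC_DOMAINS:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in screen_log("network_log.csv"):
        print(f"{hit['timestamp']}  contacted suspicious domain {hit['domain']}")
```

Real-world forensic tools are far more involved, but the underlying idea is the same: the compromise is inferred from traces the spyware leaves in data flowing to and from the phone, not from anything visible to the user.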

Reyes & Ward on Digging into Algorithms: Legal Ethics and Legal Access

Carla Reyes (Southern Methodist University – Dedman School of Law) & Jeff Ward (Duke University School of Law) have posted “Digging into Algorithms: Legal Ethics and Legal Access” (Nevada Law Journal, Vol. 21, No. 1, pp. 325-377, 2020) on SSRN. Here is the abstract:

The current discussions around algorithms, legal ethics, and expanding legal access through technological tools gravitate around two themes: (1) protection of the integrity of the legal profession and (2) a desire to ensure greater access to legal services. The hype cycle often pits the desire to protect the integrity of the legal profession against the ability to use algorithms to provide greater access to legal services, as though they are mutually exclusive. In reality, the arguments around protecting the profession from the threats posed by algorithms represent an over-fit in relation to what algorithms can actually achieve, while the visions of employing algorithms for access to justice initiatives represent an under-fit in relation to what algorithms could provide. A lack of precision about algorithms results in blunt protections of professional integrity leaving little room for the potential benefits of algorithmic tools. In other words, this incongruence persists because of imprecise understandings and unrealistic characterizations of the algorithmic technologies and how they fit within the broader technology of law itself. This Article provides an initial set of tools for empowering lawyers with a better understanding of, and critical engagement with, algorithms. With the goal of encouraging a more nuanced discussion around the ethical dimensions of using algorithms in legal technology—a discussion that better fits technological reality—the Article argues for lawyers and non-technologists to shift away from evaluating legal technology through a lens of mere algorithms—as though they can be evaluated outside of a specific context—to a focus on understanding algorithmic systems as technology created, manipulated, and used in a particular context. To make this argument, this Article first reviews the current use of algorithms in legal settings, both criminal and civil, reviewing the related literature and regulatory responses. This Article then uses the shortcomings of legal technology lamented by the current literature and the related regulatory responses to demonstrate the importance of shifting our collective paradigm from a consideration of law and algorithms to law and algorithmic systems. Finally, this Article offers a framework for use in assessing algorithmic systems and applies the framework to algorithmic systems employed in the legal context to demonstrate its usefulness in accurately separating true tensions from those that merely reverberate through the hype cycle. In using the framework to reveal areas at the intersection of law and algorithms truly most ripe for progress, this Article concludes with a call to action for more careful design of both legal systems and algorithmic ones.

Hughes on Designing Effective Regulation for Blockchain-based Markets

Heather Hughes (American University – Washington College of Law) has posted “Designing Effective Regulation for Blockchain-based Markets” (Journal of Corporation Law (forthcoming 2021)) on SSRN. Here is the abstract:

Effective regulation of blockchain-based markets calls for coordination among lawyers, coders, businesses, and lawmakers. How might we achieve adequate coordination and why is it important? This article takes up these questions, using the example of one, increasingly popular blockchain-based transaction: the issuance of tokens backed by off-chain assets. The objective here is not to advocate for a particular regulatory treatment for asset tokenization, but rather to use this deal type as a springboard to discuss what “effective regulation” means in the context of blockchain-enabled markets. The topic of regulation often conjures a public/private dynamic in which private actors generate and trade financial claims and public agencies control for excessive risks. Focusing on a public/private dynamic can obscure the regulatory role of complex private-law doctrines (contract and property) that enable enforceable deals in the first place. Effective regulation of blockchain-based markets should harmonize on-chain asset partitioning and off-chain expectations. Perhaps lawmakers should develop code-friendly rules that can supplant messy common-law doctrines that govern market-dominant transactions that are migrating to decentralized platforms. How do we craft rules that preserve existing private-law policy choices yet also comport with automated transactions? The decentralized issuance of tokenized assets provides a rich example with which to consider this question. We must think critically about what we regulate, who the regulators are, and how regulation supports markets. Failure to do so could squander the potential of emerging platforms.
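To make the deal type Hughes discusses concrete, here is a toy sketch of a ledger in which fungible tokens are issued against, and traded separately from, an off-chain asset. The class name, the receivables example, and the account labels are hypothetical illustrations, not a description of any actual blockchain platform or of the article’s proposals.

```python
from dataclasses import dataclass, field

@dataclass
class AssetToken:
    asset_id: str                 # reference to the off-chain asset (e.g., a pool of receivables)
    total_supply: int             # number of fungible claims issued against that asset
    balances: dict[str, int] = field(default_factory=dict)

    def issue(self, holder: str, amount: int) -> None:
        """Issue new tokens to a holder, capped at the declared total supply."""
        if sum(self.balances.values()) + amount > self.total_supply:
            raise ValueError("issuance would exceed total supply")
        self.balances[holder] = self.balances.get(holder, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """On-chain transfer of the claim; the off-chain asset itself never moves."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

# Example: tokenize a pool of receivables and trade a slice of it.
pool = AssetToken(asset_id="receivables-pool-42", total_supply=1_000)
pool.issue("originator", 1_000)
pool.transfer("originator", "investor-a", 250)
print(pool.balances)  # {'originator': 750, 'investor-a': 250}
```

The sketch also illustrates the article’s harmonization problem: the code enforces supply caps and balances on-chain, but nothing in it can guarantee that the off-chain asset exists, is unencumbered, or that off-chain priority and enforcement rules match what the token holders expect.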

Lim on Judicial Decision-Making and Explainable Artificial Intelligence

Shaun Lim (National University of Singapore (NUS) – Faculty of Law) has posted “Judicial Decision-Making and Explainable Artificial Intelligence” ((2021) 33 Singapore Academy of Law Journal 280) on SSRN. Here is the abstract:

In light of rapid developments in legal technology, it is timely to begin considering whether, and if so how, artificial intelligence (AI) can replace judges. However, given that law plays a crucial role in maintaining societal order, that judges are a crucial part of ensuring the continued well-functioning of the law, and also that there are still many unknowns in the use and deployment of AI, it would be prudent to examine and understand exactly what roles judges play in the legal system, and how they do so, before we make any bold steps towards replacing judges with AI. This article examines the current and reasonably foreseeable state of AI to consider its capabilities, as well as the process by which judges make decisions and the duties they are subject to. This article will then consider whether or how AI, given its current and foreseeable state of development, may be used in judicial decision-making, and what safeguards may be required to ensure continued confidence in a well-functioning justice system.

Tschider on Beyond the Black Box

Charlotte Tschider (Loyola University Chicago School of Law) has posted “Beyond the Black Box” (98 Denv. L. Rev. 683 (2021)) on SSRN. Here is the abstract:

As algorithms have become more complex, privacy and ethics scholars have urged artificial intelligence (AI) transparency for purposes of ensuring safety and preventing discrimination. International statutes are increasingly mandating that algorithmic decision-making be explained to affected individuals when such decisions impact an individual’s legal rights, and U.S. scholars continue to call for transparency in automated decision-making.

Unfortunately, modern AI technology does not function like traditional, human-designed algorithms. Due to the unavailability of alternative intellectual property (IP) protections and their often dynamically inscrutable nature, algorithms created by AI are frequently protected as trade secrets, a status that prohibits sharing the details of the algorithm lest the trade secret be destroyed. Furthermore, dynamic inscrutability, the true “black box,” makes these algorithms secret by definition: even their creators cannot easily explain how they work. When mandated by statute, it may be tremendously difficult, expensive, and undesirable from an IP perspective to require organizations to explain their AI algorithms. Despite this challenge, it may still be possible to satisfy safety and fairness goals by focusing instead on AI system and process disclosure.

This Article first explains how AI differs from historically defined software and computer code. This Article then explores the dominant scholarship calling for opening the black box and the reciprocal pushback from organizations likely to rely on trade secret protection—a natural fit for AI’s dynamically inscrutable algorithms. Finally, using a simplified information fiduciary framework, I propose an alternative for promoting disclosure while balancing organizational interests via public AI system disclosure and black-box testing.
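For readers unfamiliar with the last point, here is a minimal sketch of what black-box testing can look like in spirit: the model is probed only through its predictions, and outcome rates are compared across groups without ever opening the model or its trade-secret internals. The stand-in model, the income-based rule, and the synthetic data are purely illustrative assumptions, not the article’s framework.

```python
import random

def opaque_model(applicant: dict) -> bool:
    """Stand-in for a vendor model we can query but cannot inspect."""
    return applicant["income"] + random.gauss(0, 5_000) > 50_000

def approval_rate(applicants: list[dict]) -> float:
    """Share of applicants the opaque model approves."""
    return sum(opaque_model(a) for a in applicants) / len(applicants)

random.seed(0)
# Two synthetic applicant groups with different income distributions.
group_a = [{"income": random.gauss(55_000, 10_000)} for _ in range(1_000)]
group_b = [{"income": random.gauss(48_000, 10_000)} for _ in range(1_000)]

gap = approval_rate(group_a) - approval_rate(group_b)
print(f"approval-rate gap between groups: {gap:.2%}")
```

The point of the exercise is that a regulator or auditor can measure disparate outcomes from the outside, which is one way disclosure obligations might be satisfied without forcing the disclosure of the algorithm itself.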

Peng on Autonomous Vehicle Standards under the TBT Agreement

Shin-yi Peng (National Tsing Hua University) has posted “Autonomous Vehicle Standards under the TBT Agreement: Disrupting the Boundaries?” (in Shin-yi Peng, Ching-Fu Lin & Thomas Streinz (eds), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (Cambridge University Press, 2021)) on SSRN. Here is the abstract:

Products that incorporate AI will require the development of a range of new standards. This chapter uses the case of connected and autonomous vehicle (CAV) standards as a window onto how this “disruptive innovation” may alter the boundaries of international trade agreements. Amid the transition to a driverless future, the transformative nature of disruptive innovation makes the interpretation and application of trade rules challenging. This chapter offers a critical assessment of two systemic issues: the goods/services boundary and the public/private sector boundary. Looking to the future, regulations governing CAVs will become increasingly complex as the level of automation evolves toward Levels 3–5. The author argues that disruptive technologies have an increasingly fundamental and structural impact on the existing trade disciplines.