Shetty & Mishra on India’s Policy of Integrating AI with Education

Kashvi Shetty (Maharashtra National Law University Mumbai) and Pranjal Mishra (Maharashtra National Law University Mumbai) have posted “India’s New Policy Progresses Towards Integrating AI with Education” on SSRN. Here is the abstract:

The rapid advancement of technology, such as Artificial Intelligence (AI), is transforming all walks of life, including education. It has become imperative for nations to integrate the modernization brought about by the technology boom in order to sustain and develop themselves. India, as a developing country, has recognized the transformative potential of AI and is rapidly taking appreciable steps towards integrating AI in various fields. Recently, India approved a new National Education Policy (NEP), which stresses educational issues such as digital literacy, integrating AI-assisted pedagogy to foster creative thinking, and articulating new directions for research and innovation in the face of an autonomous intermediary. This article will analyze whether the NEP has ably incorporated AI systems and technologies in an accessible and inclusive manner. It will also delve into whether the NEP aids in adopting newer technologies, improving the efficiency of academic tasks, embracing cultural differences, and, most importantly, narrowing the digital divide in the country.

Moore on AI Trainers in the Workplace

Phoebe V Moore (University of Leicester) has posted “AI Trainers: Who is the Smart Worker today?” on SSRN. Here is the abstract:

AI is often linked to automation and potential job losses, but it is more suitably described as an augmentation tool for data collection and usage than as a stand-alone entity, or in terms that avoid precise definition. AI machines and systems are seen to demonstrate competences that are increasingly similar to human decision-making and prediction. AI-augmented tools and applications are intended to improve human resources and allow more sophisticated tracking of productivity, attendance and even health data for workers. These tools are often seen to perform much faster and more accurately than humans. What does this mean for workers of the future, however?
If AI does actually become as prevalent and as significant as predictions would have it – and we really do make ourselves the direct mirror reflection of machines, and/or simply resources for fuelling them through the production of datasets via our own supposed intelligence in, e.g., image recognition – then we will have a very real set of problems on our hands. Potentially, workers will only be necessary for machinic maintenance or, as discussed in this chapter, as ‘AI trainers’. How can we prepare ourselves to work with smart machines, and thus to become, ourselves, ‘smart workers’?

Stix on AI Governance in the EU

Charlotte Stix (Eindhoven University of Technology; University of Cambridge – Leverhulme Centre for the Future of Intelligence) has posted “The Ghost of AI Governance Past, Present and Future: AI Governance in the European Union” on SSRN. Here is the abstract:

The received wisdom is that artificial intelligence (AI) is a competition between the US and China. In this chapter, the author will examine how the European Union (EU) fits into that mix and what it can offer as a ‘third way’ to govern AI. The chapter does so by exploring the past, present and future of AI governance in the EU. Section 1 explores and evidences the EU’s coherent and comprehensive approach to AI governance. In short, the EU ensures and encourages ethical, trustworthy and reliable technological development. It covers a range of key documents and policy tools leading to the EU’s most crucial effort to date: regulating AI. Section 2 maps the EU’s drive towards digital sovereignty through the lens of regulation and infrastructure, covering topics such as the trustworthiness of AI systems, cloud, compute and foreign direct investment. In Section 3, the chapter concludes by offering several considerations for achieving good AI governance in the EU.

Green on The Contestation of Tech Ethics

Ben Green (University of Michigan; Harvard Berkman Klein Center for Internet & Society) has posted “The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action” on SSRN. Here is the abstract:

Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted increased public scrutiny of digital technologies and soul-searching among many of the people associated with their development. In response, the tech industry, academia, civil society, and governments have rapidly increased their attention to “ethics” in the design and use of digital technologies (“tech ethics”). Yet almost as quickly as ethics discourse has proliferated across the world of digital technologies, the limitations of these approaches have also become apparent: tech ethics is vague and toothless, is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with “ethics-washing”: promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior. By looking at how ethics has been taken up in both science and business in superficial and depoliticizing ways, I recast tech ethics as a terrain of contestation where the central fault line is not whether it is desirable to be ethical, but what “ethics” entails and who gets to define it. This framing highlights the significant limits of current approaches to tech ethics and the importance of studying the formulation and real-world effects of tech ethics. In order to identify and develop more rigorous strategies for reforming digital technologies and the social relations that they mediate, I describe a sociotechnical approach to tech ethics, one that reflexively applies many of tech ethics’ own lessons regarding digital technologies to tech ethics itself.

Szoka on Antitrust, Section 230 & the First Amendment

Berin Szóka (TechFreedom) has posted “Antitrust, Section 230 & the First Amendment” (CPI Antitrust Chronicle, May 2021) on SSRN. Here is the abstract:

The First Amendment allows antitrust action against media companies for their business practices, but not for their editorial judgments. Section 230 mirrors this distinction by protecting providers of interactive computer services from being “treated as the publisher” of content provided by others, including decisions to withdraw or refuse to publish that content (230(c)(1)), and by further protecting decisions made “in good faith” to take down content, regardless of who created it (230(c)(2)(A)). Section 230 provides a critical civil procedure shortcut: when providers of interactive computer services are sued for refusing to carry the speech of others, they need not endure the expense of litigating constitutional questions. Thus, changing Section 230 could dramatically increase litigation costs, but it would not ultimately create new legal liability for allegedly “biased” or “unfair” content moderation. Nor will the First Amendment permit new quasi-antitrust remedies that compel websites to carry content they find objectionable.

Reyes on Creating Cryptolaw for the Uniform Commercial Code

Carla Reyes (Southern Methodist University – Dedman School of Law) has posted “Creating Cryptolaw for the Uniform Commercial Code” (Washington and Lee Law Review, Forthcoming) on SSRN. Here is the abstract:

A contract generally binds only its parties. Security agreements, which create a security interest in specific personal property, stand out as a glaring exception to this rule. Under certain conditions, security interests bind not only the creditor and debtor but also third-party creditors seeking to lend against the same collateral. To receive this extraordinary benefit, creditors must put the world on notice, usually by filing a financing statement with the state in which the debtor is located. Unfortunately, the Uniform Commercial Code (U.C.C.) Article 9 filing system fails to provide actual notice to interested parties and introduces the risk of heavy financial losses.

To solve this problem, this Article introduces a smart contract-based U.C.C.-1 form built using Lexon, an innovative new programming language that enables the development of smart contracts in English. The proposed “Lexon U.C.C. Financing Statement” does much more than merely replicate the financing statement in digital form; it also performs several U.C.C. rules so that, for the first time, the filing system works as intended. In demonstrating that such a system remains compatible with existing law, the Lexon U.C.C. Financing Statement also reveals important lessons about the interaction of technology and commercial law.

This Article brings cryptolaw to the U.C.C. in three sections. Section I examines the failure of the U.C.C. Article 9 filing system to achieve actual notice and argues that blockchain technology and smart contracts can help the system function as intended. Section II introduces the Lexon U.C.C. Financing Statement, demonstrating how the computer code implements U.C.C. provisions. Section II also examines the goals that influenced the design of the Lexon U.C.C. Financing Statement, discusses the new programming language used to build it, and argues that the prototype could be used now, under existing law. Section III proposes five innovations for the Article 9 filing system enabled by the Lexon U.C.C. Financing Statement. Section III then considers the broader implications of the project for commercial law, legal research around smart contracts, and the interplay between technology-neutral law and a lawyer’s increasingly important duty of technological competence. Ultimately, by providing the computer code needed to build the Lexon U.C.C. Financing Statement, this Article demonstrates not only that crypto-legal structures are possible, but that they can simplify the law and make it more accessible.
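As a rough, hypothetical illustration of the idea (not Reyes’s prototype, which is written in Lexon and whose code the Article itself supplies), a financing statement that performs filing-system rules rather than merely recording data might look like the following Python sketch. It encodes the five-year lapse rule of U.C.C. § 9-515(a) and the six-month continuation window of § 9-515(d); all names, parties, and fields are invented for the example.

    from dataclasses import dataclass
    from datetime import date, timedelta

    LAPSE_YEARS = 5                            # UCC 9-515(a): five-year effectiveness
    CONTINUATION_WINDOW = timedelta(days=182)  # approx. the six months before lapse, 9-515(d)

    @dataclass
    class FinancingStatement:
        debtor_name: str      # cf. 9-503 (sufficiency of the debtor's name)
        secured_party: str
        collateral: str       # cf. 9-504 (indication of collateral)
        filed_on: date

        @property
        def lapse_date(self) -> date:
            # Five years from filing; simplified (ignores Feb. 29 filings and
            # any continuation statements that would extend effectiveness).
            return self.filed_on.replace(year=self.filed_on.year + LAPSE_YEARS)

        def is_effective(self, on: date) -> bool:
            return self.filed_on <= on < self.lapse_date

        def continuation_window_open(self, on: date) -> bool:
            # 9-515(d): a continuation statement may be filed only within
            # the six months before the financing statement would lapse.
            return self.lapse_date - CONTINUATION_WINDOW <= on < self.lapse_date

    ucc1 = FinancingStatement(
        debtor_name="Acme Widgets LLC",
        secured_party="First Example Bank",
        collateral="All equipment and inventory",
        filed_on=date(2021, 6, 1),
    )
    print(ucc1.is_effective(date(2024, 1, 1)))              # True: within five years
    print(ucc1.continuation_window_open(date(2026, 2, 1)))  # True: lapse is 2026-06-01

On the abstract’s description, Reyes’s prototype expresses rules like these in Lexon’s English-like syntax instead, so that the same text can be read as a legal document and executed as smart-contract code.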

Recommended.

Bloch-Wehba on Content Moderation as Surveillance

Hannah Bloch-Wehba (Texas A&M University School of Law; Yale University – Yale Information Society Project) has posted “Content Moderation as Surveillance” (Berkeley Technology Law Journal, Vol. 36, 2022, Forthcoming) on SSRN. Here is the abstract:

Technology platforms are the new governments, and content moderation is the new law, or so goes a common refrain. And as platforms increasingly turn toward new, automated mechanisms of enforcing their rules, the apparent power of the private sector seems only to grow. Yet beneath the surface lies a web of complex relationships between public and private authorities that calls into question whether platforms truly possess such unilateral power. Law enforcement agencies are exerting influence over platform content rules, giving governments a louder voice in supposedly “private” decisions. At the same time, law enforcement avails itself of the affordances of social media in detecting, investigating, and preventing crime.

This Article, prepared for a symposium dedicated to Joel Reidenberg’s germinal article Lex Informatica, untangles the relationship between content moderation and surveillance. Building on Reidenberg’s fundamental insights regarding the relationships between rules imposed by legal regimes and those imposed by technological design, the Article first traces how content moderation rules intersect with law enforcement, including through formal demands for information, informal relationships between platforms and law enforcement agencies, and the impact of end-to-end encryption. Second, it critically assesses the degree to which government involvement in content moderation actually tempers platform power. Rather than providing effective oversight and checks on private power, it contends, the emergent arrangements between platforms and law enforcement institutions foster mutual embeddedness and the entrenchment of private authority within public governance.