Amy Adler (NYU School of Law) and Jeanne C. Fromer (NYU School of Law) have posted “Memes on Memes and the New Creativity” (NYU Law Review, Vol. 97, 2022) on SSRN. Here is the abstract:
Memes are the paradigm of a new, flourishing creativity. Not only are these captioned images one of the most pervasive and important forms of online creativity, but they also upend many of copyright law’s fundamental assumptions about creativity, commercialization, and distribution. Chief among these assumptions is that copying is harmful. This mismatch threatens meme culture and exposes fundamental problems in copyright law and theory, and it is all the more significant because memes are far from an exceptional case. Indeed, memes are a prototype of a new mode of creativity that is emerging in our contemporary digital era, as can be seen across a range of works. The concern with memes therefore signals a much broader problem in copyright law and theory. That is not to say that the traditional creativity that copyright has long sought to protect is dead. Far from it. Both paths of creativity, traditional and new, can be vibrant. Yet we must be sensitive to the misfit between the new creativity and existing copyright law if we want the new creativity to continue to thrive.
Heiko Richter (Max Planck Institute for Innovation and Competition) et al. have posted “To Break Up or Regulate Big Tech? Avenues to Constrain Private Power in the DSA/DMA Package” on SSRN. Here is the abstract:
Are the Digital Services Act (DSA) and the Digital Markets Act (DMA) appropriate instruments to regulate private power in the digital space? From August 30 to September 7, 2021, the Verfassungsblog and the Max Planck Institute for Innovation and Competition co-hosted the online symposium “To Break Up or Regulate Big Tech? Avenues to Constrain Private Power in the DSA/DMA Package.” This eBook brings together the 15 contributions, which offer a range of perspectives and were originally published successively on the Verfassungsblog. The collection is intended to further advance the scholarly discourse on the regulation of private power.
The authors are Ilaria Buri, Joris van Hoboken, Giovanni De Gregorio, Oreste Pollicino, Alexander Peukert, Naomi Appelman, João Pedro Quintais, Ronan Fahy, Herbert Zech, Catalina Goanta, Hannah Ruschemeier, Paddy Leerssen, Ruth Janal, Teresa Rodríguez de las Heras Ballell, Inge Graef, Jens-Uwe Franck, Martin Peitz, Rupprecht Podszun, Peter Picht, and Suzanne Vergnolle.
Lorenza M. Wilkins (Columbia Southern University) has posted “Artificial Intelligence in the Recruiting Process: Identifying Perceptions of Bias” on SSRN. Here is the abstract:
This study examined the perceived level of bias among former and current employees, supervisors, and directors who work for companies that utilize artificial intelligence. Despite the evolving role of artificial intelligence in the recruitment process, only limited research has been conducted. The theory that reinforced this study is Adams’ Equity Theory. An Organizational Inclusive Behavior survey instrument was administered to former and present employees, supervisors, and directors in the Research Triangle Park area of North Carolina (N=21). Three open-ended survey questions were included and complemented the survey instrument’s reliability. The Shapiro-Wilk test (IBM SPSS 26) and in vivo coding, leveraged by Quirkos software, supported this qualitative, non-experimental design. Age, education, ethnicity, and organizational level (employee rank) revealed a modest relationship to perceptions of bias, awareness, trust, and transparency concerning the use of artificial intelligence in the recruiting process.
Michiel Poesen (KU Leuven – Faculty of Law) has posted “Regulating Artificial Intelligence in the European Union: Exploring the Role of Private International Law” on SSRN. Here is the abstract:
This paper explores the role that private international law could play in regulating artificial intelligence (AI) in the European Union (EU). It concludes that private international law has the potential to be a piece in a complex regulatory puzzle, one that can nonetheless make an effective contribution to ensuring accountable AI.
Christopher Burr (University of Oxford – Oxford Internet Institute; The Alan Turing Institute) and David Leslie (The Alan Turing Institute) have posted “Ethical Assurance: A Practical Approach to the Responsible Design, Development, and Deployment of Data-Driven Technologies” on SSRN. Here is the abstract:
This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic assessment, which are used to operationalise normative principles, such as sustainability, accountability, transparency, fairness, and explainability, in order to identify limitations and gaps in current approaches. Second, it provides an accessible introduction to the methodology of argument-based assurance, and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method to incorporate wider ethical, social, and legal considerations, in turn establishing a novel version of argument-based assurance that we call ‘ethical assurance.’ Ethical assurance is presented as a structured means of unifying the myriad practical mechanisms that have been proposed, as it is built upon a process-based form of project governance that supports inclusive and participatory ethical deliberation while also remaining grounded in social and technical realities. Finally, it sets an agenda for ethical assurance by detailing current challenges, open questions, and next steps, which serve as a springboard to build an active (and interdisciplinary) research programme as well as contribute to ongoing discussions in policy and governance.
Pauline Kim (Washington University in St. Louis – School of Law) has posted “AI and Inequality” (Forthcoming in The Cambridge Handbook on Artificial Intelligence & the Law, Kristin Johnson & Carla Reyes, eds. (2022)) on SSRN. Here is the abstract:
This Chapter examines the social consequences of artificial intelligence (AI) when it is used to make predictions about people in contexts like employment, housing and criminal law enforcement. Observers have noted the potential for erroneous or arbitrary decisions about individuals; however, the growing use of predictive AI also threatens broader social harms. In particular, these technologies risk increasing inequality by reproducing or exacerbating the marginalization of historically disadvantaged groups, and by reinforcing power hierarchies that contribute to economic inequality. Using the employment context as the primary example, this Chapter explains how AI-powered tools that are used to recruit, hire and promote workers can reflect race and gender biases, reproducing past patterns of discrimination and exclusion. It then explores how these tools also threaten to worsen class inequality because the choices made in building the models tend to reinforce the existing power hierarchy. This dynamic is visible in two distinct trends. First, firms are severing the employment relationship altogether, relying on AI to maintain control over workers and the value created by their labor without incurring the legal obligations owed to employees. And second, employers are using AI tools to increase scrutiny of and control over employees within the firm. Well-established law prohibiting discrimination provides some leverage for addressing biased algorithms, although uncertainty remains over precisely how these doctrines will be applied. At the same time, U.S. law is far less concerned with power imbalances, and thus, more limited in responding to the risk that predictive AI will contribute to economic inequality. Workers currently have little voice in how algorithmic management tools are used and firms face few constraints on further increasing their control. Addressing concerns about growing inequality will require broad legal reforms that clarify how anti-discrimination norms apply to predictive AI and strengthen employee voice in the workplace.
Catherine M. Sharkey (NYU School of Law) has posted “AI for Retrospective Review” (8 Belmont Law Review 374 (2021)) on SSRN. Here is the abstract:
This article explores the significant administrative law issues that agencies will face as they devise and implement AI-enhanced strategies to identify rules that should be subject to retrospective review. Against the backdrop of a detailed examination of HHS’s “AI for Deregulation” pilot and the very first use of AI-driven technologies in a published federal rule, the article proposes enhanced public participation and notice-and-comment processes as necessary features of AI-driven retrospective review. It challenges conventional wisdom that divides uses of AI technologies into those that “support” agency action—and therefore do not implicate the APA’s directives—and those that “determine” agency actions and thus should be subject to the full panoply of APA demands. In so doing, it takes aim at the talismanic significance of “human in the loop” that shields AI uses from disclosure and review by casting them in a merely supportive role.
Mauritz Kop (Stanford Law School) has posted “EU Artificial Intelligence Act: The European Approach to AI” (Stanford – Vienna Transatlantic Technology Law Forum vol. 2) on SSRN. Here is the abstract:
On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.
The draft regulation seeks to codify the high standards of the EU trustworthy AI paradigm. It sets out core horizontal rules for the development, trade and use of AI-driven products, services and systems within the territory of the EU that apply to all industries.
The EU AI Act introduces a sophisticated ‘product safety regime’ constructed around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. This pre-market conformity regime also applies to machine learning training, testing and validation datasets.
The AI Act draft combines a risk-based approach built on the pyramid of criticality with a modern, layered enforcement mechanism. This means that as risk increases, stricter rules apply. Applications posing an unacceptable risk are banned outright, and fines for violation of the rules can reach 6% of a company’s global turnover.
The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe by introducing regulatory sandboxes that afford breathing room to AI developers.
The new European rules will forever change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.
Brian Haney has posted “Cryptosecurity: An Analysis of Cryptocurrency Security and Securities” (Tulane Journal of Technology & Intellectual Property, Vol. 24, 2021) on SSRN. Here is the abstract:
This Essay makes three contributions to the blockchain and law literature. First, it explores technical security aspects evolving with various governance mechanisms across blockchain networks. Second, it analyzes digital assets under U.S. securities laws and executive enforcement policies, in light of several new developments at the U.S. Securities and Exchange Commission. Third, it crystallizes cryptocurrency compliance toward an autonomous governance system, introducing a new algorithm for compliance automation.
Ellen P. Goodman (Rutgers Law) has posted “The Stakes of User Interface Design for Democracy” on SSRN. Here is the abstract:
Digital design choices such as color and font, the size and placement of action buttons, and the number of steps required to execute an action all shape the user experience (UX) and what information people absorb and release. Digital platforms and service providers can shape the UX in ways that respect user autonomy and advance accurate, high-quality information, or in ways that subvert user choice and promote deception. Social media platforms have relied on “deceptive design” in many respects, making it easier to manipulate users into taking actions, surrendering data, and adopting beliefs they might not otherwise choose. This paper proposes that platforms replace “deceptive design” with empowering or “democratic design.” Regulators have incorporated design best practices in a number of offline policies. This paper surveys key examples, ranging from emissions labels on cars to health warnings on cigarette packs, where regulations were guided by design principles. Design best practices should inform online policy as well.