Rub on the Rise of Generative Artificial Intelligence

Guy A. Rub (Ohio State Law) has posted “The Rise of Generative Artificial Intelligence” on SSRN. Here is the abstract:

I prepared these reading materials on copyright and generative AI for my copyright and disruptive technologies course. The first part deals with the copyrightability of works created with generative AI tools, including, among others, edited excerpts from the decision of the U.S. District Court for the District of Columbia in Thaler v. Perlmutter, the U.S. Copyright Office’s guidelines, and its decisions regarding Zarya of the Dawn and Théâtre D’opéra Spatial. The second part deals with the copyright liability for training generative AI tools, including, among others, edited excerpts from the complaint in NYTimes v. Microsoft and Judge Bibas’s opinion in Thomson Reuters v. Ross Intelligence.

Solove & Hartzog on Kafka in the Age of AI and the Futility of Privacy as Control

Daniel J. Solove (George Washington Law) and Woodrow Hartzog (Boston University Law) have posted “Kafka in the Age of AI and the Futility of Privacy as Control” (104 Boston University Law Review 1021 (2024)) on SSRN. Here is the abstract:

Although writing more than a century ago, Franz Kafka captured the core problem of digital technologies – how individuals are rendered powerless and vulnerable. During the past fifty years, and especially in the 21st century, privacy laws have been sprouting up around the world. These laws are often based heavily on an Individual Control Model that aims to empower individuals with rights to help them control the collection, use, and disclosure of their data.

In this Essay, we argue that although Kafka starkly shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence. In Kafka’s world, characters readily submit to authority, even when they aren’t forced and even when doing so leads to injury or death. The victims are blamed, and they even blame themselves.

Although Kafka’s view of human nature is exaggerated for darkly comedic effect, it nevertheless captures many truths that privacy law must reckon with. Even if dark patterns and dirty manipulative practices are cleaned up, people will still make bad decisions about privacy. Despite warnings, people will embrace the technologies that hurt them. When given control over their data, people will give it right back. And when people’s data is used in unexpected and harmful ways, people will often blame themselves.

Kafka provides key insights for regulating privacy in the age of AI. The law can’t empower individuals when it is the system that renders them powerless. Ultimately, privacy law’s primary goal should not be to give individuals control over their data. Instead, the law should focus on ensuring a societal structure that brings the collection, use, and disclosure of personal data under control.

Kaal on AI Governance

Wulf A. Kaal (University of St. Thomas – School of Law (Minnesota)) has posted “AI Governance” on SSRN. Here is the abstract:

In the rapidly evolving landscape of artificial intelligence (AI), governance frameworks are increasingly pivotal. As AI technologies become more complex and integral to various sectors, the mechanisms to oversee and regulate these systems must evolve correspondingly. Traditional governance approaches often rely on static, predefined rules that may not adapt quickly enough to the pace of AI development or the nuanced challenges it presents. These conventional methods, largely reactive or fixed to ex-post solutions, are proving insufficient for the dynamic nature of AI technologies.

The proposed AI governance system integrates decentralized web3 community governance and federated communication platforms, forming a sophisticated framework for dynamic, anticipatory, and participatory oversight of AI development. Key components include a federated forum platform structured as a Weighted Directed Acyclic Graph (WDAG), and specialized smart contracts for managing tasks and validation. This setup not only facilitates real-time consensus-building and decision-making via web3 community governance but also supports a scalable, transparent communication network. Validation Pools and Reputation tokens within this framework play crucial roles in maintaining an updated and responsive governance system, reflecting the collective decisions and ethical standards of the community.

This system’s effectiveness is demonstrated through applications like medical diagnosis AI and autonomous driving AI, where each development stage is captured as vertices in the WDAG, documenting key compliance and operational metrics. Directed edges in this graph link these stages to relevant legal and ethical standards, with assigned weights emphasizing areas critical for compliance and safety. The dynamic nature of WDAG allows for continuous updates and integration of new regulations or ethical guidelines, ensuring AI governance remains current with technological and societal shifts. This model thus ensures AI systems are not only technologically advanced but also ethically aligned and legally compliant, effectively balancing innovation with responsible governance.
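For readers unfamiliar with the data structure the abstract invokes, the idea of a weighted directed acyclic graph linking development stages to standards can be sketched roughly as follows. This is a minimal illustration only, not code from the paper; all vertex names, metrics, and weights below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    """A development stage, or a legal/ethical standard, in the governance graph."""
    name: str
    metrics: dict = field(default_factory=dict)  # e.g., compliance/operational metrics

@dataclass
class WDAG:
    """Weighted DAG: directed edges link development stages to standards."""
    edges: dict = field(default_factory=dict)  # (src_name, dst_name) -> weight

    def link(self, src: Vertex, dst: Vertex, weight: float) -> None:
        # Per the abstract, higher weights flag areas critical for
        # compliance and safety.
        self.edges[(src.name, dst.name)] = weight

    def critical_links(self, threshold: float):
        # Return edges whose weight meets or exceeds the threshold.
        return [(s, d, w) for (s, d), w in self.edges.items() if w >= threshold]

# Hypothetical example: a medical-diagnosis AI's training stage linked
# to two (invented) standards with illustrative weights.
g = WDAG()
stage = Vertex("model-training", {"dataset_audit": "passed"})
privacy = Vertex("health-privacy-rule")
bias = Vertex("bias-testing-guideline")
g.link(stage, privacy, 0.9)
g.link(stage, bias, 0.4)
print(g.critical_links(0.5))  # -> [('model-training', 'health-privacy-rule', 0.9)]
```

New vertices (stages) and edges (links to newly adopted standards) can be appended over time, which is how the abstract envisions the graph staying current with regulatory change.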