Cao et al. on Multi-Dimensional Risk Identification and Dynamic Evolution Analysis of Generative Artificial Intelligence: A Multi-Source Heterogeneous Data-Driven Approach

Jing Cao (Xiangtan U) et al. have posted “Multi-Dimensional Risk Identification and Dynamic Evolution Analysis of Generative Artificial Intelligence: A Multi-Source Heterogeneous Data-Driven Approach” on SSRN. Here is the abstract:

Addressing the research gap in systematic Generative Artificial Intelligence (GenAI) risk identification, this study constructs a multi-source risk corpus (e.g., official government documents, microblog posts). AI-enhanced methods (TextRank, Word2Vec) extract and expand a domain-specific risk lexicon. Employing LDA topic modeling, a multi-dimensional risk indicator system is developed from public and government perspectives. Critical risk points are identified and evolution pathways traced using significance metrics. Key findings reveal a public focus on social-level impacts versus a government emphasis on technical standards; legal and ethical issues as pivotal tensions; and emerging interactive effects of composite risks. This provides methodological references for governmental AI risk governance.

Almada & Radu on AI Governance

Marco Almada (U Luxembourg Law) and Anca Radu (European U Institute) have posted “AI Governance” on SSRN. Here is the abstract:

Artificial Intelligence (AI) is a salient topic, as AI-powered technologies gain space in the most varied aspects of our lives. The diffusion of such technologies has been accompanied by the emergence of complex frameworks for AI governance, involving public and private actors from all around the world. This entry illustrates how these frameworks give rise to questions of extraterritoriality. Some of these questions pertain to the reach of national-level legal instruments relating to AI, while others stem from indirect extraterritorial effects to which the actors involved in AI governance are subject or even from the cross-border character of the technologies in question. After drawing a picture of those sources, the entry sketches how AI-related questions might be approached in the broader studies of extraterritoriality.

Abiri on ML-Mediated Creativity

Gilad Abiri (Peking U Transnational Law) has posted “ML-Mediated Creativity” (Harvard Art Law Review Musings, June 2025) on SSRN. Here is the abstract:

This essay examines how machine learning systems fundamentally alter the dynamics of cultural innovation. Using anime’s post-war evolution as a case study, it argues that genuine creativity emerges from productive friction—the collision of different cultural traditions, generations, and artistic approaches. However, ML systems trained on existing cultural works create statistical averages that eliminate this generative friction, replacing dynamic cultural processes with algorithmic optimization. Current intellectual property frameworks cannot address this transformation because they treat cultural works as discrete objects rather than materials for creative play. The essay proposes two interventions: preserving “friction spaces” in educational institutions and regulating ML architecture to maintain distinct cultural lineages rather than collapsing them into optimized averages.

Kumar & Rani on Revolutionizing Early Warning Systems for Natural Disasters: Integrating AI and ML-Driven Models, Tools, and Platforms

Dr. Rajendra Kumar (Supreme Court of India) and Dr. Deepika Rani (District Court Lucknow; Allahabad High Court) have posted “Revolutionizing Early Warning Systems for Natural Disasters: Integrating AI and ML-Driven Models, Tools, and Platforms” (Chapter 5 in AI and ML in Early Warning Systems for Natural Disasters, Bentham Science, 2024) on SSRN. Here is the abstract:

This chapter explores how AI is revolutionizing early warning systems for natural disasters, addressing the critical need for more effective predictive capabilities in the face of increasing disaster frequency and severity. It examines the integration of cutting-edge AI technologies, particularly Large Language Models (LLMs) and Visual Language Models (VLMs), with modern tools such as remote sensing, IoT sensors, and social media analytics for enhanced early warning and risk assessment. The chapter demonstrates how these technologies improve disaster prediction and detection through advanced data analysis, pattern recognition, and real-time monitoring, showcasing their effectiveness through platforms like NVIDIA’s Earth-2, MOBILISE, and Google Flood Hub. While highlighting AI’s transformative potential in early warning systems, the chapter also addresses critical challenges, including data privacy, algorithmic bias, and the need for transparent, explainable AI systems. Through comprehensive analysis and real-world case studies, this chapter contributes valuable insights for developing more robust and adaptive early warning systems, ultimately enhancing disaster preparedness and community resilience.

Bassini on Speech Without a Speaker: Constitutional Coverage for Generative AI Output?

Marco Bassini (Tilburg U Tilburg Institute Law) has posted “Speech Without a Speaker: Constitutional Coverage for Generative AI Output?” (European Constitutional Law Review, First View, pp. 1-37, https://doi.org/10.1017/S1574019625100771) on SSRN. Here is the abstract:

Generative AI systems’ output as speech – Constitutional coverage for AI speech in the absence of a (human) speaker – Right of individuals to receive information as a perspective for framing constitutional coverage of generative AI output – Implications of constitutional coverage for content policing and content moderation by private platforms – Trends in the interpretation of existing content moderation regimes and their applicability to generative AI systems

Baek on The Scale Effects of Data on Firm Growth: Evidence from the GDPR

Youn Baek (New York U (NYU) Leonard N. Stern School of Business) has posted “The Scale Effects of Data on Firm Growth: Evidence from the GDPR” on SSRN. Here is the abstract:

This paper investigates how the scale of data influences firm growth by leveraging the European Union’s General Data Protection Regulation (GDPR) as a natural experiment. Using bibliometric and patent data, I find that U.S.-based researchers and firms with greater reliance on European collaborators experienced declines in research output and firm performance after the GDPR took effect. While data is critical for improving decision-making and gaining competitive advantage, the analysis reveals that its effect on firm output remains the same regardless of initial AI-inventor size, implying constant returns to scale. This result challenges the “data feedback loop” theory that more data begets disproportionate productivity gains by documenting that data accumulation alone may not confer a disproportionate advantage to larger firms.