Michal Lavi (The Hadar Jabotinsky Center for Interdisciplinary Research of Financial Markets) and Hadar Yoana Jabotinsky (The Hadar Jabotinsky Center for Interdisciplinary Research of Financial Markets) have posted “Seeing is Believing? Deepfakes in Financial Markets” (44 Cardozo Arts & Ent. L.J. 55 (2026)) on SSRN. Here is the abstract:
“We let a genie out of the bottle when we developed nuclear weapons… AI is somewhat similar; it’s part way out of the bottle.”
–Warren Buffett, at his annual shareholder meeting.
An AI-powered tool recently mimicked Warren Buffett’s image and voice so convincingly that even his own family could have been deceived. This striking example highlights the transformative potential of voice cloning and deepfakes: technology that leverages artificial intelligence (AI) to create hyper-realistic audio and video content. By blurring the boundary between authentic and synthetic creation, deepfakes make it possible to fabricate moments that never occurred. Recent advances in AI and user-friendly software have made the technology more accessible and fueled its proliferation, enabling even individuals with minimal technical skills to produce compelling deepfakes at little to no cost.
While deepfakes offer promising positive applications, such as restoring voices, animating art, or enhancing online shopping, they also have a dark side. Deepfakes have been weaponized to spread misinformation, create fake pornography, and disseminate fake news. Although research often focuses on deepfakes in social media, targeted scams using deepfakes are a growing concern. These scams often involve fabricated evidence, identity theft, or highly convincing impersonations executed with alarming precision, frequently aimed at facilitating financial fraud.
Deepfakes pose significant threats to personal security, national security, financial stability, and democracy, and addressing their harmful effects is urgent. This Article asks how policymakers should regulate the use of this technology and confront and mitigate its harmful effects in the context of financial markets. Rejecting a one-size-fits-all regulatory framework, it advocates tailored strategies: for social media deepfakes, the focus should be on balancing free speech with improved content moderation; for targeted scams, new security standards and verification mechanisms are imperative.
Contributing to the legal scholarship, this Article provides a comprehensive overview of the deepfake phenomenon, detailing its motivations, harms, and societal impacts. It emphasizes the overlooked yet pressing issue of deepfake-driven financial scams, analyzing the unique challenges these targeted distortions of reality pose. The Article critiques existing legislative efforts, arguing that they are ill-suited to address narrow, targeted scams. Finally, it proposes tailored, context-specific solutions to mitigate the dangers posed by this technology. The Article concludes by underscoring that as the line between real and fake continues to blur, our legal, organizational, and ethical frameworks must evolve to safeguard truth.
