Nicholas Nugent (U Tennessee) has posted “Generative Cybersecurity” on SSRN. Here is the abstract:
Cybersecurity is experiencing a sea change, and AI is to blame. Bots, which now outnumber human users, prowl networks day and night, using deep learning to discover vulnerabilities and threatening to make all software essentially transparent. The number of skilled human hackers alive in the world no longer poses a meaningful constraint on the amount of damage that can be done, as even the least experienced “script kiddie” can outsource his dark arts to hundreds of self-executing AI agents, each independently working to worm its way into a target’s system. And the age of real-time deepfakes is now upon us, as scammers personally converse with the victims of their social engineering schemes while powerful hardware dynamically swaps their faces and voices with those of impersonated relatives or coworkers.
At the same time, traditional legal doctrines are showing their age. Firms have few legal options to stop bots from continually probing their systems for vulnerabilities, as courts long ago hollowed out the tort of cyber-trespass. The federal Computer Fraud and Abuse Act punishes hackers who use AI to break into protected computers just as surely as it punishes traditional hacking. But the 1986 statute's language is poorly suited to situations in which adversaries trick lawful AI systems into voluntarily spilling their secrets without ever crossing the access barrier—the problem of “adversarial AI.” And wire fraud, theft, and right-of-publicity laws map awkwardly, if they map at all, onto certain elements of deepfake scams.
Existing liability frameworks compound the problem, making it difficult to hold AI companies accountable when bad actors use their tools to harm others. Negligence doctrines typically insulate vendors from secondary liability where products admit of substantial lawful uses or where intervening criminality breaks the chain of proximate causation. And firms that deploy defensive AI systems to fight fire with fire may likewise find themselves without a backstop if those systems fail, or unexpectedly wreak havoc on others, given tort law’s reluctance to apply product liability rules to software.
Despite a growing literature on legal issues related to artificial intelligence and a separate body of cybersecurity scholarship, the legal academy has not yet treated AI-driven cybersecurity as a distinct, system-level field of inquiry. Where scholars or policymakers acknowledge that a particular AI use case challenges a traditional rule, they tend to offer ad hoc fixes (or none at all). As a result, cybersecurity law risks falling behind in a rapidly evolving threat environment, leaving firms and individuals without adequate remedies.
This Article tackles the problem head-on, offering the first system-level treatment of the “AI problem” facing cybersecurity and, by extension, cybersecurity law. It provides a comprehensive taxonomy of the ways AI intersects with cybersecurity. That taxonomy organizes the field around three primary roles: using AI as a tool for malicious cyber-activity (“AI as Threat”), attacking AI systems (“AI as Target”), and leveraging AI’s defensive capabilities (“AI as Shield”). It builds out detailed subcategories grounded in specific technologies, operations, and injuries, and draws on the computer science literature and real-world incidents to show that each distinct threat is real rather than theoretical.
The Article is not limited to technical description: it systematically identifies the existing laws and doctrines that apply to each distinct use case and exposes the structural gaps AI has created. It then advances an integrated reform agenda designed to realign cybersecurity law with a landscape defined by autonomous, learning systems. The Article proposes five core shifts: rethinking the doctrine of electronic trespass, decentering intrusion as a necessary element in hacking offenses, protecting individual likeness per se, establishing artificial duties of care, and recalibrating negligence doctrine for agentic systems. Taken together, these reforms would move cybersecurity law beyond its human- and intrusion-era origins and toward a design suited to the new reality of machine-mediated threats and security.