Packin & Chagal-Feferkorn on This Is Not A Game: The Addictive Allure of Digital Companions

Nizan Geslevich Packin (City U New York (CUNY) Law) and Karni Chagal-Feferkorn (U Ottawa Common Law Section) have posted “This Is Not A Game: The Addictive Allure of Digital Companions” (Seattle University Law Review (Forthcoming 2025)) on SSRN. Here is the abstract:

Artificial Intelligence (AI) agents have become an inescapable part of modern childhood, reshaping education, leisure activities, entertainment, and social interaction. From AI-powered tutors that adapt to individual learning styles to emotionally responsive chatbots that simulate human companionship, these systems promise unprecedented personalization, cognitive stimulation, and social support. However, these benefits mask significant risks that remain unregulated and inadequately addressed. Although adults are also susceptible to forming deep emotional bonds with AI companions, often trusting them as if they possessed genuine understanding and empathy, children are particularly vulnerable. Their misplaced trust can more severely distort social development, weaken critical thinking, and foster unhealthy dependencies. Moreover, longstanding AI-related risks such as misinformation, biased content, breaches of privacy, and harmful interactions become even more concerning when children, who generally lack awareness and effective coping tools, are involved. This raises urgent questions about the psychological, social, neurological, and ethical implications, among others, of AI’s growing influence on young minds. Despite these alarming risks, existing regulatory frameworks have yet to catch up with AI’s rapid expansion. Current laws provide inadequate oversight of features that encourage addictive use, insufficient content moderation, and frequent violations of children’s privacy. Without immediate and comprehensive intervention, AI agents could irrevocably shape childhood experiences, deepening social isolation, impairing decision-making, and worsening mental health challenges, while exposing young users to additional, emerging threats. This Article delves into the intricate intersection of AI and childhood, critically examining both the tangible benefits and the profound hazards posed by emotional dependence, cognitive manipulation, and unregulated engagement.
It highlights key case studies, including instances in which AI agents inadvertently promoted harmful behaviors. To safeguard children in an increasingly AI-driven era, it proposes specific regulatory and enforcement mechanisms, including AI safety-by-design mandates, restrictions on manipulative engagement loops, and liability standards for AI developers.