Huntington on AI Companions and the Lessons of Family Law

Clare Huntington (Columbia Law) has posted “AI Companions and the Lessons of Family Law” (110 Minn. L. Rev. (Forthcoming 2025)) on SSRN. Here is the abstract:

Virtual friends and lovers powered by artificial intelligence are rapidly moving to the center of our emotional and social lives. Millions of people turn to AI companions every day for conversation, romance, sexual intimacy, therapy, and education. AI companionship holds promise, potentially reducing loneliness, supporting people without access to mental health treatment, helping students learn, and offering a judgment-free space for sensitive conversations. But AI companionship also raises significant concerns. The technology’s addictiveness can undermine human relationships. Therapy bots may prove more harmful than helpful. AI companions can be emotionally abusive. And their access to the most intimate aspects of users’ lives poses distinct privacy challenges.

As lawmakers and policy experts reckon with the benefits and serious risks of AI companionship, they must account for its distinctive features. Unlike interacting with other forms of AI—being driven in an autonomous vehicle, say, or getting help with coding—people are in a relationship with their AI companion. Any regulatory approach must address this relationality, especially the human drive to attach to others and the vulnerability that comes with that attachment.

Legal scholars have long argued that the regulation of technology must account for relationality, and this Article demonstrates that family law—the law of relationships—is a ready means to do so. As a foundational matter, any effort to regulate AI companionship must explain why the legal system should act. Family law helps answer this question by debunking the widespread belief that relationships are purely a private matter. Family law establishes the strong state interest in nurturing positive relationships and addressing harm in abusive and neglectful relationships. These state interests apply not only to human relationships but also to human-AI relationships.

Family law also helps answer the question of how to regulate AI companionship. Family law recognizes, for example, that legal intervention is often necessary to shift the power imbalance that facilitates harmful relationships—a lesson that should be applied to the power imbalance between technology companies and people using AI companions. And family law teaches that expertise and licensing are necessary for mental health experts to work with a person at any age, although AI companions marketed for therapeutic purposes have not been subject to similar gatekeeping. Finally, family law holds lessons for advocacy, showing that it is possible to advance reasonable regulation notwithstanding the polarized political climate and considerable antipathy to regulating the technology industry, at least at the federal level. Family law points, for example, towards state-level interventions rather than action by Congress or federal agencies, and it demonstrates that regulations targeting minors enjoy broader acceptance than those targeting adults.

In short, AI companionship is a new kind of relationship, bringing profound and unrecognized change to the landscape of our intimate lives. Legal scholars and policymakers must start grappling with this new world now. Family law holds great promise to accelerate that reckoning.

Li & Sang on Assessing the Legality of Privacy Policies in Chinese Mobile Applications: A Textual Analysis Approach

Juan Li (Central South U) and Jinze Sang (China U of Political Science and Law) have posted “Assessing the Legality of Privacy Policies in Chinese Mobile Applications: A Textual Analysis Approach” (J Law Ethics & Tech 2024). Here is the abstract:

Chinese regulators have intensified their supervision over personal information collection and utilization following the promulgation of the Personal Information Protection Law (PIPL) and other data protection regulations in November 2021. Whilst regulators primarily rely on data flows to identify suspicious activities in data collection and transfer during and after the use of apps, there is still a lack of protection measures to prevent privacy breaches. Privacy policies are important for enhancing the prevention and protection of the privacy rights and interests of app users, as well as for fostering self-regulation among operators. However, privacy policies are often lengthy and replete with vague expressions, making them difficult for users to read and thus inadequate for safeguarding users’ information rights. It is crucial to address the power imbalance between operators and users and to assess the legality of app privacy policies formulated by operators. This paper introduces a methodology for evaluating the legality of privacy policies based on legal knowledge. First, we collected policy texts and constructed the Children’s Privacy Policy Corpus (CCPP-181), which covers a variety of privacy policies for children’s apps. Then, we proposed a legality evaluation method grounded in regulatory standards and applied it to annotate the CCPP-181 corpus. After three rounds of annotation, 20.2% of the 1160 sentences in the corpus were identified as having legality problems. Based on this legal text analysis method, the paper analyzes the legality issues in app privacy policies, in order to address the inequality embedded in app privacy policies and protect user data security.
