Nay on Large Language Models as Corporate Lobbyists

John Nay (Stanford CodeX) has posted “Large Language Models as Corporate Lobbyists” on SSRN. Here is the abstract:

We demonstrate a proof-of-concept of a large language model conducting corporate lobbying-related activities. An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. These results suggest that, as large language models continue to exhibit improved natural language understanding capabilities, performance on lobbying-related tasks will continue to improve. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.
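For readers curious about the mechanics, the relevance-assessment step the abstract describes (prompt a completion model with a bill and a company, then read off a yes/no judgment, an explanation, and a confidence level) can be sketched in a few lines of Python. This is a minimal illustration and not the paper's actual code: the prompt wording, the input variables, and the output format are my assumptions, and it uses the legacy (pre-1.0) openai Python SDK interface that was current when text-davinci-003 was available.

# Hypothetical sketch of the bill-relevance step described in the abstract.
# Uses the legacy (pre-1.0) openai Python SDK; text-davinci-003 has since
# been retired, so treat this as illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied via env/config

def assess_relevance(bill_summary: str, company_description: str) -> str:
    """Ask the model whether a bill is relevant to a company, requesting an
    explanation and a confidence level. The prompt format is an assumption,
    not the paper's exact prompt."""
    prompt = (
        "Company description:\n" + company_description + "\n\n"
        "Bill summary:\n" + bill_summary + "\n\n"
        "Is this bill relevant to this company? Answer YES or NO, then give "
        "a one-paragraph explanation and a confidence level from 0 to 100."
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,    # deterministic output, useful for benchmarking
        max_tokens=256,
    )
    return response["choices"][0]["text"].strip()

Benchmarking against the ground-truth labels then reduces to running this over each labeled bill-company pair and comparing the parsed YES/NO answer to the label, which is how the abstract's comparison to the majority-class baseline would be scored.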

Almada & Petit on The EU AI Act: Between Product Safety and Fundamental Rights

Marco Almada (EUI Law) and Nicolas Petit (same) have posted “The EU AI Act: Between Product Safety and Fundamental Rights” on SSRN. Here is the abstract:

The European Union (“EU”) Artificial Intelligence Act (the AI Act) is a legal medley. Under the banner of risk-based regulation, the AI Act combines two repertoires of EU law, namely product safety and fundamental rights protection. Like a medley, the AI Act attempts to combine the best features of both repertoires. But like a medley, the AI Act risks delivering insufficient levels of both product safety and fundamental rights protection. This article describes these risks by reference to three classical problems of law and technology. Some adjustments to the text and spirit of the AI Act are suggested.