Coglianese & Lampmann on Contracting for Algorithmic Accountability

Cary Coglianese (University of Pennsylvania Law School) and Erik Lampmann (University of Pennsylvania Law School) have posted “Contracting for Algorithmic Accountability” (Administrative Law Review Accord, vol. 6, p. 175, 2021) on SSRN. Here is the abstract:

As local, state, and federal governments increase their reliance on artificial intelligence (AI) decision-making tools designed and operated by private contractors, so too do public concerns increase over the accountability and transparency of such AI tools. But current calls to respond to these concerns by banning governments from using AI will only deny society the benefits that prudent use of such technology can provide. In this Article, we argue that government agencies should pursue a more nuanced and effective approach to governing the governmental use of AI by structuring their procurement contracts for AI tools and services in ways that promote responsible use of algorithms. By contracting for algorithmic accountability, government agencies can act immediately, without any need for new legislation, to reassure the public that the government's machine-learning algorithms will be deployed responsibly. Furthermore, unlike with the adoption of legislation, a contracting approach to AI governance can be tailored to meet the needs of specific agencies and particular uses. Contracting can also provide a means for government to foster improved deployment of AI in the private sector, as vendors that serve government agencies may shift their practices more generally to foster responsible AI use with their private sector clients. As a result, we argue that government procurement officers and agency officials should consider several key governance issues in their contract negotiations with AI vendors. Perhaps the most fundamental issue relates to vendors' claims to trade secret protection, an issue that we show can be readily addressed during the procurement process. Government contracts can be designed to balance legitimate protection of proprietary information with the vital public need for transparency about the design and operation of algorithmic systems used by government agencies. We further urge consideration in government contracting of other key governance issues, including data privacy and security, the use of algorithmic impact statements or audits, and the role for public participation in the development of AI systems. In an era of increasing governmental reliance on artificial intelligence, public contracting can serve as an important and tractable governance strategy to promote the responsible use of algorithmic tools.

McCarl on The Limits of Law and AI

Ryan McCarl (UCLA School of Law) has posted “The Limits of Law and AI” (University of Cincinnati Law Review, 2022) on SSRN. Here is the abstract:

For thirty years, scholars in the field of law and artificial intelligence (AI) have explored the extent to which tasks performed by lawyers and judges can be assisted by computers. This article describes the medium-term outlook for AI technologies and explains the obstacles to making legal work computable. I argue that while AI-based software is likely to improve legal research and support human decisionmaking, it is unlikely to replace traditional legal work or otherwise transform the practice of law.

Jessop on Supervising the Tech Giants

Julian Jessop (Institute of Economic Affairs) has posted “Supervising the Tech Giants” (Institute of Economic Affairs Current Controversies No. 56) on SSRN. Here is the abstract:

The rise of the ‘tech giants’ is, of course, a significant commercial threat to more traditional media, but it also raises some potentially important issues of public policy. These companies have variously been accused of facilitating the spread of ‘fake news’ and extremist material, dodging taxes, and exploiting their market dominance. In reality, ‘fake news’ is nothing new, nor is it as influential as many assume. Most people rely on multiple sources for information. Television and newspapers are still trusted far more than online platforms. The market is also coming up with its own checks and balances, such as fact-checking services. The internet may have provided more channels for ‘fake news’, but new technology has also made it easier to find the truth. The UK newspaper industry itself shows how self-regulation can be effective, especially when supported by the backstops of existing criminal and civil law. The internet is not the regulation-free zone that some suppose. But, in any event, the tech companies have a strong economic interest in protecting their brands and being responsive to the demands of their customers and advertisers. It may be worth considering some ways in which these pressures could be strengthened, such as obliging news platforms to publish a code of practice like those adopted by newspapers. However, most already do, and the rest will surely follow. The taxation of tech giants raises many issues relevant to any multinational company. It seems reasonable to expect firms to explain clearly what tax they pay. But an additional levy on the activities of tech companies would be inconsistent with the general principles of fair and efficient taxation.