Elizabeth E. Joh (UC Davis School of Law) has posted “Ethical AI in American Policing” (Notre Dame J. Emerging Tech. 2022) on SSRN. Here is the abstract:
We know there are problems in the use of artificial intelligence in policing, but we don’t quite know what to do about them. One can also find many reports and white papers today offering principles for the responsible use of AI systems by the government, civil society organizations, and the private sector. Yet, largely missing from the current debate in the United States is a shared framework for thinking about the ethical and responsible use of AI that is specific to policing. There are many AI policy guidance documents now, but their value to the police is limited. Simply repeating broad principles about the responsible use of AI systems is less helpful than offering ones that 1) take into account the specific context of policing, and 2) consider the American experience of policing in particular. There is an emerging consensus about what ethical and responsible values should be part of AI systems. This essay considers what kind of ethical considerations can guide the use of AI systems by American police.
Patrick K. Lin (Brooklyn Law School) has posted “How to Save Face & the Fourth Amendment: Developing an Algorithmic Accountability Industry for Facial Recognition Technology in Law Enforcement” (33 Alb. L.J. Sci. & Tech. 2023 Forthcoming) on SSRN. Here is the abstract:
For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make matters worse, the Fourth Amendment—intended to limit police power and enacted to protect against unreasonable searches—has struggled to rein in new surveillance technologies since its inception.
This Article examines the Supreme Court’s Fourth Amendment jurisprudence leading up to Carpenter v. United States and suggests that the Court is reinterpreting the amendment for the digital age. Still, the too-slow expansion of privacy protections raises challenging questions about racial bias, the legitimacy of police power, and ethical issues in artificial intelligence design. This Article proposes the development of an algorithmic auditing and accountability market that not only sets standards for AI development and limitations on governmental use of facial recognition but also encourages collaboration between public interest technologists and regulators. Beyond the necessary changes to the technological and legal landscape, the current system of policing must also be reevaluated if hard-won civil liberties are to endure.
Jay A. Soled (Rutgers University) and Kathleen DeLaney Thomas (UNC School of Law) have posted “AI, Taxation, and Valuation” (Iowa Law Review, Forthcoming 2023) on SSRN. Here is the abstract:
Virtually every tax system relies upon accurate asset valuations. In some cases, this is an easy identification exercise, and the exact fair market value of an asset is readily ascertainable. Often, however, the reverse is true, and ascertaining an asset’s fair market value yields, at best, a numerical range of possible outcomes. Taxpayers commonly capitalize upon this uncertainty in their reporting practices, such that tax compliance lags and the IRS has a difficult time fulfilling its oversight responsibilities. As a by-product of this dynamic, the Treasury suffers.
This Article explores how tax systems, utilizing artificial intelligence, can strategically address asset-valuation concerns, offering practical reforms that would help obviate this nettlesome and age-old problem. Indeed, if the IRS and Congress were to take advantage of this new and innovative technological approach, doing so would bode well for more accurate asset valuations and thereby foster greater tax compliance. Put somewhat differently, in the Information Era in which we exist, it is simply no longer true that accurate asset valuations are unattainable.
Christiane Wendehorst (University of Vienna – Faculty of Law) and Jakob Hirtenlehner (same) have posted “Outlook on the Future Regulatory Requirements for AI in Europe” on SSRN. Here is the abstract:
This report was drafted as part of the research project ‘fAIr by design – solutions for discrimination reduction in AI development’. The aim of the project is to develop models and strategies to mitigate discrimination risks in the development phase of AI systems. The large-scale use of AI systems in everyday situations carries with it the risk that certain individuals or groups will suffer harm or may be disadvantaged by an algorithmic decision. If such risks are to be addressed and ultimately avoided as early as the product development phase, it is essential to clarify how they are to be classified in legal terms. Therefore, the report first assesses the concepts of fairness, bias and discrimination and illustrates the differences between these terms. Next, the existing legal framework is examined with regard to regulations that are already relevant for AI. Building on this analysis, special consideration is given to the Proposal of the European Commission on Artificial Intelligence (AI Act Proposal), which is set to play a fundamental role in the future regulation of AI. The second part of the report comprises a summary of expert interviews with representatives from law, ethics and AI research, as well as standardisation organisations.