Brandon L. Garrett (Duke U Law) has posted “Artificial Intelligence and Procedural Due Process” on SSRN. Here is the abstract:
Artificial intelligence (AI) violates procedural due process rights if the government uses it to deprive people of life, liberty, or property without adequate notice or an opportunity to be heard. A wide range of government agencies deploy AI systems, including in courts, law enforcement, public benefits administration, and national security. If the government refused to disclose the reasons why it denied a person bail, public benefits, or immigration status, there would be substantial due process concerns. If the government delegates such tasks to an AI system, the due process analysis does not change. As in any other setting, we still need to ask whether a person received adequate notice and an opportunity to be heard. And further, where applicable, we need to ask whether the risk of error and the costs to rights justify forgoing interpretable and adequately tested AI.
Nor is it necessary for AI or other automated systems to operate in a “black box” manner, without providing people with notice or a way to meaningfully contest decisions. There is a ready alternative: “glass box,” or interpretable, AI systems present results so that users know what factors the system relied on, what weight it gave to each, and the strengths and limitations of the association or prediction made. Whether in a criminal investigation or a public benefits eligibility determination, interpretable AI can ensure that people have notice and can challenge any error using the procedures available. And such a system can be more readily checked for errors. Due process demands a greater opportunity to contest government decisions that raise greater reliability concerns. We need to know how reliably an AI system performs, under realistic conditions, to assess the risk of error.
Longstanding due process protections and well-developed interpretable AI approaches can ensure that AI systems safeguard due process rights. Conversely, due process rights have little meaning if the government uses “black box” systems that are not fully interpretable or fully tested for reliability and, as a result, cannot comply with procedural due process requirements. So far, there has been little government self-regulation of AI. In response, judges have begun to enforce existing due process rights in AI and other automated decisionmaking settings. As judges consider due process challenges to AI, they should consider the interpretability and reliability of AI systems. Similarly, as lawmakers and regulators examine government use of AI systems, they should ensure safeguards, including interpretability and reliability, to protect our due process rights in an increasingly AI-dominated world.
