Colonna on Artificial Intelligence in the Internet of Health Things

Liane Colonna (Stockholm University – Faculty of Law) has posted “Artificial Intelligence in the Internet of Health Things: Is the Solution to AI Privacy More AI?” on SSRN. Here is the abstract:

The emerging power of Artificial Intelligence (AI), driven by the exponential growth in computer processing and the digitization of things, has the capacity to bring unfathomable benefits to society. In particular, AI promises to reinvent modern healthcare through devices that can predict, comprehend, learn, and act in astonishing and novel ways. While AI has an enormous potential to produce societal benefits, it will not be a sustainable technology without developing solutions to safeguard privacy while processing ever-growing sets of sensitive data.

This paper considers the tension that exists between privacy and AI and examines how AI and privacy can coexist, enjoying the advantages that each can bring. Rejecting the idea that AI means the end of privacy, and taking a technoprogressive stance, the paper seeks to explore how AI can be actively used to protect individual privacy. It contributes to the literature by reconfiguring AI not as a source of threats and challenges, but rather as a phenomenon that has the potential to empower individuals to protect their privacy.

The first part of the paper sets forth a brief taxonomy of AI and clarifies its role in the Internet of Health Things (IoHT). It then addresses privacy concerns that arise in this context. Next, the paper shifts towards a discussion of Data Protection by Design, exploring how AI can be utilized to meet this standard and in turn preserve individual privacy and data protection rights in the IoHT. Finally, the paper presents a case study of how some actors are actively using AI to preserve privacy in the IoHT.

Okidegbe on Race, Algorithms and The Practice of Criminal Law

Ngozi Okidegbe (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “When They Hear Us: Race, Algorithms and The Practice of Criminal Law” (Kansas Journal of Law & Public Policy, Vol. 29, 2020) on SSRN. Here is the abstract:

We are in the midst of a fraught debate in criminal justice reform circles about the merits of using algorithms. Proponents claim that these algorithms offer an objective path towards substantially lowering high rates of incarceration and racial and socioeconomic disparities without endangering community safety. On the other hand, racial justice scholars argue that these algorithms threaten to entrench racial inequity within the system because they utilize risk factors that correlate with historic racial inequities, and in so doing, reproduce the same racial status quo, but under the guise of scientific objectivity.

This symposium keynote address discusses the challenge that the continued proliferation of algorithms poses to the pursuit of racial justice in the criminal justice system. I start from the viewpoint that racial justice scholars are correct about currently employed algorithms. However, I advocate that as long as we have algorithms, we should consider whether they could be redesigned and repurposed to counteract racial inequity in the criminal law process. One way that algorithms might counteract inequity is if they were designed by the racially marginalized communities most impacted by them. Then, these algorithms might counterintuitively benefit these communities by endowing them with a democratic mechanism to contest the harms that the criminal justice system's operation enacts on them.

Wansley on The End of Accidents

Matthew Wansley (Yeshiva University – Benjamin N. Cardozo School of Law) has posted “The End of Accidents” on SSRN. Here is the abstract:

In the next decade, humans will increasingly share the roads with autonomous vehicles (AVs). The deployment of AVs has the potential to dramatically reduce the frequency and severity of motor vehicle crashes. Existing liability rules give companies developing AVs insufficient incentives to realize that potential. Data from real-world autonomous driving indicates that today's most advanced AVs rarely cause crashes, but often fail to avoid preventable crashes caused by other road users' errors. A growing number of scholars have proposed reforms that would make it easier for plaintiffs injured in crashes with AVs to hold AV companies liable. These reform proposals either ignore the issue of comparative negligence or would preserve some form of the defense. If AV companies avoid liability for crashes in which a human road user was negligent, they will not invest in developing technology that could prevent those crashes. This Article proposes a solution: AV companies should be held responsible for all crashes in which their AVs come into contact with other vehicles, persons, or property, regardless of fault, cause, or comparative negligence. Contact responsibility would cause AV companies to internalize the costs of all preventable crashes and lead them to make all cost-justified investments in developing safer technology. Crashes would be treated not as regrettable but inevitable accidents, but as engineering problems to be solved.