Pietro De Giovanni (Luiss University) has posted “Blockchain Technology Applications in Businesses and Organizations” on SSRN. Here is the abstract:
Blockchain technology has the ability to disrupt industries and transform business models since all intermediaries and stakeholders can now interact with little friction and at a fraction of the current transaction costs. Using blockchain technology, firms can develop new applications and processes by pursuing transparency and control, low bureaucracy, trustless relationships, high standards of responsibility, and sustainability. As a result, businesses and organizations can successfully implement blockchain to grant transparency to consumers and end-users; remove challenges linked to pollution, fraud, human rights abuses, and other inefficiencies; and guarantee the traceability of goods and services by univocally identifying the provenance, quantity, and quality of inputs along with their treatment and origin. Blockchain Technology Applications in Businesses and Organizations reveals the true advantages that blockchain entails for firms by creating transparent and digital transactions, resolving conflicts and exceptions, and providing incentive-based mechanisms and smart contracts. This book seeks to create a clear understanding of blockchain’s applications such that business leaders can see and evaluate its real advantages. Blockchain is then analyzed not from the typical perspective of financial tools using cryptocurrencies and bitcoins but from the perspective of the business advantages it offers businesses and organizations. Specifically, the book highlights the advantages of blockchain across different segments and industries by analyzing specific aspects like procurement, manufacturing, contracts, inventory, logistics, operations, sustainability, technology, and innovation. It is an essential reference source for managers, executives, IT specialists, students, operations managers, supply chain managers, project managers, technology managers, academicians, and researchers.
Ben Chester Cheong (Singapore University of Social Sciences) has posted “Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts To Impose Liability” on SSRN. Here is the abstract:
This article discusses some of the issues surrounding artificial intelligence systems and whether artificial intelligence systems should be granted legal personhood. The first part of the article discusses whether current artificial intelligence systems should be granted rights and obligations, akin to a legal person. The second part of the article deals with imposing liability on artificial intelligence beings by analogizing with incorporation and veil-piercing principles in company law. It examines this by considering that a future board may be replaced entirely by an artificial intelligence director managing the company. It also explores the possibility of disregarding the corporate veil to ascribe liability to such artificial intelligence beings and the ramifications of such an approach in the areas of fraud and crime.
Christopher M. Bruner (University of Georgia School of Law) has posted “Artificially Intelligent Boards and the Future of Delaware Corporate Law” on SSRN. Here is the abstract:
The prospects for Artificial Intelligence (AI) to impact the development of Delaware corporate law are at once over- and under-stated. As a general matter, claims to the effect that AI systems might ultimately displace human directors not only exaggerate the foreseeable technological potential of these systems, but also tend to ignore doctrinal and institutional impediments intrinsic to Delaware’s competitive model – notably, heavy reliance on nuanced and context-specific applications of the fiduciary duty of loyalty by a true court of equity. At the same time, however, there are specific applications of AI systems that might not merely be accommodated by Delaware corporate law, but perhaps eventually required. Such an outcome would appear most plausible in the oversight context, where fiduciary loyalty has been interpreted to require good faith effort to adopt a reasonable compliance monitoring system, an approach driven by an implicit cost-benefit analysis that could lean decisively in favor of AI-based approaches in the foreseeable future.
This article discusses the prospects for AI to impact Delaware corporate law in both general and specific respects and evaluates their significance. Section II describes the current state of the technology and argues that AI systems are unlikely to develop to the point that they could displace the full range of functions performed by human boards in the foreseeable future. Section III, then, argues that even if the technology were to achieve more impressive results in the near-term than I anticipate, acceptance of non-human directors would likely be blunted by doctrinal and institutional structures that place equity at the very heart of Delaware corporate law. Section IV, however, suggests that there are nevertheless discrete areas within Delaware corporate law where reliance by human directors upon AI systems for assistance in board decision-making might not merely be accommodated, but eventually required. This appears particularly plausible in the oversight context, where fiduciary loyalty has become intrinsically linked with adoption of compliance monitoring systems that are themselves increasingly likely to incorporate AI technologies. Section V briefly concludes.
Carla Reyes (Southern Methodist University – Dedman School of Law) has posted “Autonomous Corporate Personhood” (Washington Law Review, Forthcoming) on SSRN. Here is the abstract:
Currently, several states are considering changes to their business organization law to accommodate autonomous businesses—businesses operated entirely through computer code. Several international civil society groups are also actively developing new frameworks and a model law for enabling decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood in order to respond to AI’s increasing presence in society. Despite the fairly obvious overlap between the two sets of inquiries, the two sets of legal and policy discussions only rarely intersect. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays for law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial people, whether entity, machine, or otherwise—requires approaching the issue through a systems lens to ensure law’s consideration of the varied socio-technical contexts in which artificial people exist.
To make these claims, this Article first establishes a baseline of terminology, emphasizes the importance of viewing AI as part of a socio-technical system, and reviews the existing market for autonomous corporations. Sections II and III then examine the existing debates around both artificially intelligent persons and corporate personhood, arguing that the socio-legal needs driving artificial personhood debates in both contexts include: protecting the rights of natural people, upholding social values, and creating a fiction for legal convenience. Sections II and III explore the extent to which the theories from either set of literature fit the reality of autonomous businesses, illuminating gaps and using them to demonstrate that the law must consider the socio-technical context of AI systems and the socio-legal complexity of corporations to decide how autonomous businesses will interact with the world. Ultimately, the Article identifies links between both areas of legal personhood, leveraging those links to demonstrate the Article’s core claim: developing law for artificial systems in any context should use the systems nature of the artificial artifact to tie its legal treatment directly to the system’s socio-technical reality.
Iris H-Y Chiu (University College London – Faculty of Laws, ECGI) and Ernest Lim (National University of Singapore (NUS) – Faculty of Law) have posted “Managing Corporations’ Risk in Adopting Artificial Intelligence: A Corporate Responsibility Paradigm” (Washington University Global Studies Law Review (forthcoming)) on SSRN. Here is the abstract:
Machine learning (ML) raises issues of risk for corporate and commercial use that are distinct from the legal risks involved in deploying robots that may be more deterministic in nature. Such issues of risk relate to what data is being input into ML learning processes and the risks of bias and of hidden, sub-optimal assumptions; how such data is processed by ML to reach its ‘outcome,’ leading sometimes to perverse results such as unexpected errors, harm, difficult choices, and even sub-optimal behavioural phenomena; and who should be accountable for such risks. While extant literature provides rich discussion of these issues, there are only emerging regulatory frameworks and soft law in the form of ethical principles to guide corporations navigating this area of innovation.
This article focuses on corporations that deploy ML, rather than on producers of ML innovations, in order to chart a framework for guiding strategic corporate decisions in adopting ML. We argue that such a framework necessarily integrates corporations’ legal risks and their broader accountability to society. The navigation of ML innovations is not carried out within a ‘compliance landscape’ for corporations, given that the laws and regulations governing corporations’ use of ML are still emerging. Corporations’ deployment of ML is being scrutinised by the industry, stakeholders, and broader society as governance initiatives are being developed in a number of bottom-up quarters. We argue that corporations should frame their strategic deployment of ML innovations within a ‘thick and broad’ paradigm of corporate responsibility that is inextricably connected to business-society relations.