Danielle D’Onfro (Washington University in St. Louis – School of Law) has posted “The New Bailments” on SSRN. Here is the abstract:
The rise of cloud computing has dramatically changed how consumers and firms store their belongings. Property that owners once managed directly now exists primarily on infrastructure maintained by intermediaries. Consumers entrust their photos to Apple instead of scrap-books; businesses put their documents on Amazon’s servers instead of in file cabinets; seemingly everything runs in the cloud. Were these belongings tangible, the relationship between owner and intermediary would be governed by the common-law doctrine of bailment. Bailments are mandatory relationships formed when one party entrusts their property to another. Within this relationship, the bailees owe the bailors a duty of care and may be liable if they fail to return the property. The parties can use contract to customize the relationship but not to disclaim it entirely.
Tracing the law of bailment from its ancient roots to the present, this Article argues that cloud storage should be understood as creating a bailment relationship. The law of bailment, though developed in the Middle Ages, provides a robust framework for governing twenty-first century electronic intermediaries. Though the kind of stored property has changed, the parties’ expectations and incentives have not. Yet the decline of litigation, the rise of arbitration, federal diversity jurisdiction, and the ever-growing dominance of contract have thus far prevented courts from applying the law of bailments to these new services.
Recognizing cloud storage as a bailment would have significant implications. Most immediately, it would suggest that important provisions in many cloud storage services’ contracts are unenforceable. A hand-collected dataset of 61 cloud storage contracts reveals that most include general disclaimers of any liability for lost data. These disclaimers are inconsistent with the duty of care that is the foundation of the law of bailment. In addition, understanding cloud storage as a bailment would have important implications for both the law of consumer protection and Fourth Amendment protections.
Vikas Didwania (University of Chicago Law School and U.S. Attorney’s Office) has posted “Privacy Amid Prosecution” on SSRN. Here is the abstract:
Prosecutors increasingly marshal electronic evidence from social media companies like Facebook and Twitter to detect and prosecute crime. For example, today’s prosecutors use social media communications to establish a defendant’s illegal activities online or his relationship to co-conspirators. Under the federal Stored Communications Act, the government is able to obtain electronic evidence by serving social media companies with a search warrant. The same is not true for criminal defendants because, on its face, the Act bans litigants other than the government from subpoenaing social media companies to obtain content information. As electronic evidence becomes more important in criminal prosecutions, defendants have sought to challenge this ban. To date, at least six federal and state appellate courts have confronted the question of whether litigants other than the government can compel service providers to produce content information.
The stakes are high. On the one hand, a decision giving access to criminal defendants and other litigants would open up for discovery the most intimate online communications, photographs, videos, and other content belonging to billions of users worldwide. On the other hand, a decision blocking access means defendants may not be able to obtain the evidence they need as they fight for their liberty. The Supreme Court is likely to soon be asked to address this question.
Criminal defendants have been aided by a scholar who has put forward in the Harvard Law Review an innovative argument that federal privilege law requires allowing litigants to subpoena information from social media companies in a way that the text of the Stored Communications Act appears to preclude. This Article challenges this novel privilege argument. Cases dating back to the telegram era of the late nineteenth century and continuing to the modern day show that, contrary to this scholar’s argument, Congress does not have to use any specific language to block defendants’ access to content. Rather, courts have applied the plain text of the law, which, alongside the Act’s structure and purpose, shows that defendants are banned from obtaining content. Moreover, I argue, courts should not rely on this new approach because it will both create a doctrinal mess in a carefully structured statute and enmesh courts in difficult policy decisions about the privacy of billions of users worldwide. Instead, Congress is best positioned to reform the statute in a way that balances the serious privacy and liberty interests at stake. Given that legislative change can take time, this Article ends by explaining the tools already available to defendants for obtaining online content.
Brett M. Frischmann (Villanova University – School of Law) has posted “Common Sense Commons: The Case of Commonsensical Social Norms” (Governing Markets as Knowledge Commons, Cambridge University Press, forthcoming) on SSRN. Here is the abstract:
This chapter examines common sense, an important domain of social knowledge. Common sense helps us effectively engage with each other and our complex world, and it often functions as social infrastructure for everyday market transactions and social interactions. Common sense does not mean universal, true, or even accurate; it often is culturally contingent, varied, and erroneous (i.e., common nonsense). The chapter explores governance challenges and the dynamic relationships between common sense, social norms, and technology.
Danielle Keats Citron (University of Virginia School of Law) and Daniel J. Solove (George Washington University Law School) have posted “Privacy Harms” on SSRN. Here is the abstract:
Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress’s judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations.
The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm.
Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations do result in negative consequences, the effects are often small – frustration, aggravation, and inconvenience – and dispersed among a large number of people. When these minor harms are done at a vast scale by a large number of actors, they aggregate into more significant harms to people and society. But these harms do not fit well with existing judicial understandings of harm.
This article makes two central contributions. The first is the construction of a road map for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various types, which to date have been recognized by courts in inconsistent ways. We set forth a typology of privacy harms that elucidates why certain types of privacy harms should be recognized as cognizable. The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of law enforcement goals and remedies. For example, existing methods of litigating privacy cases, such as class actions, often enrich lawyers but fail to achieve meaningful deterrence. Because the personal data of tens of millions of people could be involved, even small actual damages could put companies out of business without providing much of value to each individual. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.
Mark A. Lemley (Stanford Law School) has posted “The Contradictions of Platform Regulation” on SSRN. Here is the abstract:
Everyone wants to regulate the big tech companies. The desire to regulate the private actors that control so much of our lives is understandable, and some ideas for regulation make sense. But the political consensus around regulating the tech industry is illusory. While everyone wants to regulate big tech, it turns out that they want to do so in very different, indeed contradictory, ways.
These contradictions of platform regulation mean that it will be very hard to turn anti-tech popular sentiment into actual regulation, because the actual regulations some people want are anathema to others. They suggest caution in imposing regulation and an awareness of the difficult tradeoffs that are involved. But they also suggest a way forward: introducing competition to reduce the influence the tech giants have over our lives.