The Duke Law & Technology Review is a student-edited online publication of Duke Law School that has been published since 2001 and is devoted to examining the evolving intersection of law and technology. Unlike traditional legal journals, DLTR focuses on short, direct, and accessible “issue briefs” or “iBriefs,” intended to provide cutting-edge insight to lawyers and non-legal professionals.
iBlawg was a DLTR side blog from 2006 to 2007.
Please note: As of February 2012, the official citation for the Duke Law and Technology Review was altered to include a volume number, followed by the title of the journal, and the page number on which the article begins. Additionally, Volume 1 includes all scholarship published from 2001-2003.
ISSN 2328-9600 (Online)
ICTs, Social Media, & the Future of Human Rights
Nikita Mehandru and Alexa Koenig
Date posted: 4-1-2019
As communication increasingly shifts to digital platforms, information derived from online open sources is becoming critical to establishing an evidentiary basis for international crimes. While journalists have led the development of many newly emerging open source investigation methodologies, courts have heightened the requirements for verifying and preserving a chain of custody—information linking all of the individuals who possessed the content and indicating the duration of their custody—creating a need for standards that are just now beginning to be identified, articulated, and accepted by the international legal community. In this article, we discuss the impact of internet-based open source investigations on international criminal legal processes, as well as challenges related to their use. We also offer best practices for lawyers, activists, and other individuals seeking to admit open source information—including content derived from social media—into courts.
Topic: open source evidence, human rights, social media, communication
Date posted: 1-5-2019
It is now possible for anyone with rudimentary computer skills to create a realistic pornographic video, known as a “deepfake,” portraying an individual engaging in a sex act that never actually occurred. Deepfakes use artificial intelligence software to impose one person’s face onto another person’s body. While pornographic deepfakes were first created to feature celebrities, they are now being generated to feature other nonconsenting individuals, like a friend or a classmate. This Article argues that several tort doctrines and recent non-consensual pornography laws are unable to handle published deepfakes of non-celebrities. Instead, a federal criminal statute prohibiting these publications is necessary to deter this activity.
Topic: deepfake, pornography, computer, video, sex
Date posted: 12-9-2018
Digitalization makes almost everything quicker, sleeker, and more efficient. Many argue cryptocurrency is the future of money and payment transfers. This paper explores how the unique nature of cryptocurrencies creates barriers to a strict application of traditional regulatory strategies. Indeed, state and federal regulators remain uncertain whether and how they can regulate this cutting-edge technology. Cryptocurrency businesses face difficulty navigating the unclear regulatory landscape, and consumers frequently fall prey to misinformation. To reconcile these concerns, this paper asserts cryptocurrency functions as “currency” or “money” and should be treated as such for regulatory purposes. It also proposes each state implement a uniform cryptocurrency-specific framework following the Uniform Regulation of Virtual-Currency Business Act. Such a harmonious approach would reduce compliance costs for cryptocurrency businesses, protect consumers, and provide satisfactory state and federal oversight.
The Future of Freedom of Expression Online
Evelyn Mary Aswad
Date posted: 12-7-2018
Should social media companies ban Holocaust denial from their platforms? What about conspiracy theorists who spew hate? Does good corporate citizenship mean platforms should remove offensive speech or tolerate it? The content moderation rules that companies develop to govern speech on their platforms will have significant implications for the future of freedom of expression. Given that the prospects for compelling platforms to respect users’ free speech rights are bleak within the U.S. system, what can be done to protect this important right? In June 2018, the United Nations’ top expert for freedom of expression called on companies to align their speech codes with standards embodied in international human rights law, particularly the International Covenant on Civil and Political Rights (ICCPR). After the controversy over de-platforming Alex Jones in August 2018, Twitter’s CEO agreed that his company should root its values in international human rights law, and Facebook referenced this body of law in discussing its content moderation policies. This is the first article to explore what companies would need to do to align the substantive restrictions in their speech codes with Article 19 of the ICCPR, which is the key international standard for protecting freedom of expression. In order to examine this issue in a concrete way, this Article assesses whether Twitter’s hate speech rules would need to be modified. This Article also evaluates potential benefits of and concerns with aligning corporate speech codes with this international standard. This Article concludes it would be both feasible and desirable for companies to ground their speech codes in this standard; however, further multi-stakeholder discussions would be helpful to clarify certain issues that arise in translating international human rights law into a corporate context.
Date posted: 11-27-2018
The ubiquity of cell phones in today’s society has forced courts to modify or abandon established but inapplicable analytical frameworks. Two such frameworks in the school setting are regulations of student speech and of student searches. This Article traces the constitutional jurisprudence of both First Amendment off-campus speech protection and Fourth Amendment search standards as applied to the school setting. It then analyzes how the Supreme Court’s ruling in Riley v. California complicates both areas. Finally, it proposes a pragmatic solution: by recognizing a categorical First Amendment exception for “substantial threats” against the school community, courts could accommodate students’ constitutional rights while upholding school administrators’ ability to maintain a safe environment.
Topic: Media & Communications
Systemic Social Media Regulation
Date posted: 6-8-2018
Social media platforms are motivated by profit, corporate image, long-term viability, good citizenship, and a desire for friendly legal environments. These managerial interests stand in contrast to the gubernatorial interests of the state, which include the promotion of free speech, the development of e-commerce, various counterterrorism initiatives, and the discouragement of hate speech. Inasmuch as managerial and gubernatorial interests overlap, a self-regulation model of platform governance should prevail. Inasmuch as they diverge, regulation is desirable when its benefits exceed its costs. An assessment of the benefits and costs of social media regulation should account for how social facts, norms, and falsehoods proliferate. This Article sketches a basic economic model. What emerges from the analysis is that the quality of discourse cannot be controlled through suppression of content, or even disclosure of source. A better approach is to modify, in a manner conducive to discursive excellence, the structure of the forum. Optimal platform architecture should aim to reduce the systemic externalities generated by the social interactions that platforms enable, including the social costs of unlawful interference in elections and the proliferation of hate speech. Simultaneously, a systemic approach to social media regulation implies fewer controls on user behavior and content creation, and attendant First Amendment complications. Several examples are explored, including algorithmic newsfeeds, online advertising, and invited campus speakers.
Topic: social media, internet, communication
Date posted: 5-19-2018
In the Digital Age, information is more accessible than ever. Unfortunately, that accessibility has come at the expense of privacy. Now, more and more personal information is in the hands of corporations and governments, for uses not known to the average consumer. Although these entities have long been able to keep tabs on individuals, with the advent of virtual assistants and “always-listening” technologies, the ease by which a third party may extract information from a consumer has only increased. The stark reality is that lawmakers have left the American public behind. While other countries have enacted consumer privacy protections, the United States has no satisfactory legal framework in place to curb data collection by greedy businesses or to regulate how those companies may use and protect consumer data. This Article contemplates one use of that data: digital advertising. Inspired by stories of suspiciously well-targeted advertisements appearing on social media websites, this Article additionally questions whether companies have been honest about their collection of audio data. To address the potential harms consumers may suffer as a result of this deficient privacy protection, this Article proposes a framework wherein companies must acquire users' consent and the government must ensure that businesses do not use consumer information for harmful purposes.
Date posted: 5-15-2018
The Communications Decency Act (CDA) provides Internet platforms with sweeping protection from liability for user-generated content. This Article discusses the costs of this current legal framework and several potential solutions. It proposes three modifications to the CDA that would use a carrot and stick to incentivize companies to take a more active role in addressing some of the most blatant downsides of user-generated content on the Internet. Despite the modest nature of these proposed changes, they would have a significant impact.
Date posted: 5-4-2018
Initial coin offerings are a source of controversy in the world of startup fundraising, and their legality is, at best, an open question. Amid soaring valuations and rumors of looming SEC action, investors and issuers alike are scrambling to forge a path forward for the token-based startups of tomorrow. While issuers may soon be forced to comply with United States securities laws, the existing regime is inadequate because it does not allow startups to capture the unique benefits of coin sales and, more importantly, it does not allow eager American investors to take part in funding the world’s next generation of technology companies.
Date posted: 5-4-2018
Automated vehicles will not only redefine the role of drivers, but also present new challenges in assessing product liability. In light of the increased risks of software defects in automated vehicles, this Note will review the current legal and regulatory framework related to product liability and assess the challenges in addressing on-board software defects and cybersecurity breaches from both the consumer and manufacturer perspective. While manufacturers are expected to assume more responsibility for accidents as vehicles become fully automated, it can be difficult to determine the scope of liability regarding unexpected software defects. On the other hand, consumers face new challenges in bringing product liability claims against manufacturers and developers.
Date posted: 3-26-2018
Privacy law in the United States has not kept pace with the realities of technological development, nor the growing reliance on the Internet of Things (IoT). As of now, the law has not adequately secured the “smart” home from intrusion by the state, and the Supreme Court further eroded digital privacy by conflating the common law concepts of trespass and exclusion in United States v. Jones. This article argues that the Court must correct this misstep by explicitly recognizing the method by which the Founding Fathers sought to “secure” houses and effects under the Fourth Amendment. Namely, the Court must reject its overly narrow trespass approach in lieu of the more appropriate right to exclude. This will better account for twenty-first century surveillance capabilities and properly constrain the state. Moreover, an exclusion framework will bolster the reasonable expectation of digital privacy by presuming an objective unreasonableness in any warrantless penetration by the state into the smart home.
Regulating Data as Property: A New Construct for Moving Forward
Jeffrey Ritter and Anna Mayer
Date posted: 3-6-2018
The global community urgently needs precise, clear rules that define ownership of data and express the attendant rights to license, transfer, use, modify, and destroy digital information assets. In response, this article proposes a new approach for regulating data as an entirely new class of property. Recently, European and Asian public officials and industries have called for data ownership principles to be developed, above and beyond current privacy and data protection laws. In addition, official policy guidance and legal proposals have been published that promise to accelerate realization of a property rights structure for digital information. But how can ownership of digital information be achieved? How can those rights be transferred and enforced? Those calls for data ownership emphasize the impact of ownership on the automotive industry and the vast quantities of operational data which smart automobiles and self-driving vehicles will produce. We looked at whether, and how, the issue is being considered in automakers' consumer-facing statements addressing the data collected by their vehicles. To formulate our proposal, we also considered continued advances in scientific research, quantum mechanics, and quantum computing which confirm that information in any digital or electronic medium is, and always has been, physical, tangible matter. Yet, to date, data regulation has sought to adapt legal constructs for “intangible” intellectual property or to express a series of permissions and constraints tied to specific classifications of data (such as personally identifiable information).
We examined legal reforms that were recently approved by the United Nations Commission on International Trade Law to enable transactions involving electronic transferable records, as well as prior reforms adopted in the United States Uniform Commercial Code and Federal law to enable similar transactions involving digital records that were, historically, physical assets (such as promissory notes or chattel paper). Finally, we surveyed prior academic scholarship in the U.S. and Europe to determine if the physical attributes of digital data had been previously considered in the vigorous debates on how to regulate personal information, or the extent, if at all, that the solutions developed for transferable records had been considered for larger classes of digital assets. Based on the preceding, we propose that regulation of digital information assets, and clear concepts of ownership, can be built on existing legal constructs that have enabled electronic commercial practices. We propose a property rules construct that clearly defines that a right to own digital information arises upon creation (whether by keystroke or machine), and suggest when and how that right attaches to specific data through the exercise of technological controls. This construct will enable faster, better adaptations of new rules for the ever-evolving portfolio of data assets being created around the world. This approach will also create more predictable, scalable, and extensible mechanisms for regulating data and is consistent with, and may improve the exercise and enforcement of, rights regarding personal information. We conclude by highlighting existing technologies and their potential to support this construct, and begin an inventory of the steps necessary to carry this process forward.
Topic: International, Media & Communications
Hacking the Internet of Things: Vulnerabilities, Dangers, and Legal Responses
Sara Sun Beale and Peter Berris
Date posted: 2-14-2018
The Internet of Things (IoT) is here and growing rapidly as consumers eagerly adopt internet-enabled devices for their utility, features, and convenience. But this dramatic expansion also exacerbates two underlying dangers in the IoT. First, hackers in the IoT may attempt to gain control of internet-enabled devices, causing negative consequences in the physical world. Given that objects with internet connectivity range from household appliances and automobiles to major infrastructure components, this danger is potentially severe. Indeed, in the last few years, hackers have gained control of cars, trains, and dams, and some experts think that even commercial airplanes could be at risk. Second, IoT devices pose an enormous risk to the stability of the internet itself, as they are vulnerable to being hacked and recruited into botnets used for attacks on the digital world. Recent attacks on major websites including Netflix and Twitter exemplify this danger. This article surveys these dangers, summarizes some of their main causes, and then analyzes the extent to which current laws like the Computer Fraud and Abuse Act punish hacking in the IoT. The article finds that although hacking in the IoT is likely illegal, the current legal regime punishes hacking after the fact and therefore lacks the prospective force needed to fully temper the risks posed by the IoT. Therefore, other solutions are needed to address the perilousness of the IoT in its current form. After a discussion of the practical and legal barriers to investigating and prosecuting hacking, we turn to the merits and pitfalls of hacking back from legal, practical, and ethical perspectives. We then discuss the advantages and disadvantages of two possible solutions—regulation and the standards approach.
Date posted: 1-8-2018
As virtual reality rapidly progresses, broadcasts can increasingly mimic the experience of actually attending a game. As the technology advances, letting viewers move freely about the game and simulating in-stadium attendance, virtual reality broadcasts near the point of being indistinguishable from the underlying game. Thus, novel issues arise regarding the ability to protect the experience through copyright. Although conventional broadcasts may be copyrighted, virtual reality broadcasts of live sports could lack protection under the Copyright Act because the elements of originality, authorship, and fixation are harder to satisfy for this type of work. If the elements that formerly protected broadcasts through copyright no longer apply, the virtual reality broadcast of the game will lose copyright protection. The virtual reality broadcaster can nevertheless receive protection for the work in several ways, such as (1) by making modifications to the transmitted broadcast, (2) through misappropriation claims, or (3) by inserting contract terms. These additional steps preserve the ability of virtual reality broadcasters to disseminate works without fear that those works will go unprotected by the law.
Topic: Copyrights & Trademarks
Peeling Back the Student Privacy Pledge
Date posted: 1-7-2018
Education software is a multi-billion dollar industry that is rapidly growing. The federal government has encouraged this growth through a series of initiatives that reward schools for tracking and aggregating student data. Amid this increasingly digitized education landscape, parents and educators have begun to raise concerns about the scope and security of student data collection. Industry players, rather than policymakers, have so far led efforts to protect student data. Central to these efforts is the Student Privacy Pledge, a set of standards that providers of digital education services have voluntarily adopted. By many accounts, the Pledge has been a success. Since its introduction in 2014, over 300 companies have signed on, indicating widespread commitment to the Pledge’s seemingly broad protections for student privacy. This industry participation is encouraging, but the Pledge does not contain any meaningful oversight or enforcement provisions. This Article analyzes whether signatory companies are actually complying with the Pledge rather than just paying lip service to its goals. By looking to the privacy policies and terms of service of a sample of the Pledge’s signatories, I conclude that noncompliance may be a significant and prevalent issue. Consumers of education software have some power to hold signatories accountable, but their oversight abilities are limited. This Article argues that the federal government, specifically the Federal Trade Commission, is best positioned to enforce compliance with the Pledge and should hold Pledge signatories to their promises.
Topic: Media & Communications
Artificial Intelligence: Application Today and Implications Tomorrow
Sean Semmler and Zeeve Rose
Date posted: 12-5-2017
This paper analyzes the applications of artificial intelligence to the legal industry, specifically in the fields of legal research and contract drafting. First, it looks at the implications of artificial intelligence (A.I.) for the current practice of law. Second, it delves into the future implications of A.I. for law firms and the possible regulatory challenges that come with A.I. The proliferation of A.I. in the legal sphere will give laymen (clients) access to the information and services traditionally provided exclusively by attorneys. With that increased access will come a change in the role that lawyers must play. A.I. is a tool that will increase access to cheaper and more efficient services, but non-lawyers lack the training to analyze and understand the information it produces. Lawyers will shift to fill this gap, using these tools to create a better work product with greater efficiency for their clients.
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Lilian Edwards and Michael Veale
Date posted: 12-4-2017
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
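The “pedagogical” strategy the abstract describes, learning an explanation of a model from the outside by querying it near a subject's data point rather than taking it apart, can be illustrated with a minimal sketch. Everything here is a hypothetical toy, not code from the article: `black_box` stands in for an opaque ML model, and `local_surrogate` fits a simple linear approximation around one query, in the spirit of subject-centric explanation methods.

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque model we cannot decompose:
    # a nonlinear scoring function of two features.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_surrogate(query, n_samples=500, scale=0.1, seed=0):
    """Explain black_box near `query` by fitting a linear model
    to its outputs on small perturbations of the query point."""
    rng = np.random.default_rng(seed)
    # Sample points in a small neighborhood around the query.
    X = query + rng.normal(0.0, scale, size=(n_samples, query.size))
    y = black_box(X)
    # Least-squares fit with an intercept column; the slopes
    # approximate each feature's local influence on the score.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # per-feature local weights

# Near the query (0, 1), the true local slopes are cos(0) = 1
# for the first feature and 2·1 = 2 for the second.
weights = local_surrogate(np.array([0.0, 1.0]))
```

The fitted weights recover the model's local sensitivities without any access to its internals, which is precisely why pedagogical approaches can sidestep the intellectual-property and trade-secret objections the authors mention.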
Date posted: 11-19-2017
After granting permission to the Internal Revenue Service to serve a digital exchange company a summons for user information, the Federal District Court for the Northern District of California created some uncertainty regarding the privacy of cryptocurrencies. The IRS views this information gathering as necessary for monitoring compliance with Notice 2014-21, which classifies cryptocurrencies as property for tax purposes. Cryptocurrency users, however, view the attempt for information as an infringement on their privacy rights and are seeking legal protection. This Issue Brief investigates the future tax implications of Notice 2014-21 and considers possible routes the cryptocurrency market can take to avoid the burden of capital gains taxes. Further, this Issue Brief attempts to uncover the validity of the privacy claims made against the customer information summons and will recommend alternative actions for the IRS to take regardless of whether it succeeds in obtaining the information.
Topic: Technology, Cryptocurrency, Tax, Privacy
Date posted: 5-17-2017
Current law concerning the militarization and weaponization of outer space is inadequate for present times. The increased implementation of “dual-use” space technologies poses obstacles for the demilitarization of space. This paper examines how far the militarization of space should be taken and also whether weapons of any kind should be placed in space. Further steps must be taken in international space law to attempt to keep the militarization and weaponization of space under control in order to promote and maintain a free outer space for research and exploration.
Embryos as Patients? Medical Provider Duties in the Age of CRISPR/Cas9
G. Edward Powell III
Date posted: 5-17-2017
The CRISPR/Cas9 genome engineering platform is the first method of gene editing that could potentially be used to treat genetic disorders in human embryos. No past therapies, genetic or otherwise, have been intended or used to treat disorders in existing embryos. Past procedures performed on embryos have exclusively involved creation and implantation (e.g., in vitro fertilization) or screening and selection of already-healthy embryos (e.g., preimplantation genetic diagnosis). A CRISPR/Cas9 treatment would evade medical malpractice law due to the early stage of the intervention and the fact that it is not a treatment for the mother. In most jurisdictions, medical professionals owe no duty to pre-viable fetuses or embryos as such, but will be held liable for negligent treatment of the mother if the treatment causes injury to a born-alive child. This issue brief discusses the science of CRISPR/Cas9, the background legal status of human embryos, and the case for considering genetically engineered embryos as patients for purposes of medical malpractice law.