Date posted: 11-19-2017
After granting the Internal Revenue Service permission to serve a summons on a digital exchange company for user information, the Federal District Court for the Northern District of California created some uncertainty regarding the privacy of cryptocurrencies. The IRS views this information gathering as necessary for monitoring compliance with Notice 2014-21, which classifies cryptocurrencies as property for tax purposes. Cryptocurrency users, however, view the attempt to obtain this information as an infringement of their privacy rights and are seeking legal protection. This Issue Brief investigates the future tax implications of Notice 2014-21 and considers possible routes the cryptocurrency market can take to avoid the burden of capital gains taxes. Further, this Issue Brief examines the validity of the privacy claims raised against the customer-information summons and recommends alternative actions for the IRS to take regardless of whether it succeeds in obtaining the information.
Topic: Technology, Cryptocurrency, Tax, Privacy
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Lilian Edwards and Michael Veale
Date posted: 12-4-2017
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
Artificial Intelligence: Application Today and Implications Tomorrow
Sean Semmler and Zeeve Rose
Date posted: 12-5-2017
This paper analyzes the applications of artificial intelligence to the legal industry, specifically in the fields of legal research and contract drafting. First, it will look at the implications of artificial intelligence (A.I.) for the current practice of law. Second, it will delve into the future implications of A.I. for law firms and the possible regulatory challenges that come with A.I. The proliferation of A.I. in the legal sphere will give laymen (clients) access to the information and services traditionally provided exclusively by attorneys. With an increase in access to these services will come a change in the role that lawyers must play. A.I. is a tool that will increase access to cheaper and more efficient services, but non-lawyers lack the training to analyze and understand the information it puts out. The role of lawyers will change to fill this gap, namely by utilizing these tools to create a better work product with greater efficiency for their clients.
Peeling Back the Student Privacy Pledge
Date posted: 1-7-2018
Education software is a multi-billion dollar industry that is rapidly growing. The federal government has encouraged this growth through a series of initiatives that reward schools for tracking and aggregating student data. Amid this increasingly digitized education landscape, parents and educators have begun to raise concerns about the scope and security of student data collection. Industry players, rather than policymakers, have so far led efforts to protect student data. Central to these efforts is the Student Privacy Pledge, a set of standards that providers of digital education services have voluntarily adopted. By many accounts, the Pledge has been a success. Since its introduction in 2014, over 300 companies have signed on, indicating widespread commitment to the Pledge’s seemingly broad protections for student privacy. This industry participation is encouraging, but the Pledge does not contain any meaningful oversight or enforcement provisions. This Article analyzes whether signatory companies are actually complying with the Pledge rather than just paying lip service to its goals. By looking to the privacy policies and terms of service of a sample of the Pledge’s signatories, I conclude that noncompliance may be a significant and prevalent issue. Consumers of education software have some power to hold signatories accountable, but their oversight abilities are limited. This Article argues that the federal government, specifically the Federal Trade Commission, is best positioned to enforce compliance with the Pledge and should hold Pledge signatories to their promises.
Topic: Media & Communications
Date posted: 1-8-2018
As virtual reality rapidly progresses, broadcasts are increasingly able to mimic the experience of actually attending a game. As the technology advances to the point where the viewer can move freely about the game and virtual reality can simulate in-stadium attendance, the virtual reality broadcast becomes nearly indistinguishable from the underlying game. Thus, novel issues arise regarding the ability to protect the experience through copyright. Although normal broadcasts may be copyrighted, virtual reality broadcasts of live sports could lack protection under the Copyright Act because the elements of originality, authorship, and fixation are harder to satisfy for this type of work. If the elements that formerly protected broadcasts through copyright no longer apply, the virtual reality broadcast of the game will lose copyright protection. The virtual reality broadcaster can nevertheless receive protection for the work in several ways, such as (1) by making modifications to the transmitted broadcast, (2) through misappropriation claims, or (3) by inserting contract terms. These additional steps preserve the ability of virtual reality broadcasters to disseminate works without fear that the works will not be protected by the law.
Topic: Copyrights & Trademarks
Hacking the Internet of Things: Vulnerabilities, Dangers, and Legal Responses
Sara Sun Beale and Peter Berris
Date posted: 2-14-2018
The Internet of Things (IoT) is here and growing rapidly as consumers eagerly adopt internet-enabled devices for their utility, features, and convenience. But this dramatic expansion also exacerbates two underlying dangers in the IoT. First, hackers in the IoT may attempt to gain control of internet-enabled devices, causing negative consequences in the physical world. Given that objects with internet connectivity range from household appliances and automobiles to major infrastructure components, this danger is potentially severe. Indeed, in the last few years, hackers have gained control of cars, trains, and dams, and some experts think that even commercial airplanes could be at risk. Second, IoT devices pose an enormous risk to the stability of the internet itself, as they are vulnerable to being hacked and recruited into botnets used for attacks on the digital world. Recent attacks on major websites including Netflix and Twitter exemplify this danger. This article surveys these dangers, summarizes some of their main causes, and then analyzes the extent to which current laws like the Computer Fraud and Abuse Act punish hacking in the IoT. The article finds that although hacking in the IoT is likely illegal, the current legal regime punishes hacking after the fact and therefore lacks the prospective force needed to fully temper the risks posed by the IoT. Therefore, other solutions are needed to address the perilousness of the IoT in its current form. After a discussion of the practical and legal barriers to investigating and prosecuting hacking, we turn to the merits and pitfalls of hacking back from legal, practical, and ethical perspectives. We then discuss the advantages and disadvantages of two possible solutions—regulation and the standards approach.
Regulating Data as Property: A New Construct for Moving Forward
Jeffrey Ritter and Anna Mayer
Date posted: 3-6-2018
The global community urgently needs precise, clear rules that define ownership of data and express the attendant rights to license, transfer, use, modify, and destroy digital information assets. In response, this article proposes a new approach for regulating data as an entirely new class of property. Recently, European and Asian public officials and industries have called for data ownership principles to be developed, above and beyond current privacy and data protection laws. In addition, official policy guidance and legal proposals have been published that offer to accelerate realization of a property rights structure for digital information. But how can ownership of digital information be achieved? How can those rights be transferred and enforced? Those calls for data ownership emphasize the impact of ownership on the automotive industry and the vast quantities of operational data which smart automobiles and self-driving vehicles will produce. We looked at how, if at all, the industry was considering the issue in consumer-facing statements addressing the data collected by its vehicles. To formulate our proposal, we also considered continued advances in scientific research, quantum mechanics, and quantum computing which confirm that information in any digital or electronic medium is, and always has been, physical, tangible matter. Yet, to date, data regulation has sought to adapt legal constructs for “intangible” intellectual property or to express a series of permissions and constraints tied to specific classifications of data (such as personally identifiable information).
We examined legal reforms that were recently approved by the United Nations Commission on International Trade Law to enable transactions involving electronic transferable records, as well as prior reforms adopted in the United States Uniform Commercial Code and Federal law to enable similar transactions involving digital records that were, historically, physical assets (such as promissory notes or chattel paper). Finally, we surveyed prior academic scholarship in the U.S. and Europe to determine whether the physical attributes of digital data had been previously considered in the vigorous debates on how to regulate personal information, or the extent, if at all, to which the solutions developed for transferable records had been considered for larger classes of digital assets. Based on the preceding, we propose that regulation of digital information assets, and clear concepts of ownership, can be built on existing legal constructs that have enabled electronic commercial practices. We propose a property rules construct that clearly defines that a right to own digital information arises upon creation (whether by keystroke or machine), and suggest when and how that right attaches to specific data through the exercise of technological controls. This construct will enable faster, better adaptations of new rules for the ever-evolving portfolio of data assets being created around the world. This approach will also create more predictable, scalable, and extensible mechanisms for regulating data and is consistent with, and may improve the exercise and enforcement of, rights regarding personal information. We conclude by highlighting existing technologies and their potential to support this construct, and begin an inventory of the steps necessary to proceed further with this process.
Topic: International, Media & Communications
Date posted: 3-26-2018
Privacy law in the United States has not kept pace with the realities of technological development, nor the growing reliance on the Internet of Things (IoT). As of now, the law has not adequately secured the “smart” home from intrusion by the state, and the Supreme Court further eroded digital privacy by conflating the common law concepts of trespass and exclusion in United States v. Jones. This article argues that the Court must correct this misstep by explicitly recognizing the method by which the Founding Fathers sought to “secure” houses and effects under the Fourth Amendment. Namely, the Court must reject its overly narrow trespass approach in favor of the more appropriate right to exclude. This will better account for twenty-first century surveillance capabilities and properly constrain the state. Moreover, an exclusion framework will bolster the reasonable expectation of digital privacy by presuming an objective unreasonableness in any warrantless penetration by the state into the smart home.
Date posted: 5-4-2018
Automated vehicles will not only redefine the role of drivers, but also present new challenges in assessing product liability. In light of the increased risks of software defects in automated vehicles, this Note will review the current legal and regulatory framework related to product liability and assess the challenges in addressing on-board software defects and cybersecurity breaches from both the consumer and manufacturer perspective. While manufacturers are expected to assume more responsibility for accidents as vehicles become fully automated, it can be difficult to determine the scope of liability regarding unexpected software defects. On the other hand, consumers face new challenges in bringing product liability claims against manufacturers and developers.
Date posted: 5-4-2018
Initial coin offerings are a source of controversy in the world of startup fundraising, and their legality is, at best, an open question. Amid soaring valuations and rumors of looming SEC action, investors and issuers alike are scrambling to forge a path forward for the token-based startups of tomorrow. While issuers may soon be forced to comply with United States securities laws, the existing regime is inadequate because it does not allow startups to capture the unique benefits of coin sales and, more importantly, it does not allow eager American investors to take part in funding the world’s next generation of technology companies.
Date posted: 5-15-2018
The Communications Decency Act (CDA) provides Internet platforms with complete protection from liability for user-generated content. This Article discusses the costs of this current legal framework and several potential solutions. It proposes three modifications to the CDA that would use a carrot-and-stick approach to incentivize companies to take a more active role in addressing some of the most blatant downsides of user-generated content on the Internet. Despite the modest nature of these proposed changes, they would have a significant impact.
Date posted: 5-19-2018
In the Digital Age, information is more accessible than ever. Unfortunately, that accessibility has come at the expense of privacy. Now, more and more personal information is in the hands of corporations and governments, for uses not known to the average consumer. Although these entities have long been able to keep tabs on individuals, with the advent of virtual assistants and “always-listening” technologies, the ease by which a third party may extract information from a consumer has only increased. The stark reality is that lawmakers have left the American public behind. While other countries have enacted consumer privacy protections, the United States has no satisfactory legal framework in place to curb data collection by greedy businesses or to regulate how those companies may use and protect consumer data. This Article contemplates one use of that data: digital advertising. Inspired by stories of suspiciously well-targeted advertisements appearing on social media websites, this Article additionally questions whether companies have been honest about their collection of audio data. To address the potential harms consumers may suffer as a result of this deficient privacy protection, this Article proposes a framework wherein companies must acquire users' consent and the government must ensure that businesses do not use consumer information for harmful purposes.
Systemic Social Media Regulation
Date posted: 6-8-2018
Social media platforms are motivated by profit, corporate image, long-term viability, good citizenship, and a desire for friendly legal environments. These managerial interests stand in contrast to the gubernatorial interests of the state, which include the promotion of free speech, the development of e-commerce, various counterterrorism initiatives, and the discouragement of hate speech. Inasmuch as managerial and gubernatorial interests overlap, a self-regulation model of platform governance should prevail. Inasmuch as they diverge, regulation is desirable when its benefits exceed its costs. An assessment of the benefits and costs of social media regulation should account for how social facts, norms, and falsehoods proliferate. This Article sketches a basic economic model. What emerges from the analysis is that the quality of discourse cannot be controlled through suppression of content, or even disclosure of source. A better approach is to modify, in a manner conducive to discursive excellence, the structure of the forum. Optimal platform architecture should aim to reduce the systemic externalities generated by the social interactions that platforms enable, including the social costs of unlawful interference in elections and the proliferation of hate speech. At the same time, a systemic approach to social media regulation implies fewer controls on user behavior and content creation, and fewer attendant First Amendment complications. Several examples are explored, including algorithmic newsfeeds, online advertising, and invited campus speakers.
Topic: Social Media, Internet, Communication