Date posted: 11-19-2017
After granting permission to the Internal Revenue Service to serve a digital exchange company a summons for user information, the Federal District Court for the Northern District of California created some uncertainty regarding the privacy of cryptocurrencies. The IRS views this information gathering as necessary for monitoring compliance with Notice 2014-21, which classifies cryptocurrencies as property for tax purposes. Cryptocurrency users, however, view the request for information as an infringement on their privacy rights and are seeking legal protection. This Issue Brief investigates the future tax implications of Notice 2014-21 and considers possible routes the cryptocurrency market can take to avoid the burden of capital gains taxes. Further, this Issue Brief attempts to assess the validity of the privacy claims made against the customer information summons and recommends alternative actions for the IRS to take regardless of whether it succeeds in obtaining the information.
Topic: Technology, Cryptocurrency, Tax, Privacy
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Lilian Edwards and Michael Veale
Date posted: 12-4-2017
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
Artificial Intelligence: Application Today and Implications Tomorrow
Sean Semmler and Zeeve Rose
Date posted: 12-5-2017
This paper analyzes the applications of artificial intelligence to the legal industry, specifically in the fields of legal research and contract drafting. First, it will look at the implications of artificial intelligence (A.I.) for the current practice of law. Second, it will delve into the future implications of A.I. for law firms and the possible regulatory challenges that come with A.I. The proliferation of A.I. in the legal sphere will give laymen (clients) access to the information and services traditionally provided exclusively by attorneys. With an increase in access to these services will come a change in the role that lawyers must play. A.I. is a tool that will increase access to cheaper and more efficient services, but non-lawyers lack the training to analyze and understand the information it puts out. The role of lawyers will change to fill this gap, namely by utilizing these tools to create a better work product with greater efficiency for their clients.
Peeling Back the Student Privacy Pledge
Date posted: 1-7-2018
Education software is a multi-billion dollar industry that is rapidly growing. The federal government has encouraged this growth through a series of initiatives that reward schools for tracking and aggregating student data. Amid this increasingly digitized education landscape, parents and educators have begun to raise concerns about the scope and security of student data collection. Industry players, rather than policymakers, have so far led efforts to protect student data. Central to these efforts is the Student Privacy Pledge, a set of standards that providers of digital education services have voluntarily adopted. By many accounts, the Pledge has been a success. Since its introduction in 2014, over 300 companies have signed on, indicating widespread commitment to the Pledge’s seemingly broad protections for student privacy. This industry participation is encouraging, but the Pledge does not contain any meaningful oversight or enforcement provisions. This Article analyzes whether signatory companies are actually complying with the Pledge rather than just paying lip service to its goals. By looking to the privacy policies and terms of service of a sample of the Pledge’s signatories, I conclude that noncompliance may be a significant and prevalent issue. Consumers of education software have some power to hold signatories accountable, but their oversight abilities are limited. This Article argues that the federal government, specifically the Federal Trade Commission, is best positioned to enforce compliance with the Pledge and should hold Pledge signatories to their promises.
Topic: Media & Communications
Date posted: 1-8-2018
As virtual reality rapidly progresses, broadcasts are increasingly able to mimic the experience of actually attending a game. As the technology advances, allowing the viewer to move freely about the game and simulating in-stadium attendance, the virtual reality broadcast nears the point where it is indistinguishable from the underlying game. Thus, novel copyright issues arise regarding the ability to protect the experience through copyright. Although normal broadcasts may be copyrighted, virtual reality broadcasts of live sports could lack protection under the Copyright Act because the elements of originality, authorship, and fixation are harder to satisfy for this type of work. If the elements that formerly protected broadcasts through copyright no longer apply, the virtual reality broadcast of the game will lose copyright protection. The virtual reality broadcaster can receive protection for the work in several ways, such as (1) by broadcaster-made modifications to the transmitted broadcast, (2) through misappropriation claims, or (3) by inserting contract terms. These additional steps maintain the ability of virtual reality broadcasters to disseminate works without fear that the works will not be protected by the law.
Topic: Copyrights & Trademarks
Hacking the Internet of Things: Vulnerabilities, Dangers, and Legal Responses
Sara Sun Beale and Peter Berris
Date posted: 2-14-2018
The Internet of Things (IoT) is here and growing rapidly as consumers eagerly adopt internet-enabled devices for their utility, features, and convenience. But this dramatic expansion also exacerbates two underlying dangers in the IoT. First, hackers in the IoT may attempt to gain control of internet-enabled devices, causing negative consequences in the physical world. Given that objects with internet connectivity range from household appliances and automobiles to major infrastructure components, this danger is potentially severe. Indeed, in the last few years, hackers have gained control of cars, trains, and dams, and some experts think that even commercial airplanes could be at risk. Second, IoT devices pose an enormous risk to the stability of the internet itself, as they are vulnerable to being hacked and recruited into botnets used for attacks on the digital world. Recent attacks on major websites including Netflix and Twitter exemplify this danger. This article surveys these dangers, summarizes some of their main causes, and then analyzes the extent to which current laws like the Computer Fraud and Abuse Act punish hacking in the IoT. The article finds that although hacking in the IoT is likely illegal, the current legal regime punishes hacking after the fact and therefore lacks the prospective force needed to fully temper the risks posed by the IoT. Therefore, other solutions are needed to address the perilousness of the IoT in its current form. After a discussion of the practical and legal barriers to investigating and prosecuting hacking, we turn to the merits and pitfalls of hacking back from legal, practical, and ethical perspectives. We then discuss the advantages and disadvantages of two possible solutions: regulation and the standards approach.