Emma Ritter


Our voices can reveal intimate details about our lives. Yet many privacy discussions have focused on the threats from speaker recognition and speech recognition. This Note argues that this focus overlooks another privacy risk: voice-inferred information. This term describes non-obvious information drawn from voice data through a combination of machine learning, artificial intelligence, data mining, and natural language processing. Companies have latched onto voice-inferred information. Early adopters have applied the technology in situations as varied as lending risk analysis and hiring. Consumers may balk at such strategies, but the current United States privacy regime leaves voice insights unprotected. By applying a notice-and-consent privacy model via sector-specific statutes, the hodgepodge of U.S. federal privacy laws allows voice-inferred information to slip through the regulatory cracks. This Note reviews the current legal landscape and identifies existing gaps. It then suggests two solutions that balance voice privacy with technological innovation: purpose-based consent and independent data review boards. The first bolsters voice protection within the traditional notice-and-consent framework, while the second imagines a new protective scheme. Together, these solutions complement each other to afford the human voice the protection it deserves.
