Articles
Tribes and AI: Possibilities for Tribal Sovereignty
Adam Crepelle
Date posted: 9-29-2024
Artificial Intelligence (AI) has permeated every facet of modern existence. Governments across the globe are exploring its applications and attempting to establish regulatory frameworks. Numerous scholars have proffered recommendations for governing AI at the local, national, and international levels. However, as is often the case, Indian tribes have been neglected in AI policy discussions. This oversight is significant because the 574 federally recognized tribes are sovereigns with their own judicial, education, and healthcare systems. Due to their relatively small populations and geographic isolation, tribes stand to benefit significantly from the services AI can perform. Moreover, tribes are uniquely well-suited to implement AI. This is the first law review article dedicated to exploring how AI can enhance tribal sovereignty. This article begins with a history of tribal sovereignty and then provides an overview of AI. Subsequent sections delve into the ways AI can augment tribal legal systems, healthcare, education, cultural preservation endeavors, economic development, and administrative capacity. By illuminating the intersection of AI and tribal sovereignty, this article seeks to foster a more inclusive discussion of AI.
Topic: Artificial Intelligence, Internet Law
Gray Advice
Keith Porcaro
Date posted: 11-4-2024
Debates over economic protectionism or the technology flavor-of-the-month obscure a simple, urgent truth: people are going online to find help that they cannot get from legal and health professionals. They are being let down, by products with festering trust and quality issues, by regulators slow to apply consumer protection standards to harmful offerings, and by professionals loath to acknowledge changes to how help is delivered. The status quo cannot continue. Waves of capital and code are empowering ever more organizations to build digital products that blur the line between self-help and professional advice. For good or ill, “gray advice” is changing how ordinary people get help with legal issues and healthcare issues, and even how they perceive professionals. This Article begins the work of articulating what makes a high-quality digital advice product, and how regulators and professionals can engage with the reality of how people seek and find help today.
Topic: Artificial Intelligence, Computer Law, Internet Law
Date posted: 11-15-2024
Many countries worldwide recognize a right to health and provide legal recourse for securing that right. By contrast, for many Americans, there is no legal right or remedy that enables them to access healthcare if they cannot afford it. While there are some statutes and measures in place, such as the Emergency Medical Treatment and Active Labor Act (EMTALA) and Medicaid, coverage remains piecemeal and access to healthcare is not guaranteed. Conditioning people’s access to healthcare upon their ability to pay violates the foundational bioethical principle of justice. In the United States, individuals and advocates use social media to fill this vacuum, raising awareness and funds through platforms like GoFundMe and X.com (formerly Twitter). In the absence of government-provided healthcare or a right to health, the American public sometimes functions as a healthcare access mechanism via social media. However, this insufficient and inequitable stopgap cannot replace effective governance. The public is filling a vital and unmet need through social media advocacy, but there are many drawbacks to effectively delegating this authority to the public instead of the government, including the public’s capriciousness and lack of legal accountability. The government should take proactive steps to ensure that healthcare is more affordable and accessible.
Topic: Health & Biotechnology, Media & Communications
Fine-Tuning LLMs: Structural Fluency and Augmentation for the Great and Powerful Wizard of AI
Verónica C. Gonzales
Date posted: 1-27-2025
The civil legal tradition carries assumptions, biases, and attitudes rooted in racism and ideologies intended to protect the (im)balance of power. This moment in history offers new versions of the same challenges with the integration of artificial intelligence (“AI”) and large language models (“LLMs”) into legal frameworks, and those critiques are being addressed in the legal discourse. Building on these perspectives, this moment also offers civil legal professionals a chance to engage in machine learning frameworks informed by social justice principles and accountable to measures of “equal justice for all.” The perception of fairness in the use of these tools is critical to protecting the integrity of and trust in the civil justice system. Although the features of LLMs may not be able to replace legal analysis just yet, developers anticipate that these tools are headed there sooner than one might think. Without intentional approaches to machine learning, LLMs will create a civil legal system twilight zone where machines propose new outcomes based on the ineffective patterns of the past, a never-ending feedback loop that traps litigants and stifles social progress. LLMs, and the AI tools that use them, offer a new reality in which legal analysis is more efficient. But, like a law student almost ready to embark on a legal career, LLMs must be properly trained in this time of early development to correct human error. Legal educators, who are not code or software developers, cannot simply change expensive and vast datasets. However, law professors, well versed in scaffolded learning such as the Socratic method and the nuances of social context, are well-situated for this challenge. In the fight for justice, law professors have relied primarily on cultural competency and racial literacy skills to empower subordinated individuals in their work toward systemic justice, critical lenses which can also prove useful in prompting LLMs.
Missing from these competency, policy, and regulatory frameworks is a method for prompting machines in ways that “fine-tune” them for social justice. Prompting that encourages consideration of the macro structures and micro-level systemic forces at work, the historical legacies of injustice, and the modern-day subtleties of patterned structural injustice based on social identity and other factors can improve performance and fairness. This Article, borrowing from medical and social work efforts to improve social determinants of health and outcomes, proposes fine-tuning prompts and prompt augmentation to enhance the structural-injustice fluency of LLM outputs.
Topic: Artificial Intelligence, Computer Law, Internet Law, Law and Race
Determinants of Socially Responsible AI Governance
Daryl Lim
Date posted: 1-27-2025
The signing of the first international AI treaty by the United States, European Union, and other nations marks a pivotal step in establishing a global framework for AI governance, ensuring that AI systems respect human rights, democracy, and the rule of law. This article advances the concepts of justice, equity, and the rule of law as yardsticks of socially responsible AI—from development through deployment—to ensure that AI technologies do not exacerbate existing inequalities but actively promote fairness and inclusivity. Part I explores AI’s potential to improve access to justice for marginalized communities and small and medium-sized law firms while scrutinizing the AI-related risks faced by judges, lawyers, and the communities they serve. Part II examines the structural biases in AI systems, focusing on how biased data and coding practices can entrench inequity and how intellectual property protections like trade secrets can limit transparency and undermine accountability in AI governance. Part III evaluates the normative impact of AI on traditional legal frameworks, offering a comparative analysis of governance models: the U.S. market-driven approach, the EU’s rights-based model, China’s command economy, and Singapore’s soft law framework. The analysis highlights how different systems balance innovation with safeguards, emphasizing that successful AI governance must integrate risk-based regulation and transparency without stifling technological advancement. Through these comparative insights, the article proposes a proactive governance framework incorporating transparency, equity audits, and tailored regulatory approaches. This forward-looking analysis offers legal scholars and policymakers a comprehensive roadmap for navigating AI’s transformative effects on justice, equity, and the rule of law.
Topic: Artificial Intelligence, Computer Law, Internet Law