Abstract

The civil legal tradition carries assumptions, biases, and attitudes rooted in racism and in ideologies intended to protect the (im)balance of power. This moment in history offers new versions of the same challenges as artificial intelligence (“AI”) and large language models (“LLMs”) are integrated into legal frameworks, and those critiques are being addressed in the legal discourse. Building on these perspectives, this moment also offers civil legal professionals a chance to engage with machine learning frameworks informed by social justice principles and accountable to measures of “equal justice for all.” The perception of fairness in the use of these tools is critical to protecting the integrity of, and trust in, the civil justice system. Although LLMs may not yet be able to replace legal analysis, developers anticipate that these tools are headed there sooner than one might think. Without intentional approaches to machine learning, LLMs will create a civil legal system twilight zone in which machines propose new outcomes based on the ineffective patterns of the past: a never-ending feedback loop that traps litigants and stifles social progress. LLMs, and the AI tools that use them, offer a new reality in which legal analysis is more efficient. But, like a law student almost ready to embark on a legal career, LLMs must be properly trained in this period of early development to correct for human error. Legal educators, who are not code or software developers, cannot simply change vast and expensive datasets. However, law professors, well versed in scaffolded learning such as the Socratic method and in the nuances of social context, are well situated for this challenge. In the fight for justice, law professors have relied primarily on cultural competency and racial literacy skills to empower subordinated individuals in their work toward systemic justice, critical lenses that can also prove useful in prompting LLMs. Missing from these competency, policy, and regulatory frameworks is a method for prompting machines in ways that “fine-tune” them for social justice. Prompting that encourages consideration of the macro-level structures and micro-level systemic forces at work, the historical legacies of injustice, and the modern-day subtleties of patterned structural injustice based on social identity and other factors can improve both performance and fairness. This Article, borrowing from efforts in medicine and social work to address social determinants of health and improve outcomes, proposes fine-tuning prompts and prompt augmentation to enhance the structural-injustice fluency of LLM outputs.
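To make the proposal concrete, the sketch below illustrates one possible form of prompt augmentation. It is a minimal, hypothetical example: the framing text, the `augment_prompt` function, and the sample question are assumptions offered for illustration, not the Article’s actual prompts, and a real system would pass the augmented prompt to an LLM of the reader’s choosing.

```python
# A minimal sketch of the "prompt augmenting" idea, not the Article's method.
# The framing language and names below are illustrative assumptions.

# Hypothetical framing that asks the model to weigh structural context before
# answering, loosely mirroring the factors the abstract enumerates.
STRUCTURAL_JUSTICE_FRAME = (
    "Before answering, consider: (1) the macro-level structures and "
    "micro-level systemic forces at work; (2) historical legacies of "
    "injustice relevant to the facts; and (3) modern-day patterns of "
    "structural injustice tied to social identity and other factors. "
    "Then explain how these considerations affect your analysis."
)

def augment_prompt(user_prompt: str, frame: str = STRUCTURAL_JUSTICE_FRAME) -> str:
    """Prepend a structural-injustice framing to a legal question.

    This is prompt augmentation only: the underlying model is unchanged,
    and the framing simply steers what context the model attends to.
    """
    return f"{frame}\n\nQuestion: {user_prompt}"

if __name__ == "__main__":
    # Hypothetical example question.
    question = "Should the tenant's prior eviction record bar this rental application?"
    print(augment_prompt(question))
```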
