Document Type

Article

Publication Date

2025

Abstract

This short piece explores a widespread yet underexamined misconception: that large language models (LLMs) function like traditional legal research databases. They do not. Information retrieval from databases differs fundamentally from LLM-based generation in its inputs, retrieval processes, and outputs. These differences have significant implications for transparency, traceability, and overall effectiveness in AI-driven legal research. Without intentional oversight and adaptation, these changes could profoundly affect how we develop research skills and a cumulative knowledge base, both essential for lifelong learning in the legal field.

This article begins with a reflection on the evolution of legal research practices, highlighting key turning points brought about by the emergence of generative AI (gen AI). It then examines how information retrieval has changed in the age of gen AI and the risks posed by failing to adapt learning habits. Finally, it offers some initial thoughts on how we might reflect on and reshape our approaches to learning and knowledge building in this new era.

Library of Congress Subject Headings

Legal research, Information retrieval, Artificial intelligence--Law and legislation, Law--Study and teaching
