30 Jan 2026

Humans in the Loop: The People Powering Trusted Legal AI

By Serena Wellen, VP Product Management, LexisNexis

As the use of artificial intelligence permeates legal practice, a critical question confronts every legal professional who uses these tools: Can I trust this? When an AI platform is relied upon to draft a motion, summarize case law or identify relevant contract language, the stakes are too high for uncertainty. 

It is not surprising, then, that lawyers surveyed for the 2025 ABA Law Technology Today report cited trust in the output of legal AI tools, along with the ethical alignment of AI with legal practice, among their most important AI considerations. 

LexisNexis has been developing and delivering products and tools to the legal profession for more than 50 years. We know that ensuring the reliability of AI-generated legal content isn’t something that can be left to algorithms alone. So behind every AI tool that we release stands a specialized team of legal professionals who are dedicated to evaluating the accuracy and quality of the content that our databases and delivery models produce. 

We’d like to introduce you to our Data Discovery and Enrichment team, the “humans in the loop” at LexisNexis who serve as a critical bridge between our cutting-edge AI technology and the rigorous standards that legal practitioners demand. 

Who Are These Experts? 

The Data Discovery and Enrichment team comprises legal professionals and legal research specialists who understand both the technical capabilities of AI and the practical realities of legal practice. We leverage a team of more than 300 J.D. experts who bring years of experience in legal research, analysis and quality assessment to evaluate the accuracy and quality of the AI-generated results produced by our tools. 

Many of the team members are assigned to work on AI-generated outputs within the area of legal practice that aligns with their training, and we have other specialized teams who collaborate to tackle the complexities presented by AI generation of legal content. They don’t just understand what AI can do; they understand what lawyers need it to do, and they possess the legal expertise necessary to assess whether the AI-generated output meets those needs. 

This team works at the intersection of legal knowledge and technology development, applying their professional judgment to evaluate AI outputs through the same critical lens that practicing legal professionals would use. Their role is essential because, while AI can process vast amounts of information at remarkable speed, it lacks the professional judgment, contextual understanding and ethical responsibility that define legal practice. 

This "human in the loop" approach to development preserves professional legal judgment while leveraging AI capabilities: humans actively participate in the AI model's creation and operation, serving as a guardrail that provides the feedback and guidance shaping the system's performance, accuracy and ethical boundaries. 

Key Areas for Quality Evaluation 

The Data Discovery and Enrichment team evaluates AI-generated content across a number of important dimensions, each designed to address specific risks and requirements unique to legal practice. Here are five common themes for our quality assessments: 

  • Accuracy forms the foundation of trust. Team members verify that AI-generated responses correctly reflect legal principles, case holdings, statutory language and procedural rules. They check citations against source materials to ensure claims are properly supported and that the AI hasn’t mischaracterized legal authorities. This dimension prioritizes the fundamental question: Is this information correct? 
  • Comprehensiveness ensures that AI outputs don’t present an incomplete picture that could mislead legal professionals. Legal issues often involve multiple perspectives, conflicting precedents, and nuanced distinctions. Our team assesses whether AI-generated content considers relevant angles, alternative interpretations, and important qualifications. For example, an answer might be technically accurate but incomplete if it omits certain exceptions or contrary authority. 
  • Hallucinations represent one of the most serious risks in AI-generated legal content. The team actively hunts for instances where the AI has created plausible-sounding but fictional legal authority. Given the profession's reliance on precedent and the consequences of citing non-existent cases, this dimension receives intense scrutiny. Every citation must be verifiable; every legal principle must be grounded in authoritative sources. 
  • Usefulness evaluates whether AI outputs actually help legal professionals accomplish their goals. A response might be accurate and comprehensive but structured in a way that makes it difficult to apply in practice. Our team members assess clarity, organization, relevance to the user's query, and alignment with how lawyers actually work. Does the output answer the question asked? Is it presented in a format that integrates naturally into legal workflows? 
  • Bias scrutiny guards against outputs that could reflect or perpetuate problematic perspectives. The team examines whether AI-generated content exhibits inappropriate prejudice, excludes relevant viewpoints, or includes offensive material. This dimension is particularly important in legal contexts, where fairness and impartiality are professional obligations, and our AI tools are designed to support these values. 
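To make the shape of such a rubric concrete, here is a minimal sketch of how a per-output review across these five dimensions might be recorded and gated. All names and thresholds here are hypothetical illustrations; LexisNexis's actual internal review tooling and scoring scale are not described in this article.

```python
from dataclasses import dataclass

# Hypothetical dimension names mirroring the five themes above.
DIMENSIONS = (
    "accuracy",
    "comprehensiveness",
    "hallucination_free",
    "usefulness",
    "bias_free",
)

@dataclass
class OutputReview:
    """One reviewer's assessment of a single AI-generated answer,
    scored 1-5 on each quality dimension (scale is illustrative)."""
    reviewer: str
    scores: dict  # dimension name -> score (1-5)

    def passes(self, threshold: int = 4) -> bool:
        # An answer passes only if EVERY dimension meets the bar:
        # e.g. one unverifiable citation (a hallucination) fails the
        # whole answer, no matter how useful or well-organized it is.
        return all(self.scores.get(d, 0) >= threshold for d in DIMENSIONS)

review = OutputReview(
    reviewer="jd_reviewer_01",
    scores={"accuracy": 5, "comprehensiveness": 4, "hallucination_free": 5,
            "usefulness": 4, "bias_free": 5},
)
print(review.passes())  # True: every dimension meets the 4-of-5 bar
```

The all-dimensions-must-pass design reflects the article's point that the dimensions are not interchangeable: a response that is useful but cites a fabricated case cannot be rescued by high scores elsewhere.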

Our team’s focus on these and other key themes for AI output assessment enables us to strike a balance between leveraging the potential of legal AI and ensuring there are safeguards that protect ethical standards and professional integrity. 

Experience LexisNexis AI-Powered Solutions 

Our Data Discovery and Enrichment team embodies LexisNexis’s commitment to maintaining human oversight throughout our AI product development workflow. These legal professionals stand in the gap between technological capability and the content delivered to our customers. They serve as the guardians of AI reliability, providing the validation, refinement and quality assurance that only trained legal professionals can deliver. 

Discover how LexisNexis is setting the standard for responsible, secure AI in legal technology by learning more about our AI-powered solutions.