Trust built on authority, powered by legal AI
The legal profession continues to grapple with the duality of AI technology adoption: 94% of law firm leaders believe AI will increase revenue and improve client service while at the same time 81% of them report internal concern about AI’s risks, according to a March 2026 report in LawSites.
“For the first time, more lawyers are using generative AI than not, even as firm leaders express widespread concern about the technology’s reliability,” writes industry veteran Robert Ambrogi.
This is the third post in a four-part series exploring how LexisNexis is redefining the standard of legal practice in the age of AI. In parts one and two of this series, we looked at the LexisNexis vision for this new standard and why it has to be built for the practice of law, not adapted from generic tools. This third post turns to the pillar that most directly answers the question every lawyer is asking right now: How accurate are AI-generated legal citations and how do I know when to trust what the tool gives me back?
The short answer: AI legal research accuracy depends on whether the output is grounded in authority.
The gap between the speed of adoption and confidence in outputs is one of the defining tensions of this moment in our industry, and for good reason. U.S. courts documented 487 AI-related errors in 2025 alone, more than 10 times the 2024 total.
Indeed, the ABA reports that accuracy and reliability are the top two concerns cited by lawyers when it comes to the use of AI tools.
The headline-grabbing stories about lawyers sanctioned for AI-generated errors, and the resulting panic about quality, are useful as warnings, but they obscure a deeper issue. The real question isn’t whether AI makes mistakes — every technology does — but rather what the AI’s output is grounded in.
When lawyers question AI legal research accuracy, what they are really questioning is not the fluency of the output, but the authority behind it.
A generic foundation model for an AI tool, no matter how sophisticated, produces text by predicting which words are likely to come next. It has no inherent concept of what a valid citation is, what jurisdictional hierarchy means or whether the case it’s describing has been overturned, distinguished or superseded.
General-purpose AI tools are therefore generating plausibility, not authority. In a domain where a lawyer’s entire professional obligation rests on candor to the tribunal under Model Rule 3.3 and competence under Model Rule 1.1, plausibility isn’t enough.
This means that the standard of practice in the age of AI can’t be “trust the output.” Legal professionals don’t just trust outputs — they trust their own work, their citations and the precedent those citations rest on. The standard of practice is, and always has been, trust built on authority. What has changed is that authority now has to be engineered into the AI itself.
Improving AI legal research accuracy requires more than better models. It requires systems built to validate, verify, and trace every citation. High-quality, defensible AI outcomes in legal work require grounding from three distinct sources:
1. The firm’s and client’s own work product. This includes memos, briefs, prior filings and client playbooks that represent institutional knowledge and client-specific judgment. AI that can’t draw on this content is working in a vacuum, or perhaps even reinventing positions the legal team has already taken elsewhere.

2. Authoritative legal content. This is the layer where AI legal research accuracy is won or lost. Primary law, editorial analysis, citation validation and real-time updates on the treatment of cases and statutes aren’t optional inputs. They make up the essential authoritative bedrock that enables an output to be checked, verified and defended.

3. The open web. Public information has a role, particularly for context, current events and non-legal fact gathering. But for the core legal work (i.e., the holding a lawyer is going to cite in a document or courtroom), general web content cannot be the foundation. It can supplement, but it cannot ground.
An AI system that sits on top of all three sources, and knows which source to weight for which task, is the architecture that earns trust. Any one of those sources alone falls short.
In legal AI, validation is the baseline ethical requirement. For example, a Law360 tracker shows that hundreds of federal judges have now adopted standing orders or local rules specifically addressing generative AI use in court filings. The message is that the duty to verify every citation cannot be delegated, and that serious consequences attach to violations of that responsibility.
This means that AI for legal practice must be built with validation as a first-class feature, not an afterthought, and that citation checking must be integrated into the workflow. It means treatment indicators that flag when a case has been overruled or questioned, and transparent sourcing so a lawyer can click from an AI-drafted passage to the underlying authority in a single step. And it means that when an AI tool doesn’t have a confident answer, it says so, rather than filling the gap with plausibility.
Once the legal authority is right and the generated output is trustworthy, the next question is how AI supports — rather than replaces — the lawyer’s judgment in building an argument. In the final post in this series, we’ll look at the third pillar of the standard of legal practice in the age of AI: drafting as a legal process, not just text generation.
Lexis+ with Protégé delivers purpose-built, end-to-end legal AI workflows with an intuitive user interface designed to make trusted legal work possible with one prompt. New workflow capabilities within Lexis+ with Protégé automate drafting, review, analysis and citation checking into scalable and repeatable legal processes that simplify complex legal work and deliver consistent, high-quality results across teams.
Learn more about Lexis+ with Protégé’s capabilities or request a free trial today.