15 May 2025

No Surprises: How Protégé™ in Lexis+ AI® Reduces Hallucinations and Strengthens Citation Integrity

By Serena Wellen, Vice President of Product Management at LexisNexis Legal & Professional

In a legal industry increasingly shaped by AI, the rise of “AI hallucinations” – fabricated cases and citations generated by large language models – has made accuracy and citation integrity a top concern.

This week, The Verge reported on a troubling case involving AI-generated legal research. A California judge sanctioned attorneys for submitting a legal brief containing fake citations and quotes generated by AI tools, including Westlaw Precision’s CoCounsel.

The judge’s response was clear: “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”

This incident highlights a growing concern in the legal profession – the inclusion of fabricated or unverifiable legal citations by generative AI tools – and raises a critical question: Can you trust your legal AI to get it right?

At LexisNexis, our answer is: yes, and here’s why.

What Are AI Hallucinations in Legal Research?

In the legal field, a “hallucination” occurs when an AI system fabricates a legal citation, case, or fact that sounds plausible but isn’t real. This is more than a technical glitch – it's a serious professional risk.

Whether you're drafting a brief, advising a client, or preparing for trial, relying on made-up case law or misquoted authorities can damage your credibility and your case. As shown in the recent case reported by The Verge, it can even result in court sanctions.

How Protégé in Lexis+ AI Reduces Hallucination Risk

While some AI tools have generated misleading or fabricated legal citations – including non-existent cases or misquoted authorities – Lexis+ AI is built differently. We designed Lexis+ AI to deliver what legal professionals need most: trustworthy responses grounded in real legal sources that users can verify.

Built on Verified Sources and Real Legal Citations

Lexis+ AI delivers legally sound responses backed by real, verifiable sources. Responses are drawn exclusively from the industry’s most trusted and expansive legal content repository, including Shepard’s®-reviewed case law, statutes, and practical guidance.

Now, with Document Management System (DMS) connectivity, Lexis+ AI can also ground responses in your firm’s internal knowledge, including clauses from agreements, securely and in context. This enhances the relevance of responses by aligning them with how your firm structures and negotiates contracts.

For broader internal knowledge coverage, such as precedent briefs, Protégé Vault allows you to securely upload a wider range of document types. This ensures that answers are not only legally sound but also aligned with your firm’s practice and voice.

Whether you're drafting a motion or reviewing strategy, Lexis+ AI delivers responses that are both externally authoritative and internally informed. Citations are linked directly to the full source documents, giving legal professionals the transparency and confidence they need to rely on what they cite.

Top 4 Ways Lexis+ AI Reduces AI Hallucination Risk

  • Legal citations are real, verifiable, and linked to primary legal sources
  • Grounded in Shepard’s-reviewed case law and trusted legal sources
  • Built on a Retrieval Augmented Generation (RAG) platform that references only authoritative LexisNexis content
  • Enhanced with secure DMS connectivity to incorporate a firm’s own internal documents, enabling responses that reflect both case law and private practice knowledge

Read more about how Lexis+ AI delivers AI checks and citation integrity.

What Sets Lexis+ AI Apart? A Responsible, Transparent Approach to Legal AI

Lexis+ AI isn’t just an overlay on an existing Large Language Model (LLM). It’s a purpose-built platform for legal professionals, with safeguards baked in at every level:

  • Retrieval Augmented Generation (RAG): Our proprietary RAG framework ensures that AI responses are based only on authoritative legal content – not the internet or unverified data. Each prompt is checked across five stages, including semantic understanding, recency filtering, authority ranking, and citation validation.
  • Linked Legal Citations: All citations are directly linked to source documents, enabling instant review. If a citation isn’t linkable, we flag it for manual validation – reinforcing that AI should support, not replace, professional legal judgment.
  • Human Oversight: We place legal subject matter experts in the loop throughout the development process. Continuous human review, user feedback, and refinement ensure that Lexis+ AI stays aligned with the highest standards of reliability.
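The retrieval-and-validation flow described above is proprietary, but its general shape – retrieve only from a vetted corpus, filter by recency, rank by authority, then validate every citation before it reaches the reader – can be sketched in a few lines. The following is a minimal illustration; all names, data, and ranking rules are hypothetical assumptions, not LexisNexis APIs:

```python
from dataclasses import dataclass

# Illustrative in-memory "authoritative corpus"; a real system would
# query a vetted legal-content store, never open web data.
@dataclass
class Document:
    citation: str
    year: int
    authority_rank: int  # lower = more authoritative (illustrative)
    text: str

CORPUS = [
    Document("Smith v. Jones, 123 F.3d 456", 2019, 1,
             "negligence standard for product liability"),
    Document("Doe v. Acme Corp., 789 P.2d 100", 1995, 2,
             "product liability and duty of care"),
]

def retrieve(query: str, corpus: list[Document], min_year: int = 1990) -> list[Document]:
    """Semantic match (crude keyword overlap here), recency
    filtering, then authority ranking."""
    terms = set(query.lower().split())
    hits = [d for d in corpus
            if terms & set(d.text.lower().split())  # relevance check
            and d.year >= min_year]                  # recency filter
    return sorted(hits, key=lambda d: d.authority_rank)

def validate_citations(cited: list[str], corpus: list[Document]) -> dict:
    """Every citation must resolve to a real corpus document;
    anything unlinkable is flagged for human review rather than shown
    as authoritative."""
    known = {d.citation for d in corpus}
    return {
        "linked": [c for c in cited if c in known],
        "flagged_for_review": [c for c in cited if c not in known],
    }

# A grounded answer cites only retrieved documents...
docs = retrieve("product liability negligence", CORPUS)
report = validate_citations([d.citation for d in docs], CORPUS)
# ...while a hallucinated citation is caught and flagged:
bad = validate_citations(["Fake v. Case, 1 U.S. 1"], CORPUS)
```

The key design point is that generation is constrained to the retrieved set and validation is a separate, final gate: a citation the pipeline cannot link back to a known document is never silently passed through, only surfaced for human judgment.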

The Bottom Line: Not All Legal AI Is Created Equal

As recent headlines have shown, AI hallucinations in legal work are a real threat. But they are not inevitable.

Lexis+ AI was built to empower legal professionals with responses grounded in trusted, authoritative legal content, backed by our customers’ own internal sources, and ready to support real-world decisions.

We don’t believe in shortcuts. We believe in accountability, transparency, and the kind of innovation that earns trust, not headlines.

Learn More About Lexis+ AI

To explore how Protégé in Lexis+ AI delivers fast, trusted responses with verifiable linked legal citations, visit Lexis+ AI.