By Serena Wellen, Vice President of Product Management at LexisNexis Legal & Professional
May 15, 2025
In a legal industry increasingly shaped by AI, the rise of “AI hallucinations” — fake legal citations generated by large language models — has made accuracy and citation integrity a top concern.
This week, The Verge reported on a troubling case involving AI-generated legal research. A California judge sanctioned attorneys for submitting a legal brief containing fake citations and quotes, originally generated using AI tools.
The judge’s response was clear: “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”
This incident highlights a growing concern in the legal profession – the inclusion of fabricated or unverifiable legal citations by generative AI tools – and raises a critical question: Can you trust your legal AI to get it right?
At LexisNexis, our answer is: yes, and here’s why.
In the legal field, a “hallucination” occurs when an AI system fabricates a legal citation, case, or fact that sounds plausible but isn’t real. This is more than a technical glitch – it's a serious professional risk.
Whether you're drafting a brief, advising a client, or preparing for trial, relying on made-up case law or misquoted authorities can damage your credibility and your case. As shown in the recent case reported by The Verge, it can even result in court sanctions.
While some AI tools have generated misleading or fabricated legal citations – including non-existent cases and misquoted authorities – Lexis+ AI is built differently. We designed Lexis+ AI to deliver what legal professionals need most: trustworthy responses grounded in real legal sources that users can verify.
Lexis+ AI delivers legally sound responses backed by real, verifiable sources. Responses are drawn exclusively from the industry’s most trusted and expansive legal content repository, including Shepard’s®-reviewed case law, statutes, and Practical Guidance.
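The grounding principle described above can be sketched generically: before a draft response reaches the user, every citation it contains is checked against a repository of verified sources, and anything unmatched is flagged rather than passed through. This is an illustrative pattern only, not LexisNexis's actual implementation; the citation pattern, the sample repository, and the function names are all assumptions made for the sketch.

```python
import re

# Hypothetical repository of verified sources, keyed by citation string.
# In a real system this lookup would hit an authoritative database.
VERIFIED_SOURCES = {
    "410 U.S. 113": "Roe v. Wade, 410 U.S. 113 (1973)",
    "384 U.S. 436": "Miranda v. Arizona, 384 U.S. 436 (1966)",
}

# Toy pattern covering only U.S. Reports citations, for illustration.
CITATION_RE = re.compile(r"\d+ U\.S\. \d+")

def verify_citations(draft: str) -> tuple[list[str], list[str]]:
    """Split the citations found in `draft` into verified and unverifiable."""
    verified, unverifiable = [], []
    for cite in CITATION_RE.findall(draft):
        (verified if cite in VERIFIED_SOURCES else unverifiable).append(cite)
    return verified, unverifiable

# A draft mixing one real citation with one fabricated citation:
ok, bad = verify_citations(
    "Compare 384 U.S. 436 with the holding in 999 U.S. 123."
)
# ok  -> ["384 U.S. 436"]   (found in the repository)
# bad -> ["999 U.S. 123"]   (flagged as unverifiable)
```

The design point is that verification happens against the source repository itself, so a citation the model invented can never be presented as authoritative.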
Now, with Document Management System (DMS) connectivity, Lexis+ AI can also ground responses in your firm's internal knowledge, including clauses from agreements, securely and in context. This enhances the relevance of responses by aligning them with how your firm structures and negotiates contracts.
For broader internal knowledge coverage such as precedent briefs, Protégé Vault allows you to securely upload a wider range of document types. This ensures that answers are not only legally sound but also aligned with your firm’s practice and voice.
Whether you're drafting a motion or reviewing strategy, Lexis+ AI delivers responses that are both externally authoritative and internally informed. Each citation links directly to the full source document, giving legal professionals the transparency and confidence they need to rely on what they cite.
Read more about how Lexis+ AI delivers AI checks and citation integrity.
Lexis+ AI isn't just an overlay on an existing Large Language Model (LLM). It's a purpose-built platform for legal professionals, with safeguards baked in at every level.
As recent headlines have shown, AI hallucinations in legal work are a real threat. But they are not inevitable.
Lexis+ AI was built to empower legal professionals with responses grounded in trusted, authoritative legal content, backed by our customers' own internal sources, and ready to support real-world decisions.
We don’t believe in shortcuts. We believe in accountability, transparency, and the kind of innovation that earns trust, not headlines.
To explore how Protégé in Lexis+ AI delivers fast, trusted responses with verifiable linked legal citations, visit Lexis+ AI.