Can Lawyers Trust Generative AI? Building a Framework for Trustworthy Legal AI

Drawn from a Legal Geek talk by Sébastien Bardou, CEMEA International BU General Manager & VP Strategy at LexisNexis

If the internet once promised to “change everything,” generative AI may finally be the technology to deliver on that claim. In law, it is cutting research time, producing drafts in seconds and handling tasks that once kept lawyers at their desks late into the night. The question is no longer whether AI can help lawyers - it is whether lawyers can trust it enough to use it without losing sleep.

Why general-purpose AI falls short for lawyers
Generative AI systems such as large language models (LLMs) are built on probability: they are designed to predict the next most likely word in a sequence. That is marvelous for creating a recipe, but less useful when advising on corporate liability. Lawyers deal in facts, authorities and context, not probabilities.
The problem lies in the training. General-purpose AI is built on the open web - a vast but messy collection of data that includes everything from kitten photos to pancake blogs, mixed in with legal material that is often outdated, biased or drawn from the wrong jurisdiction. When such models attempt to answer a legal question, they can produce hallucinations: confident-sounding but entirely fabricated cases, statutes or citations. Convincing nonsense is still nonsense. For most industries, a wrong answer is an inconvenience; in law, it is fatal. Clients do not pay for answers that are merely likely to be correct.

The cost of misplaced trust
Putting faith in generic AI tools can lead to:

  • Reputational harm when hallucinations are exposed.
  • Wasted time and money double-checking or correcting flawed outputs.
  • Professional liability if a client relies on inaccurate information.

In short: if accuracy is non-negotiable, trust must be earned. That is why the focus has shifted towards building AI that is genuinely worthy of trust.

The three pillars of trustworthy legal AI
A framework for trustworthy AI in law is taking shape, resting on three essential pillars: technology, content and human oversight.

1. Technology
Recent advances such as Retrieval-Augmented Generation (RAG) mean that outputs can be grounded in curated databases rather than the model’s memory. It’s a polite way of telling the AI to “stick to the sources, please.” This reduces hallucinations and ensures every answer can be traced back to something real. 
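To make the idea concrete, here is a minimal sketch of the RAG pattern in Python: retrieve the most relevant passages from a curated corpus, then build a prompt that instructs the model to answer only from those passages and cite them. The toy corpus, the naive term-overlap scoring and the prompt wording are all illustrative assumptions for this sketch, not a description of any vendor's implementation.

```python
# Minimal RAG sketch (illustrative only). In a real system the corpus
# would be a database of authoritative primary law, and retrieval would
# use a proper search index or embeddings rather than term overlap.

# Toy curated corpus: passage id -> authoritative text (assumed for the demo).
CORPUS = {
    "donoghue-v-stevenson": "Donoghue v Stevenson [1932] AC 562 established "
                            "the modern law of negligence and the neighbour principle.",
    "caparo-v-dickman": "Caparo Industries plc v Dickman [1990] 2 AC 605 set out "
                        "a three-stage test for a duty of care.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by naive term overlap with the question (toy scoring)."""
    q_terms = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that tells the model to answer only from the
    retrieved sources and to cite them - the 'stick to the sources' step
    that reduces hallucination."""
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below. Cite the source id for every "
        "claim. If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What case established the duty of care in negligence?"))
```

However the retrieval step is implemented, the grounding principle is the same: the model answers from sources that were looked up at question time, not from whatever it absorbed during training, so every claim can be traced back to a citable document.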

2. Content
No technology can redeem poor foundations. AI is only as good as the material it is built upon. Trustworthy legal AI must rely on authoritative sources:

  • Primary law, including legislation and reported judgments.
  • Practical guidance, precedents and forms practitioners already use.

When the training data includes pancake recipes, the outputs will inevitably taste a bit off.

3. Human oversight
The final safeguard is the lawyer. Critical thinking cannot be automated. Professionals must verify answers, question assumptions and treat AI as an assistant, not an oracle. The rule is simple: always test the output before you trust it.

A role model profession
Lawyers are uniquely well positioned to lead in the age of AI. The profession cannot - and will not - settle for “good enough.” By demanding transparency, citations and human oversight, legal professionals can set the benchmark for how AI should be used responsibly in any industry.
