Drawn from a Legal Geek talk by Sébastien Bardou, CEMEA International BU General Manager & VP Strategy at LexisNexis
If the internet once promised to “change everything,” generative AI may finally be the technology to deliver on that claim. In law, it is cutting research time, drafting documents in seconds and handling tasks that once kept lawyers working late into the night. The question is no longer whether AI can help lawyers; it is whether lawyers can trust it enough to use it without losing sleep.
Why general-purpose AI falls short for lawyers
Generative AI systems such as large language models (LLMs) are built on probability: they are designed to predict the next most likely word in a sequence. That is marvelous for creating a recipe, but far less useful when advising on corporate liability. Lawyers deal in facts, authorities and context, not probabilities.
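The mechanics behind this can be sketched in a few lines. The vocabulary and probabilities below are invented for illustration; a real LLM learns a distribution over tens of thousands of tokens, but the principle is the same: pick the most likely continuation, whether or not it is true.

```python
def predict_next_word(context, distribution):
    """Return the highest-probability next word for the given context."""
    candidates = distribution[context]
    return max(candidates, key=candidates.get)

# Hypothetical learned probabilities for the phrase "the court ..."
toy_model = {
    "the court": {"held": 0.55, "ruled": 0.40, "sang": 0.04, "pancaked": 0.01},
}

print(predict_next_word("the court", toy_model))  # prints "held"
```

Note that the model would happily emit "sang" some of the time if sampled rather than maximised; likelihood, not accuracy, drives the output.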
The problem lies in the training. General-purpose AI is built on the open web, a vast but messy collection of data that includes everything from kitten photos to pancake blogs, mixed in with legal material that is often outdated, biased or from the wrong jurisdiction. When such models attempt to answer a legal question, they can produce hallucinations: confident-sounding but entirely fabricated cases, statutes or citations. Convincing nonsense is still nonsense. For most industries, that is an inconvenience; for law, it is fatal. Clients do not pay for answers that are merely likely to be correct.
The cost of misplaced trust
Putting faith in generic AI tools can lead to fabricated citations, unreliable advice and lasting reputational or professional damage.
In short: if accuracy is non-negotiable, trust must be earned. That is why the focus has shifted towards building AI that is genuinely worthy of trust.
The three pillars of trustworthy legal AI
A framework for trustworthy AI in law is taking shape, resting on three essential pillars: technology, content and human oversight.
1. Technology
Recent advances such as Retrieval-Augmented Generation (RAG) mean that outputs can be grounded in curated databases rather than the model’s memory. It’s a polite way of telling the AI to “stick to the sources, please.” This reduces hallucinations and ensures every answer can be traced back to something real.
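The retrieval-then-generation loop can be sketched as below. The corpus, the word-overlap scoring and the prompt format are illustrative assumptions, not any product's actual pipeline; real systems use semantic search over curated legal databases, but the grounding idea is the same: fetch the sources first, then instruct the model to answer only from them, with citations.

```python
# Tiny stand-in for a curated legal database (invented content).
CORPUS = {
    "case-001": "Limited liability shields shareholders from company debts.",
    "case-002": "Directors owe fiduciary duties to the company.",
}

def retrieve(query, corpus, k=1):
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Ground the model: answer only from the cited passages."""
    sources = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below and cite their IDs.\n"
        f"{sources}\nQuestion: {query}"
    )

query = "What fiduciary duties do directors owe?"
passages = retrieve(query, CORPUS)
prompt = build_prompt(query, passages)
print(prompt)
```

Because every passage carries a document ID into the prompt, each statement in the answer can be traced back to a real source, which is precisely what makes the output auditable.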
2. Content
No technology can redeem poor foundations. AI is only as good as the material it is built upon. Trustworthy legal AI must rely on authoritative sources: case law, legislation and commentary that are current, vetted and drawn from the right jurisdiction.
When the training data includes pancake recipes, the outputs will inevitably taste a bit off.
3. Human oversight
The final safeguard is the lawyer. Critical thinking cannot be automated. Professionals must verify answers, question assumptions and treat AI as an assistant, not an oracle. The rule is simple: always test the output before you trust it.
A role model profession
Lawyers are uniquely well positioned to lead in the age of AI. The profession cannot - and will not - settle for “good enough.” By demanding transparency, citations and human oversight, legal professionals can set the benchmark for how AI should be used responsibly in any industry.