Lawyers are trained to aim for absolute accuracy. But in the world of artificial intelligence, perfection isn’t the goal. Productivity, insight and better decision-making are. The latest AI Culture Clash report from LexisNexis found that 77% of lawyers remain concerned about relying on inaccurate information from AI tools. Yet the same research shows that those already using AI are seeing measurable efficiency and quality gains, even when outputs still require human review.
If the profession waits for AI to be flawless, it risks missing its real value: enhancing human judgment rather than replacing it.
The legal profession’s risk-averse mindset has always been its strength. Precision matters when clients’ rights, livelihoods or reputations are on the line. But when applied to AI adoption, that instinct for certainty can become a barrier.
As Gerrit Beckhaus, Co-Head of Freshfields Lab, explained: “AI demands clear strategic direction and communication from the top, tying it to client-value outcomes and measurable impact.” The lesson is that success cannot be measured by “perfect answers” alone.
Large language models are probabilistic systems. They generate the most likely answer based on data patterns, not absolute truth. Expecting them to behave like legal textbooks misunderstands their purpose. The firms gaining the most benefit aren’t chasing perfection. They are testing, refining and learning through iterative use, treating AI as a partner that improves over time.
Even a partly accurate AI result can have value. A 70%-right draft that surfaces the key issues in seconds can be reviewed and perfected by a lawyer far faster than starting from a blank page.
The report found that 71% of lawyers feel more confident using AI grounded in trusted legal sources. That confidence rose to 88% among those using purpose-built tools such as Lexis+ AI. This shows that confidence in AI depends more on transparency and provenance than on infallibility.
Bhavisa Patel, Director of Legal Technology at Eversheds Sutherland, put it simply: “You can have the best solution, but if people don’t know what it is, how to use it, or how it will help them, the benefits will always be limited.”
Lawyers build confidence in AI when they can see where an answer comes from, check it against trusted sources and stay in control of the final output.
Legal-specific models trained on case law, legislation and expert commentary give lawyers this sense of control. When outputs are anchored in authoritative sources, AI becomes an informed assistant rather than an unpredictable outsider. Transparency allows practitioners to verify and contextualise results instead of blindly trusting them.
In practice, AI that is 80–90% accurate on drafting, research or summarisation can save lawyers hours each week. The human layer of oversight remains essential, but the overall result is faster, cheaper and often clearer.
Shoosmiths partner Tony Randle captured this balance: “A lawyer would never rely on Google to answer a legal question, so they should not rely on general AI platforms to take on tasks that require legal knowledge.” The key is to use the right tool for the task. General-purpose models may boost productivity, but legal-specific AI is designed for compliance, relevance and defensibility.
Many small and mid-sized firms are discovering that “good enough” AI is already transformative when applied thoughtfully. Summarising witness statements, drafting standard letters, identifying case citations or generating first-pass research saves precious time without compromising quality. The lawyer still signs off, but they reach the final version in a fraction of the time.
Being realistic about AI’s limitations does not mean lowering standards; it means adopting a managed-risk mindset. No technology is error-free, but with proper governance, firms can ensure AI errors are detectable and correctable.
As Michelle Holford, Chief Commercial Officer at Slaughter and May, observed: “You have to give lawyers time to play, learn and fail in order to work out how best to use a new tool.” Creating this space for safe experimentation helps lawyers understand both the power and the boundaries of AI.
Firms leading in adoption are introducing policies that encourage responsible use rather than blanket restrictions. They specify approved tools, outline verification steps and promote openness about when AI has been used in drafting or research. This approach turns potential risk into a training opportunity, embedding AI literacy across the organisation.
AI in law should not be treated as a source of answers but as a generator of possibilities. Human oversight is the stage where legal knowledge, ethics and nuance enter the equation.
That oversight layer is what transforms AI output into legal advice. Lawyers interpret results through experience, precedent and context, elements AI cannot yet replicate. When the human element remains central, imperfection becomes manageable.
Experienced users know how to interrogate AI responses: asking follow-up questions, comparing multiple outputs, or requesting sources. Over time, this human-machine collaboration becomes second nature, turning initial scepticism into informed confidence.
The AI Culture Clash report shows that 56% of private-practice lawyers reinvest saved time into more billable work, and 53% report a better work-life balance. Even if every AI answer requires review, the time saved is still tangible.
Imperfect AI therefore has a direct commercial upside. It allows lawyers to deliver faster without sacrificing quality, freeing capacity for strategy, client engagement or new business development. Firms measuring AI success solely by “error-free outputs” risk missing its broader impact.
AI in law is not about eliminating human error; it is about amplifying human capability. Waiting for flawless technology is like waiting for the perfect precedent — it never arrives. What matters is how effectively firms integrate AI into workflows, manage risk and empower people to use it wisely.
As Beckhaus’s point about strategic direction makes clear, AI adoption succeeds when it is driven from the top and tied to client-value outcomes and measurable impact. For firms that embrace this approach, AI becomes less a disruptor and more a catalyst for better practice.
The most successful adopters will be those who see imperfection not as a flaw but as a feature, a signal that human expertise still matters. Trust in AI is not absolute, but practical and earned through consistent performance, transparent sources and active human judgment.
Perfection may be the wrong benchmark altogether. The smarter goal is reliability, accountability and measurable improvement. For lawyers, that balance of precision and progress is where AI’s real power lies.
Discover how LexisNexis is setting the standard for responsible, secure AI in legal technology. Learn more about our ground-breaking AI-powered solutions, such as Lexis+ AI with Protégé, and follow along for the latest in Legal AI educational content.
Are you excited about how AI can transform the lives of legal professionals? If you're not a Lexis+ AI Insider, become one today and take the first step.
Email: myln@lexisnexis.com
Phone: +65 6349-0110