By Megan Bramhall
The New York Times broke a story over Memorial Day weekend that many of us working in the legal technology industry knew was inevitable.
A lawyer had used ChatGPT to assist him with some legal research, copied and pasted the case citations surfaced by the AI chatbot into a brief prepared for a client, and filed that brief in New York federal court. The problem? He never checked those cites and they were entirely made up by ChatGPT.
Legaltech News published a summary of what happened in a case that will surely go down as one of the defining moments in the early use of open-web AI tools by lawyers:
“At least six of the cases submitted as research for a brief appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York. “The court is presented with an unprecedented circumstance.”
The plaintiff’s lawyer appeared before Judge Castel on June 8 for a sanctions hearing, and Law360 reported that the judge would take the matter under advisement before issuing a written decision.
This unfortunate case touched off nationwide buzz among legal professionals, but in truth it was a predictable incident for anyone who understands a critical flaw in the currently available versions of generative AI tools. They are prone to “hallucinate” believable answers that are patently false.
In fact, in recent months other lawyers have shared their own eye-opening experiences with open-web AI tools. One attorney wrote in a column about how ChatGPT surfaced a list of 15 law review articles — complete with full citations and page numbers — that did not actually exist. And another attorney blogged recently about ChatGPT directing him to a case that sounded directly on-point, but in fact “did not exist anywhere except in the imagination of ChatGPT.”
The problem is not the use of AI-powered tools, but a lack of understanding of the purpose for which these tools were developed and — more importantly — the way they should be used by lawyers.
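To make that point concrete, here is a minimal sketch of the kind of “trust but verify” step a citation workflow can include before anything AI-generated reaches a filing. The in-memory KNOWN_CITATIONS set and the triage() helper are hypothetical placeholders standing in for a real citator or case-law database, and the second citation in the demo is invented purely for illustration.

```python
# A minimal sketch, not any vendor's API: before an AI-surfaced citation goes into a
# brief, check it against an authoritative source. Here the "authoritative source" is
# a placeholder in-memory set standing in for a real citator or case-law database.

KNOWN_CITATIONS = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

def is_verified(citation: str, authority: set[str] = KNOWN_CITATIONS) -> bool:
    """Return True only if the citation appears in the trusted source."""
    return citation in authority

def triage(ai_citations: list[str]) -> tuple[list[str], list[str]]:
    """Split AI output into confirmed citations and ones needing manual review."""
    confirmed = [c for c in ai_citations if is_verified(c)]
    suspect = [c for c in ai_citations if not is_verified(c)]
    return confirmed, suspect

if __name__ == "__main__":
    confirmed, suspect = triage([
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
        "Smith v. Imaginary Airlines, 123 F.4th 456 (2d Cir. 2099)",  # invented for illustration
    ])
    print("Confirmed:", confirmed)
    print("Needs manual review (possible hallucination):", suspect)
```

The point of the sketch is simply that nothing surfaced by a chatbot should be treated as authority until it has been confirmed against a source a lawyer already trusts.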
“Generative AI can be reliable for summarization of a particular document, while it can be unreliable for legal research,” said David Cunningham, chief innovation officer at Reed Smith, in Law.com. “Also, the answer is very dependent on whether the lawyer is using a public system versus a private, commercial, legal-specific system, preloaded and trained with trustworthy legal content.”
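Cunningham’s distinction between public chatbots and systems grounded in vetted content can be illustrated with a generic sketch of retrieval-grounded generation. The VETTED_CORPUS dictionary, the toy word-overlap retrieve() function and the grounded_prompt() builder below are hypothetical placeholders, not a description of any commercial system; a production pipeline would use a real search index and pass the prompt to a language model.

```python
# A generic sketch of retrieval-grounded generation, assuming a vetted legal corpus and
# a downstream language model; this does not describe any particular commercial product.

VETTED_CORPUS = {
    "doc-001": "Excerpt from an authoritative treatise on personal jurisdiction ...",
    "doc-002": "Excerpt from a verified appellate opinion on forum selection clauses ...",
}

def retrieve(question: str, corpus: dict[str, str], k: int = 2) -> dict[str, str]:
    """Toy retrieval: return the k passages sharing the most words with the question.
    A real system would use a search index or embeddings instead."""
    def overlap(text: str) -> int:
        return len(set(question.lower().split()) & set(text.lower().split()))
    ranked = sorted(corpus.items(), key=lambda kv: overlap(kv[1]), reverse=True)
    return dict(ranked[:k])

def grounded_prompt(question: str, passages: dict[str, str]) -> str:
    """Constrain the model to answer only from the supplied, vetted passages."""
    sources = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages.items())
    return (
        "Answer using only the sources below and cite them by ID. "
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    question = "When can a forum selection clause be enforced?"
    print(grounded_prompt(question, retrieve(question, VETTED_CORPUS)))
```

Grounding of this kind does not eliminate the need for review, but it narrows the model’s answers to content that has already been vetted, which is the distinction Cunningham draws between public and legal-specific systems.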
These considerations are not new to LexisNexis. In fact, LexisNexis has been leading the way in the development of legal AI tools for years, working to provide lawyers with products that leverage the power of AI technology to support key legal tasks. And with the rollout of Lexis+ AI, we’re now pioneering the use of generative AI for legal research, analysis and the presentation of results, with a focus on how these tools can enable legal professionals to achieve better outcomes.
It is important to understand that we are bringing a very deliberate approach to the development of these products, one grounded in deep legal domain expertise.
This legal domain expertise, in terms of research, professional ethics, developer expertise and content grounding, is what will set our AI tools apart from those currently available on the open web.
“I think (generative AI) is useful for lawyers as long as they’re using it properly, obviously, and as long as they realize that ChatGPT is not a legal research tool,” said Judge Scott U. Schlegel of the 24th Judicial District Court in Louisiana, in an interview with Legaltech News.
We invite you to join us on this journey by following our Lexis+ AI web page, where we will share more information about these AI-powered solutions and how they can responsibly support the practice of law.