By Megan Bramhall
The New York Times broke a story over Memorial Day weekend that many of us working in the legal technology industry knew was inevitable.
A lawyer had used ChatGPT to assist him with some legal research, copied and pasted the case citations surfaced by the AI chatbot into a brief prepared for a client, and filed that brief in New York federal court. The problem? He never checked those cites and they were entirely made up by ChatGPT.
Legaltech News published a summary of what happened in a case that will surely go down as one of the defining moments in the early use of open-web AI tools by lawyers:
“At least six of the submitted cases as research for a brief appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York. “The court is presented with an unprecedented circumstance.”
The plaintiff’s lawyer appeared before Judge Castel on June 8th for a sanctions hearing, and Law360 reported that the judge would take the matter under advisement before issuing a written decision.
This unfortunate case touched off nationwide buzz among legal professionals, but in truth it was a predictable incident for anyone who understands a critical flaw in the currently available versions of generative AI tools: they are prone to “hallucinating” believable answers that are patently false.
In fact, in recent months other lawyers have shared their own eye-opening experiences with open-web AI tools. One attorney wrote in a column about how ChatGPT surfaced a list of 15 law review articles — complete with full citations and page numbers — that did not actually exist. And another attorney blogged recently about ChatGPT directing him to a case that sounded directly on-point, but in fact “did not exist anywhere except in the imagination of ChatGPT.”
The problem is not the use of AI-powered tools, but a lack of understanding of the purpose for which these tools were developed and — more importantly — the way they should be used by lawyers.
“Generative AI can be reliable for summarization of a particular document, while it can be unreliable for legal research,” said David Cunningham, chief innovation officer at Reed Smith, in Law.com. “Also, the answer is very dependent on whether the lawyer is using a public system versus a private, commercial, legal-specific system, preloaded and trained with trustworthy legal content.”
These considerations are not new to LexisNexis. In fact, LexisNexis has been leading the way in the development of legal AI tools for years, working to provide lawyers with products that leverage the power of AI technology to support key legal tasks. And with the rollout of Lexis+ AI, we’re now pioneering the use of generative AI for legal research, analysis and the presentation of results, with a focus on how these tools can enable legal professionals to achieve better outcomes.
It is important to understand that we are bringing a very deliberate approach to the development of these products:
This legal domain expertise — in terms of research, professional ethics, developer expertise and content grounding — is what will set apart the development of our AI tools from those that are currently available on the open-web.
“I think (generative AI) is useful for lawyers as long as they’re using it properly, obviously, and as long as they realize that ChatGPT is not a legal research tool,” said Judge Scott U. Schlegel of the 24th Judicial District Court in Louisiana, in an interview with Legaltech News.
We invite you to join us on this journey by following our Lexis+ AI web page, where we will share more information about these AI-powered solutions and how they can responsibly support the practice of law.