
More Law Firms Taking Cautious Approach to Use of Generative AI Tools

June 13, 2023 (3 min read)

By Geoffrey Ivnik

There is widespread excitement over the potential of emerging generative Artificial Intelligence (AI) tools, such as ChatGPT, to drive new workflows that super-charge human productivity — and the legal industry is no exception. But a recent column by Paul Daly in Administrative Law Matters, “ChatGPT and Legal Research: A Cautionary Tale,” illustrates why many law firms are taking a measured approach to widespread adoption of these tools.

Mr. Daly recounts an interaction with ChatGPT in which he was initially impressed with the tool’s apparent understanding of the core concepts associated with some legal research he wanted to conduct. “After some prodding,” he writes, “it generated a long list of articles for me.”

The tool then surfaced a list of 15 law review articles, complete with full citations and page numbers, that appeared to be on-point to his research inquiry.

“The only problem is that these articles do not actually exist,” he writes.

Mr. Daly’s experience mirrors that of many other lawyers who have discovered that open-web generative AI tools are not yet ready for conducting legal research, where accuracy and reliability of results are essential.

Daniel Davis, an associate in the Leesburg, Va. office of Dunlap Bennett & Ludwig, blogged recently about “The Case of the Imaginary Yacht.” Mr. Davis recounts how he “was curious to see ChatGPT flex its legal research muscles,” so he asked the tool to assist him with an inquiry regarding a specific type of matter addressed previously by the Trademark Trial and Appeal Board. ChatGPT instantly produced a four-paragraph answer to his question, complete with citations.

Unfortunately, the key on-point case surfaced by the tool pointed to a decision of the U.S. Court of Appeals for the Federal Circuit, the wrong venue. Moreover, upon further inspection, the case specifics were not so on-point after all. It gets worse.

“Undeterred, I used the feedback feature of ChatGPT to ‘train’ the program that its answer was wrong,” writes Mr. Davis. “Next, I tried asking the question again. ‘Certainly!’ ChatGPT replied with scripted enthusiasm, and directed me to the case of In Re T.V. Today Network Ltd. The problem? This case is not real. It does not exist anywhere except in the imagination of ChatGPT.”

These disturbing anecdotes illustrate a critical flaw in the currently available versions of open-web generative AI tools: they are prone to “hallucinating” believable answers that are patently false. This is one of the risks that a growing number of law firms are simply unwilling to bear at this time.

“The critical question is whether law firms and courtrooms are ready and willing to accept the great gift that is generative AI,” writes Attiya Khan in the University of Maryland Francis King Carey School of Law blog. “The short answer is not anytime soon. Despite its sophistication, chatbots like ChatGPT are ridden with legal and ethical concerns.”

Beyond the serious “hallucination” problem, other risks to law firms associated with use of the current generation of AI tools involve data security and client confidentiality concerns. Indeed, the UK-based “Silver Circle” law firm Mishcon de Reya recently announced it was restricting its lawyers’ use of ChatGPT “amid fears that they risk compromising data by using the chatbot,” according to The Telegraph.

Legaltech News interviewed chief information officers at two large law firms and noted that, while they are excited about the potential of generative AI tools to improve productivity, they agreed that “relying on the commercial ChatGPT at this stage can be ethically dubious.”

ChatGPT launched as a prototype on November 30, 2022, and within a matter of a few months it touched off a fast-moving phenomenon. Anyone who has experimented with these open-web generative AI tools quickly recognizes their enormous promise for improving efficiency and productivity in every industry. But a growing number of law firms are experiencing the reality that they are not quite ready for prime time.

LexisNexis has been leading the way in the development of legal AI tools for years, working to provide lawyers with products that leverage the power of AI technology to support key legal tasks. We’re now pioneering the use of generative AI for legal research, analysis and the presentation of results, with a focus on how these tools can enable legal professionals to avoid the pitfalls of open-web generative AI tools.

We invite you to join us on this journey by following our Lexis+ AI web page, where we will share more information about these AI-powered solutions and how they can responsibly support the practice of law:

Meet Lexis+ AI