
Lex Machina Attends the 2024 Annual Conference of NAACL

July 10, 2024 (2 min read)

As large language models are introduced and woven into the fabric of legal technology products, Lex Machina continues to keep its finger on the pulse and listen to influential voices in the technology industry, most recently by participating in the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) in Mexico City, Mexico.

Lex Machina Principal Data Scientists Marco Valenzuela and Luke Nezda attended the conference and collaborated with other leaders in the industry to discuss how large language models can be improved and applied to legal data problems. ChatGPT, for example, is a well-known large language model, and its propensity to hallucinate falsehoods illustrates one of the key shortcomings such enhancements aim to address.

“[Large language models] are being applied quite successfully to lots of tasks ranging from summarizing documents to finding and extracting precise facts from text,” according to Luke Nezda.

Highlights from the conference included Self Expertise, a knowledge-based instruction-dataset augmentation method for a legal expert language model, and Beyond Borders, a study exploring the cross-jurisdictional generalizability of legal case summarization models.

“Self Expertise is a [proposed method] to generate instruction datasets in the legal domain from a seed dataset,” said Marco Valenzuela. He added, “[It is a] method used to train a Korean legal specialized model, LxPERT, on the LLaMA-2 7B model; LxPERT outperformed GPT-3.5-turbo on both in-domain and out-of-domain datasets.”

This is a notable result because Self Expertise aims to curb unintentional hallucinations, that is, inaccurate information presented as if it were factual.
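To make the idea concrete, here is a minimal illustrative sketch, in the spirit of self-instruct-style augmentation, of growing an instruction dataset from a small seed set. All names and the prompt template are hypothetical, and the “model” is stubbed so the example runs on its own; the actual Self Expertise method is more involved.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Example:
    instruction: str
    answer: str

def augment_dataset(seed: list[Example],
                    generate: Callable[[str], str],
                    variants_per_seed: int = 2) -> list[Example]:
    """Grow an instruction dataset from a small seed set by asking a
    model to rephrase each seed instruction (self-instruct style)."""
    augmented = list(seed)
    for ex in seed:
        for i in range(variants_per_seed):
            prompt = (f"Rewrite this legal instruction in different words "
                      f"(variant {i + 1}): {ex.instruction}")
            # The generated variant keeps the original answer.
            augmented.append(Example(generate(prompt), ex.answer))
    return augmented

# Stub "model" so the sketch runs without an actual LLM.
seed = [Example("Summarize the holding of the cited case.", "…")]
result = augment_dataset(seed,
                         generate=lambda p: p.split(": ", 1)[1] + " (variant)")
print(len(result))  # 3: the seed plus two generated variants
```

In practice, a filtering step would also be needed to discard generated instructions that are malformed or hallucinated before training on them.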

Regarding Beyond Borders, Marco Valenzuela shared that the study examines the cross-jurisdictional generalizability of legal case summarization models. He added, “The goal is to summarize legal cases in target jurisdictions without available reference summaries; jurisdictional similarity plays a pivotal role in selecting optimal source datasets for effective transfer.”

This is another notable result because Beyond Borders offers a path beyond the current practice of training a case summarization model only on cases from the same jurisdiction.
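The core selection idea can be sketched as follows. This is a toy illustration, not the study’s actual method: it uses simple vocabulary overlap as a stand-in for jurisdictional similarity, and all jurisdiction names and word sets are made up.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Vocabulary overlap as a crude proxy for jurisdictional similarity."""
    return len(a & b) / len(a | b)

def pick_source(target_vocab: set[str],
                sources: dict[str, set[str]]) -> str:
    """Pick the source jurisdiction whose corpus vocabulary most
    resembles the target's, to use as training data for transfer."""
    return max(sources, key=lambda name: jaccard(sources[name], target_vocab))

# Toy vocabularies for hypothetical jurisdictions.
target = {"plaintiff", "appeal", "tribunal", "statute"}
sources = {
    "UK": {"claimant", "appeal", "tribunal"},
    "US": {"plaintiff", "appeal", "statute", "docket"},
}
print(pick_source(target, sources))  # US (higher vocabulary overlap)
```

A real system would measure similarity over legal concepts and citation patterns rather than raw word overlap, but the shape of the decision is the same: score each candidate source jurisdiction against the target, then train on the best match.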

An additional highlight from the conference concerned “explainability”: large language models will need to explain their outputs in order to earn understanding and trust from end users. Traditionally, end users of legal technology products rely on precedent to predict future case outcomes. Going forward, precedent is one possible means for a large language model to explain its outputs to end users. This approach has its own challenges, however, such as deciding which precedent cases the model should focus on and how to account for legal principles changing over time.