
How Litigation in 2024 Could Shape the Future of Gen AI

February 29, 2024 (3 min read)

By: Mike Swift, Chief Global Digital Risk Correspondent, MLex

Generative Artificial Intelligence (Gen AI) is at the beginning of what is likely to become the next wave of technology litigation and regulation. The way these actions play out in the courts and in the hallways of regulatory agencies could have as much impact on how Gen AI is deployed in the U.S. as any law that is ultimately passed by Congress.

This is a fluid and fast-moving landscape that changes weekly, but what is clear is that the leading developers of Gen AI technology face extensive court litigation that could stretch years into the future.

At this early stage, the hottest action in U.S. courts is in the Second Circuit and the Ninth Circuit of the U.S. Courts of Appeals. Google, Meta, OpenAI and other leading AI developers are all facing copyright and/or privacy lawsuits in the Southern District of New York and the Northern District of California.

“In 2024, courts will be tasked with weighing in on matters of first impression in litigation over the developments and use of generative AI,” reported Law360.

Some of the cases currently in the courts have attracted widespread media attention, in part because they involve high-profile plaintiffs. In one case, the comedian Sarah Silverman has joined with others to file lawsuits against Meta and OpenAI for copyright infringement. In another case, bestselling authors John Grisham and George R.R. Martin are among those suing OpenAI for allegedly committing “systematic theft on a mass scale” by copying their works and feeding them into its Gen AI models (see here for broader analysis from MLex).

The California cases are slightly more advanced in their movement through the courts and may offer a bit of a roadmap for where this early litigation is headed. In deciding early motions to dismiss, the judges in these disputes are indicating what may be an emerging judicial standard.

For example, early decisions by multiple judges in the Northern District of California suggest that copying a copyrighted work in full to train an AI system is not, by itself, enough: unless the output of that model contains content that substantially copies the protected work, the copyright claims cannot go forward.

U.S. District Judge Vince Chhabria called “nonsensical” claims that Meta’s copying of works into the training data sets for its open source large language model, Llama, amounts to copyright infringement, ruling that AI outputs are what matters in determining infringement.

U.S. District Judge Jon Tigar recently applied a similar standard, this time to the benefit of the plaintiffs, in an order in copyright litigation over Copilot, an AI coding tool developed by OpenAI and deployed by Microsoft. In that case, Judge Tigar ruled that certain damage claims in an amended complaint filed by the plaintiffs could proceed because of “one significant addition — they now include examples of licensed code owned by (plaintiffs) that has been output by Copilot.”

In another closely watched case headed for the courts, The New York Times sued OpenAI in December 2023, alleging that OpenAI’s ChatGPT and Microsoft’s Copilot are depriving it of revenue from the journalism it produces by using the Times’ copyright-protected content to train the defendants’ AI models.

This case is especially intriguing because the complaint includes 100 examples of outputs from ChatGPT that copy news articles published by the Times almost verbatim. In one exhibit, only six words in the entire ChatGPT output differed from the original article published in the Times.

As my MLex colleagues Madeline Hughes and Amy Miller reported: “The New York Times’ copyright lawsuit against OpenAI and Microsoft adds a new dimension to what ‘fair use’ could ultimately mean for the makers of language-based generative AI systems that build algorithms using copyrighted works.”

These cases are the most prominent ones working their way through the courts in early 2024, but many more are in the early stages of litigation, and another wave will undoubtedly follow depending on how these disputes are resolved in 2024.

____________________________________________________________________________________________________________________________________________

For more information on Data Privacy & Security news and analysis from MLex®, click here.

Lexis+ AI is our breakthrough Gen AI platform that is transforming legal work by providing a suite of legal research, drafting, and summarization tools that delivers on the potential of Gen AI technology. Its answers are grounded in the world’s largest repository of accurate and exclusive legal content from LexisNexis with industry-leading data security and attention to privacy. To learn more or to request a free trial of Lexis+ AI, please go to www.lexisnexis.com/ai.