
AI Privacy & Security: The LexisNexis Commitment

May 08, 2024 (3 min read)

By Geoffrey D. Ivnik, Esq. | Director of Large Markets, LexisNexis

The rise of powerful generative artificial intelligence (Gen AI) models that can quickly obtain answers to research queries, summarize lengthy documents and create synthetic text for lawyers offers exciting possibilities for transforming the practice of law. But it also raises important questions around data privacy and security.

Law firms are in the business of handling sensitive client information, which means that AI privacy and security considerations are paramount. The security of a Gen AI tool is essential for lawyers to uphold their ethical and legal obligations to clients, minimize professional risks and safeguard valuable information.

“The increasing use of generative AI means that organizations must carefully assess the existing legal, financial and reputational risks connected with personal data and confidentiality,” writes a trio of UK-based Deloitte Legal practitioners in Law360. “In addition to the legal and regulatory requirements that will come into force, there are various aspects that may need to be considered.”

Gen AI systems leverage large data sets and can mimic existing content, so there are risks that need to be thoughtfully addressed for the safety of your law firm and clients. Here are six core areas of concern related to privacy and security that you should consider when choosing an AI tool for your firm:

  1. Data Privacy

Law firms operate in a world of confidentiality. Client data, from financial records to strategic plans, must be fiercely protected. Is your firm’s data obtained and used in the development or training of the tool?

  2. Data Bias

AI algorithms learn from the data they are fed. If that data contains inherent biases, the AI tool may reproduce them in its outputs. How does the developer avoid perpetuating unfair biases present in the training data?

  3. Misinformation

Lawyers are ultimately responsible for the accuracy and completeness of their legal work product. What steps are taken to prevent fake or misleading content from being delivered to a lawyer?

  4. Attribution

For legal documents, attribution to a human lawyer is essential to maintain professional responsibility. This requires that users have sufficient information to verify the accuracy of all AI-generated content. Does the tool properly credit the source materials used by the generative models when creating outputs?

  5. Transparency

Understanding the reasoning behind an AI tool’s answers helps lawyers identify and mitigate potential biases or inaccuracies. How much detail does the provider give you so you can see for yourself how the Gen AI models work?

  6. Governance

The track record and reliability of an AI developer are important so law firms can trust its knowledge of the legal domain. What thoughtful rules and procedures are in place to govern how Gen AI technology is developed and deployed?

LexisNexis has a corporate culture that emphasizes a respect for privacy and champions robust data governance, which has guided our development of responsible and trustworthy AI tools for the past decade. We have extended this commitment to the development of our breakthrough Legal AI solution, Lexis+ AI™.

Lexis+ AI is supported by state-of-the-art encryption and privacy technology to keep sensitive data secure. We safeguard customer data and use it to improve customer outcomes, but we do not use it to train the AI model. For example, in the development of Lexis+ AI we opted out of certain Microsoft AI monitoring features to ensure OpenAI cannot access or retain confidential customer data.

Lexis+ AI was built with a development philosophy that ensures rock-solid privacy controls from the ground up, and our data protection program ensures that no aspect of safeguarding our customers’ valuable information is overlooked. Our development team includes technical experts in application security who work hand-in-hand with our product development and operations teams. This ensures that Lexis+ AI delivers great functionality while meeting our rigorous standards for privacy and security.

We recently rolled out the second generation of our legal generative AI Assistant on Lexis+ AI. The new version delivers an even more personalized experience that supports legal professionals in making informed decisions faster, generating outstanding work, and freeing up time to focus on other efforts that drive value. All existing Lexis+ AI customers have access to the enhanced AI Assistant.

Learn more about this groundbreaking legal AI tool or sign up for a free trial of Lexis+ AI now.