The emergence of new Artificial Intelligence (AI) tools is shaking up the global corporate marketplace, both because of AI’s extraordinary business potential and because of its immediate dangers. Perhaps no group of corporate executives is more focused on getting their arms around the risks and rewards of deploying this technology than corporate compliance professionals.
AI-powered solutions offer the promise of greater workforce efficiency by automating certain tasks, along with improved business decision-making aided by objective tools. This is proving true in the corporate compliance world, where AI tools are increasingly able to help compliance officers identify risks and monitor the implementation of existing compliance programs. AI can even be used to identify trends in compliance data and to support internal corporate investigations of potential misconduct.
These are exciting opportunities for AI to help compliance professionals work more efficiently and effectively, but the current generation of commercially available AI tools also has corporate compliance professionals on alert for potential misuse by employees.
“We’re excited about all of the (AI tools) coming to the fore,” said Liz Atlee, chief ethics and compliance officer at CBRE, in an interview with The Wall Street Journal. “We are trying to create some guardrails around that. We’re putting in place a policy with regards to how you use it, where you use it, what can be used and cannot be used.”
Other corporate compliance executives are also seeking to deploy AI tools, but only after taking a deliberately cautious approach to implementation.
“We are in the process of implementing a different AI program that is similar to ChatGPT,” said Sidney Majalya, chief risk officer at Binance.US, at a recent WSJ Risk & Compliance Forum. “What we’re going to do is basically take a stepwise approach and I think the first teams that will use it in the company are technical teams, people who are writing code. But as we think about rolling it out to other parts of the company, we have to be very careful. We’re in a highly regulated space and everything we do is going to come under scrutiny from our vast number of regulators.”
This caution is clearly warranted in light of the known risks of relying on commercial AI tools now in the hands of workers, including ChatGPT. By now, all corporate legal and compliance executives should be well aware of the cautionary tales of lawyers who used ChatGPT for legal research, only to discover that the law review citations or case precedents provided by the AI engine were completely fabricated.
LexisNexis® is focused on providing information resources and solutions that help corporate compliance professionals realize the benefits of AI-powered tools while managing the serious immediate risks of this breakthrough technology.
For example, the Practical Guidance team has created the Generative Artificial Intelligence (AI) Resource Kit, which aggregates a variety of resources regarding the use of generative AI tools. One of these resources is an insightful article that reviews the National Institute of Standards and Technology (NIST) Risk Management Framework for the use of AI in a trustworthy manner, including guidance on: Validity and Reliability; Safety; Security and Resiliency; Transparency and Accountability; Explainability and Interpretability; Privacy; and Fairness and Bias.
And to help deliver the promise of secure AI tools to legal and compliance professionals, LexisNexis recently unveiled Lexis+ AI™, a generative AI platform that is built on the industry’s largest repository of accurate and exclusive legal content. This means it will provide legal and compliance professionals with trusted, comprehensive results that are backed by verifiable and citable authority.
The opportunity to pair unsurpassed legal content with breakthrough generative AI technology is one that could redefine the day-to-day lives of corporate legal and compliance professionals. Specific applications include conversational search, insightful summarization and intelligent legal drafting capabilities, all supported by state-of-the-art encryption and privacy technology to keep sensitive data secure and deliver reliable results.
We are committed to the responsible development of AI tools with a focus on reliability, consistency and data security. We invite you to join us on this journey by signing up to be a Lexis+ AI Insider, through which you will be the first to know about these AI-powered solutions, gain exclusive access to thought leaders, learn more about how AI can responsibly support the practice of law and corporate compliance, and much more.