At the recent Women, Influence & Power in Law 2024 Conference in Chicago, women working on corporate legal teams discussed a number of timely issues bearing down on in-house counsel today. One of the more intriguing threads of discussion emerged in a session focused on the adoption of generative artificial intelligence (Gen AI) technologies in their organizations.
“Sometimes there’s a misperception that certain highly regulated industries are slow to react to technology … (because) regulations can often hinder our business’ eagerness to try new things,” said Julia Riley, assistant general counsel at Bank of America, in a story published by LegalTech News. “But it can also provide road maps or existing frameworks that are already prepared and ready to address many of the issues that we see in things like AI.”
In-house counsel at companies across all sectors—and especially those in highly regulated industries—have been appropriately cautious in their adoption of Gen AI technologies. Unfortunately, there appears to be a misalignment emerging between top legal officers and other executives within some companies.
A September 2024 survey by Littler Mendelson PC gathered insights from more than 300 executives across the U.S. Among the findings: 52% of top legal officers said their organizations were not using AI tools in Human Resources, but only 31% of CEOs said their companies were not using Gen AI in hiring, and—most concerning—a mere 18% of HR executives said they were not using the tools.
“That suggests that 82% of Human Resources departments are using AI while about half of their legal chiefs don’t even know about it,” reported Law360. “These discrepancies among executives pose challenges for effective AI risk management.”
The survey found similar disconnects between in-house counsel and senior executives on other Gen AI applications, such as auditing and automated monitoring.
In-house counsel need to take the lead in rectifying this possible misalignment and ensuring their organizations have best practices in place for adopting Gen AI tools in the workplace.
Joseph O’Keefe, Edward Young and Hannah Morris—Practical Guidance contributors for LexisNexis®—published an insightful practice note, “Artificial Intelligence in the Workplace: Best Practices,” which contains some guidance on the legal implications of integrating Gen AI into the business:
If an employer permits its workers to use Gen AI for work-related purposes, it should train them on how to use the technology in a way that protects the employer’s business, legal and other interests. For example, it would be beneficial to inform trainees that they must still always comply with other employer policies, such as a policy against harassment and discrimination.
It’s important for employers to communicate their stance on the use of Gen AI explicitly to employees, either through a handbook or a standalone policy. Employers can address Gen AI in a new, dedicated policy or incorporate relevant language into an existing computer or electronic systems policy.
Employers should make clear that employees may only use Gen AI to enhance or assist in the performance of job-related tasks—improving productivity, efficiency and decision-making. For example, they should remind employees that all company policies involving non-discrimination, anti-harassment and confidentiality still apply when using Gen AI. Employers may also want to consider implementing an approval process whereby employees request use of a Gen AI tool from a designated point person.
Even after an employer sets parameters on Gen AI use in the company, it should regularly review whether the measures in place are working or need to be revised—for example, in response to employee non-compliance or lowered productivity. In addition, as more laws regulating Gen AI are passed and implemented, employers should make sure their policies comply with any specific jurisdictional requirements.
Finally, employers should develop a procedure for employees to report suspected violations of an employer’s AI policies, possible data breaches, a Gen AI system failure or instances where the Gen AI tool generates erroneous, discriminatory or harassing output. The employer should also notify employees that it may impose disciplinary consequences — up to and including termination — for employees who violate the Gen AI company policies.
The Practical Guidance team at LexisNexis has released the Generative Artificial Intelligence Resource Kit, a comprehensive collection of information resources that examine the key legal issues related to the adoption and use of Gen AI technologies. Key content for in-house counsel includes:
Get a free trial of Lexis Practical Guidance.
All of these resources are accessible to in-house legal teams via Lexis+® General Counsel Suite, which provides a vast collection of legal resources, breaking business and legal news, and Practical Guidance content.
Learn more about Lexis+ GC Suite or register for a free 7-day trial.