Overview of Artificial Intelligence (AI) in Employment Decisions
By: Damon W. Silver, Gregory C. Brown, Jr., and Cindy Huang, JACKSON LEWIS P.C.
AI tools are fundamentally changing how people work. Tasks that used to be painstaking and time-consuming can now be completed near-instantaneously with the assistance of AI. Organizations of various sizes and across an array of industries have begun leveraging the benefits of AI to improve their hiring and performance management processes.
For instance, organizations are using AI tools at multiple stages of hiring and onboarding, and increasingly in performance management as well.
One AI use that is quickly gaining popularity, and the focus of this article, is managing employment decisions and employee performance. For instance, organizations are using AI to evaluate employee engagement and flight risk, and to monitor employees’ productivity and develop plans to boost it. AI tools enable employers to continuously collect and analyze new data (such as employees’ communications, browsing history, search history, and email response times) and rapidly turn that analysis into roadmaps for better performance management outcomes, while also easing the administrative burden of manually providing regular performance feedback.
The benefits of AI tools are undeniable, but so too are the associated risks. Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will quickly land themselves in hot water.
AI Tools Can Be Inaccurate Sources of Information
AI tools sometimes create outputs that are nonsensical or simply inaccurate, commonly referred to as AI hallucinations. These hallucinations can occur when AI tools are trained using limited, incomplete, or unrepresentative datasets.
An infamous example of AI hallucinations in the legal context is Mata v. Avianca, a case in the Southern District of New York. Attorneys involved in this case were sanctioned when they “submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”1 The court ultimately imposed a monetary penalty and sanctions on the attorneys and their law firm. Mata is one of many examples of the potential exposure that can stem from a lack of due diligence in confirming AI outputs.
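By way of illustration, consider a minimal sketch of the kind of safeguard this episode suggests. The regular expression and workflow below are hypothetical, not drawn from the case: AI-drafted text is scanned for citation-like strings so that each one can be routed for human verification against a primary source before the draft is relied on.

```python
import re

# Rough pattern for U.S. reporter citations such as "678 F. Supp. 3d 443".
# Illustrative only; a production pattern would be far more thorough.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.\s]{0,20}?\d+[a-z]{0,2}\s+\d{1,4}\b")

def flag_citations_for_review(ai_output: str) -> list[str]:
    """Return citation-like strings found in AI-generated text.
    Every match must be verified by a human against a primary
    source before the text is relied on or filed."""
    return [match.group(0) for match in CITATION_PATTERN.finditer(ai_output)]

draft = "See Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
for citation in flag_citations_for_review(draft):
    print(f"VERIFY BEFORE USE: {citation}")
```

A filter like this does not validate anything by itself; its value is forcing a human checkpoint between the AI tool's output and any reliance on it.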
Be Wary of the Potential for Biased or Discriminatory Results
In April 2023, several federal agencies, including the Consumer Financial Protection Bureau, Department of Justice—Civil Rights Division, U.S. Equal Employment Opportunity Commission (EEOC), and Federal Trade Commission, issued a joint statement regarding their efforts to protect against bias in automated systems and AI.2
The agencies highlighted potential sources of discrimination when using AI tools, including unrepresentative or imbalanced training datasets, the opacity of “black box” models that makes it difficult to assess whether outcomes are fair, and tools designed on flawed assumptions about their users and the real-world contexts in which they will be deployed.
A recent experiment conducted with ChatGPT illustrated how these embedded biases can manifest in the performance management context.3 The experimenter asked ChatGPT to draft performance feedback for a range of professions with limited prompts about the employee’s identity. In the written feedback, ChatGPT sometimes assumed the employee’s gender based on the profession and traits provided in the prompt. When the prompt included the employee’s gender, ChatGPT wrote longer but more critical feedback for women than for men. Because ChatGPT and other generative AI tools are trained on historical data, they can perpetuate biases in performance feedback when an employee’s characteristics do not match the patterns in that training data.
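A hedged sketch of how an organization might run a similar check on its own tooling follows. Here, generate_feedback() is a hypothetical wrapper around whatever model is in use, and a real audit would score tone and criticality, not just length.

```python
import statistics

def generate_feedback(prompt: str) -> str:
    # Hypothetical wrapper around the organization's AI tool;
    # replace with a real model call before running the audit.
    raise NotImplementedError

def audit_feedback_length_by_gender(profession: str, trials: int = 20) -> dict[str, float]:
    """Generate feedback from prompts that are identical except for the
    employee's stated gender, and compare average word counts."""
    averages: dict[str, float] = {}
    for pronoun in ("She", "He"):
        prompt = (f"{pronoun} is a {profession}. "
                  "Write brief performance feedback for this employee.")
        word_counts = [len(generate_feedback(prompt).split()) for _ in range(trials)]
        averages[pronoun] = statistics.mean(word_counts)
    return averages

# Example: audit_feedback_length_by_gender("software engineer")
# A materially longer or more critical average for one gender is a signal
# that the tool's outputs need review before being used in evaluations.
```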
AI Tools Can Create Data Privacy and Security Risks
AI tools used in the employment context frequently have access to significant volumes of sensitive information held by the organization, which creates substantial data privacy and security risk.
If an AI tool has access to employees’ communications and their personnel files, for example, use of that tool will likely result in a significant expansion in the number of files maintained by the organization that contain confidential information, as users can seamlessly pull data from numerous sources into new files (i.e., the AI tool’s outputs) that are then saved, emailed, and distributed. Those new files expand the organization’s data breach footprint and could give rise to privacy claims. They also complicate data mapping and classification, making it harder to determine where confidential information is stored, how it is labeled, how it is used, and whether it is disclosed. And they present compliance challenges under data privacy laws like the California Consumer Privacy Act (CCPA):4 if a former employee requests deletion of their personal information, for example, the organization must account for personal information stored in the AI tool’s outputs.
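One way to keep such deletion requests tractable is sketched below, under the assumption that the organization can tag each AI output with the data subjects whose personal information it draws on. The OutputIndex class and its methods are illustrative, not a real product API.

```python
from collections import defaultdict

class OutputIndex:
    """Maps each data subject (e.g., an employee ID) to the AI-generated
    files that incorporate their personal information, so that a deletion
    request can locate every derived copy."""

    def __init__(self) -> None:
        self._files_by_subject: defaultdict[str, set[str]] = defaultdict(set)

    def record_output(self, file_path: str, subject_ids: list[str]) -> None:
        # Call whenever the AI tool saves a new output file.
        for subject_id in subject_ids:
            self._files_by_subject[subject_id].add(file_path)

    def files_to_review(self, subject_id: str) -> set[str]:
        # Every file here must be deleted or redacted to honor the request.
        return set(self._files_by_subject.get(subject_id, set()))

index = OutputIndex()
index.record_output("reports/q3_flight_risk.docx", ["emp_1042", "emp_2207"])
print(index.files_to_review("emp_1042"))  # {'reports/q3_flight_risk.docx'}
```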
Depending on what AI tool the organization uses, the organization’s data could be disclosed to external parties or even to members of the organization that should not have access to it. Some AI tools use public web searches or third-party applications to assist with generating their outputs, which could cause the organization’s confidential information to be disclosed to third-party providers.
In addition, if the AI tool sources data based on the user’s permissions, and those permissions are overly expansive or erroneous (i.e., the user has access to files they should not), the organization could face significant unauthorized internal access to data, along with potential misuse and external disclosure of that data. An employee with overexpansive permissions, for example, may, intentionally or unintentionally, prompt an AI tool to disclose a co-worker’s performance evaluation or medical records.
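A minimal sketch of the corresponding safeguard, assuming hypothetical Document and User types: documents are filtered against the organization's access-control lists before any content reaches the AI tool, so that an overbroad retrieval is caught at the last step. This is a backstop, not a cure; the underlying permissions still need periodic least-privilege audits.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    path: str
    allowed_roles: set[str]  # ACL on the underlying file

@dataclass
class User:
    user_id: str
    roles: set[str] = field(default_factory=set)

def retrievable_documents(user: User, corpus: list[Document]) -> list[Document]:
    """Filter the corpus to documents the user is entitled to see
    before any content is passed to the AI tool."""
    return [doc for doc in corpus if doc.allowed_roles & user.roles]

corpus = [
    Document("hr/evals/coworker_eval_2024.pdf", {"hr_admin"}),
    Document("wiki/onboarding_faq.md", {"all_employees"}),
]
user = User("emp_1042", roles={"all_employees"})
print([d.path for d in retrievable_documents(user, corpus)])
# ['wiki/onboarding_faq.md'] -- the co-worker's evaluation never reaches the model.
```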
The complete article includes guidance on new AI regulations and state and local action, along with mitigation strategies and insights into managing these risks going forward. Subscribers may follow this link to read it in Practical Guidance.
Not yet a Practical Guidance subscriber? Sign up for a free trial to read the complete article.
Damon W. Silver is a principal in the New York City, New York, office of Jackson Lewis P.C. and a Certified Information Privacy Professional (CIPP/US).
Gregory C. Brown, Jr. is an attorney in the New York City, New York, office of Jackson Lewis P.C. His goal is to be a strategic partner in all aspects of workplace management to ensure his clients can focus on running their business effectively.
Cindy Huang is an attorney in the New York City, New York, office of Jackson Lewis P.C. Her practice focuses on representing employers in workplace law matters, including preventive advice and counseling.
To find this article in Practical Guidance, follow this research path:
RESEARCH PATH: Labor & Employment > Screening and Hiring > Practice Notes
For an overview of current practical guidance on generative artificial intelligence (AI), ChatGPT, and similar tools, see
> GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RESOURCE KIT
For practical guidance on using AI in the workplace, see
> ARTIFICIAL INTELLIGENCE IN THE WORKPLACE: BEST PRACTICES
For a survey of enacted state and notable local legislation regarding AI, see
> ARTIFICIAL INTELLIGENCE LEGISLATION STATE LAW SURVEY
For information on federal, state, and local legislation on the use of AI, see
> ARTIFICIAL INTELLIGENCE LEGISLATION TRACKER (2024)
For a look at the legal landscape surrounding biometrics in the employment context, see
> BIOMETRICS WORKPLACE COMPLIANCE AND BEST PRACTICES FOR EMPLOYERS
For a listing of laws related to biometric privacy in all 50 states and the District of Columbia, see
> BIOMETRIC PRIVACY STATE LAW SURVEY
For a sample AI workplace policy, see
> ARTIFICIAL INTELLIGENCE (AI) DRIVEN TOOLS IN THE WORKPLACE POLICY (WITH ACKNOWLEDGMENT)
For more resources on screening and hiring, see
> SCREENING AND HIRING RESOURCE KIT
1. Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
2. U.S. Equal Employment Opportunity Commission, Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (Apr. 2023).
3. Kieran Snyder, ChatGPT Writes Performance Feedback, Textio (Jan. 25, 2023).
4. Cal. Civ. Code § 1798.100 et seq.