By: Ellen M. Taylor, SLOAN, SAKAI, YEUNG, & WONG LLP
Today, a variety of artificial intelligence (AI) employment-assessment tools are available to assist employers with nearly every stage of the hiring cycle, from recruiting and assessing job candidates to measuring the performance of current employees. There is AI software designed to screen applications and prioritize resumes based on key phrases relevant to the job,1 monitor and assign scores to employees based on their typing speed,2 find and recruit job candidates whose skills match a job posting,3 evaluate job applicants’ and current employees’ skills and potential through game-based tests,4 and even interview job candidates.5
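To make the first of these categories concrete, the following is a minimal Python sketch of how a keyword-based screening tool might prioritize resumes. The key phrases, weights, and function names are illustrative assumptions made for this article, not any vendor's actual implementation; commercial tools typically rely on far more sophisticated language models and scoring methods.

```python
import re

# Hypothetical job-related key phrases and weights; a real tool would derive
# these from the job posting or a trained model, not a hard-coded dictionary.
KEY_PHRASES = {
    "project management": 3,
    "contract negotiation": 2,
    "python": 2,
    "customer service": 1,
}

def score_resume(text: str) -> int:
    """Return a crude relevance score: the weighted count of key phrases."""
    text = text.lower()
    return sum(weight * len(re.findall(re.escape(phrase), text))
               for phrase, weight in KEY_PHRASES.items())

def prioritize(resumes: dict) -> list:
    """Rank resumes (name -> resume text) from highest to lowest score."""
    scored = [(name, score_resume(text)) for name, text in resumes.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Illustrative usage with made-up resume snippets.
print(prioritize({
    "Candidate 1": "Led project management and contract negotiation teams.",
    "Candidate 2": "Customer service representative; some Python scripting.",
}))
```

Even in this toy form, the design choice matters: if the chosen phrases correlate with a protected characteristic, a ranking of this kind can reproduce bias at scale, which is the concern the laws discussed below are meant to address.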
AI employment-assessment tools are often marketed as tools that can reduce costs, decrease the risk of human error, and reduce or eliminate bias during the hiring process. However, AI employment-assessment tools can result in discriminatory hiring practices if they are not carefully designed, implemented, and monitored. Although AI technology is developing more rapidly than the laws regulating its use, there have been recent legal developments relating to the regulation of AI employment-assessment tools.
On March 15, 2022, California’s Civil Rights Council (CRC), formerly known as the Fair Employment and Housing Council,6 published draft modifications to regulations that, if enacted, would expand employers’ liability arising from their use of AI, such as automated-decision systems, to evaluate job applicants and employees.7 The CRC most recently published a revised version of these draft regulations dated July 28, 2022 (Draft Regulations).8 At its December 13, 2022, meeting, the CRC Algorithms and Bias Hearing Subcommittee announced that it was continuing to workshop these regulations in preparation for moving the next iteration of the Draft Regulations into the rule-making process.9
The Draft Regulations define an automated-decision system (ADS) as a “computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”10
The Draft Regulations also expand the definition of an employer’s agent to include any person or third party that provides “administration of automated-decision systems for an employer’s use in making hiring or employment decisions that could result in the denial of employment or otherwise adversely affect the terms, conditions, benefits, or privileges of employment.” This means that employers could be liable for actions taken by third parties that the employer hires to administer ADS decision-making tools if those decision-making tools have a discriminatory impact.11
The Draft Regulations expand aiding and abetting liability for unlawful employment discrimination by defining unlawful assistance, unlawful solicitation or encouragement, and unlawful advertising to include “the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, on behalf of a person or individual for an unlawful purpose, such as limiting, screening out, or otherwise unlawfully discriminating against applicants or employees based on protected characteristics.”12
The Draft Regulations would require employers and all other covered third-party entities to retain any personnel or other employment records “dealing with any employment practice and affecting any employment benefit of any applicant or employee (including all applications, personnel, membership, or employment referral records or files and all machine-learning data)” for four years.13
Within the past three years, Illinois, Maryland, and New York City have each enacted laws regulating how employers may use AI in the hiring process.
Illinois was the first state to enact a law specifically regulating the way employers can use AI to conduct employee interviews.14 Illinois’ Artificial Intelligence Video Interview Act15 (the Video Interview Act) went into effect in January 2020 and was amended effective January 1, 2022.16 The Video Interview Act requires employers that are “considering applicants for positions based in Illinois” to do all of the following before they ask applicants to submit video interviews: (1) notify the applicant that AI may be used to analyze the video interview and consider the applicant’s fitness for the position; (2) provide the applicant with information explaining how the AI works and what general types of characteristics it uses to evaluate applicants; and (3) obtain the applicant’s consent to be evaluated by the AI program.17
Employers who rely “solely upon an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview” must also collect and report certain demographic data to the Illinois Department of Commerce and Economic Opportunity.18
Maryland’s H.B. 1202, enacted in March 2020, imposes specific requirements on employers that use facial recognition technology when interviewing job applicants.19 The law requires employers to obtain an applicant’s signed consent before the employer may use facial recognition technology “for the purpose of creating a facial template” during the interview.20
The New York City Council enacted Local Law 144 of 2021 on December 11, 2021.21 Due to the high number of public comments on this local law, the Department of Consumer and Worker Protection (DCWP) has postponed enforcement of this law to July 5, 2023.22 When implemented, the law will require employers to perform “bias audits” on any “automated employment decision tool before use of said tool” and to notify employees and candidates who reside in New York City about the employer’s use of “such tools in the assessment or evaluation for hire or promotion,” and the “job qualifications and characteristics” that the ADS will be evaluating.23
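One common building block of such a bias audit is a comparison of selection rates across demographic categories. The following Python sketch illustrates that idea under simplifying assumptions; the function name, data format, and sample data are hypothetical, and this kind of back-of-the-envelope calculation is not a substitute for an audit that satisfies the law’s actual requirements.

```python
from collections import Counter

def bias_audit(outcomes):
    """Compute selection rates and impact ratios by demographic category.

    outcomes: list of (category, selected) pairs, where selected is True
    if the automated tool advanced the candidate.
    Returns {category: (selection_rate, impact_ratio)}.
    """
    totals = Counter(cat for cat, _ in outcomes)
    advanced = Counter(cat for cat, selected in outcomes if selected)
    rates = {cat: advanced[cat] / totals[cat] for cat in totals}
    # Impact ratio is measured against the most-selected category.
    top_rate = max(rates.values()) or 1.0  # guard: no one advanced at all
    return {cat: (rate, rate / top_rate) for cat, rate in rates.items()}

# Hypothetical sample data: (self-reported category, advanced by the tool?).
sample = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
for cat, (rate, ratio) in bias_audit(sample).items():
    print(f"Category {cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this hypothetical data, candidates in category B advance at one-third the rate of candidates in category A, which is precisely the kind of disparity a bias audit is meant to surface before the tool is used.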
On October 28, 2021, the EEOC launched an “Initiative on Artificial Intelligence and Algorithmic Fairness.”24 In May 2022, the EEOC issued technical guidance (the EEOC Guidance) that includes questions and answers about when the use of AI may “violate existing requirements under Title I of the Americans with Disabilities Act (ADA).”25 Through this technical guidance, the EEOC highlights three of the most common ways that an employer’s use of AI might violate the ADA.
First, an algorithm that an employer uses may fail to “provide a ‘reasonable accommodation’ that is necessary for a job applicant or employee to be rated fairly and accurately.”26 For example, an employer might violate the ADA if it requires a job applicant whose disability interferes with manual dexterity to take a timed knowledge test that must be completed on a keyboard or trackpad, without offering an accommodation or an alternative version of the test (unless providing one would result in undue hardship).27
Second, an algorithm that the employer uses may intentionally or unintentionally screen out an individual with a disability, even though that individual is able to do the job with a reasonable accommodation.28 This could happen, for example, if interviewing software designed to analyze an applicant’s problem-solving skills gives lower marks to a candidate whose speech impediment makes it difficult for the software to interpret the candidate’s responses according to the speech patterns it has been trained to recognize.29
Third, an algorithmic decision-making tool that an employer uses to evaluate job applicants or employees might violate “the ADA’s restrictions on disability-related inquiries and medical examinations.”30 This type of violation might occur if the AI assessment tool asks questions that directly inquire whether the individual has a disability, or that are likely to elicit information about whether the individual has a disability.31
Notably, the Strategic Enforcement Plan that the EEOC released on January 10, 2023, which sets out the EEOC’s priorities for 2023 through 2027, repeatedly references the use of artificial intelligence in hiring and states that the EEOC plans to “focus on employment decisions, practices, or policies in which covered entities’ use of technology contributes to discrimination based on a protected characteristic,” including “the use of software that incorporates algorithmic decision-making or machine learning” and AI.32
As the laws regulating the use of AI employment-assessment tools develop further, there are several best practices that employers can keep in mind.
First, employers should be aware of whether and how they are already using AI to make hiring decisions.33 This includes taking the time to learn which of the organization’s processes depend on AI and which judgment calls are being made by AI rather than by humans.34 Employers should consider appointing a task force to create an organization-wide policy regarding the use of AI employment-assessment tools.35
Second, employers should make sure that any AI employment-assessment algorithms they are using are not resulting in unlawful discrimination.36 If the employer itself is creating the AI employment-assessment tools that it will use, the employer should consider seeking input from people of diverse backgrounds when designing the algorithm that the software will use.37
Third, employers should alert job applicants and employees when they are being evaluated using AI decision-making tools and notify those individuals that reasonable accommodations are available to them if they have a disability.38
Lastly, employers should ensure that their staff members are trained to recognize requests for reasonable accommodations (e.g., a request for an alternative test format).39 If another company controls and administers the AI decision-making tool the employer is using, the employer should make sure that the outside company is forwarding requests for accommodation to the employer so that the employer can process them.40
As the use of AI in hiring and evaluating employees becomes more commonplace, employers should keep informed about legal developments relating to AI employment-assessment tools and ensure that they remain in compliance with applicable law.
Ellen M. Taylor is an attorney at Sloan, Sakai, Yeung, & Wong LLP, where she represents public and non-profit entities in labor, employment, and government law matters. She can be reached at etaylor@sloansakai.com.
To find this article in Practical Guidance, follow this research path:
RESEARCH PATH: Labor & Employment > Screening and Hiring > Articles
For an overview of current practical guidance on Generative AI, see
> GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RESOURCE KIT
For an overview of materials related to recruiting, screening, testing, hiring, and onboarding employees, see
> SCREENING AND HIRING RESOURCE KIT
> INTERVIEWING AND SCREENING JOB APPLICANTS
> SCREENING AND HIRING STATE PRACTICE NOTES CHART
> SCREENING, RECRUITING, INTERVIEWING, HIRING, AND ONBOARDING: TRAINING PRESENTATION
> ARTIFICIAL INTELLIGENCE AND ROBOTS IN THE WORKPLACE: BEST PRACTICES
> SCREENING, RECRUITING, INTERVIEWING, HIRING, AND ONBOARDING PROCEDURES CHECKLIST
1. Equal Employment Opportunity Commission, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022) (hereinafter EEOC Guidance).
2. Id.
3. Gary D. Friedman & Thomas McCarthy, Employment Law Red Flags in the Use of Artificial Intelligence in Hiring, Bus. Law Today (ABA Oct. 1, 2020).
4. See HireVue, Game-Based Assessments.
5. See HireVue Video Interview Software.
6. See California Civil Rights Department, Department Name Change (July 1, 2022).
7. California Civil Rights Council, Fair Employment & Housing Council Draft Modifications to Employment Regulations Regarding Automated-Decision Systems (Attachment B) (Mar. 15, 2022).
8. California Civil Rights Council, Fair Employment & Housing Council Draft Modifications to Employment Regulations Regarding Automated-Decision Systems (Attachment G) (July 28, 2022) (Draft Regulations).
9. See Video, Civil Rights Council: December 13, 2022 Meeting, at 2:35-2:40.
10. Draft Regulations, supra note 8.
11. Id.
12. Id.
13. Id.
14. Jeffrey Bosley et al., Illinois Becomes First State to Regulate Employers’ Use of Artificial Intelligence to Evaluate Video Interviews, JD Supra (Sept. 4, 2019).
15. 820 Ill. Comp. Stat. Ann. 42/1 et seq.
16. Bosley, supra note 14.
17. 820 Ill. Comp. Stat. Ann. 42/5.
18. Id.
19. Md. Code Ann., Lab. & Empl. § 3-717.
20. Id.
21. 2021 N.Y.C. Local Law No. 144.
22. NYC Consumer and Worker Protection, New Laws & Rules.
23. 2021 N.Y.C. Local Law No. 144; New Laws & Rules, supra note 22.
24. EEOC Guidance, supra note 1.
25. Id.
26. Id.
27. Id.
28. Id.
29. Id.
30. Id.
31. Id.
32. Equal Employment Opportunity Commission, Draft Strategic Enforcement Plan, 88 Fed. Reg. 1379 (Jan. 10, 2023).
33. Alysa Austin et al., AI Regulation in the U.S.: What’s Coming, and What Companies Need to Do in 2023, JD Supra (Dec. 12, 2022).
34. Id.
35. Id.
36. Dylan Walsh, MIT Sloan Sch. of Mgmt., How can human-centered AI fight bias in machines and people? (Feb. 2, 2021).
37. Id.
38. EEOC Guidance, supra note 1.
39. Id.
40. Id.
This article was previously published in Bender’s California Labor & Employment Bulletin, Vol. 2023, No. 3, March 2023.