AI Regulations Hit New York City Employers

August 03, 2023 (4 min read)

By Kevin Hylton | LexisNexis Practical Guidance

Lawmakers and regulators at every level of government are rushing to formulate rules for the oversight of how Artificial Intelligence is used in every conceivable area of life and commerce. The most recent prominent AI-related regulatory development has hit businesses that recruit and hire employees in New York City.

New York City's Department of Consumer and Worker Protection issued final implementation rules for a first-of-its-kind law, passed by the City Council in 2021, that prohibits employers from using algorithm-based technologies — known as "automated employment decision tools" — for recruiting, hiring or promotions, unless the tool is audited for bias annually and the employer provides specific notices to individuals who are subject to screening by the tool.

“New York City’s law requiring employers to audit and notify candidates about the use of automated employment decision tools will be enforced beginning July 5,” reported HR Dive.

The new regulations apply to the use of AI tools at each stage of the hiring process, not just to the final hiring or promotion decisions. Employers can face fines of up to $500 for a first violation and up to $1,500 for each subsequent violation, with each day on which AI tools are improperly used constituting a separate violation.
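Because each day of noncompliance counts as its own violation, exposure accumulates quickly. A minimal sketch of the maximum possible penalty under that schedule (the function name and the assumption that the first day is the "first violation" are illustrative, not from the statute):

```python
def max_penalty(days_of_noncompliance: int) -> int:
    """Maximum fine for a run of daily violations (illustrative only).

    Assumes the first day is the first violation ($500) and every
    subsequent day is a subsequent violation ($1,500 each).
    """
    if days_of_noncompliance <= 0:
        return 0
    return 500 + (days_of_noncompliance - 1) * 1500


print(max_penalty(1))   # 500
print(max_penalty(30))  # 44000 — a single month of improper use
```

The point is less the exact arithmetic than the compounding structure: the per-day framing turns a small nominal fine into a substantial liability for any tool left running out of compliance.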

The law “is designed to protect employees during the hiring and promotion processes from unlawful bias by employers that rely on automated employment decision tools, (including) recruitment tools that read and select a job candidate’s resume and job application,” said Human Resource Executive.

The new law has New York City employers moving quickly to understand its requirements and ensure compliance, and employers nationwide are watching it as a potential template for similar state and local legislation in other jurisdictions.

“Employers have long been prevented from discrimination in hiring and employment decisions on the basis of protected characteristics, so in a way this is just an extension of a traditional area of employment law,” said Ryan Kurtz, an associate in Patterson Belknap’s Employment Group, who is based in New York City. “This law is about ensuring that AI tools are following those same rules and that employers aren’t simply throwing up their hands and saying ‘these AI tools are out of my control’ to avoid liability for how they are used.”

The New York City law broadly defines automated employment decision tools to capture “any computational process, derived from machine learning, statistical modeling, data analytics or artificial intelligence, that issues simplified output, including a score, classification or a hiring/screening recommendation.”

“It’s really about the use of automated tools that boil down a person to a score or a recommendation on an employment decision,” said Kurtz. “The law does permit employers to use AI as a guide without being subject to the law and the bias audit requirement, but only if the AI does not substantially assist in the hiring process.”

Kurtz identified three ways that the use of AI in the hiring process would invoke the application of the new law:

  1. If the employer relies solely on a simplified output (e.g., score, tag, classification, ranking, etc.), with no other factors considered;
  2. If a simplified output is used as one of a set of criteria and is weighted more than any other criterion in the set; or
  3. If a simplified output is used to overrule conclusions derived from other factors including human decision-making.

“The law would, therefore, cover a computer program that eliminates certain applicants from consideration before they even reach an HR department, as well as technology that an employer allows to overrule a hiring manager’s recommendation that a particular candidate be interviewed,” according to a recent Patterson Belknap blog post. “But it would appear not to cover an algorithm that makes initial suggestions about whom to interview, as long as HR personnel reviews each application and the employer does not give more deference to the algorithm than to recommendations from individuals in the HR department.”

Kurtz spelled it out this way: If you’re just using AI to give you a preliminary sense of who might be a strong applicant prior to a rigorous employment review, that’s probably not going to be considered substantial assistance. But if the computer is running the show, then your process is going to be covered by the new law.

“For the time being, I would advise employers to err on the side of caution,” said Kurtz. “Any time that technology is being used to screen a resume and a human isn’t putting their eyes on it, you should probably assume that is an automated employment decision tool.”

I had the privilege of interviewing Kurtz on the latest episode of our "Practical Guidance: Labor and Employment Series" podcast, where we invite experts to provide insights on cutting-edge labor and employment issues in the law. Listen now or download the episode to hear more about New York City's new regulations addressing the use of AI in hiring.