Counseling Employers on the Legal Implications of Artificial Intelligence and Robots in the Workplace

Posted on 04-18-2018

By: Richard R. Meneghello, Sarah J. Moore, and John T. Lai, Fisher & Phillips LLP

This article provides guidance and best practices for counseling employers on the legal implications of integrating artificial intelligence (AI) and robots into their workplaces.

Reductions in Force Due to Artificial Intelligence and Robotics

Thanks to recent technological advances, AI algorithms and robots are developing the sophistication to displace human employees, causing many employers to engage in mass layoffs and reductions in force. For instance, Goldman Sachs recently laid off nearly 600 equity traders whose work has largely been supplanted by automated trading programs and a team of computer engineers.1

As employers continue to pursue disruptive technologies like AI and robotics that can reduce workforces, unions and employees will mount legal challenges in an effort to protect their positions. To ensure employers can implement these technologies with minimal repercussions, you should assess their risks and liabilities and help them put together a strategic plan. Consider the following measures to avoid liability from layoffs caused by AI and robotics.

  • Request a seat at the table to discuss integrating robotics and AI automation. Counsel your human resources and in-house counsel contacts to request a seat at the table when their organization discusses how to integrate robotic and AI automation into the workplace. With your help, your contacts can assist their organization with strategic plans that implement new technologies while limiting the company’s exposure.
  • Consider a voluntary ADEA-compliant termination plan. Before recommending that an employer carry out an involuntary reduction in force (RIF), encourage it to adopt a voluntary termination strategy, such as offering employees separation agreements that release the employer from all claims in exchange for a monetary sum. Be sure to adhere to applicable state and federal laws, such as the Age Discrimination in Employment Act (ADEA) and the Older Workers Benefit Protection Act (OWBPA), when separating employees age 40 and over. Additionally, encourage employees to consult with an attorney before accepting the offer to minimize the risk that a separated employee will later invalidate the agreement on the grounds of coercion or duress.
  • Ensure compliance with the Worker Adjustment and Retraining Notification Act (WARN Act) and any applicable state mini-WARN acts. Once the employer has completed any voluntary separations, you should assess what must be done to comply with WARN for the forthcoming RIF. The Worker Adjustment and Retraining Notification Act (WARN), 29 U.S.C. § 2101 et seq., requires businesses that have 100 or more full-time employees (or 100 or more employees, including part-time employees, who in the aggregate work at least 4,000 hours per week, excluding overtime) to issue 60 days’ advance written notice of a plant closing or mass layoff to (1) the affected non-union employees, (2) the representative of affected unionized employees, (3) the state or entity designated to carry out rapid response activities, and (4) the chief elected official of the unit of local government where the closing or layoff will occur. 20 C.F.R. § 639.6. A mass layoff is defined as a reduction during any 30-day period of either (1) 500 or more employees or (2) 50 or more employees, provided they constitute at least 33% of the employees at the worksite. 29 U.S.C. § 2101(a)(3)(A)–(B). (A minimal sketch of this threshold test appears after this list.) Many state laws impose additional WARN Act-like obligations, including California (Cal. Lab. Code §§ 1400–1408), Illinois (820 Ill. Comp. Stat. 65/1 to 65/99), New York (N.Y. Lab. Law § 860 et seq.), and New Jersey (N.J. Stat. Ann. §§ 34:21-1 to -7).
  • Determine whether reductions or plant closings are subject to mandatory bargaining. Reductions of unionized employees or plant closings may trigger additional obligations. For instance, employers may need to bargain over the implementation of AI and robotics, since some federal circuit courts and the National Labor Relations Board (NLRB) have held that implementing new technologies and automation that affect the terms and conditions of union jobs is a mandatory bargaining subject.2 Thus, instruct employers to notify the union of the changes well before implementing them so that the union has time to bargain over both the decision to make the operational change and its effects. Communications between the employer and the union should be in writing to create a record that the employer met its obligation to bargain in good faith under Section 8(d) of the National Labor Relations Act, 29 U.S.C. § 158(d).
  • Encourage employers to communicate the positive aspects of automation. Another important consideration when implementing technological advances is their effect on workplace morale. If the employer is not downsizing as a result of automating, or is only conducting limited layoffs, it should assure employees that its technological advances do not foretell a RIF. Indeed, AI can enhance jobs rather than replace them, such as when automation takes over repetitive manual work and frees up employees to do higher-level strategy work.
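For illustration only, the following minimal sketch encodes the federal mass layoff thresholds quoted above, assuming a single worksite and a single 30-day period. It is a rough screen, not legal advice: it omits employer-coverage questions, aggregation of rolling layoffs, the part-time employee exclusions, statutory exceptions, and all state mini-WARN variations.

```python
def warn_notice_likely_required(laid_off: int, site_headcount: int) -> bool:
    """Rough federal 'mass layoff' test under 29 U.S.C. § 2101(a)(3):
    during a 30-day period, 500 or more employees laid off, or 50 or
    more who constitute at least 33% of the worksite's employees.
    Illustrative only; a real analysis must also address coverage,
    aggregation, exemptions, and state mini-WARN acts."""
    if laid_off >= 500:
        return True
    return laid_off >= 50 and laid_off / site_headcount >= 0.33

# Hypothetical example: 60 layoffs at a 150-person site is 40% of the
# workforce, so 60 days' advance written notice is likely required.
print(warn_notice_likely_required(60, 150))   # True
print(warn_notice_likely_required(45, 200))   # False (below both thresholds)
```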

The Risks of Using AI for Screening and Hiring

Another way employers may utilize AI is to filter large pools of job applicants. For example, some employers use computer software programs to auto-screen resumes as a human recruiter would. Such programs use machine learning, algorithms, and/or natural language processing to identify the best candidates for employment. Similarly, employers can use an AI-powered recruiting assistant that allows applicants to communicate through messaging apps. One such program uses natural language processing to analyze data an applicant provides and then asks the applicant additional questions to help fill gaps in the applicant’s data. The applicant can also ask the virtual recruiting assistant questions. Other computer programs search social media to find information to fill the gaps in candidates’ profiles and then rank the candidates. Certain employers also have candidates play neuroscience-based computer games and use the results to determine which candidates to interview.
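The commercial screening tools described above are proprietary, but a simplified sketch can convey the underlying idea. The snippet below, with entirely hypothetical criteria and weights, scores and ranks resumes on job-related terms alone; note that it keys candidates by ID rather than by name, a point revisited below.

```python
# Hypothetical keyword-based resume scoring; the real recruiting tools use
# far more sophisticated machine learning and natural language processing.
JOB_CRITERIA = {          # illustrative job-related terms and weights
    "payroll": 3.0,
    "general ledger": 2.5,
    "reconciliation": 2.0,
    "excel": 1.0,
}

def score_resume(resume_text: str) -> float:
    """Sum the weights of the job-related terms found in a resume."""
    text = resume_text.lower()
    return sum(weight for term, weight in JOB_CRITERIA.items() if term in text)

def rank_candidates(resumes: dict) -> list:
    """Rank candidate IDs (not names) from highest to lowest score."""
    scored = [(cid, score_resume(text)) for cid, text in resumes.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(rank_candidates({
    "cand-001": "Managed payroll and general ledger reconciliation.",
    "cand-002": "Built Excel dashboards for marketing campaigns.",
}))  # [('cand-001', 7.5), ('cand-002', 1.0)]
```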

Some employers even use AI for conducting interviews. For instance, an employer might ask a candidate to record answers to interview questions, and a computer program would then analyze the interview (utilizing machine learning, algorithms, and/or natural language processing) for key words, the speed of speech, body language, or other relevant predictors of a candidate’s qualifications and future successes. The computer program would generate a report with suggestions that could then be used to determine whether a candidate should move on in the employer’s recruitment process.

While a sophisticated AI screening system may be able to eliminate unqualified candidates, system limitations and inherent biases may lead to employment discrimination lawsuits. Consider taking the steps below to limit exposure resulting from using AI in the screening and hiring process.

  • Ensure employers use proper and relevant data when developing AI systems to assist with screening and hiring. Make certain that the employer inputs appropriate data into its AI algorithm to avoid unintentionally discriminating against job candidates. Though an AI system itself does not have any biases, the information humans choose to use in the system may be biased, and the computer-generated results could perpetuate these biases. Accordingly, failing to use a proper data set can cause the algorithm to disproportionately factor in applicants’ protected characteristics and/or overrepresent certain populations, resulting in disparate impact claims.3
    Thus, data should only include information that is in line with business necessity and relevant to a particular skill or trait for the particular job. For example, an AI system could analyze data regarding the skills that have made previous employees successful and pattern match to find applicants with these characteristics. Data should not include characteristics such as gender, religion, race, marital status, or whether someone has children. It should include information from all populations, not a select few.
    Note that even data that is seemingly facially neutral could lead to an unintentional disparate impact. For example, (1) using data such as the distance an applicant lives from the potential job site could reflect the different ethnic or racial profiles of the surrounding towns and neighborhoods; (2) using the reputation of the colleges/universities from which an applicant obtained a degree could have a disparate impact on a protected class if equally qualified members of the protected class graduate from these colleges/universities at a substantially lower rate than those not in the protected class; or (3) using an AI system that screens applicants based on a hiring manager’s previous hiring decisions could recreate the historical bias of that hiring manager if the hiring manager’s decisions previously disfavored a particular protected class, and the AI codes this bias into the system.4 A quick diagnostic for spotting such a disparate impact appears in the sketch after this list.
  • Utilize AI instead of human intelligence to reduce the risk of certain human biases. Although AI filtering systems risk perpetuating human biases, as discussed above, these same applications, when used properly, can actually prevent humans from selecting an applicant for an interview based on conscious or subconscious biases. For example, a subconscious racial or ethnic bias could unintentionally influence a human’s decision to select or not select an applicant for an interview based on the applicant’s name.5 A properly designed AI system, by contrast, would not rely on an applicant’s name, as it is unlikely to infer race or ethnicity from a name. Similarly, AI systems could eliminate the subconscious human bias for or against applicants with a particular appearance and eliminate the possibility of a hiring manager selecting applicants who share his or her gender, race, or other protected characteristic. To the extent possible, advise the employer to use an AI filter that is simple enough for the human resources department to easily understand, implement, and, if necessary, defend in litigation.
  • Use screening filters specific to the position the employer seeks to fill. Ensure that the employer tailors any AI screening system to the particular job the company seeks to fill. A screening filter used to select appropriate applicants for a finance position in New York will not be effective for filling a manufacturing job in Omaha, since the characteristics the employer seeks—such as employment history, education levels, resume phrases, and other unprotected categories—will vary from job to job. Accordingly, the employer should customize the search criteria to different job openings to avoid hiring a candidate who is ill-suited for the role.
  • Review with employers any voice-recognition programs to guard against disability and ethnicity/national origin discrimination issues. Voice-recognition software that utilizes AI to screen oral interviews may not be able to distinguish between a candidate who interviews poorly or is unqualified and an interviewee with a speech disability, mental disability, or an accent associated with national origin. Filtering out candidates on the basis of these protected characteristics is likely to result in disability or national origin discrimination claims.
  • Encourage employers to supplement AI use by personally screening applications. If the employer is concerned that its big-data screening algorithms will cause a disparate impact, encourage the employer to supplement the AI filter by screening job applications manually. This way, in the event of litigation, the employer can testify to an individualized, unbiased selection process.
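One rough, widely used diagnostic for the disparate impact discussed above is the four-fifths rule from the Uniform Guidelines on Employee Selection Procedures, 29 C.F.R. § 1607.4(D): a selection rate for any group that is less than 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The minimal sketch below applies that rule of thumb to hypothetical screening results; it is a screening heuristic, not a substitute for a validation study or legal analysis.

```python
def four_fifths_flags(selected: dict, applicants: dict) -> dict:
    """Flag any group whose selection rate falls below 80% of the
    highest group's rate (the EEOC four-fifths rule of thumb,
    29 C.F.R. § 1607.4(D)). A rough screen, not a legal conclusion."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    return {g: (rate / highest) < 0.8 for g, rate in rates.items()}

# Hypothetical example: the AI filter advanced 50 of 200 applicants in
# group A (25%) but only 9 of 100 in group B (9%); 9/25 = 0.36 < 0.8,
# so the filter's output warrants a closer look.
print(four_fifths_flags({"A": 50, "B": 9}, {"A": 200, "B": 100}))
# {'A': False, 'B': True}
```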

Health and Safety Issues Concerning Robots and Artificial Intelligence

Robotics and AI raise novel issues and concerns for employers regarding employee safety. There are currently no Occupational Safety and Health Administration (OSHA) standards specifically for the robotics industry. However, OSHA highlights general standards and directives applicable to employers utilizing robotics.6 OSHA also provides guidelines for robotics safety.7

Under the Occupational Safety and Health Act (OSH Act), a covered employer utilizing robotics—like any other employer the OSH Act covers—must conduct a “hazard assessment” in which it reviews working environments for potential occupational hazards. 29 C.F.R. § 1910.132(d). An employer that identifies a hazard must implement a “hazard control” in the following order of preference: hazard elimination, substitution, engineering controls, administrative controls, or, as a last resort, personal protective equipment. (A minimal sketch of this order of preference appears after the list below.) With this legal framework as background, consider taking the following actions to mitigate the risk of employee exposure to hazards and legal actions associated with robots:

  • Enlist an OSHA-trained attorney at the outset of implementation. As demonstrated above, the intersection between robotics operations and the law is complex, so you should advise employers to consult an attorney with expertise in workplace health and safety issues. Together, the OSHA-trained attorney and the company can develop an employee health and safety plan that minimizes the risk of a workplace accident while building a defense against employee claims should an incident occur.
  • Have the employer develop a basic understanding of the robot’s potential hazards and preventive measures the employer can take. Due to the complexity of sophisticated robots, the employer’s managers and supervisors are unlikely to understand their inner workings. As a result, it may be difficult for employers to identify and eliminate their potential hazards. Accordingly, have the employer train its management staff on the robot’s decision-making processes; what actions the robot could take and under what circumstances it would take such actions; and how to eliminate the hazard should the robot malfunction, such as the steps for shutting it down.
  • Know whom to contact when a robot misbehaves. Human errors can be addressed through discipline and retraining, but those traditional methods do not apply when the root cause of a workplace accident lies in a robot’s logic. Rather, the employer may need to consult highly trained engineers to understand why the robot malfunctioned and to correct its performance. If doing so is not feasible, the employer could replace a manufacturing line entirely, but given the significant cost and business disruption involved, an employer should order a complete replacement only as a last resort.
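To make the order of preference concrete, here is a minimal sketch that selects the most-preferred control an employer has determined to be feasible for an identified hazard. The hazard and feasibility inputs are hypothetical; feasibility itself is a fact-intensive judgment the sketch simply takes as given.

```python
# OSHA's hierarchy of controls, from most to least preferred.
HIERARCHY = [
    "elimination",             # remove the hazard entirely
    "substitution",            # replace it with something less hazardous
    "engineering controls",    # e.g., guarding, light curtains, interlocks
    "administrative controls", # e.g., procedures, training, scheduling
    "ppe",                     # personal protective equipment, last resort
]

def select_control(feasible_controls: set) -> str:
    """Return the most-preferred control the employer has determined
    is feasible for a given hazard."""
    for control in HIERARCHY:
        if control in feasible_controls:
            return control
    raise ValueError("No feasible control identified; reassess the hazard.")

# Hypothetical robot-arm pinch point: elimination and substitution are
# infeasible, but a physical barrier (an engineering control) is not.
print(select_control({"engineering controls", "ppe"}))  # engineering controls
```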

Dangers of Artificial Intelligence in Wearable Technology

From the Apple Watch to the Fitbit, wearable technology is becoming increasingly prominent in modern life. In the workplace, using AI to catalog and assess employee data can be a significant boon for employers, which can use AI systems to track worker movements to identify and rectify inefficiencies. Nevertheless, privacy and data security concerns abound when employers utilize such technology.

Privacy Issues

Consider the following measures to guard against privacy claims:

  • Ensure that employer monitoring does not violate employees’ reasonable expectation of privacy.
    • Monitoring employee locations. Few courts have considered employers’ right to monitor employees’ locations via GPS while using employer-owned property and vehicles, but thus far courts have not found that tracking employees’ locations in public areas violates their privacy rights.8 Some states, such as Connecticut, Conn. Gen. Stat. § 31-48d, require businesses to obtain worker consent before monitoring the location of employees. You should research state and local privacy laws and advise employers accordingly. Additionally, consider advising the employer to distribute an employee privacy policy providing notice of the employee tracking; the business reasons for doing so; the ways in which the employer will safeguard the employee’s data; and the limits on the monitoring, such as only tracking movements in public areas during working hours (a minimal sketch of such a limit appears after this list). Be sure the employer obtains an employee acknowledgment consenting to the monitoring.
    • Monitoring electronic and telephonic communications. Similarly, if employers are monitoring employee telephonic or electronic communications or website usage via wearable technology, employers must make sure that such monitoring complies with federal, state, and local privacy and other laws. To protect an employer’s right to access and monitor employee communications, employers should have clear written policies informing employees that they should not have any expectation of privacy in their use of company electronic systems and that the employer will monitor communications on company electronic systems.
  • Ensure compliance with the Health Insurance Portability and Accountability Act (HIPAA) and other relevant medical privacy laws. If employers utilize wearable technology to collect and store health information—such as to help the employer configure an employee wellness plan—you must counsel employers on compliance with HIPAA, Pub. L. No. 104-191, 110 Stat. 1936, and other relevant state privacy and electronic surveillance laws. These laws typically place limits on what data employers can collect and use and require employers to provide notice to employees regarding what personal information the employer will obtain, how the employer will use that information, and with whom the employer will share it. Be sure the employer honors its obligations under these laws and takes all necessary steps to protect its employees’ privacy.
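As a purely hypothetical illustration of the kind of monitoring limit such a policy might describe, the sketch below retains a GPS ping only if it was recorded in a public area during a policy-defined working-hours window. The window, the public-area flag, and the function names are all assumptions, not taken from any vendor’s product.

```python
from datetime import datetime, time

# Hypothetical working-hours window taken from the employer's written policy.
WORK_START = time(8, 0)
WORK_END = time(17, 0)

def retain_ping(timestamp: datetime, in_public_area: bool) -> bool:
    """Keep a GPS ping only if it was recorded in a public area during
    working hours, implementing the sort of policy limit described above."""
    return in_public_area and WORK_START <= timestamp.time() <= WORK_END

# Pings outside working hours, or from private areas, are discarded
# before they ever reach the employer's systems.
print(retain_ping(datetime(2018, 4, 18, 9, 30), in_public_area=True))   # True
print(retain_ping(datetime(2018, 4, 18, 22, 15), in_public_area=True))  # False
```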

Data Security Issues

Whenever employers gather data, including via wearable technology, they must consider the risk of data breaches and how to prevent them. As this area of law is continually evolving, ensure that the employer consults with an attorney who is well-versed in cybersecurity issues. You should also determine whether the employer has appropriate safeguards in place to prevent unauthorized intruders from obtaining private, personal employee data. For instance, IT departments should mask data collected so that it cannot be linked to a specific user and should use encryption. Additionally, consider implementing regular audits to ensure the employer’s data security protocols are legally compliant and up-to-date.
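As one hedged illustration of the masking step described above, the sketch below replaces raw employee identifiers with keyed pseudonyms before wearable data is stored, so the records cannot be linked to a specific person without the key. The key name and record format are hypothetical, and key management, key rotation, and encryption of the records themselves remain separate requirements.

```python
import hashlib
import hmac

# Hypothetical secret held in a key-management system, never stored
# alongside the masked data; without it, pseudonyms cannot be linked
# back to individual employees.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(employee_id: str) -> str:
    """Replace an employee ID with a keyed HMAC-SHA256 pseudonym so that
    stored wearable data cannot be tied to a named individual."""
    digest = hmac.new(PSEUDONYM_KEY, employee_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"employee": pseudonymize("E-10442"), "steps": 9132, "date": "2018-04-18"}
print(record)  # the raw employee ID never appears in the stored record
```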

Integrating AI into the Practice of Law

The amount of data that parties produce in discovery in today’s employment litigation can be staggering. Compounding this problem, attorneys are expected to review this data efficiently—quickly and at a low cost. The faster and more accurately a lawyer can locate useful information, the better and more cost-effectively the attorney will be able to develop his or her case. Because AI can analyze a larger quantity of information more thoroughly than humans can, and in a fraction of the time, attorneys are turning to AI more and more as a key component of their legal practices. Consider taking advantage of recent developments in AI in your own practice in the following ways:

  • Cull through e-discovery. AI technology used in the legal profession includes machine learning and natural language processing. In e-discovery, attorneys often utilize predictive coding, a process that uses algorithms to distinguish relevant from non-relevant documents. This process usually involves an attorney expert on the subject matter reviewing a sample set of documents (known as a seed set) from the whole set of documents and coding the documents for relevance, privilege, or other issues. The computer program then analyzes the determinations the attorney made on the seed set and learns how to select relevant documents from the larger pool of documents. Among other organizational tools, algorithms can rank documents in the order of relevance or use concept clustering, which groups together documents that share certain combinations of words. This type of AI can be extremely useful to attorneys involved in employment discrimination or wage and hour litigation. For it to be effective, however, attorneys must be properly trained in how to use AI. For example, the attorneys coding a sample set of documents should be experts on the subject area. If the initial set of documents is not coded accurately, then the data the AI tool produces may not be accurate. A minimal sketch of this seed-set workflow appears after this list.
  • Streamline contract review. AI can similarly be used in contract review. Software learns from contracts as they are uploaded into a database and then compares these contracts to those inputted by the attorney or end user. The software can then produce a report recommending changes to the attorney’s contract based on this comparison.
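For illustration, here is a minimal sketch of the predictive coding workflow described above, assuming the open-source scikit-learn library: an attorney-coded seed set trains a classifier, which then ranks the unreviewed pool by predicted relevance. The documents and labels are invented, and real e-discovery platforms layer on validation rounds, recall estimation, and privilege review.

```python
# Minimal predictive coding sketch (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set coded by a subject-matter-expert attorney.
seed_docs = [
    "Q3 overtime spreadsheet for warehouse staff",          # relevant
    "Timekeeping policy revision and meal break memo",      # relevant
    "Office holiday party catering order",                  # not relevant
    "IT ticket to replace the conference room projector",   # not relevant
]
seed_labels = [1, 1, 0, 0]  # the attorney's relevance calls

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Rank the unreviewed pool by predicted probability of relevance.
pool = [
    "Draft declaration regarding unpaid overtime claims",
    "Parking garage access codes for new hires",
]
scores = model.predict_proba(vectorizer.transform(pool))[:, 1]
for doc, score in sorted(zip(pool, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```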

Richard R. Meneghello is the Publications Partner for Fisher Phillips. He develops legal alerts, web articles, newsletter features, and blog posts for the Fisher Phillips website. Rich is also an accomplished litigator. He won a unanimous decision before the U.S. Supreme Court in the case of Albertsons v. Kirkingburg, an Americans with Disabilities Act case, as well as cases for clients at the Ninth Circuit Court of Appeals, the Oregon Supreme Court, and the Oregon Court of Appeals, along with trial victories in both state and federal courts. Sarah Moore is a partner at Fisher Phillips, in its Cleveland office. She enjoys a robust practice that crosses industries in the private and public sectors and routinely incorporates the insights and best practices from this diversity in experience into her work. Sarah thrives on handling highly sensitive and challenging issues and regularly works hand-in-hand with her clients addressing the full spectrum of labor and employment concerns. John T. Lai is an associate in the firm’s Irvine office. He practices in all areas of labor and employment law. John has experience in intellectual property matters, unfair competition, and complex litigation.


To find this article in Lexis Practice Advisor, follow this research path:

RESEARCH PATH: Labor & Employment > Investigations, Discipline, and Terminations > Discharge and Layoffs/RIFs > Practice Notes

Related Content

For more information on voluntary separation programs and alternatives to reductions in force (RIFS), see

> ALTERNATIVES TO REDUCTIONS IN FORCE (RIFS)

RESEARCH PATH: Employee Benefits & Executive Compensation > Employment, Independent Contractor, and Severance Agreements > Executive Separation Agreements & Severance Plans > Practice Notes

For a discussion of drafting separation agreements, see

> SEPARATION AGREEMENTS: DRAFTING AND NEGOTIATION TIPS (PRO-EMPLOYER)

RESEARCH PATH: Labor & Employment > Discrimination and Retaliation > Claims and Investigations > Practice Notes

For information on state laws concerning RIFs, see

> THE MASS LAYOFF AND PLANT CLOSING LAWS COLUMN IN INVESTIGATIONS, DISCIPLINE, AND TERMINATIONS STATE PRACTICE NOTES CHART

RESEARCH PATH: Labor & Employment > Investigations, Discipline, and Terminations > Discharge and Layoffs/RIFs > Practice Notes

For an overview of requirements under the Worker Adjustment and Retraining Notification Act (WARN Act), see

> WARN ACT COMPLIANCE CHECKLIST

RESEARCH PATH: Labor & Employment > Investigations, Discipline and Terminations > Discharge and Layoffs/RIFs > Checklists

For best practices on drafting policies concerning employee privacy when using electronic devices, including a sample policy, see

> CREATING POLICIES ON COMPUTERS, MOBILE PHONES, AND OTHER ELECTRONIC DEVICES

RESEARCH PATH: Labor & Employment > Employment Policies > Company Property and Electronic Information > Practice Notes

For more information on the risks of wearable technology, see

> UNDERSTANDING EMPLOYMENT PRIVACY ISSUES UNDER FEDERAL LAW

RESEARCH PATH: Labor & Employment > Privacy, Technology and Social Media > Monitoring and Testing Employees > Practice Notes

1. See Nanette Byrnes, As Goldman Embraces Automation, Even the Masters of the Universe Are Threatened, MIT Technology Review (Feb. 7, 2017).

2. See, e.g., Renton News Record, 136 N.L.R.B. 1294, 1297–98 (1962); NLRB v. Columbia Tribune Publ’g Co., 495 F.2d 1384, 1391 (8th Cir. 1974); Newspaper Printing Corp. v. NLRB, 625 F.2d 956, 964 (10th Cir. 1980).

3. See Executive Office of the President, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (May 2016), p. 14; Roger W. Reinsch & Sonia Goltz, The Law and Business of People Analytics: Big Data: Can the Attempt to Be More Discriminating Be More Discriminatory Instead?, 61 St. Louis U. L.J. 35, 40–42 (2016).

4. See Pauline T. Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 863, 873 (2017); Sofia Granaki, Autonomy Challenges in the Age of Big Data, 27 Fordham Intell. Prop. Media & Ent. L.J. 803, 826 (2017); Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 682, 689, 722 (2016); Federal Trade Commission, Big Data: A Tool for Inclusion or Exclusion? (January 2016), p. v.

5. See Anupam Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023, 1029 (2017).

6. See Robotics, Standards, Occupational Safety and Health Administration, Safety and Health Topics.

7. See Guidelines for Robotics Safety, OSHA Instruction STD 01-12-002 (1987).

8. See, e.g., Elgin v. St. Louis Coca-Cola Bottling Co., 2005 U.S. Dist. LEXIS 28976, at *7–11 (E.D. Mo. 2005); Gerardi v. City of Bridgeport, 2007 Conn. Super. LEXIS 3446, at *17–20 (Super. Ct. Dec. 31, 2007).