The Legal Risks of Using Artificial Intelligence in Hiring and Recruiting

By Allison P. Sues – SmithAmundsen LLC – www.salawus.com

As employers seek to reduce the cost and time of the hiring process through artificial intelligence (AI) tools, they should also be aware of the potential legal risks that come with merging recruitment and technological innovation. Employers are turning to AI to assist with many aspects of recruitment and hiring, including automating the sourcing of potential candidates, screening existing candidate pools, and deploying AI assessment tools, such as conversational chatbots and video interviewing platforms that measure a candidate's strengths based on factors such as facial expression, word choice, body language, and vocal tone.

While these AI tools can streamline and strengthen the recruitment process, they can also cause unintended disparate effects on protected classes and expose employers to discrimination claims. For example, an AI tool may prioritize candidates who live in the same zip code as the office because studies show that employees with favorable commutes stay longer with their employers. Such an automated selection, however, may screen out candidates from areas primarily composed of minorities and have a disparate impact on African American and Latino candidates. As another example, an AI tool may prioritize candidates whose profiles resemble those of the company's current successful employees, which could disadvantage women or minorities if the existing workforce skews toward white men in higher positions.

The EEOC is currently investigating two charges of alleged discrimination arising from AI recruitment and hiring tools, and more charges and lawsuits on this issue are expected to follow. The EEOC has made clear that employers using AI in their hiring process can be liable for unintended discrimination, and AI vendors regularly include non-liability clauses in their contracts with employers. Employers must therefore take steps to vet their AI tools and validate that they are not causing unintended discrimination in recruitment. Employers should test an AI algorithm's functionality in a pilot system to determine whether its results may be biased, as illustrated in the sketch below. Larger employers may designate an internal Chief AI Officer for this work; smaller employers may prefer to contract with a data scientist. Either way, these individuals should work with the employer's counsel to validate the data, assess it for bias, and determine the risk of legal liability, all while protecting the information under the attorney-client privilege.
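One common benchmark for this kind of pilot testing is the four-fifths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate falls below 80% of the highest group's rate, that is a common red flag for adverse impact. The minimal sketch below shows what such a check might look like; the groups and counts are purely hypothetical, and any real audit should be designed with counsel and a qualified data scientist.

```python
# Hypothetical four-fifths (80%) rule check on pilot screening results.
# The group names and counts below are illustrative only, not real data.

pilot_results = {
    # group: (applicants screened, applicants the AI tool advanced)
    "Group A": (200, 120),
    "Group B": (150, 60),
    "Group C": (100, 55),
}

# Selection rate for each group = advanced / screened.
rates = {group: advanced / screened
         for group, (screened, advanced) in pilot_results.items()}

# Compare each group's rate to the highest group's rate; a ratio
# below 0.80 is a common indicator of possible adverse impact.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} -> {flag}")
```

A ratio above 0.80 does not guarantee a tool is lawful, and a ratio below it does not prove discrimination; the four-fifths rule is a screening heuristic, which is why results like these should be reviewed with counsel under privilege.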

While AI in recruitment is not yet regulated at the federal level, Illinois has enacted a first-of-its-kind law, the Artificial Intelligence Video Interview Act. Effective January 1, 2020, the law requires employers who use AI to analyze video interviews of candidates to do the following:

  • Notify applicants that AI will be used in their video interviews.
  • Explain to applicants how the AI works and what characteristics it will track in evaluating their fitness for the position.
  • Obtain each applicant's consent to use AI to evaluate his or her candidacy.
  • Share the video interview only with those whose AI expertise is needed to evaluate the candidate, and otherwise keep the video confidential.
  • Destroy an applicant's interview video within 30 days of his or her request to do so.

The teeth of this Act remain uncertain, as it does not explicitly provide a private right of action or damages for violations. Regardless, employers should proceed cautiously and proactively when using AI in video interviews or at any other stage of the hiring and recruitment process.

If you have questions on this article or other employment law topics, please contact Allison Sues at 312.455.3951 or asues@salawus.com. Allison is a contributor to the Labor & Employment Law Update at www.laborandemploymentlawupdate.com.