Illinois Enacts AI Interview Law Amid an International Trend Toward Regulation

Imagine a future in which artificial intelligence (AI) does the recruiting and hiring at US companies. Every new hire will be the uniquely perfect candidate whose skills, personality, presence, temperament and work habits are a flawless match for the job. Performance management and poor performance become extinct, relics of an age in which humans brought primitive instincts, biases and flawed intuition to hiring and employment decisions. While introducing this technology poses risks and challenges for employers, manufacturers of AI software say that some version of that future may not be too far off. Mya, HireVue and Gecko are among the numerous AI platforms that retail employers are now leveraging to home in on and hire the best candidates more quickly. Generally speaking, AI interviewing products combine mobile video interviews with game-based assessments. The AI platform then analyzes the candidate’s facial expressions, word choice and gestures in conjunction with the game assessment results to determine the candidate’s work style, cognitive ability and interpersonal skills.
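
To make those mechanics concrete, the sketch below is a purely hypothetical Python illustration, assuming the platform reduces the video and game results to numeric signals and blends them into trait scores; none of the names or weights reflect any actual product.

    # A hypothetical sketch of an AI interview scoring pipeline; the signal
    # names and weights are illustrative assumptions, not any vendor's
    # actual implementation.
    from dataclasses import dataclass

    @dataclass
    class InterviewSignals:
        facial_expression: float  # 0-1 score inferred from the video
        word_choice: float        # 0-1 score from language analysis
        gestures: float           # 0-1 score from gesture and posture analysis
        game_assessment: float    # 0-1 score from the game-based assessment

    def score_candidate(s: InterviewSignals) -> dict:
        """Blend video and game signals into trait estimates (toy weights)."""
        return {
            "work_style": 0.4 * s.gestures + 0.6 * s.game_assessment,
            "cognitive_ability": 0.2 * s.word_choice + 0.8 * s.game_assessment,
            "interpersonal_skills": 0.5 * s.facial_expression + 0.5 * s.word_choice,
        }

    print(score_candidate(InterviewSignals(0.7, 0.8, 0.6, 0.9)))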

Numerous laws regulate such hiring practices. One in particular, the Illinois Artificial Intelligence Video Interview Act (the Illinois Act), was a direct response to AI hiring software. The law, the first of its kind in the United States, will impose restrictions on an employer’s ability to use AI to analyze videotaped interviews of job applicants. But the Illinois Act is just one piece in a worldwide patchwork of laws and pending legislation that govern, or seek to govern, the use of AI both in the recruiting process and from a general data protection perspective. These laws do not exist in a vacuum; in many instances several will apply at once, and retail employers must be prepared to manage all of them simultaneously to avoid compliance issues and resulting liability. Given the complexity of AI and the unsettled legal landscape, retail employers should approach the implementation of AI with extreme caution. This blog post will discuss the Illinois Act, US biometric privacy laws, anti-discrimination statutes such as Title VII and international privacy laws, all or some of which might apply when an employer uses AI during the recruiting process.

I. United States: the Illinois Act, Biometric Privacy Laws and Federal Legislation

A. The Illinois Act

The Illinois Act obligates an employer to disclose its use of AI to applicants, provide them with information about how the AI works and obtain their consent to use the AI platform. For more details on the Illinois Act, please refer to our firm’s privacy blog post here. The Illinois Act is silent on how aggrieved parties may enforce it or seek remedies, but the Illinois Biometric Information Privacy Act (BIPA) (discussed in more detail in our previous post here) does provide a private right of action.

B. State Biometric Privacy Laws

Even in the absence of laws such as the Illinois Act, employers should be cognizant of biometric privacy laws when using AI platforms during the interview process. AI recruiting platforms implicate these laws because the software collects biometric identifiers such as voiceprints, retina scans and facial scans. In addition to BIPA in Illinois, comprehensive biometric privacy laws regulate the collection and storage of biometric identifiers in Texas, California,[1] New York and Washington; however, only BIPA provides a broad private right of action for violations outside the scope of a data breach. A dozen other states have proposed similar legislation, and employers can expect that it is only a matter of time before these and other states enact laws that protect biometric data.[2] This patchwork of biometric privacy laws across the United States is likely to present a problem in the context of AI hiring software because the interviews are conducted via video call. An employer located in State A may have a video interview with an applicant in State B, and suddenly the biometric data that the software collects (facial features, voiceprints and so on) to analyze things like facial expressions or tone of voice is regulated by two separate and potentially incompatible laws. The problem is compounded in the likely event that the employer uses a third party’s AI platform, such as those previously mentioned: if the AI company is located in State C, a third law may come into play.

C. Title VII and State Anti-Discrimination Statutes

Retail employers must also be aware of the potential for discrimination claims arising out of the use of AI during the recruiting process. Even if an AI platform runs autonomously and does not allow for human input, its deep learning can be as problematic as a human decision maker because, as some industry experts assert, an algorithm is a reflection of its human creators and of the historical data on which it is trained. Thus, they argue, it is impossible to take human cognition and biases entirely out of the equation. Employers should scrutinize AI products carefully around issues of validation and should consult counsel about potential risks under garden-variety discrimination laws.
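
A toy simulation illustrates the experts’ point. In the hypothetical Python sketch below (all data fabricated), the protected attribute is withheld from training, yet a correlated proxy feature carries the historical bias straight into the model’s scores.

    # A toy illustration, with fabricated data, of how a model fit to biased
    # historical hiring decisions reproduces the disparity through a
    # correlated proxy feature, even when the protected attribute is withheld.
    import random

    random.seed(1)

    def candidate(group):
        # Hypothetical proxy: 90% of group "B" attended school "X" and 90% of
        # group "A" attended school "Y".
        usual = "X" if group == "B" else "Y"
        school = usual if random.random() < 0.9 else ("Y" if usual == "X" else "X")
        # Biased history: group "B" was hired at half the rate of group "A".
        hired = random.random() < (0.6 if group == "A" else 0.3)
        return {"school": school, "hired": hired}

    history = [candidate("A") for _ in range(1000)] + \
              [candidate("B") for _ in range(1000)]

    # "Train" on the school feature alone: the learned score for each school
    # is its historical hire rate. School "X" (mostly group B) scores far
    # lower, so the bias survives the removal of the protected attribute.
    for school in ("X", "Y"):
        rows = [r for r in history if r["school"] == school]
        print(f"school {school}: learned hire score "
              f"{sum(r['hired'] for r in rows) / len(rows):.2f}")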

D. Federal Algorithmic Accountability Act of 2019

The federal Algorithmic Accountability Act (the Act) is pending in the House Committee on Energy and Commerce as of the date of this publication. The Act would apply to any “covered entity”—specifically, a person, partnership or corporation—that is subject to the Federal Trade Commission’s (FTC) jurisdiction and that makes $50 million or more per year, possesses or controls personal information on at least one million people or devices, or primarily acts as a data broker that buys or sells consumer data. The Act would require the FTC to create rules for evaluating “highly sensitive” automated systems. Companies would then have to evaluate whether their algorithms are discriminatory or otherwise produce biased results. If this legislation passes, it will strengthen the FTC’s regulatory power and establish a foothold for the agency in the realm of AI.

II. The GDPR in the EU and Its Status in the UK Post-Brexit

A. GDPR

The General Data Protection Regulation (GDPR), which took effect in 2018, imposes requirements strict enough to make the lawful use of AI difficult. A comprehensive guide to the GDPR’s requirements, Hunton’s EU General Data Protection Regulation, a Guide for In-House Lawyers, is available for download here.

The GDPR requires companies to collect only the minimum personal data needed to fulfill the purpose for which it is collected and to use it only for that original purpose. It also prevents companies from collecting additional data before they understand the value of the data they have already collected. Because AI requires massive datasets to function, any restriction on the amount of data that can be collected makes implementing AI very difficult. The company must also be able to explain to applicants how the AI makes its decisions, which is especially problematic where even the creators of the AI platform may not fully understand how it decides because the algorithms are too complex for humans to parse. Additionally, Article 22 of the GDPR gives data subjects the right not to be subject to decisions based solely on automated processing that significantly affect them, including the right to obtain human intervention. Thus, if a company wishes to use AI to more efficiently evaluate a pool of 250,000 candidates, each of those candidates may request a human review of the company’s employment decision, which defeats the purpose of increased efficiency. Lastly, the GDPR imposes hard costs on companies that wish to use AI, including appointing data protection officers and obtaining applicants’ affirmative consent.

Given the GDPR’s tight requirements, companies in the United States that use AI interviewing technology can run into problems when they interview individuals located in the EU, even if those individuals are applying for US-based jobs. Returning to the earlier example, an employer in State A might use the AI platform of a third-party company in State B to conduct a video interview with an applicant in the EU. The employer then needs to ensure that it complies with the GDPR along with the laws of State A and State B.

B. UK, the GDPR and Brexit

The state of data privacy in the UK is murky given the uncertainty surrounding Brexit. Once the UK leaves the EU, the GDPR will no longer apply of its own force. The European Union (Withdrawal) Act 2018, however, incorporates the GDPR into UK domestic law, where it would operate alongside the UK’s Data Protection Act 2018 (DPA). If the UK and EU do not reach an agreement on withdrawal, the data privacy landscape in the UK becomes less clear. What is clear, though, is that UK companies that deal with EU citizens’ data will still need to adhere to the GDPR. Prudent UK-based companies will plan on the assumption that no deal will be reached (i.e., the worst-case scenario) and prepare contractual clauses incorporating GDPR data protections to which both parties must adhere.

III. Toward the Future: Advice for Retail Employers in Illinois and Beyond

Retail employers should approach the implementation of AI with caution regardless of where in the world they are located. Additionally, Illinois retail employers that implement AI in their recruiting should not dismiss the Illinois Act because of its silence on enforcement and remedies; rather, they should anticipate that this aspect of the statute will be tested in court. Likewise, employers in states or countries that do not currently regulate the use of AI or biometric information should still act now to establish policies and procedures if they use AI during the interview process or anticipate doing so in the future. When drafting these policies, employers should keep in mind that more than one law may apply to their use of AI during interviews. Moreover, employers should be cognizant of whether they might use AI to interview candidates who are located in the EU. When selecting a third-party AI platform, employers should consider running the software with a test group to observe the outcomes and troubleshoot any perceived biases, as sketched below.
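
One concrete way to structure that test-group review is an adverse-impact calculation modeled on the “four-fifths rule” in the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate below 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact. The Python sketch below uses hypothetical outcome numbers:

    # Four-fifths (80%) rule check on hypothetical test-group outcomes.
    # A selection rate below 80% of the highest group's rate is generally
    # treated as evidence of adverse impact and warrants closer review.
    outcomes = {"group_1": (40, 100), "group_2": (22, 100)}  # (selected, applicants)

    rates = {g: sel / apps for g, (sel, apps) in outcomes.items()}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest
        flag = "review for adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")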

Next, when establishing policies to address compliance with any applicable privacy laws, some questions that employers should ask include:

  • Where are the candidates that your company interviews usually located? Understanding your typical candidate pool tells you which laws could apply.
  • How will you notify applicants that your company uses AI during the interview process? AI privacy laws generally require that the employer disclose to candidates that the employer uses AI during the interview process. Although the Illinois Act does not require written notice, that is not the case for all laws. For example, companies subject to the GDPR must provide notice in writing.
  • What will you tell applicants about how the AI interview platform that you use works? AI privacy laws also require employers to disclose how the AI works. In order to do that, the employer must educate itself on the AI platform that it uses.
  • How will you obtain candidates’ consent to both the use of AI and collection of their biometric information? The Illinois Act, for example, requires written consent.
  • What safeguards will you impose to ensure that information from candidates’ interviews is secure? Data privacy laws generally impose further burdens on companies that fall victim to a data breach, so it is important to take proactive measures to prevent breaches in the first instance.
  • When and how will you destroy the information that the AI platform collects during the interview? Laws vary regarding when and how a company must destroy data. All destruction procedures must comply with the applicable laws.
  • If you are using a third-party AI platform, who will be liable if an applicant brings a privacy or discrimination claim? Companies may reduce their exposure to unintended consequences of the AI by including an indemnification clause in their contracts with third-party AI providers.

Although compliance is a challenge for employers that want to take advantage of the benefits of AI during the recruiting process, careful advance planning and troubleshooting can help avoid some of its pitfalls.

[1] A detailed discussion of the recent amendments to the California Consumer Privacy Act is available via our privacy blog here.

[2] Legislators in Delaware, Alaska, Florida, Arizona, Hawaii, Oregon, Michigan, Montana, Massachusetts, New Hampshire, New Jersey and Rhode Island introduced legislation that would have established protections for biometric data.
