EEOC Issues Guidance on Use of AI in Employment Decisions

Lately, it seems we can hardly go a day without hearing about the rise of artificial intelligence (“AI”) and its potential to disrupt all manner of industries. But awareness of AI’s implications for our careers has only recently hit the mainstream. Many employees may be surprised to learn that a number of employers have been using AI to make employment decisions for some time, especially in the hiring process, and the number of employers using AI in the workplace is growing rapidly. Some employers are even using AI to make promotion decisions.

Perhaps sensing this growing trend, the Equal Employment Opportunity Commission (“EEOC”) published guidance last year regarding artificial intelligence and employer obligations under the Americans with Disabilities Act (“ADA”). Last week, the EEOC issued similar guidance (the “Recent Guidance”), this time regarding employers’ use of AI in their “selection procedures” (e.g., hiring, promotion, and termination) and the potential for disproportionate adverse effects (i.e., “disparate impact”) on applicant groups that are protected under Title VII of the Civil Rights Act of 1964 (“Title VII”).

The EEOC’s Recent Guidance provides broad definitions of key terms—software, algorithms, and artificial intelligence—and explains that the Uniform Guidelines on Employee Selection Procedures (“Guidelines”), which the EEOC adopted way back in 1978 (when businesses had more typewriters than personal computers), still apply and should guide employers that now use AI to make employment decisions.  The Recent Guidance also provides a series of questions and answers to help employers understand how to use AI in their “selection procedures” without running afoul of the law.

It explains that a “selection procedure” is any “measure, combination of measures, or procedure” employers use as a basis for an employment decision.  Thus, according to the EEOC, the Guidelines would apply to “algorithmic decision-making tools when they are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees.”

The Recent Guidance explains that employers can assess whether a selection procedure has a disparate impact on a particular group by determining whether the procedure selects individuals in a protected group “substantially” less often than individuals in another group. If the AI or algorithmic decision-making tool adversely affects applicants or employees of a particular race, color, religion, sex, or national origin, then the tool likely violates Title VII unless the employer can show that the selection procedure or criteria are “job related and consistent with business necessity” under Title VII.

The Guidelines provide a rule, the “four-fifths rule,” for determining whether an employer’s selection rate for one group is “substantially” different from the selection rate for another group. Under the four-fifths rule, one rate is substantially different from another if the ratio of the lower rate to the higher rate is less than four-fifths (80%). The EEOC’s Recent Guidance explains that the four-fifths rule is simply a “rule of thumb” and “may be inappropriate under certain circumstances,” such as where the AI tool makes a large number of selections, so that even smaller differences may reflect an adverse impact on certain groups, or where an employer’s actions disproportionately discourage individuals from applying on the basis of a Title VII-protected characteristic. Thus, employers cannot necessarily rely on the four-fifths rule to ensure compliance with Title VII.
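For illustration only, the arithmetic behind the four-fifths rule of thumb is straightforward. The short Python sketch below uses hypothetical applicant and selection numbers (the group labels and figures are invented for this example, not drawn from the EEOC guidance) to show how the selection rates of two groups are compared:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate: the share of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical figures for illustration; not taken from the EEOC guidance.
rate_group_a = selection_rate(selected=48, applicants=80)  # 0.60, or 60%
rate_group_b = selection_rate(selected=12, applicants=40)  # 0.30, or 30%

# Four-fifths rule of thumb: compare the lower rate to the higher rate.
ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

if ratio < 0.8:
    print(f"Ratio {ratio:.0%} falls below 80%: the rates are 'substantially' "
          "different under the rule of thumb, suggesting possible disparate impact.")
else:
    print(f"Ratio {ratio:.0%} meets the four-fifths threshold.")
```

In this hypothetical, the ratio is 50%, well below the four-fifths threshold. Even where a tool clears the 80% mark, however, the Recent Guidance makes clear that the rule of thumb is not a safe harbor.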

Additionally, the Recent Guidance confirms that employers may be held responsible for algorithmic decision-making tools that create a disparate impact, “even if the tools are designed or administered by another entity, such as a software vendor.”  In other words, an employer using AI to make hiring decisions may be liable under Title VII if the AI discriminates on a protected basis, such as gender or race, even if an outside vendor developed the AI.  Further, the Recent Guidance provides that employers “may be held responsible for the actions of their agents . . . if the employer has given them authority to act on the employer’s behalf” and that this liability may extend to “situations where an employer relies on the results of a selection procedure that an agent administers on its behalf.”  The Recent Guidance encourages employers that learn that an AI tool is creating a disparate impact to “take steps to reduce the impact or select a different tool in order to avoid engaging in a practice that violates Title VII.”

It is clear that AI’s presence in the workplace will continue to grow. As it does, employers should be mindful of the developing legal framework in this area and take appropriate measures to ensure that their AI tools are not unlawfully discriminating against applicants and employees. Employers should make sure that they understand the legal and statistical nuances of disparate impact discrimination and maintain human involvement in AI-assisted selection procedures. To do this, employers should partner with employment counsel who have expertise in this area and conduct privileged audits of their AI-assisted selection procedures to ensure that they comply with the law. Indeed, in the Recent Guidance, the “EEOC encourages employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently.”
