According to recent Equal Employment Opportunity Commission (EEOC) guidance, released May 18, 2023, yes, artificial intelligence (AI) can discriminate in violation of Title VII of the Civil Rights Act of 1964 (Title VII). Title VII prohibits discrimination against employees on the basis of race, color, national origin, religion, and sex. The purpose of the EEOC guidance is to ensure employers comply with Title VII even as technology advances. The guidance specifically addresses an employer’s use of AI during the “selection process” (hiring, promotion, or firing), as well as in monitoring and evaluating employee activity. Employers should be aware of the risks of relying on AI for employment-related decisions.
Congress defined AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The EEOC provided some of the following examples of when an employer might use AI:
- Chatbots on hiring sites
- Resume screening software
- Software that monitors employees’ keystrokes
- Video interviewing software that evaluates a candidate’s facial expressions
Employers may assume that because AI is a machine, it is incapable of discrimination. While a machine may not engage in overt or intentional discrimination, the focus with AI is whether the software or program has a disparate impact on protected groups.
Disparate impact is most simply defined as an unintentional discriminatory practice: a practice that does not appear discriminatory on its face, but is discriminatory in its effect. To examine whether a facially neutral hiring process has a disparate impact, an employer should analyze whether the process has a substantial impact on employees based on race, color, religion, sex, or national origin. The EEOC suggests employers consider the “four-fifths” rule for determining substantial impact. Under this rule, a substantial impact is indicated when one group’s selection rate is less than 80% (four-fifths) of the selection rate of the most-selected group. The EEOC provides the following example of a substantial impact under the “four-fifths” rule:
Suppose that 80 males and 40 females take a personality test that is scored using an algorithm as part of a job application, and 48 of the male applicants and 12 of the female applicants advance to the next round of the selection process. Based on these results, the selection rate for males is 48/80 (equivalent to 60%), and the selection rate for females is 12/40 (equivalent to 30%). The ratio of the two rates is 30/60, or 50%. Because 50% is less than four-fifths (80%), the selection rate for females is substantially lower than the selection rate for males under the rule.
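The arithmetic in the EEOC’s example can be sketched in a few lines of code. The helper names below (`selection_rate`, `four_fifths_check`) are illustrative, not part of the guidance; this is a minimal sketch of the calculation, not a compliance tool.

```python
def selection_rate(advanced: int, applied: int) -> float:
    """Fraction of applicants in a group who advanced to the next round."""
    return advanced / applied

def four_fifths_check(rates: dict[str, float], threshold: float = 0.8) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% (four-fifths)
    of the highest group's selection rate."""
    highest = max(rates.values())
    return {group: (rate / highest) < threshold for group, rate in rates.items()}

# The EEOC's example: 48 of 80 male and 12 of 40 female applicants advance.
rates = {"male": selection_rate(48, 80), "female": selection_rate(12, 40)}
flags = four_fifths_check(rates)
# rates are 60% (male) and 30% (female); the ratio 30/60 = 50% is below 80%,
# so the female group is flagged: flags == {"male": False, "female": True}
```

As the guidance notes, such a flag is a signal to investigate, not a legal conclusion in itself.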
The “four-fifths” rule is not a bright-line test, but it can be a helpful tool for employers evaluating their AI hiring processes. Note that a hiring process that results in a disparate or substantial impact may still be allowed if the employer can show its use is “job-related and consistent with business necessity.” For example, if the business requires a physical fitness test, it may be acceptable that fewer women than men pass the test if certain physical requirements are a business necessity. However, even where justified by business necessity, the process must still be the least discriminatory option available.
Employers must understand that they can be held responsible for any disparate impact created by AI, even if the tool is owned and/or administered by a third-party vendor. Employers should ask vendors “whether steps have been taken to evaluate [if use] of the tool causes a substantially lower selection rate for individuals with a [protected] characteristic.” However, if a vendor represents that its AI does not result in discriminatory practices and that representation proves incorrect, employers will likely not escape responsibility.
This EEOC guidance shows employers that even with the increased use of technology in the workplace, the EEOC’s position remains the same: employers are not permitted to engage in discriminatory practices during the selection process. This means that AI tools should be used with discretion and should be routinely assessed by employers to ensure ongoing compliance.