On October 28, 2021, the U.S. Equal Employment Opportunity Commission (“EEOC”) launched the Initiative on Artificial Intelligence and Algorithmic Fairness (the “Initiative”) to ensure that artificial intelligence (“AI”) and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws. The Initiative’s goal is to examine more closely how technology is fundamentally changing the way employment decisions are made. It further aims to guide applicants, employees, employers, and technology vendors in ensuring that these technologies are used fairly, consistent with federal equal employment opportunity laws.
The EEOC issued its first guidance under the Initiative on May 12, 2022. The guidance provides practical tips to employers on how to comply with the Americans with Disabilities Act (“ADA”), as well as to job applicants and employees who believe their rights may have been violated. In addition to the technical assistance document, the EEOC released a summary document providing “Tips for Job Applicants and Employees.” Notably, the EEOC guidance focuses on three primary areas of potential violation under the ADA:
- Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;
- Without proper safeguards, workers with disabilities may be “screened out” from consideration for a job or promotion regardless of whether they can do the job with or without a reasonable accommodation; and
- If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.
Significantly, the guidance states that the employer in most cases is responsible under the ADA for its use of algorithmic decision-making tools, even if the tools are designed or administered by another entity such as a software vendor. In order to reduce the chances that an algorithmic decision-making tool will screen out an individual because of a disability, the EEOC recommends fact-intensive discussions with software vendors. Notable inquiries include but are not limited to:
- If the tool requires applicants or employees to engage with a user interface, did the vendor make the interface accessible to as many individuals with disabilities as possible?
- Are the materials presented to job applicants or employees in alternative formats? If so, which formats? Are there any kinds of disabilities for which the vendor will not be able to provide accessible formats, in which case the employer may have to provide them (absent undue hardship)?
- Did the vendor attempt to determine whether use of the algorithm disadvantages individuals with disabilities? For example, did the vendor determine whether any of the traits or characteristics that are measured by the tool are correlated with certain disabilities?
Even if an employer is using an outside entity to develop an algorithmic decision-making tool, the employer may be able to take additional steps during implementation and deployment to reduce the chances that the tool will screen out someone because of a disability, either intentionally or unintentionally. The EEOC emphasized proper training of employees in conjunction with:
- Clearly indicating that reasonable accommodations, including alternative formats and alternative tests, are available to people with disabilities;
- Providing clear instructions for requesting reasonable accommodations; and
- In advance of the assessment, providing all job applicants and employees who are undergoing assessment by the algorithmic decision-making tool with as much information about the tool as possible, including information about which traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics are to be measured, and the disabilities, if any, that might potentially lower the assessment results or cause screen out.
Finally, we note that the EEOC released this guidance one week after initiating litigation against iTutorGroup, Inc. and its affiliates in the United States District Court for the Eastern District of New York, regarding the use of AI technology in screening job applicants. The pending lawsuit, filed on May 5, 2022, alleges that Defendants solicited the birth dates of applicants and programmed their application software to automatically reject female applicants over the age of 55 and male applicants over the age of 60, in violation of the Age Discrimination in Employment Act of 1967 (the “ADEA”). In a press release, EEOC Chair Charlotte A. Burrows stated, “This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative. Workers facing discrimination from an employer’s use of technology can count on the EEOC to seek remedies.” We anticipate that the EEOC will continue to expand its enforcement efforts in this space as employers increasingly rely on AI technology in hiring and other employment decisions.
If you have any questions regarding the EEOC initiative or guidance, please contact your labor and employment counsel at Smith, Gambrell & Russell, LLP.