The U.S. Equal Employment Opportunity Commission (EEOC) held a public hearing on January 31, 2023, titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The hearing was part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, launched by Chair Charlotte A. Burrows.

The Goal of the Hearing

One of the initiative’s stated goals is to issue technical assistance to guide algorithmic fairness and the use of AI in employment decisions. On May 12, 2022, the EEOC issued its first technical guidance under this initiative, titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” While that guidance focused on the application of the Americans with Disabilities Act (ADA) to artificial intelligence tools, the scope of the testimony at the hearing was significantly broader than this single law.

The purpose of the January 31 hearing was to receive panelists’ testimony concerning automated systems, including artificial intelligence, used by employers in employment decisions. The meeting began with opening statements from Chair Charlotte A. Burrows, Vice Chair Jocelyn Samuels, Commissioner Keith E. Sonderling, and Commissioner Andrea R. Lucas. It then turned to a panel of higher education professors, nonprofit organization representatives, attorneys, and workforce consultants, who delivered prepared statements and took questions from each commissioner.

We’ve compiled the main concerns the panelists addressed, along with key takeaways and highlights from the hearing.

Panelist Concerns 

The panelists shared several concerns and put forward recommendations about the role the EEOC should play in addressing them.

Here are some of the concerns the panelists brought to the commission’s attention:

Critical evaluation of data. Artificial intelligence runs on data. The panelists raised concerns about how the scope and quality of the underlying data can affect which individuals algorithm-based tools select or exclude, and thus the scope and quality of the resulting selections.

Transparency and trust. Multiple panelists raised concerns over how much individuals subjected to artificial intelligence tools are made aware that such tools are being used. There was also doubt about how individuals with disabilities affected by these applications could know whether, when, and how to request an accommodation. The panelists’ shared priority is that the EEOC support a system in which AI applications can be trusted.

Validation and auditing. Panelists suggested the need to audit AI tools for bias, and testimony debated whether audits should be required or merely recommended, and whether they should be independent or self-conducted. Panelists also questioned whether vendors should share liability for the artificial intelligence tools they market commercially.

Applicable or necessary laws. Testimony critiqued applying traditional antidiscrimination analysis to artificial intelligence used as a hiring and screening tool. Current disparate treatment analysis prohibits a decision-maker from considering race when selecting a candidate. However, the panelists suggested that some consideration of race and other protected characteristics should be permitted as a strategy to de-bias automated systems and ensure an artificial intelligence model is fair to all groups. The panelists also addressed the applicability of the Uniform Guidelines on Employee Selection Procedures to automated decision tools and the potential for using analyses other than the “four-fifths rule” to evaluate the potential disparate impact of such tools.
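
To make the audit discussion concrete, here is a minimal sketch of how the four-fifths rule compares selection rates across groups, alongside a two-proportion z-test as one example of an alternative analysis. The group labels and counts are hypothetical, and this is an illustration of the general technique, not an EEOC-endorsed methodology.

```python
from math import sqrt

# Hypothetical outcomes of an automated screening tool:
# group -> (applicants selected, total applicants)
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

# Four-fifths rule: each group's selection rate is compared to the
# highest group's rate; a ratio below 0.8 (four-fifths) is
# conventionally treated as evidence of potential disparate impact.
rates = {g: sel / total for g, (sel, total) in outcomes.items()}
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")

# One alternative analysis: a two-proportion z-test for whether the
# difference in selection rates is statistically significant
# (|z| > 1.96 corresponds to roughly the 5% significance level).
(s1, n1), (s2, n2) = outcomes["group_a"], outcomes["group_b"]
p_pooled = (s1 + s2) / (n1 + n2)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (s1 / n1 - s2 / n2) / se
print(f"two-proportion z statistic: {z:.2f}")
```

With these hypothetical numbers, group_b’s impact ratio falls below the 0.8 threshold, and the z statistic of about 2.61 would also flag the difference as significant; at other sample sizes the two analyses can diverge, which is part of the debate the panelists raised.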

Panelist Recommendations 

Many panelists suggested that the EEOC has a role in evaluating AI applications for bias. One recommendation, raised by Commissioner Sonderling, was an approach like that of the U.S. Department of Agriculture, in which the agency approves artificial intelligence products and certifies their use.

Panelists also urged the EEOC to issue guidance addressing compliance with Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act when employers use artificial intelligence tools. It was also suggested that the EEOC work with other federal regulators to address the use of these tools.

Key Takeaways 

The January 31 hearing took place as the New York City Department of Consumer and Worker Protection continued to develop enforcement rules for the city’s automated employment decision tools law. That law, the first (but surely not the last) of its kind in the United States, imposes a bias audit requirement on automated tools used in employment decisions.

Keep an eye out: the EEOC will be issuing additional guidance for employers and individuals on how equal employment laws apply to artificial intelligence tools. While future EEOC publications may address the role of bias audits in employer decision-making tools, it is unlikely that the EEOC will require such audits.

You can watch the full video of the hearing here.
