Employment and Labor

Using AI in Human Resources and the Risks of Employment Discrimination

Artificial intelligence (AI) is taking the world by storm and revolutionizing how people work. While the possibilities for automation and increased efficiency appear endless as this technology rapidly progresses, employers should be aware of potential legal issues surrounding the use of AI in the hiring process. The National Artificial Intelligence Initiative Act of 2020 defined AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1] This definition provides a functional understanding of AI, but it does not capture AI systems’ fundamental dependence on gargantuan data sets. Modern AI systems rely on massive pools of data and on machine learning software that distills that data into coherent, digestible information for human decision makers.

A significant risk with AI systems, and one that the Equal Employment Opportunity Commission (EEOC) has targeted recently, is that biased data fed into a system will result in a biased output.[2] Bias, in this context, reflects “unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors.”[3]

AI Is Not Immune to Federal Antidiscrimination Laws

While AI developments possess near-unimaginable potential, the laws and agencies regulating AI-based discrimination are well established. Existing federal antidiscrimination laws, well known to the employment world, will directly confront AI bias. Title VII of the Civil Rights Act of 1964 is the bedrock of antidiscrimination law and has been expanded by statutes such as the Age Discrimination in Employment Act and the Americans with Disabilities Act, along with its amendments. Title VII also serves as a model for parallel state laws, including the Florida Civil Rights Act. Enforcement of these antidiscrimination laws in employment falls to the EEOC, which investigates discrimination complaints and provides guidance to help employers avoid running afoul of the law.

The EEOC recognizes the immense benefits that AI systems can bring to the hiring process, but this does not detract from the equally immense harms that can develop if these systems are left unchecked.[4] Of primary concern for the EEOC is the potential for discrimination stemming from the use of AI systems. If the data an AI system relies on is biased, the whole system is tainted with the potential for bias. Such tainted systems can easily produce disparate impacts that will face EEOC scrutiny as the agency’s attention shifts to the real harms caused by biased AI systems.

The EEOC has already announced that the four-fifths test will apply to AI systems during its investigations.[5] This is not surprising, as the Chairs of both the EEOC and the Federal Trade Commission have reiterated that technological sophistication is not an exemption from antidiscrimination laws.[6] Above all else, the EEOC is concerned with outcomes, regardless of whether AI tools are used. Disparate impacts will always be a red flag, and AI systems that do not account for biased input data inherently carry the potential to propagate, if not exacerbate, that bias, and with it the potential for a disparate impact.
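The four-fifths test itself is simple arithmetic: compare each group’s selection rate against the highest group’s rate, and treat a ratio below 80% as preliminary evidence of adverse impact. The sketch below illustrates the calculation only; the function name and the applicant figures are hypothetical, and a ratio below 0.8 is a screening signal, not a legal conclusion.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Illustrative four-fifths (80%) rule check.

    Computes each group's selection rate and compares the lower
    rate against the higher one. A ratio below 0.8 is commonly
    treated as preliminary evidence of adverse impact.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    higher, lower = max(rate_a, rate_b), min(rate_a, rate_b)
    ratio = lower / higher
    return ratio, ratio < 0.8  # True flags potential adverse impact

# Hypothetical example: an AI screening tool advances 60 of 100
# applicants from group A but only 30 of 100 from group B.
ratio, flagged = four_fifths_check(60, 100, 30, 100)
# ratio = 0.5 (well below 0.8), so the tool's outcome is flagged
```

Because the test looks only at outcomes, it applies identically whether the selection decisions came from a human reviewer or an automated tool.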

To fully understand the ramifications of the EEOC’s position on AI bias, the employer’s perspective must be considered. AI tools have the potential to streamline the hiring process while allowing review of a far larger pool of applicants than a human-driven system could ever achieve. As always, a business will be well served by following the guidelines published by the EEOC, but until official guidelines addressing the use of AI tools are published, businesses are left relying on the EEOC’s more informal recommendations. Currently, the EEOC provides numerous resources to assist businesses in properly utilizing AI tools.[7] These resources offer useful insight into the EEOC’s approach to AI regulation, including confirmation that algorithmic decision-making tools can fall within the guidelines’ definition of “selection procedure,” and that an employer may be responsible for the use of these tools even if they are designed or administered by another entity.

Applying New Regulations and Guidelines

State and local governments have also increased their focus on the use of AI tools, particularly in the realm of human resources. New York City passed Local Law 144 of 2021, which prohibits employers from using an “automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.”[8] A proposed bill in California, AB 331, provides for very similar regulation of automated employment decision tools.[9] Other states, such as Florida, have not implemented any new regulations targeting the use of AI or other automated tools in the employment process. In such a rapidly developing legal landscape, businesses will be well served by keeping a close eye on legal developments at the federal, state, and local levels.

The private sector has also recognized the need for heightened control over AI systems, and the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework (AI RMF).[10] While voluntary, the AI RMF seeks to provide strong industry standards for the “design, development, use, and evaluation of AI products, services, and systems.” This initiative reflects the desire of private entities to utilize AI tools in a risk-conscious manner. Civil rights and technology policy organizations have likewise proposed a host of best practices for the responsible use of AI tools,[11] including “pre- and post-deployment audits, short-form disclosures, procedures for requesting accommodations or opting out, record keeping, transparency and notice, and systems for oversight and accountability.”[12] Businesses should also avoid “certain selection procedures that create an especially high risk of discrimination. These include selection procedures that rely on analyzing candidates’ facial features or movements, body language, emotional state, affect, personality, tone of voice, [and] pace of speech.”[13]

AI is revolutionary, and the laws and regulations that protect against discrimination will remain evolutionary as lawmakers grapple with addressing AI. As the EEOC’s approach demonstrates, gradual development and refinement appear to be the hallmarks of regulating current AI systems. EEOC Chair Burrows recently remarked that “there’s no exception under the civil rights laws for high-tech discrimination.”[14] Clearly, the current focus is on the effects of these tools in the employment realm, not necessarily the tools themselves.

This article was co-written by Nicholas Sellas, a summer associate with RumbergerKirk who is a Juris Doctor student at Florida State University College of Law.

[7] See

[11] Matt Scherer & Ridhi Shetty, Civil Rights Standards for 21st Century Employment Selection Procedures, Ctr. for Democracy & Tech. (Dec. 5, 2022),

[13] Supra note 8.