
Artificial Intelligence (AI) and automated employment decision tools (AEDT) can be useful to employers in the hiring process. They may be used for myriad tasks, from sifting through employment applications to predicting applicant or employee honesty. Their use, however, comes with a specific legal risk: such tools may create a disparate impact, as we examine below.

Human Biases May Be Coded into AI

AI tools are subject to human limitations. This is because they are coded by humans, and humans are subject to biases both conscious and unconscious (or “implicit”). As a result, a tool may inadvertently prefer candidates from certain geographic locations (which may exclude certain racial groups), favor male candidates over female candidates (if, for example, the tool is using information from successful applicants in a historically male-dominated position), or incorrectly evaluate the facial expressions of neurodiverse individuals. In short, without safeguards, the risk of bias in AI is real.

Intent to Discriminate Is Irrelevant to a Claim That an Employer’s AI Tool Had an Adverse Impact on a Group

Various discrimination laws forbid an employer from using selection procedures that, while facially neutral, nonetheless have the effect of disproportionately excluding or affecting people with a certain protected characteristic (e.g., race, religion, gender, sexual orientation, disability, and national origin), a phenomenon known as “disparate impact.” If a selection tool has an adverse disparate impact on any group, the employer may be liable even absent an intent to discriminate. Griggs v. Duke Power Co., 401 U.S. 424 (1971).

The EEOC Issued Guidance to Help Employers Navigate AI Use

In May of 2023, the Equal Employment Opportunity Commission (EEOC) issued guidance concerning employer use of AI tools. That guidance made one thing clear: in the EEOC’s view, employers may be held responsible for using an AI selection tool that has an adverse or disparate impact against a person in a protected class. This is true even if the tool – and its implicit bias – is designed or administered by another company, such as a software developer.
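For context, the EEOC’s May 2023 guidance points employers to the longstanding “four-fifths rule” of thumb from the Uniform Guidelines on Employee Selection Procedures as one general measure of adverse impact: compare each group’s selection rate, and treat a lower rate below 80% of the higher rate as a possible red flag. A minimal sketch of that arithmetic, using entirely hypothetical numbers, might look like:

```python
# Illustrative only: a minimal sketch of the "four-fifths rule" of thumb
# used to flag possible adverse impact. All numbers below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_flag(rate_a: float, rate_b: float) -> bool:
    """Flag possible adverse impact when the lower group's selection
    rate is less than 80% of the higher group's rate."""
    lower, higher = sorted((rate_a, rate_b))
    return (lower / higher) < 0.8

# Hypothetical AEDT screening results for two applicant groups:
rate_1 = selection_rate(48, 80)  # 0.60
rate_2 = selection_rate(12, 30)  # 0.40

# 0.40 / 0.60 is roughly 0.67, below the 0.8 threshold, so it is flagged.
print(four_fifths_flag(rate_1, rate_2))  # prints True
```

The guidance cautions that the four-fifths rule is only a rule of thumb: a ratio above 80% does not guarantee the absence of adverse impact, and a ratio below it is not conclusive proof of it.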

States Have Introduced Legislation to Address Similar Issues With AI

Legislation addressing the risk of implicit discrimination in AI decision-making has popped up across the country:

- Illinois enacted the Artificial Intelligence Video Interview Act (820 ILCS 42) in 2020.
- Maryland ratified a prohibition against the use of facial recognition services in labor and employment (H.B. 1202) in 2020.
- Washington D.C. introduced the Stop Discrimination by Algorithms Act (B24-0558) in 2021 (died in chamber).
- Texas approved H.B. 2060, establishing an Artificial Intelligence Advisory Council, in June 2023.
- New York City’s Local Law 144 (NYC 144) passed and went into effect July 5, 2023.

As discussed in recent GT thought leadership (see blog post and GT Alert), NYC 144 and its progeny broadly regulate the use of AEDT by employers and employment agencies in New York City. Employers are now required to conduct and publicly report the findings of annual bias audits for any AEDT used to assist with employment decisions. Employers must also disclose the use of AEDTs to all impacted resident NYC job applicants. NYC 144 provides a broad definition of AEDT that may be instructive for employers wondering which technology may fall under the purview of such legislation: in NYC, an AEDT comprises any process that is “derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.” Regulated AEDT include those that automate, support, substantially assist, or replace discretionary decision-making processes of human beings that can materially impact natural persons. The law does not cover commonplace computer systems or applications like junk email filters, firewalls, antivirus software, or compilations of data. Violations of NYC 144 can result in civil penalties of $375 to $1,500 per infraction.

In January 2023, California entered the arena with the introduction of the automated decision tools bill (AB 331). California’s Assembly Member Bauer-Kahan introduced AB 331 to address concerns about discrimination stemming from AI decision-making across a wide range of industries. Influenced by the Blueprint for an AI Bill of Rights, AB 331 prohibits employment discrimination resulting from AI decision-making tools and creates a right of civil action for violations. AB 331 requires California employers to inform any person who may be impacted by a “consequential decision” made by an AEDT of that fact and to allow those persons to opt out of determinative AEDT processes where feasible. Employers subject to AB 331 would be required to perform annual impact assessments for any AEDT used and to explain the purpose of the tool, its benefits, and how it is deployed. Failure to comply with the impact assessment requirement could result in a fine of up to $10,000.

If passed, AB 331 would go into effect in January 2026. The bill remains active in the committee process – held under submission as of May 18, 2023.

Employer Takeaway

These recent trends suggest that employers utilizing AI to make employment-related decisions should take extra care when deciding whether, and which, AI technology to use. Employers may also want to engage with vendors to determine whether they are taking adequate steps to prevent implicit bias and discrimination unrelated to the job or business necessity, keeping in mind that employers may be found liable if the technology results in discriminatory employment decisions. It may be prudent for employers to proactively implement policies to regularly review AEDT technology and AI decision-making, just as they would for human employees.

Timothy Long

Timothy Long, Co-Managing Shareholder of the Sacramento office, has deep experience litigating complex labor and employment issues, having served as lead counsel in multiple class, collective, and representative actions and advising on dozens more. Tim splits his time between GT’s Los Angeles and Sacramento offices, and is Practice Group Leader of the Sacramento office’s Labor & Employment Practice. Tim’s clients have included a variety of financial institutions and entities, health care-related entities, airlines, retailers, high-tech companies, and transportation and logistics companies. Tim also advises private investment funds and their partners in disputes concerning the management of funds, removal of non-performing members, and disputes involving portfolio companies.

Tim has litigated virtually every wage-and-hour issue there is, including exemption, incentive compensation, independent contractor, off-the-clock, meal and rest, pay practice, and PAGA claims. He also has defeated class and collective certification (including at Stage One) in exemption, off-the-clock, and pay practice cases, and has defeated PAGA claims short of trial. Tim has also litigated a wide variety of discrimination, harassment, and retaliation claims, as well as wrongful termination, defamation, Anti-SLAPP, fraud, emotional distress, breach of contract, and other employment-related claims. Tim has both prosecuted and defended employers in trade secret and unfair business practices litigation. He has also resisted competitor efforts to enjoin the lawful practices of his clients.

Tayanah C. Miller

Tayanah (“Tay”) Miller represents corporate clients in a variety of labor and employment matters, including complex wage and hour collective and class actions, traditional labor matters, discrimination and retaliation claims, wrongful termination lawsuits, and common law tort claims. She has defended clients in matters arising under the California Labor Code, the Fair Labor Standards Act, the National Labor Relations Act, the Fair Employment and Housing Act, Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Fair Credit Reporting Act, and various other federal and state laws. She also maintains a robust employment counseling practice.

Tay has experience at all stages of litigation. She has managed and conducted factual investigations and discovery, drafted discovery and dispositive motions, argued motions, and developed trial strategies. She also has mediation, settlement, and trial experience.

During law school, Tay interned at the U.S. Department of Labor, Employment Benefits Security Administration where she helped administer the Voluntary Fiduciary Correction Program.

Kristina Lee˘

Kristina Lee˘ is a 2023 summer associate in Greenberg Traurig’s Sacramento office.

˘ Not a licensed attorney.