Artificial Intelligence (AI) and automated employment decision tools (AEDTs) can be useful to employers in the hiring process. They may be used for myriad tasks, from sifting through employment applications to predicting applicant or employee honesty. Their use, however, carries a specific legal risk: these tools may create a disparate impact, as we examine further below.
Human Biases May Be Coded into AI
AI tools are subject to human limitations because they are built and trained by humans, and humans carry biases both conscious and unconscious (or “implicit”). As a result, a tool may inadvertently prefer candidates from certain geographic locations (which may exclude certain racial groups), favor male candidates over female candidates (if, for example, the tool draws on data from successful applicants to a historically male-dominated position), or misread the facial expressions of neurodiverse individuals. In short, without safeguards, the risk of bias in AI is real.
Intent to Discriminate Is Irrelevant to a Claim That an Employer’s AI Tool Had an Adverse Impact on a Group
Various discrimination laws forbid an employer from using selection procedures that, while facially neutral, have the effect of disproportionately excluding or affecting people with a certain protected characteristic (e.g., race, religion, gender, sexual orientation, disability, or national origin), a phenomenon known as “disparate impact.” If a selection tool has a disparate impact on a protected group, the employer may be liable even absent an intent to discriminate. Griggs v. Duke Power Co., 401 U.S. 424 (1971).
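To make the concept concrete: in practice, disparate impact is often assessed using the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate that is less than 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The short Python sketch below illustrates that arithmetic only; the group labels and counts are hypothetical, and a real bias audit would involve additional legal and statistical analysis.

```python
# Illustrative four-fifths (80%) rule check.
# Hypothetical applicant and hire counts -- not real data, not legal advice.
applicants = {"group_a": 200, "group_b": 150}
hired = {"group_a": 60, "group_b": 30}

# Selection rate = number hired / number of applicants, per group.
rates = {g: hired[g] / applicants[g] for g in applicants}

# Benchmark: the highest selection rate among all groups.
highest = max(rates.values())

# Impact ratio: each group's rate relative to the highest rate.
# Under the four-fifths rule of thumb, a ratio below 0.8 is
# generally treated as evidence of adverse impact.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "within 4/5 benchmark"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} ({flag})")
```

The annual bias audits required under NYC Local Law 144, discussed below, report analogous figures: selection rates and impact ratios across demographic categories.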
The EEOC Issued Guidance to Help Employers Navigate AI Use
In May 2023, the Equal Employment Opportunity Commission (EEOC) issued guidance on employer use of AI tools. That guidance made one thing clear: in the EEOC’s view, employers may be held responsible for using an AI selection tool that has an adverse or disparate impact on members of a protected class. This is true even if the tool, and any bias embedded in it, was designed or is administered by another company, such as a software developer.
States Have Introduced Legislation to Address Similar Issues With AI
Legislation addressing the risk of implicit discrimination in AI decision-making has popped up across the country. Illinois’ Artificial Intelligence Video Interview Act (820 ILCS 42) took effect in 2020. Maryland enacted a prohibition on the use of facial recognition services in labor and employment (H.B. 1202), also in 2020. Washington, D.C. introduced the Stop Discrimination by Algorithms Act (B24-0558) in 2021, though the bill died in chamber. Texas approved H.B. 2060, establishing an Artificial Intelligence Advisory Council, in June 2023. And New York City enacted Local Law 144 (NYC 144), enforcement of which began July 5, 2023.
As discussed in recent GT thought leadership (see blog post and GT Alert), NYC 144 and its progeny broadly regulate the use of AEDTs by employers and employment agencies in New York City. Employers are now required to conduct, and publicly report the findings of, annual bias audits for any AEDT used to assist with employment decisions. Employers must also disclose the use of AEDTs to all affected job applicants who reside in New York City.

NYC 144’s broad definition of AEDT may be instructive for employers wondering which technology may fall under the purview of such legislation: in NYC, an AEDT comprises any process that is “derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.” Regulated AEDTs include those that automate, support, substantially assist, or replace discretionary human decision-making in ways that can materially impact natural persons. The law does not reach commonplace computer systems or applications such as junk email filters, firewalls, antivirus software, or compilations of data. Violations of NYC 144 can result in civil penalties of $375 to $1,500 per infraction.
In January 2023, California entered the arena with the introduction of the automated decision tools bill (AB 331). Assembly Member Bauer-Kahan introduced AB 331 to address concerns about discrimination stemming from AI decision-making across a broad range of industries. Influenced by the Blueprint for an AI Bill of Rights, AB 331 would prohibit employment discrimination resulting from AI decision-making tools and would create a right of civil action for violations. The bill would require California employers to inform any person who may be affected by a “consequential decision” made by an AEDT of that fact and to allow those persons to opt out of determinative AEDT processes where feasible. Employers subject to AB 331 would also be required to perform annual impact assessments for any AEDT used and to explain the tool’s purpose, its benefits, and how it is deployed. Failure to comply with the impact assessment requirement could result in a fine of up to $10,000.
If passed, AB 331 would go into effect in January 2026. The bill remains active in the committee process and has been held under submission since May 18, 2023.
Employer Takeaway
These recent trends suggest that employers using AI to make employment-related decisions should take extra care when deciding whether, and which, AI technology to use. Employers may also want to engage with vendors to determine whether they are taking adequate steps to prevent implicit bias and discrimination unrelated to the job or business necessity, keeping in mind that employers may be found liable if the technology results in discriminatory employment decisions. It may be prudent for employers to proactively implement policies for regularly reviewing AEDT technology and AI decision-making, just as they would review decisions made by human employees.