As the capabilities of artificial intelligence have advanced, the use of Automated Employment Decision Tools (AEDT) has become increasingly prevalent. With this evolution in recruitment technology has come growing concern that implicit bias may be coded into these tools. After all, humans design the systems being implemented and, as countless examples have shown, humans are often unintentionally biased in their employment decision-making.
If unconscious bias makes its way into AEDT, employers who use these tools may expose themselves to the risk of litigation. Despite the technological advancement in talent acquisition, long-standing legal precedent still applies: “disparate impact” remains the yardstick by which employers must measure the results of using AI in employment decisions.
The disparate impact doctrine stems from Griggs v. Duke Power Co., 401 U.S. 424 (1971). Under that doctrine, employers may not use selection tools that disproportionately exclude or adversely affect people with certain protected characteristics (e.g., race, religion, gender, sexual orientation, disability, and national origin).
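Disparate impact is often assessed quantitatively. One common screening heuristic is the EEOC’s “four-fifths rule”: if a protected group’s selection rate is less than 80% of the most-favored group’s rate, the tool is commonly treated as showing evidence of adverse impact. The sketch below uses hypothetical applicant numbers to illustrate the arithmetic; it is an illustration of the calculation, not legal advice or a substitute for a formal bias audit.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate.

    Under the EEOC's four-fifths guideline, a ratio below 0.8 is
    commonly treated as evidence of adverse (disparate) impact.
    """
    return group_rate / highest_rate

# Hypothetical screening results from an AEDT (invented numbers):
rate_a = selection_rate(48, 100)  # group A: 48 of 100 selected -> 0.48
rate_b = selection_rate(30, 100)  # group B: 30 of 100 selected -> 0.30

ratio = impact_ratio(rate_b, max(rate_a, rate_b))
print(f"impact ratio: {ratio:.3f}")  # 0.30 / 0.48 = 0.625, below the 0.8 threshold
```

In this hypothetical, group B’s impact ratio of 0.625 falls below the four-fifths threshold, which is the kind of result a bias audit is designed to surface before the tool creates liability.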
Because these emerging technologies can increasingly identify specific traits in individuals, they must be applied carefully. As the technology continues to evolve, employers will need to be more mindful of how these advancements may produce new, unforeseen disparate impact, for example on applicants with disabilities. Because mental health conditions are protected under the ADA, assessment tools designed to weed out impulsivity or moodiness could easily have a disparate impact on individuals with ADHD or bipolar disorder.
Even a candidate’s physical mannerisms during video interviews could be similarly problematic if AI is used and candidates are ruled out based on facial tics or involuntary fidgeting. AI tools lack the discretion to offer disability accommodations in their selection criteria, so if they disproportionately exclude applicants from consideration, they may create liability.
However, simply ensuring that no disparate impact exists may not be enough in light of recent, and likely impending, regulation. Some of these regulations impose obligations that go beyond the disparate impact standard; depending on the jurisdiction, penalties may be incurred even if an AEDT has no disparate impact at all. For instance, New York City Local Law 144, which went into effect on July 5, 2023, sets out the following requirements for employers who use AEDT:
“(1) confirm that a bias audit has been conducted; (2) provide at least 10 days’ notice to the applicant or employee that software utilizing AEDT is being or will be used; (3) explain the qualifications the AEDT will use during the assessment; (4) provide the data source and type of AEDT being used, and the employer’s data retention policy (if not disclosed elsewhere); and (5) inform the applicant or employee that they may request an alternative means by which to be assessed (or a “reasonable accommodation” under other laws).”
In this context, AEDT is defined as “any process that is derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.”
Other states are also beginning to regulate the use of AI in employment. Illinois enacted the Artificial Intelligence Video Interview Act (820 ILCS 42) in 2020. This law requires employers to do the following:
Provide notice: Employers must inform applicants that AI will be used to analyze their interview videos. (A provision requiring written notice was removed from the bill.)
Provide an explanation: Employers must explain to the applicant how their artificial intelligence program works and what characteristics the AI uses to evaluate an applicant’s fitness for the position.
Obtain consent: Employers must obtain the applicant’s consent to be evaluated by AI before the video interview and may not use AI to evaluate a video interview without consent.
Maintain confidentiality: Employers will be permitted to share the videos only with persons whose expertise or technology is needed to evaluate the applicant.
Destroy copies: Employers must destroy both the video and all copies within 30 days after an applicant requests such destruction (and instruct any other persons who have copies of the video to destroy their copies as well).
Furthermore, in 2020, Maryland enacted H.B. 1202, which prohibits employers from using facial recognition services during job interviews without the applicant’s consent.
As regulations proliferate, employers may find it worthwhile to consult employment lawyers and HR consultants to determine whether using AEDT actually provides a net benefit. Newly required audits, notification standards, and the inherent risk of using these tools could outweigh the cost savings and other benefits of AEDT in hiring.
Contact the experienced employment lawyers at Wilt Toikka Kraft LLP if you suspect you may have been the victim of AI employment discrimination.