AI Hiring Algorithms Present Big Questions About Accountability and Liability

As artificial intelligence (AI) becomes an increasingly prevalent human resources tool, the algorithms powering those hiring and staffing decisions have come under increased scrutiny for their potential to perpetuate bias and discrimination.

Are There Any Federal Laws or Regulations Governing the Use of AI in Hiring?

Under Title VII of the Civil Rights Act of 1964, the United States Equal Employment Opportunity Commission (“EEOC”) is responsible for enforcing federal laws that make it illegal to discriminate against job applicants or employees because of their membership in a protected class.  For decades, attorneys have relied on the Employment Tests and Selection Procedures guidance jointly issued by the Civil Service Commission, Department of Justice, Department of Labor, and EEOC.  See generally 28 CFR § 50.14; see also Fact Sheet on Employment Tests and Selection Procedures, EEOC.  Nevertheless, the Employment Tests and Selection Procedures in their current form fail to provide any guidance on the use of AI tools in the hiring process.

That isn’t to say federal regulators and legislators aren’t keen on regulating this area.  On December 8, 2020, ten United States Senators sent a joint letter to the EEOC regarding the EEOC’s authority to investigate bias in AI-driven hiring technologies.  In relevant part, the letter poses three questions:

  1. Can the EEOC request access to “hiring assessment tools, algorithms, and applicant data” from employers or hiring assessment vendors and conduct tests to determine whether the assessment tools may produce disparate impacts?
  2. If the EEOC were to conduct such a study, could it publish its findings in a public report?
  3. What additional authority and resources would the EEOC need to proactively study and investigate these AI hiring assessment technologies?  Id.

As of this writing, the EEOC has yet to respond to the letter.  Nevertheless, given the questions above, the current political climate, and the lack of guidance from the EEOC, we anticipate future guidance, regulation, and potential enforcement actions in this area.

How Are States Handling AI Hiring Bias? 

Illinois was the first state to legislate on the use of AI in hiring.  On August 9, 2019, Illinois enacted the Artificial Intelligence Video Interview Act (“AIVIA”), imposing strict limitations on employers who use AI to analyze candidate video interviews.  See 820 ILCS 42 et seq.  Under AIVIA, employers must:

  1. Notify applicants that AI will be utilized during their video interviews;
  2. Obtain consent to use AI in each candidate’s evaluation;
  3. Explain to the candidates how the AI works and what characteristics the AI will track with regard to their fitness for the position;
  4. Limit sharing of the video interview to those who have the requisite expertise to evaluate the candidate; and
  5. Comply with a candidate’s request to destroy his or her video within 30 days.  Id.

Illinois was quickly followed by Maryland, which on May 11, 2020, enacted legislation prohibiting an employer from using certain facial recognition services during a candidate’s interview for employment unless the candidate expressly consents.  See Md. Labor and Employment Code Ann. § 3-717.  The Maryland law specifically requires the candidate to consent to the use of certain facial recognition service technologies during an interview by signing a waiver which contains:

  1. The candidate’s name;
  2. The date of the interview;
  3. A statement that the candidate consents to the use of facial recognition during the interview; and
  4. A statement that the candidate has read the waiver.  Id.

As with AIVIA, the Maryland law is too new to provide much insight into how it will be interpreted or enforced.

There are a number of other jurisdictions with bills in various stages of progress.  On February 20, 2020, a bill was introduced into the California legislature that would limit the liability of an employer or a purveyor of AI-assisted employment decision making software under certain circumstances.  See 2019 Bill Text CA S.B. 1241.  The California bill “would create a presumption that an employer’s decision relating to hiring or promotion based on a test or other selection procedure is not discriminatory, if the test or procedure meets specified criteria, including, among other things, that it is job related and meets a business necessity” and “that the test or procedure utilizes pretested assessment technology that, upon use, resulted in an increase in the hiring or promotion of a protected class compared to prior workforce composition.”  Id.  The bill would also require the employer to keep records of the testing or procedure and submit them for review to the California Department of Fair Employment and Housing, upon request, in order to qualify for the presumption and limit its liability.  Id.

Not to be outdone, a bill was introduced into the New York City Council on February 27, 2020, with the purpose of regulating the sale of automated employment decision making tools.  See Int. No. 1894.  The New York City Council bill broadly defines an automated employment decision making tool as “any system whose function is governed by statistical theory, or systems whose parameters are defined by such systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filters candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.”  Id.  The bill seeks to prohibit the sale of automated employment decision making tools if they were not the subject of an audit for bias in the year prior to sale, were not sold with a yearly bias audit service at no additional cost, and were not accompanied by a notice that the tool is subject to the provisions of the New York City Council’s bill.  Id.  The bill would also require any person who uses automated employment assessment tools for hiring and other employment purposes to disclose to candidates, within 30 days, when such tools were used to assess their candidacy for employment, and the job qualifications or characteristics for which the tool was used to screen.  Id.  Finally, the bill is not without bite, as violators are subject to “a civil penalty of not more than $500 for that person’s first violation and each additional violation occurring on the same day as the first violation, and not less than $500 nor more than $1,500 for each subsequent violation.”  Id.

What Can My Business Do Now to Prepare for Potential Liability Related to the Use of AI in Hiring?

As the political and legal landscape continues to shift, one of the best things your business can do is stay abreast of current and pending legislation.  Your business could also audit both internal and external use of AI in hiring to validate and confirm the absence of bias in the system; however, testing external systems may require your vendors to open their proprietary technology and information to their customers, something that most are hesitant to do.  Finally, your business should consider conducting a thorough review of any and all indemnification provisions in its vendor agreements to see how risk might be allocated between the parties.

Beckage is a law firm focused on technology, data security, and privacy. Beckage has an experienced team of attorneys and technologists who can advise your business on the best practices for limiting its liability related to the use of AI in hiring.

*Attorney Advertising. Prior results do not guarantee future outcomes.
