AI Background Checks

When hiring tools driven by artificial intelligence were first introduced, they seemed like a substantial productivity gain for employers. AI-based tools promised to let hiring managers sort through candidates more quickly, identify the most qualified applicants, and spend more time with top prospects instead of reading the resumes of unsuitable candidates. Or, at least, those were the benefits AI hiring tools were supposed to bring to the table. There was even discussion about whether AI might finally be the way to remove bias from hiring.

Now, though, a significant backlash is brewing against this once-promising HR technology. Increasingly, AI-based hiring tools are criticized for their implicit bias and lack of nuance. New York City recently passed a bill aimed at curbing the discrimination these technologies can introduce into the hiring process. The question is, can these tools be fixed so that they deliver their touted benefits? Or will employers have to go back to reviewing resumes on a manual, one-by-one basis?

Critics of AI-driven hiring tools have identified a variety of problems with them. For instance, employers often set rigid guidelines for resume-sorting tools, such as a cap on how long an employment gap a candidate can have and still be considered seriously. Some employers view significant gaps on a resume as a red flag, indicating that someone can’t hold down a job or isn’t very motivated, among other conclusions. However, a system that automatically disqualifies any candidate with an employment gap longer than, say, six months is flawed. These cut-and-dried criteria leave no room for nuance. While an employment gap could indicate unfavorable qualities in a candidate, it might also exist because the candidate was recovering from an injury or illness, grieving the loss of a spouse or family member, going back to school, caring for an infant or raising a child, or navigating pandemic-era unemployment.
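To make the problem concrete, here is a minimal sketch of the kind of hard-cutoff rule described above, contrasted with a version that flags long gaps for human review instead of rejecting candidates outright. The names and the six-month threshold are illustrative assumptions, not any vendor’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    longest_gap_months: int  # longest employment gap, in months

# Hypothetical hard-cutoff rule: any gap over six months is an automatic rejection.
def screen_hard_cutoff(candidate: Candidate, max_gap_months: int = 6) -> str:
    if candidate.longest_gap_months > max_gap_months:
        return "reject"  # no human ever sees this resume
    return "advance"

# A more nuanced alternative: route borderline cases to a person
# instead of discarding them automatically.
def screen_with_review(candidate: Candidate, max_gap_months: int = 6) -> str:
    if candidate.longest_gap_months > max_gap_months:
        return "flag_for_human_review"  # the gap may have a benign explanation
    return "advance"

if __name__ == "__main__":
    applicant = Candidate(name="A. Jones", longest_gap_months=9)
    print(screen_hard_cutoff(applicant))   # reject
    print(screen_with_review(applicant))   # flag_for_human_review
```

The only difference between the two rules is the terminal action: the second version keeps the efficiency gain of automated sorting while preserving a human in the loop for exactly the cases where context matters.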

The New York City bill – which has been passed by the New York City Council (NYCC) but not yet signed by the mayor – would require “that a bias audit be conducted on an automated employment decision tool before using said tool.” In addition, employers would have to notify candidates and employees before using AI-based tools “in the assessment or evaluation for hire or promotion.” That notification would need to include details “about the job qualifications and characteristics that will be used by the automated employment decision tool.”

The NYCC reportedly intends the bill first and foremost to address the racial and gender biases that an AI hiring tool can introduce. For instance, an automated hiring system programmed to reject candidates with lengthy employment gaps could be interpreted as having a gender bias, since women are more likely to have long gaps in their work history due to child-rearing responsibilities.

If the mayor signs the bill, it will go into effect on January 1, 2023. At that point, the city will be able to fine employers up to $1,500 per violation. In other words, while the bill would not automatically solve the problems with AI hiring tools, it would give employers a stronger incentive to solve them.

For the most part, the flaws with AI-based employment tools are not with the technology itself but with its application. Similarly, background checks can be used in a way that creates bias or discrimination in hiring. The Equal Employment Opportunity Commission (EEOC) advises that employers consider arrests or convictions on a case-by-case basis rather than taking the blanket approach of denying employment to anyone with a criminal record. Such a policy is discriminatory, given that minority groups are statistically more likely to have criminal records and are thus disproportionately impacted by zero-tolerance policies toward criminal history. 
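As a rough illustration of that distinction, the sketch below contrasts a blanket policy with a case-by-case approach that gathers the kind of context the EEOC’s guidance points to (the nature of the offense, how much time has passed, and whether it relates to the job) and routes those candidates to a human reviewer. The data model and factor list are simplified assumptions for illustration, not a complete statement of the EEOC’s criteria.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConvictionRecord:
    offense: str       # nature of the offense
    years_ago: int     # time elapsed since the offense
    job_related: bool  # whether the offense bears on the job's duties

# Blanket approach the EEOC warns against: any record is an automatic rejection.
def blanket_policy(record: Optional[ConvictionRecord]) -> str:
    return "reject" if record else "advance"

# Case-by-case approach: assemble the relevant context and let a person decide.
def individualized_assessment(record: Optional[ConvictionRecord]) -> str:
    if record is None:
        return "advance"
    context = (
        f"offense={record.offense}, years_ago={record.years_ago}, "
        f"job_related={record.job_related}"
    )
    # The system never auto-rejects; a reviewer weighs these factors.
    return f"human_review ({context})"

if __name__ == "__main__":
    record = ConvictionRecord(offense="shoplifting", years_ago=12, job_related=False)
    print(blanket_policy(record))             # reject
    print(individualized_assessment(record))  # human_review (...)
```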

Just as employers must be thoughtful and nuanced in making hiring decisions based on background checks, they must be equally deliberate in designing the criteria their AI tools apply.
