
In the 21st century, technological advancement is all around us, and a big part of it is Artificial Intelligence (AI).

AI works by feeding vast amounts of data into algorithms that analyze and sort it, allowing the system to “learn” from the patterns it identifies. AI is fast and relatively easy to use, making it a go-to tool for many industries.

AI has become increasingly prevalent in the recruitment and hiring process. Companies use AI-powered systems to analyze resumes, conduct candidate assessments, and even make hiring decisions. However, these AI systems are not immune to the biases that exist in the data they use to learn and make predictions. This can result in unintended discrimination against certain groups of people during the hiring process.

How Machine Learning Works

Bias in AI can be introduced in many places. For example, if the training data contains biases, then the AI system will learn and reflect these biases in its predictions. In the hiring process, this can mean that certain candidates are unfairly disadvantaged due to their race, gender, age, or other factors that are not related to their qualifications or abilities. 

For example, if a company’s training data contains a higher proportion of resumes from male candidates than female candidates, then an AI system designed to analyze resumes could “discriminate” against female applicants by assuming that male candidates are more likely to be qualified for the job.
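
To make this concrete, here is a deliberately toy Python sketch of how that can happen. Everything in it is invented for illustration – the resumes, the hiring labels, and the word-counting scorer – and no real vendor works this crudely, but the mechanism is the same: the model rewards whatever words co-occur with past hires, including gender-coded ones.

```python
from collections import Counter

# Historical outcomes: mostly male hires. All data invented for illustration.
past_resumes = [
    ("python sql men's rugby team captain", True),         # hired
    ("java aws men's chess club", True),                   # hired
    ("python statistics women's coding society", False),   # rejected
    ("sql aws women's robotics club", False),              # rejected
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in past_resumes:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Naive score: hired-word hits minus rejected-word hits."""
    words = resume.split()
    return sum(hired_words[w] for w in words) - sum(rejected_words[w] for w in words)

# Two candidates identical except for one gender-coded word.
print(score("python sql men's soccer club"))    # prints 2
print(score("python sql women's soccer club"))  # prints -2
```

Because “men’s” appears only in the hired resumes and “women’s” only in the rejected ones, two otherwise identical candidates receive different scores based on a single gender-coded word rather than their qualifications.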

AI systems are trained on historical data and past patterns to make predictions about future outcomes. This means that if past hiring practices have been biased, then the AI system will continue to perpetuate these biases in the future.

Many tech vendors that sell AI hiring systems to companies are tackling hiring bias head-on. By curating the data sets their machines are trained on, they can minimize or even eliminate bias in the finished system, because a model that is never taught bias cannot reproduce it. Focusing on the training data helps level the playing field.
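
As a rough illustration of what “focusing on the training data” can mean in practice, here is a hypothetical Python sketch – the token list, the data records, and the approach are all assumptions, not any vendor’s actual pipeline – that scrubs gender-coded tokens and balances groups before a model is trained:

```python
import random

# Illustrative (not exhaustive) list of tokens to scrub before training.
GENDER_CODED = {"men's", "women's", "fraternity", "sorority"}

def scrub(text: str) -> str:
    """Drop gender-coded tokens so the model cannot learn them as proxies."""
    return " ".join(w for w in text.split() if w not in GENDER_CODED)

def balance(examples, group_of):
    """Downsample every group to the smallest group's size so that no
    single group's history dominates the training data."""
    groups = {}
    for ex in examples:
        groups.setdefault(group_of(ex), []).append(ex)
    n = min(len(members) for members in groups.values())
    return [ex for members in groups.values() for ex in random.sample(members, n)]

# Usage with invented (resume_text, hired, group) records:
records = [("python sql men's rugby team captain", True, "M"),
           ("python statistics women's coding society", False, "F")]
cleaned = [(scrub(text), hired, group) for text, hired, group in records]
training = balance(cleaned, group_of=lambda ex: ex[2])
```

Real-world mitigation is far more involved, but the principle is the same: a model cannot learn a proxy it never sees, and no group’s history dominates a balanced training set.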

It’s important for business leaders to engage proactively in these discussions with their AI vendors. Learning “how they use their data, how they train their model, and how they validate not having adverse impact” is crucial to finding a vendor aligned with your business’s commitment to fair hiring practices.

The Benefits and Drawbacks of AI for Hiring

The benefits of using AI during the hiring process are obvious: smart tech can streamline the process by pre-screening resumes and applications, so that by the time the talent pool reaches the hiring team, it is already a qualified group to choose from. From there, companies can use AI to further focus their search efforts.

Attorney Michelle Duncan says, “There are AI-related tools which pre-screen resumes and applications, evaluate video-recorded interviews, create work simulations and chatbots, and mine social media to determine applicants’ digital footprint.” She also warns of legal landmines for companies relying on AI:

“Enforcement agencies will look at overall applicant-to-hire for adverse impact and then drill down into a company’s selection procedures. That means, they will ask about various tools you’re using and the impact those tools have. If it is determined that the tools do result in adverse impact, the burden is on the employer to properly validate that tool, meaning proving that the tool is job-related and required by business necessity.”
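
One common screen for adverse impact is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if one group’s selection rate falls below 80% of the highest group’s rate, the selection tool warrants scrutiny. Here is a minimal Python sketch of that arithmetic, using invented applicant counts:

```python
def selection_rate(hired: int, applicants: int) -> float:
    return hired / applicants

# Counts are invented for illustration only.
men_rate = selection_rate(hired=60, applicants=200)    # 0.30
women_rate = selection_rate(hired=30, applicants=200)  # 0.15

# Four-fifths rule: compare each group's rate to the highest group's rate.
impact_ratio = women_rate / max(men_rate, women_rate)
print(f"impact ratio: {impact_ratio:.2f}")  # 0.50 -- below 0.8, flags the tool for review
```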

Some states, like Illinois, have laws on the books about AI in hiring; meanwhile, there are no firm federal laws to guide companies in their use of AI. However, in the past year, the White House has released soft “guidelines” for best practices when using AI.

Loose Federal Oversight

This “Blueprint for an AI Bill of Rights” is nonbinding, but it lays the groundwork for firmer legislation in the future.

The blueprint includes five points intended to guide the creation and deployment of AI in pursuit of protecting the public from machine-entrenched bias. They are as follows:

  • Protecting people from unsafe or ineffective automated systems;
  • Preventing discrimination by algorithms;
  • Safeguarding people from abusive data practices and giving them control over how their data is used;
  • Informing people that an automated system is being used;
  • Letting users opt out of automated systems.

These loose guiding principles are meant to prevent overreach by machine learning systems; on the other hand, some tech leaders argue that even these guidelines could slow American business practices and leave companies at a competitive disadvantage.

New York City

One of Bill de Blasio’s final acts as mayor of New York City was to enact legislation requiring companies hiring in New York to audit the AI programs used in their hiring practices and ensure those programs are free of racial and gender bias.

The systems that fall under the requirements of this new law are defined broadly as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence that scores, classifies, or otherwise makes a recommendation regarding candidates and is used to assist or replace an employer’s decision-making process.”

To enforce the law, employers that fail to comply face fines of up to $1,500 for each subsequent violation, with each day of noncompliance counting as a separate violation.

The law also requires employers to inform applicants and employees that AI systems are being used to review their material, reflecting the growing concern for transparency and fairness in the hiring process. 

While this law is in line with federal guidance, some wish the legislation were stricter.

Where We Go Next

Between the federal guidance and some state and local laws taking effect, AI and machine learning are slowly shifting from Wild West territory to monitored tools. This is all the more encouraging given the growing popularity of this new technology among businesses looking to streamline their hiring and onboarding processes.

For now, it is crucial to remember that machines are not impartial, neutral tools: they are designed by humans and can harm as well as help.

Ensuring that your systems are fair and free from bias is a good way to stay on the right side of the law while also protecting your time and resources.

This blog was inspired by resources from SHRM and The Wall Street Journal.