What to do when your hiring data is biased

There’s an adage in HR that people hire in their own image. Consider the hundred or so cognitive biases documented in the human brain, and it becomes clear how easily hiring decisions can be tainted by prejudice.

This is the premise behind machine learning in recruitment: let algorithms parse and “read” candidate data with as little human intervention as possible.

That’s not the same as saying machine learning is free of bias, however.

Where bias occurs
Critics have warned that human bias can shape the way algorithms are written. But for Michael Martin, senior executive at IBM Canada, HR’s bigger problem usually isn’t the programming; it’s the training data.

The training data is a subset of an organization’s overall dataset; it is used specifically to “teach” the machine how to classify data.
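
In code, carving out that subset is routine. Here is a minimal sketch using pandas and scikit-learn, where the file name and the “hired” label column are illustrative assumptions rather than details from IBM:

```python
# Minimal sketch: splitting historical hiring records into a training set
# and a held-out test set. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

candidates = pd.read_csv("historical_candidates.csv")  # hypothetical file

# Features the model learns from, and the outcome it learns to predict.
X = candidates.drop(columns=["hired"])
y = candidates["hired"]

# 80% of the records "teach" the model; 20% are held out for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```

Crucially, any skew in the historical records carries straight into X_train: the split changes the sample size, not the distribution.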

When HR data scientists train models on historical data that is skewed in favor of, or against, a particular group of candidates, the models are likely to replicate that bias.

“If I load data, say, for a job position, and my training data is focused on 40- to 50-year-old people, then it could eliminate people that might be around 30 years old,” Martin cited as an example.

“We want all qualified candidates to be considered equally,” he said. “The training data might not have the right attributes; it then trains the AI to think that the only candidate is of a certain age bracket.”
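
A skew like the one Martin describes can be surfaced before any model is trained, simply by inspecting the age distribution of the training set. A sketch, assuming a pandas DataFrame with a hypothetical “age” column:

```python
# Sketch: bucket candidate ages and count each bracket. The file name
# and the "age" column are assumptions for illustration.
import pandas as pd

train = pd.read_csv("training_data.csv")  # hypothetical file

brackets = pd.cut(train["age"], bins=[20, 30, 40, 50, 60, 70])
print(brackets.value_counts().sort_index())

# If nearly every example falls in the (40, 50] bracket, the model has
# little evidence that a 30-year-old can be a strong hire.
```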

Hiring in the IT sector, for instance, has typically been dominated by white male candidates. “If you make a hiring decision based on race or sex, then you could be missing out on some great talent just because that’s the way it’s always been done,” said Martin.

“The first thing we do is we look at the company’s hiring data and analyze it, and look for trends and patterns and irregularities that might be indicators of a potential bias.”

“You might be closing your mind or closing off an opportunity to a very good candidate just because of one characteristic you’re perceiving to be bad,” he said.
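
One conventional way to quantify such patterns (a standard adverse-impact test, not necessarily the method IBM applies) is the “four-fifths rule”: compare each group’s selection rate to that of the best-treated group, and flag ratios below 0.8. A sketch with hypothetical column names:

```python
# Sketch of a four-fifths (80%) rule check. "group" and the binary
# "hired" column are assumed names, not from the article.
import pandas as pd

hires = pd.read_csv("hiring_outcomes.csv")  # hypothetical file

# Selection rate per group: the share of that group's applicants hired.
rates = hires.groupby("group")["hired"].mean()

# Ratio of each group's rate to the highest group's rate.
impact_ratios = rates / rates.max()

# Groups below 0.8 are conventional red flags for adverse impact.
print(impact_ratios[impact_ratios < 0.8])
```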

The ‘mythology’ behind AI
Historically, clients have had to create their own training data to feed into their AI tools. But IBM is now giving organizations a complete set of training data to help them avoid the costly mistake of eliminating good candidates, a mistake not unlike the one behind the reportedly sexist recruiting algorithm Amazon deployed and later scrapped.

Much of the fear surrounding AI and machine learning stems from the “mythology” Hollywood has perpetuated about cognitive technologies, Martin said. “They think it’s very complicated.”

“People perceive it as a risk to use some of these tools because they don’t understand that the tools are there to augment their work.”