Is your recruitment software biased?

How can recruitment software predict which new hires are likely to succeed in the long run – and which ones won’t?

Of all the HR functions, recruitment is expected to benefit the most from advances in artificial intelligence and predictive analytics.

Today’s most innovative recruitment software can take in large volumes of job applicant data, then parse and classify it based on patterns it has ‘learned’ from previous datasets.
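The mechanics behind that ‘learning’ are, at their core, supervised classification: a model is fit on historical applicant records labeled with outcomes, then used to score new candidates. The sketch below is a minimal illustration of that general idea only; the feature names and data are hypothetical and do not represent any vendor’s actual pipeline.

```python
# A minimal sketch of the general approach: train a classifier on historical
# hiring outcomes, then score new applicants. Feature names and data are
# hypothetical; real systems parse resumes into far richer representations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one past applicant: [years_experience, skills_matched, referral]
X_history = np.array([
    [1, 2, 0],
    [5, 6, 1],
    [3, 4, 0],
    [7, 8, 1],
    [2, 1, 0],
    [6, 7, 0],
])
# 1 = the hire was later rated a strong performer, 0 = not
y_history = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X_history, y_history)

# Score a new applicant on the same features
new_applicant = np.array([[4, 5, 0]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated probability of success
```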

Earlier this month, HRTechNews reported that employees hired using algorithmic systems “outperform” those hired through human decision-making by at least 25%.

Experts believe predictive analytics, far from replacing the role of gut instinct in candidate selection, enables recruiters to make informed decisions.

By relying on mathematical models, smart machines are beginning to make decisions on behalf of humans, whether by screening resumes or identifying diverse talent pools likely to be overlooked by human recruiters.

What happens, however, when the recruitment software is tainted with bias – the influence of personal prejudice on data collection, classification, and analysis?

When the algorithm learns to select candidates based on attributes irrelevant to performance, such as gender or ethnicity, rather than their credentials?

Experts have pointed out that algorithm bias is simply a reflection of implicit human bias, and that training on historical data that favors one set of applicants over another will likely produce skewed results.
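To make that concrete, here is a hypothetical sketch using synthetic data, not drawn from any real hiring system: when the historical ‘hired’ labels themselves favored one group, a model trained on them will score two equally qualified applicants differently.

```python
# Hypothetical illustration (synthetic data) of how skewed historical labels
# get baked into a model. Past decisions favored group A regardless of a
# qualification score, and the fitted model reproduces that preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group_a = rng.integers(0, 2, n)        # 1 = group A, 0 = group B
qualification = rng.normal(0, 1, n)    # the attribute that should matter
# Historical "hired" labels: partly qualification, heavily group membership
hired = (qualification + 2.0 * group_a + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([group_a, qualification])
model = LogisticRegression().fit(X, hired)

# Two equally qualified applicants, differing only in group membership
applicant_a = [[1, 0.5]]
applicant_b = [[0, 0.5]]
print("group A score:", model.predict_proba(applicant_a)[0, 1])
print("group B score:", model.predict_proba(applicant_b)[0, 1])
# The gap between the two scores is inherited entirely from the biased labels.
```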

Some examples of algorithm bias are subtle. In a 2015 experiment on Google’s ad targeting, simulated users the system deemed ‘male’ were shown ads for high-paying executive jobs more often than those it deemed ‘female’.

In contrast, a number of tech startups such as Eightfold.ai and HiringSolved leverage AI in hiring precisely to increase diversity in the candidate pool and, ultimately, in the workforce.

Whether AI-powered decision-making is beneficial or detrimental to society at large has been the subject of debate, and mathematician Cathy O’Neil is among the industry observers on the front lines calling for proper audits of these models.

Algorithms are “assumed to be fair and objective simply by dint of their mathematical nature,” she said.

The lack of oversight in the tech industry, however, makes it difficult for software users to identify which platforms are plagued by algorithm bias.

HR professionals and tech specialists can guard against these pitfalls and promote fairness by submitting their recruitment software for auditing.

O’Neil founded the O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) firm with a two-fold mission to help organizations use algorithms fairly, and to establish measures that test a model’s tendencies for bias. Businesses whose mathematical models pass the fairness test will receive an ORCAA stamp of approval, much like the tech industry’s version of the Good Housekeeping seal.
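ORCAA’s exact methodology isn’t detailed here, but one widely used screening heuristic in hiring audits is the “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the model warrants closer scrutiny. The sketch below shows that check on hypothetical screening outcomes; it is an illustration of the general idea, not ORCAA’s test.

```python
# The "four-fifths rule" heuristic: compare selection rates across groups.
# A group whose rate is below 80% of the highest group's rate is flagged.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def four_fifths_check(decisions_by_group):
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    highest = max(rates.values())
    return {g: (rate, rate / highest >= 0.8) for g, rate in rates.items()}

# Hypothetical screening outcomes (1 = advanced to interview)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}
for group, (rate, passes) in four_fifths_check(outcomes).items():
    print(f"{group}: selection rate {rate:.2f}, passes 4/5 rule: {passes}")
```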


Related stories:
Will algorithms replace gut instinct?
Can AI help eradicate unconscious bias?