Should you tell an employee if they're talking to a robot?

Robotics is thriving in the recruitment world – but that doesn’t necessarily mean we should be misleading candidates. Whilst it may be fun to see how far a chatbot can be programmed to sound like a human, it throws up some questions around ethics in hiring.

Abhishek Gupta is the founder of the Montreal AI Ethics Institute and an AI Ethics Researcher at McGill University in Montreal, Canada.

His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in the use of AI across different domains. Abhishek comes from a strong technical background, having worked as a machine learning software engineer at Microsoft in Montreal.

HR Tech News spoke to Abhishek to ask his opinion on the changing nature of ethics in AI-powered recruitment and the importance of keeping the human touch in hiring.

“We shouldn’t give too much power to any AI system,” explained Abhishek. “These machines are not autonomous in the sense that we’re the ones architecting the system. Ultimately, it’s the human who we should trust or not trust – it’s the person behind the machine.

“This notion of an evolving output distribution, whereby the algorithm goes forth interacting with real-world data and learning from that, makes you question whether you should trust the human or trust the system itself. The data itself is the source of bias. We’re capturing the stereotypes that exist within society, hence they’re reflected in the outputs of the machine learning systems. For every AI slip-up that’s played out on a public stage, I’m confident there are a few going undetected internally.”
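To make that concrete, here’s a deliberately simplified sketch – entirely our illustration, built on made-up data, and not anything from Abhishek’s research – of how a skewed hiring history can surface in a model’s scores:

```python
# A minimal, hypothetical sketch of bias inherited from training data.
# The dataset is synthetic and exaggerated purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, attended_elite_school]; label: 1 = hired.
# Suppose past hiring favoured elite-school candidates regardless of
# experience -- that pattern is now baked into the labels.
X = [
    [2, 1], [3, 1], [1, 1], [4, 1],   # elite school, mostly hired
    [5, 0], [6, 0], [4, 0], [7, 0],   # more experience, mostly rejected
]
y = [1, 1, 1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates, differing only in the proxy feature:
print(model.predict_proba([[5, 1]])[0][1])  # elite school -> higher hire score
print(model.predict_proba([[5, 0]])[0][1])  # otherwise identical -> lower score
```

No one programmed a rule that says “prefer elite-school candidates”; the model simply reproduces the pattern the historical labels already contain.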

Recruitment is one of the most prominent industries to have been revolutionized by AI and automation. Robotics allows HR managers to hand off menial work and devote their precious time to the more human side of hiring. That being said, we need to know where to draw the line with our digital colleagues.

For instance, when a candidate is applying for a job via a chatbot or other AI-powered recruitment tool – should they be told?

“Unequivocally yes,” explained Abhishek. “Look back at the Google Duplex example from this summer. At the Google I/O conference, the team unveiled Google Duplex, which can book appointments on behalf of the user.”

The system was actually mimicking human tells – using filler words like ‘umm’ and ‘err’, and pausing to copy natural speech patterns. Whilst it’s incredibly helpful in connecting humans to businesses, it also raises questions around disclosure.

Is it disingenuous to make a person think they’re talking to a human entity when they’re not? Booking a hairdressing appointment may be a trivial matter, but we can’t assume that’s the only domain this type of technology will be used in.

“Automated telephone systems are fine,” continued Abhishek, “but when you’re interacting with what appears to be a live human voice, that disingenuousness is a bigger problem.”

Relating that back to the recruitment process, Abhishek believes that organizations have a duty to inform candidates when they’re interacting with a robot – no matter how human it may seem.
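In practice, that disclosure can be as simple as the very first message a candidate sees. Here’s a minimal sketch – our illustration, not any particular vendor’s tool – of a screening bot that leads with transparency:

```python
# A hypothetical recruitment chatbot that discloses it is automated
# before any screening questions begin.
DISCLOSURE = (
    "Hi! I'm an automated assistant helping with applications for this role. "
    "I'm not a human recruiter. A member of the hiring team will review "
    "everything we discuss."
)

def start_screening(candidate_name: str) -> list[str]:
    """Open the conversation with the disclosure, then the first question."""
    return [
        DISCLOSURE,
        f"Nice to meet you, {candidate_name}. To begin: which role are you "
        "applying for?",
    ]

for message in start_screening("Sam"):
    print(message)
```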

“It’s important to respect transparency and commit to full disclosure,” he told HR Tech News. “After all, if a candidate is to trust the company they’re applying to, it really doesn’t bode well to start on a foundation of trickery.”