When visions of a world dominated by machines abound in mainstream media, fear can end up clouding people’s judgment of emerging tech – and with it, their ability to engage in meaningful debate, an artificial intelligence expert said.
Concerns over AI have prompted calls to regulate its development, but doing so won’t be as simple as regulating any other product on the market.
“We’ll need an overhaul of regulatory infrastructure for [policy development] to be very nimble [and] agile,” said Abhishek Gupta, machine learning engineer at Microsoft and founder of the Montreal AI Ethics Institute.
“We’ll need technical experts who know what is actually going on in the field and what the actual capabilities are, rather than policy makers being informed by popular media or some dense white paper reports that they read,” the AI ethics researcher told HR Tech News.
“Part of my belief is for regulation and policy makers to build up a basic literacy of how these technologies work,” he said. “Because then, when engaged with technical experts, they can ask the right questions.”
Before governments and the private sector can balance innovation with legislation, they first need to come to a common understanding of the tech, Gupta said. He believes people should “move the conversation away from sensationalist ideas and dystopic scenarios” and focus on the practical and tangible ways AI affects businesses and communities.
In HR, for instance, AI and automation can influence recruitment through candidate vetting and resume screening; coach workers on their career-development goals using automated prompts; and forecast workforce trends through predictive analytics.
The challenge, however, is for business and community leaders to develop their own AI literacy in a language accessible to technical and nontechnical audiences alike. This shared language is what will push the AI ethics agenda further.
“If you’re choosing to buy a solution from a supplier, what are some of the questions you can ask them about how that system has been built? Has it been tested? What are the false positive and false negative rates?” Gupta said.
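To make those last two questions concrete: the false positive and false negative rates Gupta mentions come straight from a model’s confusion matrix. The sketch below is a hypothetical illustration (the function name, counts, and resume-screening scenario are invented for the example), not anything from a specific vendor’s system.

```python
# Hypothetical illustration of the error rates a buyer might ask a
# vendor about, computed from confusion-matrix counts.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)  # share of true negatives wrongly flagged
    fnr = fn / (fn + tp)  # share of true positives wrongly missed
    return fpr, fnr

# Invented example: a resume-screening model evaluated on 1,000 candidates
fpr, fnr = error_rates(tp=80, fp=45, tn=855, fn=20)
print(f"False positive rate: {fpr:.1%}")  # 45 / 900 = 5.0%
print(f"False negative rate: {fnr:.1%}")  # 20 / 100 = 20.0%
```

Even a nontechnical buyer armed with these two numbers can probe further, for example asking which rate the vendor chose to minimize and at whose expense.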
The Montreal AI Ethics Institute organizes regular meetings for anyone in the community who wants to learn more about the impact of AI.