Microsoft wants to develop stricter policies on AI

Microsoft is planning to launch a set of ethical principles to govern the use of its facial recognition tools. The move aims to address public concern about the nascent technology, particularly tools powered by artificial intelligence.

Facial recognition is used by government agencies, including the military, police, and border patrol, as well as tech companies such as Apple and Facebook.

Others in the HR tech space have suggested the “game-changing” potential of facial recognition in candidate screening, workforce management, and workplace security.

But without clearly defined policies to regulate these tools, the use of facial recognition to identify and profile individuals without their consent runs the risk of violating people’s privacy and freedoms.

Some privacy advocates have raised concerns that the technology is being used for mass surveillance and the profiling of individuals.

In December, Microsoft urged governments to develop legislation covering facial recognition software.

“We do need to lead by example and we’re working to do that,” said Brad Smith, president and chief legal officer of Microsoft.

By the end of March, the company hopes to have “operationalized” its ethical principles by testing and developing new policies, engineering tools, and governance systems, Bloomberg reported.

Microsoft said it plans to set limits on the sale of its facial recognition tools, especially when there is a risk that the technology could be used for purposes other than those the company intends.