
Microsoft Restricts Its Facial Recognition Tools, Citing the Need for 'Responsible AI'

The tech giant lays out its goals for equitable and trustworthy AI.

Nina Raemont
Illustration of a face being scanned against a red background. Facial recognition technology has become a growing civil rights and privacy concern. (James Martin/CNET)

Microsoft is restricting access to its facial recognition tools, citing risks to society that the artificial intelligence systems could pose.

The tech company released a 27-page "Responsible AI Standard" on Tuesday that details its goals for equitable and trustworthy AI. To align with the standard, Microsoft is limiting access to facial recognition tools in its Azure Face API, Computer Vision and Video Indexer services.

"We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve," wrote Natasha Crampton, chief responsible AI officer at Microsoft, in a blog post. She added the company would retire its Azure services that infer "emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup." 

Facial recognition technology has become a growing civil rights and privacy concern. Studies have shown the tech disproportionately misidentifies women and people with darker skin tones. This can have serious implications when the technology is used to identify criminal suspects or in surveillance situations. Other companies, including Amazon and Facebook, have also scaled back or discontinued their own facial recognition tools.

"[Our laws] have not caught up with AI's unique risks or society's needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act," Microsoft said in a blog post. 

Microsoft's standards for equitable and responsible AI technology don't stop at facial recognition. They also apply to the speech technology the company offers through Azure AI, including its speech-to-text service and Custom Neural Voice. Microsoft said it took steps to improve the software after a March 2020 study found higher error rates when speech-to-text technology was used by African American and Black speakers.

Starting Tuesday, Microsoft said, new customers will need to apply for access to Azure's Face API, and existing customers have one year to reapply to continue using the facial recognition software.