Microsoft Updates 'Responsible AI' Framework with New Facial Rec Guidance
Microsoft has announced updates to its "Responsible AI Standard," the internal playbook that guides Redmond's artificial intelligence product development and deployment. Among other changes, v2 of the document adds "an additional layer of scrutiny" to the way Microsoft implements its facial recognition services. Customers will be cut off next year if they don't meet Microsoft's stipulations, the company says.
With the publication of this document, Microsoft is introducing "Limited Access" eligibility requirements. New customers must apply for access to use facial recognition in the company's Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to apply for and receive approval for continued access to the facial recognition services, based on the use cases they provide.
"By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit," said Sarah Bird, principal product manager in Microsoft's Azure AI group, in a blog post.
Microsoft is also removing emotional assessments from its face-scanning solutions, as well as "identity attributes such as gender, age, smile, facial hair, hair, and makeup." The idea is that such attributes can be used in "stereotyping, discrimination, or unfair denial of services."
Facial recognition software is inherently about surveillance, but Microsoft envisions some Limited Access commercial use cases for its technologies, as described in this document ("Use Cases for Azure Face Service"). The list of permitted commercial use cases includes:
- Identity verification: "for opening a new account, verifying a worker, or authenticating to participate in an online assessment."
- Touchless access: for cards and tickets in "airports, stadiums, offices, and hospitals."
- Personalization: for kiosks at the workplace and at home.
- Blocking: to "prevent unauthorized access to digital or physical services or spaces."
There are also some limited use cases for public sector applications of Microsoft's facial recognition technologies. They're similar to the commercial use cases, but additionally allow law enforcement to scan already apprehended suspects in court cases. Microsoft also permits facial scanning in cases where there is a risk of death or physical injury, as well as for "humanitarian assistance."
Microsoft's policy already prohibits the use of real-time facial recognition technology on mobile cameras used by law enforcement to attempt to identify individuals in "uncontrolled, in-the-wild" environments.
Microsoft made headlines in 2020 with claims it would make selling facial recognition technology to U.S. police agencies contingent upon there being a national law in place "grounded in human rights." However, the American Civil Liberties Union revealed that Microsoft was trying to sell facial recognition software to the U.S. Drug Enforcement Administration even as it was making those law enforcement limitation claims.
With Microsoft's new approach, it's still the judge and jury of the fair use of the facial recognition services it sells. Google has offered similar ruminations on the use of facial recognition technologies, without completely ruling them out.
The new stipulations suggest that Redmond is getting serious about limiting the use of its facial recognition services. It's a distinctly different direction from the company's recent past. Three years ago, Microsoft pulled back from a $78 million investment in Israeli startup AnyVision after it was revealed that AnyVision was using its face-scanning technologies to help the Israeli state surveil Palestinians.
Kurt Mackie is senior news producer for 1105 Media's Converge360 group.