Microsoft's 'Counterfit' Tool for Attacking AI Systems Now Open Source

Microsoft this week announced the open source release of Counterfit, its artificial intelligence (AI) security assessment tool. The tool was developed to help organizations conduct AI risk assessments to ensure that the algorithms used in their businesses are "robust, reliable, and trustworthy," the company says.

A command-line interface that provides a generic automation layer for assessing the security of machine learning (ML) models, Counterfit was developed by Microsoft's internal "red team" to conduct automated attacks against the company's AI systems at scale.
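Counterfit's exact interface aside, the idea of a generic automation layer can be sketched in a few lines: a harness that takes any model exposing a predict function and runs a registry of attack routines against it, recording how often each one succeeds. The names and structure below are hypothetical, for illustration only, and are not Counterfit's actual API.

```python
from typing import Callable, Dict

import numpy as np

# A model is anything with a predict(inputs) -> labels function; an attack
# takes (predict, samples) and returns perturbed samples. Hypothetical names.
AttackFn = Callable[[Callable[[np.ndarray], np.ndarray], np.ndarray], np.ndarray]

def run_attack_suite(predict: Callable[[np.ndarray], np.ndarray],
                     samples: np.ndarray,
                     attacks: Dict[str, AttackFn]) -> Dict[str, float]:
    """Run every registered attack and report the fraction of predictions it flips."""
    baseline = predict(samples)
    results = {}
    for name, attack in attacks.items():
        adversarial = attack(predict, samples)
        results[name] = float(np.mean(predict(adversarial) != baseline))
    return results

def random_noise_attack(predict, samples, eps=0.3):
    """Simplest black-box probe: bounded uniform noise added to the inputs."""
    return samples + np.random.uniform(-eps, eps, samples.shape)

# Usage: plug in any model's predict function and a batch of test inputs.
# report = run_attack_suite(model.predict, X_test, {"noise": random_noise_attack})
```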

"This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities," said Will Pearce, Microsoft's AI Red Team Lead, and Ram Shankar Siva Kumar, "Data Cowboy" on Microsoft's Azure Security Data Science team, "with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative. Counterfit started as a corpus of attack scripts written specifically to target individual AI models, and then morphed into a generic automation tool to attack multiple AI systems at scale."

Microsoft uses Counterfit to attack its own AI systems in production to find vulnerabilities. The company is also piloting the tool in the AI development phase to "catch vulnerabilities in AI systems before they hit production," it says.

Organizations can use Counterfit to attempt to "evade and steal AI models," Microsoft says. A logging capability provides the telemetry needed to understand AI model failures, and the tool comes preloaded with published attack algorithms that can be used to bootstrap red team operations.
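To give a sense of what "published attack algorithms" means here, one of the best known is the fast gradient sign method (FGSM) for evading a model at inference time. The sketch below applies it to a toy logistic-regression model; it is a minimal illustration of the technique, not code from Counterfit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: logistic regression with fixed, pretrained-looking weights.
w, b = rng.normal(size=8), 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(x @ w + b)

x = rng.normal(size=8)   # a legitimate input
y = 1.0                  # its true label

# FGSM: step the input in the direction of the sign of the loss gradient.
# For logistic loss, d(loss)/dx = (p - y) * w.
grad = (predict(x) - y) * w
eps = 0.25               # attacker's perturbation budget
x_adv = x + eps * np.sign(grad)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward the wrong class
```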

The tool is similar to other attack tools, such as Metasploit or PowerShell Empire, and can hook into existing offensive tools, Microsoft says. The company recommends using Counterfit alongside the Adversarial ML Threat Matrix, an ATT&CK-style framework released by MITRE and Microsoft that helps security analysts orient themselves to threats against AI systems.

In building Counterfit, Microsoft enlisted testing support from partners, organizations, and government agencies. The tool is now broadly available and works across AI models hosted on-premises, in the cloud, and at the edge, regardless of the type of data they use.

The announcement points to a number of resources that organizations can use to understand machine learning failure modes. There's also a "Threat Modeling" guide for developers of AI and ML systems. That document names "data poisoning" as the greatest threat to machine learning systems today because it is hard to detect. Attackers can force emails to be labeled as spam, craft inputs that lead to misclassifications, and "contaminate" training data.
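As a concrete, invented illustration of that last point, the sketch below poisons the training set of a toy "spam filter" by injecting spam-like examples mislabeled as legitimate mail; the retrained model then misses far more real spam. The classifier and data are made up for the example and are not drawn from Microsoft's guide.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Two Gaussian blobs standing in for ham (label 0) and spam (label 1)."""
    ham = rng.normal(loc=-1.0, size=(n, 2))
    spam = rng.normal(loc=+1.0, size=(n, 2))
    return np.vstack([ham, spam]), np.array([0] * n + [1] * n)

def train(X, y):
    """Nearest-centroid classifier: one centroid per class."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def spam_recall(model, X, y):
    """Fraction of true spam the model still catches."""
    c_ham, c_spam = model
    pred = np.linalg.norm(X - c_spam, axis=1) < np.linalg.norm(X - c_ham, axis=1)
    return float(pred[y == 1].mean())

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)
print(f"clean spam recall:    {spam_recall(train(X_train, y_train), X_test, y_test):.2f}")

# Poisoning: inject 300 spam-like points mislabeled as ham, dragging the
# learned "ham" centroid toward the spam region.
poison = rng.normal(loc=+1.5, size=(300, 2))
X_bad = np.vstack([X_train, poison])
y_bad = np.concatenate([y_train, np.zeros(300, dtype=int)])
print(f"poisoned spam recall: {spam_recall(train(X_bad, y_bad), X_test, y_test):.2f}")
```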

Counterfit is available now on GitHub. It can be deployed via Azure Cloud Shell or installed locally as an Anaconda Python environment.

Microsoft plans to talk more about Counterfit during a May 10 webinar, led by Ann Johnson, VP of Microsoft's Security, Compliance, and Identity Business Development group, and Dr. Hyrum Anderson, a principal architect at Microsoft. A sign-up page for the webinar is available.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
