In-Depth

5 Key Takeaways from Microsoft's 'Trustworthy AI' Manifesto

An inside look at how Microsoft's AI sausage is made, responsibly.

This month, Microsoft shared a paper titled "How Microsoft 365 Delivers Trustworthy AI." Part mission statement, part product blueprint and part policy manual, the 25-page document was released just a few months after the general availability of Microsoft 365 Copilot, Microsoft's bid to bring generative AI to the scores of organizations around the world using its cloud-based productivity suite.

Adoption, though, is not a sure thing. Copilot and solutions like it are hamstrung by persistent concerns about the security of LLMs and commercial generative AI technologies in general.

The paper seems designed to assuage those concerns. As Microsoft put it in a blog post, it's "a comprehensive document providing regulators, IT pros, risk officers, compliance professionals, security architects, and other interested parties with an overview of the many ways in which Microsoft mitigates risk within the artificial intelligence product lifecycle."

"Comprehensive" is correct. The paper covers a lot of ground, from which teams within Microsoft's vast organization are responsible for steering its AI policies, to which AI regulations on both sides of the pond Microsoft has adopted (and which ones it is merely "considering"). The full document is available to read here; below are some highlights.

1. The 10 AI Vulnerabilities on Microsoft's Radar
Widespread use of AI opens the door to new and harder-to-detect cybersecurity threats. Microsoft is keeping a particularly close eye on these 10:

  • Command Injection: The ability [to] inject instructions that cause the model to deviate from its intended behavior.
  • Input Perturbation: The ability to perturb valid inputs such that the model produces incorrect outputs (also known as model evasion or adversarial examples).
  • Model Poisoning or Data Poisoning: The ability to poison the model by tampering with the model architecture, training code, hyperparameters, or training data.
  • Membership Inference: The ability to infer whether specific data records, or groups of records, were part of the model's training data.
  • Attribute Inference: The ability to infer sensitive attributes of one or more records that were part of the training data.
  • Training Data Reconstruction: The ability to reconstruct individual data records from the training dataset.
  • Property Inference: The ability to infer sensitive properties about the training dataset.
  • Model Stealing: The ability to infer/extract the architecture or weights of the trained model.
  • Prompt Extraction: The ability to extract or reconstruct the system prompt provided to the model.
  • Input Extraction: The ability to extract or reconstruct other users' inputs to the model.

Future updates to Microsoft's Software Development Lifecycle will be designed to keep pace with these and other emerging AI security threats, according to the paper.

2. The 6 Tenets of Microsoft's Responsible AI Standard
Now in its second edition, the Responsible AI Standard (RAIS) is Microsoft's evolving internal rulebook for developing around AI. Wherever there's a gap between legal policies around AI and the actual capabilities of AI technology, Microsoft turns to RAIS to guide its hand.

There are six "domains" covered in the RAIS, per Microsoft:

  • Accountability
  • Transparency
  • Fairness
  • Reliability
  • Safety
  • Privacy and security
  • Inclusiveness

"The RAIS applies to all AI systems that Microsoft creates or uses," according to the paper, "regardless of their type, function, or context."

"As a commercial customer of Microsoft 365, your data will not be used to train Copilot LLMs without your consent, even models only used by other users within your tenant. You are always in control of how, when, and where your data is used."

Microsoft, "How Microsoft 365 Delivers Trustworthy AI," January 2024

3. Microsoft's AI Watchmen
The paper identifies three discrete groups within Microsoft that "work collaboratively" to steer the company's responsible AI efforts. They are:

  • The Office of Responsible AI (ORA), an offshoot of Microsoft's legal arm. Besides lawyers, ORA also comprises engineers and privacy and security experts who've all been tasked with overseeing Microsoft's "implementation of the company's responsible AI strategy across all business units and functions."
  • AI Ethics and Effects in Engineering and Research (Aether), which was started in 2016 by then-Microsoft Research director Eric Horvitz (who has since become Microsoft's first ever chief scientific officer) and Microsoft President Brad Smith. Aether advises Microsoft's top floor on "rising questions, challenges, and opportunities with the development and fielding of AI and related technologies," and provides "recommendations on policies, processes, and best practices."
  • The Artificial Generative Intelligence Security (AeGIS) team, which, as its name suggests, is responsible for ensuring that Microsoft product teams working with generative AI have the appropriate data security handholds in place. "Its mission," per the paper, "is to provide strategic threat intelligence, guidance, and specialized services to product and platform teams that are developing and launching GAI-powered features."

All three teams spend their time "figuring out what it means for AI to be safe," says Microsoft. "Failure modes in AI don't distinguish between security and responsible AI, so our teams closely collaborate to ensure holistic coverage of risk."

4. Microsoft's 3-Part Data Security Promise
Data security and privacy are an IT team's perennial bugbears. To assure organizations that Copilot is not overstepping its boundaries when it comes to their data, Microsoft is promising these three things:

  • You're in control of your data -- this has been true long before Copilot and remains true in the era of AI
  • Prompts, responses, and data accessed through Microsoft Graph aren't used to train foundation LLMs, including those used by Microsoft Copilot for Microsoft 365
  • Your data is protected at every step by the most comprehensive compliance and security controls in the industry

"No unauthorized tenant or user has access to your data without your consent," according to the paper. "Further, those promises are extended into LLMs. This means that as a commercial customer of Microsoft 365, your data will not be used to train Copilot LLMs without your consent, even models only used by other users within your tenant. You are always in control of how, when, and where your data is used."

Microsoft employs several security checks to make sure Copilot doesn't expose or misuse an organization's data, including data encryption (in transit and at rest) and role-based access control.

"Even though your copilot requests are run on multi-tenant shared hardware and in multi-tenant shared service instance," Microsoft says, "data protection is ensured through extensive software defense-in-depth against unauthorized use such as RAG [retrieval augmented generation] and other techniques."

5. Intelligence Isn't Knowledge
Copilot is good, Microsoft acknowledges, but that's not because it's violating data privacy principles. Per the paper: "Because copilots produce results tailored to you, you may be concerned that your data is being trained into an LLM or could be seen or benefit other users or tenants. That is not the case."

Microsoft gives this high-level explanation for how Copilot processes information while still respecting an organization's security policies:

Copilots use a variety of techniques to ensure highly tailored results without training your data into the models; techniques like "Retrieval Augmented Generation (RAG)" in which your copilot prompt is used to retrieve relevant information from your corpus using semantic search with an access token that ensures only data you are permitted to see can be used. That data is fed into the LLM as "grounding," along with your prompt, which enables the LLM to produce results that are both tailored for you and cannot include information you are not authored to see.

"Any sufficiently advanced technology is indistinguishable from magic," Arthur C. Clarke famously aphorized. One possible corollary, phrased clumsily, is, "Just because a technology seems markedly advanced, that doesn't mean it's doing anything nefarious."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.

Featured