Responsible AI: Microsoft's 3-Pronged Promise

Microsoft's AI commitments are getting publicized not long after it cut its pioneering Ethics and Society team.
Microsoft this week shared its three tenets regarding AI use and its commitment to customers. To wit, Microsoft promised its AI users the following:
- Sharing our learnings about developing and deploying AI responsibly;
- Creating an AI Assurance Program; and
- Supporting you as you implement your own AI systems responsibly.
The responsible use of AI with proper governance is a concern for industry, governments and organizations alike, opined Anthony Cook, Microsoft's corporate vice president and deputy general counsel:

    Ensuring the right guardrails for the responsible use of AI will not be limited to technology companies and governments: every organization that creates or uses AI systems will need to develop and implement its own governance systems. That's why today we are announcing three AI Customer Commitments to assist our customers on their responsible AI journey.
The expertise-sharing part of Microsoft's AI commitments is based on the documents that Microsoft uses, including its "Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on the implementation of our responsible AI by design approach," Cook explained. Microsoft also plans to share its employee training curriculum and invest in "dedicated resources and expertise in regions around the world."
The AI Assurance Program part of Microsoft's AI commitments is still being created. The exact details of the program weren't described, but Microsoft seems to be adopting some identity verification aspects used with its financial services industry customers for it. Microsoft also is incorporating guidelines from its "Governing AI" document (PDF). Additionally, Microsoft promised to "attest to how we are implementing the AI Risk Management Framework recently published by the U.S. National Institute of Standards and Technology (NIST)." Microsoft will seek views from "customer councils," too.
As for the customer support part of Microsoft's AI commitments, Microsoft is promising to create a "dedicated team of AI legal and regulatory experts" as a resource for organizations. It's also promising to leverage partner support in helping customers implement AI systems.
"Today we can announce that PwC and EY are our launch partners for this exciting program," Cook indicated.
Other Moves
Microsoft's AI commitments are getting publicized not long after it cut its pioneering Ethics and Society team, which had been involved in early work with Microsoft software development teams using AI. Microsoft also had suggested last month that it was aiming to hire more talent for its responsible AI program, a seemingly contrary turn of events.
Microsoft uses OpenAI's large language models for its Azure OpenAI service. However, whether OpenAI's services can be used securely hasn't been altogether clear. For instance, Samsung last month banned the direct use of OpenAI's ChatGPT service over security concerns. That action came after press accounts had suggested that some Samsung employees had put proprietary code into their ChatGPT prompts, which could be viewed by others.
About the Author
Kurt Mackie is senior news producer for 1105 Media's Converge360 group.