News
Microsoft Adds AI Compute Power to Azure via AMD, In-House Cobalt Chips
Microsoft is buttressing its Azure cloud platform with more compute options to support AI workloads.
The new capabilities, announced Tuesday as part of Microsoft's 2024 Build conference, are designed to give Azure customers access to "exceptional cost-performance in compute and advanced generative AI capabilities," according to a blog post by Omar Khan, general manager of Azure Infrastructure Marketing at Microsoft.
First, Microsoft has launched the preview of Azure virtual machines (VMs) that run on its first-party Cobalt processor, which was first announced at last November's Ignite conference alongside another first-party chip called Maia.
Specifically, the new VMs run on the Arm-based Azure Cobalt 100 processor that's designed for a wide swath of workloads, from general-purpose to cloud-native to AI.
"Cobalt 100-based VMs are Azure's most energy efficient compute offering and deliver up to 40% better performance than our previous generation of Arm-based VMs," said Khan. "Today, the new Cobalt 100 based VMs are expected to enhance efficiency and performance for both Azure customers and Microsoft products."
Second, Microsoft announced the general availability of new AMD-based Azure VMs. These VMs are accelerated by AMD's Instinct MI300X accelerator, which, per Khan, can deliver "unprecedented cost-performance for inferencing scenarios of frontier models like GPT-4."
"Our infrastructure supports different scenarios for AI supercomputing, such as building large models from scratch, running inference on pre-trained models, using model as a service providers, and fine-tuning models for specific domains," he added.
Azure's AI infrastructure comprises a variety of accelerators, Khan noted, including chips from Nvidia and AMD as well as Microsoft's own silicon, Cobalt and Maia.
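For customers who want to see which accelerator-backed families are offered in a given region, the same Python SDK can enumerate VM sizes. In the sketch below, the region and the "ND"/"NC" name prefixes used for filtering are assumptions based on Azure's GPU-series naming conventions, not details from the announcement.

```python
# Small sketch: list VM sizes in a region and pick out accelerator-backed families.
# The region and the "ND"/"NC" prefix filter are assumptions based on Azure's
# GPU-series naming conventions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for size in compute_client.virtual_machine_sizes.list(location="eastus"):
    # ND- and NC-series names denote GPU/accelerator instances in Azure's naming scheme
    if size.name.startswith(("Standard_ND", "Standard_NC")):
        print(f"{size.name}: {size.number_of_cores} vCPUs, {size.memory_in_mb} MB RAM")
```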
To help customers spin up the Azure instances they need more quickly, Microsoft is also introducing a new service called Azure Compute Fleet. Now in preview, Azure Compute Fleet lets customers provision and manage up to 10,000 VMs at a time while staying within their compute and cost requirements.
Azure Compute Fleet "simplifies provisioning of Azure compute capacity across different virtual machine (VM) types, availability zones, and pricing models to more easily achieve desired scale, performance, and cost by enabling users to control VM group behaviors automatically and programmatically," said Khan. "As a result, Compute Fleet has the potential to greatly optimize your operational efficiency and increase your core compute flexibility and reliability for both AI and general-purpose workloads together at scale."