NVIDIA expands multiyear AI infrastructure deal with Meta, including millions of Blackwell and Rubin chips

NVIDIA said it has expanded a multiyear partnership with Meta Platforms under which it will supply the social media company with millions of its current- and next-generation artificial intelligence chips for use in Meta's data centers.

The expanded arrangement covers NVIDIA's Blackwell and Rubin graphics processing units, as well as its central processing units and networking products, the company said. Meta plans to use the systems to train AI models and run them in production.

NVIDIA did not disclose the financial terms of the deal. Analysts have estimated that the agreement could be worth tens of billions of dollars based on the scale of shipments and the product mix.

The agreement includes broader adoption of NVIDIA's Arm-based Grace processors in Meta's data centers, NVIDIA said, alongside plans to deploy the company's Vera processors in 2027. NVIDIA said the Grace rollouts aim to improve performance per watt for some of Meta's data center applications.

"No one deploys AI at Meta’s scale – integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users," NVIDIA Chief Executive Jensen Huang said in a statement.

NVIDIA said Meta will also deploy its Spectrum-X Ethernet switches as part of Meta’s Facebook Open Switching System platform, a move NVIDIA framed as improving utilization, latency, and operational efficiency for AI-scale networking.

Meta has adopted NVIDIA Confidential Computing for private processing in WhatsApp, NVIDIA said, and the two companies are working to expand that capability to other applications across Meta's portfolio.

The partnership spans on-premises deployments as well as capacity provided by NVIDIA cloud partners, the company said. Those partners include firms such as CoreWeave and Crusoe that host NVIDIA chips for customers to rent.

The deal underscores NVIDIA's push beyond GPUs into data center CPUs and networking, areas long dominated by Intel and Advanced Micro Devices. NVIDIA introduced its Grace CPUs as companions to its AI accelerators and has since sought to position them for a wider range of data center workloads.

The announcement comes as large technology companies face investor scrutiny over the pace of AI infrastructure spending and the payback from those investments. Chip shares have also cooled in recent months as customers, including Amazon, Google, and Microsoft, roll out newer versions of their own AI chips.

Meta is developing in-house AI chips and has also explored alternatives, including discussions about using Google’s Tensor Processing Unit chips for some AI work, according to a prior media report.

NVIDIA shares rose in early trading after the announcement, while Meta shares slipped.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].