Red Hat Goes All-In on Generative AI with Acquisition of Neural Magic
By John K. Waters
11/12/2024
Red Hat, one of the world's leading providers of open-source solutions, today announced its pending acquisition of Neural Magic, a company known for pioneering GenAI inference software. This acquisition appears to be a strategic move aimed at making powerful, flexible AI accessible across the hybrid cloud. Neural Magic's expertise in high-performance inference software and algorithms will now be in Red Hat's hands, potentially pushing Red Hat's mission of open-source innovation into new frontiers.
With enterprises collecting data everywhere—from sprawling cloud servers to edge devices—AI’s next frontier will almost certainly demand a platform-agnostic approach, says Red Hat’s president and CEO, Matt Hicks. "AI workloads need to run wherever customer data lives across the hybrid cloud," he said in a statement. "This makes flexible, standardized and open platforms and tools a necessity, as they enable organizations to select the environments, resources and architectures that best align with their unique operational and data needs."
With the acquisition of Neural Magic, Red Hat is doubling down on the tools to make that happen: standardized, flexible, open platforms that let organizations choose the environments, resources, and architectures that fit their needs, whether they're running models on AMD GPUs or Google TPUs.
GenAI’s promise is undeniable, but industry watchers argue that today’s technology hasn’t caught up to the hype. Large language models (LLMs) are getting bigger and more complex, and powering them takes serious computing resources and operational savvy. For most organizations, the ability to create customized and secure AI models is still out of reach. This is where Red Hat sees a gap—and an opportunity.
"AI isn't just about massive, resource-intensive models," Hicks said in a blog post. "We're witnessing a shift towards smaller, more specialized models that deliver exceptional performance with greater efficiency. These models are not only more efficient to train and deploy, but they also offer significant advantages in terms of customization and adaptability."
Enter vLLM, a Berkeley-developed, community-driven project that supports open model serving. With Neural Magic’s involvement in vLLM, Red Hat is positioned to enable faster inference and advanced support for multiple hardware backends. That means an open pathway for organizations to leverage AI across a spectrum of devices, from enterprise data centers to edge hardware.
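For readers unfamiliar with vLLM, here is a minimal sketch of what open model serving looks like in code. The model ID and sampling parameters are illustrative placeholders, not anything Red Hat or Neural Magic has announced.

```python
# Minimal offline-inference sketch with vLLM; the model ID is a placeholder.
from vllm import LLM, SamplingParams

prompts = ["What does hybrid-cloud AI mean for enterprises?"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=128)

# vLLM pulls the model from the Hugging Face Hub; any supported model ID works here.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

Because vLLM abstracts the serving layer, the same script can target different hardware backends, which is exactly the flexibility Red Hat is pointing to.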
Neural Magic was spun out of MIT in 2018, bringing cutting-edge inference performance software to market. With a mission centered on high-performance deep learning inference, the team's talent and expertise in AI engineering have made it a leader in the field. Now, as part of Red Hat, Neural Magic is poised to accelerate the democratization of AI.
According to Brian Stevens, Neural Magic's CEO, joining Red Hat is a perfect cultural fit. "At Neural Magic, we've assembled some of the industry's top talent in AI performance engineering," he said in a statement. "Joining Red Hat is not only a cultural match, but will benefit companies large and small in their AI transformation journeys."
This isn’t just about adding another tool to the AI ecosystem, Hicks says. Red Hat is building an end-to-end AI strategy, with RHEL AI at its core. Here’s what’s coming to Red Hat’s AI suite:
- RHEL AI (Red Hat Enterprise Linux AI): The base for enterprise-grade LLMs, helping businesses deploy and test open-source models.
- OpenShift AI: A complete suite for developing, training, serving, and monitoring AI models on Kubernetes across any cloud.
- InstructLab: An open-source AI community project developed in collaboration with IBM, focused on evolving open-licensed models through advanced fine-tuning.
vLLM: The Powerhouse at the Core
The heart of this transformation is vLLM, an open-source project backed by Neural Magic's engineering prowess. It's a flexible, open stack, offering full control over infrastructure, security policies, and model lifecycle, allowing enterprises to deploy AI wherever their data lives. With LLM Compressor, Neural Magic's optimization library, enterprises can squeeze more efficiency out of models, thanks to sparsity and quantization algorithms.
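To make the sparsity-and-quantization point concrete, the sketch below shows how a model might be compressed with LLM Compressor and then served with vLLM. The import paths, recipe modifier, dataset, and argument names follow publicly documented llm-compressor examples from late 2024 and should be treated as assumptions that may differ across versions.

```python
# Hedged sketch: post-training quantization with Neural Magic's LLM Compressor.
# Import paths, modifiers, and arguments mirror public llm-compressor examples from
# late 2024 and may differ in other releases; model and dataset IDs are illustrative.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",   # open model to compress (placeholder)
    dataset="open_platypus",                      # calibration data
    recipe=GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
    output_dir="TinyLlama-1.1B-Chat-W8A8",
    max_seq_length=2048,
    num_calibration_samples=512,
)

# The compressed checkpoint can then be loaded by vLLM for inference, e.g.:
#   from vllm import LLM
#   llm = LLM(model="TinyLlama-1.1B-Chat-W8A8")
```

The payoff of this workflow is that a smaller, quantized checkpoint needs less memory and compute at inference time, which is how the "smaller, more specialized models" Hicks describes become practical on ordinary hardware.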
By joining Red Hat, Neural Magic brings its mission of cross-platform, high-performance GenAI into the open-source fold, putting its technology within reach of companies of all sizes. Red Hat's AI initiative seeks to lower both the cost and complexity of AI adoption, built on a foundation of RHEL and OpenShift, a partner ecosystem, and broad hardware support.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.