News

Red Hat Enterprise Linux AI Goes GA

Red Hat last week announced the general availability of Red Hat Enterprise Linux AI (RHEL AI), a platform designed to support enterprise AI innovation across hybrid cloud environments. The new platform combines open-source-licensed Granite large language models (LLMs) with InstructLab tools for model alignment, enabling businesses to develop, test, and deploy generative AI models tailored to their specific needs.
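To make that combination a bit more concrete, here is a minimal, illustrative Python sketch (not part of Red Hat's announcement) that loads an openly licensed Granite checkpoint with the Hugging Face transformers library and generates a short completion. The model ID "ibm-granite/granite-7b-base" and the prompt are assumptions chosen for demonstration; within RHEL AI itself, Granite models ship alongside the InstructLab tooling used to align them before deployment.

    # Illustrative sketch only: load an open Granite model with the
    # Hugging Face transformers library and generate a short completion.
    # The model ID below is an assumption for demonstration purposes.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ibm-granite/granite-7b-base"  # example Granite checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Summarize our Q3 support-ticket trends in two sentences:"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate a short completion. In the RHEL AI workflow, alignment
    # with InstructLab would happen before a model reaches production.
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The InstructLab workflow layered on top of a base model like this is what the platform pitches as the way for domain experts to contribute skills and knowledge before the model is scaled into production.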

Red Hat, a subsidiary of IBM, is one of the world’s leading providers of open-source solutions. Granite is IBM's flagship brand of open and proprietary large language foundation models spanning multiple modalities.

"For GenAI applications to be truly successful in the enterprise, they need to be made more accessible to a broader set of organizations and users and more applicable to specific business use cases," said Joe Fernandes, VP and GM in the Foundation Model Platforms group at Red Hat, in a statement. "RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud, while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI."

RHEL AI is designed to run consistently across environments, from on-premises data centers to public clouds such as AWS, Google Cloud, and IBM Cloud. That flexibility lets organizations scale AI models more efficiently while reducing the costs typically associated with large language models.

Bringing a more consistent foundation model platform closer to where an organization’s data lives is crucial to supporting production AI strategies, the company said in a statement. As an extension of Red Hat’s hybrid cloud portfolio, RHEL AI will span nearly every conceivable enterprise environment, from on-premises data centers to edge deployments to the public cloud.

RHEL AI will be available directly from Red Hat, through Red Hat’s original equipment manufacturer (OEM) partners, and on the world’s largest cloud providers, including Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure, the company said. "This enables developers and IT organizations to use the power of hyperscaler compute resources to build innovative AI concepts with RHEL AI," the company added.

"The benefits of enterprise AI come with the sheer scale of the AI model landscape and the inherent complexities of selecting, tuning, and maintaining in-house models," observed Jim Mercer, program vice president in the Software Development, DevOps, and DevSecOps group at IDC. "Smaller, built-to-purpose, and more broadly accessible models can make AI strategies more achievable for a much broader set of users and organizations, which is the area that Red Hat is targeting with RHEL AI as a foundation model platform."

The platform is available now via the Red Hat Customer Portal, with additional availability on Microsoft Azure and Google Cloud expected later this year.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
