News

AI21’s Jamba-Instruct LLM Now Available on Azure AI Studio's MaaS

AI21 Labs' instruction-following large language model (LLM), Jamba-Instruct, is now available as a serverless API within Azure AI Studio’s Models-as-a-Service (MaaS). This is the first time Jamba-Instruct has been accessible through a cloud partner, and it represents a significant new collaboration between AI21 and Microsoft in the GenAI space, the companies said.

Including Jamba-Instruct in Azure AI’s MaaS catalog strengthens both companies' positions in the GenAI race, they added, by providing enterprise customers with the tools necessary to develop and scale cutting-edge AI applications.

Jamba-Instruct is an instruction-tuned version of the Jamba base model, which AI21 introduced and open-sourced in March of this year. Jamba builds on Mamba, originally proposed in "Mamba: Linear-Time Sequence Modeling with Selective State Spaces," a paper by Albert Gu and Tri Dao. The model combines the state-space model (SSM) architecture with the traditional Transformer architecture (the "T" in GPT), a hybrid that proponents say optimizes performance, quality, and cost-efficiency.

One of Jamba-Instruct’s standout features is its 70,000-token context window, which allows the model to handle long-context tasks. This capability is crucial for work such as document comprehension and sophisticated Retrieval-Augmented Generation (RAG) pipelines, making the model a potential game-changer for enterprises looking to build advanced AI workflows. Its novel hybrid architecture also supports cost-efficient processing, enabling it to manage lengthy contexts with a smaller cloud footprint.

"By utilizing the instruction-tuned Jamba-Instruct on Azure’s platform, organizations can harness the full potential of AI while having safe, reliable, and secure use" said Microsoft product manager Thasmika Gokal, in a blog post. "As an instruction-tuned model, Jamba-Instruct comes with built-in safety instructions, chat capabilities, and complex command comprehension needed to make it ready for immediate use by enterprises."

With this collaboration, AI21 is promoting Jamba-Instruct to developers, emphasizing its seamless integration with tools in Azure AI Studio, Microsoft's GenAI development hub, including Azure AI Content Safety, Azure AI Search, and prompt flow, in support of effective AI practices. Gokal listed "the main advantages that highlight the smooth integration and strong support system provided by Jamba-Instruct with Azure, Azure AI and Models as a Service." Her list included:

  • Enhanced Security and Compliance: Azure places a strong emphasis on data privacy and security, adopting Microsoft's comprehensive security protocols to protect customer data. With Jamba-Instruct on Azure AI Studio, enterprises can operate confidently, knowing their data remains within the secure bounds of the Azure cloud, thereby enhancing privacy and operational efficiency.   
  • Content Safety Integration: Customers can integrate Jamba-Instruct models with content safety features available through Azure AI Content Safety, enabling additional responsible AI practices. This integration facilitates the development of safer AI applications, ensuring content generated or processed is monitored for compliance and ethical standards. 
  • Simplified Assessment of LLM Flows: Azure AI's prompt flow offers evaluation flows, which help developers measure how well LLM outputs match given standards and goals by computing metrics. This feature is useful for workflows created with Jamba-Instruct; it enables comprehensive assessment using metrics such as groundedness, which gauges the pertinence and accuracy of the model's responses against the input sources when using a RAG pattern. 
  • Simplified Deployment and Inference: By deploying AI21 models through MaaS with pay-as-you-go inference APIs, developers can take advantage of the power of Jamba-Instruct without managing underlying infrastructure in their Azure environment. Pricing for Jamba-Instruct, based on input and output token consumption, is listed on Azure Marketplace. (A minimal example of calling the inference API appears after this list.) 
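
For developers who want a sense of what the pay-as-you-go inference API looks like in practice, the sketch below shows a minimal chat call to a Jamba-Instruct serverless deployment. It assumes the azure-ai-inference Python package and a deployment already created in Azure AI Studio; the endpoint URL, key, and prompts are placeholders rather than details from the announcement.

```python
# Minimal sketch: chat call to a Jamba-Instruct serverless (MaaS) deployment.
# Assumes: pip install azure-ai-inference, and a deployment created in Azure AI Studio.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder values -- use the endpoint URL and key issued for your deployment.
endpoint = "https://<your-deployment-name>.<region>.models.ai.azure.com"
key = "<your-api-key>"

client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Jamba-Instruct is chat/instruction tuned, so requests are phrased as messages.
response = client.complete(
    messages=[
        SystemMessage(content="You summarize long documents for enterprise users."),
        UserMessage(content="Summarize the attached quarterly report in three bullet points."),
    ],
    max_tokens=512,
    temperature=0.4,
)

print(response.choices[0].message.content)
```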

"This partnership between AI21 and Microsoft underscores their commitment to empowering enterprise builders with the most advanced language models available, paving the way for the next generation of AI-driven applications at scale," Gokal added

Jamba-Instruct is available in the model catalog in both Azure AI Studio and Azure Machine Learning Studio. There are rate limits for the Jamba-Instruct model on Azure: each deployment is limited to 400,000 tokens per minute and 1,000 API requests per minute.
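
Because those limits apply per deployment, client code that bursts past them can expect HTTP 429 responses. The helper below is a hypothetical retry wrapper, not part of the Azure SDK or AI21's guidance; it backs off exponentially when the rate limit is hit and reuses the ChatCompletionsClient from the earlier sketch.

```python
import time

from azure.core.exceptions import HttpResponseError


def complete_with_backoff(client, messages, retries=5, base_delay=2.0, **kwargs):
    """Retry a chat completion when the per-deployment rate limit returns HTTP 429."""
    for attempt in range(retries):
        try:
            return client.complete(messages=messages, **kwargs)
        except HttpResponseError as err:
            # Back off only on rate-limit errors; surface everything else immediately.
            if err.status_code == 429 and attempt < retries - 1:
                time.sleep(base_delay * (2 ** attempt))
            else:
                raise
```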

 

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
