AWS Updates Bedrock with Library Expansion and Model Comparison Feature

Launched just last fall, Amazon Bedrock, the AI development platform stewarded by cloud giant Amazon Web Services (AWS), currently has a user base in the "tens of thousands."

Bedrock is a serverless platform that gives developers access to generative AI models via an API so they can build AI-enabled applications with relatively little overhead. Swami Sivasubramanian, AWS vice president of data and machine learning, gave an update on Bedrock's growth in a blog post Tuesday and described updates to the service designed to improve its usability.
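As a rough illustration of that API access, the sketch below builds a request body for a Bedrock text model and parses the response. The model ID, payload fields and response field are assumptions based on the Llama-style schema documented by AWS; check the Bedrock model catalog for the exact shapes. The actual network call requires AWS credentials and model access, so it is shown commented out.

```python
import json

# Hypothetical model ID for illustration; verify against the Bedrock catalog.
MODEL_ID = "meta.llama3-8b-instruct-v1:0"

def build_request(prompt: str, max_gen_len: int = 256) -> str:
    """Serialize a Llama-style request body for bedrock-runtime invoke_model.

    The field names here ("prompt", "max_gen_len", "temperature") are the
    assumed Llama request schema, not a universal Bedrock format.
    """
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": 0.5,
    })

def parse_response(body: bytes) -> str:
    """Extract generated text from a Llama-style response body."""
    return json.loads(body)["generation"]

# The live call (needs AWS credentials and granted model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# resp = client.invoke_model(modelId=MODEL_ID, body=build_request("Hello"))
# print(parse_response(resp["body"].read()))
```

Each model family on Bedrock uses its own request/response schema, which is why the serialization helpers are model-specific even though the `invoke_model` entry point is shared.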

More Models
Bedrock's library of AI models includes those from AI21, Stability AI, Cohere, Meta, Anthropic, Mistral and Amazon itself. On Tuesday, AWS announced that Meta's latest models, the Llama 3 8B and 70B, are now available on Bedrock.

The Llama 3 models "are designed for building, experimenting, and responsibly scaling generative AI applications," Sivasubramanian wrote. "Llama 3 8B excels in text summarization, classification, sentiment analysis, and translation, ideal for limited resources and edge devices. Llama 3 70B shines in content creation, conversational AI, language understanding, R&D, enterprises, accurate summarization, nuanced classification/sentiment analysis, language modeling, dialogue systems, code generation, and instruction following."

Also generally available on Bedrock is AWS' own Titan Image Generator model. The Titan Image Generator lets users create images using natural language prompts, edit existing images, specify image dimensions and more. Images created by Titan come with "tamper-resistant" invisible watermarks to indicate that they are AI-generated.

Meanwhile, version 2 of AWS' Titan Text Embeddings model, which converts text into vectors (or numerical representations), is coming to Bedrock soon. This LLM is "optimized for RAG workflows," said Sivasubramanian, and "prioritizes cost reduction while retaining 97% of the accuracy for RAG [retrieval-augmented generation] use cases, out-performing other leading models." At general availability, Titan Text Embeddings V2 will come in three vector sizes: 256, 512 and 1,024 dimensions.
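To make the embeddings workflow concrete, here is a minimal sketch of how a Titan V2 embedding request might be built and how the resulting vectors could be compared in a RAG pipeline. The request field names ("inputText", "dimensions", "normalize") are assumptions based on AWS' published Titan schema and should be verified against the current documentation.

```python
import json
import math

def build_embedding_request(text: str, dimensions: int = 512) -> str:
    """Serialize a Titan Text Embeddings V2-style request body.

    The three dimension options mirror the sizes announced for GA;
    the field names are an assumption to verify against AWS docs.
    """
    assert dimensions in (256, 512, 1024)
    return json.dumps({
        "inputText": text,
        "dimensions": dimensions,
        "normalize": True,
    })

def cosine_similarity(a, b):
    """Compare two embedding vectors, as a RAG retriever typically would."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Smaller vector sizes trade a little retrieval accuracy for lower storage and compute cost, which is the trade-off behind offering three sizes.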

Finally, Bedrock will soon add support for two models from Cohere: Command R and the recently released Command R+. These models are "optimized for long-context tasks like [RAG] with citations to mitigate hallucinations, multi-step tool use for automating complex business tasks, and support for 10 languages for global operations."

Help Choosing Models
A newly available Model Evaluation feature in Bedrock lets users compare models to help them choose which one best suits their purposes.

Model Evaluation lets developers "select candidate models to assess -- public options, imported custom models, or fine-tuned versions," Sivasubramanian said. "They define relevant test tasks, datasets, and evaluation metrics, such as accuracy, latency, cost projections, and qualitative factors."

A separate blog post by AWS evangelist Jeff Barr provides a detailed overview of Model Evaluation.

Custom Model Support
AWS is previewing a new capability that would let users import their own pre-customized models into Bedrock, where they can then access them like any other Bedrock model.

This feature, called Custom Model Import, extends Bedrock's native security, scalability, fine-tuning and development features to organizations' own models that they've already been building elsewhere -- including, for example, in AWS' own SageMaker machine learning platform.

"Starting today, Amazon Bedrock adds in preview the capability to import custom weights for supported model architectures (such as Meta Llama 2, Llama 3, and Mistral) and serve the custom model using On-Demand mode," AWS explained in this detailed blog post. "You can import models with weights in Hugging Face safetensors format from Amazon SageMaker and Amazon Simple Storage Service (Amazon S3)."

AWS has also made improvements to Bedrock Agents, which it describes as "fully managed capabilities that make it easier for developers to create generative AI-based applications that can complete complex tasks for a wide range of use cases and deliver up-to-date answers based on proprietary knowledge sources."

Agents are now available for two of Anthropic's Claude 3 models: Sonnet and Haiku. Agents are also easier to create and use, according to this AWS blog, which described improvements like "quick agent creation," "simplified configuration" and "return of control."

Stronger AI 'Guardrails'
Finally, AWS is helping developers create more controls around AI misuse and harmful content with the general availability of Guardrails for Bedrock.

"Guardrails for Amazon Bedrock is the only responsible AI capability offered by a major cloud provider that enables customers to build and customize safety and privacy protections for their generative AI applications in a single solution," said AWS senior solutions architect Esra Kayabali in a separate blog post.

Guardrails work with the models that are already included in Bedrock, as well as with models that users have independently fine-tuned. While many models have native safeguards against misuse, Guardrails can filter out as much as 85 percent more harmful content than these tools alone, according to Kayabali.

Guardrails lets developers set filters to block the use of specific words in their applications, or to filter harmful content by category. The latter option covers six categories, per Sivasubramanian: "hate, insults, sexual, violence, misconduct (including criminal activity), and prompt attack (jailbreak and prompt injection)."
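Those two filter types can be sketched as a guardrail configuration. The dict below mirrors the general shape of the Bedrock `create_guardrail` request; the exact key names and strength values are assumptions to check against the API reference, and the blocked word is a placeholder. The live call is commented out because it requires AWS credentials.

```python
# Hedged sketch: a guardrail config combining a word blocklist with the
# six content-filter categories named in the article. Key names follow
# the assumed create_guardrail request shape; verify before use.
GUARDRAIL_CONFIG = {
    "name": "demo-guardrail",
    "blockedInputMessaging": "Sorry, I can't help with that.",
    "blockedOutputsMessaging": "Sorry, I can't help with that.",
    "wordPolicyConfig": {
        # Block specific words (placeholder entry).
        "wordsConfig": [{"text": "example-blocked-word"}],
    },
    "contentPolicyConfig": {
        # One filter per category, applied to both inputs and outputs.
        "filtersConfig": [
            {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
            for t in ("HATE", "INSULTS", "SEXUAL", "VIOLENCE",
                      "MISCONDUCT", "PROMPT_ATTACK")
        ],
    },
}

# The live call (needs AWS credentials and appropriate permissions):
# import boto3
# boto3.client("bedrock").create_guardrail(**GUARDRAIL_CONFIG)
```

Defining the guardrail once and attaching it at inference time is what lets the same policy apply to built-in and fine-tuned models alike.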

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.