
Google Unveils 'Gemma' Open Models Built from Gemini Research

Google this week took the wraps off Gemma, a new family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. The company is billing Gemma as "a new generation of open models designed to assist developers and researchers in building AI responsibly."

Developed by Google DeepMind and other teams across Google, Gemma is now available worldwide. The company is releasing model weights in two sizes: Gemma 2B and Gemma 7B. Each size is released with pre-trained and instruction-tuned variants.

This release is accompanied by a new Responsible Generative AI Toolkit, developed to provide guidance and essential tools for creating safer AI applications with Gemma. Google is also providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks, including JAX, PyTorch, and TensorFlow through native Keras 3.0. The release also includes ready-to-use Colab and Kaggle notebooks, along with integrations with popular tools such as Hugging Face, MaxText, NVIDIA NeMo, and TensorRT-LLM, making it easy to get started with Gemma.
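For developers who already work with Hugging Face tooling, getting a first response out of Gemma looks much like any other Transformers workflow. The snippet below is an illustrative sketch rather than Google's official quickstart: it assumes the 2B pre-trained weights are published on the Hugging Face Hub under the id "google/gemma-2b" and that you have accepted the model's terms of use; see ai.google.dev/gemma for the authoritative guides.

```python
# Minimal inference sketch with Hugging Face Transformers.
# Assumption: the Gemma 2B pre-trained weights are hosted on the Hub as
# "google/gemma-2b" and your account has accepted the license terms.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed Hub id for the 2B pre-trained variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("Explain what an open model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```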

The pre-trained and instruction-tuned Gemma models are designed to run on laptops, workstations, or Google Cloud, with easy deployment on Vertex AI and Google Kubernetes Engine (GKE). Optimization across multiple AI hardware platforms, including NVIDIA GPUs and Google Cloud TPUs, ensures industry-leading performance, the company says.

Gemma models share technical and infrastructure components with Gemini, Google's largest AI model. This enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models, the company says. Gemma surpasses significantly larger models on key benchmarks while adhering to Google's rigorous standards for safe and responsible outputs, according to the company, which points to a technical report for details on performance, dataset composition, and modeling methodologies.

The Gemma models were designed in accordance with Google's AI Principles, the company said. "As part of making Gemma pre-trained models safe and reliable, we used automated techniques to filter out certain personal information and other sensitive data from training sets," the company said in a statement. "Additionally, we used extensive fine-tuning and reinforcement learning from human feedback (RLHF) to align our instruction-tuned models with responsible behaviors. To understand and reduce the risk profile for Gemma models, we conducted robust evaluations including manual red-teaming, automated adversarial testing, and assessments of model capabilities for dangerous activities...."

Google emphasized that Gemma was built for the open community of developers and researchers powering AI innovation, and it invites interested developers to learn more and access quickstart guides at ai.google.dev/gemma.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.