Google's Gemini Now Available on Vertex AI Platform

        Fresh on the heels of Google unveiling Gemini, its "largest and most capable AI model," the company announced the advanced tech is now available on its Vertex AI platform.
Gemini 1.0 debuted last week in three sizes: Ultra, the largest, is for "highly complex tasks"; Pro is a scalable, general-purpose model; and Nano, the smallest, is designed to run on devices with limited memory. It's the middle offering, Pro, that is now available on Vertex AI, an "end-to-end AI platform" that the company says provides intuitive tooling, fully managed infrastructure, and built-in privacy and safety features.
Use cases for Gemini Pro listed by Google include the following (a brief code sketch of the summarization case appears after the list):
  - Summarization: Create a shorter version of a document that incorporates pertinent information from the original text. For example, summarize a chapter from a textbook or create a product description from a longer text.

  - Question answering: Provide answers to questions in text. For example, automate the creation of a Frequently Asked Questions (FAQ) document from knowledge base content.

  - Classification: Assign a label describing the provided text. For example, apply labels that describe whether a block of text is grammatically correct.

  - Sentiment analysis: This is a form of classification that identifies the sentiment of text. The sentiment is turned into a label that's applied to the text. For example, the sentiment of text might be polarities like positive or negative, or sentiments like anger or happiness.

  - Entity extraction: Extract a piece of information from text. For example, extract the name of a movie from the text of an article.

  - Content creation: Generate texts by specifying a set of requirements and background. For example, draft an email under a given context using a certain tone.
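
To make the summarization case concrete, a minimal sketch along the following lines should work with the Vertex AI Python SDK; the project ID, region, and document text are placeholders, and the preview module path reflects the SDK as of the Gemini Pro launch (later releases may expose the same classes under vertexai.generative_models):

    import vertexai
    from vertexai.preview.generative_models import GenerativeModel

    # Placeholder project and region; substitute your own Google Cloud settings.
    vertexai.init(project="your-project-id", location="us-central1")

    # Load the Gemini Pro model now offered on Vertex AI.
    model = GenerativeModel("gemini-pro")

    # Ask the model to condense a longer passage, per the first use case above.
    document = "..."  # the textbook chapter or product text to summarize
    response = model.generate_content(
        "Summarize the following text in three sentences:\n\n" + document
    )
    print(response.text)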
 
"Vertex AI makes it possible to customize and deploy Gemini, empowering developers to build new and differentiated applications that can process information across text, code, images, and video at this time," said Google exec Burak Gokturk today (Dec. 13).
He further said that developers can now:
  - Discover and use Gemini Pro, or select from a curated list of more than 130 models from Google, open source, and third parties that meet Google's strict enterprise safety and quality standards.

  - Customize model behavior with specific domain or company expertise, using tuning tools to augment training knowledge and even adjust model weights when required.

  - Augment models with tools to help adapt Gemini Pro to specific contexts or use cases.

  - Manage and scale models in production with purpose-built tools to help ensure that once applications are built, they can be easily deployed and maintained.

  - Build search and conversational agents in a low-code/no-code environment.

  - Deliver innovation responsibly by using Vertex AI's safety filters, content moderation APIs, and other responsible AI tooling to help developers ensure their models don't output inappropriate content (a brief code sketch of these safety settings follows the list).

  - Help protect data with Google Cloud's built-in data governance and privacy controls.
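
The safety tooling in that list also surfaces directly in the API. As a rough sketch, and assuming the preview SDK's HarmCategory and HarmBlockThreshold enums, callers can pass per-category blocking thresholds with each request; the categories and thresholds below are illustrative, not a recommended policy:

    import vertexai
    from vertexai.preview.generative_models import (
        GenerativeModel,
        HarmCategory,
        HarmBlockThreshold,
    )

    vertexai.init(project="your-project-id", location="us-central1")  # placeholders
    model = GenerativeModel("gemini-pro")

    # Illustrative settings: block responses the service scores as dangerous
    # or hateful at a "low" likelihood or above.
    safety_settings = {
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    }

    response = model.generate_content(
        "Draft a short FAQ entry about password resets.",
        safety_settings=safety_settings,
    )
    print(response.text)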
 
One specific use case he mentioned addresses the relatively recent rise of specialized AI constructs called agents: "With Gemini Pro, now developers can build 'agents' that can process and act on information."
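
Gokturk didn't spell out how such agents are wired together, but one plausible building block is the function-calling support Vertex AI exposes for Gemini Pro. The sketch below assumes the preview SDK's FunctionDeclaration and Tool classes, and the weather lookup is a purely hypothetical function used for illustration:

    import vertexai
    from vertexai.preview.generative_models import (
        GenerativeModel,
        FunctionDeclaration,
        Tool,
    )

    vertexai.init(project="your-project-id", location="us-central1")  # placeholders

    # Describe a hypothetical function the agent is allowed to call.
    get_weather = FunctionDeclaration(
        name="get_current_weather",
        description="Get the current weather for a city",
        parameters={
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    )

    model = GenerativeModel("gemini-pro")
    response = model.generate_content(
        "What's the weather like in Boston right now?",
        tools=[Tool(function_declarations=[get_weather])],
    )

    # Rather than plain text, the model can return a structured function call
    # that application code executes before feeding the result back to the model.
    print(response.candidates[0].content.parts[0].function_call)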
Like other cloud giants, Google is providing indemnity on generated model outputs, today extending that coverage to outputs from PaLM 2 and Vertex AI Imagen, in addition to an indemnity against claims related to the company's use of training data. How the indemnity will cover Gemini is a little murky: one sentence of the announcement said, "Indemnification coverage is planned for the Gemini API when it becomes generally available." The very next sentence, however, said: "The Gemini API is now available."
The link in that sentence takes you to the Vertex AI console, where the web-based Vertex AI Studio helps developers quickly create prompts to generate useful responses from the available large language models (LLMs).
While the middle-level Pro model is available, developers will have to wait a while for the top Ultra model.
  "We will be making Gemini Ultra available to select customers, developers, partners and safety and responsibility experts for early experimentation and feedback before rolling it out to developers and enterprise customers early next year," Gokturk said.
About the Author

David Ramel is an editor and writer at Converge 360.