Google Betas New Tools For Deploying Machine Learning Pipelines
Google on Wednesday announced the beta launch of Cloud AI Platform Pipelines, a new service that provides a way to deploy robust, repeatable machine learning (ML) pipelines, as well as monitoring, auditing, version tracking, and reproducibility features.
The service aims to deliver an enterprise-ready, easy-to-install, secure execution environment for machine learning workflows, the company said.
"When you're just prototyping a machine learning model in a notebook," Google product manager Anusha Ramesh and staff developer advocate Amy Unruh said in a blog post, "it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make an ML workflow sustainable and scalable, things become more complex.
A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis, to training, to evaluation, to deployment, and more. It's hard to compose and track these processes in an ad-hoc manner — for example, in a set of notebooks or scripts — and things like auditing and reproducibility become increasingly problematic."
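The tangle of dependent steps described above can be pictured as a small directed graph, which is exactly what a pipeline service manages for you. The sketch below (plain Python with hypothetical step names, not anything from the AI Platform Pipelines API) uses the standard library's `graphlib` to order an ML workflow so that each step runs only after its dependencies:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical ML workflow: each step maps to the set of steps it depends on.
workflow = {
    "data_prep": set(),
    "analysis": {"data_prep"},
    "training": {"data_prep"},
    "evaluation": {"training", "analysis"},
    "deployment": {"evaluation"},
}

# A pipeline orchestrator would execute the steps in a dependency-respecting
# order like this one, rather than relying on ad-hoc notebook cell ordering.
order = list(TopologicalSorter(workflow).static_order())
print(order)
```

Tracking this ordering by hand across a set of notebooks or scripts is exactly the ad-hoc process the quote warns about; a pipeline service records the graph explicitly so runs are auditable and reproducible.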
Google's new AI Platform Pipelines has two major components: an enterprise-ready infrastructure for deploying and running structured ML workflows integrated with GCP services, and a set of pipeline tools for building, debugging, and sharing pipelines and components.
AI Platform Pipelines runs on a Google Kubernetes Engine (GKE) cluster. A cluster is automatically created as part of the installation process, though users can opt to use an existing GKE cluster instead. The Cloud AI Platform UI lets users view and manage all clusters. A Pipelines installation can also be deleted from a cluster and then reinstalled, retaining the persisted state from the previous installation while updating the Pipelines version.
The beta launch of AI Platform Pipelines includes a number of new features, including support for template-based pipeline construction, versioning, and automatic artifact and lineage tracking. It's also easier for developers to get started with ML pipeline code: the TensorFlow Extended (TFX) SDK provides templates, or scaffolds, with step-by-step guidance on building a production ML pipeline for their own data. With a TFX template, developers can incrementally add different components to the pipeline and iterate on them.
TFX templates can be accessed from the AI Platform Pipelines "Getting Started" page in the Cloud Console. The TFX SDK currently provides a template for classification problem types and is optimized for TensorFlow. Google says there are more templates on the way for different use cases and problem types.
A TFX pipeline typically consists of multiple pre-made components for every step of the ML workflow. For example, developers can use ExampleGen for data ingestion, StatisticsGen to generate and visualize statistics of their data, ExampleValidator and SchemaGen to validate data, Transform for data preprocessing, Trainer to train a TensorFlow model, and so on. The AI Platform Pipelines UI lets developers visualize the state of various components in the pipeline, dataset statistics, and more.
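The chaining pattern those components follow can be illustrated schematically. The stand-in `Component` class below is hypothetical, purely to show the shape of the idiom, not the real TFX API (in real TFX code the analogous classes live in `tfx.components`): each step exposes named outputs, and downstream steps declare which upstream outputs they consume, which is how the orchestrator derives the execution graph.

```python
# Illustrative stand-in for TFX-style component chaining -- NOT the real
# TFX API. Each component exposes named outputs that downstream components
# take as inputs, forming an explicit dependency graph.

class Component:
    def __init__(self, name, **inputs):
        self.name = name
        self.inputs = inputs                      # upstream outputs consumed
        self.outputs = {"result": self}           # outputs offered downstream

example_gen = Component("ExampleGen")                        # data ingestion
statistics_gen = Component("StatisticsGen",
                           examples=example_gen.outputs)     # data statistics
schema_gen = Component("SchemaGen",
                       statistics=statistics_gen.outputs)    # infer a schema
trainer = Component("Trainer",
                    examples=example_gen.outputs,
                    schema=schema_gen.outputs)               # model training

# A pipeline is essentially the collection of wired-up components; the
# orchestrator derives run order from the input/output links, not list order.
pipeline = [example_gen, statistics_gen, schema_gen, trainer]
print([c.name for c in pipeline])
```

Because dependencies are declared as data rather than implied by script order, the UI can render the component graph and the lineage of every artifact it produced.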
The new service also supports pipeline versioning, allowing users to upload multiple versions of the same pipeline and group them in the UI, so that semantically related workflows can be managed together.
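One way to picture that versioning model is a registry that groups uploads of the same pipeline under a single name, with the UI listing the versions inside each group. The sketch below is a hypothetical data model for illustration, not the actual service API:

```python
from collections import defaultdict

# Hypothetical registry: pipeline name -> ordered list of uploaded versions.
registry = defaultdict(list)

def upload_pipeline(name, version):
    """Group semantically related uploads under one pipeline name."""
    registry[name].append(version)

upload_pipeline("churn-model", "v1")
upload_pipeline("churn-model", "v2")   # a new version of the same pipeline
upload_pipeline("fraud-detector", "v1")

# The UI would show two pipeline groups, with churn-model holding two versions.
print(dict(registry))
```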
A complete list of features is available on the Google Cloud blog page. A how-to guide and other documentation are also available online.
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at firstname.lastname@example.org.