Firebase Dev Platform Simplifies Machine Learning Functionality in ML Kit

At the Google I/O 2019 conference, Google's Firebase team announced updates to its ML Kit, which provides machine learning functionality for Android and iOS mobile apps.

ML Kit is a beta offering that provides mobile developers with out-of-the-box APIs -- on-device or in the cloud -- for common mobile use cases such as recognizing text and landmarks, detecting faces, scanning barcodes, labeling images and more.
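To give a flavor of how these APIs are called, here is a minimal Kotlin sketch of on-device text recognition using the Firebase ML Kit Android SDK of that era; the bitmap source and the logging inside the listeners are placeholders.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Recognize text in a bitmap with the on-device text recognizer.
// "bitmap" is assumed to come from the camera or an image picker.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.text holds all recognized text; result.textBlocks
            // exposes per-block bounding boxes for finer-grained use.
            println(result.text)
        }
        .addOnFailureListener { e ->
            println("Text recognition failed: $e")
        }
}
```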

Launched last year, the kit this week received three more beta capabilities:

  • On-device Translation API
  • Object Detection & Tracking API
  • AutoML Vision Edge

"The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast, dynamic translation of text in your app into 58 languages," said Francis Ma, head of product on the Firebase team, in a blog post this week. As a beta release, this API might be changed in backward-incompatible ways, Google warned, and isn't covered by any service-level agreements (SLAs) or deprecation policies.

"On-device translation is intended for casual and simple translations, and the quality of translations depends on the specific languages being translated from and to," Google says on its site for the feature. "As such, you should evaluate the quality of the translations for your specific use case. If you require higher fidelity, try the Cloud Translation API."

One on-device advantage, though, is that translations are performed quickly, without requiring developers to send user text to a remote server.
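As a concrete illustration, here is a minimal Kotlin sketch of the on-device flow as exposed by the beta Firebase Natural Language SDK; the English-to-Spanish language pair is just an example, and downloadModelIfNeeded() fetches the translation model on first use.

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Translate English text to Spanish entirely on-device.
fun translateToSpanish(text: String) {
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.EN)
        .setTargetLanguage(FirebaseTranslateLanguage.ES)
        .build()
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // The language model is downloaded once, then cached locally.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> println(translated) }
                .addOnFailureListener { e -> println("Translation failed: $e") }
        }
        .addOnFailureListener { e -> println("Model download failed: $e") }
}
```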

Regarding the second feature, Ma said: "The Object Detection & Tracking API lets your app locate and track, in real-time, the most prominent object in a live camera feed."

Again, because it runs on-device, the API tracks objects quickly and can serve well as the front end of a longer visual search pipeline.

Developers have the option to classify detected objects into one of several general categories.

"Object detection and tracking with coarse classification is useful for building live visual search experiences," Google says on its site for the feature.

Finally, AutoML Vision Edge smooths the process of creating custom image classification models tailored to specific needs.

"For example, you may want your app to be able to identify different types of food, or distinguish between species of animals," Ma said. "Whatever your need, just upload your training data to the Firebase console and you can use Google’s AutoML technology to build a custom TensorFlow Lite model for you to run locally on your user's device. And if you find that collecting training datasets is hard, you can use our open source app which makes the process simpler and more collaborative."

The new features and functionality announced this week come on the heels of Natural Language Processing and other capabilities added to ML Kit, as detailed here last month.

About the Author

David Ramel is an editor and writer at Converge 360.
