Google Uses Machine Learning To 'Nowcast' the Weather

Google researchers are exploring the potential of machine learning (ML) models for "nearly instantaneous" local weather predictions.

The Alphabet subsidiary shared its research in a blog post linked to a paper, "Machine Learning for Precipitation Nowcasting from Radar Images," in which it looks at how ML might be used to make highly localized predictions that apply to the immediate future.

Google's high-resolution "nowcasting" focuses on zero- to six-hour forecasts, generating predictions at 1 km resolution with a total latency of five to 10 minutes, explained Jason Hickey, a senior software engineer in Google Research, in the post. Even at this early stage of development, he said, these short-term, limited-range forecasts outperform traditional models.

Because of its virtually unlimited number of variables, weather forecasting is a real challenge for AI and ML systems. But Google's application of deep learning (DL) techniques shows exceptional promise, Hickey said.

"A significant advantage of machine learning is that inference is computationally cheap, given an already-trained model, allowing forecasts that are nearly instantaneous and in the native high resolution of the input data," he explained.

The Google system uses a data-driven "physics-free" approach, which means the neural network employed learns to approximate the atmospheric physics from the training examples alone, instead of incorporating a priori knowledge of how the atmosphere works. Radar images are collated and analyzed using convolutional neural networks (CNNs), which are the same layered networks used to recognize objects in images and to interpret natural speech.

"We treat weather prediction as an image-to-image translation problem," Hickey said, "and leverage the current state-of-the-art in image analysis."
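The building block of that image-to-image approach can be sketched with a single convolutional layer: a small kernel slides over the radar grid and produces an output grid of the same size, which is the operation a CNN stacks many times over. This is a minimal, pure-Python illustration; the kernel weights below are arbitrary placeholders, not learned values, and the real network architecture is described in the paper.

```python
# Minimal sketch: one convolutional layer mapping a radar "image" to an
# output grid of the same size -- the core operation a CNN repeats when
# treating nowcasting as image-to-image translation.
# The 3x3 kernel weights here are illustrative, not learned values.

def conv2d_same(image, kernel):
    """Apply a 3x3 kernel with zero padding, preserving the image size."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += image[ii][jj] * kernel[di + 1][dj + 1]
            out[i][j] = acc
    return out

# A toy 4x4 radar reflectivity frame (arbitrary units).
radar = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

# An averaging kernel: each output cell averages its 3x3 neighborhood.
blur = [[1 / 9] * 3 for _ in range(3)]

predicted = conv2d_same(radar, blur)  # same 4x4 shape as the input
```

In a trained network, many such layers with learned kernels transform a stack of recent radar frames into a predicted future frame, which is what makes inference cheap once training is done.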

Google researchers compared their results to three widely used models: the High Resolution Rapid Refresh (HRRR) numerical forecast from the National Oceanic and Atmospheric Administration (NOAA), which contains predictions for many different weather quantities; an optical flow (OF) algorithm, which attempts to track moving objects through a sequence of images; and the persistence model, in which each location is assumed to be raining in the future at the same rate it is raining now (the precipitation pattern doesn't change). According to the paper, Google's ML-powered rain forecaster outperformed all three.
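The persistence baseline described above is simple enough to state in a few lines: the forecast for every location, at any lead time, is just its current rain rate. A minimal sketch (the grid values are made up for illustration):

```python
# Minimal sketch of the persistence baseline: the forecast for every
# location is simply its current precipitation rate, unchanged.

def persistence_forecast(current_frame, lead_time_minutes):
    """Persistence: the future frame equals the present one at any lead time."""
    # lead_time_minutes is ignored by design -- that is the whole model.
    return [row[:] for row in current_frame]

# Toy 3x3 grid of rain rates (mm/h).
now = [
    [0.0, 2.5, 0.0],
    [1.0, 4.0, 1.0],
    [0.0, 2.5, 0.0],
]

forecast = persistence_forecast(now, lead_time_minutes=60)  # identical to `now`
```

Despite its crudeness, persistence is a standard baseline in nowcasting because precipitation patterns often change slowly over very short horizons, so beating it is a meaningful bar.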

"One of the advantages of the ML method is that predictions are effectively instantaneous," Hickey said, "meaning that our forecasts are based on fresh data, while HRRR is hindered by computational latency of 1-3 hours. This leads to better forecasts for computer vision methods for very short term forecasting. In contrast, the numerical model used in HRRR can make better long term predictions, in part because it uses a full 3D physical model -- cloud formation is harder to observe from 2D images, and so it is harder for ML methods to learn convective processes. It's possible that combining these two systems, our ML model for rapid forecasts and HRRR for long-term forecasts, could produce better results overall, an idea at the focus of our future work. We're also looking at applying ML directly to 3D observations. Regardless, immediate forecasting is a key tool for real-time planning, facilitating decisions and improving lives."
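The combination Hickey floats could take many forms; one simple way to picture it is a blend that trusts the ML nowcast at short lead times and shifts weight toward HRRR as the horizon grows. The linear crossover and the six-hour horizon below are illustrative assumptions, not Google's published scheme:

```python
# Hypothetical sketch of blending a fast ML nowcast with HRRR: weight
# the ML forecast at short lead times and HRRR at long ones. The linear
# crossover and 6-hour horizon are illustrative assumptions only.

def blend(ml_value, hrrr_value, lead_hours, horizon_hours=6.0):
    """Linearly shift weight from the ML forecast to HRRR over the horizon."""
    w_hrrr = min(max(lead_hours / horizon_hours, 0.0), 1.0)
    return (1.0 - w_hrrr) * ml_value + w_hrrr * hrrr_value

# At 0 hours the blend is pure ML; at 6+ hours it is pure HRRR.
short_term = blend(ml_value=3.0, hrrr_value=5.0, lead_hours=0.0)  # 3.0
midpoint = blend(ml_value=3.0, hrrr_value=5.0, lead_hours=3.0)    # 4.0
long_term = blend(ml_value=3.0, hrrr_value=5.0, lead_hours=6.0)   # 5.0
```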

Hickey provides a thorough explanation in his blog post, and the paper is also available for download. It was authored by Hickey along with Shreya Agrawal, Luke Barrington, Carla Bromberg, John Burge and Cenk Gazen.

About the Author

John K. Waters is the editor in chief of a number of sites, with a focus on high-end development, AI and future tech. He's been writing about the cutting-edge technologies and culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at