The Work-from-Home World Gets a New Cloud-AI Video Streaming Platform from NVIDIA

The new stay-at-home reality has required some serious enterprise and educational adaptation, which simply wouldn't have been possible without streaming video technology. NVIDIA, inventor of the graphics processing unit (GPU), stepped into this space earlier this month with the early access release of Maxine, a new artificial intelligence (AI)-driven, cloud-native streaming video platform.

Designed to make it possible for service providers to bring new AI-powered capabilities to the more than 30 million web meetings industry watchers estimate take place every day, Maxine gives developers a cloud-based suite of GPU-accelerated AI conferencing software for enhancing streaming video.

Because the data is processed in the cloud rather than on local devices, end users can access new features--things like gaze correction, super-resolution, noise cancellation, and face relighting--without any specialized hardware.

"Video conferencing is now a part of everyday life," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA, in a statement, "helping millions of people work, learn, and play--and even see the doctor. NVIDIA Maxine integrates our most advanced video, audio, and conversational AI capabilities to bring breakthrough efficiency and new capabilities to the platforms that are keeping us all connected."

In a nutshell, the Maxine platform was designed to dramatically reduce the amount of bandwidth required for video calls. Instead of streaming the entire screen of pixels, the AI software analyzes the key facial points of each person on a call, and then intelligently re-animates the face in the video on the other side, the company says. This makes it possible to stream video with far less data flowing back and forth across the internet.
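To make that idea concrete, here is a minimal, purely illustrative Python sketch--not NVIDIA's code--that contrasts the payload of one raw 720p frame with the payload of a small packet of facial keypoints, the kind of data a keypoint-driven codec would send instead. The frame size, landmark count, and numeric precision are all assumptions, and the receiver-side "reanimation" step is only a placeholder for the generative model a real system would run.

import numpy as np

# Illustrative only -- not NVIDIA Maxine's implementation.
# Compares the size of a raw 720p RGB frame with the size of a per-frame
# packet of facial keypoints, the data a keypoint-driven codec would transmit.

FRAME_W, FRAME_H, CHANNELS = 1280, 720, 3   # assumed 720p RGB frame
NUM_KEYPOINTS = 68                          # assumed facial landmark count
BYTES_PER_COORD = 2                         # assume float16 coordinates

def raw_frame_bytes() -> int:
    """Uncompressed size of one video frame."""
    return FRAME_W * FRAME_H * CHANNELS

def keypoint_packet_bytes() -> int:
    """Size of one frame's worth of (x, y) facial keypoints."""
    return NUM_KEYPOINTS * 2 * BYTES_PER_COORD

def reanimate_face(reference_frame: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """Placeholder for the receiver-side generative model that would warp a
    previously transmitted reference frame to match the incoming keypoints."""
    # A real system would run a trained generator here; this stub simply
    # returns the reference frame unchanged.
    return reference_frame

if __name__ == "__main__":
    reference = np.zeros((FRAME_H, FRAME_W, CHANNELS), dtype=np.uint8)
    keypoints = np.zeros((NUM_KEYPOINTS, 2), dtype=np.float16)
    print(f"raw frame:       {raw_frame_bytes():>9,} bytes")
    print(f"keypoint packet: {keypoint_packet_bytes():>9,} bytes")
    reanimate_face(reference, keypoints)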

Using this new AI-based video compression technology running on NVIDIA GPUs, developers can reduce video bandwidth consumption to one-tenth of what the H.264 streaming video compression standard requires, the company says. This reduction of bandwidth has the potential to cut costs for providers and deliver a smoother video conferencing experience for end users, who can access more AI-powered services while streaming less data on their computers, tablets, and phones.
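As rough, back-of-the-envelope arithmetic (the 1.5 Mbps baseline for a 720p H.264 call below is an assumed figure for illustration, not one supplied by NVIDIA), a tenth of that bandwidth works out to about 150 kbps, or roughly 68 MB per hour of video instead of about 675 MB:

H264_BITRATE_MBPS = 1.5                    # assumed baseline for a 720p H.264 call
ai_bitrate_mbps = H264_BITRATE_MBPS / 10   # the claimed one-tenth reduction

def megabytes_per_hour(mbps: float) -> float:
    """Convert a bitrate in Mbit/s to data volume in MB per hour."""
    return mbps / 8 * 3600

print(f"H.264 call:     {megabytes_per_hour(H264_BITRATE_MBPS):.0f} MB/hour")
print(f"keypoint codec: {megabytes_per_hour(ai_bitrate_mbps):.0f} MB/hour")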

Maxine takes advantage of AI microservices running in Kubernetes container clusters on NVIDIA GPUs to help developers scale their services according to real-time demands. Users can run multiple AI features simultaneously, while remaining within application latency requirements. And the modular design of the Maxine platform enables developers to easily select AI capabilities to integrate into their video conferencing solutions.
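The snippet below is a hypothetical illustration of that modular, pick-what-you-need design; the feature names and the composition helper are invented for the example and are not part of the Maxine SDK.

from typing import Callable, List
import numpy as np

Frame = np.ndarray
Feature = Callable[[Frame], Frame]

def super_resolution(frame: Frame) -> Frame:
    # Stand-in: a real microservice would upscale with a trained model;
    # here each pixel is simply duplicated to double the resolution.
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

def face_relighting(frame: Frame) -> Frame:
    # Stand-in: brighten the frame slightly to suggest relighting.
    return np.clip(frame.astype(np.int16) + 20, 0, 255).astype(np.uint8)

def build_pipeline(features: List[Feature]) -> Feature:
    """Compose only the AI features a developer opts into."""
    def run(frame: Frame) -> Frame:
        for feature in features:
            frame = feature(frame)
        return frame
    return run

pipeline = build_pipeline([super_resolution, face_relighting])
result = pipeline(np.zeros((360, 640, 3), dtype=np.uint8))
print(result.shape)  # (720, 1280, 3)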

The Maxine platform integrates technology from several NVIDIA AI SDKs and APIs. In addition to NVIDIA Jarvis, the company's conversational AI SDK, Maxine leverages the NVIDIA DeepStream high-throughput audio and video streaming SDK and the NVIDIA TensorRT SDK for high-performance deep learning inference.

The Santa Clara, CA-based NVIDIA has been making a serious push into the AI/ML space in recent months. The company announced plans last month to acquire chip designer Arm Holdings to enhance its AI computing platform.

Computer vision AI developers, software partners, startups, and computer manufacturers creating audio and video apps and services can apply for early access to the NVIDIA Maxine platform.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.