Qualcomm's New Chip Aims To Improve AI Inference Performance in the Datacenter
Earlier this month Qualcomm Technologies Inc. announced the upcoming Qualcomm Cloud AI 100, a new accelerator chip designed to speed up AI inference (as opposed to training) while using power more efficiently than competing solutions.
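For readers unfamiliar with the distinction: training adjusts a model's weights, while inference simply runs new data forward through weights that are already fixed. That fixed-weight forward pass is the workload accelerators like this one target. Below is a minimal, illustrative NumPy sketch (the weights are made-up placeholders, not anything specific to Qualcomm's hardware or SDK):

```python
import numpy as np

# Placeholder "trained" weights -- in a real deployment these would come
# from a model trained elsewhere, then loaded onto the accelerator.
W = np.array([[0.5, -0.2],
              [0.1,  0.4]])
b = np.array([0.1, -0.1])

def infer(x):
    """Single forward pass: a matrix multiply plus a ReLU activation.
    No gradients are computed and no weights change -- that is what
    distinguishes inference from training."""
    return np.maximum(x @ W + b, 0.0)

sample = np.array([1.0, 2.0])
print(infer(sample))  # prints [0.8 0.5]
```

Because inference is just repeated forward passes, chips can optimize for throughput and power rather than the heavier bookkeeping training requires.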
"With this introduction, Qualcomm Technologies facilitates distributed intelligence from the cloud to the client edge and all points in between," the company stated in its official announcement of the chip.
The chip is expected to be released later this year. When it is, Qualcomm said it will deliver "10x performance per watt over the industry's most advanced AI inference solutions deployed today," will be built on a 7nm process node, and will support TensorFlow, PyTorch, Glow and Keras, among other frameworks, all on silicon designed specifically for AI inference workloads.
"Our all new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today's data centers," commented Keith Kressin, Qualcomm senior vice president of product management, in a prepared statement. "Furthermore, Qualcomm Technologies is now well positioned to support complete cloud-to-edge AI solutions all connected with high-speed and low-latency 5G connectivity."
Qualcomm said it will release a "full stack of tools and frameworks" for developers to work with the accelerator.
Becky Nagel is the vice president of Web & Digital Strategy for 1105's Converge360 Group, where she oversees the front-end Web team and deals with all aspects of digital strategy. She also serves as executive editor of the group's media Web sites, and you'll even find her byline on PureAI.com, the group's newest site for enterprise developers working with AI. She recently gave a talk at a leading technical publishers conference about how changes in Web technology may impact publishers' bottom lines. Follow her on Twitter @beckynagel.