
Numenta Boosts DL Network Performance Using Brain-Derived Algorithms

Numenta, a Redwood City, CA-based company focused on reverse-engineering the neocortex and enabling machine intelligence technology based on cortical theory, says its efforts have yielded dramatic performance improvements on inference tasks in deep learning (DL) networks without loss of accuracy.

Applying a principle of the brain called "sparsity" in a proof-of-concept demonstration, Numenta achieved significant acceleration and power efficiencies for a variety of DL platforms and network configurations "while maintaining competitive accuracy," the company said in a statement.

Specifically, the company compared "sparse" and "dense" networks by running its algorithms on Xilinx FPGA (field-programmable gate array) chips for a speech recognition task using the Google Speech Commands (GSC) dataset. Measured by the number of words processed per second, sparse networks delivered more than 50x acceleration over dense networks on a Xilinx Alveo board. Numenta also demonstrated the GSC network running on a Xilinx Zynq chip, "a smaller chip where dense networks are too large to run," which the company said enables a new set of applications that rely on low-cost, low-power solutions. "Using the metric of 'number of words per second per watt,' we show that the sparse networks use significantly less power than the most efficient dense network," the company said.
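Numenta's published research describes sparsity along two dimensions: most of a layer's weights are zero (sparse connectivity), and only a small fraction of its units are active at once (sparse activations). The NumPy sketch below is a minimal illustration of those two ideas, not Numenta's implementation; the 5% weight density, the roughly 10% activation density enforced by the illustrative k_winners helper, and the layer sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_winners(x, k):
    """Activation sparsity: keep the k largest activations, zero the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]          # indices of the k largest values
    out[top] = x[top]
    return out

# Dense layer: 256 inputs -> 128 units, every weight participates.
w_dense = rng.standard_normal((128, 256))

# Sparse layer: same shape, but ~95% of the weights are fixed at zero.
mask = rng.random((128, 256)) < 0.05  # keep ~5% of the connections
w_sparse = w_dense * mask

x = rng.standard_normal(256)

dense_out = w_dense @ x                     # 128 * 256 multiply-accumulates
sparse_out = k_winners(w_sparse @ x, k=13)  # ~10% of the 128 units stay active

print("nonzero weights:", np.count_nonzero(w_dense), "dense vs.",
      np.count_nonzero(w_sparse), "sparse")
print("active units after k-winners:", np.count_nonzero(sparse_out))
```

A dense matrix multiply pays for every one of those zeros; the speedups reported above presumably come from skipping zero weights and inactive units entirely, the kind of irregular computation an FPGA can be configured to exploit.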

"Sparsity is foundational to how the brain works and offers the key to unlocking tremendous performance improvements in machine learning today," said Subutai Ahmad, Numenta's VP of Research and Engineering. "Going forward, Numenta's neuroscience research has generated a roadmap for building machine intelligence which will yield equally exciting improvements in robustness, continual learning, unsupervised learning and sensorimotor integration."

This proof-of-concept demonstration showed what can be achieved with sparsity, the company claims. The acceleration and power efficiencies it appears to enable for a range of DL platforms and network configurations could make it possible to implement larger and more complex networks with the same resources, run more copies of a network on the same hardware, deploy DL networks on edge platforms where resource constraints rule out dense networks, and achieve large energy savings and lower costs through scaling efficiencies, the company said.

Numenta co-founder Jeff Hawkins, who founded Palm Computing and Handspring and invented the PalmPilot and the Treo, is behind the company's theory of cortical function, called the "Thousand Brains Theory of Intelligence," which Numenta believes will be fundamental to advancing the state of artificial intelligence and machine learning. "By applying this theory to existing deep learning systems, we are addressing today's bottlenecks while enabling tomorrow's applications," the company said.

Jeff Hawkins' new book, A Thousand Brains: A New Theory of Intelligence, is set for publication in March 2021.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
