Intel Pushing Xeon for Deep Learning

Intel has released a new benchmark test for its Xeon Scalable processors to show why enterprises should be considering them for their deep learning projects.

The report states that when using AWS Sockeye Neural Machine Translation (NMT) with the Intel Math Kernel Library (MKL) and Apache MXNet, the company's Xeon Scalable processor delivers results four times faster than an unoptimized CPU baseline, putting it roughly on par with Nvidia's V100 GPU. Step-by-step instructions for duplicating the results are detailed in the report.

"These results demonstrate the gains of using Intel MKL with Intel Xeon processors. In addition, properly setting the environment variables gives additional performance and provides comparable performance to V100 (22.5 vs 23.2 sentences per second)," the benchmark report stated.

"In addition to these gains, additional optimizations are coming soon that we expect will further improve CPU performance."

The results follow a late 2017 academic report the company released on several universities' experiences using Intel Xeon Scalable processors for their deep learning training.

Nvidia is, of course, also pushing its offerings as the ultimate solution for deep learning, as are Qualcomm and AMD, along with startups like Cerebras, KnuEdge and Groq.

About the Author

Becky Nagel serves as vice president of AI for 1105 Media specializing in developing media, events and training for companies around AI and generative AI technology. She also regularly writes and reports on AI news, and is the founding editor of PureAI.com. She's the author of "ChatGPT Prompt 101 Guide for Business Users" and other popular AI resources with a real-world business perspective. She regularly speaks, writes and develops content around AI, generative AI and other business tech. Find her on X/Twitter @beckynagel.
