Intel, Nvidia Trade Shots Over AI, Deep Learning

Intel and Nvidia downplay each other's efforts as both make deeper pushes into the emerging artificial intelligence market.


Artificial intelligence and machine learning are opening up another front in the sprawling competition between Intel and Nvidia that has stretched from the data center and high-performance computing environments to autonomous vehicles.

Both chip makers see the nascent artificial intelligence (AI) space—and the machine learning that helps enable it—as a key growth area. Each has made significant recent pushes into the market, and each sees the other as its primary competitor.

Now the two companies are looking to grab mindshare around AI, touting their own technologies while throwing shade at each other's. As Intel executives at the Intel Developer Forum (IDF) last week unveiled a range of moves the company is making to address the needs of the emerging market, Nvidia officials fired off a post on the vendor's blog questioning some of the benchmark numbers Intel was using to compare its many-core Xeon Phi processors to Nvidia's GPUs.

Ian Buck, vice president and general manager of Nvidia's accelerated computing unit, wrote in the blog post that Intel relied on questionable methods to pump up the benchmark numbers for Xeon Phi.

"While we can correct each of their wrong claims, we think deep learning testing against old Kepler GPUs and outdated software versions are mistakes that are easily fixed in order to keep the industry up to date," Buck wrote. "It's great that Intel is now working on deep learning. This is the most important computing revolution with the era of AI upon us and deep learning is too big to ignore. But they should get their facts straight."

In his own blog post this week, Jason Waxman, corporate vice president in Intel's Data Center Group and general manager of the company's Data Center Solutions Group, pushed back, pointing to what he called the company's strong position as the AI market grows and the worry that may cause competitors.

"However, arguing over publicly available performance benchmarks is a waste of time," Waxman wrote. "It's Intel's practice to base performance claims on the latest publicly available information at the time the claim is published, and we stand by our data."

The argument echoes similar ones the two companies have made in the past in such areas as high-performance computing (HPC), where Intel processors are the dominant CPU in the systems but Nvidia's GPUs are increasingly being used as accelerators to help boost the performance and power efficiency of the machines. Intel has responded with Xeon Phi, which initially could be used only as coprocessors for accelerating performance but, since the release last year of the 72-core Knights Landing chip, can now be used as the primary processor.

Intel has argued that running HPC workloads on its x86-based architecture—both its Xeon and Xeon Phi chips—makes sense, while Nvidia officials have said GPUs offer greater performance in parallel processing environments.

Some of the debate around AI is similar. Nvidia executives for the past several years have said that AI and machine learning—which aims to train neural networks to enable artificial intelligence so systems can learn from experience, much like a human brain does—are key technologies for the company's future. In April, Nvidia unveiled the Tesla P100, a massive chip based on Nvidia's 16-nanometer Pascal architecture that packs 150 billion transistors, as well as the DGX-1, a supercomputer for deep learning and AI that combines eight Tesla P100 GPUs with two Intel Xeon server chips to drive 170 teraflops of performance in a 3U (5.25-inch) form factor.
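The 170-teraflop figure for the DGX-1 lines up with a simple back-of-envelope calculation, assuming the commonly cited ~21.2 teraflops of half-precision (FP16) throughput for the NVLink variant of the Tesla P100 (a figure not stated in this article):

```python
# Sketch: sanity-check Nvidia's quoted DGX-1 throughput.
# Assumption: ~21.2 TFLOPS peak FP16 per Tesla P100 (NVLink version),
# a commonly cited spec; not taken from this article.
P100_FP16_TFLOPS = 21.2
GPUS_PER_DGX1 = 8

total_tflops = P100_FP16_TFLOPS * GPUS_PER_DGX1
print(f"~{total_tflops:.0f} teraflops FP16")  # 8 x 21.2 = 169.6, i.e. roughly 170
```

This also makes clear that the headline number refers to half-precision performance, the format deep learning training typically uses, rather than the double-precision flops quoted for traditional HPC systems.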

"Deep learning has the potential to revolutionize computing, improve our lives, improve the efficiency and intelligence of our business systems, and deliver advancements that will help humanity in profound ways," Nvidia's Buck wrote. "That's why we've been enhancing the design of our parallel processors and creating software and technologies to accelerate deep learning for many years. Our dedication to deep learning is deep and broad. Every framework has NVIDIA-optimized support, and every major deep learning researcher, laboratory and company is using NVIDIA GPUs."

In his own blog post, Intel's Waxman said Intel is "inherently well-positioned to support the machine learning revolution." Intel chips power more than 97 percent of servers used for running machine learning workloads, and "while there's been much talk about the value of GPUs for machine learning, the fact is that fewer than 3 percent of all servers deployed for machine learning last year used a GPU."

He also noted other efforts by Intel in the AI space, including the planned release of a Xeon Phi chip dubbed "Knights Mill" with enhanced variable precision and flexible high-capacity memory that is aimed at AI workloads, the commitment to open frameworks for machine learning—including Caffe and Theano—and the acquisition of Nervana Systems and its machine learning technologies.