SAN FRANCISCO—Intel last week took a significant step in the artificial intelligence space with the planned acquisition of startup Nervana Systems, a move that will give it software designed for the crucial task of machine learning.
At the Intel Developer Forum (IDF) here Aug. 17, company officials said they will continue their efforts to make the chip maker’s x86 products the foundational silicon for the job of training the neural networks driving the development of artificial intelligence (AI).
Diane Bryant, executive vice president and general manager of Intel’s Data Center Group, said the company next year will release a new version of its many-core Xeon Phi processors—dubbed “Knights Mill”—which will be crucial in competing with GPUs from Nvidia in the machine-learning (ML) task of training. Intel also got a nod from Chinese internet giant Baidu, which will use the Knights Mill chips in its data centers as part of its “deep speech” platform for its speech recognition efforts.
The combination of Knights Mill—a derivative of the current Xeon Phi Knights Landing chip aimed at the AI space—and the presence of Baidu on the IDF stage is another boost for Intel as it looks to gain more traction in the rapidly growing AI and machine learning space, according to Patrick Moorhead, principal analyst with Moor Insights and Strategy. Bryant didn’t give much in the way of details about Knights Mill, but Moorhead said the focus on such capabilities as enhanced variable precision means the company is moving in the right direction. Other features for Knights Mill listed by Intel include improved efficiency, flexible and high-capacity memory, and the fact that it’s being built for scale-out environments.
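Intel has not detailed what "enhanced variable precision" will look like on Knights Mill, but in deep learning the term generally refers to doing arithmetic in lower-precision number formats (such as 16-bit floats) where the extra digits aren't needed, halving the memory and bandwidth each value consumes. A minimal, purely illustrative sketch of that trade-off using NumPy:

```python
# Illustrative only: the memory/accuracy trade-off behind reduced-precision
# arithmetic. This shows the general idea, not Knights Mill's actual formats,
# which Intel has not disclosed.
import numpy as np

full = np.float32(1 / 3)          # 32-bit float: ~7 decimal digits
half = np.float16(1 / 3)          # 16-bit float: ~3 decimal digits

print(full.nbytes, half.nbytes)   # 4 2 -- half precision uses half the bytes
print(float(full), float(half))   # the 16-bit value is visibly less accurate
```

Because neural-network training tolerates this kind of rounding error well, hardware that can trade precision for throughput can process more values per cycle on the same silicon.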
Having Baidu—part of the group of major hyperscale players that Intel has dubbed the “Super 7” and which also includes such vendors as Google, Facebook, Amazon and Alibaba—come out in support of the effort also was important, he said.
“It was the first time Intel has had someone from the Super 7 on stage and talking about AI and ML and Xeon Phi,” the analyst told eWEEK.
Intel is well-positioned to be the dominant force in the AI and machine learning space, Bryant said. In 2015, 7 percent of all servers were running data analytics workloads. By 2020, more than half will. In addition, 97 percent of all machine-learning tasks run on Intel silicon, she said. Current AI workloads include such jobs as image recognition and fraud detection, but the industry promises huge growth in the coming years.
She and other officials said those workloads run best on Intel CPUs. Nidhi Chappell, director of machine learning with Intel’s Analytics Platform Group, told eWEEK that the company with its Xeon and Xeon Phi server chips not only has the best silicon for deep learning—a subset of machine learning—but that engineers are ensuring that all the parts of the software stack—much of which is open-source—needed for AI are optimized to run on those chips.
In addition, Chappell noted that most machine-learning tasks boil down to linear-algebra workloads of the kind Intel silicon has been processing for decades.
“There’s nothing inherent about neural networks that lends themselves [to be best used by] GPUs,” she said.
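The point about linear workloads is that the core computation of a neural-network layer is dense matrix arithmetic, which any general-purpose processor can execute. A minimal sketch of a single fully connected layer (the names and sizes here are arbitrary, for illustration only):

```python
# Illustrative only: a fully connected neural-network layer reduces to a
# matrix multiply plus a bias -- classic linear algebra.
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 4, 8, 3

x = rng.standard_normal((batch, n_in))   # a batch of input vectors
W = rng.standard_normal((n_in, n_out))   # the layer's weights
b = np.zeros(n_out)                      # the layer's bias

y = x @ W + b                            # the layer's core computation
print(y.shape)                           # (4, 3)
```

GPUs execute such multiplies across thousands of small cores at once, which is why they dominate training today; Intel's argument is that nothing in the math itself requires a GPU.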
Intel Unveils Upcoming Xeon Phi Chip Aimed at AI Workloads
Intel’s Xeon Phi chips are also primary processors, meaning they can run on their own rather than serving only as co-processors to other CPUs. Nvidia’s GPU accelerators, by contrast, must be paired with host CPUs.
Machine learning is made up of training (where neural networks are taught such things as object identification) and inference (where they use this training to recognize and process unknown inputs). Neural networks used for training are large, and a lot of that training is done on Nvidia GPUs, which offer more processing cores than CPUs. The inference networks are smaller, and most of that work is done on CPUs from Intel.
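The training/inference split described above can be sketched in a few lines. This toy example, with a single weight and made-up numbers, fits the line y = 2x by gradient descent (training), then applies the learned weight to an unseen input (inference):

```python
# Illustrative sketch of training vs. inference. All names and numbers are
# invented for illustration; real networks have millions of weights.

def train(samples, epochs=200, lr=0.1):
    """Training: adjust weight w so that w * x approximates y."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            grad = 2 * (w * x - y) * x   # derivative of squared error w.r.t. w
            w -= lr * grad               # step against the gradient
    return w

def infer(w, x):
    """Inference: apply the learned weight to an unseen input."""
    return w * x

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])   # learns w close to 2
print(round(infer(w, 5.0), 2))                    # prints 10.0
```

Training dominates the arithmetic cost because it loops over the whole dataset many times and updates every weight each pass; inference is a single forward computation, which is why the smaller inference workloads already run mostly on CPUs.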
Bryant and Chappell argued that CPUs have advantages over GPUs in processing machine learning tasks, particularly as environments scale out. The stance was backed up by Slater Victoroff, CEO of text and image analytics startup Indico, who is opting for CPUs over GPUs for that reason.
In a post on Nvidia’s blog this week, company officials disputed recent benchmark numbers Intel released to back its arguments for CPUs, saying the chip maker used outdated and faulty data to reach its conclusions.
“It’s great that Intel is now working on deep learning,” they wrote in the blog post. “This is the most important computing revolution with the era of AI upon us and deep learning is too big to ignore. But they should get their facts straight.”
In a research note, Charles King, principal analyst with Pund-IT, wrote that Intel’s efforts in the nascent AI market—including the Nervana acquisition—show the company’s commitment to the space, which “represents a huge opportunity.”
“But Intel is anything but alone in pursuing it,” King wrote. “Other notable companies are in the AI hunt, including enterprise vendors like IBM, cloud players including Google and Amazon, and other silicon vendors, such as NVIDIA. To stay ahead of the curve, Intel is committing sizable financial and human capital to its commercial AI efforts.”
However, he warned that “competitors may believe they can overtake and overcome the company. But time and again, Intel has demonstrated that it has what it takes to go for and win the gold.”