Intel Unveils Upcoming Xeon Phi Chip Aimed at AI Workloads

The company's "Knights Mill" processor, due next year, is part of the chip maker's larger effort to expand its capabilities in AI and machine learning.


SAN FRANCISCO—Intel last week took a significant step in the artificial intelligence space with the planned acquisition of startup Nervana Systems, a move that will give it software designed for the crucial task of machine learning.

At the Intel Developer Forum (IDF) here Aug. 17, company officials said they will continue their efforts to make the chip maker's x86 products the foundational silicon for the job of training the neural networks driving the development of artificial intelligence (AI).

Diane Bryant, executive vice president and general manager of Intel's Data Center Group, said the company next year will release a new version of its many-core Xeon Phi processors—dubbed "Knights Mill"—which will be crucial in competing with GPUs from Nvidia in the machine-learning (ML) task of training. Intel also got a nod from Chinese internet giant Baidu, which will use Knights Mill chips in its data centers as part of the "Deep Speech" platform behind its speech recognition efforts.

The combination of Knights Mill—a derivative of the current Xeon Phi Knights Landing chip aimed at the AI space—and the presence of Baidu on the IDF stage is another boost for Intel as it looks to gain more traction in the rapidly growing AI and machine learning space, according to Patrick Moorhead, principal analyst with Moor Insights & Strategy. Bryant gave few details about Knights Mill, but Moorhead said the focus on such capabilities as enhanced variable precision means the company is moving in the right direction. Other Knights Mill features listed by Intel include improved efficiency, flexible high-capacity memory and a design built for scale-out environments.

Having Baidu—part of the group of major hyperscale players that Intel has dubbed the "Super 7," which also includes such vendors as Google, Facebook, Amazon and Alibaba—publicly back the effort also was important, he said.

"It was the first time Intel has had someone from the Super 7 on stage and talking about AI and ML and Xeon Phi," the analyst told eWEEK.

Intel is well-positioned to be the dominant force in the AI and machine learning space, Bryant said. In 2015, 7 percent of all servers were running data analytics workloads; by 2020, more than half will. In addition, 97 percent of all machine-learning tasks run on Intel silicon, she said. Current AI workloads include such jobs as image recognition and fraud detection, and the market promises huge growth in the coming years.

She and other officials said those workloads also run best on Intel CPUs. Nidhi Chappell, director of machine learning with Intel's Analytics Platform Group, told eWEEK that with its Xeon and Xeon Phi server chips the company not only has the best silicon for deep learning, a subset of machine learning, but that its engineers are ensuring the entire software stack needed for AI, much of it open-source, is optimized to run on those chips.

In addition, Chappell noted that most machine-learning tasks are linear-algebra workloads, which Intel silicon has been processing for decades.

"There's nothing inherent about neural networks that lends themselves [to be best used by] GPUs," she said.
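Chappell's point can be seen in a minimal sketch (the layer sizes here are hypothetical, chosen only for illustration): the forward pass of a fully connected neural-network layer reduces to a matrix multiply plus a bias add, exactly the kind of dense linear algebra that general-purpose CPUs have long handled.

```python
import numpy as np

# Hypothetical sizes: a batch of 32 input vectors of length 784
# fed through a dense layer with 128 output units.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 784))   # input batch
W = rng.standard_normal((784, 128))  # layer weights
b = np.zeros(128)                    # layer biases

# The core of the layer is one matrix multiply plus a bias add --
# plain dense linear algebra, not anything GPU-specific.
y = x @ W + b
print(y.shape)  # (32, 128)
```

Whether a CPU or a GPU runs this faster comes down to how well the hardware and its math libraries execute that matrix multiply, which is the crux of the Intel-versus-Nvidia argument the article describes.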