Intel Shipping Xeon Phi 'Knights Landing' Processors
During the conference call, Wuischpard focused on the roles Xeon Phi and the company's Scalable System Framework can play in the emerging AI and machine learning spaces. Those are areas where Nvidia is investing heavily, developing such technologies as the new Tesla P100 GPU accelerator (the PCIe version for servers was announced at ISC 16) and the DGX-1, a P100-based system aimed at deep learning and AI that was announced at Nvidia's GPU Technology Conference in April.

Wuischpard noted that Intel also has been working on technologies for these emerging markets, but that the company until now has been "too quiet about it." The Xeon Phi processors are well-armed to tackle the work of training the neural networks needed for machine learning because they are faster and more scalable than GPUs, he said, and the company's Xeon processors are the most widely used chips for the less compute-intensive inference workloads in machine learning. For AI, Intel offers its Scalable System Framework, which includes compute, storage, memory, fabric and software.

The Xeon Phi chips are part of a larger effort by Intel to rapidly grow its data center capabilities. The company continues to innovate its Xeon family of server chips and is developing a range of other technologies for next-generation data centers, including its FPGAs, silicon photonics, Omni-Path interconnect architecture and Optane memory offerings.
The vendor's Data Center Group has become a key part of the company, with first-quarter revenue hitting $4 billion, a 9 percent jump over the same period in 2015.