Intel to Start Shipping Xeons With FPGAs in Early 2016

Server chips with the integrated accelerators initially will go to cloud-scale companies, an Intel executive says at the Structure Conference.

Intel in the first quarter of 2016 will begin shipping its first Xeon processors with integrated programmable chips, following through on a promise the company made last year.

According to reports, Diane Bryant, senior vice president and general manager of Intel's Data Center Group, said during the Structure Conference Nov. 18 that the server chips initially will ship to the largest cloud-scale companies, such as Amazon Web Services, Facebook, Microsoft, Google and Baidu. However, she declined to specify which companies will receive the chips.

Integrating field-programmable gate arrays (FPGAs) into its server chips is part of a larger effort by Intel to expand the accelerators it uses in its server processors and the workloads that can run on them. Intel in June announced it was buying FPGA maker Altera for $16.7 billion, bringing its one-time partner in-house. FPGAs offer considerable flexibility because they can be reprogrammed through software after they're manufactured, which is why they are becoming increasingly important accelerators for cloud and Web-scale environments, where workloads can change quickly.

Intel has predicted that FPGAs could be used in as much as 30 percent of data center servers by 2020. Company officials first said they were bringing FPGA capabilities to their processors last year, though they never said who they were partnering with.

"Combining Xeon with FPGAs gives Intel a more powerful and programmable chip that can plug into existing Xeon slots," Bryant said, according to a report in Fortune.

These select companies will be able to tune their algorithms to the new Xeon chips ahead of the processors' general availability, Bryant said.

For almost a decade, the high-performance computing (HPC) space has been using GPU accelerators from Nvidia and Advanced Micro Devices to improve system performance while keeping down power consumption. Intel offers its x86-based Xeon Phi coprocessors as accelerators to HPC organizations. More than 100 of the world's 500 fastest supercomputers use either GPU accelerators from Nvidia or AMD or Intel's Xeon Phis.

With the rise of such trends as cloud computing, big data and software-defined data centers, there is a growing demand for other accelerators, including FPGAs from such vendors as Altera and Xilinx. Xilinx has been partnering with a growing number of chip makers to bring its FPGAs to their platforms. Those partnerships include Qualcomm—which is developing ARM-based systems-on-a-chip (SoCs) for use in servers—and most recently IBM. At the SC15 supercomputing show this week, Xilinx and IBM announced that they are collaborating on an array of efforts focused on using Xilinx's FPGAs in Power systems to help speed up such workloads as big data analytics, machine learning, network-functions virtualization (NFV), HPC and genomics. The partnership will cover everything from infrastructure to middleware to software.

Other companies also are looking at FPGAs to help them accelerate their workloads. For example, Microsoft last year announced Project Catapult, an effort to use FPGAs in servers running Intel Xeon chips to speed up Bing search results.

Along with the FPGAs, Intel officials in May said the chip maker is partnering with eASIC to bring application-specific integrated circuits (ASICs) to custom Xeons that can be used in enterprise data centers and cloud environments for such workloads as data analytics and security. The various accelerators can be targeted at particular workloads.

Patrick Moorhead, principal analyst with Moor Insights and Strategy, told eWEEK at the time of Intel's eASIC announcement that ASICs work better than FPGAs for workloads such as video encoding and decoding or audio processing.

"You want to use an ASIC when the software never changes or when you are looking for the lowest cost or highest level of performance per die size," Moorhead said. "ASICs are used where standards rarely change if ever."

By contrast, FPGAs—which can be reprogrammed to meet workload demands—are better for environments where the software can change, he said.