Intel to Integrate ASIC into Xeon Server Chips for Cloud Workloads

The partnership with eASIC is part of a larger effort by Intel to build custom processors to speed up cloud and Web-scale applications.


Intel is bringing new acceleration capabilities to some of its custom server chips that it will sell to cloud service providers to speed up such workloads as big data analytics and security.

Intel is partnering with eASIC to bring application-specific integrated circuits (ASICs) to custom Xeons that can be used in enterprise data centers and cloud environments. The move is part of a larger push by the giant chip maker to expand its custom chip efforts by leveraging accelerators—such as field-programmable gate arrays (FPGAs) and now ASICs—for particular workloads.

"Having the ability to highly customize our solutions for a given workload will not only make the specific application run faster, but also help accelerate the growth of exciting new applications like visual search," Diane Bryant, senior vice president and general manager of Intel's Data Center Group, said in a statement.

Intel officials have said the shift in IT and business toward the cloud and software-defined infrastructure (SDI) is increasing demand for customized chips optimized for particular applications, making those workloads run even faster. According to the company, the use of ASIC technology in custom Xeon chips will help cloud providers accelerate particular workloads by up to two times over FPGAs, and speed time to market by as much as 50 percent.

Patrick Moorhead, principal analyst with Moor Insights and Strategy, said there are particular workloads—such as video encoding and decoding, or audio processing—where ASICs will have an advantage over FPGAs.

"You want to use an ASIC when the software never changes or when you are looking for the lowest cost or highest level of performance per die size," Moorhead said in an email sent to eWEEK. "ASICs are used where standards rarely change if ever."

By contrast, FPGAs—which can be reprogrammed to meet workload demands—are better for environments where the software can change, he wrote.

"You want flexibility and you would need to reprogram the FPGA," Moorhead wrote, adding that you can switch from an FPGA to an ASIC "to get a smaller, more cost-effective solution, but it's not reprogrammable. If time-to-market is what you are looking for, FPGA is the way to go. … You would add an ASIC to a Xeon when the algorithm or workload doesn’t change a lot."

Workload optimization is becoming increasingly important in a world where cloud providers and Web-scale companies—such as Facebook, Google, Amazon and Microsoft—run huge data centers that house massive numbers of servers running many small workloads. They want systems that run those jobs at optimum performance and efficiency levels.

One way Intel is meeting the demand is by offering a wider range of options within its product families. For example, when the chip vendor launched its high-end Xeon E7 v3 processors May 5, the product portfolio included 12 CPUs in four segments, sorted by such variables as core counts, power envelopes and pricing. In addition, some of the new chips were optimized for particular workloads, including databases, low-power applications and high-performance computing (HPC).

In addition, Bryant and other Intel executives have talked about the importance of offering custom chips to some customers to address particular workloads. The company in 2013 built 15 custom processors designed to meet the needs of particular customers, such as eBay and Facebook, and more than twice that many products were planned for 2014, officials said.

The integration of FPGAs with Xeon chips is one way to meet that demand, and Intel reportedly is looking to boost that capability by buying chip maker Altera.

The addition of eASIC's technology will not only increase performance over FPGAs for some workloads but also offer lower power consumption, Intel officials said.