Intel executives in July 2013 announced that the giant chip maker planned to release a system-on-a-chip version of its upcoming 14-nanometer Xeon “Broadwell” server processor, which will include such integrated features as fabric, I/O and accelerators.
Intel already offers systems-on-a-chip (SoCs) in its low-power Atom platform for mobile devices, a market dominated by ARM-based SoCs from a variety of chip manufacturing partners, including Qualcomm and Samsung. At the same time, Intel is already on the second generation of its Atom SoCs for the low-end, highly dense microserver space, the C2000 “Avoton.” The company will follow up next year with the 14nm “Denverton” chip.
However, the Broadwell offering will be the first time Intel has brought the SoC design to mainstream data center systems, including servers, storage devices and networking hardware.
To Patrick Moorhead, it’s a clear indication of the influence ARM has had on chip and server designs since ARM officials announced more than four years ago their intent to take the company’s low-power designs into the data center, challenging Intel’s dominance in the space.
“What ARM has brought to the table is the SoC,” Moorhead, principal analyst with Moor Insights and Strategy, told eWEEK, pointing to Intel’s upcoming Broadwell SoC. “That’s one of the things I don’t think would have happened if [ARM] hadn’t been there.”
ARM’s push into the server space probably accelerated Intel’s road map, according to Jeff Underhill, director of server programs at ARM.
“The incumbents may have ultimately come down this [SoC] road eventually, but they probably came down faster than they would have” because of ARM, Underhill told eWEEK.
ARM executives for several years have been vocal about their intention to bring the company’s low-power architecture—which is found in the bulk of SoCs running smartphones and tablets—into data centers, which are undergoing a significant transition due to such trends as mobility, cloud computing and big data.
Much of the talk around ARM has focused on microservers, highly dense systems that are designed to process the massive numbers of small workloads for cloud service providers and Web 2.0 companies like Google, Facebook and Amazon, where power efficiency is as important as performance. However, some ARM chip manufacturers, including Applied Micro and Cavium, are touting their 64-bit ARM chips as competitors to Intel’s mainstream Xeon server processors. In addition, as demonstrated at the recent International Supercomputing Conference in Germany, there is also a push to get ARM’s architecture into the high-performance computing (HPC) space.
However, despite the bravado from ARM and its manufacturing partners, there are few, if any, actual ARM-based servers on the market. ARM only relatively recently got its new 64-bit ARMv8-A architecture to partners, and most systems are still in the demonstration phase. Commercial systems aren’t expected to start hitting the market until later in the year, and it won’t be until at least 2015 that they begin to make a dent in the space.
“The impact of ARM in the server market is undetermined right now,” Tom Bradicich, vice president for engineering for servers at Hewlett-Packard, told eWEEK. “The enterprise capabilities of ARM are unknown.”
That said, Bradicich and other vendors and analysts agreed that ARM’s entrance into the market is having a ripple effect, from driving such chip design features as SoCs and fabrics to fueling new system designs from the likes of HP and Dell. ARM also is looking to answer the growing demand from end users for greater performance and power efficiency in their systems.
“That combination is an extremely potent combination for the server market,” Bradicich said.
ARM Server Chips Forcing Intel to Defend Market It’s Long Dominated
SoCs integrate a range of features—from memory and storage to graphics and networking—onto the same piece of silicon as the CPU, driving both performance and energy efficiency. They’re common in mobile devices like smartphones and tablets and in embedded systems. With the growth of the cloud and applications such as big data and mobility, enterprise demand for greater performance and lower power is growing, and ARM officials see the company’s architecture as a good fit.
So do some system vendors. HP is building out its portfolio of Moonshot systems—essentially small server modules built for large, highly dense data centers and aimed at cloud and Web 2.0 workloads. The first Moonshot servers in the market were based on Intel’s Atom platform, but the OEM also is working with Applied Micro on a version that will enable 180 physical cores to be packed into a 4.3U (about 7.5-inch) box, Bradicich said.
Dell currently has three proof-of-concept efforts under way around its ARM-based Copper and Zinc microservers. With Copper, Dell is able to put four quad-core Applied Micro SoCs in a 1U (1.75-inch) blade and 12 of those in a chassis, with 48 nodes in a 3U (5.25-inch) system, according to Robert Hormuth, senior distinguished engineer and executive director of platform architecture and technology in the Office of the CTO at Dell. Zinc is “uber-dense,” with 288 nodes in a 3U to 4U box, Hormuth told eWEEK.
SoC features can vary from chip maker to chip maker, which can impact the direction of the server, he said. “You end up optimizing the server design based on the attributes of the SoC you’ve chosen,” Hormuth said.
There’s also a challenge with SoCs, he said. On the one hand, they bring greater performance and lower power consumption; on the other, they leave organizations with less choice in the technologies that accompany the CPU. With traditional processors, end users have greater flexibility in choosing such technologies as networking and storage, but when those features are integrated onto the SoC by the chip maker, many of those choices are closed to the user. That trade-off comes with any SoC platform.
“You may be looking to go to a different architecture, but even if you go to an x86 SoC, you still have to give up some choice,” he said.
A key attribute of SoCs—and one that has also been influenced by ARM’s data center push—is the use of interconnect fabrics to improve communications within and between systems. In a traditional environment, networking includes top-of-rack switches, end-of-rack switches and core switches, with data moving between them and the servers. Integrated interconnect fabrics enable communication between the chips and systems without a core switch or top-of-rack switch, with data flowing in a more east-west direction, according to Moor analyst Moorhead.
ARM chip makers Applied Micro, Calxeda (which shut down in December 2013) and Advanced Micro Devices—which also has a long history of competing with Intel in the x86 chip space—have been key innovators in the design of fabrics, which are key to ensuring communication in highly scalable environments, improving performance and saving money. ARM’s Underhill pointed to AMD as an example of the move toward fabrics, noting that it is integrating Ethernet networking into its ARM-based Opteron A1100 Series “Seattle” server chip, and that it also inherited the Freedom Fabric when it bought microserver vendor SeaMicro in February 2012.
“SeaMicro was really a pioneer in showing what a fabric can do in stitching together [all the chips] in a box,” he said.
Moorhead said the push in fabrics by the ARM partners has helped accelerate Intel’s innovation in the area. Intel in recent years has bolstered its capabilities around interconnect technologies through the acquisition of Fulcrum Microsystems and of technologies from Cray and QLogic. Earlier this month, Intel unveiled Omni Scale, an open fabric technology that will be introduced in the upcoming 14nm Xeon Phi “Knights Landing” chips for the HPC space.
“When you have 95 percent of the [server chip] market, you are going to respond to threats,” Moorhead said.
ARM’s business model also has helped the chip designer and its partners respond to the growing demand for processors and systems that can be optimized for particular workloads, and is a key differentiator from Intel. Whereas Intel sees its x86-based processors being flexible enough to handle all workloads—from embedded and mobile systems to the largest servers and supercomputers—ARM designs low-power SoCs, and then licenses those designs to chip makers, who add their own technologies before selling the chips to systems makers.
The result is a wide range of ARM-based processors that vary in their features and can be chosen based on workload needs. ARM’s Underhill pointed to Texas Instruments as an example, noting the chip maker’s strong heritage in digital signal processors (DSPs), while AMD, Applied Micro, Broadcom and Cavium all have strengths in networking and storage. Field-programmable gate arrays (FPGAs) also can be added into that mix, according to officials with HP and Dell.
Organizations will have a choice of SoCs that have different assets for different workloads, but are all based on the basic ARM architecture. HP’s Bradicich likened it to cars, which have the same basic infrastructure, though with a wide range of features.
“There is overlap [between products], but the issue is not the overlap,” he said. “The issue is where they don’t overlap, and that justifies their existence.”
Other chip vendors are trying to leverage business models similar to ARM’s to expand the reach of their architectures. IBM last year launched the OpenPower Consortium, with Big Blue licensing its Power processors to other companies so they can build their own servers, networking systems and storage appliances based on IBM’s architecture. Imagination Technologies in May announced Prpl, a similar effort for the MIPS architecture.
Through all this, Intel hasn’t been standing still. The company has been aggressive in expanding the reach of its processors, including in the data center. As noted, Intel has made moves to develop its fabric technology and later this year will release its third-generation Atom chip for microservers. It also is addressing business demand for greater choice and workload optimization, offering a wider variety of its server chips with varying numbers of cores, frequencies and accelerators to address a range of server workloads as well as networking and storage tasks. For example, when the Xeon E5-2600 v2 was launched in September 2013, the portfolio offered 21 different products.
In addition, the company is growing its custom chip businesses, and earlier this month, Diane Bryant, senior vice president and general manager of Intel’s Data Center and Connected Systems Group, announced the chip maker will integrate FPGAs—which enable end users to program chips for particular jobs, then reprogram them for others—into the same package as Xeon processors.
The move to integrate FPGAs made sense for Intel as it looks at ARM and its partners entering the server market, according to Dell’s Hormuth, who added that the giant chip maker has done a good job adjusting its product road map to meet the changing demands from businesses for improved power efficiency and solutions optimized for particular workloads.
“It’s clear they were responding to market dynamics” with the FPGA integration, he said. “Intel is trying to respond to 10 to 12 competitors.”
It’s also part of Intel’s larger message that CEO Brian Krzanich spelled out during the company’s financial earnings call in April: “If it computes, it runs best on the Intel Architecture.”
For the time being, that’s what most server workloads will run on, at least until systems start hitting the market with ARM-based SoCs. As HP’s Bradicich pointed out, exactly how the systems will run in data centers and how organizations will embrace them remains to be seen. In addition, there already has been some shakeout in the ARM server arena—including Calxeda’s shutdown late last year, and reports that Samsung and Nvidia are pulling back on plans to offer ARM-based server chips—but Nvidia announced this month that its GPU accelerators will support 64-bit ARM chips in the HPC space.
However, a number of others, including AMD, Applied Micro, Cavium and Broadcom, are continuing to push the ARM architecture for the data center. Applied Micro and Cavium are using custom CPUs in their SoCs—X-Gene for Applied Micro, ThunderX for Cavium—to challenge Intel and its Xeon chips in mainstream servers. In addition, while Seattle is based on an off-the-shelf ARM Cortex-A57 CPU core, AMD announced that it has licensed the architecture to build its own CPU core in the future.
In January, ARM released its Server Base System Architecture specification, giving OEMs a framework for building systems powered by its SoCs.
In addition, ARM and its partners are building up the software ecosystem around the architecture, with the help of organizations such as the Linaro consortium, which is working to bring open-source software to the ARM platform.
And, in the end, there is interest among systems makers and organizations in an alternative to Intel and x86 in the data center, according to Roy Kim, marketing manager for Nvidia’s Tesla Group.
“The data center is really primed up for 64-bit ARM,” Kim told eWEEK. “Because there is choice and an open platform, a lot more innovation is going to happen in the data center.”