ARM Server Chips Forcing Intel to Defend Market It's Long Dominated
SoCs integrate a range of features—from memory and storage to graphics and networking—onto the same piece of silicon as the CPU, driving both performance and energy efficiency. They're common in mobile devices like smartphones and tablets, as well as in embedded systems. With the growth of the cloud and applications such as big data and mobility, enterprise demand for greater performance and lower power consumption is growing, and ARM officials see the company's architecture as a good fit.

So do some system vendors. HP is building out its portfolio of Moonshot systems—essentially small server modules built for large, highly dense data centers and aimed at cloud and Web 2.0 workloads. The first Moonshot servers in the market were based on Intel's Atom platform, but the OEM also is working with Applied Micro on a version that will enable 180 physical cores to be packed into a 4.3U (about 7.5-inch) box, Bradicich said.

Dell currently has three proof-of-concept efforts under way with its ARM-based Copper and Zinc microservers. With Copper, Dell is able to put four quad-core Applied Micro SoCs in a 1U (1.75-inch) blade and 12 of those blades in a chassis, for 48 nodes in a 3U (5.25-inch) system, according to Robert Hormuth, senior distinguished engineer and executive director of platform architecture and technology in the Office of the CTO at Dell. Zinc is "uber-dense," with 288 nodes in a 3U to 4U box, Hormuth told eWEEK.

SoC features can vary from chip maker to chip maker, which can shape the direction of the server design, he said. "You end up optimizing the server design based on the attributes of the SoC you've chosen," Hormuth said. "You may be looking to go to a different architecture, but even if you go to an x86 SoC, you still have to give up some choice," he said.

A key attribute of SoCs—and one that has also been influenced by ARM's data center push—is the use of integrated interconnect fabrics, which improve communications within systems and between systems.
In a traditional environment, networking includes top-of-rack switches, end-of-rack switches and core switches, with data moving between them and the servers. Integrated interconnect fabrics enable communication between the chips and systems without a core switch or top-of-rack switch, with data flowing in a more east-west direction, according to Moor Insights & Strategy's Moorhead.

ARM chip makers Applied Micro, Calxeda (which shut down in December 2013) and Advanced Micro Devices—which also has a long history of competing with Intel in the x86 chip space—have been key innovators in the design of fabrics, which are essential for ensuring communication in highly scalable environments, improving performance and saving money. ARM's Underhill pointed to AMD as an example of the move toward fabrics, noting that it is integrating Ethernet networking into its ARM-based Opteron A1100 Series "Seattle" server chip, and that it also inherited the Freedom Fabric when it bought microserver vendor SeaMicro in February 2012. "SeaMicro was really a pioneer in showing what a fabric can do in stitching together [all the chips] in a box," he said.
There's also a challenge with SoCs, he said. On the one hand, there are the benefits of greater performance and lower power consumption. On the other, that comes with less choice for organizations in the technologies that go along with the CPU. With traditional processors, end users have greater flexibility in choosing such technologies as networking and storage. When all those features are integrated onto the SoC by the chip maker, however, many of those choices are closed to the user. And that trade-off comes with any SoC platform, regardless of architecture.