Andrew Feldman, corporate vice president and general manager of Advanced Micro Devices’ Server Business Unit, likes to show a couple of photos to illustrate how rapid and widespread the adoption of mobile devices and cloud computing has been.
The first photo shows the crowd at the papal inauguration of Pope Benedict XVI in 2005. Except for the occasional cell phone here and there, there is essentially no evidence of a mobile device in the crowd. Fast forward to 2013 and the inauguration of Pope Francis, and practically every person is holding up a smartphone or tablet, the bright bluish-white glow of their screens lighting up the top of the crowd.
All those mobile devices are pulling down data and running apps that are housed in servers and accessed via the cloud, putting tremendous pressure on data center infrastructures. While the photos give a view of what’s happening on the client side, it’s what those trends toward mobility and cloud computing mean to the data center and to servers that is most interesting to Feldman.
To him, the photos put in sharp relief the changing nature of data centers: workloads are becoming more highly parallel as data centers support millions of users, a number that keeps growing. Demand for denser, highly efficient servers is also increasing, which puts pressure on chip makers such as AMD and Intel to change with them.
“This new environment is going to have new needs,” Feldman told eWEEK in an interview this spring, “and the same-old, same-old will not work anymore.”
Trends ranging from the cloud and mobility to big data and social media are forcing enterprise and service provider data centers to deal with growing amounts of data and changing workloads. Data center managers and designers are demanding new technologies to deal with the new realities where energy efficiency is as important as performance and server density is more crucial than server power.
At the same time, organizations are still dealing with tight IT budgets and smaller IT staffs that might not have the breadth of skills that they need, according to Christian Perry, a senior analyst with Technology Business Research (TBR).
“Now we see customers looking to technology to solve their problems,” Perry told eWEEK. “They’re looking for products that decrease complexity and increase simplicity.”
That is fueling the drive toward converged infrastructures, with tightly integrated compute, storage and networking products that are all managed by software, and toward software-defined data centers, which aim to make infrastructures more programmable, automated, flexible and cost-effective, he said.
The rapidly changing data center landscape is also supporting the rise of new chip-making competitors, particularly ARM and its growing number of partners. The competition is getting more intense because IT organizations want infrastructures that are dynamic rather than static and automated rather than manual, and that offer high levels of performance while driving down costs such as power and space.
“I don’t think I’ve ever seen more disruptive things going on than now,” Greg Scherer, vice president of server and storage strategy in Broadcom’s Infrastructure and Networking Group, told eWEEK, pointing not only to such trends as cloud and big data, but also to software-defined networking (SDN) and storage along with the migration from 10 Gigabit Ethernet networks to 40GbE and eventually 100GbE. “We’ve been anticipating it happening. … Data centers have typically been pretty stodgy places. With what we’re seeing now with cloud, data centers are anything but stodgy.”
Diane Bryant, senior vice president and general manager of Intel’s Data Center and Connected Systems Group, said during a recent two-day workshop with analysts and journalists that the chip maker is working hard to address the changes going on in the data center.
“We’re going through a fundamental transformation in the way that IT is used,” Bryant said. “Today, we look at IT as the service. IT is no longer supporting the business; rather, IT is the business. … Our goal is that all data center workloads, regardless of what they are, run best on Intel Architecture.”
Chip makers are taking a more application-centric approach. Traditionally, Intel and AMD rolled out general-purpose server processors, which OEMs like Hewlett-Packard, Dell and IBM built into their systems and sold to businesses, which in turn fit those general-purpose servers into their data centers.
With the rise of cloud computing, mobility and other trends, that’s changing. Web 2.0 companies like Facebook and Google run massive data centers with huge numbers of small servers processing an increasing number of small workloads. Microsoft officials recently announced that their data centers are running more than 1 million servers.
These organizations are demanding infrastructures that are dense, high-performing, flexible, on-demand and highly energy-efficient. But now, as illustrated by the Facebook-led Open Compute Project, they are willing to build their own systems if they can’t find what they want on the market.
Systems makers like Hewlett-Packard (with Project Moonshot) and Dell (with its Copper servers) are responding with initiatives related to building small, energy-efficient microservers that use systems-on-a-chip (SoCs) from multiple vendors—not only Intel and AMD but also ARM partners like Calxeda and Marvell Technology. Chip makers are looking to meet that demand, not only with new processors optimized for particular workloads but also with broader architectural approaches, new partnerships and custom-chip businesses that can tailor their silicon to fit specific customer needs.
Much of the change and competition is happening at the lower end of the server spectrum. There will always be RISC-based systems and servers powered by Intel Xeons to handle the high-end, heavy-duty business applications. Where the rapid changes are taking place—and where the competition is heating up—is in the space for smaller, denser and more energy-efficient systems.
During the two-day workshop in San Francisco, Intel executives laid out the company’s strategy for transforming the data center, with organizations migrating toward software-defined infrastructures. In these data centers, resources like compute, storage and networking will be pooled, and applications will automatically draw the resources they need to run and then return those resources back to the pools for other applications to use.
Intel officials touched on evolving rack architectures that eventually will offer pools of compute, storage and networking resources that applications can access as needed; silicon aimed not only at servers but also at storage and networking products; and an SoC methodology that integrates such features as I/O, security and memory onto the silicon. Those capabilities enable Intel to offer products that are better optimized for particular workloads, such as differentiating systems that run more compute-intensive applications from those that need more networking or memory capabilities.
Intel also laid out an aggressive road map that includes ramping up development of its low-power Atom platform for the nascent microserver market. The chip maker already offers its Atom S1200 “Centerton” SoC. Later this year, it will roll out the 22-nanometer “Avoton” chip, which will be based on the “Silvermont” architecture and will offer significant improvements in performance and energy efficiency. In 2014, Intel will introduce “Denverton,” a 14nm Atom SoC for highly efficient servers.
Next year, Intel also will introduce an SoC version of its “Broadwell” Xeon E3 chip, optimized for particular workloads in the Web hosting space.
It’s in this area that the competition will be fiercest. AMD in May unveiled its x86-based Opteron X-Series Kyoto chip for microservers, a part of the industry that company officials view as a key growth area. AMD is putting a lot of money and effort behind the microserver space, having acquired SeaMicro and its Freedom Fabric technology in February 2012, and it’s active in the open hardware movement. At the same time, AMD officials have said that they will start offering server chips based on ARM’s technology starting next year, when ARM’s 64-bit ARMv8 architecture hits the market.
The move highlights the company’s heterogeneous computing approach, offering customers both x86- and ARM-based server silicon. ARM has no real presence in the data center, but given the trends in the industry and the demand for low-cost, low-power architectures, AMD’s Feldman believes that ARM could account for as much as 20 percent of the overall server chip market by 2016.
“We’ve relaxed our religious commitment to x86, and we’ve embraced ARM,” Feldman told eWEEK, adding that given its history in the server chip market, strong OEM relationships and broad IP, AMD will become a dominant ARM partner. ARM’s presence in the data center will have a significant impact over the next few years, he said.
“I think we’re going to see unbelievable [stuff] with this,” Feldman said. “It’s going to be spectacular.”
TBR analyst Perry said AMD is making smart moves as it tries to get back onto firmer financial footing. Partnering with ARM and focusing on energy-efficient systems “is their best shot to be relevant again.”
The lower end, which focuses on dense, energy-efficient systems, is still just beginning to evolve, and that opens up opportunities for AMD and ARM in a server market long dominated by Intel, he said. There’s no guarantee Intel will be able to dominate the microserver space as it does other areas of the server market.
Richard Fichera, an analyst with Forrester Research, said Intel’s two-day workshop gave executives the chance to let the industry know what it plans to do in the space.
“Legacy IT processing is not an issue,” Fichera wrote in a July 23 post on the Forrester blog. “For all practical purposes, Intel owns this space. But the emerging worlds of cloud, big data and the Internet of things may have some surprises left as they develop. This event allows Intel to highlight its successes and lay out strategies for what will be the fastest-growing segments of the infrastructure business, and also ones where Intel may actually face competition from emerging ARM alternatives and an intensely focused AMD, which has put a lot of its muscle behind cloud, mobile and low-power semiconductors.” AMD has also managed to “snatch up a couple of highly visible CPU contracts,” Fichera noted.
ARM officials see the same low-power, high-performance capabilities that have made their SoC designs dominant in the mobile device space as good fits for the burgeoning microserver market, where energy efficiency is crucial. The ARMv8 architecture will bring crucial data center features to their designs, from 64-bit computing to greater support for virtualization to more memory. Officials also boast about their broad partnerships, not only with Calxeda and Marvell, but also with AMD and others like Samsung and Qualcomm.
Collaboration drives innovation, and that collaboration—with or among chip makers, software vendors and Linux distributors such as Red Hat and Canonical, the company behind Ubuntu—represents the core element of ARM’s opportunity in servers, Lakshmi Mandyam, director of ARM’s Server and Ecosystems unit, told eWEEK at HP’s Project Moonshot launch in April.
Mandyam also said the growing use of open-source technology in data centers, along with partnerships with open-source vendors, defuses Intel’s contention that familiar x86 tools and software—and x86 compatibility with common data center applications—give it a significant advantage over ARM in servers.
“Open source is the great equalizer,” she said. “I don’t think the gap [between ARM and Intel in server processor technology] is as much as you might think.”
TBR’s Perry isn’t so sure. Software compatibility could be an issue for ARM going forward. Some organizations have told Perry that they’re wary about bringing another architecture into the data center. However, given that microservers are only a relatively small part of the larger server space and are aimed at specific workloads, that might not become as big a problem, he said.
“We’re still in the wait-and-see phase,” Perry said. “Nobody’s counting out ARM.”