When Verizon began laying the groundwork several years ago for its new public cloud, company IT executives set several goals for the environment. They wanted consistent performance, high security and availability, and they didn’t want customers to have to modify their applications in order to run them in the cloud.
They wanted few moving parts and no special hardware.
“We wanted very few actual hardware components,” Paul Curtis, chief architect of cloud computing at Verizon, said during Advanced Micro Devices’ Developer Summit 2013 in November. “My boss said, ‘I only have five fingers in my hand. Don’t make me use them all.’”
There was to be “no honking router outside of this. … No special firewall, no special anything. It just doesn’t scale.”
What Verizon quickly settled on was SeaMicro, a small company that at the time was making small, highly energy-efficient microservers designed for very dense data center environments and linked via the company’s Freedom Fabric. SeaMicro has since become part of AMD, which bought the company in February 2012 for $334 million.
The Verizon Cloud platform, which will compete with the likes of Amazon Web Services and Rackspace, is now in paid public beta, with the expectation of rapid expansion as 2014 unfolds. It also is a microcosm of some of the drivers that are fueling the changes in server and data center architecture, giving rise to new offerings from established system and component makers and new designs from smaller vendors.
These changes in the data center also are roiling the competitive waters, with longtime partners suddenly becoming competitors and dominant architectures seeing threats from new sources. Enterprise data centers and service providers are making new demands on their system vendors, and those vendors are working hard to meet those demands. Organizations are looking for smaller sizes, more energy efficiency, easier manageability and lower costs. They want systems that can run the growing range of new applications, from big data to video to analytics.
This has created a fundamental shift in what is driving server design. In the past, server makers would put out new systems with the newest chips and then give those systems to customers to let them decide the best way to use them. Now it’s the customer that is in the driver’s seat, according to Andrew Feldman, corporate vice president and general manager of AMD’s Server Business Unit.
“OEMs have [lost] a lot of the power” over how systems are designed, Feldman, former CEO of SeaMicro, told eWEEK. “We’re now seeing radical new designs in servers.”
Drivers Are In the Numbers
It’s really a story about numbers. It’s about the skyrocketing numbers of people and devices that will continue to connect to the Internet over the next few years, and the massive amounts of data they will generate. (Cisco Systems forecasts that by 2017, there will be 3.6 billion Internet users and more than 19 billion network connections, including machine-to-machine (M2M) connections. By 2020, it is expected that there will be 50 billion devices connected to the Internet.)
It’s about the number of organizations that are increasingly moving parts of their businesses to the cloud and the number of new applications—like big data and analytics—that enterprises are implementing. (IDC analysts expect global spending on public IT cloud services to hit $107 billion in 2017.)
Mobile Devices, Cloud, Applications Drive Server Design Diversity
And it’s the growing number of major cloud service providers—like Facebook, Google and Amazon—with large, hyperscale data center environments that are aggressively looking for new technologies that will help them run and expand their operations while saving money. And if they can’t find those products on the market, they’re increasingly willing to develop them themselves.
Facebook several years ago began designing its own energy-efficient servers and launched the Open Compute Project to help other organizations looking for highly efficient data center hardware. Google was among the first to adopt software-defined networking and reportedly is considering designing its own server chips with the help of ARM. Google currently is Intel’s fifth-largest customer.
“All of these organizations have been doing things differently than classic enterprise data centers,” Jeffrey Hewitt, research vice president at Gartner, told eWEEK, noting the specific apps they run and their unique approaches to data center implementations. “When you’re doing that much [computing], you’re not looking to do things like they’ve been doing them for years.”
“That combination—devices, users, applications—is sort of accelerating that change” in server design, AMD’s Feldman said. “If you add in the Internet of Things … more compute is coming.”
Data Center Modernization
Data center modernization has become a key priority for many enterprises that are wrestling with new workloads, security concerns, power issues and cost worries. According to a recent survey by QuinStreet Enterprise (which publishes eWEEK, among other tech news sites) and Palmer Research, 88 percent of respondents are investing in their data centers.
Twenty-seven percent said they had completed upgrading their facilities, while another 61 percent said they were making it a priority. Server virtualization, energy-efficient hardware and converged infrastructures were among the key data center technologies they are deploying, and 74 percent have deployed or are considering deploying a cloud delivery model.
Flash-based servers also are an emerging technology under consideration, according to the study, “2014 Data Center Outlook: Data Center Transformation—Where Is Your Enterprise?”.
“Energy-efficient hardware provides a more cost-effective way for enterprises to meet their ever-growing power and cooling needs and is seeing increased adoption,” the study authors said. “Converged infrastructure, cloud delivery and Big Data analytics are hot topics right now with heavy consideration and deployments planned for within the year. How enterprises prioritize and believe the advantages will benefit them will determine the time frame for rollout.”
The QuinStreet Enterprise survey dovetails with a similar one released Dec. 3 by TheInfoPro that found that many enterprises had bought much of the hardware they needed to upgrade their data centers—in particular, x86 servers for hosting large numbers of virtual machines—and that IT administrators’ attention was turning to software.
However, there is increasing interest in such hardware as solid-state disks (SSDs), both inside servers and as direct-attached storage, and in converged infrastructures, which offer tightly integrated compute, storage, networking and management software. Cisco has its Unified Computing System (UCS) and VCE its Vblocks, while HP, Dell and IBM offer similar products. In TheInfoPro report, 49 percent of respondents said they are currently using such systems, while another 26 percent said they expect to consider these technologies in the next two years.
“They’re gaining more and more traction in the marketplace,” Peter ffoulkes, research director for servers and virtualization at TheInfoPro, told eWEEK.
Such systems can reduce complexity in the data center and save organizations time and money, freeing them from the task of having to integrate the various components by themselves.
Currently, enterprises can spend as much as 70 percent of their IT dollars on the operation and maintenance of their data centers, according to Jim Ganthier, vice president of marketing, operations and general manager of mainstream business for HP’s Servers Group. Converged offerings can reduce those costs, enabling businesses to spend more on innovative products, Ganthier told eWEEK.
HP also is rolling out converged systems that feed into the growing demand for more workload-optimized systems. At its HP Discover event in December, the company unveiled a converged system for Vertica aimed at big data environments, and two more for virtualized environments. As businesses tackle new applications such as big data, cloud, analytics and security, the need for systems that are designed specifically for those jobs grows.
That demand is being felt not just at the system level, but also down at the component level. Intel is releasing as many as a dozen or more versions of chips aimed at different workload needs, whether that is compute, memory, storage or something else. Both Intel and AMD are expanding their ability to custom-build chips to meet end-user needs.
The same thing is happening with servers. Cisco in November introduced accelerator packs to make it easier for organizations to run the OpenStack cloud platform on the UCS. Oracle has leveraged the hardware it inherited through its acquisition of Sun Microsystems to create systems optimized to run its database and cloud offerings. Growing numbers of vendors—from Cisco to Intel—are talking about application-centric infrastructures, where applications dictate what the hardware is and does.
AMD’s Feldman said companies increasingly are thinking in terms of workloads. Facebook divides its workloads into four categories on which it bases its server purchases. Some financial institutions have as many as a dozen or more categories, he said.
“You divide the world up into different types of work, then buy the machine that’s better situated for running that type of work,” Feldman said. “The same logic will lead Facebook, Google [and] Amazon to ask for specialized things from their processors.”
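The workload-categorization approach Feldman describes can be sketched in a few lines. This is purely illustrative—the category names and hardware profiles below are hypothetical, not Facebook's actual scheme:

```python
# Illustrative sketch of workload-based server selection, as Feldman
# describes it: divide work into categories, then buy the machine
# best suited to each. All categories and profiles are hypothetical.

WORKLOAD_PROFILES = {
    "web_frontend": {"cpu_cores": 8,  "ram_gb": 32,  "storage": "small SSD"},
    "database":     {"cpu_cores": 16, "ram_gb": 256, "storage": "large SSD"},
    "batch_analytics": {"cpu_cores": 12, "ram_gb": 64, "storage": "bulk HDD"},
    "cold_storage": {"cpu_cores": 4,  "ram_gb": 16,  "storage": "dense HDD"},
}

def server_for(workload: str) -> dict:
    """Return the hardware profile suited to a workload category."""
    try:
        return WORKLOAD_PROFILES[workload]
    except KeyError:
        raise ValueError(f"no profile defined for workload {workload!r}")

print(server_for("database")["ram_gb"])  # 256
```

A financial institution with a dozen categories, as Feldman mentions, would simply carry a dozen entries in such a table—the point is that procurement follows the workload taxonomy, not the other way around.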
Cloud service providers are becoming particularly important players in server design, given their specific needs—energy efficiency, space and workload optimization—and the huge numbers of servers they buy every year. For example, over the past few years, Google has been aggressively searching for ways to address power and space issues; to use software-defined networking (SDN) technology; and to leverage such technologies as Hadoop and Apache Cassandra. And what Google and others in the cloud do will impact other businesses.
“Google, Yahoo, Facebook, Amazon, eBay are leading indicators of what will happen with larger enterprises,” Feldman said.
In 2011, HP unveiled Project Moonshot, an initiative to create ultra-low-power, ultra-dense server modules for the growing hyperscale data center environments. The Moonshot systems encapsulate many of the changes in server designs being driven by cloud computing, mobility and other trends. The server modules are small, powerful and highly energy efficient; they’re optimized for particular workloads; and they are not tied to any single architecture—the first ones are based on Intel’s low-power Atom platform, but systems running on AMD chips and ARM-based systems-on-a-chip (SoCs) are planned for 2014.
HP officials call Moonshot cartridges “software-defined servers.” They are 89 percent more energy efficient than traditional servers, take up 80 percent less space and cost 77 percent less. They share management, power, cooling, networking and storage, which the company says makes their innovation cycle three times faster.
The first Intel-based Moonshot systems—including the ProLiant m300 server cartridge—are designed for Web applications, leveraging the high performance-per-watt features of Intel’s Atom C2750 SoC. Meanwhile, the new m700 cartridge is targeted at hosted desktop environments, taking advantage of the integrated graphics acceleration in AMD’s Opteron X2150 accelerated processing unit (APU).
HP is far from the only server maker to offer energy-efficient, dense architectures. SeaMicro has been at it for several years, and Dell is building its Copper and Zinc microservers, which are powered by ARM-based chips from Calxeda and Applied Micro. The demand for power efficiency and density also is giving rise to smaller vendors—including Servergy and Boston Ltd.—that are rolling out low-power systems.
Microservers are what Verizon opted for with its new public cloud infrastructure. Verizon’s Curtis said that when he and other officials began looking for systems, SeaMicro’s SM15000s made the most sense. The systems had what Verizon was looking for, and there were few alternatives that could meet the wireless carrier’s demands. The only other choices were traditional 1U (1.75-inch) systems, Curtis said.
“We focused on SeaMicro pretty quick from the start,” he said. “The density was very attractive. The low power was attractive.”
The SM15000 microservers pack up to 512 cores into a 10U (17.5-inch) system, with up to 2,048 cores per rack and 4 terabytes of memory per system. Each CPU socket offers 10Gb/s of bandwidth, and the systems support up to 5 petabytes of storage.
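The density figures above are easy to sanity-check, assuming a standard 42U rack (an assumption; the article doesn't state rack height):

```python
# Sanity check on the SM15000 density figures cited in the article.
# Assumes a standard 42U rack; the article doesn't specify rack height.

CORES_PER_SYSTEM = 512   # per 10U SM15000 chassis
SYSTEM_HEIGHT_U = 10
RACK_HEIGHT_U = 42       # assumed standard rack

systems_per_rack = RACK_HEIGHT_U // SYSTEM_HEIGHT_U   # 4 chassis fit
cores_per_rack = systems_per_rack * CORES_PER_SYSTEM

print(systems_per_rack, cores_per_rack)  # 4 2048
```

Four 10U chassis per rack yields the 2,048 cores the article cites, with 2U left over for switching or power distribution.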
“I wanted to be able to do things in my cloud I couldn’t do in the data center,” Curtis said, noting not only the microservers, but the ability to leverage SSDs and other technologies. He also touted SeaMicro’s Freedom Fabric, which helps link the hundreds of chips in the microserver. Fabrics will become increasingly important as server vendors look to leverage increasingly larger numbers of chips and cores in their systems.
It’s In the Chips
Intel’s dominance in server chips is being challenged in several areas, though it remains to be seen how those challenges will pan out. Low-power systems running on upcoming 64-bit ARM chip designs are expected to begin coming to market later in 2014 from a wide range of system makers, including HP, Dell and Boston Ltd.
AMD will begin making 64-bit ARM chips next year, a key part of its ambidextrous computing approach to offer whatever platform is needed, whether it’s x86 or ARM. Feldman said the growth in the use of smartphones and tablets is fueling the demand for greater computing in the data center along with the need for greater energy efficiency and density in the systems. Power and space are becoming increasingly important, and “what this means is that one-size-fits-all is dead,” a reference to Intel’s insistence that the x86 architecture can be used for all compute needs.
Intel officials are taking the ARM server threat seriously. The company responded earlier this year by launching the second generation of its low-power Atom chips for microservers, even before the first of the systems powered by 64-bit ARM chips reached the market. Intel executives also argue that the Intel architecture offers organizations familiar software and development tools—as well as a large software partner ecosystem—sparing them from having to adopt unfamiliar software.
ARM will have its challenges going forward. During the recent Dell World 2013 show, several Dell executives said there is potential for ARM to succeed in the data center, but the chip designer will need to build the ecosystem around its architecture for it to gain real traction.
However, ARM officials have pointed to the growth of open-source technologies in data centers, the company’s strong partnerships and the wide support for ARM in the open-source community.
“Open source is the great equalizer,” Lakshmi Mandyam, director of server systems and ecosystem for ARM, told eWEEK in April when HP unveiled a new Moonshot system. “I don’t think the gap [between ARM and Intel in server processor technology] is as much as you might think.”
The ARM community also will need to rebound from the recent collapse of Calxeda, a leading voice for ARM in the data center. Calxeda executives said the company’s failure had more to do with timing—rolling out products before the industry was ready for them—than with the idea of ARM SoCs in servers. Patrick Moorhead, principal analyst with Moor Insights and Strategy, said it also had to do with how much change enterprises are willing to put up with.
“Data centers didn’t want too many software transitions, from x86 to 32-bit ARM to 64-bit ARM,” Moorhead told eWEEK. “In the end, scale-out data centers were only open to one potential change. There is still a market desire for very dense servers and the technology that provides this: lower-power SoCs tied together by intelligent fabric. Intel has made huge advances here, but there are no fewer than 10 ARM-based companies focused on specialized silicon for specific workloads that are champing at the bit to make inroads. It will be an interesting 2014 as 64-bit ARM servers make their presence felt.”
AMD also is pushing its heterogeneous computing strategy, the idea of combining CPUs with GPUs, digital signal processors and other accelerators to increase server performance and power efficiency and to enable them to handle increasingly parallel workloads. The foundation of the effort is AMD’s APUs, which offer integrated CPUs and GPUs on the same silicon.
AMD and other chip makers—including ARM, Imagination Technologies, Qualcomm, Samsung and Texas Instruments—are key members of the Heterogeneous System Architecture (HSA) Foundation, which is working to create standards for system designs that leverage CPUs, GPUs and other accelerators.
Accelerators also are a point of contention in the high-performance computing (HPC) arena. AMD and Nvidia are promoting their respective GPU technologies as accelerators to help HPC systems increase performance without increasing power consumption, important factors as supercomputers and other such systems become more powerful and handle increasingly heavy workloads. During the SC ’13 supercomputing show in November, both Nvidia and AMD unveiled new GPU acceleration technologies and Nvidia announced that IBM will support GPU accelerators in its Power systems.
For its part, Intel is answering with its x86-based many-core Xeon Phi coprocessors, which are part of the chip maker’s “neo-heterogeneity” initiative. Intel executives note that HPC environments will use both processors and coprocessors or accelerators, and say Xeon Phi enables Intel to offer a common, familiar programming model and tools. In November, Intel officials released details about the next generation of Xeon Phi, the 14-nanometer Knights Landing, which is due next year and will be capable of being used either as a coprocessor or a host processor.
The use of coprocessors and accelerators is expected to grow in the HPC field. According to the compilers of the Top500 list of the world’s fastest supercomputers, 53 systems on the November list use either GPU accelerators or coprocessors—38 use Nvidia GPUs and another two use AMD’s, while 13 use Xeon Phi coprocessors.