NEWS ANALYSIS: System OEMs are learning to adapt their designs to the demands from end users who are dealing with exploding amounts of data and new workloads.
When Verizon began laying the groundwork several years ago for its new public cloud, company IT executives set several goals for the environment. They wanted consistent performance, high security and availability, and they didn't want customers to have to modify their applications in order to run them in the cloud.
They wanted few moving parts and no special hardware.
"We wanted very few actual hardware components," Paul Curtis, chief architect of cloud computing at Verizon, said during Advanced Micro Device's Developer Summit 2013 in November. "My boss said, 'I only have five fingers in my hand. Don't make me use them all.'"
There was to be "no honking router outside of this. … No special firewall, no special anything. It just doesn't scale."
What Verizon quickly settled on was SeaMicro, a small company that at the time was making highly energy-efficient microservers designed for very dense data center environments and linked via the company's Freedom Fabric interconnect. SeaMicro has since become part of AMD, which bought the company in February 2012 for $334 million.
The Verizon Cloud platform, which will compete with the likes of Amazon Web Services and Rackspace, is now in paid public beta, with the expectation of rapid expansion as 2014 unfolds. It also is a microcosm of some of the drivers that are fueling the changes in server and data center architecture, giving rise to new offerings from established system and component makers and new designs from smaller vendors.
These changes in the data center also are roiling the competitive waters, with longtime partners suddenly becoming competitors and dominant architectures seeing threats from new sources. Enterprise data centers and service providers are making new demands on their system vendors, and those vendors are working hard to meet those demands. Organizations are looking for smaller sizes, more energy efficiency, easier manageability and lower costs. They want systems that can run the growing range of new applications, from big data to video to analytics.
This has created a fundamental shift in what is driving server design. In the past, server makers would put out new systems with the newest chips and then give those systems to customers to let them decide the best way to use them. Now it's the customer that is in the driver's seat, according to Andrew Feldman, corporate vice president and general manager of AMD's Server Business Unit.
"OEMs have [lost] a lot of the power" over how systems are designed, Feldman, former CEO of SeaMicro, told eWEEK
. "We're now seeing radical new designs in servers."
Drivers Are in the Numbers
It's really a story about numbers. It's about the skyrocketing numbers of people and devices that will continue to connect to the Internet over the next few years, and the massive amounts of data they will generate. (Cisco Systems forecasts that by 2017, there will be 3.6 billion Internet users and more than 19 billion network connections, including machine-to-machine (M2M) connections. By 2020, it is expected that there will be 50 billion devices connected to the Internet.)
It's about the number of organizations that are increasingly moving parts of their businesses to the cloud and the number of new applications—like big data and analytics—that enterprises are implementing. (IDC analysts expect global spending on public IT cloud services to hit $107 billion in 2017