'White Boxes' Help Cut Cost of Building Giant Enterprise Data Centers

By Jeff Burt  |  Posted 2015-03-30
White Box Hardware

"But with vendors such as Dell and HP jumping into the mix with branded bare-metal switches, adoption of bare-metal switching is going to accelerate as tier 2 CSPs and large enterprises endeavor to achieve the nimbleness demonstrated by Google."

ODMs are even getting into the storage side of the data center. According to IDC, contractors shipped 43 million terabytes of storage capacity last year, more than EMC, HP and Dell combined.

The growing popularity of white boxes in the data center can be traced to the rise of Web-scale businesses like Google and Facebook and the hyperscale data center environments they operate, according to Kuba Stolarski, research manager for enterprise servers at IDC.

"It boils down to cloud service providers—the growth of the cloud—and not just the growth of the cloud as a platform or a service … but the really massive scale of the relatively small number of providers," Stolarski told eWEEK.

Companies like Google, Facebook, Amazon, Microsoft and Alibaba operate huge data centers that run massive numbers of servers and other hardware to process, manage and move large numbers of small workloads. Such companies are constantly looking for more power- and cost-efficient ways to run their data centers, and some, starting with Google, began designing and building their own servers and relying more on ODMs for their hardware.

The number of such large, Web-based companies has since grown—Twitter, for example, designs its hardware in-house and then contracts with ODMs to build it, he said.

"There's a growing pool of customers within this sphere," Stolarski said.

Facebook's launch of the Open Compute Project (OCP) in 2011 opened the door to open-source development of hardware in a way that mirrored the rise of Linux and further fueled the idea that data center hardware didn't have to come from the established tier-one vendors.

In addition, it meant that hardware R&D was no longer the sole purview of the OEMs. The trend has continued with other projects, such as the OpenPower effort, which is looking to drive IBM's Power chip architecture into hyperscale and Web-scale environments. OpenPower officials earlier this month introduced more than a dozen new products that came out of the open-source project.

"There are a lot of different moving pieces, but it boils down to taking R&D out of a small number of hands and putting it into the community," he said. "It's the idea of speeding up innovation."

In the networking world, SDN and NFV are driving a large part of the changes. Web-scale businesses, enterprises, service providers and telecommunications vendors are looking to build more programmable, agile and flexible networks that can address the rapidly changing demands brought on by mobile computing, big data, social networking and the cloud. Traditional networks house the network intelligence in proprietary switches and routers, which require time-consuming programming and are difficult to adapt quickly to changing business needs.

SDN and NFV take the control plane and networking tasks like load balancing, firewalls and intrusion detection out of the underlying hardware and put them into software that can run on cheaper commodity systems. Moving those functions into software turns network programming into a job that takes seconds or minutes rather than days or weeks, saving businesses time and money and letting them quickly spin up services for employees and customers.
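To give a rough sense of what programming the network in software looks like in practice, the minimal Python sketch below pushes a forwarding rule to an SDN controller's northbound REST interface. The controller address, the /flows endpoint and the JSON fields are illustrative assumptions for this example, not any particular vendor's API.

```python
# Illustrative sketch only: the controller URL, endpoint and JSON schema are
# hypothetical stand-ins for an SDN controller's northbound REST API.
import requests

CONTROLLER = "http://sdn-controller.example.com:8181"  # hypothetical controller address


def install_flow_rule(switch_id, src_ip, dst_ip, out_port):
    """Ask the controller to program a forwarding rule on a commodity switch."""
    rule = {
        "switch": switch_id,                   # which bare-metal switch to program
        "match": {                             # traffic this rule applies to
            "ipv4_src": src_ip,
            "ipv4_dst": dst_ip,
        },
        "action": {"output_port": out_port},   # where matching packets should go
        "priority": 100,
    }
    # One HTTP call replaces box-by-box manual reconfiguration; the change
    # takes effect in seconds rather than days or weeks.
    resp = requests.post(f"{CONTROLLER}/flows", json=rule, timeout=5)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    install_flow_rule("switch-01", "10.0.0.5", "10.0.1.7", out_port=3)
```

Because the rule is just data sent to software, the same script can reconfigure hundreds of commodity switches in a loop, which is the agility argument behind SDN on white-box hardware.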


