When it comes to energy conservation, sustainability and saving the planet, a lot of people are getting on the bandwagon, even Pope Francis. But when it comes to enterprise data centers, not so much.
A recent IDC survey of power and cooling trends in enterprise data centers shows that, unlike their big brothers in the Web-scale world, most traditional enterprises are not running at efficient levels. That inefficiency costs them a significant share of their budgets while continuing to strain the power grid and the environment.
The survey of 404 enterprise data center managers (data centers with a minimum of 1,000 square feet and 100 servers) showed that power and cooling costs tie with IT infrastructure itself for the largest share of data center budgets, at 24 percent each. Against a mean budget of $1.2 million, that puts power and cooling costs at about $300,000 annually on average.
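As a quick sanity check on that arithmetic (the figures are IDC's; the variable names are mine):

```python
# 24 percent of the survey's $1.2 million mean data center budget.
mean_budget = 1_200_000   # IDC's reported mean annual budget, in dollars
power_cooling_share = 0.24  # power and cooling's share of that budget

annual_power_cooling_cost = mean_budget * power_cooling_share
print(annual_power_cooling_cost)  # 288000.0, i.e. roughly the $300,000 cited
```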
More significantly, the survey showed that most of those data centers are not running at peak efficiency. The typical measure of this is PUE, or power usage effectiveness: the ratio of total power coming into the data center to the power actually consumed by the IT equipment, according to IDC Research Manager Kelly Quinn, who presented the data in a recent webcast.
A PUE of 1.0 is the ideal, meaning every watt entering the facility reaches the IT equipment, while anything between 2.0 and 3.0 is considered very inefficient. "When you get to PUE ratios at 2 and above, you are looking at a massive amount of power going not just to the IT side of the house but also to the facilities side," she said.
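The ratio itself is simple enough to sketch; this is a minimal illustration of the standard PUE formula, not code from the survey, and the example wattages are invented:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT
    equipment power. 1.0 is the theoretical ideal, where all incoming
    power reaches the IT gear; values at 2.0 and above mean the
    facility overhead (cooling, lighting, power distribution) draws as
    much as the computing itself."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 2,400 kW overall while its IT gear draws 1,200 kW
# sits at a PUE of 2.0, the inefficient end of the survey's range.
print(pue(2400, 1200))  # 2.0
```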
Cloud giants like Google and Facebook publish their PUE numbers. Google's most recently released figure is 1.12 for the trailing 12 months. Facebook has not published recent numbers, but has reported that its Prineville, Ore., data center runs at 1.07. U.S. government guidelines call for 1.5.
By contrast, IDC's report shows that more than two-thirds of the enterprises it surveyed logged a PUE of more than 2.0, and 10 percent were either above 3.0 or did not know their ratio. The data showed that data center managers are indeed measuring their PUE, but not really doing anything about it.
What is to be done? There are two options. The first is to consolidate data centers and buy more efficient equipment, reducing the overall number of machines that draw power and generate heat. This is expensive and must be planned as a capital expense. Many enterprises have made this leap, but many more have not.
The second option is more efficient data center designs and cooling strategies. Data center design is not a new discipline. I recall spending a day at American Power Conversion in Rhode Island about 10 years ago and seeing demonstrations of the latest designs, such as hot aisle/cold aisle layouts. Yet according to the survey, only 30 percent of data centers use that method today.