How to Reduce Data Center Waste

It is the silent killer of IT budgets in every industry, for companies of virtually every size: runaway electricity consumption in the data center. Regardless of the ongoing debate about carbon footprints and climate change, Knowledge Center contributor Andy Dominey explains here the most compelling reason for IT executives to pay closer attention to this issue: the opportunity to achieve dramatic and immediate savings by reducing data center waste.


The Environmental Protection Agency (EPA) estimated that the computer servers in this country recently consumed 61 billion kilowatt-hours (kWh) in a single year. That is about 1.5 percent of all electricity consumed in the country, a $4.5 billion expense. The problem is not about to go away, either. The EPA projects that, by 2011, data centers' electricity consumption could climb as high as 100 billion kWh, a $7.4 billion expense.
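To put those figures in perspective, the sketch below (not from the EPA report; it simply back-calculates from the numbers quoted above) shows the average electricity rate implied by the current and projected totals.

```python
def implied_rate_usd_per_kwh(total_cost_usd: float, total_kwh: float) -> float:
    """Average cost per kilowatt-hour implied by a total annual spend."""
    return total_cost_usd / total_kwh

# Figures quoted above.
current_kwh = 61e9        # 61 billion kWh per year
current_cost = 4.5e9      # $4.5 billion
projected_kwh = 100e9     # projected 100 billion kWh by 2011
projected_cost = 7.4e9    # $7.4 billion

print(f"Implied rate today:     ${implied_rate_usd_per_kwh(current_cost, current_kwh):.3f}/kWh")
print(f"Implied rate projected: ${implied_rate_usd_per_kwh(projected_cost, projected_kwh):.3f}/kWh")
# Both work out to roughly $0.074/kWh: the dollar figures scale directly with consumption.
```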

As much as 25 percent of a typical IT budget is allocated simply to paying the electric bill. What's more, that cost is rising as much as 20 percent each year, while IT budgets increase only about six percent annually. However, the costs do not stem from the computer hardware alone. For every watt of electricity powering a server, another watt is needed for data center infrastructure such as cooling and lighting. From this perspective, enterprises have a fiduciary duty to cut their costs by achieving greater efficiencies in the data center.
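The following rough sketch (with a hypothetical 500 kW data center and the $0.074/kWh rate implied above) illustrates how the one-watt-of-overhead-per-watt-of-IT-load rule and those diverging growth rates compound over a few years.

```python
def annual_power_cost(it_load_kw: float, rate_usd_per_kwh: float,
                      overhead_per_it_watt: float = 1.0) -> float:
    """Yearly electricity cost: IT load plus cooling/lighting overhead, running 24x7."""
    total_kw = it_load_kw * (1.0 + overhead_per_it_watt)
    return total_kw * 24 * 365 * rate_usd_per_kwh

# Hypothetical data center: 500 kW of IT load at $0.074/kWh.
cost = annual_power_cost(it_load_kw=500, rate_usd_per_kwh=0.074)
budget = cost / 0.25          # if power is 25 percent of the IT budget, as above

for year in range(1, 6):
    cost *= 1.20              # electricity cost rising ~20 percent per year
    budget *= 1.06            # IT budget rising ~6 percent per year
    print(f"Year {year}: power = ${cost:,.0f}, {cost / budget:.0%} of the IT budget")
```

Under these assumptions, the electric bill grows from a quarter of the IT budget to nearly half of it within five years.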

Start with the usage profile

Few IT managers would dispute that their data centers are home to vast numbers of underutilized servers. Commodity hardware and constant expansions to the business application portfolio mean that almost every new application provisioned into the data center ends up with its own server or servers. That is a lot more hardware to track and manage, making it harder to know what every server is doing and whether it is still required.

Over-provisioning is also common. Many applications designed to serve only 10,000 users end up with an infrastructure that can serve 20,000, and this is often done cavalierly because hardware costs have dropped so much. This scenario creates unnecessarily large electrical and cooling demands, to say nothing of software licensing, server management and other infrastructure costs. What's more, in load-balanced, fault-tolerant clusters, standby servers often lie dormant while steadily drawing power, as long as the active node functions correctly. And high availability is not needed for every application type.
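A back-of-the-envelope estimate, sketched below with hypothetical figures (400 W per server, the doubling factor for cooling and lighting, and the $0.074/kWh rate used earlier), shows what that excess capacity and an idle standby node cost each year.

```python
WATTS_PER_SERVER = 400        # assumed average draw per server
OVERHEAD_FACTOR = 2.0         # one watt of cooling/lighting per watt of IT load
RATE_USD_PER_KWH = 0.074
HOURS_PER_YEAR = 24 * 365

def annual_cost(servers: int) -> float:
    """Yearly electricity cost for a given number of always-on servers."""
    kw = servers * WATTS_PER_SERVER / 1000 * OVERHEAD_FACTOR
    return kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

# Capacity sized for 20,000 users when 10,000 would do: half the servers are excess.
provisioned, needed = 20, 10
print(f"Over-provisioning waste: ${annual_cost(provisioned - needed):,.0f}/year")

# A two-node active/passive cluster keeps one server powered but idle.
print(f"Idle standby node:       ${annual_cost(1):,.0f}/year")
```

The absolute numbers are illustrative, but the pattern holds: every server that exists only "just in case" draws power, and cooling, around the clock.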