How to Reduce Data Center Waste

 
 
By Andy Dominey  |  Posted 2010-09-30
It is the silent killer of IT budgets in every industry, for companies of virtually every size: runaway electricity consumption in the data center. Regardless of the ongoing debate about carbon footprints and climate change, Knowledge Center contributor Andy Dominey explains here the most compelling reason for IT executives to pay closer attention to this issue: the opportunity to achieve dramatic and immediate savings by reducing data center waste.

The Environmental Protection Agency (EPA) estimated that the computer servers in this country recently consumed 61 billion kilowatt-hours (kWh) in a single year. That is about 1.5 percent of all electricity consumed in the country, a $4.5 billion expense. The problem is not about to go away, either. Consider that, in 2011, the EPA expects that data centers' electricity consumption could spike as high as 100 billion kWh, a $7.4 billion expense.

As much as 25 percent of a typical IT budget is allocated simply to paying the electric bill. What's more, that cost is rising as much as 20 percent each year, while IT budgets only increase about six percent annually. However, the costs do not merely stem from the computer hardware itself. For every watt of electricity powering a server, another watt is needed for data center infrastructure such as cooling and lighting. From this perspective, enterprises have a fiduciary duty to cut their costs by achieving greater efficiencies in the data center.
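The arithmetic behind those figures is easy to sketch. The rough calculation below uses the article's watt-for-watt overhead rule and a per-kWh price derived from its $7.4 billion/100 billion kWh projection; the 400-watt average server draw is an illustrative assumption, not a figure from the article.

```python
# Rough annual electricity cost for a server fleet.
# SERVER_WATTS is an assumed average draw; OVERHEAD_FACTOR reflects the
# article's rule of thumb (one facility watt per server watt);
# PRICE_PER_KWH is derived from the article's $7.4B / 100B kWh figure.

SERVER_WATTS = 400
OVERHEAD_FACTOR = 2.0
PRICE_PER_KWH = 0.074
HOURS_PER_YEAR = 24 * 365

def annual_cost(num_servers):
    """Annual electricity cost in dollars, including cooling/lighting overhead."""
    kwh = num_servers * SERVER_WATTS * OVERHEAD_FACTOR * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

print(f"${annual_cost(1000):,.0f} per year for 1,000 servers")
```

Under these assumptions, a 1,000-server fleet runs to roughly half a million dollars a year in electricity, which makes the budget pressure described above concrete.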

Start with the usage profile

Few IT managers would dispute that their data centers are home to vast numbers of underutilized servers. Commodity hardware and constant expansions to the business application portfolio mean that almost every new application provisioned into the data center ends up with its own server or servers. That is a lot more hardware to track and manage, making it harder to know what every server is doing and whether it is still required.

Over-provisioning is also common. Many applications designed to serve only 10,000 users end up with an infrastructure that serves 20,000-and this is often done cavalierly because hardware costs have dropped so much. This scenario creates unnecessarily large electrical and cooling demands, to say nothing of software licensing, server management and other infrastructure costs. What's more, in clusters with multiple load-balanced, fault-tolerant servers, the passive nodes often lie dormant while steadily drawing power, as long as the active node functions correctly. High availability is not needed for every application type.
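To see what those dormant failover nodes cost, the sketch below estimates the annual waste from passive cluster nodes that draw power without serving users. All figures are illustrative assumptions except the watt-for-watt overhead rule and the per-kWh price, which are taken or derived from the article's numbers.

```python
# Estimated annual waste from idle passive nodes in fault-tolerant clusters.
# SERVER_WATTS is an assumed average draw; OVERHEAD_FACTOR is the article's
# one-facility-watt-per-server-watt rule; PRICE_PER_KWH is derived from the
# article's $7.4B / 100B kWh projection.

SERVER_WATTS = 400
OVERHEAD_FACTOR = 2.0
PRICE_PER_KWH = 0.074
HOURS_PER_YEAR = 24 * 365

def idle_node_waste(clusters, passive_nodes_per_cluster):
    """Annual (kWh, dollars) spent keeping passive nodes powered but unused."""
    watts = clusters * passive_nodes_per_cluster * SERVER_WATTS * OVERHEAD_FACTOR
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh, kwh * PRICE_PER_KWH

# A hypothetical estate: 50 clusters, each with one idle standby node.
kwh, dollars = idle_node_waste(clusters=50, passive_nodes_per_cluster=1)
print(f"{kwh:,.0f} kWh and ${dollars:,.0f} per year on idle standbys")
```

Even this modest hypothetical estate burns hundreds of thousands of kilowatt-hours a year on servers that do no work unless a failover occurs, which is why reserving high availability for the applications that truly need it pays off.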




 
 
 
 
Andy Dominey is a Product Manager at 1E. Andy has extensive experience with data center energy efficiency, server virtualization, and a wide range of Microsoft enterprise solutions. In his current role, Andy manages the product direction and development of one of 1E's solutions, based on his understanding of enterprise infrastructure, server efficiency and IT waste reduction. Since joining 1E in 2005, Andy has held numerous management roles including senior consultant, principal consultant and practice lead. Prior to joining 1E, Andy served as a systems administrator for Cobweb Solutions, where he monitored, maintained and supported an expansive infrastructure serving more than 1,500 customers. Previously, Andy developed an in-depth understanding of large-scale server infrastructures as a field service engineer, second-level engineer and third-level engineer at World Class International (WCI). Andy has presented at an array of industry events and has published numerous Microsoft Operations Manager 2005 and Microsoft System Center Operations Manager 2007 books, articles and white papers. He can be reached at andy.dominey@1e.com.
 
 
 
 
 
 
 
