The Environmental Protection Agency (EPA) estimated that the computer servers in this country recently consumed 61 billion kilowatt-hours (kWh) in a single year. That is about 1.5 percent of all electricity consumed in the country, a $4.5 billion expense. The problem is not about to go away, either. The EPA expects that, by 2011, data centers' electricity consumption could climb as high as 100 billion kWh, a $7.4 billion expense.
As much as 25 percent of a typical IT budget is allocated simply to paying the electric bill. What’s more, that cost is rising as much as 20 percent each year, while IT budgets only increase about six percent annually. However, the costs do not merely stem from the computer hardware itself. For every watt of electricity powering a server, another watt is needed for data center infrastructure such as cooling and lighting. From this perspective, enterprises have a fiduciary duty to cut their costs by achieving greater efficiencies in the data center.
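To put that one-watt-for-one-watt multiplier in concrete terms, the short Python sketch below works through the cost arithmetic for a single machine. The wattage, infrastructure multiplier and electricity rate are illustrative assumptions, not figures from the EPA estimate.

# Rough annual energy cost of one server, including the extra watt of
# cooling and lighting drawn for every watt the server itself consumes.
# All constants here are illustrative assumptions.
SERVER_WATTS = 400         # assumed average draw of the server itself
INFRA_MULTIPLIER = 2.0     # one infrastructure watt per server watt
RATE_PER_KWH = 0.074       # assumed electricity rate in dollars per kWh
HOURS_PER_YEAR = 24 * 365

total_kwh = SERVER_WATTS * INFRA_MULTIPLIER * HOURS_PER_YEAR / 1000.0
print(f"Annual cost: ${total_kwh * RATE_PER_KWH:,.0f}")

At those assumed rates, a single 400-watt server costs more than $500 a year to power once cooling and lighting are counted, which is why the per-server decisions described below add up quickly.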
Start with the usage profile
Few IT managers would dispute that their data centers are home to vast numbers of underutilized servers. Commodity hardware and constant expansions to the business application portfolio mean that almost every new application provisioned into the data center ends up with its own server or servers. That is a lot more hardware to track and manage, making it harder to know what every server is doing and whether it is still required.
Over-provisioning is also common. Many applications designed to serve only 10,000 users are given an infrastructure that serves 20,000, and this is often done cavalierly because hardware costs have dropped so much. This scenario creates unnecessarily large electrical and cooling demands, to say nothing of software licensing, server management and other infrastructure costs. What’s more, clusters with multiple load-balanced, fault-tolerant servers often lie dormant while steadily drawing power, as long as the active node functions correctly. High availability, in short, is not needed for every application type.
Server virtualization
Server virtualization has alleviated some of these concerns. However, once again, the cheap ubiquity and simplicity of deploying a virtual server mean that many data centers are victims of “virtual server sprawl.” Virtual servers get provisioned for testing or for a special-event Website, and then are never decommissioned.
Mergers and acquisitions (M&A), decommissioned applications, cancelled projects, changes in IT management and other events mean many servers, physical and virtual, are no longer needed. But which ones? Without careful controls, it is easy to end up with servers that are powered on but doing absolutely nothing useful for your IT environment or your business as a whole.
In many instances, CPU utilization or disk I/O statistics are used as a quick proxy for whether a server is needed. But the fact is, a server can have a high utilization rate while not performing any useful work. Cautious IT managers are increasingly likely to leave such servers untouched rather than introduce potentially risky changes to the infrastructure.
Instead, IT must carefully analyze what work the server is performing. For instance, a server running a critical customer database is inherently performing useful work. By contrast, a server that only performs overnight antivirus scans does not count as “useful” because those scans do not directly affect core, value-adding business processes. Analyzing the server’s work load at the application level is the best way to determine what useful work, if any, the server is performing.
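As an illustration of what such an application-level check might look like, the Python sketch below compares a server's running processes against a known-application inventory. It assumes the third-party psutil library, and the process names are hypothetical placeholders for an organization's own lists.

# Check running processes against an inventory of core business
# applications. Assumes the third-party psutil library; the process
# names are hypothetical examples, not a real inventory.
import psutil

USEFUL = {"postgres", "mysqld", "oracle"}   # assumed core business apps

running = {(p.info["name"] or "").lower()
           for p in psutil.process_iter(["name"])}
useful_found = running & USEFUL

# High utilization alone proves nothing: a busy server running no core
# application is still a candidate for decommissioning.
if useful_found:
    print("Useful work detected:", ", ".join(sorted(useful_found)))
else:
    print("No core business applications running; review this server.")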
Power manage your servers
Once you have determined how much useful work each server is actually performing, you have the opportunity to conserve significant amounts of electricity. First and foremost, of course, you can take servers down completely, directly reducing the electrical draw of the data center. Or you can at least slow the addition of new servers by repurposing the servers you have already provisioned.
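Where servers have been confirmed as idle, even the shutdown step can be scripted. The minimal sketch below assumes passwordless SSH with sudo rights from an administration host; the hostnames are hypothetical examples.

# Halt a list of confirmed-idle servers over SSH.
# Assumes passwordless SSH and sudo rights; hostnames are hypothetical.
import subprocess

CANDIDATES = ["test-web-03.example.com", "old-batch-07.example.com"]

for host in CANDIDATES:
    # "shutdown -h now" halts the machine and ends its electrical draw.
    result = subprocess.run(["ssh", host, "sudo", "shutdown", "-h", "now"])
    print(f"{host}: ssh exited with code {result.returncode}")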
CPU throttling is an increasingly popular and effective measure to reduce power consumption in servers. Lower CPU speeds not only directly reduce the power draw, they also affect the power draw of other components such as disk drives. Leading server operating systems and vendor-based hardware tools incorporate some rudimentary CPU-throttling features. However, newer, dedicated utilities can do an even better job of managing processor speed.
For instance, they can cap the CPU speed for non-useful work (calculated based on the application work being processed). While some might worry about potential performance degradation, the impact is negligible because the processing is not time-sensitive, takes place in off-peak hours or involves non-critical applications.
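Such dedicated utilities are commercial products, but the underlying mechanism can be seen in miniature on Linux through the cpufreq interface. The sketch below, run as root, caps each core at half of its maximum frequency; the 50 percent figure is an arbitrary example, not a recommendation.

# Cap CPU frequency on Linux via the cpufreq sysfs interface (run as
# root). The 0.5 cap is an arbitrary example for low-value workloads.
import glob

CAP = 0.5

for policy in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
    with open(f"{policy}/cpuinfo_max_freq") as f:
        max_khz = int(f.read())
    with open(f"{policy}/scaling_max_freq", "w") as f:
        f.write(str(int(max_khz * CAP)))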
Saving green by being green
The net impact of these initiatives on the data center and overall IT budget is extremely compelling. It is worth the time to dig deeper to discover the largest cost and energy savings. We recently provided a server energy audit to a global engineering firm that had already consolidated its data center and reduced its servers by more than 25 percent.
Yet, by further analyzing the work loads of its servers and implementing smarter, application-level CPU throttling, we discovered that an additional 12 percent of the servers in its environment were candidates for immediate decommissioning.
Similarly, experience shows that most data centers can realize additional savings in their energy consumption. The direct electricity requirements of the server farm typically drop by 12 to 15 percent after implementing the simple steps mentioned throughout this article. Additional savings are accrued by reducing demand on the cooling infrastructure as well. For virtually any data center, these kinds of substantial returns can be generated in a rapid payback period.
Andy Dominey is a Product Manager at 1E. Andy has extensive experience with data center energy efficiency, server virtualization, and a wide range of Microsoft enterprise solutions. In his current role, Andy manages the product direction and development of one of 1E's solutions, based on his understanding of enterprise infrastructure, server efficiency and IT waste reduction. Since joining 1E in 2005, Andy has held numerous management roles including senior consultant, principal consultant and practice lead.
Prior to joining 1E, Andy served as a systems administrator for Cobweb Solutions, where he monitored, maintained and supported an expansive infrastructure serving more than 1,500 customers. Previously, Andy developed an in-depth understanding of large-scale server infrastructures as a field service engineer, second-level engineer and third-level engineer at World Class International (WCI). Andy has presented at an array of industry events and has published numerous Microsoft Operations Manager 2005 and Microsoft System Center Operations Manager 2007 books, articles and white papers. He can be reached at andy.dominey@1e.com.