BALTIMORE—When facility and IT managers begin building a modern data center or expanding an existing one, there is more to consider than just the traditional issues of power and cooling to create the most efficient infrastructure for an enterprise.
The design and construction of this more flexible data center, one that adjusts as a company's business continues to grow, was the central theme of the 2007 AdaptiveXchange conference hosted by Emerson, the parent company of Emerson Network Power, which makes power and cooling products for IT infrastructures.
In his Nov. 29 keynote address, Bob Bauer, group president for Emerson, delved into the new challenges IT administrators are facing within the data center and explained how the older models of power and cooling will not address the needs of tomorrow's facilities.
While a wave of consolidation is occurring in the IT industry, some of the fixes it brings are creating new problems that IT and facility managers will have to solve three to five years down the road.
For example, virtualization—which partitions a physical server into different virtual environments—can reduce the number of physical servers in a data center and save about 20 percent of the floor space. However, Bauer said that within three years of starting a virtualization and consolidation project, some companies need to repopulate the data center with new servers to keep pace with the growth of business and to address issues such as running new applications.
“Are you going to end up with rows of racks with blank spaces? And how long is it going to take you to migrate through this virtualization program?” Bauer asked the audience of about 1,000 attendees. “If it takes you three years to go through this process and your business grows … within that three years, the productivity you get from virtualization will essentially be consumed by the growth of the business.”
While Bauer said virtualization is still a critical part of building a data center and proper planning for that type of project remains important, there are more subtle ways to reduce power and address cooling concerns. For example, Emerson engineers found that addressing power concerns at the chip level and better power management can reduce electricity consumption by 30 percent.
The idea is to look at all the parts—including the smallest—that make up the data center, instead of just focusing on the mega issues, such as the 60 percent of electricity that typically goes to the equipment and the other 40 percent that runs the cooling system.
While locating a data center close to a supply of cheap power seems like the logical answer, Bauer told the audience that there are other factors to consider, such as the cost of broadband services. His own company found it was less expensive to put a data center in the United States as opposed to Europe when both broadband and power costs were considered in the total price.
While conceding that there is no one-stop cure, Bauer said there are questions that IT and facility managers can ask themselves when designing a new data center or building out an existing one. As an example, he posed the question: Can a manager add power to an existing data center or move that power around to address different needs throughout the day?
In terms of cooling, he said it is no longer enough to add more cooling to the data center; administrators need to tackle the cooling capacity at the rack level.
An example of one of the new massive—but flexible—data centers that Bauer admires is the 76,000-square-foot facility that Sun Microsystems built in Santa Clara, Calif., which opened earlier this year. Sun engineers built a highly standardized infrastructure that can easily be moved around with power and cooling flowing to the areas that need them.
“I really feel like the future is being built around this concept of dynamic critical infrastructure,” he said. “That's what we have to figure out—what to build. It's the whole process of how to design, build and operate what you had planned.”