Mark Barrenechea stood outside the Moscone Center in downtown San Francisco Sept. 18 next to a 40-foot-long trailer that represents one of a growing number of ways vendors are trying to make the tech industry more environmentally friendly.
The president and CEO of Rackable Systems brought the trailer, the latest model of the Fremont, Calif., company's mobile data centers, to the Intel Developer Forum to illustrate one avenue businesses are taking to reduce power consumption and cooling costs through innovative thinking.
“We are not following any model except to build what the customer wants,” Barrenechea said in an interview with eWEEK, speaking about the ICE (Integrated Concentro Environment) Cube trailer. “We ask the customer what they want to build. We do believe that this type of mobile data center will address the power, cooling and density concerns of the customer, and we are working to leverage the latest Intel technology.”
The twin issues of power and cooling costs in the data center continue to push their way onto center stage in the industry. In New York Sept. 18, IT industry leaders signed a memorandum of understanding with the Department of Energy that puts in place a process for creating metrics that can be used to measure the energy efficiency of data centers.
Leaders of the 92-member Green Grid alliance, most of whom are IT industry executives, say the data center is a focus of their initial efforts for two reasons: Data centers consume a huge and growing amount of energy, and they are easy to isolate and thus to measure.
In Dallas, energy efficiency was a key subject during the Data Center World show, with presenters such as Mark Monroe, Sun Microsystems' director of sustainable computing, telling IT administrators that even the simplest solutions—such as shutting down servers that are no longer being used—can help the cause.
“Yes, just shut all those mystery servers down if you're not sure what function they serve,” Monroe said Sept. 18. “You'll get an e-mail soon enough from the people who were using the server. Then you can just switch it back on, no problem. After about 90 days, if you don't get an e-mail or a phone call, then you know you don't need that server, and you should take it off the system.”
Idle servers use nearly as much power as active ones; that power is wasted if the systems are left running while they're not being used.
“This isn't rocket science,” Monroe said. “It's all about data: Understanding your facilities, figuring out what to fix and figuring out how to fix it. Turn things on when you need them; turn them off when you don't. Easy first steps.”
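Monroe's 90-day rule amounts to simple bookkeeping. As a rough illustration only (the script, the inventory format and the hostnames below are hypothetical, not anything Sun or Monroe described), a few lines of Python can flag hosts that have shown no observed activity for the waiting period:

```python
from datetime import datetime, timedelta

QUIET_PERIOD = timedelta(days=90)  # Monroe's 90-day waiting window

# Hypothetical inventory: hostname -> last observed activity (logins,
# network traffic, application requests). In practice this would come
# from monitoring data, not a hard-coded dict.
last_activity = {
    "web-01": datetime(2007, 9, 1),
    "db-legacy": datetime(2007, 5, 12),
    "mystery-42": datetime(2007, 2, 3),
}

def shutdown_candidates(inventory, today):
    """Return hosts with no observed activity for the quiet period."""
    return [host for host, seen in inventory.items()
            if today - seen >= QUIET_PERIOD]

print(shutdown_candidates(last_activity, datetime(2007, 9, 18)))
# ['db-legacy', 'mystery-42']
```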
Power and cooling have grown in importance over the last couple of years as data center density has grown and energy prices have risen. According to the DOE, data centers used 61 billion kilowatt-hours in 2006, or 1.5 percent of electricity consumed in the United States. Those numbers are expected to grow by 12 percent per year through 2011.
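Compounding that growth rate shows why the projection draws attention. A quick back-of-the-envelope calculation (simply applying the 12 percent figure to the DOE's 2006 baseline) puts consumption near 108 billion kilowatt-hours by 2011, a roughly 76 percent increase in five years:

```python
# DOE baseline: 61 billion kWh in 2006, growing 12 percent per year.
consumption = 61.0  # billion kWh
for year in range(2007, 2012):
    consumption *= 1.12
    print(year, round(consumption, 1))
# 2007 68.3 / 2008 76.5 / 2009 85.7 / 2010 96.0 / 2011 107.5
```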
DOE Assistant Secretary Andrew Karsner said at the signing ceremony in New York that the memorandum with The Green Grid sets a common goal of improving overall energy efficiency in data centers by 10 percent in 2011, factoring in currently projected data center use. “Data centers have one of the fastest industry demand rates, so you have no choice but to set aggressive goals,” Karsner told eWEEK.
Larry Vertal, senior strategist of enterprise communications in Advanced Micro Devices' microprocessor segment, said energy usage “is a complex problem, but data centers are the place to start because we can have the most impact [on consumption] there.” AMD, of Sunnyvale, Calif., is a member of The Green Grid.
“The idea is to step back and look at the data center as a holistic system and really understand how that system, [including] CPU usage, cooling and networks, consumes energy,” said Rick Schuckle, a senior technical staff member at Dell and a member of The Green Grid board representing the Round Rock, Texas, company.
Turning the Power Down
There is more to the story than hardware; Vertal pointed to server virtualization as one way software developers can impact the data center, since consolidating workloads onto fewer physical machines lets the remainder be powered off. “The best thing you can do is turn off a server that is not in use,” Vertal said.
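A consolidation estimate makes the virtualization argument concrete. The figures below are illustrative assumptions, not numbers from AMD or The Green Grid: if ten lightly loaded servers can run as guests on one virtualized host, and each 1U box draws about 300 watts whether busy or idle, the savings add up quickly:

```python
servers = 100             # lightly loaded physical servers (assumption)
watts_per_server = 300    # typical 1U draw, busy or idle (assumption)
consolidation_ratio = 10  # guests per virtualized host (assumption)

hosts_after = servers // consolidation_ratio
saved_watts = (servers - hosts_after) * watts_per_server
saved_kwh_per_year = saved_watts * 24 * 365 / 1000

print(f"{hosts_after} hosts remain; ~{saved_kwh_per_year:,.0f} kWh/year saved")
# 10 hosts remain; ~236,520 kWh/year saved
```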
The DOE pledged to work with the alliance to develop a common set of metrics and tools and to create a Web site so data center administrators can easily access tools and resources to initiate and implement energy management programs.
The DOE is also putting some federal resources, such as the Lawrence Berkeley National Laboratory, in Berkeley, Calif., at the disposal of the alliance for testing purposes.
The federal government has a key stake in this process: DOE data centers themselves represent 35 of the 500 largest data centers nationwide in terms of power consumption. But if the two partners agree on the path they need to take, they are not necessarily obeying the same speed limits.
While Karsner said he expects the initial specifications for metrics that measure energy efficiency in data centers to be published by December, members of the alliance were less sanguine. Board members told eWEEK in separate interviews that it will be difficult to develop consensus around metrics quickly, especially given the disparate nature of different data centers. “It may well be a multistage process,” said Roger Tipley, senior strategist with Hewlett-Packard, in Palo Alto, Calif.
The DOE and the alliance also have slightly different perspectives on how much of the metrics to make public. Karsner likened the measurements to Energy Star ratings, which, while voluntary, are publicly promoted by companies to prove their products' energy efficiency to consumers. But alliance members said they believe that companies might be more comfortable using the metrics to track internal improvement rather than as a comparative tool to be used against the competition.
“The goal is not to try to set a bar that every data center has to reach but to help companies set goals so that they are more efficient,” Dell's Schuckle said.
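The parties haven't spelled out what the metrics will look like, but one plausible shape (purely an illustration here, not a Green Grid specification) is a ratio of total facility power to the power actually reaching IT equipment, where a lower number means less energy lost to cooling and power distribution:

```python
def facility_overhead_ratio(total_facility_kw, it_equipment_kw):
    """Total facility power / IT equipment power; 1.0 is the ideal floor."""
    return total_facility_kw / it_equipment_kw

print(facility_overhead_ratio(2000, 1000))  # 2.0: half the power feeds IT gear
print(facility_overhead_ratio(1500, 1000))  # 1.5: less overhead, more efficient
```

A ratio of this shape also fits the alliance members' preference for internal tracking: it measures a facility against itself over time rather than ranking one company's data center against another's.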
Karsner told eWEEK he sees government as an enabler that can create a yardstick standard based on best practices “where industry is less likely to do it if left to their own devices.”
Energy efficiency is “too important to leave to industry by itself when the national interest is to be as efficient as possible for the aggregate good,” he said.
“The punch line,” Karsner added, “is that we all get to be more profitable [as a result].”
For Rackable, energy efficiency is a key part of its product road map, and the ICE Cube mobile data center is an example of that. ICE Cube is Rackable's second attempt at a mobile data center; in March, the company launched Concentro, a 40-by-8-foot mobile data center. With ICE Cube, Rackable is offering customers either a 20-by-8-foot or a 40-by-8-foot trailer that can house up to 1,400 of its 1U (1.75-inch) servers. Rackable also will now offer Intel's quad-core Xeon chip, which means the mobile data center can contain as many as 11,200 processor cores.
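The core count follows from simple arithmetic: 11,200 cores across 1,400 servers works out to eight cores per 1U box, which implies two quad-core Xeon sockets per server (the two-socket configuration is an inference from the numbers, not something Rackable stated here):

```python
servers = 1400
cores_total = 11200
cores_per_server = cores_total // servers   # 8 cores per 1U server
sockets_per_server = cores_per_server // 4  # quad-core chips -> 2 sockets
print(cores_per_server, sockets_per_server)  # 8 2
```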
Right now, the only vendors offering such mobile data centers are Rackable and Sun, whose Project Blackbox combines storage, computing and network infrastructure with high-efficiency power and liquid cooling in modular units housed in standard 20-by-8-by-8-foot shipping containers.
Rackable's Barrenechea said ICE Cube has several advantages over Sun's Project Blackbox, including Rackable's use of its own Half-Depth Servers, which are only 15.5 inches deep. This means that Rackable can squeeze more systems into the container, Barrenechea said.
In addition, Rackable removes all the fans from the servers, circulates cold air throughout the container and expels warm air with several impeller fans. The ICE Cube data center also uses the company's DC power technology. Together, these and other innovations help reduce overall power and cooling costs, Barrenechea said.