When George Daniels looks at today's data centers, he sees an outdated model in desperate need of change.
The linear rows of rectangular boxes, cooled by large air conditioning units along the walls, can in many instances no longer meet the needs of the modern data center, where greater flexibility and more efficient power and cooling are in high demand, he said.
So when Daniels, general manager of Hewlett-Packard's Enterprise Design Center, and his team gathered in February to start discussing possible visions for the future data center—a project dubbed “Lights Out”—they turned away from the current model.
They instead looked to nature, seeking patterns that would help them break away from rectangular boxes. They looked at everything from honeycombs to seashells to roses.
What they've come up with is a design unlike what's seen in data centers today, based on the hexagonal pattern found in snowflakes, with a six-sided core at the center from which everything else expands outward. “This seems to have an awful lot of recurrence in nature, as well as … what we design as human beings,” Daniels said, referring to such structures as airport terminals.
The design center's work is part of a larger HP push to create the next-generation data center, one that Olivier Helleboid, vice president of adaptive infrastructure in HP's Technology Solutions Group, in Palo Alto, Calif., said during a July 21 talk with reporters would be “a lights-out, 24/7 computing environment running on an integrated common architecture.”
The next data center will entail a modular makeup with standard building blocks and will offer such enablers as virtualization, scalability and automation baked into most components, he said.
Daniels is eager to break away from the current data center model, including its terminology. He doesn't talk about servers, but instead about “cells” and “super-cells.” CPU devices are called “compute,” storage is “knowledge” and I/O is “connect.” He won't even use the term “data center.”
“I don't want to call it a data center, because that has a paradigm connected to it, and we wanted to get away from that,” said Daniels, in Houston.
At the center of the design is the hexagon-shaped “core,” about a foot tall and 18 inches across, which would hold the power and communication functions, as well as a closed-loop cooling system. Attached to each side, sliding into place on rails like books on a shelf, would be three cells—up to 18 cells per core—that hold “sub-cells.” The sub-cells—possibly as many as 12 per cell—hold the CPU, storage and I/O functionality. The closed-loop cooling system would send liquid or cool air through the cells and back into the core.
Five or six of these cores and cells could be stacked to create super-cells.
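The capacities described above imply some simple arithmetic. A minimal sketch of the hierarchy, purely illustrative: the class names and the assumption that every slot is fully populated are ours, not part of HP's design.

```python
# Illustrative model of the "Lights Out" hierarchy as the article describes it.
# Capacity figures come from the article; everything else is an assumption.
from dataclasses import dataclass

SIDES_PER_CORE = 6          # hexagonal core
CELLS_PER_SIDE = 3          # three cells attach to each side
MAX_CELLS_PER_CORE = SIDES_PER_CORE * CELLS_PER_SIDE   # "up to 18 cells per core"
MAX_SUBCELLS_PER_CELL = 12                             # "possibly as many as 12 per cell"

@dataclass
class Core:
    cells: int       # cells attached to this core (at most 18)
    subcells: int    # sub-cells installed per cell (at most 12)

    def capacity(self) -> int:
        """Total sub-cells (compute/knowledge/connect units) this core hosts."""
        return (min(self.cells, MAX_CELLS_PER_CORE)
                * min(self.subcells, MAX_SUBCELLS_PER_CELL))

def super_cell_capacity(cores: int = 6) -> int:
    """A super-cell stacks five or six fully populated cores."""
    full = Core(cells=MAX_CELLS_PER_CORE, subcells=MAX_SUBCELLS_PER_CELL)
    return cores * full.capacity()

print(Core(18, 12).capacity())   # 216 sub-cells in a fully loaded core
print(super_cell_capacity(6))    # 1296 sub-cells in a six-core super-cell
```

So a single fully loaded core would hold 216 sub-cells, and a six-core super-cell 1,296 — a sense of the density the concept is aiming at.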
These super-cells may be able to rotate and could be placed around the facility in a number of fashions, depending on the needs of the customer.
Daniels' design team, with the help of Seattle-based design company Teague, is creating a scale model that should be complete within months. Then other units within HP, such as R&D and customer service, will be brought in to help determine how the technology within the concept would work and how to make it user-friendly.
What evolves over the next eight to 10 years probably won't mirror what HP's Enterprise Design Center is mapping out now, but many of the innovations and ideas should find their way into future designs, Daniels said.
HP customers, though they hadn't heard of the design, were pleased that their vendor is looking so far into the future, particularly in terms of addressing power and cooling.
Customers Appreciate Foresight
“It's nice to see such foresight, particularly when talking to a vendor,” said Dawn Sawyer, operations manager for Dallas-based GuideStone Financial Resources.
GuideStone runs about 80 servers in an 827-square-foot data center, and cooling and power are growing issues, Sawyer said. The company brought in blade servers over the past couple of years but has since taken most out because they ran too hot.
For Crossmark Holdings, a business services company in Plano, Texas, power is a bigger concern than cooling. The company has engineered its 3,000-square-foot data center to handle cooling needs, but in July Crossmark—which uses plenty of 1U (1.75-inch) and blade servers—had to bring in extra power cables. “Hopefully well be good for 18 months, but well see,” said Charles Orndorff, vice president of infrastructure services at Crossmark.
Earlier this year, AFCOM, an association of data center managers, and its Data Center Institute conducted a survey of its 3,000 members. Among resulting predictions were that by 2010 more than half of all data centers will have to relocate to new facilities or outsource some applications.
Another was that over the next five years, data center operations at 90 percent of businesses will be interrupted by power failures or power limitations.
AFCOM President Jill Eckhaus said such findings make it crucial for vendors such as HP to take a hard look at what they're doing to address such issues. Both she and Sawyer, who also serves on the Data Center Institute's board of directors, said the work vendors are doing now—from building more efficient processors to adding features to systems that help deal with power and cooling—is a step in the right direction. Having people such as Daniels thinking of entirely new scenarios also is important.
It's going to take a combination of traditional offerings and new models to address the issues, particularly power, which Eckhaus said is the “No. 1 issue in data centers right now.”
However, work must be done within budget constraints, which means wholesale adoption of entirely new models probably isn't feasible in the short term. But Daniels said he sees these changes happening in an evolutionary fashion over a number of years. Eckhaus said that makes sense.
“There's not just one solution and probably won't be just one solution,” said Eckhaus in Orange, Calif. “What HP is doing is important, and I think every vendor should do what they're doing.”
Senior Writer Chris Preimesberger contributed to this report.