5 Steps to a Scalable Data Center
So, you want to make over your old storage data center—or, even more daunting, build an entirely new data center from scratch?
Here, eWEEK offers five steps to building or rebuilding a data storage center. Yes, we know it will take more like 500 steps, but we're hitting only the most important universal highlights, as researched through a number of sources.
And there is no dearth of examples to learn from.
Hewlett-Packard, for one, recently announced its Greenfield project, in which 85 data centers around the world will be consolidated into a mere six. The new centers will be using low-power, high-capacity blade servers with upgraded management tools, improved data center design, vastly improved power and cooling devices, and automation/virtualization at every turn.
And, after 12 months of design and construction, Sun Microsystems opened the doors on Aug. 21 to a new, 76,000-square-foot data center designed to demonstrate eco-friendly technology to its customers as well as the company's commitment to green IT. Located in Sun's hometown of Santa Clara, Calif., the data center will use less than half the electricity of its predecessors.
In addition, IBM announced Aug. 1 that it will consolidate approximately 3,900 of its own servers onto 33 virtualized System z mainframes running Linux to save energy and cut back on its carbon footprint. IBM officials expect the new server environment to use about 80 percent less power than the company's current open-systems setups. Big Blue expects to save more than $250 million over five years in energy, software and system support costs—and that's a conservative figure.
In each of these futuristic new data centers, servers will be powered down or off whenever possible; active servers will be provisioned more intelligently; wasted cycles will be avoided at all costs; and server and storage utilization will shoot up into the 70 to 90 percent range, where in the past it often languished at 30 to 40 percent. All of this can directly result in cleaner air, lower costs to do business and increased power availability.
You, too, can build/rebuild a scalable yet green storage data center that will serve you well for years to come. Here are five steps to get you going.
Step 1: Get the board on board.
Make sure all the key executives and board members “get it” and are behind your project far in advance. IT management, which will be running the new data center, needs to have as many decision-makers on its side—from the president/CEO and chair of the board of directors on down—as possible.
“HP did this exact thing when Mark Hurd [the current HP CEO, who replaced Carly Fiorina in March 2005] brought Randy Mott on board [as CIO] and consolidated from 85 down to six data centers,” said Patrick Eitenbichler, director of marketing for HP's StorageWorks.
Similarly, when Sun President and CEO Jonathan Schwartz took over the company reins from cofounder Scott McNealy in April 2006, Schwartz immediately cited storage and data center innovation as two of his company's priorities. Shortly thereafter, he hired David Douglas, Sun's vice president of Eco Responsibility, who oversaw the launch of Sun's Project Blackbox later that year. The Sun Blackbox is a fully contained, 20-by-8-foot portable data center that needs no outside air cooling to do its work.
However, most companies are not on the same page when it comes to IT and data storage provisioning.
“Right now, the problem is that there is a rift—if not a chasm—that exists between how senior-level executives will set out their objectives for what needs to be done and how the IT people are then, in some ways, strapped to be able to implement that infrastructure,” Sun storage CTO Randy Chalfant told eWEEK. “There is this chasm of understanding between business and IT. Then what happens is, given the resources in time and money that are allocated, and based on business decisions, there is a total lack of understanding for how an infrastructure works; the people back there [in IT] are just trying to survive.”
The net/net of all this misunderstanding is that there are gigantic amounts of infrastructure being implemented and being wasted, serving no valuable purpose, Chalfant said. If a data center can be built with power, cooling, automation and virtualization efficiencies built in from the start, then the whole project stands a good chance of long-term success.
Storage resources now represent as much as 45 percent of the infrastructure budget in many large enterprises. Networked storage is no longer a small, isolated island of spending and resource deployment.
Step 2: Choose location carefully.
Location, location, location. This isn't just a real estate agent's mantra—the location of a storage data center is as important as what goes inside it.
The location you finally select is likely to be nowhere near company headquarters—or even near a remote office.
For example, Google, Yahoo and a number of Web 2.0 companies are looking far and wide for data center locations, and they're generally not in highly populated places. Available power supply and square footage are the two biggest requirements. Proximity to major population centers is low on the priority list.
Google recently completed a major development in The Dalles, Ore., east of Portland. The Columbia River provides virtually unlimited hydroelectric power on a comfortable, two-football-field-size lot.
The idea of building a data center in a foreign country is also quite common. Many U.S. companies already have data centers in Europe and Asia. And Iceland, of all places, has been very proactive in trying to sell American- and U.K.-based companies on building or co-locating on the island in the upper North Atlantic.
Nordic and Eastern European countries such as Finland, Poland and Hungary also have made efforts to attract data centers.
Step 3: Design it green from the get-go.
A so-called “green” data center is one in which the lighting/cooling, mechanical, electrical and computer systems are designed for maximum energy efficiency and minimum environmental impact. A green data center can run on 50 to 80 percent less power today than data centers built anywhere from two to 30 years ago did.
In addition to reducing energy consumption, the construction and operation of a green data center should minimize the size of the building; maximize cooling efficiency; install catalytic converters on backup generators; and use alternative energy technologies, such as photovoltaics, electric heat pumps and evaporative cooling, whenever possible.
Water-cooling of servers is becoming trendy but is complicated to install and operate. The long-term benefits can be great, however: Chilled, circulated water can provide 10 times the cooling that air conditioning offers, Sun's Chalfant said.
According to recent Gartner Group and federal Environmental Protection Agency reports, the power demands of IT equipment in the United States have grown fivefold or more in the last seven years and are expected to double again by 2011. Companies now spend far more on the power to run a server over its lifetime than they spend to purchase it.
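The lifetime-power-versus-purchase-price claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses hypothetical numbers (a 400 W server, four-year life, $0.10/kWh, and a PUE of 2.0), not figures from the reports cited above:

```python
def lifetime_power_cost(watts, years, dollars_per_kwh, pue=2.0):
    """Total electricity cost over a server's service life. PUE (power
    usage effectiveness) folds in cooling and facility overhead."""
    hours = years * 365 * 24
    kwh = watts / 1000 * hours * pue
    return kwh * dollars_per_kwh

# Hypothetical 1U server: 400 W draw, 4-year life, $0.10/kWh, PUE of 2.0.
print(round(lifetime_power_cost(400, 4, 0.10)))  # 2803
```

At those assumed rates, the electricity bill alone rivals a typical entry-level server's purchase price, which is the point the reports make.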
HP, Sun, IBM, NetApp, EMC, Data Domain, Rackable Systems, Hitachi Data Systems and other data center hardware/software suppliers have service department staffers who will sit down and help design a data center to operate optimally; this includes rack placement, air-conditioning airflow, power controls and numerous other factors.
Liebert, an Emerson Network Power company, is the U.S. market leader in providing data center air conditioning, uninterruptible power supplies, battery systems, surge protection systems and chip cooling. The company, based in Columbus, Ohio, has partnerships with virtually all the major suppliers noted above.
“We're seeing people like the HPs of the world consolidating data centers into one facility,” said Steve Madara, vice president and general manager of Liebert Precision Cooling. “Sun is also doing that. At the same time, we're seeing new facilities go up for the Googles, Yahoos and Microsofts, and for the financial guys in New York. We're seeing both new center generation and build-out and renovation of older sites.”
Madara said that the two keys to optimal use of power as it comes into the data center are having the most efficient power supplies available in the servers and using virtualization to consolidate the number of servers being used.
Step 4: Choose hardware and software carefully.
The hardware and software for your storage data center should be chosen not only for performance and quality, but also for energy efficiency and scalability.
Open systems that allow such features as hot-swappable disk drives, power supplies and fans have obvious major advantages. Hardware components and software that can play nicely in a production situation with similar products made by competing vendors are also recommended. Open-standards—and not necessarily open-source—products are the key.
It's also important to look for components that can literally snap together and work: blades, switches, power supplies, networking connectors, and so on. The more versatile a product is, the better. The best vendors know this and will provide interconnecting components whenever possible. It lifts a huge burden off data storage managers' shoulders when they can add capacity on short notice (within a week or two of when it's required to go online) because components are modular.
Virtualization of servers and storage is becoming a must in data centers. Simply put, this means capturing computing resources and running them on shared physical infrastructure in such a way that each appears to exist in its own separate physical environment. This happens by treating storage and computing resources as an aggregate pool from which networks, systems and applications can be drawn on an as-needed basis. EMC's VMware, Microsoft and the open-source Xen are the three largest players in this realm.
By using virtualization software correctly, the consolidation of under-utilized servers goes unnoticed by the user and significantly reduces power consumption.
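Why consolidation saves so much power comes down to simple capacity math: many lightly loaded machines can be packed onto a few well-utilized hosts. This sketch uses hypothetical utilization figures (not from any vendor cited here) to show the scale of the reduction:

```python
import math

def hosts_needed(n_servers, avg_util_pct, target_util_pct):
    """Physical hosts required to run n virtualized servers, given the
    average per-server utilization and the target host utilization,
    both expressed in whole percent."""
    return math.ceil(n_servers * avg_util_pct / target_util_pct)

# Hypothetical fleet: 100 servers idling at 15% utilization,
# consolidated onto hosts driven to 75% utilization.
print(hosts_needed(100, 15, 75))  # 20 hosts instead of 100
```

Driving utilization from the historical 30-to-40-percent range toward 70-to-90 percent, as the article describes, is what turns dozens of boxes into a handful.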
Deduplication and thin provisioning are also must-use tools in the green and scalable data center. Data deduplication eliminates redundant data throughout the storage network and adds a high level of efficiency and cost-effectiveness within the network.
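The core idea of data deduplication can be shown in a few lines: fingerprint each chunk of data, store each unique chunk once, and keep only references for the repeats. This is a minimal sketch of the technique, not any vendor's implementation:

```python
import hashlib

def dedup(chunks):
    """Store each unique chunk once, keyed by its SHA-256 digest, and
    return the store plus the key sequence needed to rebuild the data."""
    store, refs = {}, []
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)   # keep only the first copy
        refs.append(key)               # remember order for reassembly
    return store, refs

# Three copies of the same attachment collapse to one stored chunk.
store, refs = dedup([b"report-v1", b"logo", b"report-v1", b"report-v1"])
print(len(refs), len(store))  # 4 2
```

Production systems chunk at the block or variable-length level rather than per file, but the space savings come from exactly this one-copy-plus-references structure.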
Thin provisioning is a method of storage resource management and virtualization that lets IT administrators limit the allocation of actual physical storage to what applications immediately need. It enables the automatic addition of capacity on demand up to preset limits so that IT departments can avoid buying and managing excessive amounts of disk storage.
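Thin provisioning's behavior can be modeled as a toy class: the application sees a large logical volume, but physical space is consumed only as data is written, up to a preset limit. This is an illustrative sketch with hypothetical names and sizes, not a real array's API:

```python
class ThinVolume:
    """Toy model of a thin-provisioned volume: large logical size,
    physical blocks allocated only on write, capped at a preset limit."""
    def __init__(self, logical_gb, physical_limit_gb):
        self.logical_gb = logical_gb            # what the application sees
        self.physical_limit_gb = physical_limit_gb
        self.allocated_gb = 0                   # what is actually consumed

    def write(self, gb):
        if self.allocated_gb + gb > self.physical_limit_gb:
            raise RuntimeError("physical limit reached -- add capacity")
        self.allocated_gb += gb

vol = ThinVolume(logical_gb=1000, physical_limit_gb=200)
vol.write(50)
print(vol.logical_gb, vol.allocated_gb)  # 1000 50
```

The gap between the 1,000 GB the application sees and the 50 GB actually consumed is exactly the disk an IT department avoids buying up front.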
EqualLogic, Hitachi Data Systems, EMC, NetApp, 3PAR and CommVault all offer thin provisioning for either SAN (storage area network) or iSCSI storage systems. This will become an important factor in the green data centers yet to be built.
Step 5: Turn it off whenever possible.
With everything mentioned above in place, you can go ahead and “turn on” your data center. Just be sure that everything in it can be turned off whenever possible.
“If you have the right management and automation systems, then you will be able to turn it [systems or parts of systems] off cleanly,” said Kevin Epstein, vice president of marketing for Scalent Systems, in Palo Alto, Calif. “This is opposed to the old idea, 'Gee, it's up and running—don't touch it!'
“Companies can't afford to run all their systems 24/7, and they shouldn't,” Epstein added. “This is versus a scenario where things are cyclical, like transaction systems versus e-mail systems. You power down the spindles you don't need when you don't need them. Then, when you really need the power and capacity, you have it ready to go.”
Scalent Systems software enables data centers to react in real time to changing business needs by dynamically changing which servers are running and how those servers are connected to network and storage. The result is an adaptive infrastructure (similar to HP's vision) where data centers can transition between different configurations—or from bare metal to live, connected servers—in 5 minutes or less, without physical intervention.
U.S. data centers are now eating up about 61 billion kilowatt hours at a cost of $4.5 billion per year, according to a new EPA task force report. Another report, commissioned by London-based 1E and researched by the Alliance to Save Energy, found that U.S. companies consume one third of that number—19.8 billion kilowatt hours at a cost of about $1.72 billion—simply from leaving personal computers on overnight.
Again, these are only corporate desktop computers—not servers, laptops or mainframe machines.
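The scale of overnight waste is straightforward to estimate for any fleet. The sketch below uses hypothetical inputs (fleet size, idle draw, off-shift hours), not the study's own methodology:

```python
def overnight_waste_kwh(n_pcs, watts_idle, hours_off_shift, nights_per_year):
    """Annual kWh consumed by PCs left idling outside working hours."""
    return n_pcs * watts_idle / 1000 * hours_off_shift * nights_per_year

# Hypothetical enterprise: 10,000 PCs idling at 80 W for 14 off-shift
# hours a night, 250 working nights a year.
print(int(overnight_waste_kwh(10_000, 80, 14, 250)))  # 2800000
```

At $0.10/kWh, that hypothetical 2.8 million kWh is roughly $280,000 a year for a single company, which is why automated shutdown tools pay for themselves quickly.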
When it comes to controlling an enterprise's worth of PCs, 1E is a specialist. The company's centrally controlled Windows power management software automates PC hibernation, shutdown, wakeup and patch management from the server. 1E is already helping companies like Allstate, HSBC and Verizon save millions of dollars in PC power consumption, just by automating the powering down of hundreds or thousands of PCs when they're not needed. 1E has a Vista-ready product coming soon.
On the server side, Cassatt offers an appliance-based, software-agnostic platform that automates power efficiencies specifically in the data center through pre-established policies. It can automatically turn off or power down as many as 400 servers on nights and weekends, allow only those servers needed for current demand to be running, and power systems down in the event of a brownout (or high-use local capacity) situation.
Typically, far more storage is available to users than they appreciate or understand. Sun recently completed a study at 200 user sites worldwide, installing tools to measure storage allocation and utilization efficiency. The results were surprising: On average, about 70 percent of disk space is simply wasted.
Companies such as MonoSphere, Asigra and Onaro offer high-level storage allocation, provisioning and reporting software, with varying additional service options. A number of Fortune 1000 companies are now using these tools, and, as time goes on and more and more data is stored, the value of trustworthy provisioning software is going to continue to rise.