How to Respond to the Data Center Space Shortage

The data center space shortage is real, and it is escalating. In this article, London-based Allan Mertner, vice president of product delivery & IT at Tideway Systems, examines how companies can respond and what this means for IT departments.


As one of Europe's most active financial centers, London has been in the news lately due to the shrinking supply of data center real estate available to support its many large financial institutions. Experts now estimate that the vacancy rate in London's co-location facilities will approach zero percent by 2009. And London is not alone. Tier 1 Research reports that in 2006, global data center demand rose nearly 13 percent while facility supply rose only four percent.

At the same time, power and cooling costs are sharply increasing, with IDC reporting that global spending on data center power and cooling in 2007 was roughly equivalent to spending on servers. Gartner predicts that half of the world's data centers will face an acute power shortage by the end of 2008. The rapid dwindling of data center space and resources has far-reaching implications for all major metropolitan areas and centers of business, and corporations are facing pressure to address this issue before it becomes an emergency.

Many companies have responded to the data center space shortage by relocating their facilities to more remote (and often more affordable) locations. However, many real-time applications can tolerate a maximum of around 40 miles between a data center and the business unit it supports before network latency starts significantly degrading performance, which can directly affect a company's profitability. The limit is ultimately physical: light travels through optical fiber at roughly 125 miles per millisecond, so as a rule of thumb, adding 60 miles of distance adds a minimum of 1 millisecond to the time it takes for a single request to be turned around. The additional delay quickly adds up to a large perceived application lag because so many applications require a large number of request/response cycles to fulfill a single user action.
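How quickly those round trips compound can be seen with a back-of-the-envelope calculation. The sketch below is illustrative only; it uses the article's rule of thumb (about 1 ms of round-trip latency per 60 miles of separation), and the distance and cycle counts in the usage example are hypothetical.

```python
# Illustrative model of how distance-induced latency compounds into
# perceived application lag. Rule of thumb from the text: roughly
# 1 ms of round-trip latency per 60 miles of separation.

MS_PER_ROUND_TRIP_PER_60_MILES = 1.0

def perceived_lag_ms(distance_miles: float, round_trips: int) -> float:
    """Extra application lag (ms) caused by distance alone."""
    per_trip_ms = (distance_miles / 60.0) * MS_PER_ROUND_TRIP_PER_60_MILES
    return per_trip_ms * round_trips

# A chatty application that needs 500 request/response cycles per user
# action, hosted 120 miles from the business unit, picks up a full
# second of added lag from distance alone:
print(perceived_lag_ms(120, 500))  # 1000.0 (ms)
```

The point of the arithmetic is that the per-request penalty looks negligible in isolation; it is the multiplication by hundreds of request/response cycles per user action that turns a relocation decision into a user-visible performance problem.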

Certainly, the diminishing availability of data center space and resources has been a driving force behind enterprises' adoption of high-density computing and server virtualization. By employing virtualization technologies, many virtual environments can be hosted on one physical server, alleviating space concerns and allowing companies to pack multiple times the computing power into the same space. Indeed, Forrester Research reports that by 2009, two-thirds of enterprises will be employing server virtualization. Unfortunately, virtualization alone can create new challenges for power and cooling, as higher-density equipment creates "hot spots" within the data center if not properly distributed.

In the end, most enterprises will elect to combine the two approaches: relocating less critical data center components while implementing virtualization to consolidate new and existing servers.

Each of these projects can constitute a large-scale undertaking with significant risk for the IT department. The perceived risk of outages during such data center overhauls can create great internal resistance and is in fact one of the greatest obstacles facing companies considering relocation and virtualization. The dangers are real. In a 2001 survey conducted by Contingency Planning Research, 46 percent of respondents said that just an hour of downtime would cost their companies up to $50,000, and another 28 percent said an hour of downtime would cost up to $250,000. With this in mind, it can be daunting to know where to start with data center relocation and virtualization. What should be virtualized first? What cannot be relocated until its dependent services are addressed? And how can mistakes that lead to costly outages be avoided in the process?

Four major steps should serve as the framework of any major data center initiative; following them can help save money and minimize risk: