As one of Europe’s most active financial centers, London has been in the news lately because of the shrinking supply of data center real estate available to support its many large financial institutions. Experts now estimate that the vacancy rate in London’s co-location facilities will approach zero percent by 2009. And London is not alone. Tier 1 Research reports that in 2006, global data center demand rose nearly 13 percent while facility supply rose only 4 percent.
At the same time, power and cooling costs are rising sharply, with IDC reporting that global spending on data center power and cooling in 2007 was roughly equal to spending on servers. Gartner predicts that half of the world’s data centers will face an acute power shortage by the end of 2008. The rapidly dwindling supply of data center space and resources has far-reaching implications for every major metropolitan area and center of business, and corporations are under pressure to address the issue before it becomes an emergency.
Many companies have responded to the data center space shortage by relocating their facilities to more remote (and often more affordable) locations. However, many real-time applications can tolerate only around 40 miles between a data center and the business unit it supports before network latency begins to degrade performance significantly, which can directly affect a company’s profitability. As a rule of thumb, every additional 60 miles of distance adds at least 1 millisecond to the round-trip time of a single request. Because many applications require a large number of request/response cycles to fulfill a single user action, that extra delay quickly compounds into noticeable application lag.
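To see how quickly that delay compounds, a rough back-of-the-envelope calculation helps. The sketch below applies the 60-miles-per-millisecond rule of thumb to a hypothetical 120-mile move and an application that needs 200 request/response cycles per user action; both figures are illustrative assumptions, not measurements.

# Back-of-the-envelope estimate of the perceived lag added by distance.
# Assumes roughly 1 ms of round-trip time per 60 miles, per the rule of thumb above.
MS_PER_60_MILES = 1.0

def added_lag_ms(distance_miles: float, round_trips_per_action: int) -> float:
    """Extra delay the user perceives for one action, in milliseconds."""
    per_request = (distance_miles / 60.0) * MS_PER_60_MILES
    return per_request * round_trips_per_action

# Hypothetical example: a 120-mile relocation and 200 request/response cycles
# behind a single user action.
print(added_lag_ms(120, 200))   # -> 400.0 ms of added lag per user action

Even with conservative inputs, a fraction of a second of added lag per user action is enough for a trading desk or call center to notice.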
Certainly, the diminishing availability of data center space and resources has been a driving force behind enterprises’ adoption of high-density computing and server virtualization. By employing virtualization technologies, companies can host many virtual environments on a single physical server, alleviating space concerns and packing several times the computing power into the same footprint. Indeed, Forrester Research reports that by 2009, two-thirds of enterprises will be employing server virtualization. Unfortunately, virtualization alone creates new power and cooling challenges, as higher-density equipment produces “hot spots” within the data center if not properly distributed.
In the end, most enterprises will elect to do a combination of the two: relocating less critical data center components while implementing virtualization to consolidate new and existing servers.
Each of these projects is a large-scale undertaking that carries significant risk for the IT department. The perceived risk of outages during such data center overhauls can create great internal resistance and is, in fact, one of the greatest obstacles facing companies considering relocation and virtualization. The dangers are real. In a 2001 survey conducted by Contingency Planning Research, 46 percent of respondents said that a single hour of downtime would cost their companies up to $50,000, and another 28 percent put the cost at up to $250,000. With this in mind, it can be daunting to know where to start with data center relocation and virtualization. What should be virtualized first? What cannot be relocated until its dependent services are addressed? And how can mistakes that lead to costly outages be avoided along the way?
There are four major steps that should serve as the framework of any major data center initiative and that can help save money and minimize risk:
Four major steps
1. Examine and critically assess existing configuration data.
Before you can put a plan in place for the relocation or virtualization process, it is vital to have an accurate and up-to-date picture of your data center assets, the business services they provide, and the dependencies between them. Too often this information has been gathered manually, which guarantees that it will be subject to human error and out of date almost immediately. Even a seemingly trivial data point such as the number of servers at a location is often off by more than 20 percent unless the information is collected automatically, and the problem gets worse for more complex types of information. Discovery and dependency mapping can and should be automated, and the data collected must be very close to 100 percent accurate if it is to serve the project. Relying on data that you believe is accurate but cannot validate, and that is in reality only 60 to 80 percent correct, is one of the most dangerous positions to be in.
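As a minimal illustration of the gap between manual records and reality, the sketch below diffs a hand-maintained server list against the output of automated discovery and reports the discrepancy. The host names and figures are made up for illustration; the point is to quantify how far records have drifted before any planning begins.

# Compare a manually maintained inventory against automatically discovered hosts.
# All host names are hypothetical.
manual_inventory = {"web01", "web02", "db01", "ldap01", "app03"}
discovered_hosts = {"web01", "web02", "web03", "db01", "db02", "ldap01"}

missing_from_records = discovered_hosts - manual_inventory   # running but undocumented
stale_records = manual_inventory - discovered_hosts          # documented but gone

error_rate = len(missing_from_records | stale_records) / len(discovered_hosts)
print(f"Undocumented hosts: {sorted(missing_from_records)}")
print(f"Stale records:      {sorted(stale_records)}")
print(f"Inventory error rate: {error_rate:.0%}")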
2. Start at the top and prioritize.
So you’ve arrived at a set of reliable configuration data, and now you’re ready to begin planning the move. Chances are, you already know what your company’s first-priority business service is, and if you’ve successfully completed step one, you recognize which infrastructure components are necessary to support that service. But do you know which service is priority number two? Number 10? And have you considered the dependencies these services might have on one another? For example, many of your applications probably depend on your LDAP servers for authentication, so moving those servers requires a good understanding of everything that depends on them and what they in turn depend on in order to avoid costly outages. Thorough answers to these questions will help you map out a comprehensive sequence of events for relocating or virtualizing parts of the business one at a time, without inadvertently affecting critical services.
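One way to turn dependency data into a concrete move sequence is to treat services as a graph and order them so that nothing is scheduled before the services it depends on have been accounted for. The sketch below is a simplified illustration using standard topological sorting; the service names and dependencies are hypothetical.

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical map of "service -> services it depends on".
# A trading app depends on LDAP for authentication, LDAP depends on DNS, etc.
depends_on = {
    "trading-app": {"ldap", "market-data"},
    "intranet":    {"ldap"},
    "market-data": {"dns"},
    "ldap":        {"dns"},
    "dns":         set(),
}

# static_order() yields dependencies before their dependents, giving one safe
# sequence in which to plan the relocation or virtualization of each service.
move_order = list(TopologicalSorter(depends_on).static_order())
print(move_order)   # e.g. ['dns', 'ldap', 'market-data', 'trading-app', 'intranet']

In a real environment the graph would contain thousands of nodes drawn from the automated discovery data gathered in step one, but the principle is the same: the ordering itself tells you what cannot move until its dependencies have been dealt with.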
3. Put pilot programs in place.
Start small. The first service you relocate or virtualize should be one you’ve identified as non-critical and not too tightly intertwined with the infrastructure supporting other parts of the business. After moving this service across, review your latest inventory and compare it against the plan to verify that the change happened successfully and identify any areas where your data was insufficient to predict the impact of the change. Though errors and outages can take place at this stage, their consequences will be contained, and you will be able to use best practices learned from these pilot programs to ensure successful migration of the more critical services. This “crawl, walk, run” approach is tried and tested as a guiding principle in leading global investment banks, and is key to mitigating the risks of downtime and expensive outages.
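A simple way to verify a pilot is to re-run discovery after the move and diff the result against the planned end state. The sketch below illustrates the idea with made-up hosts and locations.

# Planned end state of the pilot versus what automated discovery actually
# found afterwards. Host and location names are hypothetical.
planned_state = {
    "hr-portal": "remote-dc",
    "hr-db":     "remote-dc",
    "ldap01":    "london-dc",   # deliberately left in place for now
}
discovered_state = {
    "hr-portal": "remote-dc",
    "hr-db":     "london-dc",   # did not move as planned
    "ldap01":    "london-dc",
}

mismatches = {
    host: (planned, discovered_state.get(host, "missing"))
    for host, planned in planned_state.items()
    if discovered_state.get(host) != planned
}
for host, (planned, actual) in mismatches.items():
    print(f"{host}: planned {planned}, found {actual}")
# -> hr-db: planned remote-dc, found london-dc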
4. Continue to monitor and manage change.
Keep paying attention to whether services are moving across successfully, and continue refining your processes for relocating and virtualizing components in accordance with best practices. As your confidence in your tools and processes grows, the scope of projects you can successfully complete will rapidly increase as well.
Relocating and virtualizing data center assets takes extensive planning and requires businesses to build a deeper picture of their own infrastructure than they may expect. Though the scale of these activities can seem overwhelming, IT automation speeds up the process and eliminates a great deal of guesswork. Businesses that take these smart steps to respond to the data center space shortage will be the ones that emerge on top while their competitors fall into crisis mode.
Allan Mertner is vice president of product delivery & IT at Tideway Systems and is based in London. Prior to joining Tideway, he was AVP of Development at Peregrine Systems, where he was responsible for bringing together two acquired products and delivering them as a unified product suite released as Enterprise Discovery. In late 2005, Peregrine Systems was acquired by Hewlett-Packard, and Allan played a key role in the pre-acquisition product and technology roadmap planning effort. Before joining Peregrine, Allan worked in Denmark, where he co-founded Ibsen Photonics and later worked for Maersk Data. He can be reached at a.mertner@tideway.com.