How to Transform Your Static Data Center into a Dynamic Data Center

 
 
By Daniel Kusnetzky  |  Posted 2008-11-05

IT managers have recently been forced by rapidly changing market dynamics and regulations to find ways to lower their static data centers' computing costs and resource consumption. Transforming an industry-standard, system-based, static data center into a virtualized, automated, dynamic data center can lower costs and increase efficiency. Here, Knowledge Center contributor Daniel Kusnetzky explains how virtualization and automation can turn your static data center into a dynamic one.

Data centers, especially those based upon industry-standard systems and software, have historically been static environments. That is to say, environments in which each server is configured to support a single operating system, data management system, application framework and a number of applications. Systems then access both storage and the network using a pre-assigned configuration that can be changed only through a carefully planned set of manual procedures.

As the users of mainframes and single-vendor midrange systems discovered nearly three decades ago, this type of static thinking leads to a number of problems and must be replaced by the careful use of virtualization and automation. Adopting dynamic, adaptive thinking is an important step, but it is worth remembering that the physical machines (including systems, network and storage) must still be running for all of this to work. This is a lesson the managers of industry-standard, system-based data centers are learning only now.
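To make the contrast concrete, here is a minimal Python sketch of the idea. All host names, CPU counts and workload demands are illustrative assumptions, not figures from this article: in the static model every workload is bound to its own dedicated server, while in the virtualized model the same workloads are packed onto a shared pool of hosts.

# Hypothetical illustration only: host capacities and workload demands are made up.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity_cpus: int
    used_cpus: int = 0
    workloads: list = field(default_factory=list)

    def can_fit(self, demand: int) -> bool:
        return self.used_cpus + demand <= self.capacity_cpus

    def place(self, workload: str, demand: int) -> None:
        self.workloads.append(workload)
        self.used_cpus += demand

# Average CPU demand of four departmental workloads (illustrative numbers).
workloads = {"payroll": 2, "crm": 3, "intranet": 1, "reporting": 2}

# Static model: every workload gets its own dedicated 8-CPU server.
static_hosts = []
for name, demand in workloads.items():
    host = Host(f"server-{name}", capacity_cpus=8)
    host.place(name, demand)
    static_hosts.append(host)

# Virtualized model: the same workloads are packed first-fit onto a shared pool.
pool = [Host(f"pool-{i}", capacity_cpus=8) for i in range(len(workloads))]
for name, demand in workloads.items():
    next(h for h in pool if h.can_fit(demand)).place(name, demand)
hosts_in_use = [h for h in pool if h.workloads]

print(f"Static silos:     {len(static_hosts)} servers for {len(workloads)} workloads")
print(f"Virtualized pool: {len(hosts_in_use)} server(s) for the same workloads")

Run as written, the sketch reports four dedicated servers for the static model and a single pooled server for the virtualized one; real consolidation ratios depend on actual peak and average demand, but the direction of the savings is the point.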

How to avoid overprovisioning

Today's industry-standard, system-based data centers often evolved without an overarching plan. Each business unit or department selected systems and software to satisfy only its own requirements and support only its own flow of business. As a result, most data centers have become warehouses (some would say museums) for "silos of computing." Each silo was purchased with an eye only to an individual business unit or department's needs, and each was often managed with its own tools, tools that may not play well with those the organization relies on to manage its other silos.

Business units and departments purchased enough system, software, storage and network resources to handle their own peak periods, plus enough redundancy to keep their business solutions up and available at all times.

This approach also had an expensive side effect: those resources sat idle a great deal of the time, waiting for peak periods. Taken together, all of that idle capacity represents a large portion of the organization's IT investment that is wasted, because capacity locked in one silo is not available for the day-to-day processing requirements of the rest of the organization. It is clear that this approach, which seemed reasonable and prudent only a few years ago, is now a luxury many organizations can no longer afford. A global market, rapidly changing market dynamics and regulations have forced organizations to put efficiency and the best use of their resources high on their list of priorities.
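A simple back-of-the-envelope calculation makes the cost of that idle capacity visible. The numbers below are purely hypothetical, but the pattern (each silo sized for its own peak, with most capacity idle on an average day) is the one described above:

# Back-of-the-envelope sketch with purely hypothetical numbers: four silos,
# each provisioned for its own peak rather than for its average demand.
silos = {
    # name: (provisioned capacity, average demand), in arbitrary CPU units
    "payroll":   (8, 2),
    "crm":       (8, 3),
    "intranet":  (8, 1),
    "reporting": (8, 2),
}

provisioned = sum(capacity for capacity, _ in silos.values())
average_use = sum(average for _, average in silos.values())
idle = provisioned - average_use

print(f"Provisioned capacity:  {provisioned} units")
print(f"Average demand:        {average_use} units")
print(f"Idle outside of peaks: {idle} units ({idle / provisioned:.0%} of the investment)")

With these assumed figures, three quarters of the purchased capacity sits idle outside of peak periods, which is exactly the kind of waste a shared, virtualized pool is meant to reclaim.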

How to overcome problematic manual processes

When outages occur in the static data center, many organizations turn to error-prone manual processes and procedures to determine what's happening. They will isolate the problem, move resources around so that the business can keep running, fix the problem, and then move resources back to their normal configuration. It's also necessary to get physical machines turned on and loaded with the appropriate software. The network must be restarted or reconfigured. Storage systems must be restarted and reconfigured. Speed of recovery is heavily dependent upon getting the physical systems back up and configured.

Each of these steps can take a great deal of time, can require costly expertise the organization doesn't normally keep on staff, and is subject to human error. A further complication is that each of the computing silos is built on different application and management frameworks, so the staff expertise that solves one part of the problem is not the expertise needed to solve the others.

It is clear that manual processes don't scale well. This, of course, is the reason mainframe and midrange-based data centers turned to automation decades ago. Organizations want a dynamic data center that can absorb planned and unplanned outages and return quickly to normal operation, without any of the painful issues mentioned earlier.
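This article doesn't prescribe a specific product, but the kind of automation it points toward can be sketched in a few lines of hypothetical Python: a supervisory routine that notices an unhealthy service and redeploys it onto a spare host, instead of waiting for staff to work through a manual runbook. All host, service and function names below are invented for illustration.

# Hypothetical sketch of supervisory automation in a dynamic data center.
# A real implementation would probe services and drive a hypervisor or
# provisioning API; here those steps are simulated with plain data.

services = {"payroll": "host-a", "crm": "host-b", "intranet": "host-a"}
spare_hosts = ["host-c", "host-d"]
failed_hosts = {"host-b"}          # pretend host-b has just gone down

def is_healthy(service: str, host: str) -> bool:
    # Stand-in for a real health check (ping, HTTP probe, hypervisor query).
    return host not in failed_hosts

def redeploy(service: str, host: str) -> None:
    # Stand-in for restarting a virtual machine image on another host.
    print(f"Recovering {service}: redeploying on {host}")

def supervise() -> None:
    # Detect failed services and move them to spare capacity automatically.
    for service, host in list(services.items()):
        if not is_healthy(service, host):
            new_host = spare_hosts.pop(0)
            redeploy(service, new_host)
            services[service] = new_host

supervise()
print(services)   # crm has moved from host-b to host-c with no manual steps

In production the same loop would run continuously and drive virtualization and orchestration tools rather than print statements, but the principle is the same: the recovery steps are encoded once and executed consistently, instead of being repeated by hand under pressure.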

Daniel Kusnetzky is Principal Analyst at The Kusnetzky Group. He has over 30 years of industry experience. He is responsible for research and analysis on open source software, virtualization software and system software. He examines emerging technology trends, vendor strategies, research and development issues, and end-user integration requirements. In the past, he was Executive VP for Open-Xchange, Inc., and Program VP of System Software Research for International Data Corporation. He can be reached at dan@kusnetzky.net.