Data centers, especially those based upon industry-standard systems and software, have historically been static environments; that is, environments made up of servers, each configured to support a single operating system, data management system, application framework and set of applications. Those systems then access storage and the network using a pre-assigned configuration that can be changed only through a carefully planned set of manual procedures.
As the users of mainframes and single-vendor midrange systems discovered nearly three decades ago, this type of static thinking leads to a number of problems and must be replaced by the careful use of virtualization and automation. Adopting dynamic, adaptive thinking is an essential step, but it is worth remembering that the physical machines (systems, network and storage) must still be running for any of this to work. This is a lesson the managers of industry-standard, system-based data centers are only now learning.
How to avoid overprovisioning
Today’s industry-standard, system-based data centers often evolved without an overarching plan: each business unit or department selected systems and software to satisfy only its own requirements and support only its own flow of business. As a result, most data centers have become a warehouse (some would even say a museum) of “silos of computing.” Each silo was purchased with an eye only to one business unit or department’s needs, and each was often managed with its own tools, tools that may not play well with those the organization relies on to manage other silos.
Each business unit or department purchased enough system, software, storage and network resources to handle its own peak periods, along with enough redundancy to keep its business solutions up and available at all times.
This approach had an expensive side effect: those resources spent a great deal of time sitting idle, waiting for peak periods, and were therefore unavailable for the organization’s day-to-day processing needs. Considered across the whole organization, that idle capacity represents a significant amount of wasted IT investment. It is clear that this approach, one that seemed reasonable and prudent only a few years ago, is now a luxury that many organizations can no longer afford. A global market, rapidly changing market dynamics and shifting regulations have forced organizations to add efficiency and the best use of their resources to their list of priorities.
How to overcome problematic manual processes
When outages occur in the static data center, many organizations turn to error-prone manual processes and procedures to determine what’s happening. They will isolate the problem, move resources around so that the business can keep running, fix the problem, and then move resources back to their normal configuration. It’s also necessary to get physical machines turned on and loaded with the appropriate software. The network must be restarted or reconfigured. Storage systems must be restarted and reconfigured. Speed of recovery is heavily dependent upon getting the physical systems back up and configured.
Each of these steps can take a great deal of time, require costly expertise that the organization doesn’t normally have on staff, and be subject to human error. A further complication is that each computing silo is based upon different application and management frameworks, so the staff expertise that solves one part of the problem is not the expertise needed to solve the others.
It is clear that manual processes don’t scale well. This, of course, is the reason mainframe and midrange-based data centers turned to automation decades ago. Organizations want a dynamic data center that can handle planned and unplanned outages, effectively rolling back the clock, without any of the painful issues mentioned earlier.
How to envision the dynamic data center
Managers of industry-standard, system-based data centers have dreams of moving beyond a static environment. They imagine what it would be like if their data center could do the six things a dynamic data center does:
1. It would automatically find unused, and therefore wasted, resources on a moment-to-moment basis. These unused resources include systems, software, storage and networks.
2. It would automatically repurpose those resources in a coordinated, policy-based fashion to make optimal use of them, giving high-priority tasks resources first.
3. Once repurposed, the resources would automatically be assigned to useful tasks.
4. Each workload would be given the resources it needs without being able to interfere with or slow down other tasks.
5. Unneeded resources would be freed so that, if the organization so desired, they could be powered down to reduce power consumption and heat generation. Those resources could later be powered back up, provisioned for the tasks at hand and put to work as needed.
6. New resources would be added only when currently available resources were genuinely exhausted.
What’s clear is that everything must adapt in real time and in a coordinated way; otherwise, problems are simply being shuffled around rather than solved.
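To make that coordination concrete, here is a minimal sketch of such a control loop, written in Python. The Resource and Workload classes, the rebalance() function and the priority scheme are hypothetical illustrations, not any product’s actual API; real data-center automation products expose far richer inventory and policy interfaces.

```python
# Illustrative only: a toy version of the six-step control loop described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    name: str
    powered_on: bool = True
    assigned_to: Optional[str] = None   # workload name, or None if idle

@dataclass
class Workload:
    name: str
    priority: int    # lower number means higher priority
    needed: int      # resources this workload still requires

def rebalance(resources: list, workloads: list) -> None:
    # 1. Find unused, and therefore wasted, resources.
    idle = [r for r in resources if r.assigned_to is None and r.powered_on]

    # 2 and 3. Repurpose idle resources in priority order and assign them to useful tasks.
    # (4. Workload isolation itself is enforced by the virtualization layer, not shown here.)
    for wl in sorted(workloads, key=lambda w: w.priority):
        while wl.needed > 0 and idle:
            resource = idle.pop()
            resource.assigned_to = wl.name
            wl.needed -= 1

    # 5. Power down anything still unneeded to cut power consumption and heat.
    for r in resources:
        if r.assigned_to is None and r.powered_on:
            r.powered_on = False

    # 6. Only when demand is still unmet would new resources be requested.
    unmet = sum(w.needed for w in workloads)
    if unmet:
        print(f"Capacity exhausted: {unmet} additional resource(s) required")

if __name__ == "__main__":
    pool = [Resource(f"server-{i}") for i in range(4)]
    jobs = [Workload("billing", priority=1, needed=2),
            Workload("reporting", priority=2, needed=1)]
    rebalance(pool, jobs)
    for r in pool:
        print(r)
```

Running the example assigns two servers to the higher-priority billing workload, one to reporting, and powers down the remaining server, exactly the kind of coordinated, policy-based reshuffling the list above describes.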
How to achieve your dreams of a dynamic data center
Over the past few years, virtualization and automation products have become available for industry-standard systems, operating systems and applications. It is now possible for an organization to work with a “logical” or “virtual” view of resources, a view that is often strikingly different from the actual physical view.
What does this really mean? System users may see a single physical computer as if it were many different systems running different operating systems and application software. Or they may be presented with a view in which a group of systems appears to be a single computing resource. Other virtualization technology may allow individuals to access computing solutions using devices that didn’t exist when developers created an application, or may present the image that long-obsolete devices are available for use in the virtual environment even though none are actually installed.
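As one small illustration of that logical-versus-physical split, the sketch below (assuming a Linux host running the KVM/QEMU hypervisor with the libvirt Python bindings installed) asks a single physical machine to list the many “virtual systems” it is presenting; other hypervisors offer comparable interfaces.

```python
# Sketch: list the logical systems a single physical host is presenting.
# Assumes a KVM/QEMU host and the libvirt Python bindings (pip install libvirt-python).
import libvirt

# Open a read-only connection to the local hypervisor.
conn = libvirt.openReadOnly("qemu:///system")
try:
    print(f"Physical host: {conn.getHostname()}")
    # Each domain is a complete "virtual system" carved out of the one physical machine.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"  logical system: {dom.name()} ({state})")
finally:
    conn.close()
```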
In the end, the appropriate use of these layers of technology offers organizations a number of benefits: improved scalability, reliability and performance; far greater agility than is possible in a purely physical environment; and better use of hardware, software and staff resources. Achieving these broader goals requires IT decision makers to think beyond the server.
There have been many successful implementations of this type of virtualization using products from several suppliers. Products are now available that act as an operating environment for the whole data center: resources are discovered automatically and can be managed automatically to meet the organization’s own service level objectives and policies.
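The sketch below suggests what “managed automatically to meet service level objectives” can look like in practice. The Policy class and the enforce() function are hypothetical illustrations rather than any supplier’s actual policy language, which will be considerably richer.

```python
# Illustrative only: scaling a service up or down against a service level objective (SLO).
from dataclasses import dataclass

@dataclass
class Policy:
    service: str
    max_response_ms: float      # the service level objective
    min_instances: int
    max_instances: int

def enforce(policy: Policy, observed_response_ms: float, instances: int) -> int:
    """Return the instance count the policy calls for, given the observed response time."""
    if observed_response_ms > policy.max_response_ms and instances < policy.max_instances:
        return instances + 1      # breaching the SLO: add capacity
    if observed_response_ms < policy.max_response_ms * 0.5 and instances > policy.min_instances:
        return instances - 1      # comfortably within the SLO: release capacity
    return instances              # otherwise leave the allocation alone

if __name__ == "__main__":
    web_policy = Policy("web-storefront", max_response_ms=250.0,
                        min_instances=2, max_instances=10)
    print(enforce(web_policy, observed_response_ms=400.0, instances=3))  # prints 4
    print(enforce(web_policy, observed_response_ms=90.0, instances=3))   # prints 2
```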
IT managers forced to grapple with the issues mentioned in this article would be well-advised to learn more about how products of this type could help them meet their objectives while lowering their overall costs of computing through the deployment of a dynamic data center environment.