When companies first began experimenting with virtualization, mostly in the server and storage realms, excitement trumped concerns about managing the inevitably more complicated environment or finding ways to optimize the new technology. Just by virtue of moving to a virtualized infrastructure, IT managers were pretty confident that they were achieving an acceptable return on investment.
But as companies have aggressively expanded their use of virtual machines, many are finding that the rules have changed. As the production environment expands, sometimes to thousands of virtual machines, IT managers no longer have the level of control over and insight into their infrastructure that they once had, and as a result, the return on investment is no longer as clear.
“They are having to go back and rethink what they have to do to achieve the best ROI expectations given an expanded infrastructure, realizing that the growth of the infrastructure in some ways is outpacing some of their capabilities,” said Stephen Elliot, a research manager for enterprise systems management at IDC. But it’s far from easy. A truly optimized environment means provisioning servers efficiently, matching virtual machine workload characteristics to the available hardware resource pool, dynamically balancing workloads as needed, minimizing idle virtual machines, consolidating physical servers onto virtual machines, and retiring virtual machines at the end of their life cycle.
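The workload-matching step described above can be sketched in miniature as a placement problem: each virtual machine declares its resource demands, and a scheduler picks the host whose spare capacity fits it most tightly so that no single host is left carrying mostly idle headroom. The sketch below is illustrative only; the `Host` class, resource names, and best-fit heuristic are assumptions for the example, not any vendor's actual scheduler.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: float          # available vCPU capacity
    mem_free: float          # available memory, in GB
    vms: list = field(default_factory=list)

def place_vm(hosts, vm_name, cpu, mem):
    """Best-fit placement: choose the host whose spare capacity
    most tightly accommodates the VM's declared demands."""
    candidates = [h for h in hosts if h.cpu_free >= cpu and h.mem_free >= mem]
    if not candidates:
        return None  # nothing fits; a rebalance or new host is needed
    best = min(candidates, key=lambda h: (h.cpu_free - cpu) + (h.mem_free - mem))
    best.cpu_free -= cpu
    best.mem_free -= mem
    best.vms.append(vm_name)
    return best.name
```

Real schedulers weigh far more than CPU and memory (network, disk I/O, affinity rules), but the principle is the same: placement decisions are made against the measured resource pool rather than by hand.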
The goal, said Mark Bowker, an analyst with Enterprise Strategy Group, is to create an environment that makes the best use of hardware resources and continues to maintain optimization on an ongoing basis through the use of automation and policies.
“If an application workload demands more capacity [such as memory, CPU, network or disk], there are policies in place that automate the allocation of resources and deliver a consistent service level to the end user,” he said.
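The policy-driven allocation Bowker describes amounts to a simple control loop: compare each workload's measured utilization against a policy threshold, and when utilization crosses it, grant an allocation step. A minimal sketch, assuming hypothetical resource names, thresholds, and step sizes chosen purely for illustration:

```python
# Illustrative policy table: thresholds and step sizes are assumptions
# for this sketch, not defaults from any real virtualization product.
POLICIES = {
    "cpu":    {"threshold": 0.85, "step": 2},  # grant 2 vCPUs above 85% use
    "memory": {"threshold": 0.90, "step": 4},  # grant 4 GB above 90% use
}

def evaluate(workload, usage, allocated):
    """Return the allocation adjustments a policy engine would apply
    for one workload, given current usage and current allocation."""
    actions = []
    for resource, policy in POLICIES.items():
        utilization = usage[resource] / allocated[resource]
        if utilization > policy["threshold"]:
            actions.append((workload, resource, policy["step"]))
    return actions
```

Run on a schedule against live telemetry, a loop like this is what lets the environment "continue to maintain optimization on an ongoing basis" without an administrator resizing virtual machines by hand.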