Virtualization Complexity Increases With Scale
Cloud and virtualized environments create shared pools of compute, storage and network resources that support business applications. Shared-resource infrastructure is dynamic and has a near-limitless web of interdependencies, which makes it complex to allocate resources correctly to and across applications while maintaining performance levels. Pinpointing the cause of performance degradation and identifying the correct resolution are challenging because of the new complexities virtualization introduces. The impact? It’s impossible for a human to solve performance or utilization problems before the dynamics of the environment change and render the solution moot.
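To make that concrete, here is a minimal Python sketch of the kind of contention check an operator might run by hand. The Host class, host names and the 85 percent threshold are illustrative assumptions, not any vendor's model; the point is that the report is stale almost as soon as it prints.

```python
# Minimal sketch: why shared-resource contention outpaces manual triage.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_capacity_ghz: float
    vm_demands_ghz: dict = field(default_factory=dict)  # vm name -> CPU demand

    def utilization(self) -> float:
        return sum(self.vm_demands_ghz.values()) / self.cpu_capacity_ghz

def find_contended_hosts(hosts, threshold=0.85):
    """Flag hosts where aggregate VM demand risks degrading every tenant VM."""
    return [h for h in hosts if h.utilization() > threshold]

hosts = [
    Host("esx-01", 40.0, {"web-1": 12.0, "db-1": 18.0, "batch-1": 9.0}),
    Host("esx-02", 40.0, {"web-2": 8.0, "cache-1": 6.0}),
]
for h in find_contended_hosts(hosts):
    # By the time a human reads this report, migrations or demand spikes
    # may have changed which host is hot and which VMs share it.
    print(f"{h.name}: {h.utilization():.0%} CPU demand; shared by {list(h.vm_demands_ghz)}")
```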
The Shift Will Force Siloed Teams to Collaborate
A long-term effect of shared-resource infrastructure, and the web of dependencies it creates, is that infrastructure teams that have traditionally operated in silos will need to come together and collaborate, and to adopt modern tools that provide a holistic view of the virtualized and converged infrastructure.
Processes—and Mindsets—Will Need to Change
Infrastructure Will Be Controlled by Software—Now What?
A software-defined data center allows the physical infrastructure to be dynamically configured and reconfigured to support changing requirements and priorities. That opens the door to far greater agility in configuring the infrastructure and spinning up new virtual machines and applications, all at the touch of a button. You will love the speed and flexibility, but deciding what, when and where to spin up applications still requires significant time and “heavy lifting” unless you have the right control system in place.
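As a hedged illustration, the sketch below provisions a VM through a hypothetical SDDC REST endpoint. The URL, payload fields and token are invented for the example and do not correspond to any particular vendor's API; note that the “what, when and where” decisions are still inputs a human (or a control system) must supply.

```python
# Hypothetical sketch: spinning up a VM through an SDDC's REST API.
# The endpoint, payload fields and auth scheme are assumptions for
# illustration only; they do not match any specific vendor's API.
import requests

SDDC_API = "https://sddc.example.com/api/v1"   # hypothetical controller URL
TOKEN = "REPLACE_ME"                            # hypothetical auth token

def provision_vm(name: str, vcpus: int, memory_gb: int, cluster: str) -> str:
    """Request a new VM; 'where' (the cluster) is still a decision we
    had to make ourselves -- the API only executes it."""
    resp = requests.post(
        f"{SDDC_API}/vms",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "vcpus": vcpus, "memory_gb": memory_gb,
              "cluster": cluster},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["vm_id"]

# One call spins up the VM, but choosing vcpus, memory_gb and cluster is
# exactly the "heavy lifting" a control system should automate.
vm_id = provision_vm("web-3", vcpus=4, memory_gb=16, cluster="prod-east")
```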
Data and Metrics Can Overwhelm Your Environment
The marketplace is filled with vendors offering solutions to monitor and report on a broad set of performance metrics for virtual infrastructures. Collecting thousands of data points on multiple entities within your virtual estate will create a big data problem—and increase costs for storing all that data—if you are not careful. And once you have all that data, then what? Do you devote valuable manpower to analyzing it? Do you wait for a management tool to trigger an alert based on some perceived anomaly? When you receive an alert, will the operations team know how to interpret the metrics and decide what actions to take? Too many cycles are spent finding and trying to fix discrete performance issues instead of preventing them in the first place.
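One pragmatic answer is to aggregate aggressively. The following Python sketch, with an assumed (metric, timestamp, value) sample format, rolls raw samples up to hourly averages and peaks so you keep the trend without hoarding every data point.

```python
# Minimal sketch: rolling raw samples up to hourly aggregates so metric
# storage does not become its own big data problem. Schema is illustrative.
from collections import defaultdict
from statistics import mean

def hourly_rollup(samples):
    """samples: iterable of (metric_name, epoch_seconds, value).
    Returns {(metric_name, hour): (avg, peak)} -- keep the rollup,
    age out the raw points."""
    buckets = defaultdict(list)
    for metric, ts, value in samples:
        buckets[(metric, ts // 3600)].append(value)
    return {key: (mean(vals), max(vals)) for key, vals in buckets.items()}

# 180 raw samples at 20-second intervals collapse into a couple of summaries:
raw = [("vm42.cpu_ready_pct", 1_700_000_000 + i * 20, 2.0 + (i % 5)) for i in range(180)]
rollup = hourly_rollup(raw)
```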
Expect to Encounter the ‘Goldilocks Effect’
When it comes to getting the most out of cloud infrastructure resources, cloud administrators often face the “Goldilocks Effect”: the challenge of keeping resource allocations and utilization neither “too hot” nor “too cold” but “just right,” so that resources are used efficiently while service levels and business constraints are met. These are all controls that can be automated.
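A minimal sketch of such a control, assuming made-up 30 percent and 80 percent utilization thresholds:

```python
# Minimal "Goldilocks" check. The thresholds are invented for illustration:
# under 30% wastes capacity, over 80% risks service levels.
def goldilocks(utilization: float, low: float = 0.30, high: float = 0.80) -> str:
    """Classify a resource as too cold, too hot, or just right."""
    if utilization > high:
        return "too hot: add capacity or migrate workloads off this resource"
    if utilization < low:
        return "too cold: reclaim capacity or consolidate workloads onto it"
    return "just right: leave it alone"

# A control system would run this continuously per host, datastore and VM,
# and act on the answer instead of paging a human.
for name, util in {"esx-01": 0.91, "esx-02": 0.12, "esx-03": 0.55}.items():
    print(name, "->", goldilocks(util))
```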
Enabling Agility Requires a Cloud Control System
A significant driver behind cloud deployments is agility: the time it takes IT to respond to application and business requests. A self-service portal may look great, but if it still takes days or weeks for IT to respond to business requests made through the portal, then you haven’t really moved the needle. You need a “brain” for your cloud that can automate the back-end placement and capacity decisions on an ongoing basis.
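Here is a toy version of that “brain”: a placement function that picks the cluster with the most headroom after the new VM lands. The capacity figures are invented, and a production control system would weigh many more constraints (licenses, affinity rules, storage and network), but it shows the decision being made in software rather than in a ticket queue.

```python
# Toy placement "brain": pick the cluster that stays least utilized
# after the new VM is placed. All figures are illustrative.
def place(vm_demand_ghz: float, clusters: dict) -> str:
    """clusters: {name: (capacity_ghz, used_ghz)}.
    Returns the cluster with the lowest post-placement utilization."""
    candidates = {
        name: (used + vm_demand_ghz) / cap
        for name, (cap, used) in clusters.items()
        if used + vm_demand_ghz <= cap          # skip clusters that can't fit it
    }
    if not candidates:
        raise RuntimeError("no cluster can host this VM; capacity plan needed")
    return min(candidates, key=candidates.get)

clusters = {"prod-east": (400.0, 310.0), "prod-west": (400.0, 220.0)}
print(place(24.0, clusters))   # -> prod-west: lower utilization after placement
```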
Cost Savings That You Expect May Not Happen
More often than not, organizations move to cloud and virtual environments to benefit from the savings that accompany consolidation. Without proper planning, investments in hardware refresh and software licenses can be wasted through underutilization. Not only will the expected ROI not be achieved, but underutilized capacity will contribute to escalating operational costs (power, cooling, data center floor space, IT staffing and so on). Further, many organizations fail to look beyond the initial CAPEX consolidation savings to the OPEX savings that can be achieved. They mistakenly keep legacy tools and processes instead of changing the IT operations approach to match the new virtualization paradigm.
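A back-of-the-envelope Python sketch makes the stranded-capacity math tangible. Every number below is a made-up example, not a benchmark:

```python
# Back-of-the-envelope sketch of consolidation ROI lost to underutilization.
# All figures are invented examples.
hosts = 40
cost_per_host = 12_000            # hardware refresh + licenses, per host
avg_utilization = 0.25            # what many estates actually run at
target_utilization = 0.60         # a realistic consolidated target

# Capacity you paid for but are not using, valued at purchase cost:
stranded_fraction = 1 - avg_utilization / target_utilization
stranded_capex = hosts * cost_per_host * stranded_fraction
print(f"~${stranded_capex:,.0f} of the refresh is stranded capacity")
# And every underutilized host still draws power, cooling and floor space,
# so the OPEX meter keeps running on capacity that earns nothing.
```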
The Impact of Not Assuring Service Levels Is Great
Companies rely on access to data and applications to run their businesses, so assuring service levels (that is, avoiding sluggish application performance due to resource bottlenecks, memory congestion, interference at peak times and so on) is a priority for IT organizations. The cost of getting it wrong is significant: SLA violations, a reputation tarnished by downtime or degradation, employee productivity losses, and the inability to conduct business and book revenue. However, the cost of overprovisioning for peak demands to assure service levels is also significant. That’s why you have to plan for and deliver on the demands for optimal performance and optimal utilization, finding a balance among cost, performance and risk.
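That balance can be framed as a simple optimization. The sketch below uses an invented cost model (a linear capacity cost, a crude headroom-to-breach-risk mapping, and a fixed penalty) purely to illustrate picking headroom by total expected cost rather than infrastructure cost alone:

```python
# Minimal sketch of the cost/performance/risk trade-off. The cost model
# and all figures are invented: more headroom costs more to run but
# reduces expected SLA penalties.
def expected_cost(headroom: float) -> float:
    infra_cost = 100_000 * (1 + headroom)            # capacity cost grows with headroom
    breach_probability = max(0.0, 0.5 - headroom)    # crude: headroom cuts breach risk
    sla_penalty = 400_000                            # cost of a violation if it happens
    return infra_cost + breach_probability * sla_penalty

# Sweep headroom levels and pick the cheapest overall, not the cheapest infra:
best = min((h / 10 for h in range(0, 11)), key=expected_cost)
print(f"optimal headroom = {best:.0%}, expected cost ${expected_cost(best):,.0f}")
```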
Change Must Be More Than Simply Virtualizing Servers
For organizations to benefit fully from new-gen virtualized infrastructure and cloud services, the approach to IT operations must also be transformed from a complex, labor-intensive and volatile process to one that is simple, automated and predictable. Relying on software for complex decision-making and automated control, preventing problems from occurring, and eliminating time-consuming manual intervention by IT staff will help deliver on the business, financial and operational benefits of virtualization and cloud.