Deploying Cloud, Virtual Systems: 10 Surprises You'll Have to Deal With

 
 
By Chris Preimesberger  |  Posted 2013-05-24
Research has shown time and again that the biggest driver for most organizations moving to a virtual or cloud environment is—you guessed it—cost savings. However, to unlock the full return-on-investment value of virtual or cloud environments, IT operations management needs to be fundamentally modified. To achieve software-defined data center efficiencies, the approach must shift from bottom-up to top-down management, and the emphasis must move from reactive, manual intervention by staff to automated controls. Automation is the key change here. A good automated approach can be established using software that maintains an optimal state of operations—one in which applications always have the resources they require to meet business goals while making the most efficient use of network, storage and compute resources. This eWEEK slide show identifies the 10 main obstacles IT managers should plan to face when deploying a cloud or virtual environment. Chief sources for this slide show are virtualization software provider VMTurbo, Forrester Research and eWEEK reporting.
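To make that automated, state-maintaining approach concrete, here is a minimal sketch of such a control loop in Python. It is purely illustrative: the watermark thresholds, the VM names and the get_cpu_utilization/resize_vm helpers are hypothetical stand-ins, not any vendor's product or API.

```python
# A minimal sketch of an automated control loop: observe utilization, decide, act.
# All names, thresholds and helpers here are hypothetical, not any vendor's API.
import random
import time

HIGH_WATERMARK = 0.80  # above this, the application risks a resource bottleneck
LOW_WATERMARK = 0.30   # below this, allocated capacity (and money) is being wasted

def get_cpu_utilization(vm):
    """Stand-in for a real monitoring call; returns utilization in [0, 1]."""
    return random.random()

def resize_vm(vm, direction):
    """Stand-in for a real orchestration call (e.g., add or remove vCPUs)."""
    print(f"{vm}: resize {direction}")

def control_loop(vms, cycles=3, interval_s=1.0):
    """Repeatedly nudge each VM back into the 'optimal state' band."""
    for _ in range(cycles):
        for vm in vms:
            util = get_cpu_utilization(vm)
            if util > HIGH_WATERMARK:
                resize_vm(vm, "up")    # protect the application's service level
            elif util < LOW_WATERMARK:
                resize_vm(vm, "down")  # reclaim capacity for other workloads
        time.sleep(interval_s)

if __name__ == "__main__":
    control_loop(["web-01", "db-01"])
```

A real control system weighs far more dimensions (memory, storage, network, placement), but the shape is the same: a continuous observe-decide-act cycle in place of reactive, manual firefighting.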

 
 
 
  • Virtualization Complexity Increases With Scale

    Cloud and virtualized environments create shared pools of compute, storage and network resources that support business applications. Shared-resource infrastructure is dynamic and full of interdependencies, which makes it complex to allocate resources correctly to and across applications while maintaining performance levels. Pinpointing the cause of performance degradation and identifying the correct resolution are challenging because of the new complexities virtualization introduces. The impact? It's impossible for a human to solve performance or utilization problems before the dynamics of the environment change and render the solution moot.
  • The Shift Will Force Siloed Teams to Collaborate

    In the long term, a shared-resource infrastructure and its web of dependencies mean that infrastructure teams that have traditionally operated in silos need to come together and collaborate, as well as use modern tools that provide a holistic view of the virtualized and converged infrastructure.
  • Processes—and Mindsets—Will Need to Change

    Those deploying cloud environments need to adopt a service provider mentality and align their processes accordingly. That means IT must treat different lines of business and internal consumers as valued customers and understand that the business has options beyond the internal IT department.
  • Infrastructure Will Be Controlled by Software—Now What?

    A software-defined data center allows the physical infrastructure to be dynamically configured and reconfigured to support changing requirements and priorities, which means you have the potential to introduce greater agility in configuring the infrastructure and spinning up new virtual machines and applications—all with the touch of a button. You will love the speed and flexibility, but deciding what, when and where to spin up applications still requires significant time and "heavy lifting"—unless you have the right control system in place.
  • Data and Metrics Can Overwhelm Your Environment

    The marketplace is filled with vendors offering solutions to monitor and report on a broad set of performance metrics for virtual infrastructures. Collecting thousands of data points on multiple entities within your virtual estate will create a big data problem—and increase the cost of storing all that data—if you are not careful. And once you have all that data, then what? Do you devote valuable staff time to analyzing it? Do you wait for a management tool to trigger an alert based on some perceived anomaly? When you receive an alert, will the operations team know how to interpret the metrics and decide what actions to take? Too many cycles are spent finding and trying to fix discrete performance issues instead of preventing them in the first place (a minimal rollup-and-alert sketch appears after this list).
  • Expect to Encounter the 'Goldilocks Effect'

    When it comes to getting the most out of cloud infrastructure resources, cloud administrators often face the "Goldilocks Effect": the challenge of ensuring that resource allocations and utilization are neither "too hot" nor "too cold," but "just right," so that resources are used as efficiently as possible while service levels are assured and business constraints are met. These are all controls that can be automated (a simple right-sizing check is sketched after this list).
  • Enabling Agility Requires a Cloud Control System

    A significant driver behind cloud deployments is agility—the time it takes IT to respond to application and business requests. A self-service portal may look great, but if it still takes IT days or weeks to respond to business requests made through the portal, then you haven't really moved the needle. You need a "brain" for your cloud that can automate the back-end placement and capacity decisions on an ongoing basis.
  • Cost Savings That You Expect May Not Happen

    More often than not, organizations move to cloud/virtual environments to benefit from the savings that accompany consolidation. Without proper planning, investments in hardware refresh and software licenses can be wasted due to underutilization. Not only will the expected ROI not be achieved, but underutilized capacity will contribute to escalating operational costs (power, cooling, data center floor space, IT staffing, etc.). Further, many organizations fail to look beyond the initial CAPEX consolidation savings to what can be achieved in OPEX savings. Instead of changing the IT operations approach to match the new virtualization paradigm, legacy tools and processes are mistakenly maintained.
  • The Impact of Not Assuring Service Levels Is Great

    Companies rely on access to data and applications to run their businesses. Assuring service levels (i.e., avoiding sluggish application performance due to resource bottlenecks, memory congestion, interference at peak times and so on) is a priority for IT organizations. The cost of getting it wrong is significant: SLA violations, a reputation tarnished by downtime or degradation, employee productivity losses, the inability to conduct business and book revenue, and more. However, the costs of overprovisioning for peak demands to assure service levels are also significant. That's why you have to plan for and deliver on the demands for optimal performance and optimal utilization to find a cost/performance/risk balance.
  • Change Must Be More Than Simply Virtualizing Servers

    For organizations to benefit fully from new-gen virtualized infrastructure and cloud services, the approach to IT operations must also be transformed from a complex, labor-intensive and volatile process to one that is simple, automated and predictable. Relying on software for complex decision-making and automated control, preventing problems from occurring, and eliminating time-consuming manual intervention by IT staff will help deliver on the business, financial and operational benefits of virtualization and cloud.
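One way to keep virtual-infrastructure telemetry (the data and metrics issue above) from becoming its own big data problem is to store compact rollups instead of every raw sample, and to alert on sustained pressure rather than single spikes. The following Python sketch is purely illustrative: the one-hour window, the 95th-percentile rule and the 0.85 threshold are assumptions for the example, not guidance from any particular monitoring tool.

```python
# A minimal sketch of metric rollups plus a threshold alert.
# The window size, percentile rule and threshold are illustrative assumptions.
from statistics import mean

def rollup(samples):
    """Collapse a window of raw samples into a handful of summary numbers."""
    ordered = sorted(samples)
    p95_index = max(0, int(round(0.95 * (len(ordered) - 1))))
    return {
        "avg": round(mean(ordered), 3),
        "p95": round(ordered[p95_index], 3),
        "peak": round(ordered[-1], 3),
        "count": len(ordered),
    }

def should_alert(summary, p95_threshold=0.85):
    """Alert on sustained pressure (the 95th percentile), not on one spike."""
    return summary["p95"] > p95_threshold

if __name__ == "__main__":
    # One hour of per-minute CPU utilization readings for a single VM (fabricated values).
    raw = [0.42 + 0.005 * i for i in range(60)]
    summary = rollup(raw)  # 60 raw samples -> 4 stored numbers
    print(summary, "alert:", should_alert(summary))
```

Sixty raw samples collapse to four stored numbers here; multiplied across thousands of VMs and many metrics, that difference is what keeps monitoring data manageable.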
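Likewise, the "Goldilocks" right-sizing decision boils down to comparing what a virtual machine was allocated with what it actually used, plus some headroom, and acting on the gap. The sketch below is a minimal illustration; the 20 percent headroom figure, the vCPU counts and the sample fleet are assumptions, not a sizing recommendation.

```python
# A minimal "Goldilocks" right-sizing check: allocated vs. observed peak demand.
# The headroom figure and the sample fleet are illustrative assumptions.
HEADROOM = 0.20  # keep some slack above the observed peak demand

def recommend(allocated_vcpus, peak_used_vcpus):
    """Classify a VM as 'too cold', 'too hot' or 'just right' and suggest a size."""
    target = max(1, round(peak_used_vcpus * (1 + HEADROOM)))
    if target < allocated_vcpus:
        return f"too cold (overprovisioned): shrink from {allocated_vcpus} to {target} vCPUs"
    if target > allocated_vcpus:
        return f"too hot (at risk): grow from {allocated_vcpus} to {target} vCPUs"
    return f"just right: keep {allocated_vcpus} vCPUs"

if __name__ == "__main__":
    # Hypothetical fleet: VM name -> (allocated vCPUs, observed peak vCPUs used).
    fleet = {"web-01": (8, 2.6), "db-01": (4, 4.1), "batch-01": (2, 1.7)}
    for vm, (allocated, peak) in fleet.items():
        print(f"{vm}: {recommend(allocated, peak)}")
```

A production control system would fold in memory, storage and network dimensions, observe demand over longer windows and honor business constraints, but the core comparison is the same.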
 
 
 
 
 
 
 
 
 
 
 
