Virtualization Has Changed Disaster Recovery: 15 Data Points

 
 
By Chris Preimesberger  |  Posted 2013-07-31
Not that many years ago, all business applications ran directly on dedicated server hardware; many still do. Data centers contained racks and racks of servers, each dedicated to a single purpose and requiring its own protection scheme, and many of those servers carried minimal workloads. If a department requested another application, new hardware had to be planned, acquired and deployed into the environment. That's how companies such as Sun Microsystems, Hewlett-Packard and SGI produced much of their profit in the 1990s.

Now, with server virtualization prevalent in the data center, provisioning a new application server (virtual, but still dedicated) is simply a matter of management approval and a few mouse-clicks from the administrator. Virtualization has completely changed the data center landscape, and this slide show outlines 15 ways business-continuity and disaster-recovery planning has changed as a result. Sources include Kelvin Clibbon, CTO of Austin, Texas-based IT continuity management provider Neverfail, and eWEEK reporting. Ten of these points highlight the benefits of virtualization; the list also sheds light on five challenges brought on by its dynamic nature.

 
 
 
  • Basic Protection Easier to Obtain

    Traditional physical computers have become virtual machine containers—they are essentially files on a disk and fully abstracted from the physical hardware layer. Because we are no longer dealing with complex things like complete volumes, system states and boot sectors, it's much easier to get basic protection for these virtual machines.
  • Full Server Backups Now Much Easier

    Many vendors and methods exist for moving these VMs around and offering some level of protection—whether you're just backing up images or replicating them to another site. Replication and on-disk data deduplication are commonplace today, as are block-level differentials (BLDs), which let you forgo backing up the entire image every time and copy only the blocks that have changed since the last backup.
  • Server Backups Can Be in Broad Strokes

    Traditional backup systems have a one-to-one relationship with servers. With virtualization, a single backup job now gives basic protection to a larger number of servers. Today, most of the critical information resides conveniently inside a file on the hypervisor host system, meaning that backing up one disk volume on a hypervisor host can address many VMs at once.
  • Full Server Recovery Now Much Easier

    The hardware-agnostic nature of virtualization makes it much easier to quickly recover virtual machines. There is no longer a need to find similar hardware to run VMs; they can now run on any hardware without loading new drivers or changing application configurations to fit the new physical server.
  • Hardware Availability Less of an Issue

    Virtualization solutions now almost universally provide options for hardware resiliency. Hypervisor clusters minimize the impact of hardware failures; in fact, VMs can be automatically restarted or even maintain a persistent state through hardware failures at the host level.
  • Redundant Hardware Has Become Much More Affordable

    Redundant server hardware has historically come at a premium. Organizations only invested in redundant hardware for the most critical applications, and only one application ran on each server. Now, virtualization enables IT to fully leverage investments in redundant hardware. With modern multi-core, multiprocessor systems, a single server can easily support a dozen or more VMs, allowing more applications to benefit from highly available hardware configurations.
  • Full Site Recovery Much Easier

    The same flexible property of virtualization that enables IT to leverage redundant hardware also extends its benefits to site recovery. This was once a Herculean task that might literally have involved putting servers on a truck and attempting to install and configure them at a remote data center. It is now becoming easier to automate the site-recovery process, both for testing and for actual failover should a disaster occur. With fewer hardware dependencies, IT now can configure automated coordinated failover using software such as VMware Site Recovery Manager and others.
  • Data Replication No Longer Hardware-Dependent

    Before virtualization, the replication of data was performed by expensive mirrored SAN hardware or meticulously configured on a server-by-server basis. With virtualization, replication can be performed at the hypervisor level, providing the same benefit as SAN replication without the added expense and the daunting task of maintaining tight, hardware-compatible control at multiple sites.
  • Server Environments Can Grow Rapidly

    Automated configuration and deployment solutions now make it easy for IT and business users to create servers on the fly. In addition, the cost of deploying a new server has been greatly reduced, and there is far less of a need to plan and justify new hardware.
  • Easier to Make Changes, Deploy New Apps

    Virtualization allows IT to make changes and deploy new applications or updates more easily and quickly than ever before. VMs can be easily relocated to different hypervisor hosts, and hardware can be added or reconfigured with minimal disruptions.
  • Challenge #1: Availability Options Are Being Taken for Granted

    Because VMs are so easy to back up and restore, businesses grow complacent, assuming this is "good enough" for IT continuity. The truth, however, is that most sources of application downtime come from within the VM container. If a server needs to be rebooted for an OS update, there is nothing virtualization can do to alleviate that: There will be downtime while the server reboots and, worse yet, if a problem follows the reboot, it has to be fixed, and even more downtime is incurred.
  • Challenge #2: Snapshots Often Provide a False Sense of Security

    Snapshots are expensive in terms of storage, network and processing resources. Furthermore, businesses move quickly, and a "snapshot" from even one hour ago can be missing a great many business transactions. Taking more frequent snapshots of every system—just in case—is simply not feasible, and it's easy to be lulled into a sense of security by the robust and simple availability offered by virtualization.
  • Challenge #3: Physical Servers Falling Behind in Protection

    As businesses focus on new virtualization backup technologies, availability planning has increasingly centered on the virtualized servers. In many cases, however, legacy servers are still using legacy protection—often leading to a disjointed DR plan. Ironically, it's often the more critical servers that are the last to move from physical to virtual platforms. There is a growing gap in protection strategies between legacy/physical servers and virtual servers.
  • Challenge #4: Downtime Still a Threat to Business Applications

    Both business and IT have changed substantially over the past few years, and some of these changes have driven up the risk posed by downtime. Downtime has always come at a cost, but there's more at stake today than in the past, and even small operational changes to IT infrastructure create great risk. Organizations need to be more mindful than ever about whether their DR plans will hold up, and to make sure downtime avoidance is approached from an application and user perspective rather than just protecting the hardware layer.
  • Challenge #5: Disaster-Recovery Plans Are More Outdated Than Ever

    eWEEK has been chronicling this issue for several years. The dynamic nature of virtualization has significant implications for BC/DR plans, which have historically been recorded in static documents and revisited only once or twice a year at best. Today, DR planning needs to be an integral part of the agile server environment in order to remain relevant and effective. In short, most organizations' disaster-recovery plans probably don't accurately account for the current infrastructure and applications, so it's less and less likely that the plans will work as intended.
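
The block-level differential idea from slide 3 can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation: it hashes fixed-size blocks of a disk image and reports only the offsets that changed since the previous run. The 4KB block size and the use of SHA-256 are assumptions made for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real products vary


def block_diff(image_path, prev_hashes):
    """Return (changed, new_hashes): offsets of blocks that differ from
    the previous backup, plus the full hash map to keep for the next run."""
    changed, new_hashes = [], {}
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            new_hashes[offset] = digest
            # Only blocks whose hash changed need to be copied to the backup.
            if prev_hashes.get(offset) != digest:
                changed.append(offset)
            offset += len(block)
    return changed, new_hashes
```

On the first run (with an empty hash map) every block is "changed," which is the full backup; subsequent runs copy only modified blocks.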
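
Slide 4's point—that one job on a hypervisor datastore can protect many VMs at once—can be illustrated with a deliberately naive sketch. Copying a running VM's disk file this way is not crash-consistent (real products quiesce or snapshot the VM first), and the `.vmdk` pattern assumes a VMware-style datastore; both are simplifications for the example.

```python
import glob
import os
import shutil


def backup_datastore(datastore_dir, backup_dir, pattern="*.vmdk"):
    """One job protects every VM whose disk file lives in the datastore:
    copy each matching disk image to the backup location."""
    os.makedirs(backup_dir, exist_ok=True)
    copied = []
    for disk in glob.glob(os.path.join(datastore_dir, pattern)):
        shutil.copy2(disk, backup_dir)  # preserves file metadata
        copied.append(os.path.basename(disk))
    return sorted(copied)
```

Contrast this with the physical world, where the same protection would require one backup agent and job per server.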
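
The snapshot trade-off in slide 13 is easy to quantify with back-of-the-envelope arithmetic. All figures below are hypothetical; the point is the shape of the trade-off, not the numbers.

```python
def worst_case_data_loss(snapshot_interval_min, transactions_per_min):
    # Worst case: a failure just before the next snapshot loses every
    # transaction written since the last one.
    return snapshot_interval_min * transactions_per_min


def daily_snapshot_storage_gb(snapshot_interval_min, delta_gb_per_snapshot):
    # Rough storage consumed per day if every snapshot delta is retained.
    snapshots_per_day = (24 * 60) // snapshot_interval_min
    return snapshots_per_day * delta_gb_per_snapshot
```

At a hypothetical 500 transactions per minute, hourly snapshots leave up to 30,000 transactions at risk; cutting the interval to five minutes shrinks that window but multiplies daily snapshot storage twelvefold.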
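
The drift problem in slide 16 can at least be detected automatically. Here is a minimal sketch, assuming you can export both the DR plan's documented server inventory and the live VM list as simple name lists:

```python
def plan_drift(documented_vms, running_vms):
    """Compare a DR plan's server inventory with what is actually running.

    Returns (unprotected, stale): servers running but absent from the plan,
    and plan entries with no matching server."""
    documented, running = set(documented_vms), set(running_vms)
    unprotected = sorted(running - documented)
    stale = sorted(documented - running)
    return unprotected, stale
```

Running a check like this on every provisioning change—rather than once or twice a year—is one way to keep a DR document in step with an agile server environment.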