How to Drive Continuous Application Availability Through Virtualization

 
 
By Andrew Barnes  |  Posted 2009-03-04

The advantages of virtualization are undeniable: there are tremendous operational efficiencies to be had in many different areas. But it's not a cure-all. Delivering high availability or disaster recovery to meet the demands of modern 24/7 operations means understanding all of your risk points and architecting around them. Virtualization won't deliver business continuity for free but, as Knowledge Center contributor Andrew Barnes explains, architecting your virtual environment with the right tools and experience will greatly ease the process and increase your chances of success, at a cost that's right for the business.

As one of the most disruptive technologies of recent times, virtualization is driving change in IT shops large and small. Yet does that change go far enough? For as long as business has relied on IT, there has been a persistent fear that system components will fail and operations will be disrupted.

Of course, the biggest concern has been loss of vital business data, resulting in the evolution of today's sophisticated data recovery solutions. Virtualization can add tremendous value in ensuring business continuity, but not on its own. It's critical to look at the combination of physical and virtual to determine the best overall approach for your business.

Industry pundits talk a lot about focusing on recovery. "Recovery management" or "recoverability" is the in-vogue term, yet this minor shift of emphasis ignores one key fact: recovery is a reaction to failure. By the time a recovery is complete, the damage has been done: productivity has taken a hit, communication has been disrupted and business has been lost. In a business world that gets more competitive by the day, critical systems need to be continuously available, which means operations serving customers, partners and employees 24/7, wherever in the world they are.

Building an IT strategy to deliver continuous availability of business-critical systems should go beyond legacy backup and data protection principles. Bringing together virtualization, automation, application monitoring and replication technologies to keep the business running extends the value of virtualization far beyond the server room.
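
To make the replication half of that combination concrete, here is a deliberately minimal, file-level sketch in Python: it walks a source directory and copies new or modified files to a standby path. The paths are hypothetical placeholders, and real replication products work at the byte or block level, continuously and with consistency guarantees this toy loop cannot offer.

import os
import shutil
import time

SRC = "/var/app/data"          # hypothetical primary data directory
DST = "/mnt/standby/app/data"  # hypothetical standby mount point

def replicate_once() -> None:
    """Copy any file that is new or newer on the primary to the standby."""
    for root, _, files in os.walk(SRC):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, SRC)
            dst = os.path.join(DST, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps

if __name__ == "__main__":
    while True:
        replicate_once()
        time.sleep(5)  # polling interval; real products replicate continuously

The point of the sketch is the shape of the approach rather than the mechanism: data is kept continuously in step on a second system, so failover doesn't begin with a restore.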

Address the real causes of downtime

Let's face it: although the prospect of a true disaster clearly exists, a facilities failure, an application software failure or an IT failure is far more likely to be the source of a business disaster than a flood, hurricane, earthquake or terrorist attack.

Take databases, for example. One day of database downtime, caused by the failure of a cooling unit in a server room, can spell disaster for a company in terms of revenue, productivity and reputation. At another level, an application failure due to a poorly timed patch is a much more likely threat to availability than a physical server outage. Wherever a business-critical application is deployed, the threat of downtime comes from many sources, and delivering the right infrastructure to address those threats makes the difference between success and failure.

Virtualization and application availability: B- at best

Most virtualization vendors' availability solutions start by addressing whole-host failover using the platform's high-availability facilities. While these protect against a physical server failure, they ignore the fact that the failure of an application within a virtual machine (VM) is much more likely to bring business to a halt. Facilities to detect application failure within a VM are nonexistent, a shortcoming made worse by the fact that a "blue-screened" VM may still appear active to the hypervisor.
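
What application-level failure detection could look like is sketched below: a heartbeat probe, run from a separate monitoring host, that checks whether the application inside the VM is actually answering rather than whether the VM is powered on. The host name, port, thresholds and the failover stub are all hypothetical placeholders, not a depiction of any vendor's product.

import socket
import time

APP_HOST = "vm-app-01.example.com"  # hypothetical guest address
APP_PORT = 8080                     # hypothetical application port
TIMEOUT_S = 5                       # seconds to wait per probe
MAX_MISSES = 3                      # consecutive failures before escalating
INTERVAL_S = 10                     # seconds between probes

def application_is_up() -> bool:
    """Return True if the application accepts a TCP connection in time."""
    try:
        with socket.create_connection((APP_HOST, APP_PORT), timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

def monitor() -> None:
    misses = 0
    while True:
        if application_is_up():
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                # Stub: hand off to restart/failover tooling here.
                print("application unreachable; escalating to failover")
                misses = 0
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    monitor()

A real deployment would swap the bare TCP probe for an application-specific check, such as a database query or an HTTP status endpoint, and wire the escalation stub into whatever restart or failover tooling is in place.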

Another aspect that's not often discussed is the new or refreshed infrastructure required to achieve high availability and business continuity, which in today's economic climate the budget may simply not allow. Virtualization platform-based failover also relies on shared storage, which is itself vulnerable to failure, and it rests on the premise that the protected applications tolerate running in a virtual world; many administrators still have doubts about memory, CPU and I/O behavior in a virtual environment.

So, while virtualization brings progress, the ability to deliver continuous availability through virtualization alone is still some way off. And if you still rely on physical deployments, additional strategies must be sought.



 
 
 
 
Andrew Barnes is Senior Vice President of Corporate Development for Neverfail. Andrew joined Neverfail in March 2007, bringing extensive experience in marketing, product management and pre-sales from his 25 years in the software industry. In his current role, Andrew is responsible for Neverfail's branding, marketing, product management and Web presence. Andrew most recently served as VP of Marketing for a Europe-based software company and has held a variety of senior positions with companies such as KVS, Sun and Platinum Technology. Previously, Andrew worked as Global Director of Marketing for KVS Inc., where he was responsible for all aspects of marketing and grew the customer base tenfold until the company's acquisition by Veritas (now Symantec). Prior to KVS, Andrew was Northern Europe Marketing Manager for iPlanet and Product Marketing Manager for Forte Software (acquired by Sun Microsystems). In addition, Andrew served as the European Product Manager for Platinum Technology. He can be reached at ABarnes@neverfailgroup.com.
 
 
 
 
 
 
 
