As one of the most disruptive technologies of recent times, virtualization is driving change in IT shops large and small. Yet, does that change go far enough? For as long as business has relied on IT, there has been an underlying fear of system component failure and operational disruption.
Of course, the biggest concern has been loss of vital business data, resulting in the evolution of today’s sophisticated data recovery solutions. Virtualization can add tremendous value in ensuring business continuity, but not on its own. It’s critical to look at the combination of physical and virtual to determine the best overall approach for your business.
Industry pundits talk a lot about focusing on recovery. “Recovery management” or “recoverability” is the in-vogue term, yet this minor switch of emphasis ignores one key fact: recovery is a reaction to failure. By the time a recovery is complete, the damage has been done. Productivity has been hit, communication has been disrupted and business has been lost. No matter which way you look, in a business world that gets more competitive by the day, critical systems need to be continuously available. That means operations servicing customers, partners and employees 24/7, no matter where in the world they are.
Building an IT strategy to deliver continuous availability of business-critical systems should go beyond legacy backup and data protection principles. Bringing together virtualization, automation, application monitoring and replication technologies to keep the business running extends the value of virtualization far beyond the server room.
Address the real causes of downtime
Let’s face it: although the prospect of a real disaster clearly exists, in reality a facilities failure, application software failure or other IT failure is more likely to be the source of a business disaster than a flood, hurricane, earthquake or terrorist attack.
Take databases. One day of database downtime, caused by the failure of a cooling unit in a server room, can spell disaster for a company in terms of revenue, productivity and reputation. On another level, application failure due to a poorly timed patch, for instance, is a much more likely threat to availability than a physical server outage. Wherever a business-critical application is deployed, the threat of downtime comes from many sources. Delivering the right infrastructure to address these threats means the difference between ultimate success and failure.
Virtualization and application availability: B- at best
Most virtualization vendors’ availability solutions start by addressing entire host failover using platform high-availability facilities. While these protect against a physical server failure, they ignore the fact that the failure of an application within a virtual machine is much more likely to bring business to a halt. Facilities to detect application failure within a VM are nonexistent. This shortcoming is further complicated by the fact that a “blue-screened” VM may still appear active.
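The gap described above can be illustrated with a minimal sketch: a host-level heartbeat only proves the VM is powered on, while an application-level check probes the service itself. The function below is a hypothetical example (not any vendor’s API) that tests whether an application port actually accepts connections, which a hung or “blue-screened” VM would fail even though the hypervisor still reports it as active.

```python
import socket

def app_is_responsive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Application-level health check.

    A hung ("blue-screened") VM may still look alive to the hypervisor,
    but its service port will refuse connections or time out.  Probing
    the port catches this class of failure; a host heartbeat does not.
    """
    try:
        # Attempt a TCP connection to the service itself, not the host.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: the application is down,
        # regardless of what the VM's power state says.
        return False
```

In practice, a monitor like this would be paired with checks of application-specific behavior (a test query, a test message), since an open port alone does not prove the service is doing useful work.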
Another aspect that’s not often discussed is the new and/or refreshed infrastructure required to achieve high availability and business continuity, which in today’s economic climate may not be possible from a budget perspective. Virtualization platform-based failover relies on shared storage, which can be vulnerable to failures as well. This approach also relies on the premise that the protected applications are tolerant of running in a virtual world, yet many administrators still have doubts about memory, CPU and I/O requirements in a virtual environment.
So, while virtualization brings progress, the ability to deliver continuous availability through virtualization alone remains some way off. If you still rely on physical deployments, additional strategies must be sought.
Recovery or availability?
It’s 4:00 a.m. in California, but in London, it’s the middle of the working day. The CEO of a global corporation with offices in both locations is relying on current information to finalize an acquisition. Without warning, the power supply to the server room fails. The mail and mobility servers are still physical, the backup is (was) online, but there are virtual hosts in another part of the corporate campus, and power still flows there. This could be a career-defining moment.
With a lot of hard work, copious cups of coffee and a little luck, it takes only a few hours to repurpose systems and provide basic e-mail and BlackBerry service. But in the lifetime of a negotiation, hours seem like weeks. The CEO has now had to concede on several sticking points and wants to know why the investment in virtualization did not deliver the continuity it promised.
What’s worse, this could have been avoided. Alongside the backup, a replica copy of the relevant systems could have been held on the virtual hosts, which in turn could have been monitoring the availability of the e-mail and BlackBerry servers. That monitoring could have responded to the outage immediately by initiating an automated failover, allowing users to carry on working without disruption.
Business continuity: It’s all about the user experience
The best way for IT to ensure consistent business performance is through the implementation of a solution that focuses on the business need and user experience. Meeting that need should be part of the virtual deployment planning, and drive the selection of virtual infrastructure and extended management tools. This means looking at all aspects of the virtual deployment and the source of outage threats.
The reasons for outages vary. They may include data loss, server failure, application failure, network failure, planned downtime, application performance degradation and corruption, or a complete site outage (disaster). It’s a fact of life that IT outages will happen; therefore, a critical goal should be that when an outage occurs, it should not result in business disruption and downtime. Users should be able to continue operating as if nothing has happened, thus delivering on the promise of consistent business performance. During virtualization projects, a critical look should be taken at possible failure points and the ability of the management tools to detect such failures.
Blending physical and virtual deployments to accelerate availability
By combining the best of replication, application monitoring and automation with virtual infrastructure, users can significantly enhance their business continuity capabilities. Virtual hosts running less critical systems can be used to provide a business-critical failover server without risking existing systems in any way.
For example, in the event of a database crisis, it’s just a question of making sure the failover server is available for users to connect to and carry on working. Finding a combination of replication, monitoring and seamless failover software that can manage the process is all that is required. This architecture can work locally for high availability and remotely for disaster recovery. It can even be extended so that the virtual host becomes an availability hub supporting multiple, mission-critical applications, perhaps on a dedicated virtual host.
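The monitor-then-fail-over process described above can be sketched in a few lines. This is a hedged illustration only: `check_primary` and `promote_replica` are hypothetical caller-supplied hooks standing in for whatever replication and failover software is actually in use, not a specific product’s interface.

```python
import time

def run_failover_monitor(check_primary, promote_replica,
                         max_failures=3, interval=5.0, clock=time.sleep):
    """Minimal availability loop.

    Poll the primary's health; after max_failures consecutive missed
    checks, promote the replica on the virtual host so users can
    reconnect there and carry on working.

    check_primary   -- callable returning True if the primary is healthy
    promote_replica -- callable that activates the standby copy
    """
    failures = 0
    while True:
        if check_primary():
            failures = 0  # healthy check resets the counter
        else:
            failures += 1
            if failures >= max_failures:
                # Primary is considered down: bring the replica online.
                promote_replica()
                return "failed over"
        clock(interval)
```

Requiring several consecutive failures before promoting the replica is a common design choice: it avoids a spurious failover on a single dropped heartbeat, at the cost of a slightly longer detection window.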
Previously, Andrew worked as Global Director of Marketing for KVS Inc. where he was responsible for all aspects of marketing and grew the customer base tenfold until the company’s acquisition by Veritas (now Symantec). Prior to KVS, Andrew was Northern Europe Marketing Manager for iPlanet and Product Marketing Manager for Forte Software (acquired by Sun Microsystems). In addition, Andrew served as the European Product Manager for Platinum Technology where he led a multinational team responsible for the launch and sales enablement of Platinum Infrastructure management products across Europe. He can be reached at ABarnes@neverfailgroup.com.