How to Use Virtualization to Accelerate Remote Backup and WAN Replication
IT managers need to build efficient, cost-effective disaster recovery infrastructures within the enterprise. At the same time, virtualization initiatives are pushing them to reduce the amount of hardware deployed at remote replication sites. Here, Knowledge Center contributor Shawn Cooney explains how to use software-based WAN optimization technology to accelerate remote backup and WAN replication, and how embracing virtualization makes efficient remote disaster recovery achievable.
The remote backup and recovery end game is simple. You should be able to perform frequent backups with shorter backup windows to protect as much company information as possible. You should be able to do this regardless of the miles between your source and target replication locations. But before I get ahead of myself and dig too deeply into the technology, let's get some facts on the table.
The volume of data that must be protected by solutions such as replication and backup continues to grow by leaps and bounds. The cost of storage to IT is growing at a rate approaching 60 percent per year. The more data you store and need to protect, the more network bandwidth you will be required to purchase for replicating or backing up that data between sites. It seems like a simple equation.
Bandwidth has a direct impact on your ability to improve data recovery capabilities and limit loss. However, buying more network bandwidth for your WAN may not be the best investment to achieve your goals. Latency, congestion and packet loss can annihilate the throughput of any WAN. Latency comes from propagation delay: no signal travels faster than the speed of light, so distance alone adds delay, and that delay can reduce effective bandwidth by as much as 90 percent. The bigger the link, the more of its capacity latency leaves unused.
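The arithmetic behind that claim is worth seeing. A TCP sender can have at most one window of data in flight per round trip, so round-trip time caps throughput no matter how fat the pipe is. The sketch below is illustrative only; the 64 KB window (the classic TCP maximum without window scaling) and the 60 ms round-trip time are assumptions, not figures from the article.

```python
# Sketch: why latency caps throughput regardless of link size.
# Assumes a classic 64 KB TCP window (no window scaling) and an
# illustrative 60 ms coast-to-coast round-trip time.

def window_limited_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """TCP can keep at most one window in flight per round trip,
    so throughput <= window / RTT."""
    return window_bytes * 8 / rtt_seconds

cap = window_limited_throughput_bps(64 * 1024, 0.060)
print(f"{cap / 1e6:.1f} Mbps")
```

Under those assumptions the connection tops out near 8.7 Mbps. On a 45 Mbps T-3, that is roughly 80 percent of the purchased bandwidth going unused, which is why adding capacity alone does not fix a latency-bound link.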
To add insult to injury, TCP cuts its sending rate in half every time a packet is lost to congestion or errors. Ramping back up to maximum speed takes time, and if a second packet is lost along the way, the rate is cut again. And packets do get lost: loss rates on most WAN links average around 0.1 percent, which can cripple effective throughput. As network distances approach 500 miles, a T-3 line will only deliver around 40 percent of its rated bandwidth if no effort is made to work around this problem.
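One common way to estimate the combined toll of loss and latency is the Mathis approximation for steady-state TCP throughput, rate ≈ (MSS / RTT) × (C / √p). This is not necessarily the model behind the article's 40 percent figure; the 16 ms round-trip time and 1,460-byte segment size below are assumptions chosen to roughly match a 500-mile path.

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis et al. approximation for loss-limited TCP throughput:
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22."""
    return (mss_bytes * 8 / rtt_seconds) * 1.22 / math.sqrt(loss_rate)

# Assumed: ~16 ms RTT for a ~500-mile path, 0.1% loss as cited in the text.
rate = mathis_throughput_bps(1460, 0.016, 0.001)
print(f"{rate / 1e6:.1f} Mbps")
```

With these inputs the bound comes out around 28 Mbps, well below a T-3's 45 Mbps, in the same general range as the article's claim. The key takeaway is the √p term: even a fraction of a percent of loss knocks double-digit percentages off effective throughput.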
According to a recent survey on WAN disaster recovery capabilities, nearly half of North American and European enterprises reported that network bandwidth costs represent between 20 percent and 80 percent of the total cost of data replication, and these are recurring monthly costs. Improving recovery time without increasing bandwidth is important. So, how do you break this costly dependency on network bandwidth to support increased volumes of replicated data, while still meeting your backup and recovery time and point objectives?