Appliances Optimize WAN Traffic

Review: eWEEK Labs tested Riverbed and Blue Coat WAN acceleration solutions using three practical scenarios to see how the devices would react to different workloads.

During the last few years, the WAN acceleration market has become crowded with quality products. eWEEK Labs tested two of the most recent releases, from Riverbed Technology and Blue Coat Systems, and found that both did a good job of optimizing bandwidth on a simulated WAN link.

When we last tested a product from Riverbed, in April 2004, the company had seven customers. Since that time, its customer base has expanded more than a hundredfold and its Steelhead appliances received high honors in the category of storage hardware in the sixth annual eWEEK Excellence Awards program.

Byte-caching capabilities were added to Blue Coat's SG family of appliances only in May, bringing the hardware up to par with Riverbed's. Despite its relative immaturity in the WAN acceleration area, Blue Coat could quickly become a factor in the market because it is already well-known in the networking space for its caching and network security products.


Overall, Blue Coat's SG800 appliances lagged behind Riverbed's Steelhead appliances in eWEEK Labs head-to-head tests, but they still provided a noticeable improvement compared with the performance of a nonaccelerated WAN.

We tested each solution using three practical scenarios to see how the devices would react to different workloads.

At the center of our test network was a Network Nightmare WAN simulator unit that was set to run at T-1 speeds with 150 milliseconds of round-trip latency and 0.1 percent packet loss.

The test units we received were overkill for the amount of bandwidth and client load we were throwing onto our simulated T-1 link, but they still illustrate the benefits WAN acceleration technology can afford to remote users and business partners.


We tested two Blue Coat SG800 units, each of which is priced at $25,950, and the Riverbed Steelhead 2020 appliance ($20,995) and a smaller Steelhead 1020 appliance ($12,495). Both Blue Coat and Riverbed offer units for less than $5,000 that will be appropriate for T-1-level bandwidth speeds, although we were unable to acquire them during our short testing window.

Good to Go

We set up the WAN acceleration units to bridge our data center and remote site switches to the ports of our WAN simulator. We ran our tests across both sides of the WAN to gauge the acceleration running in a bidirectional fashion.

Each test sequence was run three times: once to measure unaccelerated WAN performance, once to measure cold cache accelerated performance and once to measure warm cache performance.

The cold cache numbers show how well a WAN accelerator optimizes the delivery of new data. In a cold cache performance run, the WAN accelerator is seeing data for the first time and is relying on protocol optimization and compression to speed up performance.


During a warm cache run, data sent over the WAN has already been seen by the WAN acceleration product. Warm cache performance readings are considerably faster than cold cache readings because the WAN accelerators are serving up traffic to clients from their internal caches.
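The exact byte-caching algorithms in the Riverbed and Blue Coat appliances are proprietary, but the idea behind the warm-cache speedup can be sketched simply: both ends keep a store of data chunks they have already seen, so repeated data crosses the WAN as short references instead of full payloads. The fixed 4KB chunk size below is an assumption for illustration only; real appliances use variable-size chunking and their own wire protocols.

```python
# Illustrative sketch of byte caching, the mechanism behind warm-cache
# performance. Fixed-size chunks and SHA-256 references are assumptions;
# the shipping appliances use proprietary variable-size chunking.
import hashlib

CHUNK = 4096  # assumed chunk size for illustration

def encode(data: bytes, cache: set) -> list:
    """Replace previously seen chunks with short hash references."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in cache:
            out.append(("ref", digest))   # send a 32-byte reference
        else:
            cache.add(digest)
            out.append(("raw", chunk))    # send the full chunk once
    return out

cache = set()
payload = b"A" * 4096 + b"B" * 4096
first = encode(payload, cache)   # cold cache: both chunks sent raw
second = encode(payload, cache)  # warm cache: both chunks sent as references
print([kind for kind, _ in first])
print([kind for kind, _ in second])
```

On the cold pass every chunk crosses the simulated WAN in full; on the warm pass only 32-byte references do, which is why warm-cache transfers in our tests completed in seconds rather than minutes.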

Our first test was a basic CIFS (Common Internet File System) file transfer, with a Windows XP client copying over a file from a remote Windows Server 2003 system. With CIFS acceleration, IT managers can either consolidate data at a central site or simply accelerate the movement of files among geographically dispersed offices and business partners.

For this test, we used a VBScript to initiate, execute and time the remote file-copy command.

Copying a 170MB test file over the WAN without WAN acceleration took 28 minutes and 33 seconds.

Using the Blue Coat solution, a cold cache file transfer was completed in 21 minutes and 46 seconds, and a warm cache transfer took just 14 seconds. Running the same tests using the Riverbed solution, a cold cache file transfer took a relatively speedy 13 minutes and 51 seconds, and a warm cache transfer took 25.7 seconds.
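The published timings translate into speedup factors directly; the short calculation below works them out from the numbers above (assuming 170MB means 170 x 8 megabits).

```python
# Speedup factors computed from the published timings above.
# Assumes 170MB = 170 * 8 megabits.

def seconds(m: int, s: float) -> float:
    """Convert minutes and seconds to total seconds."""
    return m * 60 + s

baseline = seconds(28, 33)  # unaccelerated transfer: 1,713 seconds

results = {
    "Blue Coat cold": seconds(21, 46),
    "Blue Coat warm": 14,
    "Riverbed cold": seconds(13, 51),
    "Riverbed warm": 25.7,
}

print(f"Unaccelerated throughput: {170 * 8 / baseline:.2f} Mbps")
for name, t in results.items():
    print(f"{name}: {baseline / t:.1f}x faster than unaccelerated")
```

The effective unaccelerated throughput comes out well under the T-1 line rate of 1.544 Mbps, a reflection of how badly CIFS chattiness interacts with 150 milliseconds of round-trip latency; cold-cache acceleration roughly doubled throughput at best, while warm-cache runs were two orders of magnitude faster.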
