Performance testing should be objective, but when vested interests design the setting, beware of latent bias. eWeek Labs spent an extra week testing Cisco Systems Inc.'s Catalyst 2950T-24 and 3550-12T switches to figure out why our performance tests showed slightly different results—more packet loss across a wider range of packet sizes—than those obtained at Cisco's labs. Using the same equipment, the two labs were getting different numbers.
In the end, it turned out that we were doing a more rigorous test, one that we believe is more reflective of the real world. To run our Layer 2 tests, we used Spirent Communications Inc.'s SmartBits 6000B performance analysis system, equipped with 12 Gigabit-over-copper ports, in conjunction with the company's AstII traffic generation and measurement software.
Using a “full-mesh” test—one with all ports talking to all other ports—we found a very small but measurable amount of packet loss.
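As a rough sketch of what a full-mesh traffic pattern means (this illustrates the flow topology only and is not the actual AstII configuration), every one of the N test ports sends to every other port, producing N × (N − 1) directed flows:

```python
from itertools import permutations

def full_mesh_flows(ports):
    """Every port sends to every other port: N * (N - 1) directed flows."""
    return list(permutations(ports, 2))

# A 12-port switch, like the tester's 12 Gigabit-over-copper ports.
flows = full_mesh_flows(range(1, 13))
print(len(flows))  # 132 flows for 12 ports
```

With all 132 flows active at once, every output port is simultaneously contending for traffic from 11 sources, which is what makes this pattern a harder workload than fixed port pairs.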
Although Cisco advised us before testing that, because of engineering design decisions, the Catalyst 3550-12T forwarded 64-byte packets (the smallest valid IP packet size) at around 93 percent of wire speed, we also found that bigger packets faced a tiny bit of trouble. We emphasize tiny because although our full-mesh tests showed some packet loss at the upper end of the scale (packet sizes greater than 1,400 bytes), all loss was less than 0.75 percent.
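To put the 93 percent figure in concrete terms, the theoretical wire-speed frame rate is easy to compute: each frame on the wire carries 20 bytes of fixed overhead (8-byte preamble plus 12-byte interframe gap) beyond the frame itself. A quick back-of-the-envelope calculation (ours, not Cisco's or Spirent's):

```python
LINK_BPS = 1_000_000_000  # Gigabit Ethernet line rate
OVERHEAD_BYTES = 8 + 12   # preamble + interframe gap per frame

def wire_speed_pps(frame_bytes: int) -> float:
    """Maximum frames per second at full line rate for a given frame size."""
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return LINK_BPS / bits_per_frame

full_rate = wire_speed_pps(64)    # ~1,488,095 frames/sec for 64-byte frames
at_93_pct = full_rate * 0.93      # the 3550-12T's reported 64-byte rate
print(round(full_rate), round(at_93_pct))
```

So forwarding 64-byte frames at 93 percent of wire speed means handling roughly 1.38 million of a possible 1.49 million frames per second per port.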
Using Spirent's AstII, we set up static port-pair tests, where port 1 sent and received only from port 2, port 3 from port 4, and so forth. These tests produced wire-speed results for all packet sizes except 64 bytes, which jibed with Cisco's results.
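The static port-pair pattern can be sketched the same way (again, an illustration of the flow topology, not the AstII setup itself): ports are locked into fixed pairs, so each output port ever receives traffic from only one source:

```python
def static_pair_flows(ports):
    """Fixed pairs: 1 <-> 2, 3 <-> 4, ... Each port talks to exactly one peer."""
    flows = []
    for a, b in zip(ports[::2], ports[1::2]):
        flows.extend([(a, b), (b, a)])
    return flows

# The same 12 ports yield only 12 directed flows instead of 132.
print(static_pair_flows(list(range(1, 13))))
```

The contrast explains the differing results: with one sender per receiver there is no output-port contention, so the switch fabric is never stressed the way a full mesh stresses it.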
Although the static-port tests that Cisco conducted are a valid measure of performance, those test conditions are unlikely to be seen in a real networking environment.