Meru Networks' E(z)RF SAM Measures Network Performance

By Andrew Garcia  |  Posted 2010-04-14

Meru Networks' E(z)RF Service Assurance Module provides an easy way for enterprise wireless LAN administrators to benchmark and measure network performance on an ongoing basis. At its heart, the E(z)RF SAM leverages an existing Meru WLAN infrastructure to deliver that ongoing benchmarking of the wireless network.

Instead of requiring laptops or other WLAN clients to perform the benchmark tests, the SAM sequentially turns every Meru AP300 series access point (except the AP301) in the network into a virtual client. It uses these clients to connect to every SSID (Service Set Identifier) configured throughout the network, in each radio frequency band the APs support, while continuing to service real clients. In this way, administrators can more easily understand the raw capacity and performance of their network on an ongoing basis, while still maintaining wireless service on all APs.

The company advertises the SAM as a critical component of its Wireless Service Assurance program, alongside 802.11n speeds and Meru's Air Traffic Control air fairness algorithms. The program aims to provide WLAN service dependable enough (with less than an hour of downtime per year) to be considered a replacement for the bulk of the wired network.

Meru is not the only wireless LAN vendor talking about service assurance for WLANs. Aerohive's Performance Boost and AirTime Sentinel technologies allow that company to offer SLA (service-level agreement) guarantees to certain users and groups. However, Meru is the first to figure out a way to constantly measure systemwide performance without a lot of legwork or new equipment.

My test network consisted of a single Meru MC3000 wireless LAN controller ($5,400) and three dual-band AP320 802.11n access points ($1,495 each). To add SAM functionality, I needed to add a Meru SA (service appliance) to the network, the SA1000 ($6,995), on which to run SAM 2.1 ($21,995 software license for 50 APs, released in March) and the required E(z)RF Network Manager software module ($4,995). So on top of hardware and support costs for the controller and access points, the SAM pieces total $33,985 for a network licensed for 50 APs.

Creating Performance Baselines

To start, network administrators need to create two performance baselines with the SAM: the first to measure connectivity (latency in milliseconds plus packet loss in raw numbers) and the second to measure bandwidth performance (in Mbps). With baselines established, the administrator can schedule health checks to run periodically, with hourly, daily, weekly or continuous recurrence schedules available. Administrators can also schedule one-off or on-demand health checks.
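
As a rough sketch of what those recurrence options amount to (my own Python illustration, not anything shipped with the SAM), with the interval names and start time chosen arbitrarily:

    from datetime import datetime, timedelta

    # Hypothetical recurrence intervals mirroring the SAM's hourly/daily/weekly options;
    # "continuous" would simply re-run as soon as the previous check finishes.
    INTERVALS = {
        "hourly": timedelta(hours=1),
        "daily": timedelta(days=1),
        "weekly": timedelta(weeks=1),
    }

    def next_runs(recurrence, start, count=3):
        """Return the next few scheduled health-check times for a recurrence choice."""
        interval = INTERVALS[recurrence]
        return [start + interval * i for i in range(1, count + 1)]

    # Example: the next three daily health checks after an arbitrary start time.
    for when in next_runs("daily", datetime(2010, 4, 14, 2, 0)):
        print(when)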

One would think that the SAM would compare the results of each health check to the baseline, but that is the case with only one of the tests. A throughput health check measures its performance in relation to the baseline, assigning ratings according to percentage thresholds set by the administrator. For instance, I set the upper threshold at 50 percent and the lower at 25 percent, meaning any throughput measurement achieving 50 percent or more of the baseline is rated by the SAM as "good," anything below 25 percent "bad" and everything in between "fair."
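
With those settings, the rating logic boils down to a percentage comparison against the active baseline. Here is a minimal sketch of that comparison in Python; the function and the example figures are mine, not Meru's:

    def rate_throughput(measured_mbps, baseline_mbps, upper_pct=50, lower_pct=25):
        """Rate a health-check throughput result against the active baseline.

        upper_pct/lower_pct mirror the administrator-defined thresholds:
        >= upper_pct of baseline is "good", below lower_pct is "bad", else "fair".
        """
        pct_of_baseline = 100.0 * measured_mbps / baseline_mbps
        if pct_of_baseline >= upper_pct:
            return "good"
        if pct_of_baseline < lower_pct:
            return "bad"
        return "fair"

    # Example: a 40 Mbps baseline with a 12 Mbps health-check result rates "fair" (30 percent).
    print(rate_throughput(12, 40))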

On the other hand, the connectivity baselines are informational, but not used as a basis for ongoing comparison with health checks. In this case, the administrator must instead define the upper and lower levels of acceptable latency and packet loss throughout the network.
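
Conceptually, that means each connectivity result is checked against fixed ceilings and floors rather than against a baseline. A minimal sketch of that kind of check, with threshold values that are purely illustrative rather than SAM defaults:

    def rate_latency(latency_ms, good_below_ms=20.0, bad_above_ms=100.0):
        """Rate average latency against administrator-defined upper and lower levels.

        The 20 ms / 100 ms figures are illustrative, not SAM defaults.
        """
        if latency_ms <= good_below_ms:
            return "good"
        if latency_ms >= bad_above_ms:
            return "bad"
        return "fair"

    def rate_packet_loss(lost, good_below=1, bad_above=10):
        """Rate raw packet-loss counts the same way."""
        if lost <= good_below:
            return "good"
        if lost >= bad_above:
            return "bad"
        return "fair"

    print(rate_latency(47.5), rate_packet_loss(3))  # -> fair fair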

Determining when to run baseline tests is a bit of a philosophical question. To measure ongoing performance against best-case scenarios, administrators should schedule baseline collection for periods when wireless traffic and potential interferers are at a minimum. Alternatively, network administrators may want ongoing health checks compared against normal operating conditions, in which case baselines would be collected during work hours.

Ideally, health checks could be measured against baselines taken under both circumstances, but the SAM doesn't work that way. I could run a number of baselines at different times of day and keep them in the system, but only one baseline of each type is active at any time. Health checks are compared only to the active baseline, and there is no way to automatically switch baselines behind the scenes.

I also found that the baseline measurement determines which networks and APs are tested during a health check. An ESSID (Extended SSID) or an AP that was not part of the baseline will not be part of health checks taken while that baseline is active. If I wanted to omit certain ESSIDs from future tests (for instance, to skip benchmarking a guest network), I could clone a baseline within the SAM and edit the resulting copy to exclude certain ESSIDs, access points or radios from future health checks.

When a baseline is initiated, the SAM contacts the Meru wireless controller defined for the test, pulling down the controller's saved configuration file to get a list of all available access points, as well as all the configured ESSIDs and their security settings. Then, the SAM pushes a virtual client to an AP, which in turn associates itself with another AP on the network. If the AP supports both the 2.4GHz and 5GHz bands, each radio will be tested in turn.
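
In effect, a baseline run enumerates a test matrix from the controller's configuration. The sketch below shows one plausible way to build such a matrix; the AP names, ESSIDs and the all-pairs pairing of client and target APs are my own assumptions, not details Meru documents:

    from itertools import permutations

    # Stand-ins for data the SAM pulls from the controller's saved configuration file.
    access_points = {
        "AP320-lobby": ["2.4GHz", "5GHz"],
        "AP320-lab": ["2.4GHz", "5GHz"],
        "AP320-office": ["2.4GHz", "5GHz"],
    }
    essids = ["corp-wpa2", "guest-open"]

    def build_test_matrix():
        """List (client AP, target AP, ESSID, band) combinations to exercise."""
        tests = []
        for client_ap, target_ap in permutations(access_points, 2):
            for essid in essids:
                for band in access_points[client_ap]:
                    tests.append((client_ap, target_ap, essid, band))
        return tests

    # Print the first few combinations a baseline run might walk through.
    for case in build_test_matrix()[:4]:
        print(case)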

Once the virtual client is associated to the network, the SA appliance sends traffic over the wired network to the AP hosting that client, which transmits the traffic wirelessly to the other AP, which then routes it back over the wired network to the SA appliance. For the connectivity tests, the SAM uses a roughly 10-second burst of ICMP traffic to measure latency and packet loss, while the throughput test uses a built-in implementation of the iPerf test tool to measure upload throughput.
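
A rough analogue of those two measurements (my own sketch, not Meru's code), computing latency and packet loss from a burst of echo-style probes and throughput from the kind of timed bulk transfer iPerf reports:

    def connectivity_stats(rtt_samples_ms):
        """Summarize an ICMP-style burst: one entry per probe, with None marking
        a probe that never came back."""
        received = [r for r in rtt_samples_ms if r is not None]
        lost = len(rtt_samples_ms) - len(received)
        avg_latency = sum(received) / len(received) if received else float("inf")
        return {"avg_latency_ms": round(avg_latency, 1), "packets_lost": lost}

    def throughput_mbps(bytes_sent, seconds):
        """Convert a timed bulk transfer into megabits per second."""
        return (bytes_sent * 8) / (seconds * 1_000_000)

    # Example: a 10-probe burst with one loss, and 30 MB pushed in 10 seconds.
    print(connectivity_stats([4.1, 3.9, 5.0, None, 4.4, 4.2, 3.8, 4.6, 4.0, 4.3]))
    print(round(throughput_mbps(30_000_000, 10), 1), "Mbps")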

Not an Ideal Setup

The setup is not ideal, as it does not fully replicate the traffic characteristics a real wireless client would generate. Any conditions on the wired network may have their effect doubled, since the traffic both originates and terminates at the SA appliance, which could sit in a data center far from the AP under test.

The recorded measurements also will not accurately reflect those of a real wireless client, as the AP acting as the client is likely mounted on a wall or in the ceiling rather than sitting on the office floor or a desk. Access points are typically deployed to offer optimal coverage for real clients on the floor, not artificial ones in the rafters, so APs may be farther from each other than a client machine would be from its AP, potentially worsening the throughput and latency figures the SAM detects.

In addition, the throughput tests have a best-case-scenario flavor to them. If the ESSID under test supports 802.11n in either band, a client AP will associate as an 802.11n client, so it will not test the network as a down-level 802.11a/b/g client would see it. The effects of 802.11n coexisting with legacy clients may show up by happenstance if a legacy laptop is using the network under test, but I had to dig into the "All Station" log within the test details to discover the presence of those clients.

Also, the SAM doesn't use real application traffic: The iPerf tool used by the SAM to measure throughput sends a large burst of incompressible data in large frames. Real applications, using different ports, smaller packet sizes and potentially more TCP overhead, will produce different results. As with any benchmarking result, take it with a grain of salt.

Identifying Performance Issues

Despite those shortcomings, when used regularly, the SAM can help identify performance issues in the network, although it is not always as good at explaining why performance degraded. For instance, in one test, the SAM correctly identified that my DHCP (Dynamic Host Configuration Protocol) server was down because the AP client could not get an address, noting the symptoms and possible cause in the test results while also sending me an e-mail notification. The SAM was also able to suss out an AP with antenna problems.

However, in situations that were bad but not dire, the SAM was less helpful. To troubleshoot poor wireless performance, you need to know about both ends of the connection: Interference could be having an effect nearer the client or nearer the AP. With the SAM, you know about the client, but you have to dig for the information about the AP because of the way Meru's WLAN technology works.

Because of Meru's single-channel architecture, which utilizes the same channel across all APs in the network, I often found that performance varied because the health check tallied its findings against a different access point than the one tested in the baseline. But the health check results don't clearly spell that out.

This circumstance may be listed in the test logs, provided you look at the health check and the baseline side by side. But I had to dig into the Network Manager to find out which AP was under test with a given client AP.

Given that Meru controls all the information in the wireless network, whether in the SAM and Network Manager or in the wireless controller, I'd like to see the company do a better job of correlating data from its own sources. Somewhere in its solution, Meru could present a comprehensive, definitive take on what is going wrong with the network, rather than requiring administrators to chase between Meru applications to sort it all out on their own.

All this hunting around is made more annoying by the SAM's antiquated Web interface. It is designed to work with Internet Explorer 7, and I had to run IE8 in compatibility mode to get it to render at all. Even so, I found that some dialog boxes would not register any changes I made, leading me to try configuring the same thing time after time.

Even when it did work, the GUI was hard to deal with. The Web interface doesn't adapt to monitor size, instead packing a lot of poorly formatted data into a cramped series of boxes, which required me to constantly scroll left and right within a box to see all the data. Indeed, the easiest way I found to look at logs produced by the SAM was to output the results to a comma-delimited rendering of the data, which I could then copy and paste into Notepad.

 
