The E(z)RF Service Assurance Module enables wireless LAN administrators to leverage an existing Meru WLAN infrastructure to provide ongoing benchmarking of the wireless network.
Meru Networks' E(z)RF Service Assurance Module (SAM) provides an easy way for
enterprise wireless LAN administrators to benchmark and measure network
performance on an ongoing basis.
Instead of requiring laptops or other WLAN clients to perform the
benchmark tests, the SAM sequentially turns
every Meru AP300 series access point (except the AP301) in the network into a
virtual client. It uses these clients to connect to every SSID (Service Set Identifier)
configured throughout the network in each radio frequency band supported by the
APs, all while continuing to service real clients. In this way,
administrators can more easily understand the raw capacity and performance of
their network on an ongoing basis without taking the wireless network out of
service.
The company advertises the SAM as a critical
component of its Wireless Service Assurance program, along with 802.11n speeds
and Meru's Air Traffic Control air fairness algorithms. The program aims to
provide WLAN service dependable enough (with less than an hour of downtime per
year) to replace the bulk of the wired network.
Meru is not the only wireless LAN vendor talking
about service assurance of WLANs. Aerohive's Performance Boost and AirTime
Sentinel technologies allow that company to offer SLA (service-level
agreement) guarantees to certain users and groups. However, Meru is the first
to figure out a way to constantly measure systemwide performance without a lot
of legwork or new equipment.
My test network consisted of a single Meru MC3000 wireless LAN controller ($5,400)
and three dual-band AP320 802.11n access points ($1,495 each). To add SAM functionality, I
needed to add to the network a Meru SA (service appliance), the SA1000 ($6,995),
on which to run SAM 2.1 ($21,995
software license for 50 APs, released in March) and the required E(z)RF Network
Manager software module ($4,995). So on top of any hardware and support costs
for the controller and APs, the SAM components alone total
$33,985 for a network with 50 APs.
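For clarity, the SAM-specific line items sum as follows (a quick arithmetic check; the variable names are mine, not Meru's):

```python
# SAM-specific costs quoted in the review (in US dollars)
sa1000_appliance = 6995     # Meru SA1000 service appliance
sam_license_50_aps = 21995  # SAM 2.1 software license for 50 APs
network_manager = 4995      # required E(z)RF Network Manager module

sam_total = sa1000_appliance + sam_license_50_aps + network_manager
print(sam_total)  # 33985
```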
Creating Performance Baselines
To start, network administrators need to create two performance
baselines with the SAM: the first to measure
connectivity (latency in milliseconds plus packet loss in raw numbers) and the second
to measure bandwidth performance (in Mbps). With baselines established, the
administrator can schedule health checks to run periodically, with hourly,
daily, weekly or continuous recurrence schedules available. Administrators can
also schedule one-off or on-demand health checks.
One would think that the SAM would compare the
results of each health check to the baseline, but that is only the case with
one of the tests. A throughput health check measures its performance in
relation to the baseline, providing evaluative ratings based on percentage
thresholds defined by the administrator. For instance, I defined the upper threshold
at 50 percent and the lower at 25
percent, meaning any throughput measurement achieving 50 percent or greater of
baseline will be rated by the SAM as "good,"
anything below 25 percent "bad" and the rest "fair."
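The rating scheme I configured can be sketched as follows (a minimal illustration using my own function and variable names, not Meru's actual implementation):

```python
def rate_throughput(measured_mbps, baseline_mbps, upper_pct=50, lower_pct=25):
    """Rate a health-check throughput reading against the active baseline.

    upper_pct and lower_pct mirror the administrator-defined thresholds
    described in the review (50 and 25 percent of baseline).
    """
    pct_of_baseline = 100.0 * measured_mbps / baseline_mbps
    if pct_of_baseline >= upper_pct:
        return "good"
    if pct_of_baseline < lower_pct:
        return "bad"
    return "fair"

print(rate_throughput(60, 100))  # good: 60 percent of baseline
print(rate_throughput(30, 100))  # fair: between the two thresholds
print(rate_throughput(20, 100))  # bad: below 25 percent of baseline
```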
On the other hand, the connectivity baselines are informational but are not
used as a basis for ongoing comparison with health checks. In this case, the
administrator must instead define the upper and lower levels of acceptable
latency and packet loss throughout the network.
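The connectivity check can be sketched the same way, assuming hypothetical limits (the SAM lets the administrator set these levels directly; the numbers below are mine, not defaults):

```python
# Hypothetical administrator-defined limits; not Meru's defaults
MAX_LATENCY_MS = 50.0   # acceptable latency, in milliseconds
MAX_LOST_PACKETS = 5    # acceptable packet loss, as a raw count

def connectivity_ok(latency_ms, lost_packets):
    """Pass/fail a connectivity reading against the defined limits."""
    return latency_ms <= MAX_LATENCY_MS and lost_packets <= MAX_LOST_PACKETS

print(connectivity_ok(12.4, 0))  # True: within both limits
print(connectivity_ok(80.0, 2))  # False: latency exceeds the limit
```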
Determining when to run baseline tests is a bit of a philosophical
question. To measure ongoing performance against best-case scenarios,
administrators should time baseline collection for times when wireless traffic
and potential interferers are at a minimum. Or network administrators may want
ongoing health checks to be compared to normal operating condition baselines,
which would be collected during work hours.
Ideally, health checks could be measured against
baselines taken under both circumstances, but the SAM doesn't work that
way. I could run a bunch of baselines at different times of day and keep them
in the system, but only one of each type of baseline is active at any time. Health
checks are compared only to the active baseline, and there is no way to
automatically switch baselines behind the scenes.
I also found that the baseline measurement taken determines what
specific networks and APs are tested during a health check. An ESSID (Extended SSID)
or an AP that was not part of the baseline will not be part of health checks
taken while the baseline is active. If I want to omit certain ESSIDs from
future tests (for instance, if I don't want to benchmark a guest network), I
could clone a baseline within the SAM and edit the
resulting baseline to omit certain ESSIDs, access points or radios from future
tests.
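That clone-and-edit workflow amounts to copying the baseline's scope and filtering it. A rough sketch (the ESSID names and data layout here are hypothetical, purely for illustration):

```python
import copy

# Hypothetical baseline scope pulled from a controller configuration
baseline = {
    "name": "off-hours",
    "essids": ["corp", "guest", "voice"],
}

# Clone the baseline, then drop the guest network from future tests
trimmed = copy.deepcopy(baseline)
trimmed["name"] = "off-hours-no-guest"
trimmed["essids"] = [e for e in trimmed["essids"] if e != "guest"]

print(trimmed["essids"])   # ['corp', 'voice']
print(baseline["essids"])  # original is untouched: ['corp', 'guest', 'voice']
```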
When a baseline is initiated, the SAM contacts the Meru
wireless controller defined for the test, pulling down the controller's saved
configuration file to get a list of all available access points, as well as all
the configured ESSIDs and their security settings. Then, the SAM pushes a virtual
client to an AP, which in turn associates itself with another AP on the
network. If the AP supports both the 2.4GHz and 5GHz bands, each radio will be
tested in turn.
Once associated to the network, the SA appliance sends traffic over the
wired network to the virtual client-hosting AP, which transmits the traffic
over wireless to the other AP, which then routes the traffic back over the
wired network to the SA appliance. For the connectivity tests, the SAM utilizes a 10-second
or so burst of ICMP traffic to measure network characteristics, while the
throughput test utilizes a built-in iteration of the iPerf test tool to measure
bandwidth.
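As a rough stand-in for that connectivity burst (the SAM uses ICMP, which requires raw-socket privileges to reproduce directly), here is a minimal timed request/response loop over a local UDP echo that measures average latency and counts lost packets, in the same spirit as the SAM's test:

```python
import socket
import threading
import time

def udp_echo_server(sock):
    # Echo each datagram back to its sender (stands in for the remote AP)
    while True:
        try:
            data, addr = sock.recvfrom(1024)
        except OSError:
            break  # socket closed; stop echoing
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # OS picks a free port
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)  # unanswered packets count as lost

latencies, lost = [], 0
for seq in range(20):  # a short burst rather than the SAM's ~10 seconds
    start = time.perf_counter()
    client.sendto(str(seq).encode(), server.getsockname())
    try:
        client.recvfrom(1024)
        latencies.append((time.perf_counter() - start) * 1000.0)
    except socket.timeout:
        lost += 1

if latencies:
    print(f"avg latency: {sum(latencies) / len(latencies):.2f} ms, lost: {lost}")
server.close()
```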
Andrew cut his teeth as a systems administrator at the University of California, learning the ins and outs of server migration, Windows desktop management, Unix and Novell administration. After a tour of duty as a team leader for PC Magazine's Labs, Andrew turned to system integration - providing network, server, and desktop consulting services for small businesses throughout the Bay Area. With eWEEK Labs since 2003, Andrew concentrates on wireless networking technologies while moonlighting with Microsoft Windows, mobile devices and management, and unified communications. He produces product reviews, technology analysis and opinion pieces for eWEEK.com, eWEEK magazine, and the Labs' Release Notes blog. Follow Andrew on Twitter at andrewrgarcia, or reach him by email at firstname.lastname@example.org.