At the Interop conference in Las Vegas May 1, I sat in on a pretty fascinating panel discussion called “The Great WLAN Architectural Debate” that brought together representatives from five enterprise-grade wireless LAN companies — ostensibly to compare and contrast their technology with that of their competitors, particularly in regard to 802.11n. Moderated by IDC’s Abner Germanow, the discussion allowed each vendor the opportunity to highlight its differentiators on both the wired and wireless sides of the network — and to take some pointed potshots at its competitors in the process.
The panelists were:
Kurt Sauter – Xirrus
Keerti Melkote – Aruba Networks
Luc Roy – Siemens
Adam Conway – Aerohive Networks
David Confalonieri – Extricom
Below is my perception of what was said during the panel. It’s not a direct transcript, since I don’t type nearly fast enough for that, so I may have missed some points here or there. Nonetheless, what follows captures the spirit of what turned out to be a fascinating and lively debate.
Germanow – Discuss a misconception you think exists about 802.11n.
Sauter – That the standard is not here. It is true that the final standard is not ready, but when the Wi-Fi Alliance certified Draft 2.0, that draft became the de facto standard allowing us all to move forward. Interoperability is great, and there are hundreds of devices now available. People may be skeptical when they start looking into the technology, but they quickly become believers.
Roy – All access points can use power over Ethernet equally. Many products may support POE, but will do so with degraded performance, as some devices will step back from three to two transmit chains to run on POE. You need to read the fine print. However, there are solutions out there that are fully POE-compliant, and with these you will run greener as a result. There are 4- or 5-watt differences between products.
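Those per-AP wattage differences compound across a large deployment. Here is a rough, hypothetical illustration; the AP count, duty cycle and electricity rate below are assumptions for the sake of the math, not figures from the panel:

```python
# Rough annual energy cost of the "4- or 5-watt" per-AP difference Roy
# mentions. All inputs are illustrative assumptions, not measurements
# of any vendor's hardware.
def annual_kwh(watts: float, hours: float = 24 * 365) -> float:
    """Energy consumed in a year by a device drawing `watts` continuously."""
    return watts * hours / 1000.0

def fleet_cost(ap_count: int, delta_watts: float, usd_per_kwh: float = 0.12) -> float:
    """Extra yearly energy cost for a fleet of APs each drawing delta_watts more."""
    return ap_count * annual_kwh(delta_watts) * usd_per_kwh

# A 5-watt delta across 500 always-on APs at $0.12/kWh:
print(round(fleet_cost(500, 5.0), 2))  # 2628.0 (dollars per year)
```

Small per-device numbers, but over hundreds of APs running around the clock, the delta is real money, which is presumably Roy's point.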
Sauter – I’ve been involved with 802.11n for about eight years and the real promise is in the 5GHz band – that’s where more channels are to do bonding, to bring home the full 300M-bps data rates we talk about. People will be disappointed with performance in the 2.4 band. If you are looking for architecture, look at 5GHz because it will enable the best performance.
Confalonieri – We believe the opposite of what he [Sauter] said. Everyone has invested in the 2.4 band, and we believe you should not close the door on that band. It has been vilified by companies, but we feel there is a way to do 802.11n effectively in 2.4 GHz. It is both possible and viable — economically and performance-wise — to run 11n with 802.11a and 802.11g at the same time. 802.11n is a tremendous improvement in speed or range — but I didn’t say both. You are forced to choose what you emphasize with other solutions.
Melkote – That you need 802.11n clients to gain benefits from an 11n infrastructure. You don’t. With MIMO [multiple input, multiple output], instead of having one path to a legacy client, you will have multiple paths. Then the client selects the best connection. You can more easily connect at the 54M-bps speeds promised by 802.11a and 802.11g. I feel that with this in mind, there is not a good reason to wait for 11n clients to make the switch.
Germanow – Tell us why your architecture is best. Start on the RF side.
Sauter – Moving to 802.11n on the client or infrastructure is a good thing to do now, I agree. It won’t hurt your existing environment. With Xirrus, we were designed four years ago for 11n. The radios are modular so you can snap out old ones, snap in 802.11n radios and you are ready to go. We are doing 3 by 3 MIMO. We have a directional antenna system, so you can get far better rate and far better range. There is a misconception that all you need is one 802.11n radio or AP. Really, you need multiple ones out there that support 11n at the same time. Just as important, some people talk about contention. You can have clients that connect at 802.11n rates on certain radios and legacy clients on other radios — you can segregate your traffic this way. We have a way to load balance across 802.11n radios because there is still contention with 802.11n. 802.11bg will slow down 802.11n in 2.4GHz and 802.11a will slow down 802.11n in 5GHz.
Melkote – APs are centrally controlled from our controller. We don’t limit you to the 2.4GHz or 5GHz band. We have adaptive radio management that can sense the environment, then autoselect the right mode of operation including channel and power. Location detection, voice and other applications are happening at the application layer, and ARM tunes the radios on the whole system to deal with the apps you are using.
Roy – Who knows what kind of application you will use on your wireless network tomorrow? Can you pick an architecture based on what you don’t know? No. We use what we call a fit AP — it has intelligence in there. And there is intelligence at the back end of the network. Take videoconferencing: Wouldn’t you want to parse traffic at the AP rather than passing back to controller? Other times, you will want to tunnel back — like with voice, where you may want to move a call around the network. Configure the channel, do RF power automatically — and if a neighbor AP fails, power up the surrounding ones. Siemens says, don’t pick a religion, pick a solution that fits both ways.
Conway – Ours is similar to an overlapping cell architecture, where each AP runs on a different channel — there are three in the 2.4GHz band, 24 (with the new ones just opened up) in 5GHz. With channel bonding, you can get 12 40MHz channels in 5GHz each doing 300M bps — that’s a lot of bandwidth. [NOTE: The speaker initially said there were 24 300M-bps channels in 5GHz, but was quickly corrected.] We take advantage of all of them. Having intelligence at the AP — that is how we grew up. We’ve developed rich QoS [quality of service] solutions that are aware of applications and manipulate RF to prioritize the right traffic.
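Conway's channel math is easy to sanity-check. A minimal sketch, keeping in mind that the 300M-bps figure quoted in the panel is a raw PHY rate, not real-world throughput:

```python
# Back-of-the-envelope aggregate capacity from channel bonding,
# using the channel counts cited in the panel (illustrative only).
def bonded_channels(channels_20mhz: int, bond_width: int = 2) -> int:
    """How many bonded channels fit in a pool of 20MHz channels."""
    return channels_20mhz // bond_width

def aggregate_mbps(channels: int, per_channel_mbps: int = 300) -> int:
    """Total raw PHY-rate capacity across non-overlapping channels."""
    return channels * per_channel_mbps

# 5GHz: 24 x 20MHz channels bond down to 12 x 40MHz channels.
print(aggregate_mbps(bonded_channels(24)))  # 3600 (Mbps of raw capacity)

# 2.4GHz: only 3 non-overlapping channels, so bonding leaves just one.
print(aggregate_mbps(bonded_channels(3)))   # 300
```

The asymmetry in the last two lines is the crux of the earlier Sauter/Confalonieri disagreement: bonding in 2.4GHz consumes most of the band, while 5GHz has room to spare.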
Confalonieri – Our architecture is the only one here not doing cell planning. With cell planning, one AP has its own channel. Extricom has four radios in (some of) our APs, and we are simultaneously running each on its own channel. We take spectrum and allow that channel to be available anywhere — not allocating channel but spreading it out. APs are transceivers being fully managed by the switch. We have the most efficient use of spectrum, creating a zone out of the blanket of coverage provided by each radio. This allows me to ask what I want to use each zone for. One could be a 40MHz 802.11n channel in the 2.4GHz band, one can run 802.11g for legacy support, one for 5GHz 802.11n, and the other for 802.11a legacy support. Forget about RF complexities, we can now think of these zones as four pipes with high performance — what do you want to do with them? Complexities of RF are absorbed within the technology, while still honoring the principles of the original 802.11 specification, which had the original conception of a single cell with clients. We make the assumption you are not a greenfield deployment, you’ve already laid out your cabling for your wireless network components. You can deploy Extricom APs — I’m loath to say migrate — but rather add 802.11n as a one-for-one replacement of your existing 2.4GHz APs. Keep the same location. When the day comes when there are enough clients out there, you can go to 5GHz, but keep in mind, this band has different propagation characteristics and not much has been written about cell planning yet. Then, you expand your footprint to get to 5GHz.
Sauter – I think everyone here has automatic cell sizing and channel management — it’s kind of a given. From an RF perspective, what is different out there? It is performance for end user laptops — you need to deliver more speed to them. At last year’s Interop conference, we covered this whole place with just 12 of our arrays. Others have done it with five times more devices in the past [NOTE: Aruba provided the WLAN at Interop in 2008]. The cost of pulling those cables, especially at the prices charged by union labor, can be out of sight.
Melkote – No one has to do channel planning anymore. Our APs are more intelligent than single-channel models, where if you get traffic from a neighbor (like a Starbucks), you can lose the whole network. The IEEE and FCC are constantly opening more channels. Why are they doing it? Because there is need for that capacity. Single-channel systems will not scale to do this. You don’t need to crowd one channel; spread it around across the band and you will get a lot more bandwidth and less co-channel interference. To make it look like one AP, you are breaking a lot of rules.
Confalonieri – In regard to the Starbucks thing, it is categorically false that we will get taken out. You have to understand how we work — we still follow polite 802.11 protocols. There is nothing intrinsically unscalable about a single-channel model that uses the same spectrum. The way we do it, we allow multiple links in the same blanket at the same time — we call it TruReuse. And we are multichannel — each channel is available at every AP. Extricom and Meru are the only ones doing single channel, but in truth we are not “single channel” — we are “same channel.”
Melkote – I fail to understand how you can add more RF devices and expect more — 802.11 is a contention-based protocol. There is only so much juice you can squeeze out of one 20MHz channel.
Sauter – You don’t have control over the client side. We say use as many channels as you can. It gives you more capacity.
Germanow – Let’s talk about the cabled side of the WLAN. What advantages do you bring to that side of the equation?
Roy – Let’s look at the cost of a new controller. It’s not too excessive, especially if you are using 400-500 APs. [Asks audience] How many of you have a wireless LAN and how many have a guest policy SSID [service set identifier]? Is there tight security around that? You want to tunnel traffic back to ensure policy. The challenge with a fat AP is that you have to manage them individually. When you have a controller, the entire life cycle of APs becomes a simple dance — APs automatically get configurations. If you have a controller with fat AP traits, though, that is the best of both worlds.
Conway – 802.11n changes your network. Things will have higher bandwidth needs and traffic will be more bursty. If you look at how traffic flows, there is 1) client to client, 2) client to workgroup (like a printer), 3) client to core (servers), 4) client to Internet. With a classic convenience WLAN network that was slower, it made sense to bring all that traffic back to the controller. With 802.11n, that traffic gets huge. You suddenly have multiple 10 Gigabit Ethernet adapters on controllers. With Aerohive, we have central management with distributed control and data planes. In general, we believe traffic should flow as it does on a wired network. You’ve invested in infrastructure, so use it to carry your data. Resiliency of your WLAN becomes more important as it becomes a primary connection — you need to have the same resiliency as your wired network. You need stateful failovers — and you can’t afford to lose a controller and get a WLAN blackout.
Confalonieri – We believe in distributed processing. Our switches are in a distribution layer — a grouping of controllers. All traffic comes from clients to this zone — the switch is Layer 2, sitting at the distribution point. Think of a switch and its APs as a Wi-Fi zone. Distribute the processing load out at the edge to deal with heavy traffic bursts coming from one area. In Layer 3 and at the core, we centralize our management component to maintain configuration of the switches. There is no configuration in our APs; they are literally just transceivers. Switches are where the logic lies and the AP is a listening post only.
Sauter – We are a little like Aerohive. Our controller is at the edge of the network right next to the radios. So we are not an overlay network. This allows lots of horsepower out at the edge. Intelligence continues to get pushed to the edge of the network, just like with the wired network. We have put all intelligence and the controller at the edge to do processing — of encryption, packets, etc. And be sure to check into the encryption processing capabilities of any network. There are a lot of 802.11n products coming online, but encryption processing has not kept up with the network speeds. Look at data sheets to see how much encryption processing can be done.
Melkote – We pioneered centralized controllers. Most traffic in the network hits our controller. But if you look at traffic patterns, it is going through the core network anyway. Yes, there is management of access, but there is management of service as well. Mobility, security, etc. — if you push it to the edge of the network, who is left holding the bag to manage that? Customers. It makes life simpler to centralize. This would be a problem if the controller doesn’t scale, so we invested up front to build a hardware-based controller to give the scale you need. We had the first programmable high-speed logic. We can send 80G bps of traffic to our controllers. However, there is some need for distributed processing — you don’t want to put everything across a slow-speed WAN link, for instance. We have ways to handle that.
Germanow – How do you add all these services to your architectures? As an example, how do you do locationing on your system?
Confalonieri – The mix of applications you want to drive and what architecture you pick are correlated. Voice requires a steady stream of data and high mobility. RFID [radio-frequency identification] has different requirements — triangulation accuracy — you want to hear the tag from more points of listening. Ironically, what we see is that bringing APs together for more location accuracy reduces your mobility because of the handoffs between APs. We approach it by decoupling the relationship between coverage, mobility and security. Deploy APs in any density you want, because we don’t have handoff delay and it doesn’t hurt voice quality. Having multiple radios then gives me a chance to segregate applications — put data users on one network, voice on another, RFID on a third.
Conway – Guest networking and location are important use cases, and they are not possible with a wired network. Because you can triangulate location, it opens a lot of possible applications. Making sure the guest network is secure means putting users in a database and storing them. We provide both services.
Melkote – What asset are you tracking, and how do you find it? Active assets like laptops are constantly communicating, so it is fairly easy to find them — via triangulation, or some do fingerprinting. We can triangulate active assets to within 3 meters. Asset tags, on the other hand, don’t communicate in Wi-Fi. They only chirp occasionally rather than communicating constantly. You need to support tag protocols — and we support all of them. We identify where an asset is, then put it in an API for external applications to access. You can store a history to see how the device moves over time.
Sauter – With Xirrus, we don’t need to carpet bomb the building with APs to see where things are. We listen to asset tags and stations on all channels at all times on our directional antennae, so we get a better RF fingerprint. Give me two arrays and you get great locationing. It’s innate and built into our systems.
Audience question – What should we be looking at for 802.11n clients?
Conway – You need client management in place. Windows and Active Directory work well, but there are other solutions out there.
Sauter – Starting now, make sure to buy devices with 802.11a/b/g/n support, please!