Controller-based WLAN pitfalls: They’re expensive, don’t scale

Welcome to Controller Math 101. Let’s cut right to the chase, shall we?

Since Aruba is currently kicking Cisco’s teeth in, let’s name Aruba the top controller-based vendor and pick on them for the duration of this blog. Did you see that? A compliment and a dig in the same sentence. The boy has skillz. I have to pick on someone’s controller, and it might as well be theirs.

Aruba claims that their controller’s M3 module can handle 20Gbps of throughput, but upon closer inspection, it’s really only 4Gbps, because the 20Gbps number is for unencrypted throughput. How often do you run unencrypted Wi-Fi in an environment with enough traffic to need an M3 controller module?

If anything, such a controller module might be supporting a small bit of unencrypted guest access traffic, but soon even guest access will be secured with authentication and encryption. Some of the market secures guest access already, and soon everyone will be doing it – it’s easy to understand why, given the recent sidejacking tool called Firesheep. So, supposing that all traffic will be encrypted, that 20Gbps is suddenly 4Gbps (per their spec sheet).

Aruba’s 6000 series controller can house four M3 modules, so they claim that the 6000 series controller can handle up to 80Gbps of traffic. Impressive, no? Actually, no. That’s only 16Gbps of encrypted traffic, of course, and when you take the time to run the numbers (sketched in code below), you’ll quickly see:

  • Peak 802.11n AP throughput on 2.4 GHz radio (20 MHz channel) = 75Mbps
  • Peak 802.11n AP throughput on 5 GHz radio (40 MHz channel) = 175Mbps
  • Peak total = 250Mbps
  • Medium/large enterprise Wi-Fi infrastructure installation = 2,000 APs
  • 2,000 x 250Mbps = 500,000Mbps = 500Gbps
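
To make the arithmetic explicit, here’s a quick back-of-the-envelope sketch in Python using the per-AP estimates from the list above; these are rule-of-thumb figures, not vendor specs:

```python
# Back-of-the-envelope capacity math using the per-AP estimates above.
AP_2G4_MBPS = 75    # peak 802.11n throughput, 2.4 GHz radio, 20 MHz channel
AP_5G_MBPS = 175    # peak 802.11n throughput, 5 GHz radio, 40 MHz channel
AP_PEAK_MBPS = AP_2G4_MBPS + AP_5G_MBPS   # 250 Mbps per dual-radio AP

AP_COUNT = 2000     # medium/large enterprise installation

total_mbps = AP_COUNT * AP_PEAK_MBPS
print(f"{total_mbps:,} Mbps = {total_mbps / 1000:,.0f} Gbps")
# -> 500,000 Mbps = 500 Gbps
```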

That means you’ll need 32 Aruba 6000 series controllers (500Gbps ÷ 16Gbps per chassis = 31.25, rounded up), each decked out with FOUR woefully expensive M3 modules, EIGHT 10Gbps interfaces, multiple power supplies, and other accessories in order to handle a medium/large installation – based only on throughput, and discounting all other factors (VPN termination, remote APs, etc.).

If you want redundancy, you’ll need another eight 19″ racks to house those backup controllers. If you were cost-conscious, you could over-subscribe your network by 32X and buy 2 controllers (one for redundancy), with a full complement of feature licenses on both (required for redundancy). 32X oversubscription for just 2,000 APs…hmmm…that seems ridiculous.

What if the installation were 4,000 APs? Just double everything, of course! 1 Tbps of throughput capacity is now needed, and over-subscription is now at 32X for a pair of controllers WITHOUT redundancy. If you want redundancy at 32X over-subscription, you have to buy 4 of those fully-decked-out controllers with feature licenses. All that for just 4,000 APs???? That’s nuts.

It’s hysterical to me to note that for a redundant, non-oversubscribed network with just 4,000 APs, Aruba requires 125 fully-loaded (meaning FOUR M3 modules, extra power supplies, etc.) controllers with full feature licensing – that’s 1Tbps ÷ 16Gbps = 62.5 controllers’ worth of traffic, times two for redundancy. How many 19″ racks is that? Then you have to buy the APs and AirWave to manage the 125 controllers. Yeeks. Bye-bye, budget.
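
Here’s a minimal sketch of that controller math, assuming the 16Gbps of encrypted throughput per fully-loaded chassis discussed above. The ceiling-rounding convention is mine, which is why the redundant 4,000-AP case lands on 126 rather than the 125 you get by doubling 62.5 without rounding:

```python
import math

CONTROLLER_GBPS = 16      # 4 M3 modules x 4 Gbps encrypted throughput each

def controllers_needed(demand_gbps: float, redundant: bool = False) -> int:
    """Controllers required to carry demand_gbps with no oversubscription."""
    n = math.ceil(demand_gbps / CONTROLLER_GBPS)
    return 2 * n if redundant else n

def oversubscription(demand_gbps: float, active_controllers: int) -> float:
    """Ratio of offered traffic to available controller capacity."""
    return demand_gbps / (active_controllers * CONTROLLER_GBPS)

print(controllers_needed(500))                   # 32  (2,000 APs, no redundancy)
print(oversubscription(500, 1))                  # 31.25 -> the ~32X figure above
print(controllers_needed(1000, redundant=True))  # 126 (vs. 125 = 2 x 62.5 unrounded)
```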

What happens when 3×3:3 (3-spatial-stream) APs show up shortly? Per-AP throughput grows by roughly 1.5X (three streams instead of two), so briefly, that’s ~1.5Tbps for 4,000 APs and 48X oversubscription for a pair of fully-loaded controllers.
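
Scaling the same sketch for three spatial streams (a standalone snippet; the 375Mbps per-AP figure is implied by dividing the ~1.5Tbps total by 4,000 APs, not a measured number):

```python
AP_3SS_MBPS = 250 * 1.5      # 375 Mbps per AP: three spatial streams vs. two
total_gbps = 4000 * AP_3SS_MBPS / 1000
print(total_gbps)            # 1500.0 -> ~1.5 Tbps for 4,000 APs

pair_capacity_gbps = 2 * 16  # a pair of fully-loaded controllers
print(total_gbps / pair_capacity_gbps)  # 46.875, i.e. 1.5 x the earlier 31.25,
                                        # which the rounded figures call 48X
```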

OK, so you get the point: controllers don’t scale. With a massive wave of Mobile Internet Devices (MIDs) on the way, scalability matters.

In an interim attempt at scaling an unscalable solution, Aruba (and others) have implemented distributed (aka “local”) forwarding, which means that data flows bypass the controller on the way to their destination.

Controller bypass requires that stateful firewall and QoS policies be applied at the AP; otherwise applications will break and security holes will keep opening up. “Thin” APs were never designed to do this type of heavy lifting (which is why they’re called “thin” in the first place), so they already struggle with scalability and will continue to. In addition, who wants to pay for 125 fully-loaded controllers with feature licensing? Uh…not me.

A customer of mine recently rolled out 3,550 APs. If you do the math, that’s 250Mbps (per AP) x 3,550 APs = 887.5Gbps. That’s likely the largest-capacity Wi-Fi installation in existence today, though some of my other customers are close behind, and some even larger ones (over 1 Tbps of actual capacity) are just starting to roll out.

There are no controller bottlenecks in my product’s design, so per-AP capacity multiplies linearly by the number of APs to give system capacity. When we release our 3×3:3 AP, a 4,000-AP customer (using the numbers above) will have a 1.5Tbps Wi-Fi network – 48X the capacity of four super-large, fully-loaded controllers deployed with redundancy.

That’s all I have to say about that.


Devin Akin is Chief Wi-Fi Architect at Aerohive. Devin has over 10 years in the wireless LAN market and over 15 years in information technology. Devin's background includes working as a network design engineer for EarthLink, AT&T/BellSouth, Foundry Networks, and Sentinel Technologies as well as working as an RF engineer in the US Army. He has authored and edited several books with Wiley-Sybex and McGraw-Hill and holds some of the industry's most esteemed certifications, including CWNE, MCNE, MCSE, CCNP, CCDP, CCSP, and INFOSEC. He is considered an authority on Wi-Fi.