Wi-Fi 6 Release 2, Or Why Naming Conventions Suck

I just noticed that the Wi-Fi Alliance announced a new spec for Wi-Fi 6 and Wi-Fi 6E. Long-time readers of this blog will know that I am a fan of referring to technology by the standard, not by a catchy term that serves as a way to trademark something, like Pentium. Anyway, this updated standard for wireless communications was announced on January 5th at CES and seems to be the latest entry in the long line of embarrassing naming decisions from companies that forget to think ahead.

Standards Bodies Suck

Let’s look at what’s included in the new release for Wi-Fi 6. The first and likely biggest thing to crow about is uplink multi-user MIMO. This technology is designed to enhance performance and reduce latency for things like video conferencing and uploading data. Essentially, it creates multi-user MIMO for data headed back in the other direction. When the standard was first announced in 2018, who knew we would spend two years using Zoom for everything? This adds functionality to help alleviate congestion for applications that upload lots of data.

The second new feature is power management. This one is aimed primarily at IoT devices. Broadcast target wake time (TWT), extended sleep time, and multi-user spatial multiplexing power save (SMPS) are all aimed at battery-powered devices. While the notes say that it’s an enterprise feature, I’d argue this is aimed at the legion of new devices that need to be put into deep sleep mode and told to wake up at periodic intervals to transmit data. That’s not a laptop. That’s a sensor.
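
To see why scheduled wake-ups matter so much for battery life, here’s a back-of-the-envelope sketch in Python. Every current draw and timing number below is an assumption I picked for illustration, not a measurement from any real sensor:

```python
# Back-of-the-envelope battery life for a TWT-style sleepy sensor.
# All numbers are assumed values for illustration only.

BATTERY_MAH = 2000      # assumed battery capacity
ACTIVE_MA = 200         # assumed current while the radio transmits
SLEEP_MA = 0.01         # assumed deep-sleep current (10 microamps)
AWAKE_SECONDS = 0.05    # assumed 50 ms awake per report
WAKE_INTERVAL = 60      # broadcast TWT wakes the device once a minute

def battery_days(active_ma, sleep_ma, awake_s, interval_s):
    """Average current over one wake interval, converted to days of runtime."""
    avg_ma = (active_ma * awake_s + sleep_ma * (interval_s - awake_s)) / interval_s
    return BATTERY_MAH / avg_ma / 24

print(f"TWT sleepy sensor: {battery_days(ACTIVE_MA, SLEEP_MA, AWAKE_SECONDS, WAKE_INTERVAL):.0f} days")
print(f"Always listening:  {battery_days(50, 50, 1, 1):.1f} days")  # ~50 mA constant idle listen
```

With these made-up numbers the duty-cycled sensor runs for over a year while the always-listening radio dies in under two days. That gap is the whole point of the power management features.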

Okay, so why are we getting these features now? I’d be willing to bet that these were the sacrificial items that were holding up the release of the original 802.11ax spec. Standards bodies often find themselves in a pickle because they need to get specifications out the door so manufacturers can start making gear. However, if there are holdups in the process it can delay time-to-market and force manufacturers to wait or take a gamble on the supported feature set. And if a particular feature is being hotly debated it’s often dropped to settle the argument or because it’s too complex to implement.

These features are what have been added to the new specification, which doesn’t appear to change the 802.11ax protocol name. And, of course, these features must be added to new hardware in order to be available, both in radios and client devices. So don’t expect to have the hot new Release 2 stuff in your hands just yet.

A Marketing Term By Any Other Name Stinks

Here’s where I’m just shaking my head and giggling to myself. Wi-Fi 6 Release 2 includes improvements for all three supported bands of 802.11ax – 2.4GHz, 5GHz, and 6GHz. That means that Wi-Fi 6 Release 2 supersedes both Wi-Fi 6 and Wi-Fi 6E, which were designed to denote 802.11ax in the originally supported 2.4GHz and 5GHz bands and then in the 6GHz band once the FCC opened it in the US.

Let’s all step back and realize that the plan to simplify the Wi-Fi Alliance’s naming convention for marketing has failed spectacularly. In an effort to avoid confusing consumers by creating a naming convention that just counts up, the Wi-Fi Alliance has committed the third biggest blunder: they forgot to leave room for expansion!

If you’re old enough you probably remember Windows 3.1. It was the biggest version of Windows up to that time. It was the GUI I cut my teeth on. Later, there were new features that were added, which meant that Microsoft created Windows 3.11, a minor release. There was also a network-enabled version, Windows for Workgroups 3.11, which included still other features. Was Windows 3.11 just as good as Windows for Workgroups 3.11? Should I just wait for Windows 4.0?

Microsoft fixed this issue by naming the next version Windows 95, which created a bigger mess. Anyone that knows about Windows 95 releases knows that the later ones had huge improvements that made PCs easier to use. What was that version called? No, not Windows 97 or whatever the year would have been. It was Windows 95 OEM Service Release 2 (Win95OSR2). That was a mouthful for any tech support person at the time. And it showed why creating naming conventions around years was a dumb idea.

Now we find ourselves in the mess of having a naming convention that shows major releases of the protocol. Except what happens when we have a minor release? We can’t call it by the old name because people won’t be impressed that it contains new features. Can we add a decimal to the name? No, because that will mess up the clean marketing icons that have already been created. We can’t call it Wi-Fi 7 because that’s already been reserved for the next protocol version. Let’s just stick “release 2” on the end!

Just like with 802.11ac Wave 2, the Wi-Fi Alliance is backed into a corner. They can’t change what they’ve done to make things easier without making it more complicated. They can’t call it Wi-Fi 7 because there isn’t enough difference between Wi-Fi 6 and 6E to really make it matter. So they’re just adding Release 2 and hoping for the best. Which will be even more complicated when people have to start denoting support for 6GHz, which isn’t universal, with monikers like Wi-Fi 6E Release 2 or Wi-Fi 6 Release 2 Plus 6E Support. This can of worms is going to wiggle for a long time to come.


Tom’s Take

I sincerely hope that someone that advised the Wi-Fi Alliance back in 2018 told them that trying to simplify the naming convention was going to bite them in the ass. Trying to be cool and hip comes with the cost of not being able to differentiate between minor version releases. You trade simplicity for precision. And you mess up all those neat icons you built. Because no one is going to legitimately spend hours at Best Buy comparing the feature sets of Wi-Fi 6, Wi-Fi 6E, and Wi-Fi 6 Release 2. They’re going to buy what’s on sale or what looks the coolest and be done with it. All that hard work for nothing. Maybe the Wi-Fi Alliance will have it figured out by the time Wi-Fi 7.5 Release Brown comes out in 2025.

You Down with IoT? You Better Be!

Did you see the big announcement from AWS re:Invent that Amazon has a preview of a Private 5G service? It probably got buried under the 200 other announcements covering so many other things, so I’ll forgive you for missing it. Especially if you also managed to miss a few of the “hot takes” that mentioned how Amazon was trying to become a cellular provider. If I rolled my eyes any harder I might have caused permanent damage. Leave it to the professionals to screw up what seems to be the most cut-and-dried case of not reading the room.

Amazon doesn’t care about providing mobile service. How in the hell did we already forget about the Amazon (dumpster) Fire Phone? Amazon isn’t trying to supplant AT&T or Verizon. They are trying to provide additional connectivity for their IoT devices. It’s about as clear as it can get.

Remember all the flap about Amazon Sidewalk? How IoT devices were going to use 900 MHz to connect to each other if they had no other connectivity? Well, now it doesn’t matter because as long as one speaker or doorbell has a SIM slot for a private 5G or CBRS node then everything else can connect to it too. Who’s to say they aren’t going to start putting those slots in everything going forward? I’d be willing to bet the farm that they are. It’s cheap compared to upgrading everything to use 802.11ax radios or 6 GHz technology. And the benefits for Amazon are legion.

It’s Your Density

Have you ever designed a wireless network for a high-density deployment? Like a stadium or a lecture hall? The needs of your infrastructure look radically different compared to your home. You’re not planning for a couple of devices spread across a few thousand square feet. You’re thinking about dozens or even hundreds of devices in the most cramped space possible. To say that a stadium is one of the most hostile environments out there is underselling both the rabid loyalty of your average fan and the wireless airspace they’re using to post about how the other team sucks.

You know who does have a lot of experience designing high-density deployments with hundreds of devices? Cellular and mobile providers. That’s because those networks and devices were designed from the start to tolerate hostile environments and higher-density deployments. Anyone that can think back to the halcyon days of 3G, and how crazy it got when you went to Cisco Live and had no cell coverage in the hotel until you got to the wireless network in the convention center, may disagree with me. But that exact scenario is why providers started focusing more on the number of connected devices instead of the total throughput of the tower. It was more important in the long run to get devices connected at lower data rates than it was to pump up the wattage and let a few devices shine at the expense of all the others that couldn’t get connected.

In today’s 5G landscape, it’s all about the clients. High density and good throughput. And that’s for devices with a human attached to them. Sure, we all carry a mobile phone and a laptop and maybe a tablet that are all connected to the Wi-Fi network. With IoT, the game changes significantly. Even in your consumer-focused IoT landscape you can probably think of ten devices around you right now that are connected to the network, from garage door openers to thermostats to light switches or light bulbs.

IoT at Work

In the enterprise it’s going to get crazy with industrial and operational IoT. Every building is going to have sensors packed all over the place. Temperature, humidity, occupancy, and more are going to be little tags on the walls sampling data and feeding it back to the system dashboard. Every piece of equipment you use on a factory floor is going to be connected, either by default, through upgrade kits, or with add-on networking gear that provides an interface to the control system. If it can talk to the Internet it’s going to be enabled to do it. And that’s going to crush your average Wi-Fi network unless you build it like a stadium.
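
If you want to convince yourself of the scale problem, a quick airtime estimate helps. This is a hand-wavy Python sketch with assumed numbers, not a capacity planning tool:

```python
# Rough airtime math for a building full of chatty sensors.
# Assumed: each report burns ~2 ms of airtime once you count preambles,
# ACKs, and the low data rates cheap IoT radios tend to connect at.

SENSORS = 5000                 # assumed tag/probe count in one building
AIRTIME_PER_REPORT_S = 0.002   # assumed ~2 ms of airtime per report
REPORT_INTERVAL_S = 10         # assumed: each sensor reports every 10 s

# Fraction of one channel's airtime eaten by sensor traffic alone
utilization = SENSORS * AIRTIME_PER_REPORT_S / REPORT_INTERVAL_S
print(f"Channel utilization from sensors alone: {utilization:.0%}")
# => 100% -- the channel is saturated before a single laptop connects
```

Change any of those assumptions and the exact number moves, but the shape of the problem doesn’t: thousands of tiny talkers on one shared channel add up fast.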

On the other hand, private 5G and private LTE deployments are built for this scale. And because they’re lightly licensed compared to full-on provider setups you can deploy them without causing interference. In the CBRS model, as long as a priority license holder isn’t active nearby, the spectrum access system will clear you to transmit and you can get moving. And as soon as you order the devices that have SIM slots you can plug in your cards and off you go!

I wouldn’t be shocked to see Amazon start offering a “new” lineup of enterprise-ready IoT devices with pre-installed SIMs for the Amazon Private 5G service. Just buy these infrastructure devices from us, click the button on your AWS dashboard, and you can have on-prem 5G. Hell, call it Network Outpost or something. Just install it and pay us and we’ll take care of the rest for you. And as soon as they get you locked into their services, they’ve got you. Because if you’re already using those devices with 5G, why would you want to go through the pain of configuring them for the Wi-Fi?

This isn’t a play for consumers. Configuring a consumer-grade Wi-Fi router from a big box store is one thing. Private 5G is beyond most people, even if it’s a managed service. It also offers no advantages for Amazon. Because private 5G in the consumer space is just like hardware sales. Customers aren’t going to buy features as much as they’re shopping for the lowest sticker price. In the enterprise, Amazon can attach private 5G service to existing cloud spend and make a fortune while at the same time ensuring their IoT devices are connected at all times and possibly even streaming telemetry and collecting anonymized data, depending on how the operations contracts are written. But that’s a whole different mess of data privacy.


Tom’s Take

I’ve said it before but I’ll repeat it until we finally get the picture: IoT and 5G are now joined at the hip and will continue to grow together in the enterprise. Anyone out there that sees IoT as a hobby for home automation or sees 5G as a mere mobile phone feature will be enjoying their Betamax movies along with web apps on their mobile phones. This is bigger than the consumer space. The number of companies jumping into the private 5G arena should prove there’s a fire behind all that smoke, and the signal fires say Gondor is calling for aid. It’s time to get on board with IoT and 5G. The future isn’t a thick client with a Wi-Fi stack that you need to configure. It’s a small sensor with a SIM slot running on a private network someone else manages for you. Are you down with that?

Is the M1 MacBook Pro Wi-Fi Really Slower?

I ordered a new M1 MacBook Pro to upgrade my existing model from 2016. I’m still waiting on it to arrive but managed to catch a sensationalist headline in the process:

“New MacBook Wi-Fi Slower than Intel Model!”

The article referenced this spec sheet from Apple listing the various cards and capabilities of the MacBook Pro line. I looked it over and found that, according to the tables, the wireless card in the M1 MacBook Pro is capable of a maximum data rate of 1200 Mbps. The wireless card in the older Intel MacBook Pro models, going all the way back to 2017, is capable of 1300 Mbps. Case closed! The older one is indeed faster. Except that’s not the case anywhere but on paper.

PHYs, Damned Lies, and Statistics

You’d be forgiven for jumping right to the numbers in the table and using your first-grade inequality math to figure out that 1300 is bigger than 1200. I’m sure that’s what the authors of the article did. Me? I decided to dig in a little deeper to find some answers.

It only took me about 10 seconds to find the first answer for one of the differences in the numbers. The older MacBook Pro used a Wi-Fi card that was capable of three spatial streams (3SS). Non-wireless nerds reading this post may wonder what a spatial stream is. The short answer is that it is a separate, unique stream of data along a different path. Multiple spatial streams can be leveraged through multiple-input, multiple-output (MIMO) technology to increase the amount of data being sent to a wireless client.

The older MacBook Pro has support for 3SS. The new M1 MacBook Pro has a card that supports up to 2SS. Problem solved, right? Well, not exactly. You’re also talking about client radios that support different wireless protocols. The older model supported 802.11n (Wi-Fi 4) and 802.11ac (Wi-Fi 5) only. The newer model supports 802.11ax (Wi-Fi 6) as well. The maximum data rates on the Apple support page are quoted for 11ac on the Intel MBP and for 11ax on the M1 MBP.

Okay, so there are different Wi-Fi standards at play here. Can’t be too hard to figure out, right? Except that the move from Wi-Fi 5 to Wi-Fi 6 is more than just incrementing the number. There are a huge number of advances that have been included to increase efficiency of transmission and ensure that devices can get on and off the air quickly to help maximize throughput. It’s not unlike the difference between the M1 chip in the MacBook and its older Intel counterpart. They may both do processing but the way they do it is radically different.

You also have to understand something called the Modulation and Coding Scheme (MCS). An MCS index defines the data rates possible for a given combination of signal-to-noise ratio (SNR), RSSI, and Quadrature Amplitude Modulation (QAM). Trying to define QAM could take all day, so I’ll just leave it to GT Hill to do it for me:

The MCS table for a given protocol will tell you what the maximum data rate for the client radio is. Let’s look at the older MacBook Pro first. Here’s a resource from NetBeez that has the 802.11ac MCS rates. If you look up the details from the Apple support doc for a 3SS radio using VHT MCS 9 and an 80MHz channel bandwidth you’ll find the rate is exactly 1300 Mbps.

Here’s the MCS table for 802.11ax, courtesy of Francois Verges. WAY bigger, right? You’re likely going to want to click on the link to the Google Sheet in his post to be able to read it without a microscope. If you look at the table and find the row for an 11ax client using 2SS, HE MCS 11, and an 80MHz channel bandwidth you’ll see that the number is 1201. I’ll forgive Apple for rounding it down to keep the comparison consistent.
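
If you want to sanity-check those two table entries yourself, the PHY rate math fits in a few lines. Here’s a minimal Python sketch using the standard 80MHz subcarrier counts and symbol durations (short guard interval for 11ac, 0.8µs GI for 11ax); the constants come from the published MCS tables, the code is just my own illustration:

```python
# PHY data rate = streams x data subcarriers x coded bits per subcarrier
#                 x coding rate / symbol duration (in microseconds)
def phy_rate_mbps(n_ss, n_sd, bits_per_sc, coding_rate, t_sym_us):
    return n_ss * n_sd * bits_per_sc * coding_rate / t_sym_us

# 802.11ac, 80 MHz: 234 data subcarriers, 3.6 us symbol with short GI.
# VHT MCS 9 is 256-QAM (8 bits/subcarrier) at 5/6 coding.
ac_3ss = phy_rate_mbps(3, 234, 8, 5 / 6, 3.6)

# 802.11ax, 80 MHz: 980 data subcarriers, 13.6 us symbol (12.8 + 0.8 GI).
# HE MCS 11 is 1024-QAM (10 bits/subcarrier) at 5/6 coding.
ax_2ss = phy_rate_mbps(2, 980, 10, 5 / 6, 13.6)

print(f"3SS 11ac, VHT MCS 9, 80 MHz: {ac_3ss:.0f} Mbps")  # 1300
print(f"2SS 11ax, HE MCS 11, 80 MHz: {ax_2ss:.0f} Mbps")  # 1201
```

Divide each result by its stream count and you get the per-stream numbers I’ll come back to below: roughly 433 Mbps per stream for 11ac and about 600 Mbps per stream for 11ax.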

Again, this all checks out. The Wi-Fi equivalent of actuarial tables says that the older one is faster. And it is, under absolutely perfect conditions. Because the quoted numbers in the Apple document are the maximums for those MCS rates. When’s the last time you got the maximum amount of throughput on a wired link? Now remember that in this case you’re going to need perfect airtime conditions to get there. Which usually means you’ve got to be right up against the AP or within a very short distance of it. And that 80MHz channel bandwidth? As my friend Sam Clements says, that’s like drag racing a school bus.

The World Isn’t Made Out Of Paper

If you are just taking the numbers off of a table and reproducing them and claiming one is better than the other then you’re probably the kind of person that makes buying decisions for your car based on what the highest number on the speedometer says. Why take into account other factors like cargo capacity, passenger size, or even convertible capability? The numbers on this one go higher!

In fact, when you unpack the numbers as I did, you’ll see that the apparent 100 Mbps difference between the two radios isn’t likely to come into play at all in the real world. As soon as you move more than 15 feet away from the AP or put a wall between the client device and your AP you will see a reduction in the data rate. The top end of both of these protocols runs in the 5GHz spectrum, which isn’t as forgiving with walls as 2.4GHz is. Moreover, if there are other interfering sources in your environment you’re not going to get nearly the amount of throughput you’d like.
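
If you’re wondering how quickly distance eats into those numbers, free space path loss gives a feel for it. Here’s a quick Python sketch; the wall penalty at the end is a common rule-of-thumb figure I’m assuming, not a measured value:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free space path loss: 20*log10(d) + 20*log10(f) - 27.55 (d in m, f in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

d = 4.6  # roughly 15 feet in meters
print(f"5 GHz loss at 15 ft:   {fspl_db(d, 5180):.1f} dB")  # ~60 dB
print(f"2.4 GHz loss at 15 ft: {fspl_db(d, 2437):.1f} dB")  # ~53 dB
# 5 GHz starts ~6-7 dB behind 2.4 GHz at the same distance, and a typical
# interior wall costs another ~3-5 dB on top of that (rule of thumb).
```

Every few dB of extra loss pushes the radio down the MCS table, so the quoted maximums evaporate within a room or two.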

What about that difference in spatial streams? I wondered about that for the longest time. Why would you purposely put fewer spatial streams in a client device when you know that you could max it out? The answer is that even with that many spatial streams, reality is a very different beast. Devin Akin wrote a post about why throughput numbers aren’t always the same as the tables. In that post he mentioned that a typical client mix in a network in 2018 was about 66% devices with 1SS, 33% devices with 2SS, and less than 1% of devices with 3SS. While those numbers have probably changed in 2021 thanks to the iPhone and iPad now having 2SS radios, I don’t think the 3SS numbers have moved much. The only devices that have 3SS are laptops and other bigger units. It’s hard for a small device to supply the power and antenna spacing a 3SS radio needs, so most devices only include support for two streams.

The other thing to notice here is that the value a spatial stream brings is different between the two protocols. In 802.11ac, the max data rate for a single spatial stream is about 433 Mbps. For 802.11ax it’s about 600 Mbps. So a 2SS 11ac radio maxes out at 866 Mbps while a 3SS 11ax setup would get you around 1800 Mbps. It’s far more likely that you’ll get efficient everyday use out of a 2SS 11ax radio than that you’ll ever see the maximum throughput of a 3SS 11ac radio.


Tom’s Take

This whole tale is a cautionary example of why you need to do your own research, even if you aren’t a Wi-Fi pro. The headline was both technically correct and wildly inaccurate. Yes, the numbers were different. Yes, the numbers favored the older model. But no one is going to see the maximum throughput under most normal conditions. And yes, having support for Wi-Fi 6 in the new MacBook Pro is the better choice overall. You’re not going to miss that 100 Mbps of throughput in your daily life. Instead you’re going to enjoy a better protocol with more responsiveness in the bands you use on a regular basis. You’re still faster than a gigabit Ethernet adapter, so enjoy the future of Wi-Fi. And don’t believe the numbers on paper.

Private 5G Needs Complexity To Thrive

I know we talk about the subject of private 5G a lot in the industry, but there are more players coming out every day looking to add their voice to the growing chorus of supporters of these solutions. And despite the fact that we tend to see 5G and Wi-Fi technologies as ships passing in the night, this discussion isn’t going to go away any time soon. In part that’s because decision makers aren’t quite savvy enough to distinguish between the technologies, thinking all wireless communications are pretty much the same.

I think we’re not going to see much overlap between these two technologies. But the reasons why aren’t quite what you might think.

Walking Workforces

Working from anywhere other than the traditional office is here to stay. Every major Silicon Valley company has looked at the cost-benefit analysis and decided to let workers do their thing from where they live. How can I tell it’s permanent? Because they’re reducing salaries for those that choose to stay away from the Bay Area. That carrot is still enticing enough for workers to take, and formalizing pay policy around remote work means the companies have no incentive left to make people want to move back to working from an office.

Mobile workers don’t care about how they connect. As long as they can get online they are able to get things done. They are the prime use case for 5G and private 5G deployments. Who cares about the Wi-Fi at a coffee shop if you’ve got fast connectivity built in to your mobile phone or tablet? Moreover, I can also see a few of the more heavily regulated companies requiring you to use a 5G uplink to connect to sensitive data through a VPN or other technology. It eliminates some of the issues with wireless protection methods and ensures that no one can easily snoop on what you’re sending.

Mobile workers will start to demand 5G in their devices. It’s a no-brainer for it to be in the phone and the tablet. For laptops it will be a smart decision at some point too, especially once enough people have swapped over to using tablets. I use my laptop every day when I work but I’m finding myself turning to my iPad more and more. Not for any magical reason but because it’s convenient if I want to work from somewhere other than my desk. I think that when laptops hit a wall from a performance standpoint you’re going to see a lot of manufacturers start to include 5G as a connection option to lure people back to them instead of abandoning them to the big tablet competition.

However, 5G is really only a killer technology for these more complex devices. The cost of a 5G radio isn’t inconsequential compared to the overall cost of a device. After all, Apple raised the price of the iPad when they included a 5G radio, didn’t they? You could argue that they didn’t do the same when they upgraded the iPhone to a 5G chipset, but cellular technology is much more integral to the iPhone than to the iPad. As companies examine how they are going to move forward with their radio technology it only makes sense to put 5G radios in things that have ample space, appropriate power, and the ability to recover the costs of including the chips. It’s going to be much more powerful but it’s also going to be a bigger portion of the bill of materials for the device. Higher selling prices and higher margins are the order of the day in that market.

Reassuringly Expensive IoT

One of the drivers for private 5G that I’ve heard recently is the push to have IoT sensors connected over the protocol. The thinking goes that the number of devices being deployed is going to create a significant amount of traffic in a dense area, which is going to require the controls present in 5G to ensure they aren’t creating issues. I would tend to agree, but with a huge caveat.

The IoT sensors that people are talking about here aren’t the ones that you might think of in the consumer space. For whatever reason people tend to assume IoT is a thermostat or a small device that does simple work. That’s not the case here. These IoT devices aren’t things that you’re going to be buying one or two at a time. They are sensors connected to a larger system. Think HVAC relays and probes. Think lighting sensors or other environmental tech. You know what comes along with that kind of hardware? Monitoring. Maintenance. Subscription costs.

The IoT that is going to take advantage of private 5G isn’t something you’re going to be deploying yourself. Instead, it’s going to be something that you partner with another organization to deploy. You might “own” the tech in the sense that you control the data but you aren’t going to be the one going out to Best Buy or Tech Data to order a spare. Instead, you’re going to pay someone to deploy it and to fix it when it goes wrong. So how does that differ from the IoT thermostat that comes to mind? Price. Those sensors are several hundred dollars each. You’re paying for the technology included in them with that monthly fee to monitor and maintain them. They will talk to the base station in the building or somewhere nearby and relay that data back to your dashboard. Perhaps it’s on-site or, more likely, in a cloud instance somewhere. All those fees mean that the devices become more complex and can absorb the cost of more complicated radio technology.

What About Wireless?

Remember when wireless was something cool that you had to show off to people that bought a brand new laptop? Or the thrill of seeing your first iPhone connect to attwifi at Starbucks instead of using that data plan you paid so dearly to get? Wireless isn’t cool any more. Yes, it’s faster. Yes, it is the new edge of our world. But it’s not cool. In the same way that Ethernet isn’t cool. Or web browsers aren’t cool. Or the internal combustion engine isn’t cool. Wi-Fi isn’t cool any more because it is necessary. You couldn’t open an office today without having some form of wireless communications. Even if you tried I’m sure that someone would hop over to the nearest big box store and buy a consumer-grade router to get wireless working before the paint was even dry on the walls.

We shouldn’t think about private 5G replacing Wi-Fi because it never will. There will be use cases where 5G makes much more sense, like in high-density deployments or in areas where the contention in the wireless spectrum is just too great to make effective use of it. However, not deploying Wi-Fi in favor of deploying private 5G is a mistake. Wireless is the perfect “set it and forget it” technology. Provide an SSID for people to connect to and then let them go crazy. Public venues are going to rely on Wi-Fi for the rest of time. These places don’t have the kind of staff necessary to make private 5G economical in the long run.

Instead, think of private 5G deployments more like the way that Wi-Fi used to be. It’s an option for devices that need to be managed and controlled by the organization. They need to be provisioned. They need cycles from the operations team to work properly. They need to be owned by the company and not the employee. Private 5G is more of a play for infrastructure. Wi-Fi is the default medium given the wide adoption it has today. It may not be the coolest way to connect to the network but it’s the one you can be sure is up and running without the need for the IT department to come down and make it work for you.


Tom’s Take

I’ll admit that the idea of private 5G makes me smile some days. I wish I had some kind of base station here at my house to counteract the horrible reception that I get. However, as long as my Internet connection is stable I have enough wireless coverage in the house to make the devices I have work properly. Private 5G isn’t something that is going to displace the installed base of Wi-Fi devices out there. With the amount of management that 5G requires in devices you’re not going to see a cheap or simple method of deploying it appear any time soon. The pie-in-the-sky vision of pervasive low-power deployments in cheap devices is not realistic on the near horizon. Instead, think of private 5G as something that you need to use when your other methods won’t work or when someone you are partnering with to deploy new technology requires it. That way you won’t be caught off guard when the complexity of the technology comes into play.

When Will You Need Wi-Fi 6E at Home?

The pandemic has really done a number on most of our office environments. For some, we went from being in a corporate enterprise with desks and coffee makers to being at home with a slightly different desk and perhaps a slightly better coffee maker. However, one thing that didn’t improve was our home network.

For the most part, the home network has been operating on a scale radically different from the average corporate environment. Setting aside the discrepancies in Internet speed for a moment, you would have a hard time arguing that most home wireless gear is as good as or better than the equivalent enterprise solution. Most of us end up buying our equipment from the local big box store and are likely shopping as much on price as we are on features. As long as it supports our phones, gaming consoles, and the streaming box we picked up we’re happy. We don’t need QoS or rogue detection.

However, we now live in a world where the enterprise is our home. We live at work as much as we work where we live. Extended hours mean we typically work past 5:00 pm or start earlier than 8:00 or 9:00. It means that we’re usually sending emails into the night or picking up a project to crack a hard problem when we can’t sleep. Why is that important? Well, one of the arguments for having separate enterprise and home networks for years was the usage cycle.

To your typical manager type in an organization, work is work and home is home and ne’er the twain shall meet, unless they need you to work late. Need someone to jump on a Zoom call during dinner to solve an issue? Want someone to upload a video before bed? Those are reasonable requests. Mind if my home wireless network also supports the kids watching Netflix or playing Call of Duty? That’s a step too far!

The problem with enterprise networking gear is that it is focused on supporting the enterprise role. And having that gear available to serve a consumer role, even when our home office is also our enterprise office, makes management types break out in hives.

Technology Marches In Place

Okay, so we know that no one wants to shell out money for good gear. I don’t want to pay for it out of my pocket. The company doesn’t want to pay for something that might accidentally be used to do something fun. So where does that leave the people that make enterprise wireless access points?

I’ll admit I’m a horrible reference for my friends when they ask me what kind of stuff to buy. I tend to get way too deep into things like coverage patterns and device types when I start asking what they want their network to look like. The answer they’re usually looking for is easy, cheap, and simple. I get way too involved in figuring out their needs as if they were an enterprise customer. So I know that most people don’t need band steering or MIMO support in the house. But I still ask the questions as if it were a warehouse or campus building.

Which is why I’m really starting to question how the planned rollout of technologies like Wi-Fi 6E is going to happen in the current environment. I’ll buy that Wi-Fi 6, also known as 802.11ax, is going to happen as soon as it’s supported by a mainstream consumer device or three. But elevating to the 6 GHz range is an entirely different solution looking for a problem. Right now, the costs of 6 GHz radios combined with the operating environment are going to slow adoption of Wi-Fi 6E drastically.

Home Is Where the Wi-Fi Connects

How hostile is the wireless environment in your house? Aside from the odd microwave, probably not too bad. Some of the smart utility services may be operating on a separate network for things like smart electric meters or whole-home DVR setups. Odds are much better that you’re probably in a nice clean radio island. You don’t have to worry about neighboring businesses monopolizing the air space. You don’t have to contend with an old scanner that has to operate on 802.11g speeds in an entirely separate network to prevent everything from slowing down drastically.

If your home is running just fine on a combination of 2.4 GHz for older devices or IoT setups and 5 GHz for modern devices like phones and laptops, what is the advantage of upgrading to 6 GHz? Let’s toss out the hardware argument right now. If you’re running on 802.11ac (Wi-Fi 5) Wave 2 hardware, you’re not upgrading any time soon. Your APs are new enough to not need a refresh. If you’re on something older, like Wi-Fi 5 Wave 1 or even 802.11n (Wi-Fi 4), you are going to look at upgrading soon to get some new features or better speeds now that everyone in your house is online and gobbling up bandwidth. Let’s say that you’ve even persuaded the boss to shell out some cash to help with your upgrade. Which AP will you pick?

Will you pick the current technology that has all the features you need in Wi-Fi 6? Or will you pay more for an AP with a feature set that you can’t even use yet? It’s a silly question that will probably answer itself. You pay for what you can use and you don’t try and break the boss’s bank. That means the likelihood of Wi-Fi 6E adoption is going to go down quickly if the new remote office has no need of the technology.

Does it mean that Wi-Fi 6E is dead in the water? Not really. What it does mean is that Wi-Fi 6E needs to find a compelling use case to drive adoption. This is a lesson that needs to be learned from other protocols like IPv6. If you can’t convince people to move to the new thing, they’re going to stay on the old thing as long as they can because it’s cheaper and more familiar. So you need to create a new device that is 6 GHz only. Or make 6 GHz super fast for things like media transfers. Or maybe even require it for certain content types. That’s how you’re going to drive adoption everywhere. And if you don’t you’re likely going to be relegated to the same junk pile as WiMAX and ATM LANE.


Tom’s Take

Wi-Fi 6E is a great solution for a problem that is just around the corner. It has lots of available bandwidth and spectrum and is relatively free from interference. It’s also free from the need to adopt it right away. As we’re trying to drive people toward Wi-Fi 6 (802.11ax) infrastructure, we’re not going to be able to make them jump to both at once without a killer app or corner case requirement. Wi-Fi 6E is always going to be more expensive because of hardware and R&D costs. And given the chance, people will always vote with their wallet provided their basic needs are met.

Fast Friday – Mobility Field Day 5 Edition

I’ve been in the middle of Mobility Field Day 5 this week with a great group of friends and presenters. There’s a lot to unpack. I wanted to share some quick thoughts around wireless technologies and where we’re headed with them.

  • Wireless isn’t magic. We know that because it’s damned hard to build a deployment plan and figure out where to put APs. We’ve built tools that help us immensely. We’ve worked on a variety of great things that enable us to make it happen easier than it’s been before. But remember that the work still has to happen and we still have to understand it. As soon as someone says, “You don’t need to do the work, our tool just makes it happen” my defenses go up. How does the tool understand nuance? Who is double-checking it? What happens when you can’t feed it all the info it needs? Don’t assume that taking a human out of the loop is always a good thing. Accrued knowledge is more important than you realize.
  • Analytics give you a good picture of what you want, but they don’t turn wrenches. All the data in the world won’t replace a keyboard. You need to understand the technology before you know why analytics look the way they do. It’s a lesson that people learn hard. Look back at things like VDI boot storms to understand why analytics can look “bad” and be totally normal.
  • I’m happy to see the enterprise embracing Wi-Fi 6E (6GHz). Sadly, it’s going to be another six months before we see enough hardware to make it viable for users. And don’t even get me started on the consumer side of the house. I expect the next iPad Pro will have a 6E radio. That’s going to be the tipping point. But even after that we’re going to spend years helping people understand what they have and why it works.

Tom’s Take

There are some exciting discussions to be had in the wireless community. I’m always thrilled to be a part of Mobility Field Day and enjoy hearing all the great tech discussed. Stay tuned to the Tech Field Day YouTube channel for all the great content and more discussions!

The Bane of Backwards Compatibility

I’m a huge fan of video games. I love playing them, especially on my old consoles from my formative years. The original Nintendo consoles were my childhood friends as much as anything else. By the time I graduated from high school, everyone had started moving toward the Sony PlayStation. I didn’t end up buying into that ecosystem as I started college. Instead, I just waited for my brother to pick up a new console and give me his old one.

This meant I was always behind the curve on getting to play the latest games. I was fine with that, since the games I wanted to play were on the old console. The new one didn’t have anything that interested me. And by the time the games that I wanted to play did come out it wouldn’t be long until my brother got a new one anyway. But one thing I kept hearing was that the PlayStation was backwards compatible with the old generation of games. I could buy a current console and play most of the older games on it. I wondered how they managed to pull that off since Nintendo never did.

When I was older, I did some research into how they managed to build backwards compatibility into those consoles. I always assumed it was some kind of translation engine or enhanced capabilities. Instead, I found out it was something much less complicated. For the PS2, the same controller chip from the PS1 was reused, which ensured backwards compatibility. For the PS3, they essentially built the guts of a PS2 into the main board. It was about as elegant as you could get. However, later in the life of those consoles, system redesigns made them less compatible. Turns out that it isn’t easy to maintain backwards compatibility when you redesign things to remove the extra hardware you added.

Bringing It Back To The Old School

Cool story, but what does it have to do with enterprise technology? Well, the odds are good that you’re about to fight a backwards compatibility nightmare on two fronts. The first is with WPA3, the newest security protocol from the Wi-Fi Alliance. WPA3 fixes a lot of holes that were present in the ancient WPA2 and includes options to protect public traffic and secure systems from race conditions and key exchange exploits. You’d think it was designed to be more secure and would take a long time to break, right? Well, you’d be wrong. That’s because WPA3 was exploited last year thanks to a vulnerability in the WPA3-Transition mode designed to enhance backwards compatibility.

WPA3-Transition Mode is designed to keep people from needing to upgrade their wireless cards and client software in one fell swoop. It can configure a WPA3 SSID with the ability for WPA2 clients to connect to it without all the new enhanced requirements. Practically, it means you don’t have to run two separate SSIDs for all your devices as you move from older to newer. But practical doesn’t cover the fact that security vulnerabilities exist in the transition mechanism. Enterprising attackers can exploit the weaknesses in the transition setup to crack your security.
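
To make the transition mechanism concrete, here’s a minimal sketch of what a WPA3-Personal transition SSID looks like in hostapd.conf (the SSID and passphrase are made-up placeholders). The whole trick, and the whole problem, is that one BSS advertises both WPA2-PSK and SAE with the same passphrase:

```
# Hypothetical hostapd.conf sketch of a WPA3-Personal transition BSS.
# SSID and passphrase are placeholders, not a recommended config.
ssid=ExampleCorp
wpa=2
# Advertise both WPA2-PSK and WPA3-SAE on the same SSID
wpa_key_mgmt=WPA-PSK SAE
wpa_passphrase=placeholder-passphrase
rsn_pairwise=CCMP
# PMF set to "capable" (1) rather than "required" (2) so legacy WPA2
# clients can still associate -- this is the compatibility trade-off
ieee80211w=1
```

An attacker who can’t beat SAE doesn’t have to; they just need to convince a client to negotiate the WPA2 side of that same SSID.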

It’s not unlike the old vulnerabilities in WPA when it used TKIP. TKIP was found to have vulnerabilities that made it exploitable. People were advised to upgrade to WPA-AES as soon as possible to prevent this. But if you allowed older, non-AES-capable clients to connect to your SSIDs for compatibility reasons you invalidated all that extra security. Because the network had to keep TKIP running to connect the TKIP clients. And because the newer clients were happy to use TKIP instead of AES, you were stuck in a vulnerable mode. The only real solution was to have a WPA-AES SSID for your newer, secure clients and leave a WPA-TKIP SSID active for the clients that had to use it until they could be upgraded.

4Gs for the Price of 5

The second major area where we’re going to see issues with backwards compatibility is 5G networking. We’re hearing about the move to using 5G everywhere. We’ve no doubt heard by now that 5G is going to replace enterprise wireless or change the way we connect to things. Honestly, I’m surprised no one has tried to claim that 5G can make waffles and coffee yet. But 5G is rife with the same backwards compatibility issues present in enterprise wireless too.

5G is an evolution of the 4G standards. Phones issued today are going to have 4G and 5G radios, and the base stations are going to mix the radio types to ensure those phones can connect. Just like any new technology, operators are going to maximize the connectivity of the existing infrastructure and hope that it’s enough to keep things running as they build out the new setup. But by running devices with two radios and letting them fall back to the older network, you’re setting your new protocol up to be inherently insecure thanks to vulnerabilities in the old versions. It’s already projected that governments are going to take advantage of this for a variety of purposes.

We find ourselves in the same boat as we do with WPA3. Because we have to ensure maximum compatibility, we make sacrifices. We keep two different versions running at the same time, which increases complexity. We even mark a lot of necessary security upgrades as optional in order to keep people from refusing to implement them or falling behind because they don’t understand them.¹

The biggest failing for me is that we’re pushing for backwards compatibility and performance over security. We’re not willing to make the hard choices to reduce functionality in order to save our privacy and security. We want things to be backwards compatible so we can buy one device today and have it work on everything. We’ll just make the next one more secure. Or the one after that. Until we realize that we’re still running old 802.11 data rates in our newest protocols because no one bothered to remove them. We have to make hard choices sometimes and sacrifice some compatibility in order to ensure that we’re safe and secure with the newer technology.
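
As a concrete example of the kind of pruning I mean, here’s a hedged hostapd.conf sketch that simply refuses to speak the old 802.11b rates, so 1 and 2 Mbps clients can’t drag the cell down. It’s an illustration of the hard choice, not a universal recommendation:

```
# Hypothetical hostapd.conf sketch: drop the 802.11b legacy rates.
# Rate values are in units of 100 kbps, so 60 = 6 Mbps.
supported_rates=60 90 120 180 240 360 480 540
basic_rates=60 120 240
```

The cost is obvious: any ancient client that only speaks 802.11b simply can’t join. That’s the trade we keep refusing to make.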


Tom’s Take

Backwards compatibility is like the worst kind of nostalgia. I want the old thing but I want it on a new thing that runs faster. I want the glowing warmth of my youth but with the convenience of modern technology. It’s like buying an old sports car. Sure, you get all the look and feel of an old powerful engine. You also lose the safety features of the new body along with the comforts you’ve become accustomed to. You have to make a hard choice. Do you keep the old car original and lose out on what you like to get what you want? Or do you create some kind of hybrid that has exactly what you want and need but isn’t what you started with? It’s a tough choice to make. In the world of technology, there’s no right answer. But we need to remember that every compromise we make for performance can lead to compromises in security.


  1. I’m looking at you, OWE

The Sky is Not Falling For Ekahau

Ekahau Hat (photo courtesy of Sam Clements)

You may have noticed quite a few high profile departures from Ekahau recently. A lot of very visible community members, including Joel Crane (@PotatoFi), Jerry Olla (@JOlla), and Jussi Kiviniemi (@JussiKiviniemi), have all decided to move on. This has generated quite a bit of discussion among the members of the wireless community as to what this really means for the company and the product that is so beloved by so many wireless engineers and architects.

Putting the people aside for a moment, I want to talk about the Ekahau product line specifically. There was an undercurrent of worry in the community about what would happen to Ekahau Site Survey (ESS) and other tools in the absence of the people we’ve seen working on them for so long. I think this tweet from Drew Lentz (@WirelessNerd) best exemplifies that perspective:

So, naturally, I decided to poke back:

That last tweet is where I really want to focus this post.

The More Things Change

Let’s think about where Ekahau is with regards to the wireless site survey market right now. With no exaggeration, they are on top and clearly head and shoulders above the rest. What other product out there has the marketshare and mindshare they enjoy? AirMagnet is the former king of the hill, but the future of that tool is still in flux after its recent moves from Netscout to NetAlly. IBWave is coming up fast but they’re still not quite ready to go head-to-head in the same large enterprise space. I rarely hear TamoGraph Site Survey brought up in conversation. And as for NetSpot, they don’t check enough boxes for real site survey work to be a strong contender in the enterprise.

So, Ekahau really is the 800lb gorilla of the site survey market. This market is theirs to lose. They have a commanding lead. And speaking to the above tweets from Drew, are they really in danger of losing their customer base after just 12 months? Honestly? I don’t think so. Ekahau has a top-notch offering that works just great today. If there was zero development done on the platform for the next two years it would still be one of the best enterprise site survey tools on the market. How long did AirMagnet flounder under Fluke and still retain the title of “the best” back in the early 2010s?

Here Comes A Challenger

So, if the only real competitor up-and-coming against Ekahau right now is IBWave, does that mean this is a market ripe for disruption? I don’t think that’s the case either. When you look at all the offerings out there, no one is really rushing to build a bigger, better survey tool. You tend to see this in markets where someone has a clear advantage. Without a gap to exploit there is no room for growth. NetSpot gives their tool away so you can’t really win on price. IBWave and AirMagnet are fighting near the top so you don’t have a way to break in beside them.

What features could you offer that aren’t present in ESS today? You’d have to spend 18-24 months just to build something comparable to what is in the software today. So you’d dedicate resources to building something that will be the tool people wanted two years ago? Good luck selling that idea to a VC firm. Investors want a return on their money today.

And if you’re a vendor that’s trying to break into the market, why even consider it? Companies focused on building APs and wireless control solutions don’t always play nice with each other. If you’re going to build a tool to help survey your own deployments you’re going to be unconsciously biased against others and give yourself some breaks. You might even bias your survey results in favor of your own products. I’m not saying it would be intentional. But it has been known to happen in the past.

Here’s the other thing to keep in mind: inertia. Remember how we posed this question with the idea that Ekahau wouldn’t improve the product at all? We all know that’s not the case. Sure, there are some pretty big names on the list of departures. But those people aren’t all the people at Ekahau. Development teams continue to work on the product roadmap. There are still plans in place to look at new technologies. Nothing stopped because someone left. Even if the only thing the people on the development side of the house did was finish the plans in place there would still be another 12-18 months of new features on the horizon. That means trying to develop a competitor to ESS means building toward a target that will have moved by the time you finish!

People Matter

That brings me back to the people. It’s a sad fact that everyone leaves a company sooner or later. Bill Gates left Microsoft. Steve Jobs left Apple. You can’t count on people being around forever. You have to plan for their departure and hope that, even if you did catch lightning in a bottle once, you can find a way to do it again.

I’m proud to see some of the people that Ekahau has picked up in the last few months. Folks like Shamree Howard (@Shamree_Howard) and Stew Goumans (@WirelessStew) are going to go a long way to keeping the community engagement alive that Ekahau is so known for. There will be others that are brought in to fill the shoes of those that have left. And don’t forget that for every face we see publicly in the community there is an army of development people behind the scenes working diligently on the tools. They may not be the people that we always associate with the brand but they will try hard to put their own stamp on things. Just remember that we have to be patient and let them grow into their role. They have a lot to live up to, so give them the chance. It may take more than 12 months for them to really understand what they got themselves into.


Tom’s Take

No company goes out of business overnight without massive problems under the hood. Even the biggest corporate failures of the last 40 years took a long time to unfold. I don’t see that happening to Ekahau. Their tools are the best. Their reputation is sterling. And they have a bit of a cushion of goodwill to get the next release right. And there will be a next release. And one after that. Because what Ekahau is doing isn’t so much scaling the mountain they climbed to unseat AirMagnet. It’s proving they can keep going no matter what.

Fast Friday – Mobility Field Day 4

This week’s post is running behind because I’m out in San Jose enjoying great discussions from Mobility Field Day 4. This event is bringing a lot of great discussion to the community to get everyone excited for current and future wireless technologies. Some quick thoughts here with more ideas to come soon.

  • Analytics is becoming a huge driver for deployments. The more data you can gather, the better everything can be. When you start to include IoT as a part of the field you can see why all those analytics matter. You need to invest in a lot of CPU horsepower to make it all work the way you want. Which is also driving lots of people to build in the cloud to have access to what they need on demand on the infrastructure side of things.
  • Spectrum is a huge problem and source of potential for wireless. You have to have access to spectrum to make everything work. 2.4 GHz is pretty crowded and getting worse with IoT. 5 GHz is getting crowded as well, especially with LAA being used. And the opening of the 6 GHz spectrum could be held up by political concerns. Are there new investigations that need to happen to find bands that can be used without causing friction?
  • The driver for technology has to be something other than desire. We have to build solutions and put things out there to make them happen. Because if we don’t we’re going to be stuck with what we have for a long time. No one wants to move and reinvest without clear value. But clear value often doesn’t develop until people have already moved. Something has to break the logjam of hesitance. That’s the reason why we still need bold startups with new technology jumping out to make things work.

Tom’s Take

I know I’ll have more thoughts when I get back from this event, but wireless has become the new edge and that’s a very interesting shift. The more innovation we can drive there means the more capable we can make our clients and empower users.

Extremely Hive Minded

I must admit that I was wrong. After almost six years, I was mistaken about who would end up buying Aerohive. You may recall that back in 2013 I made a prediction that Aerohive would end up being bought by Dell. I recall it frequently because quite a few people still point out that post and wonder if it’s happened yet.

Alas, June 26, 2019 is the date when I was finally proven wrong, when Extreme Networks announced plans to purchase Aerohive for $4.45/share, which equates to around $272 million paid, to be adjusted for some cash on hand. Aerohive is the latest addition to the Extreme portfolio, which now includes pieces of Brocade, Avaya, Enterasys, and Motorola/Zebra.

Why did Extreme buy Aerohive? I know that several people in the industry told me they called this months ago, but that doesn’t explain the reasoning behind spending almost $300 million right before the end of the fiscal year. What was the draw that had Extreme buzzing about this particular company?

Flying Through The Clouds

The most apparent answer is HiveManager. Why? Because it’s really the only thing unique to Aerohive that Extreme really didn’t have already. Aerohive’s APs aren’t custom built. Aerohive’s switching line was rebadged from an ODM in order to meet the requirements to be included in Gartner’s Wired and Wireless Magic Quadrant. So the real draw was the software. The cloud management platform that Aerohive has pushed as their crown jewel for a number of years.

I’ll admit that HiveManager is a very nice piece of management software. It’s easy to use and has a lot of power behind the scenes. It’s also capable of being tuned for very specific vertical requirements, such as education. You can set up self-service portals and Private Pre-Shared Keys (PPSKs) fairly easily for your users. You can also build a lot of policy around the pieces of your network, both hardware and users. That’s a place to start your journey.

Why? Because Extreme is all about Automation! I talked to their team a few weeks ago and the story was all about building automation platforms. Extreme wants to have systems that are highly integrated and capable of doing things to make life easier for administrators. That means having the control pieces in place. And I’m not sure if what Extreme had already was in the same league as HiveManager. But I doubt Extreme has put as much effort into their software yet as Aerohive had invested in theirs over the past 8 years.

For Extreme to really build out the edge network of the future, they need to have a cloud-based management system that has easy policy creation and can be extended to include not only wireless access points but wired switches and other data center automation. If you look at what is happening with intent-based networking from other networking companies, you know how important policy definition is to the schema of your network going forward. In order to get that policy engine up and running quickly to feed the automation engine, Extreme made the call to buy it.

Part of the Colony

More importantly than the software piece, to me at least, is the people. Sure, you can have a bunch of people hacking away at code for a lot of hours to build something great. You can even choose to buy that something great from someone else and just start modifying it to your needs. Extreme knew that adapting HiveManager to fulfill the needs of their platform wasn’t going to be a walk in the park. So bringing the Aerohive team on board makes the most sense to me.

But it’s also important to realize who had a big hand in making the call. Abby Strong (@WiFi_Princess) is the VP of Product Marketing at Extreme. Before that she held the same role at Aerohive in some fashion for a number of years. She drove Aerohive to where they were before moving over to Extreme to do something similar.

When you’re building a team, how do you do it? Do you run out and find random people that you think are the best for the job and hope they gel quickly? Do you just throw darts at a stack of resumes and hope random chance favors your bold strategy? Or do you look at existing teams that work well together and can pull off amazing feats of technical talent with the right motivation? I’d say the third option is the most successful, wouldn’t you?

It’s not unheard of in the wireless industry for an entire team to move back and forth between companies. There’s a hospitality team that’s moved back and forth between Ruckus, Aerohive, and Ubiquiti. There are other teams, like some working on 802.11u, that bounced around a couple of times before they found a home. Which makes me wonder if Extreme bought Aerohive for HiveManager and ended up with the development team as a bonus? Or if they decided to buy the development team and got the software for “free”?


Tom’s Take

We all knew Aerohive was putting itself on the market. You don’t shed sales staff and middle management unless you’re making yourself a very attractive target for acquisition. I still held out hope that maybe Dell would come through for me and make my six-year-old prediction prescient. Instead, the right company snapped up Aerohive for next to nothing and will start in earnest integrating HiveManager into their stack in the coming months. I don’t know what the future plans for further integration look like, but the wireless world is buzzing right now and that should make life extremely sweet for the Aerohive team.