The Sky is Not Falling For Ekahau

Ekahau Hat (photo courtesy of Sam Clements)

You may have noticed quite a few high profile departures from Ekahau recently. A lot of very visible community members, including Joel Crane (@PotatoFi), Jerry Olla (@JOlla), and Jussi Kiviniemi (@JussiKiviniemi), have all decided to move on. This has generated quite a bit of discussion among the members of the wireless community as to what this really means for the company and the product that is so beloved by so many wireless engineers and architects.

Putting the people aside for a moment, I want to talk about the Ekahau product line specifically. There was an undercurrent of worry in the community about what would happen to Ekahau Site Survey (ESS) and other tools in the absence of the people we’ve seen working on them for so long. I think this tweet from Drew Lentz (@WirelessNerd) best exemplifies that perspective:

So, naturally, I decided to poke back:

That last tweet is where I really want to focus this post.

The More Things Change

Let’s think about where Ekahau is with regards to the wireless site survey market right now. With no exaggeration, they are on top and clearly head and shoulders above the rest. What other product out there has the marketshare and mindshare they enjoy? AirMagnet is the former king of the hill, but the future of that tool is still in flux after its recent handoff from Netscout to the newly formed NetAlly. iBwave is coming up fast but they’re still not quite ready to go head-to-head in the same large enterprise space. I rarely hear TamoGraph Site Survey brought up in conversation. And as for NetSpot, they don’t check enough boxes for real site surveys to be a strong contender in the enterprise.

So, Ekahau really is the 800lb gorilla of the site survey market. This market is theirs to lose. They have a commanding lead. And speaking to the above tweets from Drew, are they really in danger of losing their customer base after just 12 months? Honestly? I don’t think so. Ekahau has a top-notch offering that works just great today. If there was zero development done on the platform for the next two years it would still be one of the best enterprise site survey tools on the market. How long did AirMagnet flounder under Fluke and still retain the title of “the best” back in the early 2010s?

Here Comes A Challenger

So, if the only real up-and-coming competitor to Ekahau right now is iBwave, does that mean this is a market ripe for disruption? I don’t think that’s the case either. When you look at all the offerings out there, no one is really rushing to build a bigger, better survey tool. You tend to see this in markets where someone has a clear advantage. Without a gap to exploit there is no room for growth. NetSpot gives their tool away, so you can’t really win on price. iBwave and AirMagnet are fighting near the top, so you don’t have a way to break in beside them.

What features could you offer that aren’t present in ESS today? You’d have to spend 18-24 months to even build something comparable to what is present in the software today. So, you dedicate resources to build something that is going to be the tool that people wanted to use two years ago? Good luck selling that idea to a VC firm. Investors want return on their money today.

And if you’re a vendor that’s trying to break into the market, why even consider it? Companies focused on building APs and wireless control solutions don’t always play nice with each other. If you’re going to build a tool to help survey your own deployments you’re going to be unconsciously biased against others and give yourself some breaks. You might even bias your survey results in favor of your own products. I’m not saying it would be intentional. But it has been known to happen in the past.

Here’s the other thing to keep in mind: inertia. Remember how we posed this question with the idea that Ekahau wouldn’t improve the product at all? We all know that’s not the case. Sure, there are some pretty big names on that list that aren’t there any more. But those people aren’t all the people at Ekahau. Development teams continue to work on the product roadmap. There are still plans in place to look at new technologies. Nothing stopped because someone left. Even if the only thing the people on the development side of the house did was finish the plans already in place, there would still be another 12-18 months of new features on the horizon. That means anyone trying to build a competitor to ESS is developing a replacement for something that will already be outdated by the time they finish!

People Matter

That brings me back to the people. It’s a sad fact that everyone leaves a company sooner or later. Bill Gates left Microsoft. Steve Jobs left Apple. You can’t count on people being around forever. You have to plan for their departure and, even if you did catch lightning in a bottle once, find a way to do it again.

I’m proud to see some of the people that Ekahau has picked up in the last few months. Folks like Shamree Howard (@Shamree_Howard) and Stew Goumans (@WirelessStew) are going to go a long way to keeping the community engagement alive that Ekahau is so known for. There will be others that are brought in to fill the shoes of those that have left. And don’t forget that for every face we see publicly in the community there is an army of development people behind the scenes working diligently on the tools. They may not be the people that we always associate with the brand but they will try hard to put their own stamp on things. Just remember that we have to be patient and let them grow into their role. They have a lot to live up to, so give them the chance. It may take more than 12 months for them to really understand what they got themselves into.


Tom’s Take

No company goes out of business overnight without massive problems under the hood. Even the biggest corporate failures of the last 40 years took a long time to unfold. I don’t see that happening to Ekahau. Their tools are the best. Their reputation is sterling. And they have a bit of a cushion of goodwill to get the next release right. And there will be a next release. And one after that. Because what Ekahau is doing isn’t so much scaling the mountain they climbed to unseat AirMagnet. It’s proving they can keep going no matter what.

Fast Friday – Mobility Field Day 4

This week’s post is running behind because I’m out in San Jose enjoying great discussions from Mobility Field Day 4. This event is bringing a lot of great discussion to the community to get everyone excited for current and future wireless technologies. Some quick thoughts here with more ideas to come soon.

  • Analytics is becoming a huge driver for deployments. The more data you can gather, the better everything can be. When you start to include IoT as a part of the field you can see why all those analytics matter. You need to invest in a lot of CPU horsepower to make it all work the way you want. Which is also driving lots of people to build in the cloud, where the infrastructure they need is available on-demand.
  • Spectrum is a huge problem and source of potential for wireless. You have to have access to spectrum to make everything work. 2.4 GHz is pretty crowded and getting worse with IoT. 5 GHz is getting crowded as well, especially with LAA being used. And the opening of the 6 GHz spectrum could be held up by political concerns. Are there new investigations that need to happen to find bands that can be used without causing friction?
  • The driver for technology has to be something other than desire. We have to build solutions and put things out there to make them happen. Because if we don’t, we’re going to be stuck with what we have for a long time. No one wants to move and reinvest without clear value. But clear value often doesn’t develop until people have already moved. Something has to break the logjam of hesitance. That’s the reason why we still need bold startups with new technology jumping out to make things work.

Tom’s Take

I know I’ll have more thoughts when I get back from this event, but wireless has become the new edge and that’s a very interesting shift. The more innovation we can drive there, the more capable we can make our clients and the more we can empower users.

Extremely Hive Minded

I must admit that I was wrong. After almost six years, I was mistaken about who would end up buying Aerohive. You may recall back in 2013 I made a prediction that Aerohive would end up being bought by Dell. I’m reminded of it frequently because quite a few people still point to that post and ask whether it’s come true yet.

Alas, June 26, 2019 is the date when I was finally proven wrong, when Extreme Networks announced plans to purchase Aerohive for $4.45/share, which equates to around $272 million paid, to be adjusted for some cash on hand. Aerohive is the latest addition to the Extreme portfolio, which now includes pieces of Brocade, Avaya, Enterasys, and Motorola/Zebra.

Why did Extreme buy Aerohive? I know that several people in the industry told me they called this months ago, but that doesn’t explain the reasoning behind spending almost $300 million right before the end of the fiscal year. What was the draw that had Extreme buzzing about this particular company?

Flying Through The Clouds

The most apparent answer is HiveManager. Why? Because it’s really the only thing unique to Aerohive that Extreme really didn’t have already. Aerohive’s APs aren’t custom built. Aerohive’s switching line was rebadged from an ODM in order to meet the requirements to be included in Gartner’s Wired and Wireless Magic Quadrant. So the real draw was the software. The cloud management platform that Aerohive has pushed as their crown jewel for a number of years.

I’ll admit that HiveManager is a very nice piece of management software. It’s easy to use and has a lot of power behind the scenes. It’s also capable of being tuned for very specific vertical requirements, such as education. You can set up self-service portals and Private Pre-Shared Keys (PPSKs) fairly easily for your users. You can also build a lot of policy around the pieces of your network, both hardware and users. That’s a great place to start an automation journey.

Why? Because Extreme is all about Automation! I talked to their team a few weeks ago and the story was all about building automation platforms. Extreme wants to have systems that are highly integrated and capable of doing things to make life easier for administrators. That means having the control pieces in place. And I’m not sure if what Extreme had already was in the same league as HiveManager. But I doubt Extreme has put as much effort into their software yet as Aerohive had invested in theirs over the past 8 years.

For Extreme to really build out the edge network of the future, they need to have a cloud-based management system that has easy policy creation and can be extended to include not only wireless access points but wired switches and other data center automation. If you look at what is happening with intent-based networking from other networking companies, you know how important policy definition is to the schema of your network going forward. In order to get that policy engine up and running quickly to feed the automation engine, Extreme made the call to buy it.

Part of the Colony

More importantly than the software piece, to me at least, is the people. Sure, you can have a bunch of people hacking away at code for a lot of hours to build something great. You can even choose to buy that something great from someone else and just start modifying it to your needs. Extreme knew that adapting HiveManager to fulfill the needs of their platform wasn’t going to be a walk in the park. So bringing the Aerohive team on board makes the most sense to me.

But it’s also important to realize who had a big hand in making the call. Abby Strong (@WiFi_Princess) is the VP of Product Marketing at Extreme. Before that she held the same role at Aerohive in some fashion for a number of years. She drove Aerohive to where they were before moving over to Extreme to do something similar.

When you’re building a team, how do you do it? Do you run out and find random people that you think are the best for the job and hope they gel quickly? Do you just throw darts at a stack of resumes and hope random chance favors your bold strategy? Or do you look at existing teams that work well together and can pull off amazing feats of technical talent with the right motivation? I’d say the third option is the most successful, wouldn’t you?

It’s not unheard of in the wireless industry for an entire team to move back and forth between companies. There’s a hospitality team that’s moved back and forth between Ruckus, Aerohive, and Ubiquiti. There are other teams, like some working on 802.11u, that bounced around a couple of times before they found a home. Which makes me wonder if Extreme bought Aerohive for HiveManager and ended up with the development team as a bonus? Or if they decided to buy the development team and got the software for “free”?


Tom’s Take

We all knew Aerohive was putting itself on the market. You don’t shed sales staff and middle management unless you’re making yourself a very attractive target for acquisition. I still held out hope that maybe Dell would come through for me and make my five-year-old prediction prescient. Instead, the right company snapped up Aerohive for next to nothing and will start in earnest integrating HiveManager into their stack in the coming months. I don’t know what the future plans for further integration look like, but the wireless world is buzzing right now and that should make life extremely sweet for the Aerohive team.

Will Spectrum Hunger Kill Weather Forecasting?

If you are a fan of the work we do each week with our Gestalt IT Rundown on Facebook, you probably saw a story in this week’s episode about the race for 5G spectrum causing some potential problems with weather forecasting. I didn’t have the time to dig into the details behind the story on that episode, so I wanted to take a few minutes and explain why it’s such a big deal.

First, you have to know that 5G speeds (like all wireless speeds) are entirely dependent upon the amount of spectrum available for communication. The more spectrum available, the more channels there are to communicate on. Which increases the speed at which devices can exchange information and reduces the amount of interference between them. Sounds simple, right?
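A rough way to quantify that relationship is Shannon’s channel capacity formula (the numbers below are illustrative, not tied to any carrier’s actual deployment):

$$ C = B \log_2(1 + \mathrm{SNR}) $$

A 100 MHz channel at a healthy 20 dB SNR (a factor of 100) tops out around 100 × 10⁶ × log₂(101) ≈ 665 Mbps. Double the bandwidth to 200 MHz and the ceiling doubles to about 1.33 Gbps, while doubling the SNR instead buys only about 15% more. Capacity scales linearly with spectrum but only logarithmically with signal quality, which is exactly why carriers are so hungry for new bands.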

Except mobile devices aren’t the only things that are using the spectrum. We have all kinds of other devices out there that use radio waves to communicate. We’ve known for several years that there are a lot of devices in the 5 GHz spectrum used by 802.11 that interfere with wireless devices. Things like ISM radios for industrial and medical applications or government radar systems. The government has instituted many regulations in those frequency ranges to ensure that critical infrastructure isn’t interfered with.

When Nature Calls

However, sometimes you can’t regulate away interference. According to this Wired article the FCC, back in March, opened up auctions for the 24 GHz frequency band. This was over strenuous objections from NASA, NOAA, and the American Meteorological Society (AMS). Why is 24 GHz so important? Well, as it turns out, there’s a natural phenomenon that exists at that range.

Recall your kitchen microwave. How does it work? Basically, it uses microwave radiation, at roughly 2.45 GHz, to excite the water molecules in the food you’re cooking, and the absorbed energy becomes heat. Water interacts with radio waves at more than just that one frequency, though. Water vapor has a natural absorption line right around 23.8 GHz. Which means that anything that broadcasts at 23.8 GHz will have issues with water, whether it’s vapor in the atmosphere, water in tree leaves, or water in pipes.

So, why are NOAA and the AMS freaking out about auctioning off spectrum in the 23.8 GHz range? Because anything broadcasting in that range is not only going to have issues with water interference but it’s also going to look like water to sensitive equipment. That means that orbiting weather satellites that listen at 23.8 GHz to detect water vapor in the air are going to encounter co-channel interference from 5G radio sources.

You might say to yourself, “So what? It’s just a little buzz, right?” Well, except that little buzz creates interference in the data being fed into forecast prediction models. Those models are the basis for the weather forecasts we have today. And if you haven’t noticed, the reliability of our long range forecasts has been steadily improving for the past 30 years or so. Today’s 7-day forecasts are almost 80% accurate, which is pretty good compared to how bad things were in the 80s, when you could only get that kind of accuracy from a 3-day forecast.

Guess what? NOAA says that if the 24 GHz spectrum gets auctioned off for 5G use, we could see the accuracy of our forecasting regress almost 30%, which would push our models back to where they were in the 80s. Now, for those of you that live in places that are fortunate enough to only get sun and the occasional rain shower that doesn’t sound too bad, right? Just make sure to pack an umbrella. But for those that live in places where there is a significant chance for severe weather, it’s a bit more problematic.

I live in Oklahoma. We’re right in the middle of Tornado Alley. In the spring between April 1 and June 1 my state becomes a fun place full of nasty weather that can destroy homes and cause widespread devastation. It’s never boring for sure. But in the last 30 years we’ve managed to go from being able to give people a few minutes warning about an impending tornado to being able to issue Particularly Dangerous Situation (PDS) Tornado Watches up to 48 hours in advance. While a PDS Tornado Watch doesn’t mean that we’re going to get a tornado in a specific area, it does mean that you need to be on the lookout for something that day. It gives enough warning to make sure you’re not going to get caught flat footed when things get nasty.

Yes Man

The easiest way to avoid this problem is probably the least likely to happen. The FCC needs to restrict the auction of the spectrum range identified by NOAA and NASA until it can be proven that there won’t be any interference or that forecast accuracy isn’t going to be impacted. 5G rollouts are still far enough in the future that leaving one part of the spectrum out of the equation isn’t going to cause huge issues for the next few years. But if we end up having to write rules forcing device manufacturers to change power settings or push updates to fixed-position sensors and aging satellites, we’re going to have a lot more issues down the road than just slightly slower mobile devices.

The reason why this is hard is because an FCC focused on opening things up for corporations doesn’t care about the forecast accuracy of a farmer in Iowa. They care about money. They care about progress. And ultimately they care about looking good over saving lives. There’s no guarantee that reducing forecast accuracy will impact life saving, but the odds are that better forecasts will help people make better decisions. And ultimately, when you boil it down to the actual choices, the appearance is that the FCC is picking money over lives. And that’s a pretty easy choice for most people to make.


Tom’s Take

If I’m a bit passionate about weather tech, it’s because I live in one of the most weather-active places on the planet. The National Severe Storms Laboratory and the National Weather Center are both about 5-6 miles from my house. I see the use of this tech every day. And I know that it saves lives. It’s saved lives for years for the people that need to know when dangerous weather is headed their way.

Cisco’s Catalyst for Change

You’ve probably heard by now of the big launch of Cisco’s new 802.11ax (aka Wi-Fi 6) portfolio of devices. Cisco did a special roundtable with a group of influencers from the community called Just The Tech. Here’s a video from that event covering the APs that were released, the 9120:

Fred always does a great job of explaining the technical bits behind the APs. But one thing that caught my eye here is the name of the AP – Catalyst. Cisco has been using Aironet for their AP line since they purchased Aironet Wireless Communications back in 1999. The name was practically synonymous with wireless for many people in the industry that worked exclusively with Cisco gear.

So, is the name change something we should be concerned about?

A Rose Is a Rose Is An AP

Cisco moving toward a unified naming convention for their edge solutions makes a lot of sense. Ten years ago, wireless was still primarily 802.11g-based, with 802.11n still a few months away from ratification. Connectivity hadn’t quite yet reached the ubiquitous levels of wireless that we see today. The iPhone was only on its third revision.

Cisco Catalyst devices were still the primary method of getting users connected to the network. Even laptop users hunted for Ethernet ports everywhere instead of just connecting to wireless. Ethernet was more reliable and faster than a best-case 54Mbps wireless link fighting contention with all the other devices around. Catalyst stood for reliability.

In the time since, wireless has become the new edge device connectivity. No longer do we hunt for Ethernet ports unless we have a specific need for one. Laptops don’t come with dedicated wired networking options any longer. In 2019, wireless is king. And Aironet is the wireless name that Cisco has built. So why the change?

In short, because edge connectivity isn’t wired versus wireless any longer. Instead, it’s unified. Whether it was because of the idiotic decision by Gartner to require wired switching for their wireless Magic Quadrant (TM) or because people stopped thinking about Ethernet except to power wireless access points, the fact is that the edge no longer has wires. For Cisco, this means that Catalyst switches aren’t the edge any longer. So the name doesn’t have the same power as it once did.

However, the Aironet name has also lost its luster. Why? Because Aironet is a remnant of Cisco’s pre-controller AP past. The line of APs that most people are likely using in their office right now isn’t from the Aironet heritage. Instead, they are based on technology from Airespace, which Cisco bought in 2005 to add controller-based wireless to their portfolio. And, aside from references to Airespace in the code of the Wireless LAN Controllers (WLC), the line never really had a brand like Catalyst or Aironet.

Today, Cisco has started the move away from using Airespace technology in their controllers. As this video from 2018 shows, Cisco has begun to migrate their controller OS to a more modern platform instead of relying on modifying the old Airespace code again and again. This means that development going forward should be more rapid and less beholden to keeping everything running properly on a codebase over a decade old.

Branding New

So, that explains the reasons why Cisco might want to refresh everything. But why the naming of the APs? Why not just rely on Aironet and keep that branding going forward?

Well, because they want to make end users believe that the network is the same no matter if it’s wired or wireless. They want buyers to believe that Catalyst stands for edge connectivity, no matter where that edge might be. And, unless they really screw up and start making us think these new APs are switches, they’ll be able to pull off this branding exercise fairly well.

That’s because users have stopped caring about the wired versus wireless debate. Instead, they only care about speed and reliability. 802.11ax will help on both fronts, and Cisco wants to capitalize on that by making these new APs feel different. And the best way to do that is by rebranding them.

Wireless professionals don’t care about the name. Most of the time they just refer to the model number anyway. And while Cisco’s model numbering strategy seems to be getting a bit crowded in the 9000 range, this makes a lot of sense to distance themselves from their past. The old 802.11ac APs are still very viable and will likely be useful all the way until the end of their life. But when the time comes to pull them out, you’ll be retiring Aironet and Airespace along with them. Even if you didn’t realize those were the branding names of those APs.


Tom’s Take

Branding matters. Or it doesn’t. Either you love the name of the thing you’ve been using or you couldn’t care less. Whether it’s an iPhone or a car or an access point, everything has a name and a number attached to it. Cisco has decided, for better or worse, to unify the edge under the Catalyst name. Maybe it will stick and reduce confusion with customers. Maybe it will be hated enough that they’ll bring back the Aironet name in a couple of cycles to “get back to basics” as it were. But for now, the catalyst for change at Cisco leads to a unified edge solution.

802.11ax Is NOT A Wireless Switch

802.11ax is fast approaching. Though not 100% ratified by the IEEE, the spec is at the point where most manufacturers and vendors are going to support what’s current as the “final” version for now. And while the spec for what marketing people like to call Wi-Fi 6 is not likely to change, the ramp-up to get people to buy it shows no signs of starting slow. One of the biggest problems I see right now is the decision by some major AP manufacturers to call 802.11ax a “wireless switch”.

Complex Duplex

In case you had any doubts, 802.11ax is NOT a switch.1 But the answer to why that is takes some explanation. It all starts with the network. More specifically, with Ethernet.

Ethernet is a broadcast medium. Packets are launched into the network and it is hoped that the packet finds the destination. All nodes on the network listen and, if the packet isn’t destined for them, they discard it. This is the nature of the broadcast. If multiple stations try to talk at once, the packets collide and no one hears anything. That’s why Ethernet developed a collision detection system called CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

Switches solved this problem by segmenting the collision domain down to a single port. Now, the only time stations hear each other’s traffic directly is when the switch can’t find the proper port to forward a packet. In every other case, the switch finds where the packet is meant to be sent and forwards it to that location. It prevents collisions by ensuring that no two stations can transmit at any one time except to the switch in the middle. This also allows communications to be full-duplex, meaning the stations can send and receive at the same time.

Wireless is a different medium. The AP still speaks Ethernet, and there is a bridge between the Ethernet interface and the radios on the other side. But the radio interfaces work differently than Ethernet. Firstly, they are half-duplex only. That means that they have to send traffic or listen to receive traffic but they can’t do both at the same time. Wireless also uses a different version of collision detection called CSMA/CA, where the last A stands for “avoidance”. Because of the half-duplex nature of wireless, clients have a complex process to make sure the frequency is clear before transmitting. They have to check whether or not other wireless clients are talking and if the ambient RF is within the proper thresholds. After all those checks are confirmed, then the client transmits.
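To make that half-duplex dance concrete, here’s a minimal sketch of the CSMA/CA listen-before-talk loop in Python. It’s a simplification of what a real 802.11 client does (real stations also observe interframe spacing, NAV timers, and optionally RTS/CTS), and the function names are mine, not from any actual driver:

```python
import random
import time

def medium_is_busy() -> bool:
    """Stand-in for carrier sense: is another station transmitting,
    or is ambient RF above the clear-channel threshold?"""
    return random.random() < 0.3  # simulate a 30% chance the air is busy

def csma_ca_send(frame: str, max_attempts: int = 7) -> bool:
    """Simplified CSMA/CA: listen first, back off a random number of
    slots when the medium is busy, then transmit. Half-duplex means a
    station cannot listen for collisions while it is sending, which is
    why avoidance (the CA) replaces Ethernet's detection (the CD)."""
    contention_window = 15  # initial CW, in slot times
    for _ in range(max_attempts):
        if not medium_is_busy():
            print(f"Air is clear, transmitting: {frame}")
            return True  # a real client would now wait for an ACK
        # Busy: sleep a random backoff, then widen the window (capped)
        slots = random.randint(0, contention_window)
        time.sleep(slots * 9e-6)  # one OFDM slot time is 9 microseconds
        contention_window = min(2 * contention_window + 1, 1023)
    return False  # the medium never cleared; give up

csma_ca_send("hello, AP")
```

Every station on the channel runs this same dance independently, which is exactly the hub-like contention behavior described below.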

Because of the half-duplex wireless connection and the need for stations to have permission to send, some people have said that wireless is a lot like an Ethernet hub, which is pretty accurate. All stations and APs exist in the same contention (collision) domain. Aside from the contention algorithm, there’s nothing to stop the stations from talking all at once. And for the entire life of 802.11 so far, it’s worked. 802.11ac started to introduce more features designed to let APs send frames to multiple stations at the same time. That’s what’s called Multi-User, Multiple-Input, Multiple-Output (MU-MIMO). In theory, it could allow for full-duplex transmissions by allowing a client to send on one antenna and receive on another, but utilizing client radios in this way has impacts on other things.

Switching It Up

Enter 802.11ax. The Wi-Fi 6 feature that has most people excited is Orthogonal Frequency-Division Multiple Access (OFDMA). Simply put, OFDMA allows the clients and APs to not use the entire transmission channel for sending data. It can be sliced up into sub-channels that can be used for low-bandwidth applications to reserve time to talk to the AP. Combined with enhanced MU-MIMO support in 802.11ax, the idea is that clients can talk directly to the AP and allocate a specific sub-channel resource unit all to themselves.
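To picture what that slicing looks like, here’s a small sketch of how a single 20 MHz 802.11ax channel can be carved into resource units. The RU sizes come from the 11ax tone plan; the allocation logic is a toy of my own, not any vendor’s scheduler:

```python
# Resource unit (RU) layouts for one 20 MHz 802.11ax channel, from the
# 11ax tone plan. (The 52- and 106-tone layouts also leave a spare
# center 26-tone RU, which this toy model ignores.)
RUS_PER_20MHZ = {26: 9, 52: 4, 106: 2, 242: 1}

def allocate_rus(clients: list[str], ru_tones: int) -> dict[str, int]:
    """Toy scheduler: hand each client its own sub-channel (RU) so that
    several low-bandwidth stations share a single transmit opportunity
    instead of each taking the whole channel in turn."""
    available = RUS_PER_20MHZ[ru_tones]
    if len(clients) > available:
        raise ValueError(f"only {available} RUs of {ru_tones} tones fit in 20 MHz")
    return {client: ru_index for ru_index, client in enumerate(clients)}

# Nine IoT sensors each get a 26-tone slice of the same 20 MHz channel.
print(allocate_rus([f"sensor-{i}" for i in range(9)], ru_tones=26))
```

Note what this does and does not buy you: several stations share one transmit opportunity, but the medium underneath is still shared, half-duplex air.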

To the marketing people in the room, this sounds just like a switch. Reserved channels, single station access, right? Except it is still not a switch. The AP is still a bridge between two media types for one thing, but more importantly the transmission medium still hasn’t magically become full-duplex. Stations may get around this with some kind of trickery, but they still need to wait for the all-clear to send data. Remember that all stations and APs still hear all the transmissions. It’s still a broadcast medium at the most basic. No amount of software configuration is going to fix that. And for the networking people in the room that might be saying “so what?”, remember when Cisco tried to sell us on the idea that StackWise was capable of 40Gbps of throughput because it could send in both directions on the StackWise ring at once? Remember when you started screaming “THAT’S NOT HOW BANDWIDTH WORKS!!!” That’s what this is, basically. Smoke and mirrors and ignoring the underlying physical layer constraints.

In fact, if you read the above resources, you’re going to find a lot of caveats at the end about support for protocols coming up and not being in the first version of the spec. That’s exactly what happened with 802.11ac. The promise of “gigabit Wi-Fi” took a couple of years and the MU-MIMO enhancements everyone was trumpeting never fully materialized. Just like all technology, the really good stuff was deferred to the next release.

To make sure that both sides are heard, it is rightly pointed out by wireless professionals like Sam Clements (@Samuel_Clements) that 802.11ax is the most “switch-like” so far, with multiple dynamic collision domains. However, in the immortal words of Tyler Durden, “Sticking feathers up your butt doesn’t make you a chicken.” The switch moniker is still a marketing construct and doesn’t hold any water in reality aside from a comparison to a somewhat similar technology. The operation of wireless APs may be hub-like or switch-like, but these devices are not either of those types of devices.

CPU Bound and Determined

The other issue that I see preventing this from becoming a switch is the CPU on the AP becoming a point of contention. In a traditional Ethernet switch, the forwarding hardware is a specialized ASIC that is optimized to forward packets super fast. It does this with some trickery, including cut-through forwarding and trusting the incoming CRC. When packets bounce up to the CPU to be process-switched, it bogs the entire system down terribly. That’s why most networking texts will tell you to avoid process switching at all costs.

Now apply those lessons to wireless. All this protocol enhancement is now causing the CPU to have to do extra duty to work on time-slicing and sub-channel optimization. And remember that those CPUs are operating on 18-28 watts of power right now. Maybe the newer APs will get over 30 watts with new PoE options, but that means those CPUs are still going to be eating a lot of power to process all this extra software work. Even adding dedicated processing power to the AP isn’t going to fix things in the long run. That might be one of the reasons why Cisco has been pushing enhanced PoE in the run-up to their big 802.11ax launch at the end of April.
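To put numbers on that power squeeze, here’s a minimal sketch of the PoE budget math. The per-standard wattages are from the 802.3 specs; the 28 W AP draw is a hypothetical at the top of the range mentioned above:

```python
# Minimum power available at the powered device (after cable loss) for
# each 802.3 PoE standard. The switch-side budgets are 15.4/30/60/90 W.
PD_BUDGETS_W = {
    "802.3af (PoE)": 12.95,
    "802.3at (PoE+)": 25.5,
    "802.3bt Type 3": 51.0,
    "802.3bt Type 4": 71.3,
}

def standards_that_power(ap_draw_watts: float) -> list[str]:
    """Which PoE flavors can run an AP with this worst-case draw?"""
    return [std for std, budget in PD_BUDGETS_W.items() if budget >= ap_draw_watts]

# A hypothetical 28 W 802.11ax AP exceeds even 802.3at's usable budget:
print(standards_that_power(28.0))  # -> ['802.3bt Type 3', '802.3bt Type 4']
```

Every watt the CPU burns on sub-channel scheduling is a watt that has to fit inside one of those fixed budgets, which is why the PoE story matters to the AP designers.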


Tom’s Take

Let me say it again for the cheap seats: 802.11ax is NOT a wireless switch! The physical layer technology that 802.11 is built on won’t be switchable any time soon. 802.11ax has given us a lot of enhancements in the protocol and there is a lot to be excited about, like OFDMA, BSS coloring, and TWT. But, like the decision to over-simplify the marketing name, the idea of calling it a wireless switch just to give people a frame of reference so they buy more of them is just silly. It’s disingenuous and sounds more like a snake oil salesman than honest technology marketing. Rather than trying to trick the users with cute sounding terms, how about we keep the discussion honest and discuss the pros and cons of the technology?

Special thanks to my friends in the wireless space for proofreading this post and correcting my errors in technology.


  1. The title was kind of a spoiler ↩︎

Fast Friday – Aruba Atmosphere 2019

A couple of quick thoughts that I’m having ahead of Aruba Atmosphere next week in Las Vegas, NV. Tech Field Day has a lot going on and you don’t want to miss a minute of the action for sure, especially on Wednesday at 3:15pm PST. In the meantime:

  • IoT is really starting to move down-market. Rather than being focused on enabling large machines with front-end devices to act as gateways, we’re starting to see more and more IoT devices either come with integrated connectivity or interface with systems that do. Building control systems aren’t just for large corporations any more. You can automate an office on the cheap today. Just remember that any device that can talk can also listen. Security posture is going to be huge.
  • I remember some of the discussions that we had during the heady early days of SDN and how unimpressed wireless and mobility people were when they figured out how the controllers and dumb edge devices really worked. Most wireless pros have been there and done that already. However, recently there has been a lot of movement in the OpenConfig community around wireless devices. And that really has the wireless folks excited. Because the promise of SDN for them has never been about control, but instead about compatibility. The real key isn’t building another controller but instead making all the APs and controllers work better together.
  • Another great thing I’m looking forward to seeing at Atmosphere is Aruba HER. It’s an event focused on building stronger communities and increasing diversity for all. You can read a bit more about what will be going on there in this post from Claire Chaplais. Make sure to check out Zoë Rose keynoting the event as well! She’s got a very powerful story to tell. She gave us all a bit of it at Security Field Day last December in this Ignite Talk.

Tom’s Take

Make sure you stay tuned for all the things we’re going to be discussing during the event. We’re going to be using the event hashtag #ATM19 but also using #MFDx as a way to let you know about the great stuff we will have going on!

Atmosic and the Power of RF?

I recently talked to a company doing some very interesting things in the mobility space and I thought I’d take a stab at writing about them. Most of my mobility posts are about access points or controller software or me just complaining in general about the state of Wi-Fi 6. But this idea had me a little intrigued. And confused.

Bluetooth Moon Rising

Atmosic is a company that is focusing on low-power chips, especially for IoT applications. Most of their team came from Atheros, which you may recall powers a ton of the reference designs used in wireless APs by the many, many AP manufacturers that don’t make their own chips. Their team has the chops to make good wireless stuff, one would think.

Atmosic wants to make IoT devices that use Bluetooth Low Energy (BLE). So far, this is sounding pretty good to me. I’ve seen a lot of crazy awesome ideas for BLE, like location tracking indoors or on-demand digital signage. Sure, there are some tracking issues that go along with that but it’s mostly okay. BLE is what the industry has decided to standardize on for a ton of IoT functionality.

How does Atmosic want to change things in the BLE space? Well, those Atheros chipset guys started out by building a chip that uses 5-10 times less power than existing designs. That’s a staggering number when you think about it. BLE beacons already don’t use a ton of power. They’re designed to be used in concert with APs or as standalone, battery-powered devices. The BLE beacons I’ve seen from Aruba are about the size of an AirPods case. And that battery can last for a couple of years.

If Atmosic really did build a chip that can power those beacons with even 5x less power usage, you’re looking at a huge increase in the lifespan of a beacon! Imagine being able to deploy these things everywhere and have them run for a decade. You could literally cover a stadium or a hotel with them for next to nothing. Even if you included the chip in a new AP, which Atmosic is partnering to do, you could effectively run the BLE side of things for free from a power budget perspective.
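Some rough back-of-the-envelope math shows why a 5x cut matters so much. Assuming a 220 mAh coin cell and an average draw of 12.5 µA (both illustrative numbers of mine, not Atmosic’s):

$$ t = \frac{220\ \text{mAh}}{12.5\ \mu\text{A}} = 17{,}600\ \text{hours} \approx 2\ \text{years} $$

Cut the average draw by 5x, to 2.5 µA, and the same cell lasts about 88,000 hours, or roughly 10 years. Lifespan scales inversely with draw, so every multiple of power savings falls straight through to beacon longevity.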

This is something that is pretty big news. So why did I suddenly see things start sliding off the rails?

Unlimited Power!

The next part of the Atmosic pitch came when they told me about the other parts of their trinity of power savings. On their technology page, they tout the above mentioned chipset along with a special on-demand wake feature that allows the chips to be put into a deep sleep mode, awakening only when they receive a special packet designed to rouse the chip like a custom alarm clock.

That third thing, though. Power harvesting. Now we’re starting to get into the real weeds of Wi-Fi stuff. Essentially, Atmosic is saying they can power their low-power BLE beacon indefinitely by harvesting power from RF in the air. Yeah, that’s right. They’re literally pulling minuscule amounts of power from remote radio sources. Evidently, their power system is more reliable because they harvest from known sources like 900MHz transmitters as opposed to just trying to pull power from whatever happens to be around.

At this point, you’re probably saying one of two things:

  1. This is crap and it will never work.
  2. This is the most amazing thing ever!

Right now I tend to fall on the side of the first one. Why? Because if they really did invent a way to pull usable power from thin air, someone really should cut them a check, because they need to be building bigger, badder everythings! Imagine being able to power whatever you wanted without clunky batteries or power cords. It would be a revolution!

Sadly, the reality is that the Atmosic story pretty much requires all three parts of the trinity working together to be so revolutionary. I talked to a couple of my friends in the wireless industry about this and Jonathan Davis (@SubNetwork) was about as skeptical as I was. Since he’s a real math wizard, he figured out that the amount of power being pulled in by the Atmosic chips through the air has to be vanishingly small, on the order of nanowatts. And that’s not enough to run an active BLE beacon.
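The Friis transmission equation makes it easy to sanity-check that skepticism (the transmit power, distance, and unity antenna gains below are my illustrative assumptions):

$$ P_r = P_t\, G_t\, G_r \left(\frac{\lambda}{4\pi d}\right)^2 $$

A 1 W (30 dBm) transmitter at 900 MHz (λ ≈ 0.33 m) delivers about 7 µW to an ideal receiver 10 meters away, and only about 70 nW at 100 meters, before any conversion losses in the harvester. An active BLE transmit burst, by contrast, draws on the order of 10 mA at 3 V, roughly 30 mW. The harvested trickle is thousands of times too small to run the radio directly; at best it can slowly charge a capacitor for rare, tiny bursts of activity.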

Building a Lower Powered Mousetrap

That’s where the whole system comes into play. It takes a very low power chip with a custom wake sensor (read: passive beacon) in order to be able to run on the kinds of power that you can draw from RF waves. And this is where the utility of the whole thing starts breaking down for me. Sure, you could do something crazy like put this on a piece of paper and “hide” it in the service tag of a piece of equipment like a laptop. Then you have a BLE beacon that can track that device even if it’s powered down. But you still need a way to excite the BLE chip and make it wake up. And, at this point, if you’re doing passive Bluetooth, is the solution really any better than a passive RFID tag that has the same lifespan and costs a lot less?

The other issue that I have with this solution is the proposed longevity. Forever is the tag on the Atmosic site. For. Ev. Er. Sounds like a great idea in theory, right? Deploy a device in your network that can run forever on free energy and you never have to replace the batteries. Okay, that’s great. How old is your iPhone? Your laptop? Better yet, how old is the oldest piece of enterprise tech that you have on your person right now? I’d wager it can’t be more than 6 years old at this point.

Enterprises get chided for having old technology all the time. Maybe that laptop is 6 years old and still running. Perhaps those servers should have been decommissioned a refresh cycle ago. Compared to the mayfly lifespan of an iPhone, your average piece of enterprise tech is pretty long in the tooth. But not all enterprise tech is that outdated. Take a look at wireless access points, for example. Even the oldest 802.11ac access point ever made is still barely five years old, the standard having been ratified in December 2013. Most enterprises have already refreshed their 11ac Wave 1 APs. If they haven’t, they’re just holding off long enough for 802.11ax to maybe get certified this year so they can push out hot new hardware.

So, with 5-6 years as the standard for “old” technology in the enterprise, what on earth are we going to do with beacons that are a decade old? With the low-power chipset, you’re already looking at a 5-7 year lifespan on current battery technology if it really does deliver 5x power savings. Even current BLE beacons are designed with a short lifespan for a reason. Technology changes very fast. If you try and keep that device stuck to a wall or a laptop for too long, it’s going to be out of sync with the rest of the tech around it.

Imagine trying to hook up a Bluetooth 2.x device to a current iPhone. It will work because the standards are there but it’s going to be painful because newer devices offer so much more functionality. Trying to keep devices around forever for the sake of doing it isn’t practical. And if you’re going to try and counter the argument by saying IoT devices can be around for quite a while you’re not going to win there either. Most IoT devices that are embedded for long term use wouldn’t use wireless or Bluetooth in the first place. They would be hardwired to cut down on potential points of failure. Sure, you might include something like this in the system, but it’s going to be powered enough already to not need to harvest power through the air.


Tom’s Take

I think the Atmosic people have the right idea for a baseline but their stretch goal is a bit lofty and sci-fi for my tastes. Sure, the idea of being able to harvest unlimited power from RF to run devices without batteries for years is great in theory. But the technology demands of both enterprise tech and consumer/enterprise IoT devices are going to drive people to use the lowest common denominator of simplicity. I think that Atmosic has a lot of upside with these new super efficient chips. But I doubt we’re going to see anyone sucking power out of thin air any time soon.

Wi-Fi 6 Is A Stupid Branding Idea

You’ve probably seen recently that the Wi-Fi Alliance has decided to rebrand the forthcoming 802.11ax standard as “Wi-Fi CERTIFIED 6”, henceforth referred to as “Wi-Fi 6”. This branding decision happened late in 2018 and seems to be picking up steam in 2019 as 802.11ax comes closer to ratification later this year. With manufacturers shipping 11ax access points already and the consumer market poised to explode with the adoption of a new standard, I think it’s time to point out to the Wi-Fi Alliance that this is a dumb branding idea.

My Generation

On the surface, the branding decision looks like it makes sense. The Wi-Fi Alliance wants to make sure that consumers aren’t confused about which wireless standard they are using. 802.11n, 802.11ac, and 802.11ax are all usable and valid infrastructure that could be in use at any one time: 11n lives mostly in 2.4GHz, 11ac is 5GHz-only, and 11ax encompasses both bands. According to the Alliance, there will be a number displayed on the badge of the connection to denote which generation of wireless the client is using.

Except, it won’t be that simple. Users don’t care about speeds. They care about having the biggest possible number. They want that number to be a 6, not a 5 or a 4. Don’t believe me? AT&T released an update earlier this month that replaced the 4G logo with a “5G E” logo even though real 5G service wasn’t around. Just so users could say they had “5G” and tell their friends.

Using numbers to denote generations isn’t a new thing in software either. We use version numbers all the time. But using those version numbers as branding usually leads to backlash in the community. Take Fibre Channel, for example. Brocade first announced they would refer to 16Gb Fibre Channel as “Gen 5”, owing to the fifth generation of the protocol. Gen 6 was 32Gb and so on. But, as the chart on this fibre channel page shows, they worked themselves into a corner. Gen 7 is both 64Gb and 256Gb depending on what you’re deploying. Even Gen 6 was both 32Gb and 128Gb. It’s confusing because the name could mean many things depending on what you wanted it to mean. Branding doesn’t convey clear information.

Subversion of Versions

The Wi-Fi Alliance decision also doesn’t leave room for expansion or differentiation. For example, as I mentioned in a previous post on Gestalt IT, 802.11ax doesn’t make the new OWE spec mandatory. It’s up to the vendors to implement this spec as well as other things that have been made optional, as upstream MU-MIMO is rumored to become as well. Does that mean that if I include both of those protocols as options my protocol is Wi-Fi 6.1? Or could I even call it Wi-Fi 7 since it’s really good?

Windows has had this problem going all the way back to Windows 3.0. Moving to Windows 3.1 was a huge upgrade, but the point release didn’t make it seem that way. After that, Microsoft started using branding names by year instead of version numbers. But that still caused issues. Was Windows 98 that much better than 95? Were they both better than Windows NT 4? How about 2000? Must be better, right? Better than Windows 99?1

Windows even dropped the version numbers for a while with Windows XP (version 5.1) and Windows Vista (version 6.0) before coming back to versioning again with Windows 7 (version 6.1) and Windows 8 (version 6.2) before just saying to hell with it and making Windows 10 (version 10.0). Which, according to rumor, was decided on because developers may have assumed all legacy consumer Windows versions started with ‘9’.

See the trouble that versioning causes when it’s not consistent? What happens if the next minor revision to the 802.11ax specification doesn’t justify moving to Wi-Fi 7? Do you remember how confusing it was for consumers when we would start talking about the difference between 11ac Wave 1 and Wave 2? Did anyone really care? They just wanted the fastest stuff around. They didn’t care what wave or what version it was. They just bought what the sticker said and what the guy at Best Buy told them to get.

Enterprise Nightmares

Now, imagine the trouble that the Wi-Fi Alliance has potentially caused for enterprise support techs with this branding decision. What will we say when the users call in and say their wireless is messed up because they’re running Windows 10 but their Wi-Fi is still on 6? Or because their cube neighbor has a 6 on their Wi-Fi but their Mac doesn’t?2

Think about how problematic it’s going to be when you try to figure out why someone is connecting to Wi-Fi 5 (802.11ac) instead of Wi-Fi 6 (802.11ax). Think about the fights you’re going to have about why we need to upgrade when it’s just one number higher. You can argue power savings or better cell sizes or more security all day long. But the jump from 5 to 6 really isn’t that big, right? Can’t we just wait for 7 and make a really big upgrade?


Tom’s Take

I think the Wi-Fi Alliance tried to do the right thing with this branding. But they did it in the worst way possible. There’s going to be tons of identity issues with 11ax and Wi-Fi 6 and all the things that are going to be made optional in order to get the standard ratified by the end of the year. We’re going to get locked into a struggle to define what Wi-Fi 6 really entails while trying not to highlight all the things that could potentially be left out. In the end, it would have been better to just call it 11ax and let users do their homework.


  1. You’d be shocked the number of times I heard it called that on support calls ↩︎
  2. You better believe Apple isn’t going to mar the Airport icon in the system bar with any stupid numbers ↩︎

iPhone 11 Plus Wi-Fi 6 Equals Undefined?

I read a curious story this weekend based on a supposed leak about the next iPhone, currently dubbed the iPhone 111. There’s a report that the next iPhone will have support for the forthcoming 802.11ax standard. The article refers to 802.11ax as Wi-Fi 6, which is a catchy branding exercise that absolutely no one in the tech community is going to adhere to.

In case you aren’t familiar with 802.11ax, it’s essentially an upgrade of the existing wireless protocols to support better client performance and management across both 2.4GHz and 5GHz. Unlike 802.11ac (rebranded as Wi-Fi 5), which is 5GHz-only, 802.11ax works in both bands, much like 802.11n (retroactively dubbed Wi-Fi 4) did before it. There are a lot of great things on the drawing board for 11ax coming soon.

Why did I say soon? Because, as of this writing, 11ax isn’t a ratified standard. According to this FAQ from Aerohive, the standard isn’t set to be voted on for final ratification until Q3 of 2019. And if anyone wants to see the standard pushed along faster, it would be Aerohive. They were one of, if not the, first companies to bring an 802.11ax access point to the market. So they want to see the spec finalized for sure.

Making pre-standard access points isn’t anything new to the market. Manufacturers have been trying to get ahead of the trends for a while now. I can distinctly remember being involved in IT when 802.11n was still in the pre-standard days. One of our employees brought in a Belkin Pre-N AP and client card and wanted us to get it working because, in his words, “It will cover my whole house with Wi-Fi!”

Sadly, we ended up having to ditch this device once the 802.11n standard was finalized. Why? Because Belkin had rushed it to the market and tried to capitalize on the fervor of people wanting fast connection speeds. The AP only worked with the PCMCIA client card sold by Belkin. Once you started to see ratified 802.11n devices they were incompatible with the Belkin AP and fell back to 802.11g speeds.

Belkin wasn’t the only manufacturer that was trying to get ahead of the curve. Cisco also pushed out the Aironet 1250, which had detachable radio modules that could be removed and replaced. Why? Because they were shipping draft 802.11n hardware. They claimed that anyone purchasing the draft spec hardware could send in the modules and get an upgrade to ratified hardware as soon as it was finalized. Except, as a rushed product, the 1250 also consumed lots of power, ran hot, and generally had very low performance compared to the APs that came out after the ratification process was completed.

We’re seeing the same rush again with 802.11ax. Everyone wants to have something new when the next refresh cycle comes up. Instead of pushing people toward the stable performance of 802.11ac Wave 2 with proper design they are going out on a limb. Manufacturers are betting on the fact that their designs are going to be software-upgradable in the end. Which assumes there won’t be any major changes during the ratification process.

Cupertino Doesn’t Guess

One of the major criticisms of 802.11ax is that there isn’t any widespread client adoption out there to push the need for 802.11ax APs. The client vs. infrastructure argument is always a tough one. Do you make the client adapter and hope that someone will eventually come out with hardware to support it? Or do you choose to instead wait for the infrastructure to jump up in speed and then buy a client adapter to support it?

I’m usually one revision behind in most cases. My home hardware is running 802.11ac Wave 2 currently, but my devices were 11ac capable long before I installed any Meraki or Ubiquiti equipment. So my infrastructure was playing catchup with my clients. But not everyone runs the same gear that I do.

One of the areas where this is more apparent is not in the Wi-Fi realm but instead in the carrier space. We’re starting to hear that carriers like AT&T are deploying 5G in many cities even though there aren’t many 5G capable handsets. And, even when the first 5G handsets start hitting the market, the smart money says to avoid the first generation. Because the first generation is almost always hot, power hungry, and low performing. Sound familiar?

You want to know who doesn’t bet on non-standard technology? Apple. Time and again, Apple has chosen to take a very conservative approach to introducing new chipsets into their devices. And while their Wi-Fi chipsets often see upgrades long before their cellular modems do, you can guarantee that they aren’t going to make a bet on non-standard technology that could potentially hamper adoption of their flagship mobile device.

A Logical Approach

Let’s look at it logically for a moment. Let’s assume that the standards bodies stop resting on their laurels and kick into high gear to get 802.11ax ratified at the end of Q2. That’s just after Apple’s WWDC. Do you think Apple is going to wait until after WWDC to decide which chipsets are going to be in the new iPhone? You bet your sweet bandwidth they aren’t!

The chipset decisions for the iPhone 11 are being made right now in Q1. They want to know they can get sufficient quantities of SoCs and modems by the time manufacturing has to ramp up to have them ready for stores in October. That means you can’t guess whether or not a standard is going to be approved in time for launch. Q3 2019 is during the iPhone announcement season. Apple is the most conservative manufacturer out there. They aren’t going to stake their connectivity on an unproven standard.

So, let’s just state it emphatically for the search engines: The iPhone 11 will not have 802.11ax, or Wi-Fi 6, support. And anyone trying to tell you differently is trying to sell you a load of marketing.

The Future of Connectivity

So, what about the iPhone XII or whatever we call it? That’s a more interesting discussion. And it hinges on something I heard in a recent episode of a new wireless podcast. The Contention Window was started by my friends Tauni Odia and Scott Lester. In Episode 1, they have their big 2019 predictions. Tauni predicted that 802.11ax won’t be ratified in 2019. I agree with her assessment. Despite the optimism of the working group these things tend to take longer than expected. Which means Q4 2019 or perhaps even Q1 2020.

If 802.11ax ratification slips into 2020 you’ll see Apple taking the same conservative approach to adoption. This is especially true if the majority of deployed infrastructure APs are still pre-standard. Apple would rather take an extra year to get things right and know they won’t have any bugs than to rush something to the market in the hopes of selling a few corner-case techies on something that doesn’t have much of an impact on speeds in the long run.

However, if the standards bodies prove us all wrong and push 11ax ratification through, we should see it in the iPhone X+2. A mature technology with proper support should be seen as a winner. But you should see the move telegraphed far in advance, with adoption of 11ax radios in the MacBook Pro first. Once the bigger flagship computing devices get support it will trickle down. This is as much an engineering concern as an economic one. The MacBook has more room in the case for a first-gen 11ax chip. Looser thermal tolerances and space considerations mean more room to make mistakes.

In short: Don’t expect an 11ax (or Wi-Fi 6) chip before 2020. And if you’re betting the farm on the iPhone, you may be waiting a long time.


Tom’s Take

I like the predictions of professionals with knowledge over leaks with dubious marketing value. The Contention Window has lots of good information about why 802.11ax won’t be ratified any time soon. A report about a leaked report that may or may not be accurate holds a lot less value. Don’t listen to the hype. Listen to the people who know what they’re talking about, like Scott and Tauni for example. And don’t stress about having the newest, fastest wireless devices in your house. Odds are way better that you’ll be buying a new AP for Christmas this year than that your next iPhone will support 802.11ax. But the one thing we can all agree on: Wi-Fi 6 is a terrible branding decision!


  1. Or I suppose the XI if you’re into Roman numerals ↩︎