The Sky is Not Falling For Ekahau

Ekahau Hat (photo courtesy of Sam Clements)

You may have noticed quite a few high profile departures from Ekahau recently. A lot of very visible community members, including Joel Crane (@PotatoFi), Jerry Olla (@JOlla), and Jussi Kiviniemi (@JussiKiviniemi), have all decided to move on. This has generated quite a bit of discussion among the members of the wireless community as to what this really means for the company and the product that is so beloved by so many wireless engineers and architects.

Putting the people aside for a moment, I want to talk about the Ekahau product line specifically. There was an undercurrent of worry in the community about what would happen to Ekahau Site Survey (ESS) and other tools in the absence of the people we’ve seen working on them for so long. I think this tweet from Drew Lentz (@WirelessNerd) best exemplifies that perspective:

So, naturally, I decided to poke back:

That last tweet is where I really want to focus this post.

The More Things Change

Let’s think about where Ekahau is with regard to the wireless site survey market right now. With no exaggeration, they are on top and clearly head and shoulders above the rest. What other product out there has the marketshare and mindshare they enjoy? AirMagnet is the former king of the hill, but the future of that tool is still in flux with all of its recent movement from Netscout to the new NetAlly. IBWave is coming up fast but they’re still not quite ready to go head-to-head in the same large enterprise space. I rarely hear TamoGraph Site Survey brought up in conversation. And as for NetSpot, they don’t check enough boxes for real site surveys to be a strong contender in the enterprise.

So, Ekahau really is the 800lb gorilla of the site survey market. This market is theirs to lose. They have a commanding lead. And speaking to the above tweets from Drew, are they really in danger of losing their customer base after just 12 months? Honestly? I don’t think so. Ekahau has a top-notch offering that works just great today. If there was zero development done on the platform for the next two years it would still be one of the best enterprise site survey tools on the market. How long did AirMagnet flounder under Fluke and still retain the title of “the best” back in the early 2010s?

Here Comes A Challenger

So, if the only real competitor that’s up-and-coming against Ekahau right now is IBWave, does that mean this is a market ripe for disruption? I don’t think that’s the case either. When you look at all the offerings out there, no one is really rushing to build a bigger, better survey tool. You tend to see this in markets where someone has a clear advantage. Without a gap to exploit there is no room for growth. NetSpot gives their tool away, so you can’t really win on price. IBWave and AirMagnet are fighting near the top, so you don’t have a way to break in beside them.

What features could you offer that aren’t present in ESS today? You’d have to spend 18-24 months to even build something comparable to what is present in the software today. So, you dedicate resources to build something that is going to be the tool that people wanted to use two years ago? Good luck selling that idea to a VC firm. Investors want return on their money today.

And if you’re a vendor that’s trying to break into the market, why even consider it? Companies focused on building APs and wireless control solutions don’t always play nice with each other. If you’re going to build a tool to help survey your own deployments you’re going to be unconsciously biased against others and give yourself some breaks. You might even bias your survey results in favor of your own products. I’m not saying it would be intentional. But it has been known to happen in the past.

Here’s the other thing to keep in mind: inertia. Remember how we posed this question with the idea that Ekahau wouldn’t improve the product at all? We all know that’s not the case. Sure, there are some pretty big names on that list that aren’t there any more. But those people aren’t all the people at Ekahau. Development teams continue to work on the product roadmap. There are still plans in place to look at new technologies. Nothing stopped because someone left. Even if the only thing the people on the development side of the house did was finish the plans in place there would still be another 12-18 months of new features on the horizon. That means trying to develop a competitor to ESS means developing technology to replace what is going to be outdated by the time you finish!

People Matter

That brings me back to the people. It’s a sad fact that everyone leaves a company sooner or later. Bill Gates left Microsoft. Steve Jobs left Apple. You can’t count on people being around forever. You have to plan for their departure and, even if you did catch lightning in a bottle once, find a way to do it again.

I’m proud to see some of the people that Ekahau has picked up in the last few months. Folks like Shamree Howard (@Shamree_Howard) and Stew Goumans (@WirelessStew) are going to go a long way to keeping the community engagement alive that Ekahau is so known for. There will be others that are brought in to fill the shoes of those that have left. And don’t forget that for every face we see publicly in the community there is an army of development people behind the scenes working diligently on the tools. They may not be the people that we always associate with the brand but they will try hard to put their own stamp on things. Just remember that we have to be patient and let them grow into their role. They have a lot to live up to, so give them the chance. It may take more than 12 months for them to really understand what they got themselves into.


Tom’s Take

No company goes out of business overnight without massive problems under the hood. Even the biggest corporate failures of the last 40 years took a long time to unfold. I don’t see that happening to Ekahau. Their tools are the best. Their reputation is sterling. And they have a bit of a cushion of goodwill to get the next release right. And there will be a next release. And one after that. Because what Ekahau is doing isn’t so much scaling the mountain they climbed to unseat AirMagnet. It’s proving they can keep going no matter what.

Fast Friday – Mobility Field Day 4

This week’s post is running behind because I’m out in San Jose enjoying great discussions from Mobility Field Day 4. This event is bringing a lot of great discussion to the community to get everyone excited for current and future wireless technologies. Some quick thoughts here with more ideas to come soon.

  • Analytics is becoming a huge driver for deployments. The more data you can gather, the better everything can be. When you start to include IoT as a part of the field you can see why all those analytics matter. You need to invest in a lot of CPU horsepower to make it all work the way you want. Which is also driving lots of people to build in the cloud to have access to what they need on-demand from an infrastructure side of things.
  • Spectrum is a huge problem and a source of potential for wireless. You have to have access to spectrum to make everything work. 2.4 GHz is pretty crowded and getting worse with IoT. 5 GHz is getting crowded as well, especially with LAA being used. And the opening of the 6 GHz spectrum could be held up by political concerns. Are there new investigations that need to happen to find bands that can be used without causing friction?
  • The driver for technology has to be something other than desire. We have to build solutions and put things out there to make them happen. Because if we don’t, we’re going to be stuck with what we have for a long time. No one wants to move and reinvest without clear value. But clear value often doesn’t develop until people have already moved. Something has to break the logjam of hesitance. That’s the reason why we still need bold startups with new technology jumping out to make things work.

Tom’s Take

I know I’ll have more thoughts when I get back from this event, but wireless has become the new edge and that’s a very interesting shift. The more innovation we can drive there, the more capable we can make our clients and the more we can empower users.

Extremely Hive Minded

I must admit that I was wrong. After almost six years, I was mistaken about who would end up buying Aerohive. You may recall back in 2013 I made a prediction that Aerohive would end up being bought by Dell. I recall it frequently because quite a few people still point out that post and wonder if it’s happened yet.

Alas, June 26, 2019 is the date when I was finally proven wrong, when Extreme Networks announced plans to purchase Aerohive for $4.45/share, which equates to around $272 million paid, a figure that will be adjusted for some cash on hand. Aerohive is the latest addition to the Extreme portfolio, which now includes pieces of Brocade, Avaya, Enterasys, and Motorola/Zebra.

Why did Extreme buy Aerohive? I know that several people in the industry told me they called this months ago, but that doesn’t explain the reasoning behind spending almost $300 million right before the end of the fiscal year. What was the draw that has Extreme buzzing about this particular company?

Flying Through The Clouds

The most apparent answer is HiveManager. Why? Because it’s really the only thing unique to Aerohive that Extreme really didn’t have already. Aerohive’s APs aren’t custom built. Aerohive’s switching line was rebadged from an ODM in order to meet the requirements to be included in Gartner’s Wired and Wireless Magic Quadrant. So the real draw was the software. The cloud management platform that Aerohive has pushed as their crown jewel for a number of years.

I’ll admit that HiveManager is a very nice piece of management software. It’s easy to use and has a lot of power behind the scenes. It’s also capable of being tuned for very specific vertical requirements, such as education. You can set up self-service portals and Private Pre-Shared Keys (PPSKs) fairly easily for your users. You can also build a lot of policy around the pieces of your network, both hardware and users. That’s a place to start your journey.
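
To give a sense of what PPSK means in practice, here’s a generic illustration of the concept (not HiveManager’s actual API or workflow, and the user names and policies are made up): every user gets a unique pre-shared key on the same SSID, and that key maps to a policy.

```python
import secrets
import string

def generate_ppsk(length: int = 16) -> str:
    """Generate a random per-user pre-shared key."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Hypothetical users mapped to the policy each of their keys should carry
users = {"student01": "student-vlan", "teacher01": "staff-vlan"}

ppsk_table = {
    user: {"psk": generate_ppsk(), "policy": policy}
    for user, policy in users.items()
}

for user, entry in ppsk_table.items():
    print(f"{user}: key={entry['psk']} policy={entry['policy']}")
```

Because each key is unique, revoking one user or changing their policy doesn’t force everyone else to re-key, which is a big part of why the feature is so popular in verticals like education.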

Why does that matter? Because Extreme is all about Automation! I talked to their team a few weeks ago and the story was all about building automation platforms. Extreme wants to have systems that are highly integrated and capable of doing things to make life easier for administrators. That means having the control pieces in place. And I’m not sure if what Extreme had already was in the same league as HiveManager. But I doubt Extreme has put as much effort into their software yet as Aerohive had invested in theirs over the past 8 years.

For Extreme to really build out the edge network of the future, they need to have a cloud-based management system that has easy policy creation and can be extended to include not only wireless access points but wired switches and other data center automation. If you look at what is happening with intent-based networking from other networking companies, you know how important policy definition is to the schema of your network going forward. In order to get that policy engine up and running quickly to feed the automation engine, Extreme made the call to buy it.

Part of the Colony

More importantly than the software piece, to me at least, is the people. Sure, you can have a bunch of people hacking away at code for a lot of hours to build something great. You can even choose to buy that something great from someone else and just start modifying it to your needs. Extreme knew that adapting HiveManager to fulfill the needs of their platform wasn’t going to be a walk in the park. So bringing the Aerohive team on board makes the most sense to me.

But it’s also important to realize who had a big hand in making the call. Abby Strong (@WiFi_Princess) is the VP of Product Marketing at Extreme. Before that she held the same role at Aerohive in some fashion for a number of years. She drove Aerohive to where they were before moving over to Extreme to do something similar.

When you’re building a team, how do you do it? Do you run out and find random people that you think are the best for the job and hope they gel quickly? Do you just throw darts at a stack of resumes and hope random chance favors your bold strategy? Or do you look at existing teams that work well together and can pull off amazing feats of technical talent with the right motivation? I’d say the third option is the most successful, wouldn’t you?

It’s not unheard of in the wireless industry for an entire team to move back and forth between companies. There’s a hospitality team that’s moved back and forth between Ruckus, Aerohive, and Ubiquiti. There are other teams, like some working on 802.11u, that bounced around a couple of times before they found a home. Which makes me wonder if Extreme bought Aerohive for HiveManager and ended up with the development team as a bonus? Or if they decided to buy the development team and got the software for “free”?


Tom’s Take

We all knew Aerohive was putting itself on the market. You don’t shed sales staff and middle management unless you’re making yourself a very attractive target for acquisition. I still held out hope that maybe Dell would come through for me and make my five-year-old prediction prescient. Instead, the right company snapped up Aerohive for next to nothing and will start in earnest integrating HiveManager into their stack in the coming months. I don’t know what the future plans for further integration look like, but the wireless world is buzzing right now and that should make life extremely sweet for the Aerohive team.

Will Spectrum Hunger Kill Weather Forecasting?

If you are a fan of the work we do each week with our Gestalt IT Rundown on Facebook, you probably saw a story in this week’s episode about the race for 5G spectrum causing some potential problems with weather forecasting. I didn’t have the time to dig into the details behind the story on that episode, so I wanted to take a few minutes and explain why it’s such a big deal.

First, you have to know that 5G (and many other) speeds are entirely dependent upon the amount of spectrum they can use to communicate. The more spectrum available to them, the more channels they have available to communicate, which increases the speed at which they can exchange information and reduces the amount of interference between devices. Sounds simple, right?
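
If you want to see why channel width matters so much, Shannon’s capacity formula makes the point quickly. This is a back-of-the-napkin sketch with an assumed signal-to-noise ratio, not a measurement of any real 5G or Wi-Fi deployment:

```python
import math

def capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR) for an idealized channel."""
    snr_linear = 10 ** (snr_db / 10)
    bits_per_second = bandwidth_mhz * 1e6 * math.log2(1 + snr_linear)
    return bits_per_second / 1e6

# Illustrative channel widths at an assumed 25 dB SNR
for width_mhz in (20, 40, 80, 160):
    print(f"{width_mhz} MHz channel -> ~{capacity_mbps(width_mhz, 25):.0f} Mbps ceiling")
```

Doubling the spectrum doubles the theoretical ceiling, which is why everyone fights so hard for every slice of it.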

Except mobile devices aren’t the only things that are using the spectrum. We have all kinds of other devices out there that use radio waves to communicate. We’ve known for several years that there are a lot of devices in the 5 GHz spectrum used by 802.11 that interfere with wireless devices. Things like ISM radios for industrial and medical applications or government radar systems. The government has instituted many regulations in those frequency ranges to ensure that critical infrastructure isn’t interfered with.

When Nature Calls

However, sometimes you can’t regulate away interference. According to this Wired article the FCC, back in March, opened up auctions for the 24 GHz frequency band. This was over strenuous objections from NASA, NOAA, and the American Meteorological Society (AMS). Why is 24 GHz so important? Well, as it turns out, there’s a natural phenomenon that exists at that range.

Recall your kitchen microwave. How does it work? Basically, it uses microwave radiation (around 2.4 GHz) to excite the water molecules in the food you’re cooking and heat it up. Water doesn’t just interact with that one frequency, though. Water vapor also absorbs and emits energy at around 23.8 GHz, which means that anything broadcasting at 23.8 GHz will have issues with water, such as the water in tree leaves or in water pipes.

So, why are NOAA and the AMS freaking out about auctioning off spectrum in the 23.8 GHz range? Because anything broadcasting in that range is not only going to have issues with water interference, it’s also going to look like water to sensitive equipment. That means that orbiting weather satellites that use microwave sensing at 23.8 GHz to detect water vapor in the air are going to encounter co-channel interference from 5G radio sources.

You might say to yourself, “So what? It’s just a little buzz, right?” Well, except that that little buzz creates interference in the data being fed into forecast prediction models. Those models are the basis for the weather forecasts we have today. And if you haven’t noticed, the reliability of our long range forecasts has been steadily improving for the past 30 years or so. Today’s 7-day forecasts are almost 80% accurate, which is pretty good compared to how bad things were in the 80s, when you could only guarantee 80% accuracy from a 3-day forecast.

Guess what? NOAA says that if the 24 GHz spectrum gets auctioned off for 5G use, we could see the accuracy of our forecasting regress almost 30%, which would push our models back to where they were in the 80s. Now, for those of you that live in places that are fortunate enough to only get sun and the occasional rain shower that doesn’t sound too bad, right? Just make sure to pack an umbrella. But for those that live in places where there is a significant chance for severe weather, it’s a bit more problematic.

I live in Oklahoma. We’re right in the middle of Tornado Alley. In the spring, between April 1 and June 1, my state becomes a fun place full of nasty weather that can destroy homes and cause widespread devastation. It’s never boring for sure. But in the last 30 years we’ve managed to go from being able to give people a few minutes’ warning about an impending tornado to being able to issue Particularly Dangerous Situation (PDS) Tornado Watches up to 48 hours in advance. While a PDS Tornado Watch doesn’t mean that a tornado will hit a specific area, it does mean that you need to be on the lookout for something that day. It gives enough warning to make sure you’re not going to get caught flat-footed when things get nasty.

Yes Man

The easiest way to avoid this problem is probably the least likely to happen. The FCC needs to restrict the auction of that spectrum range identified by NOAA and NASA until it can be proven that there won’t be any interference or that the forecast accuracy isn’t going to be impacted. 5G rollouts are still far enough in the future that leaving a part of the spectrum out of the equation isn’t going to cause huge issues for the next few years. But if we have to start creating rules for how we have to change power settings for device manufacturers or create updates for fixed-position sensors and old satellites we’re going to have a lot more issues down the road than just slightly slow mobile devices.

The reason why this is hard is because an FCC focused on opening things up for corporations doesn’t care about the forecast accuracy of a farmer in Iowa. They care about money. They care about progress. And ultimately they care about looking good over saving lives. There’s no guarantee that reducing forecast accuracy will impact life saving, but the odds are that better forecasts will help people make better decisions. And ultimately, when you boil it down to the actual choices, the appearance is that the FCC is picking money over lives. And that’s a pretty easy choice for most people to make.


Tom’s Take

If I’m a bit passionate about weather tech, it’s because I live in one of the most weather-active places on the planet. The National Severe Storms Laboratory and the National Weather Center are both about 5-6 miles from my house. I see the use of this tech every day. And I know that it saves lives. It’s saved lives for years for people that need to know when dangerous weather is headed their way.

Cisco’s Catalyst for Change

You’ve probably heard by now of the big launch of Cisco’s new 802.11ax (née Wi-Fi 6) portfolio of devices. Cisco did a special roundtable with a group of influencers from the community called Just The Tech. Here’s a video from that event covering the APs that were released, the 9120:

Fred always does a great job of explaining the technical bits behind the APs. But one thing that caught my eye here is the name of the AP – Catalyst. Cisco has been using Aironet for their AP line since they purchased Aironet Wireless Communications back in 1999. The name was practically synonymous with wireless technologies for many people in the industry that worked exclusively with Cisco technologies.

So, is the name change something we should be concerned about?

A Rose Is a Rose Is An AP

Cisco moving toward a unified naming convention for their edge solutions makes a lot of sense. Ten years ago, wireless was still primarily 802.11g-based, with 802.11n still a few months away from being ratified. Connectivity hadn’t quite yet reached the ubiquitous levels of wireless that we see today. The iPhone was only about to be on its third revision.

Cisco Catalyst devices were still the primary method of getting users connected to the network. Even laptop users hunted for Ethernet ports everywhere instead of just connecting to wireless. Ethernet was more reliable and faster than wireless, which topped out at 54 Mbps (at best) while fighting contention with all the other devices around. Catalyst stood for reliability.

In the time since, wireless has become the new edge device connectivity. No longer do we hunt for Ethernet ports unless we have a specific need for one. Laptops don’t come with dedicated wired networking options any longer. In 2019, wireless is king. And Aironet is the wireless name that Cisco has built. So why the change?

In short, because edge connectivity isn’t wired versus wireless any longer. Instead, it’s unified. Whether it was because of the idiotic decision made by Gartner to require wired switching for their wireless Magic Quadrant (TM) or because people stopped thinking about Ethernet except to power wireless access points, the fact is that the edge no longer has wires. For Cisco, this means that Catalyst switches aren’t the edge any longer. So the name doesn’t have the same power as it once did.

However, the Aironet name has also lost its luster. Why? Because Aironet is a remnant of Cisco’s pre-controller AP past. The line of APs that most people are likely using in their office right now isn’t from the Aironet heritage. Instead, those APs are based on technology from Airespace, which Cisco bought in 2005 to add controller-based technology to its portfolio. And, aside from references to Airespace in the code of the Wireless LAN Controllers (WLC), that line never really had a brand like Catalyst or Aironet.

Today, Cisco has started the move away from using Airespace technology in their controllers. As this video from 2018 shows, Cisco has begun to migrate their controller OS to a more modern platform instead of relying on modifying the old Airespace code again and again. This means that development going forward should be more rapid and less beholden to keeping everything running properly on a codebase over a decade old.

Branding New

So, that explains the reasons why Cisco might want to refresh everything. But why the naming of the APs? Why not just rely on Aironet and keep that branding going forward?

Well, because they want to make end users believe that the network is the same no matter if it’s wired or wireless. They want buyers to believe that Catalyst stands for edge connectivity, no matter where that edge might be. And, unless they really screw up and start making us think these new APs are switches, they’ll be able to pull off this branding exercise fairly well.

That’s because users have stopped caring about the wired versus wireless debate. Instead, they only care about speed and reliability. 802.11ax will help on both fronts, and Cisco wants to capitalize on that by making these new APs feel different. And the best way to do that is by rebranding them.

Wireless professionals don’t care about the name. Most of the time they just refer to the model number anyway. And while Cisco’s model numbering strategy seems to be getting a bit crowded at the 9000 level of things, this makes a lot of sense to distance themselves from their past. The old 802.11ac APs are still very viable and will likely be useful all the way until the end of their life. But when the time comes to pull them out, you’ll be retiring Aironet and Airespace along with them. Even if you didn’t realize those were the branding names of those APs.


Tom’s Take

Branding matters. Or it doesn’t. Either you love the name of the thing you’ve been using or you couldn’t care less. Whether it’s an iPhone or a car or an access point, everything has a name and a number attached to it. Cisco has decided, for better or worse, to unify the edge under the Catalyst name. Maybe it will stick and reduce confusion with customers. Maybe it will be hated enough that they’ll bring back the Aironet name in a couple of cycles to “get back to basics” as it were. But for now, the catalyst for change at Cisco leads to a unified edge solution.

802.11ax Is NOT A Wireless Switch

802.11ax is fast approaching. Though not 100% ratified by the IEEE, the spec is at the point where most manufacturers and vendors are going to support what’s current as the “final” version for now. While the spec for what marketing people like to call Wi-Fi 6 is not likely to change, the ramp-up to get people to buy it is showing no signs of starting slow. One of the biggest problems I see right now is the decision by some major AP manufacturers to call 802.11ax a “wireless switch”.

Complex Duplex

In case you had any doubts, 802.11ax is NOT a switch.1 But the answer to why that is takes some explanation. It all starts with the network. More specifically, with Ethernet.

Ethernet is a broadcast medium. Packets are launched into the network and it is hoped that the packet finds its destination. All nodes on the network listen and, if the packet isn’t destined for them, they discard it. This is the nature of the broadcast. If multiple stations try to talk at once, the packets collide and no one hears anything. That’s why Ethernet developed a collision detection system called CSMA/CD.

Switches solved this problem by segmenting the collision domain to a single port. Now, the only communications between the stations would be in the event that the switch couldn’t find the proper port to forward a packet. In every other case, the switch finds where the packet is meant to be sent and forwards it to that location. It prevents collisions by ensuring that no two stations can transmit at any one time except to the switch in the middle. This also allows communications to be full-duplex, meaning the stations can send and receive at the same time.
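
A toy model of that forwarding behavior makes the point clear. This is a sketch of the concept, not any vendor’s implementation: the switch learns which port each station lives on and only floods when it has no better answer.

```python
class LearningSwitch:
    """Toy switch: learn source MACs, then forward to one port or flood."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port            # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # known: forward out one port
        return [p for p in self.ports if p != in_port]  # unknown: flood the rest

switch = LearningSwitch(ports=[1, 2, 3, 4])
print(switch.receive(1, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # unknown dst -> [2, 3, 4]
print(switch.receive(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # learned dst -> [1]
```

Every port is its own collision domain, so two stations can blast traffic at the same time without ever stepping on each other.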

Wireless is a different medium. The AP still speaks Ethernet, and there is a bridge between the Ethernet interface and the radios on the other side. But the radio interfaces work differently than Ethernet. Firstly, they are half-duplex only. That means that they have to send traffic or listen to receive traffic but they can’t do both at the same time. Wireless also uses a different version of collision detection called CSMA/CA, where the last A stands for “avoidance”. Because of the half-duplex nature of wireless, clients have a complex process to make sure the frequency is clear before transmitting. They have to check whether or not other wireless clients are talking and if the ambient RF is within the proper thresholds. After all those checks are confirmed, then the client transmits.
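
Here’s a deliberately simplified sketch of that listen-before-talk logic. The threshold and slot numbers are illustrative only, and the real 802.11 DCF adds interframe spacing, NAV timers, and retransmission rules on top of this:

```python
import random

def try_to_transmit(channel_busy: bool, ambient_rf_dbm: float,
                    cca_threshold_dbm: float = -82.0,
                    max_backoff_slots: int = 15) -> str:
    """Very simplified CSMA/CA: check the medium, pick a backoff, then send."""
    if channel_busy or ambient_rf_dbm > cca_threshold_dbm:
        return "defer: medium busy or too much ambient RF"
    slots = random.randint(0, max_backoff_slots)  # random backoff to avoid collisions
    return f"transmit after waiting {slots} idle slots"

print(try_to_transmit(channel_busy=False, ambient_rf_dbm=-90.0))
print(try_to_transmit(channel_busy=True, ambient_rf_dbm=-90.0))
```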

Because of the half-duplex wireless connection and the need for stations to have permission to send, some people have said that wireless is a lot like an Ethernet hub, which is pretty accurate. All stations and APs exist in the same contention (collision) domain. Aside from the contention algorithm, there’s nothing to stop the stations from talking all at once. And for the entire life of 802.11 so far, it’s worked. 802.11ac started to introduce more features designed to let APs send frames to multiple stations at the same time. That’s what’s called Multi-User, Multiple-Input, Multiple-Output (MU-MIMO). In theory, it could allow for full-duplex transmissions by allowing a client to send on one antenna and receive on another, but utilizing client radios in this way has impacts on other things.

Switching It Up

Enter 802.11ax. The Wi-Fi 6 feature that has most people excited is Orthogonal Frequency-Division Multiple Access (OFDMA). Simply put, OFDMA allows the clients and APs to not use the entire transmission channel for sending data. It can be sliced up into sub-channels that can be used for low-bandwidth applications to reserve time to talk to the AP. Combined with enhanced MU-MIMO support in 802.11ax, the idea is that clients can talk directly to the AP and allocate a specific sub-channel resource unit all to themselves.
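
As a rough illustration of the resource-unit idea, here’s a toy scheduler that slices a 20 MHz channel’s tones into the smaller 802.11ax RU sizes and hands them to clients. The RU tone counts are the real ones; the allocation logic itself is invented for clarity and looks nothing like an actual AP scheduler:

```python
# Tone counts for the 802.11ax resource-unit sizes that fit in a 20 MHz channel
RU_SIZES = {"RU26": 26, "RU52": 52, "RU106": 106, "RU242": 242}

def allocate(clients: dict, channel_tones: int = 242):
    """Toy scheduler: give each client the smallest RU that covers its request
    until the channel's tones run out."""
    allocations, remaining = {}, channel_tones
    for name, wanted_tones in clients.items():
        ru = next((label for label, tones in sorted(RU_SIZES.items(), key=lambda kv: kv[1])
                   if tones >= wanted_tones), None)
        if ru and RU_SIZES[ru] <= remaining:
            allocations[name] = ru
            remaining -= RU_SIZES[ru]
    return allocations, remaining

clients = {"iot-sensor": 20, "laptop": 100, "phone": 40}
print(allocate(clients))  # ({'iot-sensor': 'RU26', 'laptop': 'RU106', 'phone': 'RU52'}, 58)
```

The low-bandwidth IoT sensor gets a tiny slice, the laptop gets a bigger one, and all of them can be serviced in the same transmit opportunity instead of taking turns with the whole channel.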

To the marketing people in the room, this sounds just like a switch. Reserved channels, single station access, right? Except it is still not a switch. The AP is still a bridge between two media types for one thing, but more importantly the transmission medium still hasn’t magically become full-duplex. Stations may get around this with some kind of trickery, but they still need to wait for the all-clear to send data. Remember that all stations and APs still hear all the transmissions. It’s still a broadcast medium at the most basic. No amount of software configuration is going to fix that. And for the networking people in the room that might be saying “so what?”, remember when Cisco tried to sell us on the idea that StackWise was capable of 40Gbps of throughput because it could send in both directions on the StackWise ring at once? Remember when you started screaming “THAT’S NOT HOW BANDWIDTH WORKS!!!” That’s what this is, basically. Smoke and mirrors and ignoring the underlying physical layer constraints.

In fact, if you read the above resources, you’re going to find a lot of caveats at the end about support for protocols coming up and not being in the first version of the spec. That’s exactly what happened with 802.11ac. The promise of “gigabit Wi-Fi” took a couple of years and the MU-MIMO enhancements everyone was trumpeting never fully materialized. Just like all technology, the really good stuff was deferred to the next release.

To make sure that both sides are heard, it is rightly pointed out by wireless professionals like Sam Clements (@Samuel_Clements) that 802.11ax is the most “switch-like” so far, with multiple dynamic collision domains. However, in the immortal words of Tyler Durden, “Sticking feathers up your butt doesn’t make you a chicken.” The switch moniker is still a marketing construct and doesn’t hold any water in reality aside from a comparison to a somewhat similar technology. The operation of wireless APs may be hub-like or switch-like, but these devices are not either of those types of devices.

CPU Bound and Determined

The other issue that I see that prevents this from becoming a switch is the CPU on the AP becoming a point of contention. In a traditional Ethernet switch, the forwarding hardware is a specialized ASIC that is optimized to forward packets super fast. It does this with some trickery, including cut-through forwarding and trusting the incoming CRC. When packets bounce up to the CPU to be process-switched, it bogs the entire system down terribly. That’s why most networking texts will tell you to avoid process switching at all costs.

Now apply those lessons to wireless. All this protocol enhancement is now causing the CPU to have to do extra duty to work on time-slicing and sub-channel optimization. And remember that those CPUs are operating on 18-28 watts of power right now. Maybe the newer APs will get over 30 watts with new PoE options, but that means those CPUs are still going to be eating a lot of power to process all this extra software work. Even adding dedicated processing power to the AP isn’t going to fix things in the long run. That might be one of the reasons why Cisco has been pushing enhanced PoE in the run-up to their big 802.11ax launch at the end of April.
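
To put some rough numbers on the power problem, here’s a quick check of a hypothetical 30-watt AP against the nominal power each PoE standard leaves at the powered device. The per-standard figures are the commonly cited nominal values; real-world budgets vary with cable runs and vendor implementation:

```python
# Nominal power available at the powered device (watts), commonly cited values
POE_PD_BUDGET_W = {
    "802.3af (PoE)":   12.95,
    "802.3at (PoE+)":  25.5,
    "802.3bt Type 3":  51.0,
    "802.3bt Type 4":  71.3,
}

AP_DRAW_W = 30.0  # hypothetical draw for a fully loaded 802.11ax AP

for standard, budget in POE_PD_BUDGET_W.items():
    verdict = "enough" if budget >= AP_DRAW_W else "NOT enough"
    print(f"{standard}: {budget:.2f} W at the device -> {verdict} for a {AP_DRAW_W:.0f} W AP")
```

Even the more generous budgets get eaten quickly once you add extra radios and the CPU cycles needed for all that scheduling work.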


Tom’s Take

Let me say it again for the cheap seats: 802.11ax is NOT a wireless switch! The physical layer technology that 802.11 is built on won’t be switchable any time soon. 802.11ax has given us a lot of enhancements in the protocol and there is a lot to be excited about, like OFDMA, BSS coloring, and TWT. But, like the decision to over-simplify the marketing name, the idea of calling it a wireless switch just to give people a frame of reference so they buy more of them is just silly. It’s disingenuous and sounds more like a snake oil salesman than honest technology marketing. Rather than trying to trick the users with cute sounding terms, how about we keep the discussion honest and discuss the pros and cons of the technology?

Special thanks to my friends in the wireless space for proofreading this post and correcting my errors in technology:


  1. The title was kind of a spoiler ↩︎

Fast Friday – Aruba Atmosphere 2019

A couple of quick thoughts that I’m having ahead of Aruba Atmosphere next week in Las Vegas, NV. Tech Field Day has a lot going on and you don’t want to miss a minute of the action for sure, especially on Wednesday at 3:15pm PST. In the meantime:

  • IoT is really starting to move down-market. Rather than being focused on enabling large machines with front-end devices to act as gateways, we’re starting to see more and more IoT devices either come with integrated connectivity or interface with systems that do. Building control systems aren’t just for large corporations any more. You can automate an office on the cheap today. Just remember that any device that can talk can also listen. Security posture is going to be huge.
  • I remember some of the discussions that we had during the heady early days of SDN and how unimpressed wireless and mobility people were when they figured out how the controllers and dumb edge devices really worked. Most wireless pros have been there and done that already. However, recently there has been a lot of movement in the OpenConfig community around wireless devices. And that really has the wireless folks excited. Because the promise of SDN for them has never been about control, but instead about compatibility. The real key isn’t building another controller but instead making all the APs and controllers work better together.
  • Another great thing I’m looking forward to seeing at Atmosphere is Aruba HER. It’s an event focused on building stronger communities and increasing diversity for all. You can read a bit more about what will be going on there in this post from Claire Chaplais. Make sure to check out Zoë Rose keynoting the event as well! She’s got a very powerful story to tell. She gave us all a bit of it at Security Field Day last December in this Ignite Talk.

Tom’s Take

Make sure you stay tuned for all the things we’re going to be discussing during the event. We’re going to be using the event hashtag #ATM19 but also using #MFDx as a way to let you know about the great stuff we will have going on!