Wi-Fi 6E Growing Pains For Apple

You may have seen that the new iPad Pro has Wi-Fi 6E support. That caused a lot of my wireless friends to jump out and order one, as I expected. As I previously mentioned, 2023 is going to be a big year for Wi-Fi 6E. I was wrong about the 6E radio in the new iPhone, but given the direction that Apple is going with the iPad Pro, and probably the MacBook as well, we’re in for a lot of fun. Why? Because Apple is changing their stance on how to configure 6GHz networks.

An SSID By Any Other Name

If you’ve ever set up wireless networks before, you know there are competing suggestions about how to configure SSIDs across multiple bands. One school of thought says that you should combine both 2.4GHz and 5GHz in the same SSID and let the device figure out which one is best to use. This is the way that I have mine set up at home.

However, if you do a quick Google search you’ll find plenty of other wisdom that suggests creating two different SSIDs that each work on a single band. The thought process here is that the device can’t distinguish between the different bands once it makes a decision, so it will be stuck on one or the other. While that doesn’t sound like a bad idea, it does have issues in practice. What if your device chooses 2.4GHz when you didn’t want it to? What if your device is stuck on 5GHz at the limit of the noise floor in your office and you want it to swap to the other band for better throughput instead of adding another AP?
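
To make the two schools of thought concrete, here’s a minimal sketch of what the choice looks like in hostapd, with one config per radio. The interface names, file paths, and SSIDs here are mine, and an enterprise controller exposes the same decision in its own UI; the only real difference between “combined” and “split” is whether the ssid value matches across bands:

```
# 2.4GHz radio (e.g. /etc/hostapd/band-2g.conf)
interface=wlan0
hw_mode=g
channel=6
# Same name on both radios = the combined approach
ssid=HomeNet

# 5GHz radio (e.g. /etc/hostapd/band-5g.conf)
interface=wlan1
hw_mode=a
channel=36
ieee80211ac=1
# Change this to HomeNet-5G and you have the split approach
ssid=HomeNet
```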

There are several reasons to want more control over how the frequency band is chosen. Sadly, Apple has never allowed the user to choose the band when joining a network. The only way to influence that selection has been to create different networks, which leads to management overhead and other challenges for devices that are unable to join one network or another, and makes the planning process rather challenging.

Now, with the advent of a device that has a Wi-Fi 6E radio in the 6GHz range, Apple has changed their thinking about how a network should operate. In a new support post, Apple clarifies that the SSID name should not be different across the three bands. There’s no other mention of what happens at the device level as far as band selection goes.

In a different tech support article, Apple describes what happens if you don’t give the bands the same name. If you join a 6GHz-only network on the new iPad Pro, the device will detect that there is no corresponding 5GHz network, search for one from the same AP, and let you join it as well. The article even mentions the ominous phrase “limited compatibility”, even if the dialog box doesn’t. If you choose to join this split-SSID setup there is another confirmation box that encourages you to go tweak your SSID settings to make the name the same on both networks. I’m not sure if that same prompt comes up for 2.4GHz networks too. Maybe I can borrow someone’s iPad to test it.

Disabling New Tech

Even though Apple has never allowed users to select the band they want to use on an SSID, there is a new feature for 6GHz that gives you the opportunity to work around any issues you have with this new band. In the settings for the SSID there is a toggle for “Wi-Fi 6E Mode” that allows you to disable 6GHz on that SSID until you enable it again. This way you can use Apple’s recommended settings for the SSID but still disable the pieces that might be broken.

Interestingly, this toggle only appears for 6E networks, according to the support article. There’s still no way to toggle between 2.4GHz and 5GHz. However, adding this support to the network settings should make it easy to carry down into the other bands. Whether or not Apple does so is a much different matter. Also, the setting isn’t currently in macOS Ventura. That could be because there isn’t a 6E radio available in a Mac yet and the setting might not show up until there’s a supported radio. Time will tell when Apple releases a MacBook with a built-in Wi-Fi 6E radio.


Tom’s Take

After months of professionals saying that Apple needed to release support for Wi-Fi 6E, it’s finally here. It also brings new capabilities on the software side to control how the 6E radios are used. Is it completely baked and foolproof? Of course not. Getting the radios into the iPad was the first step. By introducing them now with software for troubleshooting and configuration, and following that up with a likely 6GHz MacBook and iMac lineup soon, there will be plenty of time to work out the issues by the time the iPhone 15 gets support for Wi-Fi 6E. Apple is clearly defining their expectations for how an SSID should work, so you have plenty of time to work through things or change your design documents before the explosion of Wi-Fi 6E clients arrives en masse in 2023.

Why 2023 is the Year of Wi-Fi 6E

If you’re like me, you chuckle every time someone tells you that next year is the year of whatever technology is going to be hot. Don’t believe me? Which year was the Year of VDI again? I know the title of this post probably made you shake your head in amusement, but I truly believe we’ll hit the adoption tipping point for Wi-Fi 6E next year.

Device Support Blooms

There are rumors that the new iPhone 14 will adopt Wi-Fi 6E. There were the same rumors when the iPhone 13 was coming out, and the iPhone rumor mill is always a mixed bag, but I think we’re on track this time. Part of the reason is the advancements made in Wi-Fi 6 Release 2. The power management features in 6ER2 are something that should appeal to mobile device users, even if the name is as confusing as can be.

Mobile phones don’t make a market. If they were the only driver for wireless adoption the Samsung handsets would have everyone on 6E by now. Instead, it’s the ecosystem. Apple putting a 6E radio in the iPhone wouldn’t be enough to tip the scales. It would take a concerted effort of adoption across the board, right? Well, what else does Apple have on deck that can drive the market?

The first thing is the rumored M2 iPad Pro. It’s expected to debut in October 2022 and feature upgrades aside from the CPU, like wireless charging. One of the biggest features would be the inclusion of a Wi-Fi 6E radio to match the new iPhone. That would mean both of Apple’s mobile devices could enjoy the faster and less congested bandwidth of 6GHz. The iPad would also be easier to build a new chip around compared to the relatively cramped space inside the iPhone. Given the professional nature of the iPad Pro, one might expect an enterprise-grade feature like 6E support to help move some units.

The second thing is the looming M2 MacBook Pro. Note that for this specific example I’m talking about the 14” and 16” models that would feature the Pro and Max chips, not the 13” model running a base M2. Apple packed the M1 Pro and M1 Max models with new features last year, including more ports and a snazzy case redesign. What would drive people to adopt the new model so soon? How about faster connectivity? Given that people are already complaining that the M1 Pro has slow Wi-Fi, Apple could really silence their armchair critics with a Wi-Fi 6E radio.

You may notice that I’m relying heavily on Apple here as my reasoning behind the projected growth of 6E in 2023. It’s not because I’m a fanboy. It’s because Apple is one of the only companies that controls their own ecosystem to the point of being able to add support for a technology across the board and drive adoption among their user base. Sure, we’ve had 6E radios from Samsung and Dell and many others for the past year or so. Did they drive the sales of 6E radios in the enterprise? Or even in home routers? I’d argue they haven’t. But Apple isn’t the only reason why.

Oldie But Goodie

The last reason that 2023 is going to be the year of Wi-Fi 6E is timing. Specifically, I’m talking about the timing of a refresh cycle in the enterprise. The first Wi-Fi 6 APs started coming to market in 2019. Early adopters jumped at the chance to have more bandwidth across the board. But those APs are getting old by the standards of tech. They may still pass traffic, but users that are going back to the office are going to want more than standard connectivity. Especially if those users splurged on a new iPhone or iPad for Christmas or are looking for a new work laptop of the Macintosh variety.

Enterprises may not have been packed with users for the past couple of years, but that doesn’t mean the tech stood still. Faster and better is always the mantra of the cutting-edge company. The revisions in the new standards would make life easier for those trying to deploy new IoT sensors or deal with congested buildings. If enterprise vendors ship these new APs in the early part of the year it could even function as an incentive to get people back in the office instead of the slow, insecure coffee shop around the corner.

One other quirky little thing comes from a report that Intel is looking to adopt Wi-Fi 7. It may just be the cynic in me talking, but as soon as we start talking about a new technology on the horizon, people start assuming that the “current” cutting-edge tech is ready for adoption. It’s the same as people who caution you not to install a new operating system until after the first patch or service release. Considering that Wi-Fi 6 Release 2 is effectively Wi-Fi 6E Service Pack 1, I think the cynics in the audience are going to decide that it’s time to adopt Wi-Fi 6E since it’s ready for action.


Tom’s Take

Technology for the sake of tech is always going to fail. You need drivers for adoption and usage. If cool tech won the day we’d be watching Betamax movies or HD-DVD instead of streaming on Netflix. Instead, the real winners are the technologies that get used. So far that hasn’t been Wi-Fi 6E for a variety of reasons. However, with the projections of releases coming soon from Apple I think we’re going to see a massive wave of adoption of Wi-Fi 6E in 2023. And if you’re reading this in late 2023 or beyond and it didn’t happen, just mentally change the title to whatever next year is and that will be the truth.

Is the M1 MacBook Pro Wi-Fi Really Slower?

I ordered a new M1 MacBook Pro to upgrade my existing model from 2016. I’m still waiting on it to arrive but managed to catch a sensationalist headline in the process:

“New MacBook Wi-Fi Slower than Intel Model!”

The article referenced this spec sheet from Apple listing the various cards and capabilities of the MacBook Pro line. I looked it over and found that, according to the tables, the wireless card in the M1 MacBook Pro is capable of a maximum data rate of 1200 Mbps. The wireless card in the older Intel MacBook Pro models, going all the way back to 2017, is capable of 1300 Mbps. Case closed! The older one is indeed faster. Except that’s not the case anywhere but on paper.

PHYs, Damned Lies, and Statistics

You’d be forgiven for jumping right to the numbers in the table and using your first grade inequality math to figure out that 1300 is bigger than 1200. I’m sure it’s what the authors of the article did. Me? I decided to dig in a little deeper to find some answers.

It only took me about 10 seconds to find the first answer as to one of the differences in the numbers. The older MacBook Pro used a Wi-Fi card that was capable of three spatial streams (3SS). Non-wireless nerds reading this post may wonder what a spatial stream is. The short answer is that it is a separate, unique stream of data along a different path. Multiple spatial streams can be leveraged through Multiple Input, Multiple Output (MIMO) to increase the amount of data being sent to a wireless client.

The older MacBook Pro has support for 3SS. The new M1 MacBook Pro has a card that supports up to 2SS. Problem solved, right? Well, not exactly. You’re also talking about client radios that support different wireless protocols. The older model supported 802.11n (Wi-Fi 4) and 802.11ac (Wi-Fi 5) only. The newer model supports 802.11ax (Wi-Fi 6) as well. The maximum data rates on the Apple support page are quoted for 11ac on the Intel MBP and 11ax on the M1 MBP.

Okay, so there are different Wi-Fi standards at play here. Can’t be too hard to figure out, right? Except that the move from Wi-Fi 5 to Wi-Fi 6 is more than just incrementing the number. There are a huge number of advances that have been included to increase efficiency of transmission and ensure that devices can get on and off the air quickly to help maximize throughput. It’s not unlike the difference between the M1 chip in the MacBook and its older Intel counterpart. They may both do processing but the way they do it is radically different.

You also have to understand something called the Modulation and Coding Scheme (MCS). An MCS index defines the data rates possible for a given combination of modulation, such as Quadrature Amplitude Modulation (QAM), and coding rate, assuming sufficient signal-to-noise ratio (SNR) and RSSI. Trying to define QAM could take all day, so I’ll just leave it to GT Hill to do it for me:

The MCS table for a given protocol will tell you what the maximum data rate for the client radio is. Let’s look at the older MacBook Pro first. Here’s a resource from NetBeez that has the 802.11ac MCS rates. If you look up the details from the Apple support doc for a 3SS radio using VHT MCS 9 and an 80MHz channel bandwidth you’ll find the rate is exactly 1300 Mbps.

Here’s the MCS table for 802.11ax, courtesy of Francois Verges. WAY bigger, right? You’re likely going to want to click on the link to the Google Sheet in his post to be able to read it without a microscope. If you look at the table and find the row that equates to an 11ax client using 2SS, HE MCS 11, and 80MHz channel bandwidth you’ll see that the number is 1201. I’ll forgive Apple for rounding it down to keep the comparison consistent.
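
If you’d rather check those table entries than take them on faith, the arithmetic is simple enough to sketch out. This is a back-of-the-napkin Python version of the PHY rate math (data subcarriers × bits per subcarrier × coding rate ÷ symbol time × spatial streams), using the standard 80MHz subcarrier counts and the guard intervals those tables assume:

```python
def max_phy_rate_mbps(subcarriers, bits_per_subcarrier, coding_rate, symbol_us, streams):
    # bits per OFDM symbol divided by symbol duration (in microseconds) yields Mbps
    return subcarriers * bits_per_subcarrier * coding_rate / symbol_us * streams

# Intel MBP: 802.11ac, 80MHz (234 data subcarriers), VHT MCS 9 = 256-QAM (8 bits)
# at 5/6 coding, short guard interval (3.6us symbol), 3 spatial streams
print(round(max_phy_rate_mbps(234, 8, 5/6, 3.6, 3)))    # 1300

# M1 MBP: 802.11ax, 80MHz (980 data subcarriers), HE MCS 11 = 1024-QAM (10 bits)
# at 5/6 coding, 0.8us guard interval (13.6us symbol), 2 spatial streams
print(round(max_phy_rate_mbps(980, 10, 5/6, 13.6, 2)))  # 1201
```

Both of Apple’s quoted numbers fall straight out of that formula.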

Again, this all checks out. The Wi-Fi equivalent of actuarial tables says that the older one is faster. And it is, under absolutely perfect conditions, because the quoted numbers in the Apple document are the maximums for those MCS rates. When’s the last time you got the maximum amount of throughput on a wired link? Now remember that in this case you’re going to need perfect airtime conditions to get there. Which usually means you’ve got to be right up against the AP or within a very short distance of it. And that 80MHz channel bandwidth? As my friend Sam Clements says, that’s like drag racing a school bus.

The World Isn’t Made Out Of Paper

If you are just taking the numbers off of a table and reproducing them and claiming one is better than the other then you’re probably the kind of person that makes buying decisions for your car based on what the highest number on the speedometer says. Why take into account other factors like cargo capacity, passenger size, or even convertible capability? The numbers on this one go higher!

In fact, when you unpack the numbers as I did, you’ll see that the apparent 100 Mbps difference between the two radios isn’t likely to come into play at all in the real world. As soon as you move more than 15 feet away from the AP or put a wall between the client device and your AP you will see a reduction in the data rate. The top end of both protocols runs in the 5GHz spectrum, which isn’t as forgiving of walls as 2.4GHz is. Moreover, if there are other interfering sources in your environment you’re not going to get nearly the amount of throughput you’d like.

What about that difference in spatial streams? I wondered about that for the longest time. Why would you purposely put fewer spatial streams in a client device when you know that you could max it out? The answer is that even with that many spatial streams, reality is a very different beast. Devin Akin wrote a post about why throughput numbers aren’t always the same as the tables. In that post he mentioned that a typical client mix in a network in 2018 was about 66% devices with 1SS, 33% devices with 2SS, and less than 1% of devices with 3SS. While those numbers have probably changed in 2021 thanks to the iPhone and iPad now having 2SS radios, I don’t think the 3SS numbers have moved much. The only devices that have 3SS are laptops and other bigger units. It’s harder for a small device to sustain the data rates of a 3SS radio, so most devices only include support for two of them.

The other thing to notice here is that the value a spatial stream brings you is different between the two protocols. In 802.11ac, the max data rate for a single spatial stream is about 433 Mbps. For 802.11ax it’s 600 Mbps. So a 2SS 11ac radio maxes out at 866 Mbps, while a 3SS 11ax radio would get you around 1800 Mbps. It’s far more likely that you’ll get efficient use out of a 2SS 11ax radio most of the time than that you’ll ever see the maximum throughput of a 3SS 11ac radio.


Tom’s Take

This whole tale is a cautionary example of why you need to do your own research, even if you aren’t a Wi-Fi pro. The headline was both technically correct and wildly inaccurate. Yes, the numbers were different. Yes, the numbers favored the older model. No one is going to see the maximum throughput under most normal conditions. Yes, having support for Wi-Fi 6 in the new MacBook Pro is a better choice overall. You’re not going to miss that 100 Mbps of throughput in your daily life. Instead you’re going to enjoy a better protocol with more responsiveness in the bands you use on a regular basis. You’re still faster than the gigabit Ethernet adapters so enjoy the future of Wi-Fi. And don’t believe the numbers on paper.

Charting the Course For Aruba

By now you’ve seen the news that longtime CEO of Aruba Keerti Melkote is retiring. He’s decided that his 20-year journey has come to a conclusion and he is stepping down into an advisory role until the end of the HPE fiscal year on October 31, 2021. Leaving along with him are CTO Partha Narasimhan and Chief Architect Pradeep Iyer. It’s a big shift in the way that things will be done going forward for Aruba. There are already plenty of hot takes out there about how this is going to be good or bad for Aruba and for HPE depending on which source you want to read. Because I just couldn’t resist I’m going to take a stab at it too.

Happy Trails To You

Keerti is a great person. He’s smart and capable and has always surrounded himself with good people as well. The HPE acquisition honestly couldn’t have gone any better for him and his team. The term “reverse acquisition” gets used a lot and I think this is one of the few positive examples of it. Aruba became the networking division of HPE. They rebuilt the husk that was HP’s campus networking division and expanded it substantially. They introduced new data center switches and kept up with their leading place in the access point market.

However, even the best people eventually need new challenges. According to many industry analysts, there was always a bigger role looming on the horizon for Keerti. As speculated by Stephen Foskett on this past week’s episode of the Gestalt IT Rundown, Keerti was the odds-on favorite to take over HPE one day. He had the pedigree of running a successful business and he understood how data moving to the cloud was going to be a huge driver for hardware in the future. He had even taken over a combined business unit of networking devices and edge computing, renamed Intelligent Edge, last year. All signs pointed to him being the one to step up when Antonio Neri eventually moved on.

That Keerti chose to step away now could indicate that he realized the HPE CEO job was not going to break his way. Perhaps the pandemic has sapped some of his desire to continue to run the business. Given that Partha and Pradeep are also choosing to depart, it could be more of an indicator of internal discussions and not a choice by Keerti to move on of his own accord. I’m not speculating that there is pressure on him. It could just be that this was the best time to make the exit after steering the ship through the rough seas of the pandemic.

Rearranging the Deck Chairs

That brings me to the next interesting place that Aruba finds itself. With Keerti and company off to greener pastures, who steps in to replace them? When I first heard the news of the departure of three very visible parts of Aruba all at once my first thought jumped immediately to David Hughes, the former CEO of Silver Peak.

HPE bought Silver Peak last year and integrated their SD-WAN solutions into Aruba. I was a bit curious about this when it first happened because Aruba had been touting their SD-Branch solution that leveraged ClearPass extensively. To shift gears and adopt Silver Peak as the primary solution for the WAN edge was a shift in thinking. By itself that might have been a minor footnote.

Then a funny thing happened that gave me pause. I started seeing more and more Silver Peak names popping up at Aruba. That’s something you would expect to see when a company gets acquired. But seeing those people hop into roles outside of the WAN side of the house was somewhat shocking. It felt for a while like Silver Peak was taking over a lot of key positions inside of Aruba on the marketing side of the house. Which meant that the team was poised for something bigger in the long run.

When David Hughes was named as the successor to Partha and Pradeep as the CTO and Chief Architect at Aruba, it made sense to me. Hughes is good at the technology. He understands the WAN and networking. He doesn’t need to worry much about the wireless side of the house because Aruba has tons of wireless experts, including Chuck Lukaszewski. Hughes will do a great job integrating the networking and WAN sides of the house to embrace the edge mentality that Aruba and HPE have been talking about for the past several months.

So, if David Hughes isn’t running Aruba, who is? That would be Phil Mottram, a veteran of the HPE Communications Technology Group. He has management material written all over him. He’s been an executive at a number of companies and he is going to steer Aruba in the direction that HPE wants it to go. That’s where the real questions are going to start being asked around here. I’m sure there’s probably going to be some kind of a speech by Antonio Neri about how Aruba is a proud part of the HPE family and the culture that has existed at Aruba is going to continue even after the departure of the founder. That’s pretty much the standard discussion you have with everyone after they leave. I’m sure something very similar happened after the Meraki founders left Cisco post-acquisition.

The Sky’s The Limit

What is HPE planning for Aruba? If I were a betting man, I’d say the current trend is going to see Aruba become more integrated into HPE. Not quite on the level of Nimble Storage but nowhere near the practical independence they’ve had for the last few years. We’re seeing that HPE is looking at Aruba as a valuable brand as much as anything else. The moves above in relation to the departure of Keerti make that apparent.

Why would you put a seasoned CEO in the role of Chief Architect? Why would you name a senior Vice President to the role of President of that business unit? And why would a former CEO willingly agree to be where he is when that carrot is just out of reach? I would say it’s because David Hughes either realizes or has been told that the role of Chief Architect is going to be much more important in the coming months. That would make a lot of sense if the identity of Aruba begins to be subsumed into HPE proper.

Think about Meraki and Cisco. Meraki has always been a fiercely independent company. You would have been hard pressed for the first year or two to even realize that Cisco was the owner. However, in the past couple of years the walls that separate Cisco and Meraki have started to come down. Meraki is functioning more like a brand inside of Cisco than an independent part of the organization. It’s not a negative thing. In fact, it’s what should happen to successful companies when they get purchased. However, given the independence streak of the past it seems more intriguing than what’s on the surface.

Aruba is going to find itself being pulled in more toward HPE’s orbit. The inclusion of Aruba in the HPE Intelligent Edge business unit says that HPE has big plans for the whole thing. They don’t want to have their customers seeing HPE and Aruba as two separate things. Instead, HPE would love to leverage the customers that Aruba does have today to bring in more HPE opportunities. The synergy between the two is the whole reason for the acquisition in the first place. Why not take advantage of it? Perhaps the departure of the old guard is the impetus for making that change?


Tom’s Take

Aruba isn’t going to go away. It’s not going to be like a storage solution being eaten alive and then disappearing into a nameplate on a rack unit. Aruba has too much value as a brand and too comfortable a position in the networking space to be completely eliminated. However, it is going to become more valuable for the Aruba teams to create synergy inside of HPE and lead the efforts to integrate the edge networking and compute solutions as people shift their workloads around to take advantage of all the work that’s been done there. Time will tell if Aruba stays separate enough to be remembered as the titan they’ve been.

The Silver (Peak) Lining For HPE and Cloud

You no doubt saw the news this week that HPE announced they’re buying Silver Peak for just shy of $1 billion. It’s a good exit for Silver Peak and should provide some great benefits for both companies. There was a bit of interesting discussion around where this fits in the bigger picture for HPE, Aruba, and the cloud. I figured I’d throw my hat in the ring and take a turn discussing it.

Counting Your Chickens

First and foremost, let’s discuss where this acquisition is headed. HPE announced it and they’re the ones holding the purse strings. But the acquisition post was courtesy of Keerti Melkote, who runs the “Aruba, a Hewlett Packard Enterprise Company” (Aruba) side of the house. Why is that? It’s because HPE “reverse acquired” Aruba and sent all their networking expertise and hardware down to the Arubans to get things done.

I would venture to say that acquiring Aruba was the best decision HPE could have made. It gave them immediate expertise in an area where they sorely needed help. It gave Aruba a platform to build on and innovate from. And it ultimately allowed HPE to shore up their campus networking story while trying to figure out how they wanted to extend into the data center.

Aruba was one of the last major networking players to announce a strategy based on SD-WAN. We’ve seen a lot of major acquisitions on that front, including Cisco buying Viptela, VMware buying VeloCloud, Palo Alto Networks buying CloudGenix, and Oracle buying Talari. That last one is going to be important later in this story. Aruba didn’t go out and buy their own SD-WAN solution. Instead, they developed it in-house leveraging the expertise they had with ClearPass. Instead of calling it SD-WAN and focusing on connecting islands together, they used the term SD-Branch to denote the connectivity was more about the users in the branch and not the office itself.

I know that SD-Branch isn’t a term that’s en vogue with most analysts. But it’s important to realize that Aruba was trying to say more about the users than anything else. Hardware was an afterthought in this solution. It wasn’t about an edge gateway, although Aruba had those. It wasn’t about connectivity, even though Aruba could help on that front too. Instead, it was about pushing policy down to the edge and sorting out connectivity for devices. That’s the focus that Aruba had for many years with their wireless roots. It only made sense to leverage the tools to get where they wanted to be on the SD-Whatever front.

Coming Home To Roost

The world of SD-WAN isn’t the same as the branch any longer, though. Now, SD-WAN drives cloud on-ramp and edge security. Ironically enough, the drive to include Secure Access Service Edge (SASE) in SD-WAN is way more in line with the original concept of SD-Branch as defined by Aruba a couple of years ago. But you have to have a more well-rounded solution that includes cloud.

Why do you need to worry about the cloud? If you still have to ask that question in 2020 you’re not gonna make it in IT much longer. The cloud operationalizes a lot of things in IT that we have just accepted for years. The way that we do workloads and provisioning has changed on-site as well. If you don’t believe me then listen to HPE. Their biggest focus coming out of HPE Discover this year has been highlighting GreenLake, their on-premises IT-as-a-Service offering. It’s designed to give you cloud-like performance in your data center with cloud-like billing options. And, as marketed, the ability to move workloads back and forth between on-site and in-cloud as needed.

Hopefully, you’re starting to see why Silver Peak was such an important pickup for HPE now. SD-Branch is focused on devices but not services. It’s not designed to provide cloud on-ramp. HPE and Aruba need a solution that gives you the ability to accelerate user adoption of software-as-a-service no matter where it lives, be it in GreenLake or AWS or Azure. Essentially, HPE and Aruba needed Talari for their cloud offerings. And that’s what they’re going to get with Silver Peak.

Silver Peak has focused on cloud intelligence to accelerate SaaS for a few years now. They also have a multi-cloud networking solution. Multi-cloud is the way that you get people working between two different clouds, like AWS and GreenLake for example.

When you tie Silver Peak’s DC-focused SD-WAN solution in with Aruba’s existing SD-Branch capabilities, you see how holistic a solution you have now. And because it’s all based on software you don’t have to worry about clunky integration. It can run side-by-side for now, and with the next revision of Aruba Central integrating with the new Edge Services Platform (ESP), it’s all going to be seamless to use however you want.


Tom’s Take

I think Silver Peak is a good pickup for HPE and Aruba. Just remember that when you hear HPE and networking in the same sentence, you need to think about Aruba. Ultimately, the integration of Silver Peak into Aruba’s SD-Branch solution is going to benefit everyone from users to cloud to software and back again. And it’s going to help position Aruba as a major player in the SD-WAN market. Which is a silver lining on any cloud.

AI and Trivia


I didn’t get a chance to attend Networking Field Day Exclusive at Juniper NXTWORK 2019 this year but I did get to catch some of the great live videos that were recorded and posted here. Mist, now a Juniper Company, did a great job of talking about how they’re going to be extending their AI-driven networking into the realm of wired networking. They’ve been using their AI virtual assistant, named “Marvis”, for quite a while now to solve basic wireless issues for admins and engineers. With the technology moving toward the copper side of the house, I wanted to talk a bit about why this is important for the sanity of people everywhere.

Finding the Answer

Network and wireless engineers are walking storehouses of useless trivia knowledge. I know this because I am one. I remember the hello and dead timers for OSPF on NBMA networks. I remember how long it takes BGP to converge or what the default spanning tree bridge priority is for a switch. Where some of my friends can remember the batting average for all first basemen in the league in 1971, I can instead tell you all about LSA types and the magical EIGRP equation.
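
For the curious, that magical EIGRP equation is the composite metric, and it’s a good example of exactly the kind of trivia I mean. Here’s a rough Python sketch of the classic formula, with bandwidth as the slowest link on the path in Kbps and delay as the path total in tens of microseconds:

```python
def eigrp_metric(min_bw_kbps, total_delay_10us, k1=1, k2=0, k3=1, k4=0, k5=0,
                 load=1, reliability=255):
    bw = 10_000_000 // min_bw_kbps  # scaled inverse of the slowest link
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * total_delay_10us
    if k5:  # a K5 of 0 skips the reliability term entirely
        metric = metric * k5 // (reliability + k4)
    return 256 * metric
```

With the default K-values (K1 = K3 = 1, everything else 0) the whole thing collapses to 256 × (bandwidth term + delay), which is why load and reliability almost never matter in practice.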

Why do we memorize this stuff? We live in a world with instant search at our fingertips. We can find anything we might need thanks to the omnipotent Google Search Box. As long as we can avoid sponsored results and ads we can find the answer to our question relatively quickly. So why do we require people to memorize esoteric trivia? Is it so we can win free drinks at the bar after we’re done troubleshooting?

The problem isn’t that we have to know the answer. It’s that we need to know the answer in order to ask the right question. More often than not we find ourselves stuck in the initial phase of figuring out the problem. The symptoms are almost always the same – things aren’t working. Finding the cause isn’t always easy though. We have to find some nugget of information to latch onto in order to start the process.

One of my old favorites was trying to figure out why a network I was working with had a segmented spanning tree. One side of the network was working just fine but there were three switches daisy-chained together that didn’t. Investigations turned up very little. Google searches were failing me. It wasn’t until I keyed in on a couple of differences that I found out that I had improperly used a BPDU filtering command because of a scoping issue, sketched below. Sure, it only took me two hours of searching to find it after I discovered the problem. But if I hadn’t memorized the BPDU filtering and guard commands and their behavior I wouldn’t have even known to ask about them. So it’s super important to know how every minutia of every protocol works, right?
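
For anyone who hasn’t stepped on this particular land mine, here’s roughly what that scoping difference looks like in Cisco IOS (the interface numbering is illustrative):

```
! Global scope: filtering applies only to PortFast edge ports, and a port
! that receives a BPDU falls back to normal spanning tree operation
spanning-tree portfast bpdufilter default

! Interface scope: unconditionally stops sending and processing BPDUs,
! effectively turning spanning tree off on the link. Use this on an
! inter-switch link and you get a segmented spanning tree.
interface GigabitEthernet1/0/1
 spanning-tree bpdufilter enable
```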

Presenting the Right Questions

Not exactly. We, as human computers, memorize the trivia so we can more efficiently search through our mental database for the right answer. If the problem takes 5 minutes to present, we can eliminate a bunch of causes. If it’s happening at layer 3 and not layer 2, we can toss out a bunch of other stuff. Our knowledge allows us to discard useless possibilities and focus on the right result.

And it’s horribly inefficient. I can attest to that given my various attempts to learn OSPF hello and dead timers through osmosis by falling asleep on my big CCNP Routing book. The answers don’t crawl off the page and into your brain no matter how loudly you snore into it. So I spent hours learning something that I might use two or three times in my career. There has to be a better way.

Not coincidentally, that’s where the AI-driven systems from Mist, and now Juniper, come into play. Marvis is wonderful at looking at symptoms and finding potential causes. It’s what we do as humans, except Marvis has no inherent biases. It also doesn’t misremember the values for a given protocol or get confused about whether OSPF point-to-point networks are broadcast or not. Marvis just knows what it was programmed with. But it does learn.

Learning is the key to how these AI and machine learning (ML) driven systems have to operate. People tend to discount solutions because they think there’s no way it could be that solution this time. For example, a haiku:

It’s not DNS.
Could it be DNS?
It was DNS.

DNS is often the cause of our problems even if we usually discount it out of hand in the first five minutes of troubleshooting. Even if it were DNS 50% of the time, we would still toss DNS aside as a root cause within the first five minutes because we’ve “trained” our brains to know what a DNS problem looks like without realizing how many things DNS can really affect.

But AI and ML don’t make these false correlations. Instead, they learn every time what the cause actually was. They can look at the network and see the failure state, present options based on the symptoms, and, even if you don’t check in your changes, analyze the network and figure out what change caused everything to start working again. Now, the next time the problem crops up, a system like Marvis can present you with a list of potential solutions with confidence levels. If DNS is at the top of the list, you might want to look into DNS first.
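
As a toy illustration of what a confidence list like that might look like under the hood, here’s a sketch that just counts which causes resolved which symptoms in the past. Marvis is obviously far more sophisticated than this; every name in the snippet is mine:

```python
from collections import Counter

class CauseRanker:
    """Rank likely root causes by how often they resolved a given symptom."""

    def __init__(self):
        self.history = Counter()

    def record(self, symptom, cause):
        # Called after each incident once the true root cause is known
        self.history[(symptom, cause)] += 1

    def suggest(self, symptom, top=3):
        seen = {c: n for (s, c), n in self.history.items() if s == symptom}
        total = sum(seen.values()) or 1
        ranked = sorted(seen.items(), key=lambda kv: kv[1], reverse=True)
        return [(cause, count / total) for cause, count in ranked[:top]]

ranker = CauseRanker()
for _ in range(5):
    ranker.record("name resolution failing", "DNS")
ranker.record("name resolution failing", "MTU mismatch")
print(ranker.suggest("name resolution failing"))
# [('DNS', 0.833...), ('MTU mismatch', 0.166...)]
```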

AI is going to make us all better troubleshooters because it’s going to make us all less reliant on poor memory. Instead of misremembering how a protocol should be configured, AI and ML will tell us how it should look. If something is causing routing loops, or if layer 2 issues are happening because of duplex mismatches, we’ll be able to see that quickly and have confidence it’s the right answer instead of just guessing and throwing things at the wall until they stick. Just like Google has supplanted the Cliff Clavin people at the bar who are storehouses of useless knowledge, so too will AI and ML reduce our dependence on know-it-alls that may not have all the answers.


Tom’s Take

I’m ready to be forgetful. I’m tired of playing “stump the chump” in troubleshooting with the network playing the part of the stumper and me playing the chump. I’ve memorized more useless knowledge than I ever care to recall in my life. But I don’t want to have to do the work any longer. Instead, I want to apply my gifts to training algorithms with more processing power than me to do all the heavy lifting. I’m more than happy to let them look at DNS and EIGRP timers rather than try to remember whether MTU and reliability are part of the K-values for this network.

Extremely Hive Minded

I must admit that I was wrong. After almost six years, I was mistaken about who would end up buying Aerohive. You may recall back in 2013 I made a prediction that Aerohive would end up being bought by Dell. I recall it frequently because quite a few people still point out that post and wonder if it’s happened yet.

Alas, June 26, 2019 is the date when I was finally proven wrong. That’s when Extreme Networks announced plans to purchase Aerohive for $4.45/share, which equates to around $272 million paid, adjusted for some cash on hand. Aerohive is the latest addition to the Extreme portfolio, which now includes pieces of Brocade, Avaya, Enterasys, and Motorola/Zebra.

Why did Extreme buy Aerohive? I know that several people in the industry told me they called this months ago, but that doesn’t explain the reasoning behind spending almost $300 million right before the end of the fiscal year. What was the draw that had Extreme buzzing about this particular company?

Flying Through The Clouds

The most apparent answer is HiveManager. Why? Because it’s really the only thing unique to Aerohive that Extreme didn’t already have. Aerohive’s APs aren’t custom built. Aerohive’s switching line was rebadged from an ODM in order to meet the requirements for inclusion in Gartner’s Wired and Wireless Magic Quadrant. So the real draw was the software: the cloud management platform that Aerohive has pushed as their crown jewel for a number of years.

I’ll admit that HiveManager is a very nice piece of management software. It’s easy to use and has a lot of power behind the scenes. It’s also capable of being tuned for very specific vertical requirements, such as education. You can set up self-service portals and Private Pre-Shared Keys (PPSKs) fairly easily for your users. You can also build a lot of policy around the pieces of your network, both hardware and users. That’s a place to start your journey.

Why? Because Extreme is all about Automation! I talked to their team a few weeks ago and the story was all about building automation platforms. Extreme wants to have systems that are highly integrated and capable of doing things to make life easier for administrators. That means having the control pieces in place. And I’m not sure that what Extreme already had was in the same league as HiveManager. I doubt Extreme has put as much effort into their software as Aerohive invested in theirs over the past 8 years.

For Extreme to really build out the edge network of the future, they need to have a cloud-based management system that has easy policy creation and can be extended to include not only wireless access points but wired switches and other data center automation. If you look at what is happening with intent-based networking from other networking companies, you know how important policy definition is to the schema of your network going forward. In order to get that policy engine up and running quickly to feed the automation engine, Extreme made the call to buy it.

Part of the Colony

More importantly than the software piece, to me at least, is the people. Sure, you can have a bunch of people hacking away at code for a lot of hours to build something great. You can even choose to buy that something great from someone else and just start modifying it to your needs. Extreme knew that adapting HiveManager to fulfill the needs of their platform wasn’t going to be a walk in the park. So bringing the Aerohive team on board makes the most sense to me.

But it’s also important to realize who had a big hand in making the call. Abby Strong (@WiFi_Princess) is the VP of Product Marketing at Extreme. Before that she held the same role at Aerohive in some fashion for a number of years. She drove Aerohive to where they were before moving over to Extreme to do something similar.

When you’re building a team, how do you do it? Do you run out and find random people that you think are the best for the job and hope they gel quickly? Do you just throw darts at a stack of resumes and hope random chance favors your bold strategy? Or do you look at existing teams that work well together and can pull off amazing feats of technical talent with the right motivation? I’d say the third option is the most successful, wouldn’t you?

It’s not unheard of in the wireless industry for an entire team to move back and forth between companies. There’s a hospitality team that’s moved between Ruckus, Aerohive, and Ubiquiti. There are other teams, like some working on 802.11u, that bounced around a couple of times before they found a home. Which makes me wonder: did Extreme buy Aerohive for HiveManager and end up with the development team as a bonus? Or did they decide to buy the development team and get the software for “free”?


Tom’s Take

We all knew Aerohive was putting itself on the market. You don’t shed sales staff and middle management unless you’re making yourself a very attractive target for acquisition. I still held out hope that maybe Dell would come through for me and make my five-year-old prediction prescient. Instead, the right company snapped up Aerohive for next to nothing and will start in earnest integrating HiveManager into their stack in the coming months. I don’t know what the future plans for further integration look like, but the wireless world is buzzing right now and that should make life extremely sweet for the Aerohive team.

iPhone 11 Plus Wi-Fi 6 Equals Undefined?

I read a curious story this weekend based on a supposed leak about the next iPhone, currently dubbed the iPhone 11¹. There’s a report that the next iPhone will have support for the forthcoming 802.11ax standard. The article refers to 802.11ax as Wi-Fi 6, which is a catchy branding exercise that absolutely no one in the tech community is going to adhere to.

In case you aren’t familiar with 802.11ax, it’s essentially an upgrade of the existing wireless protocols to support better client performance and management across both 2.4GHz and 5GHz. Unlike 802.11ac, which was rebranded as Wi-Fi 5, or 802.11n, which curiously wasn’t rebranded as Wi-Fi 4, 802.11ax works in both bands. There are a lot of great things on the drawing board for 11ax coming soon.

Why did I say soon? Because, as of this writing, 11ax isn’t a ratified standard. According to this FAQ from Aerohive, the standard isn’t set to be voted on for final ratification until Q3 of 2019. And if anyone wants to see the standard pushed along faster, it would be Aerohive. They were one of the first companies, if not the first, to bring an 802.11ax access point to market. So they want to see standards-based equipment for sure.

Making pre-standard access points isn’t anything new to the market. Manufacturers have been trying to get ahead of the trends for a while now. I can distinctly remember being involved in IT when 802.11n was still in the pre-standard days. One of our employees brought in a Belkin Pre-N AP and client card and wanted us to get it working because, in his words, “It will cover my whole house with Wi-Fi!”

Sadly, we ended up having to ditch this device once the 802.11n standard was finalized. Why? Because Belkin had rushed it to market to capitalize on the fervor of people wanting fast connection speeds. The AP only worked with the PCMCIA client card sold by Belkin. Once ratified 802.11n devices started to appear, they were incompatible with the Belkin AP and fell back to 802.11g speeds.

Belkin wasn’t the only manufacturer trying to get ahead of the curve. Cisco also pushed out the Aironet 1250, which had detachable lobes that could be pulled off and replaced. Why? Because they were shipping a draft 802.11n piece of hardware. They claimed that anyone purchasing the draft-spec hardware could send in the lobes and get an upgrade to ratified hardware as soon as the standard was finalized. Except, as a rushed product, the 1250 also consumed lots of power, ran hot, and generally had very low performance compared to the APs that came out after the ratification process was completed.

We’re seeing the same rush again with 802.11ax. Everyone wants to have something new when the next refresh cycle comes up. Instead of pushing people toward the stable performance of 802.11ac Wave 2 with proper design they are going out on a limb. Manufacturers are betting on the fact that their designs are going to be software-upgradable in the end. Which assumes there won’t be any major changes during the ratification process.

Cupertino Doesn’t Guess

One of the major criticisms of 802.11ax is that there isn’t any widespread client adoption out there to push the need for 802.11ax APs. The client vs. infrastructure argument is always a tough one. Do you make the client adapter and hope that someone will eventually come out with hardware to support it? Or do you choose to wait for the infrastructure to jump up in speed and then buy a client adapter to support it?

I’m usually one revision behind in most cases. My home hardware is running 802.11ac Wave 2 currently, but my devices were 11ac capable long before I installed any Meraki or Ubiquiti equipment. So my infrastructure was playing catchup with my clients. But not everyone runs the same gear that I do.

One of the areas where this is more apparent is not in the Wi-Fi realm but instead in the carrier space. We’re starting to hear that carriers like AT&T are deploying 5G in many cities even though there aren’t many 5G capable handsets. And, even when the first 5G handsets start hitting the market, the smart money says to avoid the first generation. Because the first generation is almost always hot, power hungry, and low performing. Sound familiar?

You want to know who doesn’t bet on non-standard technology? Apple. Time and again, Apple has chosen to take a very conservative approach to introducing new chipsets into their devices. And while their Wi-Fi chipsets often see upgrades long before their cellular modems do, you can guarantee that they aren’t going to make a bet on non-standard technology that could potentially hamper adoption of their flagship mobile device.

A Logical Approach

Let’s look at it logically for a moment. Let’s assume that the standards bodies get off their laurels and kick into high gear to get 802.11ax ratified at the end of Q2. That’s just after Apple’s WWDC. Do you think Apple is going to wait until post-WWDC to decide what chipsets are going to be in the new iPhone? You bet your sweet bandwidth they aren’t!

The chipset decisions for the iPhone 11 are being made right now in Q1. They want to know they can get sufficient quantities of SoCs and modems by the time manufacturing has to ramp up to have them ready for stores in October. That means you can’t guess whether or not a standard is going to be approved in time for launch. Q3 2019 is during the iPhone announcement season. Apple is the most conservative manufacturer out there. They aren’t going to stake their connectivity on an unproven standard.

So, let’s just state it emphatically for the search engines: The iPhone 11 will not have 802.11ax, or Wi-Fi 6, support. And anyone trying to tell you differently is trying to sell you a load of marketing.

The Future of Connectivity

So, what about the iPhone XII or whatever we call it? That’s a more interesting discussion. And it hinges on something I heard in a recent episode of a new wireless podcast. The Contention Window was started by my friends Tauni Odia and Scott Lester. In Episode 1, they have their big 2019 predictions. Tauni predicted that 802.11ax won’t be ratified in 2019. I agree with her assessment. Despite the optimism of the working group these things tend to take longer than expected. Which means Q4 2019 or perhaps even Q1 2020.

If 802.11ax ratification slips into 2020 you’ll see Apple taking the same conservative approach to adoption. This is especially true if the majority of deployed infrastructure APs are still pre-standard. Apple would rather take an extra year to get things right and know they won’t have any bugs than to rush something to the market in the hopes of selling a few corner-case techies on something that doesn’t have much of an impact on speeds in the long run.

However, if the standards bodies prove us all wrong and push 11ax ratification through, we should see it in the iPhone X+2. A mature technology with proper support should be seen as a winner. But you should see that move telegraphed far in advance with the adoption of 11ax radios in the MacBook Pro first. Once the bigger flagship computing devices get support, it will trickle down. This is just an economic concern. The MacBook has more room in the case for a first-gen 11ax chip. Looser thermal tolerances and space considerations mean more room to make mistakes.

In short: Don’t expect an 11ax (or Wi-Fi 6) chip before 2020. And if you’re betting the farm on the iPhone, you may be waiting a long time.


Tom’s Take

I like the predictions of professionals with knowledge over leaks with dubious marketing value. The Contention Window has lots of good information about why 802.11ax won’t be ratified any time soon. A report about a leaked report that may or may not be accurate holds a lot less value. Don’t listen to the hype. Listen to the people who know what they’re talking about, like Scott and Tauni. And don’t stress about having the newest, fastest wireless devices in your house. Odds are way better that you’re going to have to buy a new AP for Christmas this year than that your next iPhone will support 802.11ax. But there’s one thing we can all agree on: Wi-Fi 6 is a terrible branding decision!


  1. Or I suppose the XI if you’re into Roman numerals ↩︎

The Cargo Cult of Google Tools

You should definitely watch this amazing video from Ben Sigelman of LightStep that was recorded at Cloud Field Day 4. The good stuff comes right up front.

In less than five minutes, he takes apart crazy notions that we have in the world today. I like the observation that you can’t build a system to scale more than three or four orders of magnitude. Yes, you really shouldn’t be using Hadoop for simple things. And Machine Learning is not a magic wand that fixes every problem.

However, my favorite thing was the quick mention of how emulating Google for the sake of using their tools for every solution is folly. Ben should know, because he is an ex-Googler. I think I can sum up this entire discussion in less than a minute of his talk here:

Google’s solutions were built for scale that basically doesn’t exist outside of maybe a handful of companies with a trillion dollar valuation. It’s foolish to assume that their solutions are better. They’re just more scalable. But they are actually very feature-poor. There’s a tradeoff there. We should not be imitating what Google did without thinking about why they did it. Sometimes the “whys” will apply to us, sometimes they won’t.

Gee, where have I heard something like this before? Oh yeah. How about this post. Or maybe this one on OCP. If I had a microphone I would have handed it to Ben so he could drop it.

Building a Laser Mousetrap

We’ve reached the point in networking and other IT disciplines where we have built cargo cults around Facebook and Google. We practically worship every tool they release into the wild and try to emulate that style in our own networks. And it’s not just the tools we use, either. We also keep trying to emulate the service provider style of Facebook and Google where they treated their primary users and consumers of services like your ISP treats you. That architectural style is being lauded by so many analysts and forward-thinking firms that you’re probably sick of hearing about it.

Guess what? You are not Google. Or Facebook. Or LinkedIn. You are not solving massive problems at the scale that they are solving them. Your 50-person office does not need Cassandra or Hadoop or TensorFlow. Why?

  • Google Has Massive Scale – Ben mentioned it in the video above. The published scale of Google is massive, and even that’s on the low side. The real numbers could be an order of magnitude higher than we realize. When you have to start quoting throughput in “Library of Congress” units to make sense to normal people, you’re in a class by yourself.
  • Google Builds Solutions For Their Problems – It’s all well and good that Google has built a ton of tools to solve their issues. It’s even nice of them to have shared those tools with the community through open source. But realistically speaking, when are you really going to use Cassandra to solve all but the most complicated and complex database issues? It’s like a guy that goes out to buy a pneumatic impact wrench to fix the training wheels on his daughter’s bike. Sure, it will get the job done. But it’s going to be way overpowered and cause more problems than it solves.
  • Google’s Tools Don’t Solve Your Problems – This is the crux of Ben’s argument above. Google’s tools aren’t designed to solve a small flow issue in an SME network. They’re designed to keep the lights on in an organization that maps the world and provides video content to billions of people. Google tools are purpose built. And they aren’t flexible outside that purpose. They are built to be scalable, not flexible.

Down To Earth

Since Google’s scale numbers are hard to comprehend, let’s look at a better example from days gone by. I’m talking about the Cisco Aironet-to-LWAPP Upgrade Tool:

I used this a lot back in the day to upgrade autonomous APs to LWAPP controller-based APs. It was a very simple tool. It did exactly what it said in the title. And it didn’t do much more than that. You fed it an image and pointed it at an AP and it did the rest. There was some magic on the backend of removing and installing certificates and other necessary things to pave the way for the upgrade, but it was essentially a batch TFTP server.

It was simple. It didn’t check that you had the right image for the AP. It didn’t throw out good error codes when you blew something up. It only ran on a maximum of 5 APs at a time. And you had to close the tool every three or four uses because it had a memory leak! But it was still a better choice than trying to upgrade those APs by hand through the CLI.

This tool is over ten years old at this point and is still available for download on Cisco’s site. Why? Because you may still need it. It doesn’t scale to 1,000 APs. It doesn’t give you any other functionality other than upgrading 5 Aironet APs at a time to LWAPP (or CAPWAP) images. That’s it. That’s the purpose of the tool. And it’s still useful.
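
If you squint, the whole tool reduces to a skeleton about this small. This is a hedged sketch only: upload_image is a hypothetical stand-in for the TFTP-and-certificate work the real tool did, and the five-worker cap mirrors its hard limit:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def upload_image(ap_address, image_path):
    # Hypothetical placeholder for the real work: TFTP the LWAPP image
    # to the AP and handle the certificate removal and install
    ...

def batch_upgrade(aps, image_path, max_parallel=5):
    # Mirror the original tool's limit of five concurrent APs
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(upload_image, ap, image_path): ap for ap in aps}
        for future in as_completed(futures):
            ap = futures[future]
            try:
                future.result()
                print(f"{ap}: upgraded")
            except Exception as err:
                # Unlike the original, at least report what went wrong
                print(f"{ap}: failed ({err})")
```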

Tools like this aren’t built to be the ultimate solution to every problem. They don’t try to pack in every possible feature to be a “single pane of glass” problem solver. Instead, they focus on one problem and solve it better than anything else. Now, imagine that tool running at a scale your mind can’t comprehend. And you’ll know now why Google builds their tools the way they do.


Tom’s Take

I have a constant discussion on Twitter about the phrase “begs the question”. Begging the question is a logical fallacy. Almost every time, the speaker really means “raises the question”. Likewise, every time you think you need a Google tool to solve a problem, you’re almost always wrong. You’re not operating at the scale necessary to need that solution. Instead, the majority of people looking to implement Google solutions in their networks are like people who put chrome on everything on their car. They’re looking to show off instead of getting things done. It’s time to retire the Google cargo cult and instead ask ourselves what problems we’re really trying to solve, as Ben Sigelman mentions above. I think we’ll end up much happier in the long run and find our work lives much less complicated.

Does Juniper Need To Be Purchased?

You probably saw the news this week that Nokia was looking to purchase Juniper Networks. You also saw pretty quickly that the news was denied, emphatically. It was a curious few hours when the network world was buzzing about the potential to see Juniper snapped up into a somewhat larger organization. There was also talk of product overlap and other kinds of less exciting but very necessary discussions during mergers like this. Which leads me to a great thought exercise: Does Juniper Need To Be Purchased?

Sins of The Father

More than any other networking company I know of, Juniper has paid the price for trying to break out of their mold. When you think Juniper, most networking professionals will tell you about their core routing capabilities. They’ll tell you how Juniper has a great line of carrier and enterprise switches. And, if by some chance, you find yourself talking to a security person, you’ll probably hear a lot about the SRX Firewall line. Forward thinking people may even tell you about their automation ideas and their charge into the world of software defined things.

Would you hear about their groundbreaking work with Puppet from 2013? How about their wireless portfolio from 2012? Would anyone even say anything about Junosphere and their modeling environments from years past? Odds are good you wouldn’t. The Puppet work is probably bundled in somewhere, but the person driving it in that video is on to greener pastures at this point. The wireless story is no longer a story, but a footnote. And the list could go on longer than that.

When Cisco makes a misstep, we see it buried, written off, and eventually turned into the butt of inside jokes between groups of engineers that worked with the product during its short life on this planet. Sometimes it’s a hardware mistake. Other times it’s a software architecture misstep. But in almost every case, those problems are anecdotes you tell as you watch the 800lb gorilla of networking squash their competitors.

With Juniper, it feels different. Every failed opportunity is just short of disaster. Every misstep feels like it lands on a land mine. Every advance not expanded upon is the “one that got away”. Yet we see it time and time again. If a company like Cisco pushed the envelope the way we see Juniper pushing it, we would shower them with praise and tell the world that they are on the verge of greatness all over again.

Crimes Of The Family

Why then does Juniper look like a juicy acquisition target? Why are they slowly being supplanted by Arista as the favored challenger of the Cisco Empire? How is it that we find Juniper in everyone’s crosshairs, fighting to stay alive?

As it turns out, wars are expensive. And when you’re gearing up to fight Cisco you need all the capital you can get. That forces you to make alliances that may not be the best for you in the long run. And in the case of Juniper, it brought in some of the people that thought they could get in on the ground floor of a company that was ready to take on the 800lb gorilla and win.

Sadly, those “friends” tend to be the kind that desert you when you need them the most. When Juniper was fighting tooth and nail to build their offerings up to compete against Cisco, the investors were looking for easy gains and ways to make money. And when those investors realized that toppling empires takes more than two quarters, they got antsy. Some bailed. Those needed to go. But the ones that stayed caused more harm than good.

I’ve written before about Juniper’s issues with Elliott Capital Management, but it bears repeating here. Elliott is an activist investor in the same vein as Carl Icahn. They take a substantial position in a company and then immediately start demanding changes to raise the stock price. If they don’t get their way, they release paper after paper decrying the situation to the market until the stock price is depressed enough to get the company to listen to Elliott. Once Elliott’s demands are met, they exit their position. They take a small profit and move on to do it all over again, leaving behind a shell of a company wondering what happened.

Elliott has done this to Juniper over and over. Pulse VPN. Trapeze. They’ve demanded executive changes and forced Juniper to abandon good projects with long-term payoffs because they won’t bounce the stock price higher this quarter. And worse yet, if you look back over the last five years you can find story after story in the finance industry about Juniper being up for sale or being a potential acquisition target. Five. Years. When’s the last time you heard about Cisco being a potential target for buyout? Hell, even Arista doesn’t get shopped around as much as Juniper.


Tom’s Take

I think these symptoms all share the same root issue. Juniper is a great technology company that does some exciting and innovative things. But, much like a beautiful potted plant in my house, they have reached the maximum size they can grow to without making a move. Like a plant, they can only grow as big as their container. If you leave them in a small one, they’ll only ever be small. You can transfer them to something larger, but you risk harm or death. But you’ll never grow if you don’t change. Juniper has the minds and the capability to grow. And maybe, with the eyes of the Wall Street buzzards looking elsewhere for a while, they can build a practice that gives them the capability to challenge in the areas they are good at, not just being the answer for everything Cisco is doing.