Wi-Fi 6 Is A Stupid Branding Idea

You’ve probably seen recently that the Wi-Fi Alliance has decided to rebrand the forthcoming 802.11ax standard as “Wi-Fi CERTIFIED 6”, henceforth referred to as “Wi-Fi 6”. This branding decision happened late in 2018 and seems to be picking up steam in 2019 as 802.11ax comes closer to ratification later this year. With manufacturers already shipping 11ax access points and the consumer market poised to explode with the adoption of a new standard, I think it’s time to point out to the Wi-Fi Alliance that this is a dumb branding idea.

My Generation

On the surface, the branding decision looks like it makes sense. The Wi-Fi Alliance wants to make sure that consumers aren’t confused about which wireless standard they are using. 802.11n, 802.11ac, and 802.11ax are all usable and valid infrastructure that could be in use at any one time, as 11n lives mostly on 2.4GHz, 11ac is 5GHz only, and 11ax encompasses both. According to the alliance, a number will be displayed on the badge of the connection to denote which generation of wireless the client is using.
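For the sake of illustration, here’s how those generational labels map back to the underlying standards, sketched as a quick Python lookup. This is just my own hedged illustration of the naming scheme described above; the dictionary and function names are mine, not anything published by the Alliance.

```python
# Rough sketch of the Wi-Fi Alliance generational labels mapped back to the
# underlying IEEE standards and the bands they're generally found on.
# Illustrative only -- the names and structure here are my own.
WIFI_GENERATIONS = {
    "802.11n":  (4, "2.4GHz (and 5GHz)"),
    "802.11ac": (5, "5GHz only"),
    "802.11ax": (6, "2.4GHz and 5GHz"),
}

def wifi_badge(standard: str) -> str:
    """Return the generation badge a client would show for a given standard."""
    entry = WIFI_GENERATIONS.get(standard)
    return f"Wi-Fi {entry[0]}" if entry else standard  # older standards keep the 802.11 name

print(wifi_badge("802.11ax"))  # Wi-Fi 6
print(wifi_badge("802.11g"))   # 802.11g (no shiny number for you)
```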

Except, it won’t be that simple. Users don’t care about speeds. They care about having the biggest possible number. They want that number to be a 6, not a 5 or a 4. Don’t believe me? AT&T released an update earlier this month that replaced the 4G logo with a 5G logo even when 5G service wasn’t around. Just so users could say they had “5G” and tell their friends.

Using numbers to denote generations isn’t a new thing in software either. We use version numbers all the time. But using those version numbers as branding usually leads to backlash in the community. Take Fibre Channel, for example. Brocade first announced they would refer to 16Gb Fibre Channel as “Gen 5”, owing to the fifth generation of the protocol. Gen 6 was 32Gb and so on. But, as the chart on this fibre channel page shows, they worked themselves into a corner. Gen 7 is both 64Gb and 256Gb depending on what you’re deploying. Even Gen 6 was both 32Gb and 128Gb. It’s confusing because the name could mean many things depending on what you wanted it to mean. Branding doesn’t convey clear information.
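To see how fast one label stops meaning one thing, here’s that generation-to-speed mapping sketched out the same way. The speeds reflect the public Fibre Channel roadmap as I understand it; treat the lane-count notes as illustrative.

```python
# Sketch of how "Gen" branding maps onto multiple very different link speeds.
# One label, several speeds, depending on whether you're running a single
# lane or a quad-lane deployment.
FC_GENERATIONS = {
    "Gen 5": ["16GFC"],
    "Gen 6": ["32GFC", "128GFC (quad-lane)"],
    "Gen 7": ["64GFC", "256GFC (quad-lane)"],
}

for gen, speeds in FC_GENERATIONS.items():
    print(f"{gen}: {' or '.join(speeds)}")
# Ask for "Gen 6" and you could get either 32Gb or 128Gb -- which is the point above.
```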

Subversion of Versions

The Wi-Fi Alliance decision also doesn’t leave room for expansion or differentiation. For example, as I mentioned in a previous post on Gestalt IT, 802.11ax doesn’t make the new OWE spec mandatory. It’s up to the vendors to implement this spec as well as other things that have been made optional, as upstream MU-MIMO is rumored to become as well. Does that mean that if I include both of those optional features, my product is Wi-Fi 6.1? Or could I even call it Wi-Fi 7 since it’s really good?

Windows has had this problem going all the way back to Windows 3.0. Moving to Windows 3.1 was a huge upgrade, but the point release didn’t make it seem that way. After that, Microsoft started using branding names by year instead of version numbers. But that still caused issues. Was Windows 98 that much better than 95? Were they both better than Windows NT 4? How about 2000? Must be better, right? Better than Windows 99?1

Windows even dropped the version numbers for a while with Windows XP (version 5.1) and Windows Vista (version 6.0) before coming back to versioning again with Windows 7 (version 6.1) and Windows 8 (version 6.2), then just saying to hell with it and making Windows 10 (version 10.0). Which, according to rumor, skipped over “Windows 9” entirely because so much legacy code assumed any Windows version starting with ‘9’ was 95 or 98.

See the trouble that versioning causes when it’s not consistent? What happens if the next minor revision to the 802.11ax specification doesn’t justify moving to Wi-Fi 7? Do you remember how confusing it was for consumers when we would start talking about the difference between 11ac Wave 1 and Wave 2? Did anyone really care? They just wanted the fastest stuff around. They didn’t care what wave or what version it was. They just bought what the sticker said and what the guy at Best Buy told them to get.

Enterprise Nightmares

Now, imagine the trouble that the Wi-Fi Alliance has potentially caused for enterprise support techs with this branding decision. What will we say when users call in and say their wireless is messed up because they’re running Windows 10 but their Wi-Fi is still only on 6? Or because their cube neighbor has a 6 on their Wi-Fi but their Mac doesn’t?2

Think about how problematic it’s going to be when you try to figure out why someone is connecting to Wi-Fi 5 (802.11ac) instead of Wi-Fi 6 (802.11ax). Think about the fights you’re going to have about why we need to upgrade when it’s just one number higher. You can argue power savings or better cell sizes or more security all day long. But the jump from 5 to 6 really isn’t that big, right? Can’t we just wait for 7 and make a really big upgrade?


Tom’s Take

I think the Wi-Fi Alliance tried to do the right thing with this branding. But they did it in the worst way possible. There are going to be tons of identity issues with 11ax and Wi-Fi 6 and all the things that are going to be made optional in order to get the standard ratified by the end of the year. We’re going to get locked into a struggle to define what Wi-Fi 6 really entails while trying not to highlight all the things that could potentially be left out. In the end, it would have been better to just call it 11ax and let users do their homework.


  1. You’d be shocked at the number of times I heard it called that on support calls ↩︎
  2. You better believe Apple isn’t going to mar the Airport icon in the menu bar with any stupid numbers ↩︎

Apple Watch Unlock, 802.11ac, and Time

One of the benefits of upgrading to MacOS 10.12 Sierra is the ability to unlock my Mac laptop with my Apple Watch. Yet I’m not able to do that. Why? Turns out, the answer involves some pretty cool tech.

Somebody’s Watching You

The tech specs list the 2013 MacBook and higher as the minimum model needed to enable Watch Unlock on your Mac. You also need a few other things, like Bluetooth enabled and a Watch running watchOS 3. I checked my personal MacBook against the original specs and found everything in order. I installed Sierra, updated all my other devices, and even enabled iCloud two-factor authentication to be sure. Yet, when I checked the Security and Privacy section, the checkbox to enable Watch Unlock wasn’t there. What gives?

It turns out that Apple quietly modified the minimum specs during the Sierra beta period. Instead of early-2013 MacBooks being supported, the cutoff moved to mid-2013 MacBooks. I checked the spec sheets and mine is almost identical. The RAM, drive, and other features are the same. Why does Watch Unlock work on those Macs and not mine? The answer, it appears, is wireless.

Now AC The Light

The mid-2013 MacBook introduced Apple’s first 802.11ac wireless chipset. That was the major reason to upgrade over the earlier models. The Airport Extreme also gained 11ac support in mid-2013, pushing transfer rates past 500Mbps, or Wave 1 speeds.

While the majority of the Apple Watch’s communication with your phone and your MacBook happens over Bluetooth, that’s not the only way it talks. The Apple Watch has a built-in wireless radio as well. It’s a 2.4GHz b/g/n radio. Normally, the 11ac card on the MacBook can’t talk to the Watch directly because of the frequency mismatch. But the 11ac card in the mid-2013 MacBook enables a different protocol that is the basis for the unlocking feature.

802.11v has been used for a while to help mobile devices roam faster. Support for it was spotty before the wider adoption of 802.11ac Wave 1 access points. 802.11v allows client devices to exchange information about network topology. It also allows clients to measure network latency by timing the arrival of packets. That means a client can ping an access point or another client and get a precise timestamp of the arrival of that packet. This can be used for a variety of things, most commonly location services.

Time Is On Your Side

The 802.11v timestamp has been proposed for use in “time of flight” calculations as far back as 2008. Apple has decided to use Time of Flight as a security mechanism for the Watch Unlock feature. Rather than just assuming that the Watch is in range because it’s communicating over Bluetooth, Apple wanted to increase the security of the Watch/Mac connection. When the Mac detects that the Watch it’s connected to via Handoff is within 3 meters, the Watch is in the right range to trigger an unlock. This is where the 11ac card works its magic.

When the Watch sends a Bluetooth signal to trigger the unlock, the Mac sends an additional 802.11v request to the Watch over wireless. This request is then timed for arrival. Since the Mac knows the Watch has to be within 3 meters, the timestamp on the packet has a very tight tolerance for delay. If the delay is within the acceptable parameters, the unlock request is approved and your Mac is unlocked. If the delay deviates beyond that, such as when the signal is relayed through a Bluetooth repeater or some other kind of nefarious mechanism, the unlock request fails because the system realizes the Watch is outside the “safe” zone for unlocking the Mac.
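To make the idea concrete, here’s a minimal sketch of a time-of-flight proximity check, assuming you can pull precise transmit and receive timestamps out of an 802.11v-style exchange. The 3 meter “safe” zone comes from above; the function names, example timings, and the decision to ignore processing delay are my own illustration, not Apple’s implementation.

```python
# Minimal sketch of a time-of-flight proximity check, assuming we can read
# precise transmit/receive timestamps from an 802.11v-style exchange.
# Illustrative only -- not Apple's actual Watch Unlock implementation.

SPEED_OF_LIGHT_M_PER_NS = 0.299792458  # metres travelled per nanosecond
MAX_UNLOCK_DISTANCE_M = 3.0            # the "safe" zone described above

def estimated_distance_m(t_sent_ns: float, t_received_ns: float) -> float:
    """Estimate one-way distance from a round-trip time measurement."""
    round_trip_ns = t_received_ns - t_sent_ns
    one_way_ns = round_trip_ns / 2          # ignore processing delay for simplicity
    return one_way_ns * SPEED_OF_LIGHT_M_PER_NS

def allow_unlock(t_sent_ns: float, t_received_ns: float) -> bool:
    """Approve the unlock only if the Watch appears to be within range."""
    return estimated_distance_m(t_sent_ns, t_received_ns) <= MAX_UNLOCK_DISTANCE_M

# A reply that comes back ~20ns after transmission implies roughly 3m away.
print(allow_unlock(0.0, 20.0))   # True  (~3.0m)
print(allow_unlock(0.0, 500.0))  # False (~75m -- relayed or spoofed)
```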

Why does the Mac require an 802.11ac card for 802.11v support? The simple answer is that the Broadcom BCM43xx card in the early-2013 MacBooks and before doesn’t support the 802.11v timestamp field (page 5). Without support for the timestamp field, the 802.11v Time of Flight packet won’t work. The newer 802.11ac-compliant Broadcom BCM43xx card in the mid-2013 MacBooks does support the timestamp field, thus allowing the security measure to work.


Tom’s Take

All cool tech needs a minimum supported level. No one could have guessed 3-4 years ago that Apple would need support for 802.11v timestamp fields in their laptop Airport cards. So when they finally implemented it in mid-2013 with the 802.11ac refresh, they created a support boundary for a feature that was still in the early stages of development. Am I disappointed that my Mac doesn’t support Watch Unlock? Yes. But I also understand why, now that I’ve done the research. Unforeseen consequences of adoption decisions really can reach far into the future. But the technology Apple is building into their security platform is cool whether it’s supported on my devices or not.

NBase-ing Your Wireless Decisions

Copper is heavy. I’m not talking about its atomic weight of 63 or the fact that bundles of it can sag ceiling joists. I’m talking about the fact that copper has inertia. It’s difficult to install and even more difficult to replace. Significant expense is incurred when people want to run new lines through a building. I never really understood how expensive a proposition that was until I went to work for a company that ran copper lines.

Out of Mind, Out of Sight

According to a presentation that we saw at Tech Field Day Extra at Cisco Live Milan from Peter Jones at Cisco, Category 5e and 6 UTP cabling still has a significant installed base in today’s organizations. That makes sense when you consider that 5e and 6 are the minimum for gigabit Ethernet. Once we hit the gigabit mark, desktop bandwidth never really increased. Ten gigabit Ethernet over UTP is never going to take off outside the data center. The current limitations of 10Gig over Cat 6 make it impossible to use for desktop connectivity. With a practical limit of around 50 meters, you almost have to be on top of the IDF closet to get the best speeds.

There’s another reason why desktop connectivity stalled at 1Gig. Very little data today gets transferred back and forth between desktops across the network. With the exception of some video editing or graphics work, most data is edited in place today. Rather than bringing all the data down to a desktop to make changes or edits, the data is kept in a cloud environment or on servers with ample fast storage space. The desktop computer is merely a portal to the environment instead of the massive editing workstation of the past. If you even still have a desktop at all.

The vast majority of users today don’t care how fast the wire coming out of the wall is. They care more about the speed of the wireless in the building. The shift to mobile computing (laptops, tablets, and even phones) has spurred people to spend as little time as possible anchored to a desk. Even those that want to use large monitors or docking stations with lots of peripherals prefer to connect via wireless so they can grab things and go to meetings or off-site jobs.

Ethernet has gone from a “must have” to an infrastructure service supporting wireless access points. Where one user in the past could have been comfortable with a single gigabit cable, that same cable now supports tens of users via an access point. With sub-gigabit technologies like 802.11n and 802.11ac Wave 1, the need for faster connectivity is moot. Users will hit overhead caps in the protocol long before they bump into the theoretical limit of a single copper wire. But with 802.11ac Wave 2 quickly coming up on the horizon and even faster technologies being cooked up, the need for faster connectivity is no longer hypothetical.
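To put rough numbers on that, here’s a back-of-the-envelope comparison of nominal 802.11ac rates against a single gigabit uplink. The PHY rates are the commonly quoted maximums; the 0.6 real-world efficiency factor is purely an assumption on my part.

```python
# Back-of-the-envelope: when does the AP's radio outrun its single 1Gbps uplink?
# PHY rates below are nominal 802.11ac maximums; the 0.6 "efficiency" factor
# (protocol overhead, contention) is a rough assumption, not a measured value.

UPLINK_GBPS = 1.0

scenarios = {
    "11ac Wave 1 (80MHz, 3 streams)": 1.3,   # ~1.3Gbps PHY
    "11ac Wave 2 (160MHz, 4 streams)": 2.34, # ~2.34Gbps PHY
}

for name, phy_gbps in scenarios.items():
    usable = phy_gbps * 0.6  # assumed real-world efficiency
    verdict = "uplink is the bottleneck" if usable > UPLINK_GBPS else "1Gbps uplink is fine"
    print(f"{name}: ~{usable:.2f}Gbps usable -> {verdict}")
```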

All Your NBase

Peter Jones is the chairman of the NBaseT Alliance. The purpose of this group is to decide on a standard for 2.5 gigabit Ethernet. Why such an odd number? Long story short: it comes from splitting a 10 gigabit PHY connection into fourths and delivering that along a single Cat 5e/6 run. That means it can be used with existing cable plants. It means we can deliver more power along the wire to an access point that can’t run on 802.3af power and needs 802.3at (or more). It means we don’t have to rip and replace cable plants today and incur double the costs for new technology.
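Here’s that arithmetic worked out quickly. The PoE budgets are the standard 802.3af/at figures at the powered device; the access point draw is a made-up number for illustration.

```python
# Why 2.5G? Take the 10 gigabit PHY rate, split it into fourths so it survives
# on installed Cat 5e/6, and you land on 10 / 4 = 2.5Gbps.
ten_gig_phy_gbps = 10.0
nbase_t_gbps = ten_gig_phy_gbps / 4
print(f"Reduced-rate PHY over Cat 5e/6: {nbase_t_gbps}Gbps")

# PoE budgets at the powered device (standard 802.3af/at values):
POE_BUDGET_W = {"802.3af": 12.95, "802.3at": 25.5}
ap_draw_w = 18.0  # hypothetical draw for a newer access point, for illustration only
for standard, budget in POE_BUDGET_W.items():
    ok = "enough" if budget >= ap_draw_w else "not enough"
    print(f"{standard}: {budget}W available -> {ok} for an {ap_draw_w}W AP")
```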

NBaseT represents a good solution. By changing modulations and pushing Cat 5e and 6 to their limits, we can forestall a cable plant armageddon. IT departments don’t want to hear that more cables are needed. They’ve spent the past 5 years in a tug-of-war between people saying you need 3-4 drops per user and the faction claiming that wireless is going to change all that. The wireless faction won that argument, as this video from last year’s Aruba Airheads conference shows. The idea of a totally wireless office building used to be a fantasy. Now it’s being done by a few and strongly considered by many more.

NBaseT isn’t a final solution. The driver for 2.5 Gig Ethernet isn’t going to survive the current generation of technology. Beyond 802.11ac, wireless will jump to 10 Gigabit speeds to support primary connectivity for bandwidth-hungry users. Copper cabling will need to be updated to support this, as fiber can’t deliver power and is much too fragile for some of the deployment scenarios I’ve seen. NBaseT will get us to the exhaustion point of our current cable plants. When the successor to 802.11ac is finally ratified and enters general production, it will be time for IT departments to decide to rip out their old cable infrastructure and replace it with fewer wires designed to support wireless deployments, not wired users.


Tom’s Take

Peter’s talk at Tech Field Day Extra was enlightening. I’m not a fan of the proposed 25Gig Ethernet spec. I don’t see the need it’s addressing. I can see the need for 2.5Gig, on the other hand. I just don’t see it having a long-term future. If we can keep the cable plant going for just a couple more years, we can spend that money on better wireless coverage for our users until the next wave is ready to take us to 10Gig and beyond. Users know what 1Gig connectivity feels like, especially if they are forced down to 100Mbps or below. NBaseT gives them the ability to keep those fast speeds with 802.11ac Wave 2. Adopting this technology has benefits for the foreseeable future. At least until it’s time to move to the next best thing.

Generation Lost

I’m not trying to cause a big sensation (talking about my generation) – The Who

Naming products is an art form. When you let the technical engineering staff figure out what to call something, you end up with a model number like X440 or 8086. When the marketing people get involved from the start, you tend to get more order in the naming of things, usually in a series like the 6500 series or the MX series. The idea that you can easily identify a product’s specs based on its name or model number is nice for those trying to figure out which widget to use. However, that’s all changing.

It started with mobile telephones. Cellular technology has been around in identifiable form since the late 1970s. The original analog signals worked on specific frequencies and didn’t have great coverage. It wasn’t until the second generation of this technology moved entirely to digital transmission with superior encoding that the technology really started to take off. In order to differentiate this new technology from the older analog version, many people made sure to market it as “second generation”, often shortening this to “2G” to save syllables. When it came time to introduce a successor to the second generation personal communications service (PCS) systems, many carriers started marketing their offerings with the moniker “3G”, skipping straight past “third generation offering” in favor of the catchier marketing term. AT&T especially loved touting the call quality and data transmission rates of 3G in advertising. The 3G campaigns were so successful that when the successor to 3G was being decided, many companies started trying to market their incremental improvements as “4G” to get consumers to adopt them quickly.

Famously, the incremental improvement to high speed packet access (HSPA) that was being deployed en masse before the adoption of Long Term Evolution (LTE) as the official standard was known as high speed downlink packet access (HSDPA). AT&T petitioned Apple to allow the carrier badge on the iPhone to say “4G” when HSDPA was active. Even though speeds weren’t nearly as fast as the true 4G LTE standard, AT&T wanted a bit of marketing clout with customers over their Verizon rivals. When the third generation iPad was released with a true LTE radio later on, Apple made sure to use the “LTE” carrier badge for it. When the iOS 5 software release came out, Apple finally acquiesced to AT&T’s demands and rebranded the HSDPA network as “4G” with a carrier update. In fact, to this day my iPhone 4S still tells me I’m on 4G no matter where I am. Only when I drop down to 2G does it say anything different.

The fact that we have started referring to carrier standards as “xG” something means the marketing is working. And when marketing works, you naturally have to copy it in other fields. The two most recent entries in the Generation Marketing contest come from Broadcom and Brocade. Broadcom has started marketing their 802.11ac chipsets as “5G wireless.” It’s somewhat accurate when you consider the original 802.11 standard through 802.11b, 802.11g, 802.11n, and now 802.11ac. However, most wireless professionals see this more as an attempt to cash in on the current market trend of “G” naming rather than showing true differentiation. In Brocade’s case, they recently changed the name of their 16Gb Fibre Channel solution to “Gen 5” in an attempt to shift the marketing message away from a pure speed measurement (16 gigabit), especially when putting it up against the coming 40 gigabit Fibre Channel over Ethernet (FCoE) offerings from their competitor Cisco.

In both of these cases, the shift has moved away from strict protocol references or speed ratings. That’s not necessarily a bad thing. However, the shift to naming everything “something G” reeks, quite frankly. Are we as consumers and purchasers so jaded by the idea of 3G/4G/5G that we won’t respond to any other marketing campaigns? What if they’d called it Revision Five or Fifth Iteration instead? Doesn’t that convey the same point? Perhaps it does, but I doubt more than a handful of CxO types know what iteration means without help from a pocket dictionary. Those same CxOs know what 4G/5G mean because they can look down at their phones and see it. More Gs are better, right?

Generational naming should only be used in the broadest sense of the idea. It should only be taken seriously when more than one company uses it. Is Aruba going to jump on the 5G wireless bandwagon? Will EMC release a 6G FC array? If you’re shaking your head in answer to these questions, you probably aren’t the only one. Also of note in this discussion: what determines a generation? IT people have trouble keeping track of what constitutes the difference between a major version change and a point release update. Why did 3 features cause this to be software version 8.0 when the 97 new features in the last version only made it go from 7.0 to 7.5? And what’s to say a company doesn’t just skip over a generation? Why was HSDPA not good enough to be 4G? Because the ITU said it was just an iteration of 3G and not truly a new generation. How many companies would have “counted” HSDPA as the fourth generation and jumped straight to 5G absent oversight from the ITU?


Tom’s Take

My mom always told me to “call a spade a spade.” I don’t like the idea of randomly changing the name of something just to give it a competitive edge. Fibre Channel has made it this far as 2 gig, 4 gig, and 8 gig. Why the sudden shift away from 16 gig? Especially if you’re going to have to say it runs at 16 gig so people will know what you’re talking about? Is it a smoke-and-mirrors play? Why does Broadcom insist on naming their wireless 5G? 802.11a/b/g/n has worked just fine up to now. Is this just an attempt to confuse the consumer? We may never know. What we need to do in the meantime is hold these vendors’ feet to the fire and make sure this current naming “generation” gets lost.