It’s Probably Not The Wi-Fi

After finishing up Mobility Field Day last week, I got a chance to reflect on a lot of the information that was shared with the delegates. Much of the work in wireless now is focused on analytics. Companies like Cape Networks and Nyansa are trying to provide a holistic look at every part of the network infrastructure to help professionals figure out why there might be issues occurring for users. And over and over again, the resounding cry that I heard was “It’s Not The Wi-Fi.”

Building A Better Access Layer

Most of wireless is focused on the design of the physical layer. If you talk to any professional and ask them to show you their tool kit, they will likely pull out a whole array of mobile testing devices, USB network adapters, and diagramming software that would make AutoCAD jealous. All of these tools focus on the most important part of the equation for wireless professionals – the air. When the physical radio spectrum isn’t working, users will complain about it. Wireless pros leap into action with their tools to figure out where the fault is. Either that, or they are very focused on providing the right design from the beginning, with the tools validating that access point placement is correct and coverage overlap provides redundancy without interference.

These aren’t easy problems to solve. That’s why wireless folks get paid the big bucks to build it right or fix it after it was built wrong. Wired networkers don’t need to worry about microwave ovens or water pipes. Aside from the errant fluorescent light or overly aggressive pair of cable pliers, wired networks are generally free from the kinds of problems that can plague a wire-free access layer.

However, the better question to ask is this: how do the users know it’s the wireless network that’s behind the faults? To the users, the system is in one of three states: perfect, horribly broken, or slow. I think we can all agree that the first state of perfection almost never actually exists in reality. It might exist shortly after installation when user load is low and actual application use is negligible. However, users are usually living in one of the latter states. Either the wireless is “slow” or it’s horribly broken. Why?

No-Service Station

As it turns out, thanks to some of the reporting from companies like Cape and Nyansa, a large majority of the so-called wireless issues are in fact not wireless related at all. Those designs that wireless pros spend so much time fretting over are removed from the equation. Instead, the issues are with services.

Yes, those pesky network services. The ones like DNS or DHCP that seem invisible until they break. Or those services that we pay hefty sums to every month like Amazon or Microsoft Azure. The same issues that plague wired networking exist in the wireless world as well and seem to escape blame.

DNS is invisible to the majority of users. I’ve tried to explain it many times with middling to poor results. The idea that computers on the internet don’t understand words and must rely on services to translate them to numbers never seems to click. And when you add in the reliance on this system and how it can be knocked out with DDoS attacks or hijacking, it always comes back to being about the wireless.
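To make that translation concrete, here is a minimal Python sketch using nothing but the standard library. The hostnames are illustrative; the point is that when the resolver fails, the application simply looks broken to the user, no matter how healthy the Wi-Fi is.

```python
import socket

def resolve(hostname):
    """Ask the configured DNS resolver to turn a name into addresses."""
    try:
        results = socket.getaddrinfo(hostname, None)
        # The address lives in the last element of each result tuple.
        return sorted({entry[4][0] for entry in results})
    except socket.gaierror as err:
        # When DNS is down or hijacked, this is all the user experiences:
        # "the network is broken," even though the radios are fine.
        return f"lookup failed: {err}"

print(resolve("example.com"))           # a list of addresses
print(resolve("no-such-host.invalid"))  # lookup failed: ...
```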

It’s not hard to imagine why. The wireless is the first thing users see when they start having issues. It’s the new firewall. Or the new virus. Or the new popup. It’s a thing they can point to as the single source of problems. And if there is an issue at any point along the way, it must be the fault of the wireless. It can’t possibly be DNS or routing issues or a DDoS on AWS. Instead, the wireless is down.

And so wireless pros find themselves defending their designs and configurations without knowing that there is an issue somewhere else down the line. That’s why the analytics platforms of the future are so important. By giving wireless pros visibility into systems beyond the spectrum, they can reliably state that the wireless isn’t at fault. They can also engage other teams to find out why the DNS servers are down or why the default gateway for the branch office has been changed or is offline. That’s the kind of info that can turn a user away from blaming the wireless for all the problems and finding out what’s actually wrong.


Tom’s Take

If I had a nickel for every problem that was blamed on the wireless network or the firewall or some errant virus when that actually wasn’t the case, I could retire and buy my own evil overlord island next to Larry Ellison. Alas, these are issues that are never going to go away. Instead, the only real hope that we have is speeding the time to diagnose and resolve them by involving professionals that manage the systems that are actually down. And perhaps having some pictures of the monitoring systems goes a long way to tell users that they should make sure that the issue is indeed the wireless before proclaiming that it is. Because, to be honest, it probably isn’t the Wi-Fi.


The History of The Wireless Field Day AirCheck

Mobility Field Day 2 just wrapped up in San Jose. It’s always a little bittersweet to see the end of a successful event. However, one thing that does bring a bit of joy to the end of the week is the knowledge that one of the best and longest running traditions at the event continues. That tradition? The Wireless/Mobility Field Day AirCheck.

The Gift That Keeps Giving

The Wireless Field Day AirCheck story starts where all stories start. The beginning. At Wireless Field Day 1 in March of 2011, I was a delegate and fresh off my first Tech Field Day event just a month before. I knew some wireless stuff and was ready to learn a lot more about site surveys and other great things. Little did I know that I was about to get something completely awesome and unexpected.

As outlined in this post, Fluke Networks held a drawing at the end of their presentation for a first-generation AirCheck handheld wireless troubleshooting tool. I was thrilled to be the winner of this tool. I took it home and immediately put it to work around my office. I found it easy to use and it provided great information about wireless networks that I could use to make my life easier. I even loaned it out to some of my co-workers during troubleshooting calls and they immediately told me they wanted one of their own.

As the rest of 2011 rolled forward, I found uses for my AirCheck but I didn’t do as much wireless as a lot of the other people out there. I knew that someone else could probably get more out of having it than I did. So, I hatched a plan. I told Stephen Foskett that if I had the chance to come back to Wireless Field Day 2, I would gladly give my AirCheck away to another worthy delegate. I wanted to keep the tool in use with the best and brightest people in the community and help them see how awesome it was.

Sure enough, I was invited to Wireless Field Day 2 in January 2012. I arrived with my AirCheck and waited until the proper moment. During the welcome dinner, Matt Simmons and I found a way to randomly draw a number and award the special prize to Matthew Norwood. He was just as thrilled to get the AirCheck as I was. I sent my prize from Wireless Field Day 1 on its way to a new home, content that I would help someone get more wireless knowledge.

But the giving didn’t stop there. Even though I wasn’t a delegate for Wireless Field Day 3 or Wireless Field Day 4, the AirCheck kept coming back. Matthew gave it to Dan Cybulskie. Dan gave it to Scott Stapleton. The AirCheck headed down under for half of 2013. When Wireless Field Day 5 rolled around, I was now a staff member for Tech Field Day and working behind the scenes. I had forgotten about the AirCheck until a box arrived from Australia with Scott’s postmark on it. He mailed it back to the US to continue the tradition!

And so, the AirCheck passed along to a new set of hands every event. Blake Krone got it at Wireless Field Day 5. Then Jake Snyder, followed by Richard McIntosh and Scott McDermott. Even when we changed the name of the event to Mobility Field Day in 2016, the AirCheck passed along to Rowell Dionicio.

Changing Of The Guard

In the interim, the AirCheck product moved over to Netscout. They developed a new version, the G2, that was released after Mobility Field Day 1 in 2016. The word also got around to the Netscout folks that there was a magical G1 AirCheck that was passed along to successive Wireless/Mobility Field Day delegates as a way of keeping the learning active in the community.

Netscout was a presenter during Mobility Field Day 2 in 2017. Chris Hinz contacted me before the event and asked if we still gave away the AirCheck during the event. I assured him that we did. He said that a tradition like that should continue, even if the G1 AirCheck was getting a bit long in the tooth. He told me that he might be able to help us all out.

After the Netscout presentation at Mobility Field Day 2, Chris presented me with his special surprise: a brand new G2 AirCheck! Since we hadn’t given the old unit to its new recipient just yet, we decided that it was time to “retire” the old G1 and pass along the G2 to the next lucky contestant. Shaun Neal was the lucky delegate this time and took the new and improved G2 home with him Wednesday night. I was happy to see it go to him knowing that he’ll get to put it through its paces and learn from it. And then he will get to bring it back to the next Mobility Field Day for it to pass along to a new delegate and continue the chain of sharing.


Tom’s Take

When I gave away my G1 AirCheck all those years ago, I never expected it would turn into something so incredible. The sharing and exchange of tools and knowledge at both Wireless Field Day and Mobility Field Day help remind me of why I do this job with Stephen. The community is an awesome and amazing place sometimes. The new G2 AirCheck will have a long life helping delegates troubleshoot wireless issues.

The old G1 AirCheck, my AirCheck, is in my suitcase. It’s ready to start its retirement in my office, having earned thousands of frequent flyer miles as well as becoming a very important part of Tech Field Day lore. I couldn’t be happier to get it back at the end of its life knowing how much happiness it brought to people along the way.

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer them in terms of their network. Things like network slicing were the first big pieces to be put up before things like orchestration, programmability, and APIs were really brought to the fore. People were trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.
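As a rough illustration of what “classify near the origin and mark it” means, here is a toy Python sketch. The port numbers, application hints, and class names are hypothetical, and a real classifier uses deep packet inspection and application signatures, but the standard DSCP values (EF for voice, AF21 for business data, best effort for everything else) are the kind of marking a downstream network would act on.

```python
# Toy classifier: decide an application class close to the origin and stamp
# a DSCP value on the flow so downstream devices can trust the marking
# instead of re-inspecting every packet. Ports and hints are illustrative.

DSCP_EF = 46     # Expedited Forwarding, typically voice
DSCP_AF21 = 18   # Assured Forwarding, e.g. business applications
DSCP_BE = 0      # Best effort, e.g. general web browsing

def classify(dst_port: int, app_hint: str = "") -> int:
    """Return a DSCP marking for a flow based on simple criteria."""
    if dst_port in (5060, 5061) or app_hint == "voice":
        return DSCP_EF
    if app_hint == "enterprise-app":
        return DSCP_AF21
    return DSCP_BE

print(classify(5060))                   # 46 -> treat as voice
print(classify(443, "enterprise-app"))  # 18 -> business data
print(classify(80))                     # 0  -> best effort
```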

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying the applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean making sure web browsing traffic is sent through local internet links while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.
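A minimal sketch of that path decision might look like the following. The link names, SLA thresholds, and application classes here are hypothetical, and a real SD-WAN controller measures the tunnel continuously rather than taking the numbers as arguments.

```python
SLA_MAX_LATENCY_MS = 150   # hypothetical SLA for the encrypted tunnel
SLA_MAX_LOSS_PCT = 1.0

def pick_path(app_class: str, tunnel_latency_ms: float, tunnel_loss_pct: float) -> str:
    """Break ordinary web traffic out locally; keep enterprise and voice
    traffic on the encrypted tunnel as long as it meets its SLA."""
    if app_class == "web-browsing":
        return "local-internet-breakout"
    tunnel_ok = (tunnel_latency_ms <= SLA_MAX_LATENCY_MS
                 and tunnel_loss_pct <= SLA_MAX_LOSS_PCT)
    return "encrypted-tunnel" if tunnel_ok else "backup-tunnel"

print(pick_path("web-browsing", 40, 0.1))     # local-internet-breakout
print(pick_path("voice", 40, 0.1))            # encrypted-tunnel
print(pick_path("enterprise-app", 300, 0.1))  # backup-tunnel
```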

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of technology by increasing the capabilities of these devices while not requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. Instead, you can enjoy the benefits of using SDN and SD-WAN to accelerate your users’ applications.


Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?

Connecting SMBs The Easy Way With Aerohive Connect


Wireless is hard. When you’re putting together large deployments of access points in challenging environments, with tons of security on top of it all, you realize the difficulty. That’s why most major wireless deployments require a lot of time, planning, and documentation to pull off correctly. But what if things are on the small side?

A Small World Without Wires

The average small business (SMB) is stuck in a wireless limbo. They have requirements that far exceed the performance profile of standard consumer wireless devices. Most SMBs have more than three or four devices connecting at a time. They have reliability issues that need to be dealt with. And they need it all in a package that doesn’t need constant minding to work appropriately.

When you look at the market for consumer wireless today, the real push is to get rid of any configuration at all. Even the old Apple Airport, which was simplistic in its day, is too “complicated” for modern users. Solutions like Google Wifi aim to be the kind of solution that just requires a cable plugged in. No additional configuration beyond that. Which works wonders if you’re a consumer at home that needs to enable some tablets and a smart TV. But for businesses, there needs to be a level of control above that.

At the same time, wireless solutions for SMBs need to offer a limited choice of options. When you give someone a huge list of choices with no real direction on how to use them, you get something I’ve started calling Freestyle Syndrome, after the infamous Coke Freestyle machines. Too many choices cause indecision. Even Coke has finally figured this out by creating guides on the first page of the machine to guide people to Low Calorie options or Fruit Flavored drinks. They realize that the best way to give people tons of choices is to artificially limit those choices in such a way as to give the average user more direction on how to use them.

Buzzing With Opportunity

Enter the newest offering from Aerohive. Announced yesterday, Aerohive has a new 2×2:2 AP on the market, the AP 122. They are combining this new AP with a unique software offering, Aerohive Connect. Aerohive Connect solves the above issues by providing enhanced capabilities for SMBs without overwhelming them with pointless options.

Aerohive Connect is a version of the HiveManager software that is optimized to deliver the features that most SMBs need. Included is basic RF planning to find the best place to put your APs, guided deployment and configuration to ensure that you set those APs up correctly, and health monitoring to make sure they are working correctly into the future. You also get features to help create guest access networks to keep your traffic segmented between employees and customers.

What you don’t get with Aerohive Connect is some of the more advanced features of deploying multiple branch sites, advanced security profiles, and other advanced enterprise features of HiveManager. That’s how Aerohive is able to provide these features at a lower price point to stay attractive for SMBs.

Another thing that you won’t see from Aerohive is something common to other solutions like this. Instead of pushing you into “upgrading” to a full-featured version of the software by limiting the number of APs that can be connected, Aerohive Connect does not have a limit on the number of connected APs. You can use it with 2 APs or 25 APs with no limits. If the basic feature set is all you ever need, that’s all you’ll ever pay for. There’s no hidden uplift to recover costs, the kind that essentially turns an SMB solution into an extended trial.


Tom’s Take

As far as solutions for SMBs go, I think Aerohive is on track with Aerohive Connect. They are giving a reduced feature offering that’s perfect for the target market with none of the traditional “gotchas” that I see from other solutions that are simply trying to upsell users into a more expensive and more useless solution. Rather than trying to get the mom-and-pop convenience store chain onto a full-blown enterprise wireless control system, why not target them with the best solution for them rather than a one-size-fits-all-but-not-really offering?

I think Aerohive is going to get a lot of traction with Aerohive Connect in the market. I will be curious to get an update from them in the coming months to see just how popular things have become.

Apple Watch Unlock, 802.11ac, and Time


One of the benefits of upgrading to MacOS 10.12 Sierra is the ability to unlock my Mac laptop with my Apple Watch. Yet I’m not able to do that. Why? Turns out, the answer involves some pretty cool tech.

Somebody’s Watching You

The tech specs list the 2013 MacBook and higher as the minimum model needed to enable Watch Unlock on your Mac. You also need a few other things, like Bluetooth enabled and a Watch running WatchOS 3. I checked my personal MacBook against the original specs and found everything in order. I installed Sierra and updated all my other devices and even enabled iCloud Two-Factor Authentication to be sure. Yet, when I checked the Security and Privacy section, I didn’t see the checkbox for the Watch Unlock to be enabled. What gives?

It turns out that Apple quietly modified the minimum specs during the Sierra beta period. Instead of early 2013 MacBooks being supported, the requirement shifted to mid-2013 MacBooks instead. I checked the spec sheets and mine is almost identical. The RAM, drive, and other features are the same. Why does Watch Unlock work on those Macs and not mine? The answer, it appears, is wireless.

Now AC The Light

The mid-2013 MacBook introduced Apple’s first 802.11ac wireless chipset. That was the major reason to upgrade over the earlier models. The Airport Extreme also supported 11ac starting in mid-2013 to increase speeds to more than 500Mbps transfer rates, or Wave 1 speeds.

While the majority of the communication that the Apple Watch uses with your phone and your MacBook is via Bluetooth, it’s not the only way it communicates. The Apple Watch has a built-in wireless radio as well. It’s a 2.4GHz b/g/n radio. Normally, the 11ac card on the MacBook can’t talk to the Watch directly because of the frequency mismatch. But the 11ac card in the 2013 MacBook enables a different protocol that is the basis for the unlocking feature.

802.11v has been used for a while as a fast roaming feature for mobile devices. Support for it was spotty before the wider adoption of 802.11ac Wave 1 access points. 802.11v allows client devices to exchange information about network topology. 11v also allows for clients to measure network latency information by timing the arrival of packets. That means that a client can ping an access point or another client and get a precise timestamp of the arrival of that packet. This can be used for a variety of things, most commonly location services.

Time Is On Your Side

The 802.11v timestamp has been proposed for use in “time of flight” calculations as far back as 2008. Apple decided to use Time of Flight as a security mechanism for the Watch Unlock feature. Rather than just assume that the Watch is in range because it’s communicating over Bluetooth, Apple wanted to increase the security of the Watch/Mac connection. When the Mac detects that the Watch it is connected to via Handoff is within 3 meters, the Watch is in the right range to trigger an unlock. This is where the 11ac card works its magic.

When the Watch sends a Bluetooth signal to trigger the unlock, the Mac sends an additional 802.11v request to the watch via wireless. This request is then timed for arrival. Since the Mac knows the watch has to be within 3 meters, the timestamp on the packet has a very tight tolerance for delay. If the delay is within the acceptable parameters, the Watch unlock request is approved and your Mac is unlocked. If there is more than the acceptable deviation, such as when used via a Bluetooth repeater or some other kind of nefarious mechanism, the unlock request will fail because the system realizes the Watch is outside the “safe” zone for unlocking the Mac.
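Apple hasn’t published the exact thresholds, but a back-of-the-envelope calculation shows why the tolerance has to be so tight. Radio waves travel at roughly the speed of light, so 3 meters corresponds to only about 10 nanoseconds of one-way flight time:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458
MAX_DISTANCE_M = 3.0

one_way_ns = MAX_DISTANCE_M / SPEED_OF_LIGHT_M_PER_S * 1e9
round_trip_ns = 2 * one_way_ns

print(f"one-way flight time for 3 m: {one_way_ns:.1f} ns")    # ~10.0 ns
print(f"round-trip budget for 3 m:   {round_trip_ns:.1f} ns")  # ~20.0 ns

# Any relay, such as a Bluetooth repeater, adds delay that dwarfs this
# budget, so a relayed unlock request looks "too far away" and is rejected.
```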

Why does the Mac require an 802.11ac card for 802.11v support? The simple answer is because the Broadcom BCM43xx card in the early 2013 MacBooks and before doesn’t support the 802.11v time stamp field (page 5). Without support for the timestamp field, the 802.11v Time of Flight packet won’t work. The newer Broadcom 802.11ac compliant BCM43xx card in the mid-2013 MacBooks does support the time stamp field, thus allowing the security measure to work.


Tom’s Take

All cool tech needs a minimum supported level. No one could have guessed 3-4 years ago that Apple would need support for 802.11v time stamp fields in their laptop Airport cards. So when they finally implemented it in mid-2013 with the 802.11ac refresh, they created a boundary for support for a feature on a device that was still in the early development stages. Am I disappointed that my Mac doesn’t support Watch Unlock? Yes. But I also understand why, now that I’ve done the research. Unforeseen consequences of adoption decisions really can reach far into the future. But the technology that Apple is building into their security platform is cool no matter whether it’s supported on my devices or not.

Will Dell Networking Wither Away?


The behemoth merger of Dell and EMC is nearing conclusion. The first week of August is the target date for the final wrap up of all the financial and legal parts of the acquisition. After that is done, the long task of analyzing product lines and finding a way to reduce complexity and product sprawl begins. We’ve already seen the spin out of Quest and Sonicwall into a separate entity to raise cash for the final stretch of the acquisition. No doubt other storage and compute products are going to face a go/no go decision in the future. But one product line which is in real danger of disappearing is networking.

Whither Whitebox?

The first indicator of the problems with Dell and networking comes from whitebox switching. Dell released OS 10 earlier this year as a way to capitalize on the growing market of free operating systems running on commodity hardware. Right now, OS 10 can run on Dell equipment. In the future, they are hoping to spread it out to whitebox devices. That means that soon you could see Dell-branded OSes running on switches purchased from non-Dell sources and booting with ONIE.

Once OS 10 pushes forward, what does that mean for Dell’s hardware business? Dell would naturally want to keep selling devices to customers. Whitebox switches would undercut their ability to offer cheap ports to customers in data center deployments. Rather than give up that opportunity, Dell is positioning themselves to run some form of Dell software on top of that hardware for management purposes, which has always been a strong point for Dell. Losing the hardware means little to Dell if they have to lose profit margin to keep it there in the first place.

The second indicator of networking issues comes from comments from Michael Dell at EMCworld this year. Check out this short video featuring him with outgoing EMC CEO Joe Tucci:

Some of the telling comments in here involve Michael Dell’s praise for the NSX business model and how it is being adopted by a large number of other vendors in the industry. Also telling is their reaffirmation that Cisco is an important partnership in VCE and won’t be going away any time soon. While these two things don’t seem to be related on the surface, they both point to a truth Dell is trying hard to accept.

In the future, with overlay network virtualization models gaining traction in the data center, the underlying hardware will matter little. In almost every case, the hardware choice will come down to one of two options:

  1. Which switch is the cheapest?
  2. Which switch is on the Approved List?

That’s it. That’s the whole decision tree. No one will care what sticker is on the box. They will only care that it didn’t cost a fortune and that they won’t get fired for buying it. That’s bad for companies that aren’t making white boxes or named Cisco. Other network vendors are going to try and add value in some way, but the overlay sitting on top of those bells and whistles will make it next to impossible to differentiate in anything but software. Whether that’s superior management capabilities, open plug-in model, or some other thing we haven’t thought of will make no difference in the end. Software will still be king and the hardware will be an inexpensive pawn or a costly piece that has been pre-approved.

Whither Wireless?

The other big inflection point that makes me worry about the Dell networking story is the lack of movement in the wireless space. Dell has historically been a company to partner first and acquire second. But with HPE’s acquisition of Aruba Networks last year, the dominos in the wireless space are still waiting to fall. Brocade raced out to buy Ruckus. Meru offered itself on a platter to anyone that would buy them. Now Aerohive stands as the last independent wireless vendor without a dance partner. Yes, they’ve announced that they are partnering with Dell, but have you been to the Dell Wireless Networking page? Can you guess what the Dell W-series is? Here’s a hint: it rhymes with “Peruba”.

Every time Dell leads with a W-series deployment, they are effectively paying their biggest competitor. They are opening the door to allowing HPE/Aruba to come in and not only start talking about wireless but servers, storage, and other networking as well. Dell would do well at this point to start deemphasizing the W-series and start highlighting the “new generation” of Aerohive APs and how they are going to be the focus moving forward.

The real solution would be for Dell to buy a wireless company and take all the wireless expertise they are selling in-house. That would show they are serious about both the campus network of the future and the data center network needed to support their other server and storage infrastructure. Sadly, with Michael Dell still leveraged from taking his company private just two years ago and with mounting debt from this mega merger, Dell is looking to make cash with spin-offs instead of spending it on yet another company to ingest and subsume. Which means a real non-partner wireless solution is still many years away.


Tom’s Take

Dell’s networking strategy is in maintenance mode. Make switches to support faster speeds for now, probably with Tomahawk support soon, and hope that this whole networking thing goes software sooner rather than later. Otherwise, the need to shore up the campus wireless story, along with the coming decision about putting full support behind NSX and its partnerships, is going to be a bitter pill to swallow. Perhaps Dell Networking will exist as an option for companies wanting a 100% Dell solution? Or maybe they are waiting for a new offering from Dell/EMC in the data center to drive profits to research and development to keep pace with Cisco and Arista? One can only hope that their networking flower doesn’t wither on the vine.

Wireless As We Know It Is Dead


Congratulations! We have managed to slay the beast that is wireless. We’ve driven a stake through its heart and prevented it from destroying civilization. We’ve taken a nascent technology with potential and turned it into the same faceless corporate technology as the Ethernet that it replaced. Alarmist? Not hardly. Let’s take a look at how 802.11 managed to come to an inglorious end.

Maturing Or Growing Up

Wireless used to be the wild frontier of networking. Sure, those access points bridged to the traditional network and produced packets and frames like all the other equipment. But wireless was unregulated. It didn’t conform to the plans of the networking team. People could go buy a wireless access point and put it under their desk to make that shiny new laptop with 802.11b work without needing to be plugged in.

Wireless used to be about getting connectivity. It used to be about squirreling away secret gear in the hopes of getting a leg up on the poor schmuck in the next cube that had to stay chained to his six feet of network connectivity under the desk. That was before the professionals came in. They changed wireless. They put a suit on it.

Now, wireless isn’t about making my life easier. It’s about advancing the business. It’s about planning and preparation and enabling applications. It’s about buying lots of impressively-specced access points every three years to light up new wings of the building. It’s about surveying for coverage and resource management to make sure the signal is strong everywhere. Everyone has to play nice and understand the rules.

Wireless professionals are the worst of the lot. They used to deal in black magic and secret knowledge that made them the most valuable people on the planet. They alone knew the secrets of how spectrum worked or what co-channel interference was. That was before the dark times. Before people wanted to learn more about it. Now, we can teach people these concepts. How to use tools to fix problems. Why things must be laid out in certain ways to maximize usefulness. We’ve made everyone special.

Now, the business doesn’t want wizards with strange work habits and even stranger results. They want the same predictable group that they’ve gotten for the last decade with the network team. They want people to blame when their application is slow. They want the infrastructure to work full time in every little corner of the building. And when it doesn’t, they want to know whose head must roll for this affront!

The Establishment

Another thing that destroyed wireless was everyone’s attempt to make it mainstream. Gartner’s Wired and Wireless reports didn’t help. Neither did the push to create tools that make it easy to diagnose issues with a minimum of effort. Now, companies think that wireless is something that just happens. Something that doesn’t take planning to execute. Now, wireless professionals are fired or marginalized because it shouldn’t take that much money to configure something so simple, right?

Why do wireless people need professional development? The networking team gets by with reading those old dusty books. How much can wireless really change year to year? It just gets faster and more expensive. Why should you have to learn how to put up those little access points all over again?

Now that wireless is a part of the infrastructure like switches and routers, it’s time to be forgotten. Now the business needs to focus on other technology that’s likely to be implemented incorrectly that doesn’t support the mission of the business. You know, the kinds of things that we read about in industry trade magazines that they use in sports stadiums or hospitals that sound really awesome and can’t be all that expensive, right?


Tom’s Take

We killed wireless because we used it to do the job it was designed to do. We made it boring and useful and pervasive. As soon as a technology achieves that level of use it naturally becomes something unimportant. Which you will be quick to argue about until you realize that you’re probably reading this from a smartphone that is so commonplace you forget you’re using it.

Now we talk about the apps and technology we’re building on top of wireless. Mobility, location, and other things that are more appealing to people shelling out money to buy things. Buyers don’t want boring. They want expensive gadgets they can point to and loudly proclaim that they spent a lot for this bauble.

Wireless is a victim of its own success. We fought to make it a part of the mainstream and now that it is no one cares about it any more. Now that we take it for granted we must accept that it’s not a “thing” any more. It just is.