Why Do We Accept Bad Wireless Clients?

We recorded a fun roundtable discussion last week during Mobility Field Day about the challenges that wireless architects face in their daily lives. It’s about an hour long, but it’s packed with great discussions about the hard things we deal with:

One of the surprises for me was that every thread of the conversation eventually came back to how terrible wireless clients can be: how hard it is to find quality ones, and how we adjust our expectations for the bad ones.

Driven to Madness

Did you know that 70% of Windows crashes are caused by third-party drivers? That’s Microsoft’s own research saying it. That doesn’t mean Windows is any better or more stable in its OS design than Linux or macOS. However, I’ve fiddled with drivers on Linux and I can tell you how horrible that experience can be[1]. Windows is quite tolerant of hardware that wouldn’t work anywhere else. As long as the manufacturer provides a driver you’re going to get something that works most of the time.

Apply that logic to a wireless networking card. You can buy just about anything, install it on your system, and it will mostly work. Even with reputable companies like Intel, though, you have challenges. I have heard stories of driver updates working in one release and then breaking horribly in the next. I’ve had to do the dance of installing beta software to make a feature work at the expense of the stability of the networking stack. Anyone who has ever sent out an email cautioning users not to update any drivers on their systems knows the pain that can be caused by bad drivers corrupting clients.

That’s just the software we can control. What if it’s an OS we can’t do anything about? More and more users are turning to phones and tablets as their workhorse devices. Just a casual glance at YouTube will reveal a cornucopia of videos about using a tablet as a daily driver. Those devices aren’t immune to driver challenges. The changes just come in a hidden package during system updates. Maybe the developers decided to roll out a new feature. Maybe they wanted to test a new power management algorithm. Maybe they’re just chaotic neutral and wanted to disrupt the world. Whatever the reason, you’re stuck with the results. If you can’t test updates fast enough you may find your users have already updated their devices chasing a feature. Most companies stop signing the code for the older version shortly after issuing an update, so downgrading is impossible. Then what? You have a shiny brick? Maybe you have to create a special network that disables features for them? There are no solid answers.

Pushing Back

My comment in the roundtable boils down to something simple: Why do we allow this to happen? Why are we letting client manufacturers do this? The answer is probably simpler than you realize. We do it because users expect every device to work. Just like with the Windows driver situation, you wouldn’t plug something into a computer and expect it not to work, right? Wireless is no different to the user. They want to walk in somewhere and connect. Whether it’s a coffee shop, their home office, or the corporate network, it needs to be seamless and friction-free.

Would you expect the same of an Ethernet cable? Or a PATA hard drive? Would you expect to be able to bring a phone from home and plug it into your corporate PBX? Of course not. Part of the issue is a lack of visible incompatibility. If you know the Ethernet cable won’t plug into a device you won’t try to connect it. If the cable for your disk drive isn’t compatible with your motherboard you get a different drive. With wireless, we expect the nerds in the back to “make it work”. Wireless is one of the best protocols at making things work poorly just to say it is up and running. If you had an Ethernet network with 15% packet loss you’d declare it broken. Yet Wi-Fi will connect and drop packets due to bad SNR and other factors because it’s designed to keep working under adverse conditions.

Why do we tolerate bad clients? Why don’t we push back against the vendors and tell them to do better? The standard argument is that we don’t control the client manufacturing process. How are we supposed to tell vendors to support a function if we can’t make our voices heard? While we may not be able to convince Intel or Apple or Samsung to build in support for specific protocols, we can effect that change with our consumption. If you work in an enterprise and you need support for something, say 802.11r, you can refuse to purchase a device until it’s supported.
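
You can even verify a claim like that on the air before the purchase order goes out. Devices that support 802.11r Fast Transition advertise a Mobility Domain information element (ID 54): APs carry it in beacons and clients include it in their association requests. Here’s a minimal sketch of the idea using scapy, assuming an interface already in monitor mode (the interface name is an example, not a given):

```python
# A minimal sketch: spot 802.11r (Fast Transition) support on the air by
# looking for the Mobility Domain information element (ID 54) in beacons.
# Assumes scapy is installed and "wlan0mon" is an interface in monitor mode.
from scapy.all import sniff, Dot11Beacon, Dot11Elt

def check_ft(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid, has_ft = None, False
    elt = pkt.getlayer(Dot11Elt)
    while elt is not None:
        if elt.ID == 0:  # SSID element
            ssid = elt.info.decode(errors="replace")
        elif elt.ID == 54:  # Mobility Domain element implies 802.11r support
            has_ft = True
        elt = elt.payload.getlayer(Dot11Elt)
    if ssid:
        print(f"{ssid!r}: 802.11r {'advertised' if has_ft else 'absent'}")

sniff(iface="wlan0mon", prn=check_ft, timeout=30)
```

The same check against a client’s association request tells you whether the laptop on the purchasing department’s quote actually does fast roaming before anyone signs anything.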

But wait, you say, I don’t control that either. You may not control the devices, but you control the network to which they attach. You can tell your users that the device isn’t supported. Just like with a PATA hard disk or a floppy drive, you can tell users that what they want to do won’t work and that they need to do something different. If they want to use their personal iPad for work or connect their ancient laptop, they need to update it or use a different communications method. If your purchasing department wants to save $10 per laptop by buying ones with inferior wireless cards, you can push back and tell them that the specs aren’t compatible with the network design. Period, full stop, end of sentence.


Tom’s Take

The power to solve bad clients won’t come from companies that make money doing the least amount of work possible. It won’t come from companies that don’t provide feedback in the form of lost sales. It will come when someone puts their foot down and refuses to support any more bad client hardware and software. If the Wi-Fi Alliance won’t enforce good client connectivity it’s time we do it for them.

If you disagree I’d love to hear what you think. Is there a solution I’m not seeing? Or are we just doomed to live with bad client devices forever?


  1. If you say Winmodem around me I will scream.

A Gift Guide for Sanity In Your Home IT Life

If you’re reading my blog you’re probably the designated IT person for your family or immediate friend group. Just like doctors who get called about every little scrape or plumbers who get the nod when something isn’t draining over the holidays, you are the one who gets an email or a text message when something pops up that isn’t “right” or throws a weird error message. These engagements are hard because you can’t just walk away from them and you’re likely not getting paid. So how can you be the Designated Computer Friend and still keep your sanity this holiday season?

The answer, dear reader, is gifts. If you’re struggling to find something to give your friends that says “I like you but I also want to reduce the number of times you call me about your computer problems” then you should definitely read on for more info! Note that I’m not going to fill this post with affiliate links or plug products that have sponsored anything. Instead, I’m just going to share the classes or types of devices that I think are the best way to get control of things.

Step 1: Infrastructure Upgrades

When you go visit your parents for Thanksgiving or some other holiday check-in, are they still running the same wireless network they got when they signed up for high-speed Internet? Is their Wi-Fi SSID still the default, with the password printed on the side of the router/modem combo? Then you’re going to want to upgrade their experience to preserve your sanity for the next few holidays.

The first thing you need to do is get control of their wireless setup. You need some form of wireless access point that wasn’t manufactured in the early part of the century. Most of the models on the market now have Wi-Fi 6 support. You don’t need to go crazy with a Wi-Fi 6E model for your loved ones right now because none of their devices will support it. You just need something more modern with a user interface that wasn’t written to look like Windows 3.1.

You also want an access point that is controlled via a cloud console. If you’re the IT person in the group you probably already use some form of cloud control for your home equipment. You don’t need a full Meraki or Juniper Mist setup to lighten your load. That is, unless you already have one of those dashboards set up and have spare capacity. Otherwise you could look at something like Ubiquiti as a middle ground.

Why a cloud controller AP? Because then you can log in and fix things or diagnose issues without needing to spend time talking to less technical users. You can find out if they have an unstable Internet connection or change SSID passwords at the drop of a hat. You can even set up notifications for those remote devices to let you know when a problem happens so you can be ready and waiting for the call. And you can keep tabs on necessary upgrades and such so you aren’t fielding calls when the next major exploit comes out and your parents call you asking if they’re going to get infected by this virus. You can just tell them they’re up-to-date and good to go. The other advantage of this method is that when you upgrade your own equipment at home you can just waterfall the old functional gear down to them and give them a “new to you” upgrade that they’ll appreciate.
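
As a concrete illustration of what that remote visibility buys you, here is a minimal monitoring sketch. The endpoint, token, and JSON fields are hypothetical placeholders rather than any specific vendor’s API; substitute whatever your dashboard of choice actually exposes:

```python
# A minimal sketch of polling a cloud AP dashboard for device health.
# The URL, token, and field names below are hypothetical placeholders,
# not any particular vendor's API. Adapt to your controller of choice.
import requests

API = "https://cloud.example.com/api/v1/sites/parents-house/devices"  # hypothetical
TOKEN = "REPLACE_ME"

resp = requests.get(API, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()

for dev in resp.json().get("devices", []):
    status = dev.get("status", "unknown")
    if status != "online":
        # Hook in your own alerting (email, SMS, chat webhook) here so you
        # know about the outage before the holiday phone call arrives.
        print(f"ALERT: {dev.get('name', '?')} is {status}")
```

Run something like that on a schedule and you’ll often be fixing the problem before anyone thinks to text you about it.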

Step 2: Device Upgrades

My dad was notorious for using everything long past the point where it needed to be retired. It’s the way he was raised. If there’s a hole you patch it. If it breaks you fix it. If that fix doesn’t work you wrap it in duct tape and use it until it crumbles to dust. While that works for the majority of things out there, it causes issues with technology far too often.

He had an iPad that he loved. He didn’t use it all day, every day, but he did use it frequently enough to call it his primary computing device. It was a fourth-generation device, so it fell out of fashion a few years ago. When he would call and ask me why it was behaving a certain way or why he couldn’t download some new app from the App Store, I would always remind him that he had an older device that wasn’t fast enough or new enough to run the latest programs or even the latest operating system. This would usually elicit a grumble or two and then we would move on.

If you’re the Designated IT Person and you spend half your time trying to figure out what versions of OS and software are running on a device, do yourself a favor and invest in a new device for your users just to ease the headaches. If they use a tablet as their primary computing device, which many people today do, then just buy a new one and help them migrate all the data across to the new one while you’re eating turkey or opening presents.

Being on newer hardware ensures that the operating system is the latest version, with all the security patches needed to keep your users safe. It also means you’re not trying to figure out the last supported software version that still works with everything else. I’ve played this game trying to get an Apple Watch to connect to an older phone with mismatched software, as well as trying to get newer wireless security working on older laptops barely capable of anything more than WPA1. The number of hours I burned trying to make the old junk work with the new stuff would have been better spent just buying a new version of the same old thing and moving all their software over. Problems seem to just disappear when you’re running on something that was manufactured within the last five years.

Step 3: Help Them Remember

This is probably my biggest request: forgotten passwords. Either it’s a forgotten Apple ID or maybe the wireless network password. My parents and in-laws forget the passwords they need to log into things all the time. I finally broke down and taught them how to use a password management tool a few years ago and it made all the difference in the world. Now, instead of having to remember their password for a shopping site, they can just have everything filled in automatically. And since the only thing they have to remember is the master password for the app, they don’t have to change it.

Better yet, most of these apps have a secure section for notes. So all those other important non-password things that seem to come up all the time are great to put in here. Social Security Numbers, bank account numbers, and so much more can be put in one central location and made easy to access. The best part? If you make it a shared vault you can request access to help them out when they forget how to get in. Or you can be designated as a trusted party that can access the account in the event of a tragedy. Getting your loved ones used to using password vaults now makes it much easier to have them storing important info there in case something happens down the road that requires you to jump in without their interaction. Trust me on this.


Tom’s Take

Your loved ones don’t need knick-knacks and useless junk. If you want to show them you love them, give them the gift of not having to call you every couple of days because they can’t remember the wireless password or because they keep getting an error that says their app isn’t supported on this device. Invest in your sanity and their happiness by giving them something that works and that you can manage from the background. If you can make it stable and useful and magically working before they call you with a problem, you’re going to be a happier person in the years to come.

Private 5G Needs Complexity To Thrive

I know we talk about the subject of private 5G a lot in the industry, but there are more players coming out every day looking to add their voices to the growing chorus of support for these solutions. And despite the fact that we tend to see 5G and Wi-Fi as ships passing in the night, this discussion isn’t going away any time soon. In part that’s because decision makers aren’t quite savvy enough to distinguish between the bands, thinking all wireless communications are pretty much the same.

I think we’re not going to see much overlap between these two technologies. But the reasons why aren’t quite what you might think.

Walking Workforces

Working from anywhere other than the traditional office is here to stay. Every major Silicon Valley company has looked at the cost-benefit analysis and decided to let workers do their thing from where they live. How can I tell it’s permanent? Because they’re reducing salaries for those who choose to stay away from the Bay Area. That carrot is pretty enticing, and for companies to take full Bay Area salaries off the table for remote work going forward means they see no need to lure people back into an office.

Mobile workers don’t care about how they connect. As long as they can get online they can get things done. They are the prime use case for 5G and private 5G deployments. Who cares about the Wi-Fi at a coffee shop if you’ve got fast connectivity built in to your mobile phone or tablet? Moreover, I can see a few of the more heavily regulated companies requiring a 5G uplink to connect to sensitive data through a VPN or other technology. It eliminates some of the issues with wireless protection methods and ensures that no one can easily snoop on what you’re sending.

Mobile workers will start to demand 5G in their devices. It’s a no-brainer for it to be in the phone and the tablet. As laptops go it’s a smart decision at some point, provided enough people have swapped over to using tablets by then. I use my laptop every day when I work but I’m finding myself turning to my iPad more and more. Not for any magical reason but because it’s convenient if I want to work from somewhere other than my desk. I think that when laptops hit a wall from a performance standpoint you’re going to see a lot of manufacturers start to include 5G as a connection option to lure people back to them instead of abandoning them to the big tablet competition.

However, 5G is really only a killer technology for these more complex devices. The cost of a 5G radio isn’t inconsequential to the overall cost of a device. After all, Apple raised the price of their iPad when they included a 5G radio, didn’t they? You could argue that they didn’t when they upgraded the iPhone to a 5G chipset but the cellular technology is much more integral to the iPhone than the iPad. As companies examine how they are going to move forward with their radio technology it only makes sense to put the 5G radios in things that have ample space, appropriate power, and the ability to recover the costs of including the chips. It’s going to be much more powerful but it’s also going to be a bigger portion of the bill of materials for the device. Higher selling prices and higher margins are the order of the day in that market.

Reassuringly Expensive IoT

One of the drivers for private 5G that I’ve heard of recently is the drive to have IoT sensors connected over the protocol. The thinking goes that the number of devices being deployed is going to create a significant amount of traffic in a dense area, which is going to require the controls present in 5G to ensure they aren’t creating issues. I would tend to agree, but with a huge caveat.

The IoT sensors that people are talking about here aren’t the ones that you might think of in the consumer space. For whatever reason people tend to assume IoT is a thermostat or a small device that does simple work. That’s not the case here. These IoT devices aren’t things that you’re going to be buying one or two at a time. They are sensors connected to a larger system. Think HVAC relays and probes. Think lighting sensors or other environmental tech. You know what comes along with that kind of hardware? Monitoring. Maintenance. Subscription costs.

The IoT that is going to take advantage of private 5G isn’t something you’re going to deploy yourself. Instead, it’s something you partner with another organization to deploy. You might “own” the tech in the sense that you control the data, but you aren’t going to be the one going out to Best Buy or Tech Data to order a spare. Instead, you’re going to pay someone to deploy it and to fix it when it goes wrong. So how does that differ from the IoT thermostat that comes to mind? Price. These sensors are several hundred dollars each. You’re paying for the technology included in them, with a monthly fee to monitor and maintain them. They will talk to the base station in the building or somewhere nearby and relay that data back to your dashboard, perhaps on-site or, more likely, in a cloud instance somewhere. All those fees mean the devices become complex enough to absorb the cost of more complicated radio technology.

What About Wireless?

Remember when wireless was something cool that you had to show off to people that bought a brand new laptop? Or the thrill of seeing your first iPhone connect to attwifi at Starbucks instead of using that data plan you paid so dearly to get? Wireless isn’t cool any more. Yes, it’s faster. Yes, it is the new edge of our world. But it’s not cool. In the same way that Ethernet isn’t cool. Or web browsers aren’t cool. Or the internal combustion engine isn’t cool. Wi-Fi isn’t cool any more because it is necessary. You couldn’t open an office today without having some form of wireless communications. Even if you tried I’m sure that someone would hop over to the nearest big box store and buy a consumer-grade router to get wireless working before the paint was even dry on the walls.

We shouldn’t think about private 5G replacing Wi-Fi, because it never will. There will be use cases where 5G makes much more sense, like high-density deployments or areas where contention in the wireless spectrum is just too great to make effective use of it. However, not deploying Wi-Fi in favor of private 5G is a mistake. Wireless is the perfect “set it and forget it” technology. Provide an SSID for people to connect to and then let them go crazy. Public venues are going to rely on Wi-Fi for the rest of time. These places don’t have the kind of staff necessary to make private 5G economical in the long run.

Instead, think of private 5G deployments more like the way Wi-Fi used to be: an option for devices that need to be managed and controlled by the organization. They need to be provisioned. They need support cycles to operate properly. They need to be owned by the company and not the employee. Private 5G is more of a play for infrastructure. Wi-Fi is the default medium, given the wide adoption it has today. It may not be the coolest way to connect to the network, but it’s the one you can be sure is up and running without the IT department coming down to make it work for you.


Tom’s Take

I’ll admit that the idea of private 5G makes me smile some days. I wish I had some kind of base station here at my house to counteract the horrible reception that I get. However, as long as my Internet connection is stable I have enough wireless coverage in the house to make my devices work properly. Private 5G isn’t going to displace the installed base of Wi-Fi devices out there. With the amount of management that 5G requires in devices, you’re not going to see a cheap or simple way to deploy it any time soon. The pie-in-the-sky vision of pervasive low-power deployments in cheap devices is not realistic on the near horizon. Instead, think of private 5G as something you use when your other methods won’t work or when someone you are partnering with to deploy new technology requires it. That way you won’t be caught off guard when the complexity of the technology comes into play.

It’s A Wireless Problem, Right?

How many times have your users come to your office and told you the wireless was down? Or maybe you get a phone call or a text message sent from their phone. If there’s a way for people to figure out that the wireless isn’t working they will not hesitate to tell you about it. But is it always the wireless?

Path of Destruction

During CWNP Wi-Fi Trek 2019, Keith Parsons (@KeithRParsons) gave a great talk about Tips, Techniques, and Tools for Troubleshooting Wireless LAN. It went into a lot of detail about how many things you have to look at when you start troubleshooting wireless issues. It makes your head spin when you try and figure out exactly where the issues all lie.

However, I did have to put up a point that I didn’t necessarily agree with Keith on:

I spent a lot of time in the past working with irate customers in schools. And you’d better believe that every time there was an issue it was the network’s fault. Or the wireless. Or something. But no matter what the issue ended up being, someone always made sure to remind me the next time: “You know, the wireless is flaky.”

I spent a lot of time trying to educate users about the nuances between signal strength, throughput, Internet uplinks, application performance (in the days before cloud!) and all the various things that could go wrong in the pathway between the user’s workstation and wherever the data was that they were trying to access. Wanna know how many times it really worked?

Very few.

Because your users don’t really care. It’s not all that dissimilar to the utility companies that you rely on in your daily life. If there is a water main break halfway across town that reduces your water pressure or causes you to not have water at all, you likely don’t spend your time troubleshooting the water system. Instead, you complain that the water is out and you move on with your day until it somehow magically gets fixed. Because you don’t have any visibility into the way the water system works. And, honestly, even if you did it might not make that much difference.

A Little Q&A

But educating your users about issues like this isn’t a lost cause. Instead of trying to walk them through every possible problem you could be dealing with, you need to help them understand the initial steps of the troubleshooting process. Honestly, if you can train them to answer the following two questions you’ll get a head start on the process and make them very happy.

  1. When Did The Problem Start? One of the biggest issues with troubleshooting is figuring out when the problem started in the first place. How many times have you started troubleshooting something only to learn that the real issue has been going on for days or weeks and no one really remembers when it started? Especially with wireless, you have to know when things started. Because of the ephemeral nature of things like RSSI, your issue could be caused by something crazy that you have no idea about. So you have to get your users to start writing down when things go wrong (a tiny helper like the sketch after this list makes that painless). That way you have a place to start.
  2. What Were You Doing When It Happened? This one usually takes a little more coaching. People don’t think about what they were doing when a problem happened, beyond the fact that it kept them from getting anything done. Rarely do they even consider that something they did caused the issue. And if they do, they’re going to try to hide it from you every time. So you need to get them to start detailing what was going on. Maybe they were up walking around and roamed to a different AP in a corner of the office. Maybe they were trying to work from the break room during lunch and the microwave was giving them fits. You have to figure out what they were doing, as well as what they were trying to accomplish, before you can really be sure that you are on the right track to solving the problem.
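
To make the write-it-down habit stick, leave users something dead simple. Here is a minimal sketch of such a helper, assuming nothing more than a Python install and a CSV file you can later line up against your controller logs (the filename and fields are just examples):

```python
# A minimal "write it down when it breaks" helper for end users: records a
# timestamp and what they were doing to a CSV for later correlation with
# controller or RF logs. Filename and columns are examples, not a standard.
import csv
from datetime import datetime
from pathlib import Path

LOG = Path.home() / "wifi-issues.csv"

def log_issue():
    doing = input("What were you doing when it happened? ")
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "activity"])
        writer.writerow([datetime.now().isoformat(timespec="seconds"), doing])
    print(f"Logged to {LOG}")

if __name__ == "__main__":
    log_issue()
```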

When you think about the insane number of things that you can start troubleshooting in any scenario, you might just be tempted to give up. But, as I mentioned almost a decade ago, you have to start isolating factors before you can really fix issues. As Keith mentioned in his talk, it’s way too easy to pick a rabbit hole issue and start troubleshooting to make it fit instead of actually fixing what’s ailing the user. Just like the scientific method, you have to make your conclusions fit the data instead of making the data fit your hypothesis. If you are dead set on moving an AP or turning off a feature you’re going to be disappointed when that doesn’t actually fix what the users are trying to tell you is wrong.


Tom’s Take

Troubleshooting is the magic that makes technical people look like wizards to regular folks. But it’s not because we have magical powers. It’s because we know instinctively what to look for and what to discard when it becomes apparent that we need to move on. That’s what really separates the best of the best: the ability to focus on the issue instead of the next solution that may not work. Keith Parsons is one such wizard. Having him speak at Wi-Fi Trek 2019 was a huge success and should really help people understand how to look at these issues and stay on the right troubleshooting track.

Fast Friday – Mobility Field Day 4

This week’s post is running behind because I’m out in San Jose enjoying great discussions from Mobility Field Day 4. This event is bringing a lot of great discussion to the community to get everyone excited for current and future wireless technologies. Some quick thoughts here with more ideas to come soon.

  • Analytics is becoming a huge driver for deployments. The more data you can gather, the better everything can be. When you start to include IoT as a part of the field you can see why all those analytics matter. You need to invest in a lot of CPU horsepower to make it all work the way you want, which is also driving lots of people to build in the cloud so they have on-demand access to the infrastructure they need.
  • Spectrum is a huge problem and a huge source of potential for wireless. You have to have access to spectrum to make everything work. 2.4 GHz is pretty crowded and getting worse with IoT. 5 GHz is getting crowded as well, especially with LAA in use. And the opening of the 6 GHz spectrum could be held up by political concerns. Are there new investigations that need to happen to find bands that can be used without causing friction?
  • The driver for technology has to be something other than desire. We have to build solutions and put things out there to make them happen. Because if we don’t, we’re going to be stuck with what we have for a long time. No one wants to move and reinvest without clear value. But clear value often doesn’t develop until people have already moved. Something has to break the logjam of hesitance. That’s why we still need bold startups with new technology jumping out to make things work.

Tom’s Take

I know I’ll have more thoughts when I get back from this event, but wireless has become the new edge and that’s a very interesting shift. The more innovation we can drive there, the more capable we can make our clients and the more we can empower users.

The Cargo Cult of Google Tools

You should definitely watch this amazing video from Ben Sigelman of LightStep that was recorded at Cloud Field Day 4. The good stuff comes right up front.

In less than five minutes, he takes apart crazy notions that we have in the world today. I like the observation that you can’t build a system that scales across more than three or four orders of magnitude. Yes, you really shouldn’t be using Hadoop for simple things. And machine learning is not a magic wand that fixes every problem.

However, my favorite thing was the quick mention of how emulating Google for the sake of using their tools for every solution is folly. Ben should know, because he is an ex-Googler. I think I can sum up this entire discussion in less than a minute of his talk here:

Google’s solutions were built for scale that basically doesn’t exist outside of maybe a handful of companies with a trillion dollar valuation. It’s foolish to assume that their solutions are better. They’re just more scalable. But they are actually very feature-poor. There’s a tradeoff there. We should not be imitating what Google did without thinking about why they did it. Sometimes the “whys” will apply to us, sometimes they won’t.

Gee, where have I heard something like this before? Oh yeah. How about this post. Or maybe this one on OCP. If I had a microphone I would have handed it to Ben so he could drop it.

Building a Laser Mousetrap

We’ve reached the point in networking and other IT disciplines where we have built cargo cults around Facebook and Google. We practically worship every tool they release into the wild and try to emulate that style in our own networks. And it’s not just the tools we use, either. We also keep trying to emulate the service provider style of Facebook and Google where they treated their primary users and consumers of services like your ISP treats you. That architectural style is being lauded by so many analysts and forward-thinking firms that you’re probably sick of hearing about it.

Guess what? You are not Google. Or Facebook. Or LinkedIn. You are not solving massive problems at the scale that they are solving them. Your 50-person office does not need Cassandra or Hadoop or TensorFlow. Why?

  • Google Has Massive Scale – Ben mentioned it in the video above. The published scale of Google is massive, and even that is on the low side. The real numbers could be an order of magnitude higher than we realize. When you have to start quoting throughput numbers in “Libraries of Congress” to make sense to normal people, you’re in a class by yourself.
  • Google Builds Solutions For Their Problems – It’s all well and good that Google has built a ton of tools to solve their issues. It’s even nice of them to have shared those tools with the community through open source. But realistically speaking, when are you going to need Cassandra for anything but the most complicated and complex database problems? It’s like a guy who buys a pneumatic impact wrench to fix the training wheels on his daughter’s bike. Sure, it will get the job done. But it’s going to be way overpowered and cause more problems than it solves.
  • Google’s Tools Don’t Solve Your Problems – This is the crux of Ben’s argument above. Google’s tools aren’t designed to solve a small flow issue in an SME network. They’re designed to keep the lights on in an organization that maps the world and provides video content to billions of people. Google tools are purpose built. And they aren’t flexible outside that purpose. They are built to be scalable, not flexible.

Down To Earth

Since Google’s scale numbers are hard to comprehend, let’s look at a better example from days gone by. I’m talking about the Cisco Aironet-to-LWAPP Upgrade Tool:

I used this a lot back in the day to upgrade autonomous APs to LWAPP controller-based APs. It was a very simple tool. It did exactly what it said in the title. And it didn’t do much more than that. You fed it an image and pointed it at an AP and it did the rest. There was some magic on the backend of removing and installing certificates and other necessary things to pave the way for the upgrade, but it was essentially a batch TFTP server.

It was simple. It didn’t check that you had the right image for the AP. It didn’t throw out good error codes when you blew something up. It only ran on a maximum of 5 APs at a time. And you had to close the tool every three or four uses because it had a memory leak! But it was still a better choice than trying to upgrade those APs by hand through the CLI.

This tool is over ten years old at this point and is still available for download on Cisco’s site. Why? Because you may still need it. It doesn’t scale to 1,000 APs. It doesn’t give you any other functionality other than upgrading 5 Aironet APs at a time to LWAPP (or CAPWAP) images. That’s it. That’s the purpose of the tool. And it’s still useful.
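
If you want a feel for how little magic “a batch TFTP server” implies, here is a minimal sketch of the concept in Python. To be clear, this illustrates the batch idea only, not the actual Cisco utility; the AP addresses, image filename, and the tftpy library are all assumptions for the example:

```python
# A minimal sketch of the "batch TFTP" idea: push one image to a handful of
# APs, capped at five concurrent transfers like the original tool. This is
# an illustration of the concept, not the actual Cisco utility, which also
# handled certificates and other LWAPP migration plumbing.
from concurrent.futures import ThreadPoolExecutor

import tftpy  # pip install tftpy

APS = ["10.1.1.11", "10.1.1.12", "10.1.1.13", "10.1.1.14", "10.1.1.15"]
IMAGE = "ap-lwapp-recovery.tar"  # example filename

def push_image(ap_ip):
    try:
        client = tftpy.TftpClient(ap_ip, 69)
        client.upload(IMAGE, IMAGE)  # remote name, local file
        return f"{ap_ip}: upload complete"
    except Exception as exc:
        return f"{ap_ip}: failed ({exc})"

with ThreadPoolExecutor(max_workers=5) as pool:  # the tool's 5-AP ceiling
    for result in pool.map(push_image, APS):
        print(result)
```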

Tools like this aren’t built to be the ultimate solution to every problem. They don’t try to pack in every possible feature to be a “single pane of glass” problem solver. Instead, they focus on one problem and solve it better than anything else. Now, imagine that tool running at a scale your mind can’t comprehend. And you’ll know now why Google builds their tools the way they do.


Tom’s Take

I have a constant discussion on Twitter about the phrase “begs the question”. Begging the question is a logical fallacy. Almost every time, the speaker really means “raises the question”. Likewise, every time you think you need a Google tool to solve a problem, you’re almost always wrong. You’re not operating at the scale necessary to need that solution. Instead, the majority of people looking to implement Google solutions in their networks are like people who put chrome on everything on their car. They’re looking to show off instead of getting things done. It’s time to retire the Google cargo cult and instead ask ourselves what problems we’re really trying to solve, as Ben Sigelman mentions above. I think we’ll end up much happier in the long run and find our work lives much less complicated.

A Wireless Brick In The Wall

I had a very interesting conversation today with some friends about predictive wireless surveys. The question was really more of a confirmation: Do you need to draw your walls in the survey plan when deciding where to put your access points? Now, before you all run screaming to the comments to remind me that “YES YOU DO!!!”, there were some other interesting things that were offered that I wanted to expound upon here.

Don’t Trust, Verify

One of the most important parts of the wall question is material. Rather than just assuming that every wall in the building is made from gypsum or wood, you need to actually go to the site, or have someone go for you, and find out what the walls are actually made of. Don’t guess about construction materials.

Why? Because not everyone uses the same framing for buildings. Wood beams may be popular in one type of building, but steel reinforcement is used in other kinds. And you don’t want to base your predictive survey on one only to find out it’s the other.

Likewise, you need to make sure that the wall itself is actually made of what you think it is. Find out what kind of sheetrock they used. Make sure it’s not actually stucco plastered over chicken wire. Chicken wire inside a plaster wall is a guaranteed Faraday cage.

Another fun thing to run across is old buildings. One site survey I did for a wireless bid involved making sure that a couple of buildings on the outer campus were covered as well. When I asked about the buildings and when they were made, I found out they had been built in the 1950s and were constructed like bomb shelters. Thick concrete walls everywhere. Reinforcement all throughout. Once I learned this, the number of APs went up and the client had to get an explanation of why all the previous efforts to cover the buildings with antennas hadn’t worked out so well.
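
That job is also a good reminder that the wall math isn’t mysterious. Here’s a rough link-budget sketch, free-space path loss plus per-wall attenuation; the wall-loss numbers are commonly cited ballpark figures rather than measurements from any particular site, so treat them as assumptions to verify:

```python
# A rough link-budget sketch showing why wall material drives AP counts.
# Free-space path loss at 2.4 GHz plus per-wall attenuation values.
# The wall losses are ballpark assumptions, not measured figures.
import math

def fspl_db(distance_m, freq_ghz=2.4):
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 32.44, d in meters, f in GHz
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_ghz) + 32.44

WALL_LOSS_DB = {"drywall": 3, "brick": 8, "concrete": 12}  # rough per-wall

def rssi_dbm(tx_power_dbm, distance_m, walls):
    total_loss = fspl_db(distance_m) + sum(WALL_LOSS_DB[w] for w in walls)
    return tx_power_dbm - total_loss

# Same AP and same distance, different construction:
print(rssi_dbm(20, 15, ["drywall", "drywall"]))    # about -50 dBm, healthy
print(rssi_dbm(20, 15, ["concrete", "concrete"]))  # about -68 dBm, marginal
```

Two bomb-shelter walls eat as much signal as the entire rest of the path, which is exactly why the AP count went up.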

X-Ray Vision

Speaking of which, you also need to make sure to verify the structures underneath the walls. Not just the reinforcement. But the services behind the walls. For example, water pipes go everywhere in a building. They tend to be concentrated in certain areas but they can run the entire length of a floor or across many floors in a high rise.

Why are water pipes bad for wireless? Well, it turns out that water is extremely good at absorbing RF energy in the 2.4 GHz range used by 802.11b/g/n. That’s the same property microwave ovens exploit to heat food. Water loves to soak up radiation in that spectral range, which means water pipes love to absorb wireless signals. So you need to know where they are in the building.

Architectural diagrams are a great way to find out these little details. Don’t just assume that walking through a building and staring at a wall will give you every bit of info you need. You need to research the plans, blueprints, and diagrams. You need to understand how these things are laid out in order to know where to locate access points and how to correct predictive surveys when they do something unexpected.

Lastly, don’t forget to take into account the movement and placement of things. We often wish we could get involved in a predictive survey at the beginning of the project. A greenfield building is a great time to figure out the best place to put APs so we don’t have to go crawling over bookcases later. However, you shouldn’t discount the chaos that can occur when an office is furnished or when people start moving things around. Plants don’t matter much, but someone moving the kitchen microwave across the room or installing a new microphone system in the conference room without telling anyone certainly does.


Tom’s Take

Wireless engineers usually find out when they take the job that it involves being part radio engineer, part networking engineer, part artist, and part general contractor. You need to know a little bit about how buildings are made in order to make the invisible network operate optimally. Sure, traditional networking folks have it easy. They can just avoid running cables near fluorescent lights or other interference sources and be done. But wireless engineers need to know if the very material of the wall is going to cause problems for them.

It’s Probably Not The Wi-Fi

After finishing up Mobility Field Day last week, I got a chance to reflect on a lot of the information that was shared with the delegates. Much of the work in wireless now is focused on analytics. Companies like Cape Networks and Nyansa are trying to provide a holistic look at every part of the network infrastructure to help professionals figure out why there might be issues occurring for users. And over and over again, the resounding cry that I heard was “It’s Not The Wi-Fi.”

Building A Better Access Layer

Most of wireless is focused on the design of the physical layer. If you talk to any professional and ask them to show you their tool kit, they will likely pull out a whole array of mobile testing devices, USB network adapters, and diagramming software that would make AutoCAD jealous. All of these tools focus on the most important part of the equation for wireless professionals – the air. When the physical radio spectrum isn’t working, users will complain about it. Wireless pros leap into action with their tools to figure out where the fault is. Either that, or they are very focused on providing the right design from the beginning, with the tools validating that access point placement is correct and that coverage overlap provides redundancy without interference.

These aren’t easy problems to solve. That’s why wireless folks get paid the big bucks to build it right or fix it after it was built wrong. Wired networkers don’t need to worry about microwave ovens or water pipes. Aside from the errant fluorescent light or overly aggressive pair of cable pliers, wired networks are generally free from the kinds of problems that can plague a wire-free access layer.

However, the better question to ask is how users know it’s the wireless network that’s behind the faults. To the users, the system is in one of three states: perfect, horribly broken, or slow. I think we can all agree that the first state almost never exists in reality. It might exist shortly after installation, when user load is low and actual application use is negligible. Most of the time, though, users are living in one of the latter two states. Either the wireless is “slow” or it’s horribly broken. Why?

No-Service Station

As it turns out, thanks to some of the reporting from companies like Cape and Nyansa, a large majority of the so-called wireless issues are in fact not wireless-related at all. Those designs that wireless pros spend so much time fretting over are removed from the equation. Instead, the issues are with services.

Yes, those pesky network services. The ones like DNS or DHCP that seem invisible until they break. Or those services that we pay hefty sums to every month like Amazon or Microsoft Azure. The same issues that plague wired networking exist in the wireless world as well and seem to escape blame.

DNS is invisible to the majority of users. I’ve tried to explain it many times with middling to poor results. The idea that computers on the internet don’t understand words and must rely on services to translate them to numbers never seems to click. And when you add in the reliance on this system and how it can be knocked out with DDoS attacks or hijacking, it always comes back to being about the wireless.
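
This is also one of the easiest failures to demonstrate. If raw IP connectivity works while name resolution fails, the access layer is off the hook. A minimal triage sketch (the probe targets are examples; point it at your own resolver and a host you trust to be reachable):

```python
# A minimal triage sketch: separate "the Wi-Fi is down" from "DNS is down".
# If raw IP connectivity works but name resolution fails, the wireless
# isn't the problem. Probe targets below are examples, not requirements.
import socket

def can_reach(ip, port=53, timeout=3):
    try:
        socket.create_connection((ip, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def can_resolve(name="example.com"):
    try:
        socket.gethostbyname(name)
        return True
    except OSError:
        return False

ip_ok, dns_ok = can_reach("1.1.1.1"), can_resolve()
if ip_ok and not dns_ok:
    print("IP connectivity is fine but DNS is broken. It's not the Wi-Fi.")
elif not ip_ok:
    print("No IP connectivity. Now the access layer is a legitimate suspect.")
else:
    print("Connectivity and DNS both look healthy from here.")
```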

It’s not hard to imagine why. The wireless is the first thing users see when they start having issues. It’s the new firewall. Or the new virus. Or the new popup. It’s a thing they can point to as the single source of problems. And if there is an issue at any point along the way, it must be the fault of the wireless. It can’t possibly be DNS or routing issues or a DDoS on AWS. Instead, the wireless is down.

And so wireless pros find themselves defending their designs and configurations without knowing that there is an issue somewhere else down the line. That’s why the analytics platforms of the future are so important. By giving wireless pros visibility into systems beyond the spectrum, they can reliably state that the wireless isn’t at fault. They can also engage other teams to find out why the DNS servers are down or why the default gateway for the branch office has been changed or is offline. That’s the kind of info that can turn a user away from blaming the wireless for every problem and toward finding out what’s actually wrong.


Tom’s Take

If I had a nickel for every problem that was blamed on the wireless network or the firewall or some errant virus when that wasn’t actually the case, I could retire and buy my own evil overlord island next to Larry Ellison. Alas, these issues are never going to go away. Instead, the only real hope we have is speeding up the time to diagnose and resolve them by involving the professionals who manage the systems that are actually down. And perhaps a few screenshots from the monitoring systems will go a long way toward convincing users to make sure the issue really is the wireless before proclaiming that it is. Because, to be honest, it probably isn’t the Wi-Fi.

The History of The Wireless Field Day AirCheck

Mobility Field Day 2 just wrapped up in San Jose. It’s always a little bittersweet to see the end of a successful event. However, one thing that does bring a bit of joy to the end of the week is the knowledge that one of the best and longest running traditions at the event continues. That tradition? The Wireless/Mobility Field Day AirCheck.

The Gift That Keeps Giving

The Wireless Field Day AirCheck story starts where all stories start. The beginning. At Wireless Field Day 1 in March of 2011, I was a delegate and fresh off my first Tech Field Day event just a month before. I knew some wireless stuff and was ready to learn a lot more about site surveys and other great things. Little did I know that I was about to get something completely awesome and unexpected.

As outlined in this post, Fluke Networks held a drawing at the end of their presentation for a first-generation AirCheck handheld wireless troubleshooting tool. I was thrilled to be the winner. I took it home and immediately put it to work around my office. I found it easy to use, and it provided great information about wireless networks that made my life easier. I even loaned it out to some of my co-workers during troubleshooting calls, and they immediately told me they wanted one of their own.

As the rest of 2011 rolled forward, I found uses for my AirCheck but I didn’t do as much wireless as a lot of the other people out there. I knew that someone else could probably get more out of having it than I did. So, I hatched a plan. I told Stephen Foskett that if I had the chance to come back to Wireless Field Day 2, I would gladly give my AirCheck away to another worthy delegate. I wanted to keep the tool in use with the best and brightest people in the community and help them see how awesome it was.

Sure enough, I was invited to Wireless Field Day 2 in January 2012. I arrived with my AirCheck and waited until the proper moment. During the welcome dinner, Matt Simmons and I found a way to randomly draw a number and award the special prize to Matthew Norwood. He was just as thrilled to get the AirCheck as I was. I sent my prize from Wireless Field Day 1 on its way to a new home, content that I would help someone get more wireless knowledge.

But the giving didn’t stop there. Even though I wasn’t a delegate for Wireless Field Day 3 or Wireless Field Day 4, the AirCheck kept coming back. Matthew gave it to Dan Cybulskie. Dan gave it to Scott Stapleton. The AirCheck headed down under for half of 2013. When Wireless Field Day 5 rolled around, I was now a staff member for Tech Field Day and working behind the scenes. I had forgotten about the AirCheck until a box arrived from Australia with Scott’s postmark on it. He mailed it back to the US to continue the tradition!

And so, the AirCheck passed along to a new set of hands every event. Blake Krone got it at Wireless Field Day 5. Then Jake Snyder, followed by Richard McIntosh and Scott McDermott. Even when we changed the name of the event to Mobility Field Day in 2016, the AirCheck passed along to Rowell Dionicio.

Changing Of The Guard

In the interim, the AirCheck product moved over to Netscout. They developed a new version, the G2, that was released after Mobility Field Day 1 in 2016. The word also got around to the Netscout folks that there was a magical G1 AirCheck that was passed along to successive Wireless/Mobility Field Day delegates as a way of keeping the learning active in the community.

Netscout was a presenter during Mobility Field Day 2 in 2017. Chris Hinz contacted me before the event and asked if we still gave away the AirCheck during the event. I assured him that we did. He said that a tradition like that should continue, even if the G1 AirCheck was getting a bit long in the tooth. He told me that he might be able to help us all out.

After the Netscout presentation at Mobility Field Day 2, Chris presented me with his special surprise: a brand new G2 AirCheck! Since we hadn’t given the old unit to its new recipient just yet, we decided that it was time to “retire” the old G1 and pass along the G2 to the next lucky contestant. Shaun Neal was the lucky delegate this time and took the new and improved G2 home with him Wednesday night. I was happy to see it go to him knowing that he’ll get to put it through its paces and learn from it. And then he will get to bring it back to the next Mobility Field Day for it to pass along to a new delegate and continue the chain of sharing.


Tom’s Take

When I gave away my G1 AirCheck all those years ago, I never expected it would turn into something so incredible. The sharing and exchange of tools and knowledge at both Wireless Field Day and Mobility Field Day help remind me of why I do this job with Stephen. The community is an awesome and amazing place sometimes. The new G2 AirCheck will have a long life helping delegates troubleshoot wireless issues.

The old G1 AirCheck, my AirCheck, is in my suitcase. It’s ready to start its retirement in my office, having earned thousands of frequent flyer miles as well as becoming a very important part of Tech Field Day lore. I couldn’t be happier to get it back at the end of its life knowing how much happiness it brought to people along the way.

Will Dell Networking Wither Away?


The behemoth merger of Dell and EMC is nearing conclusion. The first week of August is the target date for the final wrap up of all the financial and legal parts of the acquisition. After that is done, the long task of analyzing product lines and finding a way to reduce complexity and product sprawl begins. We’ve already seen the spin out of Quest and Sonicwall into a separate entity to raise cash for the final stretch of the acquisition. No doubt other storage and compute products are going to face a go/no go decision in the future. But one product line which is in real danger of disappearing is networking.

Whither Whitebox?

The first indicator of the problems with Dell and networking comes from whitebox switching. Dell released OS 10 earlier this year as a way to capitalize on the growing market of free operating systems running on commodity hardware. Right now, OS 10 runs on Dell equipment. In the future, Dell is hoping to spread it to whitebox devices. That means you could soon see Dell-branded OSes running on switches purchased from non-Dell sources and booting with ONIE.

Once OS 10 pushes forward, what does that mean for Dell’s hardware business? Dell would naturally want to keep selling devices to customers. Whitebox switches would undercut their ability to offer cheap ports to customers in data center deployments. Rather than give up that opportunity, Dell is positioning themselves to run some form of Dell software on top of that hardware for management purposes, which has always been a strong point for Dell. Losing the hardware means little to Dell if they have to lose profit margin to keep it there in the first place.

The second indicator of networking issues comes from comments from Michael Dell at EMCworld this year. Check out this short video featuring him with outgoing EMC CEO Joe Tucci:

Some of the telling comments in here involve Michael Dell’s praise for the NSX business model and how it is being adopted by a large number of other vendors in the industry. Also telling is their reaffirmation that Cisco is an important partnership in VCE and won’t be going away any time soon. While these two things don’t seem to be related on the surface, they both point to a truth Dell is trying hard to accept.

In the future, with overlay network virtualization models gaining traction in the data center, the underlying hardware will matter little. In almost every case, the hardware choice will come down to one of two options:

  1. Which switch is the cheapest?
  2. Which switch is on the Approved List?

That’s it. That’s the whole decision tree. No one will care what sticker is on the box. They will only care that it didn’t cost a fortune and that they won’t get fired for buying it. That’s bad news for any company that isn’t making white boxes and isn’t named Cisco. Other network vendors are going to try to add value in some way, but the overlay sitting on top of those bells and whistles will make it next to impossible to differentiate on anything but software. Whether that’s superior management capabilities, an open plug-in model, or something we haven’t thought of yet will make no difference in the end. Software will still be king, and the hardware will be either an inexpensive pawn or a costly piece that has been pre-approved.

Whither Wireless?

The other big inflection point that makes me worry about the Dell networking story is the lack of movement in the wireless space. Dell has historically been a company to partner first and acquire second. But with HPE’s acquisition of Aruba Networks last year, the dominos in the wireless space are still waiting to fall. Brocade raced out to buy Ruckus. Meru offered itself on a platter to anyone that would buy them. Now Aerohive stands as the last independent wireless vendor without a dance partner. Yes, they’ve announced that they are partnering with Dell, but have you been to the Dell Wireless Networking page? Can you guess what the Dell W-series is? Here’s a hint: it rhymes with “Peruba”.

Every time Dell leads with a W-series deployment, they are effectively paying their biggest competitor. They are opening the door for HPE/Aruba to come in and talk about not only wireless but servers, storage, and other networking as well. Dell would do well at this point to start deemphasizing the W-series and start highlighting the “new generation” of Aerohive APs and how they are going to be the focus moving forward.

The real solution would be for Dell to buy a wireless company and bring all the wireless expertise they’re selling in-house. That would show they are serious about both the campus network of the future and the data center network needed to support their other server and storage infrastructure. Sadly, with Michael Dell still leveraged from taking the company private just two years ago and mounting debt from this mega merger, Dell is looking to raise cash with spin-offs instead of spending it on yet another company to ingest and subsume. Which means a real non-partner wireless solution is still many years away.


Tom’s Take

Dell’s networking strategy is in maintenance mode. Make switches that support faster speeds for now, probably with Tomahawk support soon, and hope that this whole networking thing goes software sooner rather than later. Otherwise, the need to shore up the campus wireless offering, along with the coming decision about putting full support behind NSX and partnerships, is going to be a bitter pill to swallow. Perhaps Dell Networking will exist as an option for companies wanting a 100% Dell solution? Or maybe they are waiting for a new data center offering from Dell/EMC to drive profits to research and development to keep pace with Cisco and Arista? One can only hope that their networking flower doesn’t wither on the vine.