A Gift Guide for Sanity In Your Home IT Life

If you’re reading my blog you’re probably the designated IT person for your family or immediate friend group. Just like doctors who get called for every little scrape or plumbers who get the nod when something isn’t draining over the holidays, you’re the one who gets an email or a text message when something pops up that isn’t “right” or has a weird error message. These engagements are hard because you can’t just walk away from them and you’re likely not getting paid. So how can you be the Designated Computer Friend and still keep your sanity this holiday season?

The answer, dear reader, is gifts. If you’re struggling to find something to give your friends that says “I like you but I also want to reduce the number of times that you call me about your computer problems” then you should definitely read on for more info! Note that I’m not going to fill this post with affiliate links or plug products that have sponsored anything. Instead, I’m going to just share the classes or types of devices that I think are the best way to get control of things.

Step 1: Infrastructure Upgrades

When you go visit your parents for Thanksgiving or some other holiday check-in, are they still running the same wireless setup they got when they signed up for high-speed Internet? Is their Wi-Fi SSID still the default with the password printed on the side of the router/modem combo? Then you’re going to want to upgrade their experience to keep your sanity for the next few holidays.

The first thing you need to do is get control of their wireless setup. You need to get some form of wireless access point that wasn’t manufactured in the early part of the century. Most of the models on the market have Wi-Fi 6 support now. You don’t need to go crazy with a Wi-Fi 6E model for your loved ones right now because none of their devices will support it. You just need something more modern with a user interface that wasn’t written to look like Windows 3.1.

You also need to see about an access point that is controlled via a cloud console. If you’re the IT person in the group you probably already use some form of cloud control for your home equipment. You don’t need a full Meraki or Juniper Mist setup to lighten your load. That is, unless you already have one of those dashboards set up and you have spare capacity. Otherwise you could look at something like Ubiquiti as a middle ground.

Why a cloud controller AP? Because then you can log in and fix things or diagnose issues without needing to spend time talking to less technical users. You can find out if they have an unstable Internet connection or change SSID passwords at the drop of a hat. You can even set up notifications for those remote devices to let you know when a problem happens so you can be ready and waiting for the call. And you can keep tabs on necessary upgrades so that when the next major exploit comes out and your parents call asking if they’re going to get infected by this virus, you can just tell them they’re up-to-date and good to go. The other advantage of this method is that when you upgrade your own equipment at home you can just waterfall the old functional gear down to them and give them a “new to you” upgrade that they’ll appreciate.

Step 2: Device Upgrades

My dad was notorious for using everything long past the point where it needed to be retired. It’s the way he was raised. If there’s a hole you patch it. If it breaks you fix it. If that fix doesn’t work you wrap it in duct tape and use it until it crumbles to dust. While that works for the majority of things out there it does cause issues with technology far too often.

He had an iPad that he loved. He didn’t use it all day, every day but he did use it frequently enough to say that it was his primary computing device. It was a fourth-generation device, so it fell out of fashion a few years ago. When he would call me and ask me questions about why it was behaving a certain way or why he couldn’t download some new app from the App Store I would always remind him that he had an older device that wasn’t fast enough or new enough to run the latest apps or even the latest operating system. This would usually elicit a grumble or two and then we would move on.

If you’re the Designated IT Person and you spend half your time trying to figure out what versions of OS and software are running on a device, do yourself a favor and invest in a new device for your users just to ease the headaches. If they use a tablet as their primary computing device, which many people today do, then just buy a new one and help them migrate all the data across to the new one while you’re eating turkey or opening presents.

Being on newer hardware ensures that the operating system is the latest version with all the patches for security that are needed to keep your users safe. It also means you’re not trying to figure out what the last supported version of the software was that works with the rest of the things. I’ve played this game trying to get an Apple Watch to connect to an older phone with mismatched software, as well as trying to get support for newer wireless security on older laptops barely capable of anything beyond WPA1. The number of hours I burned trying to make the old junk work with the new stuff would have been better spent just buying a new version of the same old thing and getting all their software moved over. Problems seem to just disappear when you are running on something that was manufactured within the last five years.

Step 3: Help Them Remember

This is probably my most frequent request: forgotten passwords. Either it’s the forgotten Apple ID or maybe the wireless network password. My parents and in-laws forget the passwords they need to log into things all the time. I finally broke down and taught them how to use a password management tool a few years ago and it made all the difference in the world. Now, instead of having to remember their password for a shopping site they can just have the app fill everything in automatically. And since the master password for that app is the only one they need to remember, they aren’t constantly resetting passwords they’ve forgotten.

Better yet, most of these apps have a secure section for notes. So all those other important non-password things that seem to come up all the time are great to put in here. Social Security Numbers, bank account numbers, and so much more can be put in one central location and made easy to access. The best part? If you make it a shared vault you can request access to help them out when they forget how to get in. Or you can be designated as a trusted party that can access the account in the event of a tragedy. Getting your loved ones used to using password vaults now makes it much easier to have them storing important info there in case something happens down the road that requires you to jump in without their interaction. Trust me on this.


Tom’s Take

Your loved ones don’t need knick knacks and useless junk. If you want to show them you love them, give them the gift of not having to call you every couple of days because they can’t remember the wireless password or because they keep getting an error that says their app isn’t supported on this device. Invest in your sanity and their happiness by giving them something that works and that you can help manage quietly in the background. If you can make it stable and useful and magically working before they call you with a problem, you’re going to find yourself a happier person in the years to come.

IP Class is Now in Session

You may have seen something making the rounds on Twitter this week about a couple of proposed drafts designed to alleviate the problems with IPv4 exhaustion by repurposing some old IP spaces that aren’t available for use right now.

Ultimately, this is probably going to fail for a variety of reasons and looks like it’s more of a suggestion than anything else but I wanted to take a moment to talk about why this isn’t an effective way of fixing address issues.

Error Bearers

The first reason the Schoen drafts are going to fail is that most of the operating systems in the world won’t allow you to use reserved spaces for a system address. Because those spaces were marked as non-usable years ago, the logic to disallow them was baked into the operating systems. And even if a system isn’t configured to disallow that space there’s no guarantee the traffic is going to be transmitted.

Let’s take 127/8 as a good example. Was it a smart idea to mark 16 million addresses as loopback host-only space? Nope. But that ship has sailed and we aren’t going to be able to easily fix it. Too many systems will see any address with 127 in the first octet and assume it’s a loopback address, in much the same way that people have been known to assume the entire 192/8 address space is RFC1918 reserved instead of just 192.168.0.0/16. Logic rules and the people making decisions aren’t going to trust any space being used in that manner. Even if you did something creative like using NAT and only using it internally, you’re not going to be able to patch every version of every operating system in your organization.
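
If you want to see how deeply that assumption is baked in, here’s a quick illustration (mine, not from any of the drafts) using Python’s standard ipaddress module. The host stack is already classifying these ranges before a packet ever hits the wire:

```python
import ipaddress

for addr in ["127.1.2.3", "192.168.1.10", "240.0.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "loopback:", ip.is_loopback, "reserved:", ip.is_reserved,
          "globally routable:", ip.is_global)

# 127.1.2.3 comes back loopback=True because the entire 127/8 block is treated
# as loopback space, and 240.0.0.1 comes back reserved=True. Repurposing those
# blocks means finding and changing logic like this everywhere it has been
# baked in over the last few decades.
```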

We modify rules all the time and then have to spend years updating those modifications. Take area codes in North America for example. The old rules said that an area code had to have a zero or a one as the middle digit – ([2-9][0-1][2-9]) to use the Cisco UCM parlance. If the middle digit was something other than a zero or a one it wasn’t a valid NANP area code. As we began to expand the phone system in 1995 we changed those rules and now have area codes with all manner of middle digits.

What about prefixes? Those follow rules too. NANP prefixes must not start with a zero or a one – (area code) [2-9]XX-XXXX is the way they are coded. Prefixes that start with a zero or a one are invalid and can’t be used. If we suddenly decided that we needed to open up the numbers in existing area codes and include prefixes that start with those forbidden numbers we would need to reset all the dialing rules in systems all over the country. I know that I specifically programmed my CUCM servers to send an immediate error if you dialed a prefix with a zero or a one. And all of them would have to be manually reconfigured for such a change.
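
To make the comparison concrete, here’s a minimal sketch of those dial-plan rules as regular expressions. The patterns mirror the UCM-style notation above; the names and the simplified “post-1995” pattern are mine, for illustration only.

```python
import re

# Pre-1995 NANP area code rule, as quoted above in UCM-style notation.
OLD_AREA_CODE = re.compile(r"^[2-9][01][2-9]$")
# Today's rule, simplified: the middle digit can be anything.
NEW_AREA_CODE = re.compile(r"^[2-9][0-9][0-9]$")
# Prefixes still can't start with a 0 or a 1.
PREFIX = re.compile(r"^[2-9][0-9][0-9]$")

for code in ["405", "212", "580", "918"]:
    print(code,
          "valid pre-1995:", bool(OLD_AREA_CODE.match(code)),
          "valid today:", bool(NEW_AREA_CODE.match(code)))

for prefix in ["555", "123"]:
    print(prefix, "valid prefix:", bool(PREFIX.match(prefix)))

# 580 only became possible after the 1995 rule change. Every system that had
# hard-coded the old pattern had to be found and updated; that's the same
# chore the Schoen drafts would impose on every IP stack in existence.
```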

In much the same way, the address spaces that are reserved today as invalid would need to be patched out of systems from home computers to phones to networking equipment. And even if you think you got it all you’re going to miss one and wonder why it isn’t working. Worse yet, it might even silently fail because you may be able to transmit data to 95% of the systems out there but some intermediate system may discard your packets as invalid and never tell you what happened. You’ll spend hours or days chasing a problem you may not even be able to fix.

Avoiding the Solutions

The easiest way to look at these proposals is by understanding that people are really, really, really in love with IPv4. Despite the fact that the effort required to implement these reserved spaces would be better spent on IPv6 adoption, we still see drafts like these being submitted. There is a solution but people don’t want to use it. The modern Internet relies so much on the cloud that it would be simple to enable IPv6 in your provider space and use your engineering talent to drive better adoption of that instead. We’re already seeing this in places where address space has been depleted for a while now.

It may feel easier to spend more effort to revitalize the IPv4 space we all know and love. It may even feel triumphant when we’re able to reclaim address space that was wasted and use it for something productive instead of just teaching that you can’t configure devices with those spaces. And millions of devices will have IP address space to use, or more accurately there will be millions of addresses available to sell to people that will waste them anyway. Then what?

The short term gain from opening up IPv4 space at the expense of not developing IPv6 adoption is a fallacy that will end in pain. We can keep putting policy duct tape on the IPv4 exhaustion problem but we are eventually going to hit a wall we can’t overcome. The math doesn’t work when your address space is only 32 bits in total. That’s why IPv6 expanded the address space to 128 bits.

Sure, there have been mistakes in the way that IPv6 address space has been allocated and provisioned. Those mistakes would need to eventually be corrected and other configurations would need to be done in order to efficiently utilize the space. Again, the effort should be made to fix problems with a future-proof solution instead of trying our hardest to keep the lights on with the old system that’s falling apart for a few more years.


Tom’s Take

The race to find every last possible way to utilize the IPv4 space is exactly what I expected when we’re in the death throes of using it instead of IPv6. The easy solutions are done. The market and hunger for IPv4 space is only getting stronger. Instead of weaning the consumers off their existing setups and moving them to something future proof we’re feeding their needs for short term gains. If the purpose of this whole exercise was to get more address space to be rationed out for key systems to keep them online longer I might begrudgingly accept it. However, knowing that it would likely be opened up and fed to providers to be auctioned off in blocks to be ultimately wasted means all the extra effort is for no gain. These IETF drafts have a lot of issues and we’re better off letting them expire in May 2022. Because if we take up this cause and try to make them a reality we’re going to have to relearn a lot of lessons of the past we’ve forgotten.

The Process Will Save You

I had the opportunity to chat with my friend Chris Marget (@ChrisMarget) this week for the first time in a long while. It was good to catch up with all the things that have been going on and reminisce about the good old days. One of the topics that came up during our conversation was around working inside big organizations and the way that change processes are built.

I worked at IBM as an intern 20 years ago and the process to change things even back then was arduous. My experience with it was the deployment procedures to set up a new laptop. When I arrived the task took an hour and required something like five reboots. By the time I left we had changed that process and gotten it down to half an hour and only two reboots. However, before we could get the new directions approved as the procedure I had to test it and make sure that it was faster and produced the same result. I was frustrated but ultimately learned a lot about the glacial pace of improvements in big organizations.

Slow and Steady Finishes the Race

Change processes work to slow down the chaos that comes from having so many things conspiring to cause disaster. Probably the most famous change management framework is the Information Technology Infrastructure Library (ITIL). That little four-letter word has caused a massive amount of headaches in the IT space. Stage 3 of ITIL is the one that deals with changes in the infrastructure. There’s more to ITIL overall, including asset management and continual improvement, but usually anyone that takes ITIL’s name in vain is talking about the framework for change management.

This isn’t going to be a post about ITIL specifically but about process in general. What is your current change management process? If you’re in a medium to large sized shop you probably have a system that requires you to submit changes, get them evaluated and approved, and then implement them on a schedule during a downtime window. If you’re in a small shop you probably just make changes on the fly and hope for the best. If you work in DevOps you probably call them “deployments” and they happen whenever someone pushes code. Whatever the actual name for the process is you have one whether you realize it or not.

The true purpose of change management is to make sure what you’re doing to the infrastructure isn’t going to break anything. As frustrating as it is to go through the process every time, that’s exactly the point. You justify your changes and evaluate them for impact before scheduling them, as opposed to a “change it and find out” kind of methodology.

Process is ugly and painful and keeps you from making simple mistakes. If every part of a change form needs to be filled out you’re going to complete it to make sure you have all the information that is needed. If the change requires you to validate things in a lab before implementation then it’s forcing you to confirm that it’s not going to break anything along the way. There’s even a process exception for emergency changes and such that are more focused on getting the system running as opposed to other concerns. But whatever the process is it is designed to save you.

ITIL isn’t a pain in the ass by accident. It’s purposely built to force you to justify and document every step of the process. It’s built to keep you from creating disaster by helping you create the paper trail that will save you when everything goes wrong.

Saving Your Time But Not Your Sanity

I used to work with a great engineer named John Pross. John wrote up all the documentation for our migrations between versions of software, including Novell NetWare and Novell GroupWise. When it came time to upgrade our office GroupWise server there was some hesitation on the part of the executive suite because they were worried we were going to run into an error and lock them out of their email. The COO asked John if he had a process he followed for the migration. John’s response was perfect in my mind:

“Yes, and I treat every migration like the first one.”

What John meant was that he wasn’t going to skip steps or take shortcuts to make things go faster. Every part of the procedure was going to be followed to the letter. And if something came up that didn’t match what he thought the output should have been, everything stopped until he solved that issue. John was methodical like that.

People like to take shortcuts. It’s in our nature to save time and energy however we can. But shortcuts are where the change process starts falling apart. If you do something different this time compared to the last ten times you’ve done it because you’re in a hurry or you think this might be more efficient without testing it you’re opening yourself up for a world of trouble. Maybe not this time but certainly down the road when you try to build on your shortcut even more. Because that’s the nature of what we do.

As soon as you start cutting corners and ignoring process you’re going to increase the risk of creating massive issues rapidly. Think about something as simple as the Windows Server 2003 shutdown dialog box. People used to reboot a server on a whim. In Windows Server 2003, shutting the server down from the console required you to type in a reason for the shutdown. Most people that rebooted servers fell into two camps: those that followed their process and typed in the reason for the reboot, and those that just typed “;Lea;lksjfa;ldkjfadfk” as the reason and then were confused six months later when doing the post-mortem on an issue and cursing their snarky attitude toward reboot documentation.

Saving the Day

Change process saves you in two ways. The first is really apparent: it keeps you from making mistakes. By forcing you to figure out what needs to happen along the way and document the whole process from start to finish you have all the info you need to make things successful. If there’s an opportunity to catch mistakes along the way you’re going to have every opportunity to do that.

The second way change process saves you is when it fails. Yes, no process is perfect and there are more than a few times when the best intentions coupled with a flaw in the process created a beautiful disaster that gave everyone lots of opportunity to learn. The question always comes back to what was learned in that process.

Bad change failures usually lead to a sewer pipe of blame being pointed in your direction. People use process failures as a chance to deflect blame and avoid repercussions for doing something they shouldn’t have, or to try to increase their stock in the company. A truly honest failure analysis doesn’t blame anyone but the failed process and tries to find a way to fix it.

Chris told me in our conversation that he loved ITIL at one of his former jobs because every time it failed it led to a meaningful change in the process to avoid failure in the future. These are the reasons why blameless post-mortem discussions are so important. If the people followed the process and the process still failed, the people aren’t at fault. The process is incorrect or flawed and needs to be adjusted.

It’s like a recipe. If the instructions tell you to cook something for a specific amount of time and it’s not right, who is to blame? Is it you, because you did what you were told? Is it the recipe? Is it the instructions? If you start with the idea that you followed the process correctly and start trying to figure out where the process is wrong, you can fix the process for next time. Maybe you used a different kind of ingredient that needs more time. Or you made it thinner than normal and the stated time was too long this time. Whatever the result, you end up documenting the process and changing things for the future to prevent that mistake from happening again.

Of course, just like all good frameworks, change processes shouldn’t be changed without analysis. Because changing something just to save time or take a shortcut defeats the whole purpose! You need to justify why changes are necessary and prove they provide the same benefit with no additional exposure or potential loss. Otherwise you’re back to making changes and hoping you don’t get burned this time.


Tom’s Take

ITIL didn’t really become a popular thing until after I left IBM but I’m sure if I were still there I’d be up to my eyeballs in it right now. Because ITIL was designed to keep keyboard cowboys like me from doing things we really shouldn’t be doing. Change management processes are designed to save us at every step of the way and make us catch our errors before they become outages. The process doesn’t exist to make our lives problematic. That’s like saying a seat belt in a car only exists to get in my way. It may be a pain when you’re dealing with it regularly but when you need it you’re going to wish you’d been using it the whole time. Trust in the process and you will be saved.

Is the M1 MacBook Pro Wi-Fi Really Slower?

I ordered a new M1 MacBook Pro to upgrade my existing model from 2016. I’m still waiting on it to arrive but managed to catch a sensationalist headline in the process:

“New MacBook Wi-Fi Slower than Intel Model!”

The article referenced this spec sheet from Apple listing the various cards and capabilities of the MacBook Pro line. I looked it over and found that, according to the tables, the wireless card in the M1 MacBook Pro is capable of a maximum data rate of 1200 Mbps. The wireless card in the older Intel MacBook Pro models going back to 2017 is capable of 1300 Mbps. Case closed! The older one is indeed faster. Except that’s not the case anywhere but on paper.

PHYs, Damned Lies, and Statistics

You’d be forgiven for jumping right to the numbers in the table and using your first grade inequality math to figure out that 1300 is bigger than 1200. I’m sure it’s what the authors of the article did. Me? I decided to dig in a little deeper to find some answers.

It only took me about 10 seconds to find the first answer as to one of the differences in the numbers. The older MacBook Pro used a Wi-Fi card that was capable of three spatial streams (3SS). Non-wireless nerds reading this post may wonder what a spatial stream is. The short answer is that it is a separate, unique stream of data along a different path. Multiple spatial streams can be leveraged through Multiple In, Multiple Out (MIMO) to increase the amount of data being sent to a wireless client.

The older MacBook Pro has support for 3SS. The new M1 MacBook Pro has a card that supports up to 2SS. Problem solved, right? Well, not exactly. You’re also talking about a client radio that supports different wireless protocols as well. The older model supported 802.11n (Wi-Fi 4) and 802.11ac (Wi-Fi 5) only. The newer model supports 802.11ax (Wi-Fi 6) as well. The maximum data rates on the Apple support page are quoted for 11ac on the Intel MBP and for 11ax on the M1 MBP.

Okay, so there are different Wi-Fi standards at play here. Can’t be too hard to figure out, right? Except that the move from Wi-Fi 5 to Wi-Fi 6 is more than just incrementing the number. There are a huge number of advances that have been included to increase efficiency of transmission and ensure that devices can get on and off the air quickly to help maximize throughput. It’s not unlike the difference between the M1 chip in the MacBook and its older Intel counterpart. They may both do processing but the way they do it is radically different.

You also have to understand something called the Modulation and Coding Scheme (MCS). MCS defines the data rates possible for a given combination of signal-to-noise ratio (SNR), RSSI, and Quadrature Amplitude Modulation (QAM). Trying to define QAM could take all day, so I’ll just leave that one to GT Hill.

The MCS table for a given protocol will tell you what the maximum data rate for the client radio is. Let’s look at the older MacBook Pro first. Here’s a resource from NetBeez that has the 802.11ac MCS rates. If you look up the details from the Apple support doc for a 3SS radio using VHT 9 and an 80MHz channel bandwidth you’ll find the rate is exactly 1300 Mbps.

Here’s the MCS table for 802.11ax courtesy of Francois Verges. WAY bigger, right? You’re likely going to want to click on the link to the Google Sheet in his post to be able to read it without a microscope. If you look at the table and find the row that equates to an 11ax client using 2SS, HE MCS 11, and 80MHz channel bandwidth you’ll see that the number is 1201. I’ll forgive Apple for rounding it down to keep the comparison consistent.
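
If you’d rather not squint at tables, the quoted numbers fall out of a little arithmetic. Here’s a back-of-the-napkin Python version using the standard 80MHz PHY parameters (234 data subcarriers and a 3.6µs short-GI symbol for 11ac, 980 data subcarriers and a 13.6µs symbol with a 0.8µs guard interval for 11ax). The function is mine, not Apple’s math, but it reproduces both quoted rates and the per-stream numbers that come up later in this post.

```python
def phy_rate_mbps(data_subcarriers, bits_per_symbol, coding_rate, symbol_us, streams):
    # Bits carried per OFDM symbol across all spatial streams, divided by the
    # symbol duration in microseconds, conveniently comes out in Mbps.
    return data_subcarriers * bits_per_symbol * coding_rate * streams / symbol_us

# Intel MBP: 802.11ac, VHT MCS 9 (256-QAM = 8 bits, rate 5/6), 80MHz, 3 streams
print(round(phy_rate_mbps(234, 8, 5/6, 3.6, 3)))    # 1300
# M1 MBP: 802.11ax, HE MCS 11 (1024-QAM = 10 bits, rate 5/6), 80MHz, 2 streams
print(round(phy_rate_mbps(980, 10, 5/6, 13.6, 2)))  # 1201
# Per-stream rates referenced later in the post:
print(round(phy_rate_mbps(234, 8, 5/6, 3.6, 1)))    # 433
print(round(phy_rate_mbps(980, 10, 5/6, 13.6, 1)))  # 600
```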

Again, this all checks out. The Wi-Fi equivalent of actuarial tables says that the older one is faster. And it is under absolutely perfect conditions. Because the quoted numbers for the Apple document are the maximums for those MCSes. When’s the last time you got the maximum amount of throughput on a wired link? Now remember that in this case you’re going to need to have perfect airtime conditions to get there. Which usually means you’ve got to be right up against the AP or within a very short distance of it. And that 80MHz channel bandwidth? As my friend Sam Clements says, that’s like drag racing a school bus.

The World Isn’t Made Out Of Paper

If you are just taking the numbers off of a table and reproducing them and claiming one is better than the other then you’re probably the kind of person that makes buying decisions for your car based on what the highest number on the speedometer says. Why take into account other factors like cargo capacity, passenger size, or even convertible capability? The numbers on this one go higher!

In fact, when you unpack the numbers as I did, you’ll see that the apparent 100 Mbps difference between the two radios isn’t likely to come into play at all in the real world. As soon as you move more than 15 feet away from the AP or put a wall between the client device and your AP you will see a reduction in the data rate. The top end of both protocols runs in the 5GHz spectrum, which isn’t as forgiving with walls as 2.4GHz is. Moreover, if there are other interfering sources in your environment you’re not going to get nearly the amount of throughput you’d like.

What about that difference in spatial streams? I wondered about that for the longest time. Why would you purposely put fewer spatial streams in a client device when you know that you could max it out? The answer is that even with that many spatial streams reality is a very different beast. Devin Akin wrote a post about why throughput numbers aren’t always the same as the tables. In that post he mentioned that a typical client mix in a network in 2018 was about 66% devices with 1SS, 33% devices with 2SS, and less than 1% of devices with 3SS. While those numbers have probably changed in 2021 thanks to the iPhone and iPad now having 2SS radios, I don’t think the 3SS numbers have moved much. The only devices that have 3SS are laptops and other bigger units. It’s harder for a smaller device to sustain a 3SS radio, so most devices only include support for two streams.

The other thing to notice here is that the value of what a spatial stream brings you is different between the two protocols. In 802.11ac, the max data rate for a single spatial stream is about 433 Mbps. For 802.11ax it’s 600 Mbps. So a 2SS 11ac radio maxes out at 866 Mbps while a 3SS 11ax radio setup would get you around 1800 Mbps. It’s far more likely that you’ll use a 2SS 11ax radio efficiently far more often than you’ll ever see the maximum throughput of a 3SS 11ac radio.


Tom’s Take

This whole tale is a cautionary example of why you need to do your own research, even if you aren’t a Wi-Fi pro. The headline was both technically correct and wildly inaccurate. Yes, the numbers were different. Yes, the numbers favored the older model. No one is going to see the maximum throughput under most normal conditions. Yes, having support for Wi-Fi 6 in the new MacBook Pro is a better choice overall. You’re not going to miss that 100 Mbps of throughput in your daily life. Instead you’re going to enjoy a better protocol with more responsiveness in the bands you use on a regular basis. You’re still faster than the gigabit Ethernet adapters so enjoy the future of Wi-Fi. And don’t believe the numbers on paper.

Getting In Front of Future Regret

Yesterday I sat in on the keynote from Commvault Connections21 and participated in a live blog of it on Gestalt IT. There was a lot of interesting info around security, especially related to how backup and disaster recovery companies are trying to add value to the growing ransomware issue in global commerce. One thing I took away from the conversation wasn’t specifically related to security, though, and I wanted to dive into it a bit more.

Reza Morakabati, CIO for Commvault, was asked what he thought teams needed to do to advance their data strategy. And his response was very insightful:

Ask your team to imagine waking up to hear some major incident has happened. What would their biggest regret be? Now, go to work tomorrow and fix it.

It’s a short, sweet, and powerful sentence. Technology professionals are usually focused on implementing new things to improve productivity or introduce new features to users and customers. We focus on moving fast and making people happy. Security is often seen as running counter to this ideal. Security wants to keep people safe and secure. It’s not unlike the parents that hold on to their child’s bicycle after the training wheels come off just to make sure the kids are safe. The kids want to ride and be free. The parents aren’t quite sure how secure they’re going to be just yet.

Regret Storming

Thought exercises make for entertaining ways to scare yourself to death some days. If you spend too much time thinking about all the ways that things can go wrong you’re going to spend far too much energy focused on the negative aspects of your work. However, you do need to occasionally open yourself up to the likelihood that things are going to go wrong at some point.

For the thought exercise above, it’s not crucial to think about how things could go wrong. It’s more important to think about the worst outcome those failures could produce and how much you would regret it. You need to identify those areas and try to figure out how they can be mitigated.

Let me give you a specific example from my area. In May 2013 a massive tornado ripped through Moore, OK, just north of where I live. It was a tragic event with loss of life. People were displaced and homes and businesses were destroyed. One of the places that was damaged severely was the Moore Public Schools administration building. In the aftermath of trying to clean up the debris and find survivors, one of my friends that worked for an IT vendor told me he spent hours helping sift through the rubble of the building looking for hard disk drives for the district’s servers. Why? Because the tornado had struck just before the payroll for the district’s teachers and staff was due to be run. Without the drives they couldn’t run payroll or print paychecks for those employees. With an even greater need to have funds to pay for food or start repairs on their homes you can imagine that not getting paid was going to be a big deal for those educators and staff.

There are a lot of regrets that came out of the May 2013 tornado. Loss of life and loss of property are always at the top of the list. The psychological damage of enduring something like that is also a huge impact. But for the school district one of the biggest regrets they faced was not having a contingency plan for what to do about paying their employees to help them deal with the disaster. It sounds small in comparison to the millions of dollars of damage that happened but it also represents something important that can be controlled. The school system can’t upgrade the warning system or construct buildings that can withstand the most powerful storms imaginable. But they can fix their systems to prevent teachers from going without resources in the event of an emergency.

In this case, the regret is not being able to pay teachers if the district data center goes down. How could we fix that regret today if we had imagined it beforehand? We could have migrated the data center to the cloud so no single weather event could take it out. Likewise, we could have moved to a service that provides payroll entry and check printing that could be accessed from anywhere. We could also have encouraged our teachers and employees to use direct deposit functions to reduce the need to physically print checks. Technology today provides us with a number of solutions to the regret we face. We can put together plans to implement any one of them quickly. We just need to identify the problem and build a resolution for it.

Building Your Future

It’s not easy to foresee every possible outcome. Nor should it be. But if you focus on the feelings those unknown outcomes could bring you’ll have a much better sense for what’s important to protect and how to go about doing it. Are you worried your customer data is going to be stolen and shared on the Internet? Then you need to focus your efforts on protecting it. Are you concerned your AWS bill is going to skyrocket if someone steals your credentials and starts borrowing your resource pool? Then you need to have governance in place to prevent unauthorized users from doing that thing.

You don’t have to have a solution for every possible regret. You may even find that some of the things you thought you might end up regretting are actually pretty mild. If you’re not concerned about what would happen to your testing environment because you can just clone it from a repository then you can put that to bed and not worry about it any longer. Likewise, you may discover some regrets you didn’t anticipate. For example, if you’re using Active Directory credentials to back up your server data, you need to make sure you’re backing up Active Directory as well. You’re going to find yourself infuriated if you have the data you need to get back to business but it’s locked behind cryptographic locks that you can’t open because someone forgot to back up a domain controller.


Tom’s Take

I’ve been told that I’m somewhat negative because I’m always worried about what could go wrong with a project or an event. It’s not that I’m a pessimist as much as I’ve got a track record for seeing how things can go off the rails. Thanks to Commvault I’m going to spend more time thinking of my regrets and trying to plan for them to be mitigated ahead of time so all the possible ways things could fail won’t consume my thoughts. I don’t have to have a plan for everything. I just need to get in front of the regrets before I feel them for real.

Fast Friday Thoughts From Security Field Day

It’s a busy week for me thanks to Security Field Day but I didn’t want to leave you without some thoughts that have popped up this week from the discussions we’ve been having. Security is one of those topics that creates a lot of thought-provoking ideas and makes you seriously wonder if you’re doing it right all the time.

  • Never underestimate the value of having plumbing that connects all your systems. You may look at a solution and think to yourself “All this does is aggregate data from other sources”. Which raises the question: How do you do it now? Sure, antivirus fires alerts like a car alarm. But when you get breached and find out that those alerts caught it weeks ago you’re going to wish you had a better idea of what was going on. You need a way to send that data somewhere to be dealt with and cataloged properly. This is one of the biggest reasons why machine learning is being applied to the massive amount of data we gather in security. Having an algorithm working to find the important pieces means you don’t miss things that are important to you.
  • Not every solution is going to solve every problem you have. My dishwasher does a horrible job of washing my clothes or vacuuming my carpets. Is it the fault of the dishwasher? Or is it my issue with defining the problem? We need to scope our issues and our solutions appropriately. Just because my kitchen knives can open a package in a pinch doesn’t mean that the makers need to include package-opening features in a future release because I use them exclusively for that purpose. Once we start wanting the vendors to build a one-stop-shop kind of solution we’re going to create the kind of technical debt that we need to avoid. We also need to remember to scope problems so that they’re solvable. Postulating corner cases with no clear answers is important for threat hunting or policy creation. It’s not so great when you’re shopping through a catalog of software.
  • Every term in every industry is going to have a different definition based on who is using it. A knife to me is either a tool used on a campout or a tool used in a kitchen. Others see a knife as a tool for spreading butter or even doing surgery. It’s a matter of perspective. You need to make sure people know the perspective you’re coming from before you decide that the tool isn’t going to work properly. I try my best to put myself in the shoes of others when I’m evaluating solutions or use cases. Just because I don’t use something in a certain way doesn’t mean it can’t be used that way. And my environment is different from everyone else’s. Which means best practices are really just recommended suggestions.
  • Whatever acronym you’ve tied yourself to this week is going to change next week because there’s a new definition of what you should be doing according to some expert out there. Don’t build your practice on whatever is hot in the market. Build it on what you need to accomplish and incorporate elements of new things into what you’re doing. The story of people ripping and replacing working platforms because of an analyst suggestion sounds horrible but happens more often than we’d like to admit. Trust your people, not the brochures.

Tom’s Take

Security changes faster than any area that I’ve seen. Cloud is practically a glacier compared to EPP, XDR, and SOPV. I could even make up an acronym and throw it on that list and you might not even notice. You have to stay current but you also have to trust that you’re doing all you can. Breaches are going to happen no matter what you do. You have to hope you’ve done your best and that you can contain the damage. Remember that good security comes from asking the right questions instead of just plugging tools into the mix to solve issues you don’t have.

Choosing the Least Incorrect Answer

My son was complaining to me the other day that he missed one question on a multiple choice quiz in his class and got a low B grade instead of a perfect score. When I asked him why he was frustrated he told me, “Because it was easy and I missed it. But I think the question was wrong.” As usual, I pressed him further to explain his reasoning and found out that the question was indeed ambiguous but the answer choices were pretty obviously wrong all over. He asked me why someone would write a test like that. Which is how he got a big lesson on writing test questions.

Spin the Wheel

When you write a multiple choice test question for any reputable exam you are supposed to pick “wrong” answers, known as distractors, that ensure that the candidate doesn’t have a better than 25% chance of guessing the correct answer. You’ve probably seen this before because you took some kind of simple quiz that had answers that were completely wrong to the point of being easy to pick out. Those quizzes are usually designed to be passed with the minimum amount of effort.

This also extends to a question that includes answer choices that are paired. If you write a question that says “pick the three best answers” with six options that are binary pairs you’re basically saying to the candidate “Pick between these two three times and you’re probably going to get it right”. I’ve seen a number of these kinds of questions over the years and it feels like a shortcut to getting one on the house.
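
For what it’s worth, the math backs up that gut feeling. Here’s a quick sanity check on the guessing odds (my numbers, not anything from an exam blueprint):

```python
from math import comb

single_question = 1 / 4             # guess one of four choices: 25%
three_binary_pairs = (1 / 2) ** 3   # sweep three paired choices: 12.5%
three_of_six = 1 / comb(6, 3)       # a fair "choose 3 of 6" item: 5%

print(single_question, three_binary_pairs, three_of_six)
# Pairing the options more than doubles a blind guesser's odds of sweeping the
# whole question compared to a fair choose-3-of-6 item, and any partial
# knowledge makes each coin flip even easier.
```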

The most devious questions come from the math side of the house. Some of my friends have been known to write questions for their math tests and purposely work the problem wrong at a critical point to get a distractor that looks very plausible. Make the same mistake and you’re going to see your answer sitting right there in the choices and get the question wrong. The extra effort here matters because if you see too many students picking the same wrong distractor you know that there may be confusion about the process at that critical point. Also, the effort to make math question distractors look plausible is impressive and way too time consuming.

Why Is It Wrong?

Compelling distractors are a requirement for any sufficiently advanced testing platform. The professionals that write the tests understand that guessing your way through a multiple choice exam is a bad precedent and the whole format needs to be fair. The secret to getting a leg up on these exams is more than just knowing the right answer. It’s about knowing why things are wrong.

Take an easy example: OSPF LSAs. A question may ask you about a particular router in a diagram and ask which LSAs it sees. If the answer choices are fairly constructed you’re going to be faced with some plausible looking answers. Say the question is about a not-so-stubby area (NSSA). If you know the specifics of what makes this area unique you can start eliminating choices. What if it’s asking about which LSAs are not allowed? Well, if you forgot the answer to that you can start by reading the answer choices and applying logic.

You can usually improve your chances of getting a question right by figuring out why the answers given are wrong for the question. In the above example, if LSA Type 1 is listed as an answer choice ask yourself “Why is this the wrong answer?” For the question about disallowed LSA types you can eliminate this choice because LSA Type 1 is always present inside an area. For a question about visibility of that LSA outside of an area you’d be asking a different question. But if you know that Type 1 LSAs are local and always visible you can cross off that as a potential answer. That means you boosted your chances of guessing the answer to 33%!

The question itself is easy if you know that NSSAs use Type 7 LSAs to convey information because Type 5 LSAs aren’t allowed. But if you understand why the other answers are wrong for the question asked you can also check your work. Why would you want to do that? Because the wording of the question can trip you up. How many times have you skimmed the question looking for keywords and missing things like “not” or “except”? If you work the question backwards looking for why answers are wrong and you keep coming up with them being right you may have read the question incorrectly in the first place. Likewise, if every answer is wrong somehow you may have a bad question on your hands.
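
If you like studying with cheat sheets, that elimination logic is easy to capture in a little lookup table. This is just a study-aid sketch I threw together covering the common area types, not anything from an exam vendor:

```python
# Which LSA types each OSPF area type filters out. Normal areas pass Types 1-5;
# the stub and NSSA variants block externals in different ways.
BLOCKED_LSAS = {
    "normal":         set(),
    "stub":           {4, 5},      # no ASBR-summary or external LSAs
    "totally stubby": {3, 4, 5},   # only a default Type 3 comes in from the ABR
    "nssa":           {4, 5},      # externals are carried as Type 7 instead
}

def disallowed(area_type, candidates):
    """Which of the candidate LSA types could be the 'not allowed' answer."""
    return sorted(set(candidates) & BLOCKED_LSAS[area_type])

print(disallowed("nssa", [1, 3, 5, 7]))  # [5]: Types 1, 3, and 7 all appear in an NSSA
```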

What happens if the question is poorly worded and all the answer choices are wrong? Well, that’s when you get to pick the least incorrect answer and leave feedback. It’s not about picking the perfect answer in these situations. You have to know that a lot of hands touch test questions and there are times when things are rewritten and the intent can be changed somehow. If you know that you are dealing with a question that is ambiguous or flat-out wrong you should leave feedback in the question comments so it can be corrected. But you still have to answer the question. So, use the above method to find the piece that is the least incorrect and go with that choice. It may not be “right” according to the test question writer, but if enough people pick that answer you’re going to see someone taking a hard look at the question.


Tom’s Take

We are going to take a lot of tests in our lives. Multiple choice tests are easier but require lots of work, both on the part of the writer and the taker. It’s not enough to just memorize what the correct answers are going to be. If you study hard and understand why the distractors are incorrect you’ll have a more complete understanding of the material and you’ll be able to check your work as you go along. Given that most certification exams don’t allow you to go back and change answers once you’ve moved past the question the ability to check yourself in real time gives you an advantage that can mean the difference between passing and retaking the exam. And that same approach can help you when everything on the page looks wrong.

What Can You Learn From Facebook’s Meltdown?

I wanted to wait to put out a hot take on the Facebook issues from earlier this week because failures of this magnitude always have details that come out well after the actual excitement is done. A company like Facebook isn’t going to do the kind of in-depth post-mortem that we might like to see but the amount of information coming out from other areas does point to some interesting circumstances causing this situation.

Let me start off the whole thing by reiterating something important: Your network looks absolutely nothing like Facebook. The scale of what goes on there is unimaginable to the normal person. The average person has no conception of what one billion looks like. Likewise, the scale of the networking that goes on at Facebook is beyond the ken of most networking professionals. I’m not saying this to make your network feel inferior. More that I’m trying to help you understand that your network operations resemble those at Facebook in the same way that a model airplane resembles a space shuttle. They’re alike on the surface only.

Facebook has unique challenges that they have to face in their own way. Network automation there isn’t a bonus. It’s a necessity. The way they deploy changes and analyze results doesn’t look anything like any software we’ve ever used. I remember moderating a panel that had a Facebook networking person talking about some of the challenges they faced all the way back in 2013:

The technology that Najam Ahmad is talking about is two or three generations removed from what is being used today. They don’t manage switches. They manage racks and rows. They don’t buy off-the-shelf software to do things. They write their own tools to scale the way they need them to scale. It’s not unlike a blacksmith making a tool for a very specific use case that would never be useful to any non-blacksmith.

Ludicrous Case Scenarios

One of the things that compounded the problems at Facebook was the inability to see what the worst case scenario could bring. The little clever things that Facebook has done to make their lives easier and improve reaction times ended up harming them in the end. I’ve talked before about how Facebook writes things from a standpoint of unlimited resources. They build their data centers as if the network will always be available and bandwidth is an unlimited resource that never has contention. The average Facebook programmer likely never lived in a world where a dial-up modem was high-speed Internet connectivity.

To that end, the way they build the rest of their architecture around those presumptions creates the possibility of absurd failure conditions. Take the report of the door entry system. According to reports, part of the reason why things were slow to come back up was that the door entry system for the Facebook data centers wouldn’t allow access to the people who knew how to revert the changes that caused the issue. Usually card readers will retain their last good configuration in the event of a power outage to ensure that people with badges can still get in. It could be that the ones at Facebook work differently or just went down with the rest of the network. But whatever the case, the card readers weren’t allowing people into the data center. Another report says that the doors didn’t even have the ability to be opened by a key. That’s the kind of planning you do when you’ve never had to break open a locked door.

Likewise, I find the situation with the DNS servers to be equally crazy. Per other reports the DNS servers at Facebook are constantly monitoring connectivity to the internal network. If that goes down for some reason the DNS servers withdraw the BGP routes being advertised for the Facebook AS until the issue is resolved. That’s what caused the outage from the outside world. Why would you do this? Sure, it’s clever to basically have your infrastructure withdraw the routing info in case you’re offline to ensure that users aren’t hammering your system with massive amounts of retries. But why put that decision in the hands of your DNS servers? Why not have some other more reliable system do it instead?
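
Based purely on those public reports, the behavior looks something like the sketch below. This is a conceptual illustration, not Facebook’s code; the callables are hypothetical stand-ins for whatever actually probes the backbone and talks to the BGP speakers.

```python
import time

def dns_health_loop(backbone_is_healthy, announce_prefixes, withdraw_prefixes,
                    interval_seconds=10):
    """Advertise our anycast prefixes only while we can reach the backbone."""
    advertised = True
    while True:
        if backbone_is_healthy():
            if not advertised:
                announce_prefixes()   # backbone reachable again: put the routes back
                advertised = True
        elif advertised:
            withdraw_prefixes()       # can't reach the backbone: pull the routes
            advertised = False
        time.sleep(interval_seconds)

# The failure mode: when a bad change really does cut every DNS site off from
# the backbone, every site runs this same logic, every site withdraws at once,
# and the whole AS vanishes from the Internet, along with the tools and people
# who need those same systems to fix it.
```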

I get that the mantra at Facebook has always been “fail fast” and that their architecture is built in such a way as to encourage individual systems to go down independently of others. That’s why Messenger can be down but the feed stays up, or why WhatsApp can have issues but you can still use Instagram. However, why was there no test of “what happens when it all goes down?” It could be that the idea of the entire network going offline is unthinkable to the average engineer. It could also be that the response to the whole network going down all at once was to just shut everything down anyway. But what about the plan for getting back online? Or, worse yet, what about all the things that impacted the ability to get back online?

Fruits of the Poisoned Tree

That’s where the other part of my rant comes into play. It’s not enough that Facebook didn’t think ahead to plan on a failure of this magnitude. It’s also that their teams didn’t think of what would be impacted when it happened. The door entry system. The remote tools used to maintain the networking equipment. The ability for anyone inside the building to do anything. There was no plan for what could happen when every system went down all at once. Whether that was because no one knew how interdependent those services were or because no one could think of a time when everything would go down all at once is immaterial. You need to plan for the worst and figure out what dependencies look like.

Amazon learned this the hard way a few years ago when US-East-1 went offline. No one believed it at the time because the status dashboard still showed green lights. The problem? The dashboard was hosted in the region that went down and the lights couldn’t change! That problem was remedied soon afterwards but it was a chuckle-worthy issue for sure.

Perhaps it’s because I work in an area where disasters are a bit more common but I’ve always tried to think ahead to where the issues could crop up and how to combat them. What if you lose power completely? What if your network connection is offline for an extended period? What if the same tornado that takes out your main data center also wipes out your backup tapes? It might seem a bit crazy to consider these things but the alternative is not having an answer in the off chance it happens.

In the case of Facebook, the question should have been “what happens if a rogue configuration deployment takes us down?” The answer better not be “roll it back” because you’re not thinking far enough ahead. With the scale of their systems it isn’t hard to create a change to knock a bunch of it offline quickly. Most of the controls that are put in place are designed to prevent that from happening but you need to have a plan for what to do if it does. No one expects a disaster. But you still need to know what to do if one happens.

Thus Endeth The Lesson

What we need to take away from this is that our best intentions can’t defeat the unexpected. Most major providers were silent on the schadenfreude of the situation because they know they could have been the one to suffer from it. You may not have a network like Facebook but you can absolutely take away some lessons from this situation.

You need to have a plan. You need to have a printed copy of that plan. It needs to be stored in a place where people can find it. It needs to be written in a way that people that find it can implement it step-by-step. You need to keep it updated to reflect changes. You need to practice for disaster and quit assuming that everything will keep working correctly 100% of the time. And you need to have a backup plan for everything in your environment. What if the doors seal shut? What if the person with the keys to unlock the racks is missing? How do we ensure the systems don’t come back up in a degraded state before they’re ready? The list is endless but that’s only because you haven’t started writing it yet.


Tom’s Take

There is going to be a ton of digital ink spilled on this outage. People are going to ask questions that don’t have answers and pontificate about how it could have been avoided. Hell, I’m doing it right now. However, I think the issues that compounded the problems are ones that can be addressed no matter what technology you’re using. Backup plans are important for everything you do, from campouts to dishwasher installations to social media websites. You need to plan for the worst and make sure that the people you work with know where to find the answers when everything fails. This is the best kind of learning experience because so many eyes are on it. Take what you can from this and apply it where needed in your enterprise. Your network may not look anything like Facebook, but with some planning now you don’t have to worry about it crashing like theirs did either.

Chip Shortages Aren’t Sweet for Networking

Have you tried to order networking gear recently? You’re probably cursing because the lead times on most everything are getting long. It’s not uncommon to see lead times on wireless access points or switch gear reaching 180 days or more. Reports from the Internet say that some people are still waiting to get things they ordered this spring. The prospect of rapid delivery of equipment is fading like the summer sun.

Why are we here? What happened? And can we do anything about it?

Fewer Chips, More Air

The pandemic has obviously had the biggest impact for a number of reasons. When a fabrication facility shuts down it doesn’t just ramp back up. Even when all the workers are healthy and the city where it is located is open for business it takes weeks to bring everything back online to full capacity. Just like any manufacturing facility you can’t just snap your fingers and get back to churning out the widgets.

The pandemic has also strained supply chains around the world. Even if the fabs had stayed open this entire time you’d be looking at a shortage of materials to make the equipment. Global supply chains were running extremely lean in 2019, and stress on one link has created a cascade effect felt everywhere. The lack of toilet paper or lunchmeat in your grocery store shows that. Even when the supply is available the ability to deliver it is impacted.

The supply chain problem also extends to the other side of the shipping container. Even if the fabs had enough chips to sell to anyone that wanted them, it’s hard to get those parts delivered to the companies that make things. If this were simply a matter of one company not getting the materials it needed to make a widget in a reasonable time, there wouldn’t be much of a problem. But because these companies make things that other companies use to make things, the hiccups in the chain are exacerbated. If TSMC is delayed by a month getting a run of chips out, that month-long delay only grows for everyone further down the line.

We’ve got issues getting facilities back online. We’ve got supply chains causing problems all over the place. Simple economics says we should just build more facilities, right? The opportunity cost of not having enough production capacity means there’s ample room to make more of the things we need and profit from doing it. You’re right. Companies like Intel are bringing new fabs online as fast as they can. Sadly, that is a process measured in months or even years. The capacity we need to offset the disruption to the chip market should have been built two years ago if we wanted it ready now.

All of these factors add up to one simple truth. Without the materials, the manufacturing, or the supply chain to deliver the equipment, we’re going to be left out in the cold if we want something delivered today. Just in Time inventory is about to become Somewhere in Time inventory. We’re powerless to change the supply chain. Does that mean we’re powerless to prevent disruption to our planning process?

Proactive Processes

We may not be able to assemble networking gear ourselves to speed up the process, but we are far from helpless. The process and planning around gear acquisition and deployment have to change to reflect the current state of the world. We can have an impact, provided we’re ready to lead by example.

  • Procure NOW: Purchasing departments are notorious for waiting until the last minute to buy things. Part of that reasoning is the time value of money: cash spent later is cash that could be earning a return today. You need to go to the purchasing department and educate them about how things are working right now. Instead of letting them sit on the project for another few months, tell them the parts have to be ordered right now in order to be delivered in six or seven months. They’re going to fight you and tell you they can just wait. However, we all know this isn’t going to clear up any time soon. If they persist in telling you to wait, have them try to go car shopping to illustrate the issue. If you want the gear by the end of Q1 2022, you need to get that order in NOW.
  • Preconfigure Things However You Can: If you’re stuck waiting six months to get switches and access points, are you going to be stuck waiting another month after they come in to configure them? I hope that answer is a resounding “NO”. There are resources available to make sure you can get things configured now so you’re not waiting while the equipment is sitting on a loading dock somewhere. Reach out to your VAR or your vendor and get some time on lab gear in the interim. If you ordered a wireless controller or a data center switch, you can probably get rack time on a very similar device, or even the exact same model, in a lab somewhere. That means you can work on a basic configuration or even provision things like VLANs or SSIDs so you’re not reinventing the wheel when things come in. Even if all you have is a skeleton config, you’re hours ahead of where you would be otherwise; there’s a small sketch of that idea right after this list. And if the VAR or vendor gives you a hard time about lab gear, you can always remind them that there are other options available for the next product refresh.
  • Minimum Viable Functionality: All this advice is great for a new pod or an addition to an existing network that isn’t critical. What if the gear you ordered is needed right now? What if this project can’t wait? How can you make things work today with nothing in hand? This is a bit trickier because it requires duplicate work. If you need to get things operational today, you have to work with what you have today. That may mean salvaging an old lab switch or pulling something out of production and reducing available ports until the new gear arrives. It also means you’re going to have to back up the old configs, erase them completely (don’t forget the VLAN database and VTP server configuration), and then put on the new info; one of the sketches after this list shows a simple way to capture those configs before you wipe them. When the new equipment comes in, you’re going to do it all over again in reverse. It’s more work, but it leads to things being operational today instead of constantly telling someone that it’s going to be a while. If you’re a VAR doing this for a customer, you’d better make it very clear that this is temporary and just a loan. Otherwise you might find your equipment becoming a permanent addition even after everything comes in.
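
To make the preconfiguration idea above concrete, here is a minimal sketch of building a skeleton config before the hardware ever ships. Everything in it is hypothetical: the hostname, VLAN IDs, and management addressing are stand-ins for whatever is in your design document, and the IOS-style syntax should be adjusted to whatever platform you actually ordered.

```python
"""Sketch: pre-building a skeleton switch config before the hardware arrives.

Assumes IOS-style syntax and made-up site values; adjust the template to
match the platform you actually ordered.
"""

from string import Template

# Hypothetical site data -- swap in the real VLANs and addressing from your design doc.
SITE = {
    "hostname": "branch-sw-01",
    "mgmt_vlan": 10,
    "mgmt_ip": "10.10.10.2 255.255.255.0",
    "vlans": {10: "MGMT", 20: "USERS", 30: "VOICE", 40: "GUEST-WIFI"},
}

SKELETON = Template(
    "hostname $hostname\n"
    "!\n"
    "$vlan_block"
    "!\n"
    "interface Vlan$mgmt_vlan\n"
    " description Management\n"
    " ip address $mgmt_ip\n"
    " no shutdown\n"
    "!\n"
    "end\n"
)


def render_skeleton(site: dict) -> str:
    """Render a minimal day-one config from the site data."""
    vlan_block = "".join(
        f"vlan {vid}\n name {name}\n" for vid, name in sorted(site["vlans"].items())
    )
    return SKELETON.substitute(
        hostname=site["hostname"],
        mgmt_vlan=site["mgmt_vlan"],
        mgmt_ip=site["mgmt_ip"],
        vlan_block=vlan_block,
    )


if __name__ == "__main__":
    # Write the config to disk so it's waiting the day the gear hits the dock.
    config = render_skeleton(SITE)
    with open(f"{SITE['hostname']}.cfg", "w") as handle:
        handle.write(config)
    print(config)
```

Rendering configs from site data like this also means the day-one config can be reviewed and version-controlled long before the loading dock calls.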
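
For the swap-and-loaner scenario, a few minutes of scripting makes the backup step less painful when it is time to hand the gear back. This is a minimal sketch assuming Netmiko is installed and an IOS-style device; the host, credentials, and command list are placeholders, not a recommendation for any particular platform.

```python
"""Sketch: capturing a loaner switch's existing config before you wipe it.

Assumes Netmiko is available and an IOS-style device; all device details
below are placeholders.
"""

from datetime import datetime
from pathlib import Path

from netmiko import ConnectHandler

# Hypothetical loaner device -- replace with the real management details.
DEVICE = {
    "device_type": "cisco_ios",
    "host": "10.10.10.5",
    "username": "admin",
    "password": "changeme",
}

# Outputs worth saving before an erase: the running config plus the VLAN
# and VTP state, which live outside the regular config on many platforms.
COMMANDS = ["show running-config", "show vlan brief", "show vtp status"]


def archive_device(device: dict, outdir: Path = Path("config-archive")) -> Path:
    """Save the device's config and VLAN/VTP state to timestamped text files."""
    outdir.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    conn = ConnectHandler(**device)
    try:
        for command in COMMANDS:
            output = conn.send_command(command)
            name = command.replace(" ", "_")
            (outdir / f"{device['host']}-{name}-{stamp}.txt").write_text(output)
    finally:
        conn.disconnect()
    return outdir


if __name__ == "__main__":
    print(f"Archived to {archive_device(DEVICE)}")
```

The VLAN and VTP output is worth saving alongside the running config because, as the bullet above notes, that state is easy to forget when you erase and later rebuild the box.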

Tom’s Take

The chip shortage is one of those things that’s going to linger even under the best of circumstances. We’re going to be pressed to get gear in well into 2022. That means delayed projects and lots of arguing about what’s critical and what’s not. We can’t fix the semiconductor sector of the market, but we can work to make sure the impact felt there is the only one we have to absorb right now. The more we do ahead of time to smooth things out, the better off we’ll be when it’s finally time to make things happen. Don’t let the supply chain’s lack of planning sour you on doing your part in networking.

Private 5G Needs Complexity To Thrive

I know we talk about private 5G a lot in the industry, but more players come out every day looking to add their voices to the growing chorus of support for these solutions. And despite the fact that we tend to see 5G and Wi-Fi as ships passing in the night, this discussion isn’t going to go away any time soon. In part that’s because decision makers aren’t quite savvy enough to distinguish between the bands, thinking all wireless communications are pretty much the same.

I think we’re not going to see much overlap between these two technologies. But the reasons why aren’t quite what you might think.

Walking Workforces

Working from anywhere other than the traditional office is here to stay. Every major Silicon Valley company has looked at the cost-benefit analysis and decided to let workers do their thing from where they live. How can I tell it’s permanent? Because they’re reducing salaries for those that choose to stay away from the Bay Area. That full Bay Area salary is a pretty enticing carrot, and when companies say it’s off the table for remote work going forward, it tells you they have no real interest in making people want to come back to an office.

Mobile workers don’t care about how they connect. As long as they can get online they can get things done. They are the prime use case for 5G and private 5G deployments. Who cares about the Wi-Fi at a coffee shop if you’ve got fast connectivity built in to your mobile phone or tablet? Moreover, I can also see a few of the more heavily regulated companies requiring you to use a 5G uplink to connect to sensitive data through a VPN or other technology. It eliminates some of the issues with wireless protection methods and ensures that no one can easily snoop on what you’re sending.

Mobile workers will start to demand 5G in their devices. It’s a no-brainer for it to be in the phone and the tablet. As laptops go it’s a smart decision at some point, provided enough people have swapped over to using tablets by then. I use my laptop every day when I work but I’m finding myself turning to my iPad more and more. Not for any magical reason but because it’s convenient if I want to work from somewhere other than my desk. I think that when laptops hit a wall from a performance standpoint you’re going to see a lot of manufacturers start to include 5G as a connection option to lure people back to them instead of abandoning them to the big tablet competition.

However, 5G is really only a killer technology for these more complex devices. The cost of a 5G radio isn’t inconsequential to the overall cost of a device. After all, Apple raised the price of their iPad when they included a 5G radio, didn’t they? You could argue that they didn’t when they upgraded the iPhone to a 5G chipset but the cellular technology is much more integral to the iPhone than the iPad. As companies examine how they are going to move forward with their radio technology it only makes sense to put the 5G radios in things that have ample space, appropriate power, and the ability to recover the costs of including the chips. It’s going to be much more powerful but it’s also going to be a bigger portion of the bill of materials for the device. Higher selling prices and higher margins are the order of the day in that market.

Reassuringly Expensive IoT

One of the drivers for private 5G that I’ve heard about recently is the push to connect IoT sensors over the protocol. The thinking goes that the number of devices being deployed is going to create a significant amount of traffic in a dense area, which is going to require the controls present in 5G to keep them from creating issues. I would tend to agree, but with a huge caveat.

The IoT sensors that people are talking about here aren’t the ones that you might think of in the consumer space. For whatever reason people tend to assume IoT is a thermostat or a small device that does simple work. That’s not the case here. These IoT devices aren’t things that you’re going to be buying one or two at a time. They are sensors connected to a larger system. Think HVAC relays and probes. Think lighting sensors or other environmental tech. You know what comes along with that kind of hardware? Monitoring. Maintenance. Subscription costs.

The IoT that is going to take advantage of private 5G isn’t something you’re going to be deploying yourself. Instead, it’s going to be something that you partner with another organization to deploy. You might “own” the tech in the sense that you control the data, but you aren’t going to be the one going out to Best Buy or Tech Data to order a spare. Instead, you’re going to pay someone to deploy it and to fix it when it goes wrong. So how does that differ from the IoT thermostat that comes to mind? Price. These sensors are several hundred dollars each. You’re paying for the technology included in them, plus that monthly fee to monitor and maintain them. They will talk to a base station in the building or somewhere nearby and relay that data back to your dashboard, perhaps on-site or, more likely, in a cloud instance somewhere. All those fees mean the devices become more complex and can absorb the cost of more complicated radio technology.

What About Wireless?

Remember when wireless was something cool that you had to show off to anyone who bought a brand new laptop? Or the thrill of seeing your first iPhone connect to attwifi at Starbucks instead of using that data plan you paid so dearly to get? Wireless isn’t cool any more. Yes, it’s faster. Yes, it is the new edge of our world. But it’s not cool. In the same way that Ethernet isn’t cool. Or web browsers aren’t cool. Or the internal combustion engine isn’t cool. Wi-Fi isn’t cool any more because it is necessary. You couldn’t open an office today without having some form of wireless communications. Even if you tried, I’m sure someone would hop over to the nearest big box store and buy a consumer-grade router to get wireless working before the paint was even dry on the walls.

We shouldn’t think about private 5G replacing Wi-Fi, because it never will. There will be use cases where 5G makes much more sense, like high-density deployments or areas where the contention in the wireless spectrum is just too great to make effective use of it. However, not deploying Wi-Fi in favor of deploying private 5G is a mistake. Wireless is the perfect “set it and forget it” technology. Provide an SSID for people to connect to and then let them go crazy. Public venues are going to rely on Wi-Fi for the rest of time. These places don’t have the kind of staff necessary to make private 5G economical in the long run.

Instead, think of private 5G deployments more like the way that Wi-Fi used to be. It’s an option for devices that need to be managed and controlled by the organization. They need to be provisioned. They need to consume cycles to operate properly. They need to be owned by the company and not the employee. Private 5G is more of a play for infrastructure. Wi-Fi is the default medium given the wide adoption it has today. It may not be the coolest way to connect to the network but it’s the one you can be sure is up and running without the need for the IT department to come down and make it work for you.


Tom’s Take

I’ll admit that the idea of private 5G makes me smile some days. I wish I had some kind of base station here at my house to counteract the horrible reception I get. However, as long as my Internet connection is stable, I have enough wireless coverage in the house to make my devices work properly. Private 5G isn’t going to displace the installed base of Wi-Fi devices out there. With the amount of management that 5G requires in devices, you’re not going to see a cheap or simple way to deploy it appear any time soon. The pie-in-the-sky vision of pervasive, low-power deployments in cheap devices isn’t realistic on the near horizon. Instead, think of private 5G as something you reach for when your other methods won’t work or when someone you’re partnering with to deploy new technology requires it. That way you won’t be caught off-guard when the complexity of the technology comes into play.