Double the Fun in 2022

It’s January 1 again. The last 365 days have been fascinating for sure. The road to recovery doesn’t always take the straightest path. 2021 brought some of the normal things back to us but we’re still not quite there yet. With that in mind, I wanted to look back at some of the things I proposed last year and see how they worked out for me:

  • Bullet Journaling: This one worked really well. When I remembered to do it. Being able to chart out what I was working on and what I needed to be doing helped keep me on track. The hardest part was remembering to do it. As I’ve said before, I always think I have a great memory and then remember that I forgot I don’t. Bullet journaling helped me get a lot of my tasks prioritized and made sure that the ones that didn’t get done got carried over to be finished later. I kind of stopped completely at the end of the year when things got hectic and I think that is what led me to feeling like everything was chaotic. I’m going to start again for 2022 and make sure to add some more flair to what I’m doing to make it stick for real this time.
  • More Video Content: This one was a mixed bag. I did record a full year of Tomversations episodes as well as the Rundown and various episodes of the On-Premise IT Roundtable podcast. The rest of my plans didn’t quite come to fruition but I think there’s still a spot for me to do things in 2022 to increase the amount of video content I’m doing. The reason is simple: more and more people are consuming content in video form instead of reading it. I think I can find a happy medium for both without increasing the workload of what I’m doing.
  • More Compelling Content: This was the part I think I did the most with. A considerable number of my posts this year were less about enterprise IT technical content and more about things like planning, development, and soft skills. I spent more time talking about the things around tech than I did talking about the tech itself. While that does have a place I wonder if it’s as compelling for my audience as the other analysis that I do. Given that my audience has likely shifted a lot over the last decade I’m not even sure what people read my blog for any longer. Given the number of comments that I get on IPv6 posts that were written five or six years ago I may not even be sure who would be interested in the current content here.

Okay, 2021 was a mixed bag of success and areas for improvement. My journaling helped me stay on task but I still felt a lot of the pressure of racing from task to task and my grand ideas of how to create more and do more ultimately fell away as things stayed busy. So, where to go from here?

  • More Analytical Content: Some of the conversations I’ve had over the year remind me that I have a unique place in the industry. I get to see a lot of what goes on and I talk to a lot of people about it. That means I have my own viewpoint on technologies that are important. While I do a lot of this for work, there are some kinds of analysis that are better suited for this blog. I’m going to spend some time figuring those out and posting them here over the year to help create content that people want to read.
  • Saying No to More Things: Ironically enough, one of the things I need to get better at in 2022 is turning things down. It’s in my nature to take on more than I can accomplish to make sure that things get done. And that needs to stop if I’m going to stay sane this year. I’m going to do my best to spread out my workload and also to turn down opportunities that I’m not going to be able to excel at doing. It may be one of the hardest things I do but I need to make it happen. Only time will tell how good I am at turning people down.
  • Getting In Front of Things: This one is more of a procedural thing for me but it’s really important. Rather than scrambling at the last minute to finish a script or get something confirmed, I’m going to try my hardest to plan ahead and make sure I’m not racing through chaos. With all the events I have coming up, both work and personal, I can’t afford to leave things to the last minute. So I’m going to be trying really hard to think ahead. We’ll see how it goes.

Tom’s Take

My January 1 post is mostly for me to keep myself honest over the year. It’s a way for me to set goals and stick to them, or at the very least come back next January 1 and see where I need to improve. I hope that it helps you a bit in your planning as well!

Holiday Networking Thoughts from 2021

It’s the Christmas break for 2021, which means lots of time spent doing very little work-related stuff. I’m currently putting together a Lego set, playing Metroid Dread and working on beating Ocarina of Time again.

As I waited for updates to download on Christmas morning I remembered how many packets must be flying across the wire to update software and operating systems for consoles. Even having done a few of the updates the night before, I could see that traffic to those servers was starting to get a bit congested. It’s like Black Friday but for the latest patches to keep your games running. Add in the content that needs to be installed now in order to make that game disc work, or the download-only consoles for sale, and you can see that network engineers aren’t going to be a dying profession any time soon.

I’m a bit jaded because I come from a time when you didn’t need to be constantly connected to use software or need to download an update every few days. Heck, some of the bugs in Ocarina of Time have been there for over twenty years because those cartridges are not designed to be patched, having been created before a time when you could barely get online with a modem, let alone wirelessly connect a console.

I’m also happy that upgrading devices in the house means fewer and fewer older units performing poorly on the wireless network. As more devices require me to connect them to the network for updates or app connectivity, I’m reminded that things like the Xbox 360 need low data rates enabled to work properly and that makes me sad. But I also can’t turn them off for fear that nothing will work and my children will scream. I don’t think it’s worth spending a ton of money just to get rid of an 802.11b client but I’m happy to see them go when I get the chance.

Likewise, I’m going to need to upgrade my APs a bit now that I have clients that can actually use 802.11ax (Wi-Fi 6). Even the older clients will get a performance boost. So it’s a matter of catching a good AP on sale and getting it done. Since I don’t use big box APs I just have to look a bit harder.


Tom’s Take

Make sure you give a shoutout to your friendly neighborhood network engineer for all their hard work making sure the services we’re currently consuming stayed up while the skeleton crew was carrying the pager this weekend. We’ve seen a lot of services crash on Christmas morning in recent years because of unexpected load. Also, give yourselves a hand for keeping your own network up long enough to download the latest DLC for a game or ensure that your new smart appliance can talk to the fancy app you need to use to control it. Let’s make it through the rest of the year with the change freeze intact and start 2022 off on the right foot with no outages.

A Recipe for Presentation Success

When I was a kid, I loved to help my mother bake. My absolute favorite thing to make was a pecan pie. I made sure I was always the one that got to do the work to fix it during the holidays. When I was first starting out I made sure I followed the recipe to the letter. I mixed everything in the order that it was listed. One of the first times I made the pie I melted the butter and poured it into the mixture which also had an egg. To my horror I saw the egg starting to cook and scramble in the bowl due to the hot butter. When I asked my mom she chuckled and said, “Now you get to learn about why the recipe isn’t always right.”

Throughout my career in IT and in presentations, I’ve also had to learn about why even if the recipe for success is written down properly there are other things you need to take into account before you put everything together. Just like tempering a mixture or properly creaming butter and sugar together, you may find that you need to do some things in a different order to make it all work correctly.

Step by Out of Step

As above, sometimes you need to know how things are going to interact so you do them in the right order. If you pour hot liquid on eggs you’re going to cook them. If you do a demo of your product without providing context for what’s happening you’re likely going to lose your audience. You need to set things up in the proper order for it all to make sense.

Likewise if you spend all your time talking about a problem that needs to be solved without telling your listeners that you solve the problem you’re going to have them focused on what’s wrong, not on how you fix it. Do you want them thinking about how you get a flat tire when you run over a nail? Or do you want them to buy your tires that don’t go flat when you run over sharp objects? It’s important to sell your product, not the problem.

It’s also important to know when to do those things out of order. Does your demo do something magical or amazing with a common issue? It might be more impactful to have your audience witness what happens before explaining how it works behind the scenes. It’s almost like a magician revealing their trick. Wow them with the result before you pull back the curtain to show them how it’s done.

The feel for how to do this varies from presentation to presentation. Are you talking to an audience that doesn’t understand the topic at all? You need to start with a lead-in or some other kind of level setting so no one gets lost. Are they experienced and understand the basics? You should be able to jump in at a higher level and show off a few things before going into detail. You have to understand whether or not you’re talking to a group of neophytes or a crowd of wizened veterans.

A counterpoint to this is the crowd of people that might be funding your project or startup. If they’re a person that gets pitched daily about “the problem” or they have a keen understanding of the market, what exactly are you educating them about when you open with a discussion of the issues? Are you telling them that you know what they are? Or are you just trying to set a hook? Might be worth explaining what you do first and then showing how you attack the problem directly.

Weaving a Story

The other thing that I see being an issue in presentations is the lack of a story. A recipe tells a story if you listen. Things have relationships. Liquids should be mixed together. Dry ingredients should be combined beforehand. Certain pieces should be put on last. If you put the frosting on a cake before you put it in the oven you’re going to be disappointed. It’s all part of the story that links the parts together.

Likewise, your presentation or lesson should flow. There should be a theme. It should make sense if you watch it. You can have individual pieces but if you tie it all together you’re going to have a better time of helping people understand it.

When I was growing up, TV shows didn’t tell longer stories. Episodes of the Addams Family or Gilligan’s Island stood alone. What happened in the first season didn’t matter in the next. Later, the idea of a narrative arc in a story started appearing. If you watch Babylon 5 today you’ll see how earlier episodes introduce things that matter later. Characters have growth and plot threads are tied up before being drawn out into new tapestries. It’s very much a job of weaving them all together.

When you present, do your sections have a flow? Do they make sense to be together? Or does it all feel like an anthology that was thrown together? Even anthologies have framing devices. Maybe you’re bringing in two different groups that have different technologies that need to be covered. Rather than just throwing them out there you could create an overview of why they are important or how they work together. It’s rare that two things are completely unrelated, especially if you’re presenting them together.


Tom’s Take

If all you ever did was list out ingredients for recipes you’d be missing the important parts. They need to be combined in a certain order. Things need to go together properly. Yes, you’re going to make mistakes when you do it for the first time and you don’t understand the importance of certain things. But that learning process should help you put them together the way they need to be arranged. Take notes. Ask for feedback. And most importantly, know when it’s time to change the recipe to help you make it better the next time.

Is Disaggregation Going to Be Cord Cutting for the Enterprise?

There’s a lot of talk in the networking industry around disaggregation. The basic premise is that by decoupling the operating system from the hardware you can gain the freedom to run the devices you want from any vendor with the software that does what you want it to do. You can standardize or mix-and-match as you see fit. You gain the ability to direct the way your network works and you control how things will be going forward.

To me it sounds an awful lot like the trend of “cutting the cord” or unsubscribing from cable TV service and picking and choosing how you want to consume your content. Ten years ago the idea of getting rid of your cable TV provider was somewhat crazy. In 2021 it seems almost a given that you no longer need to rely on your cable provider for entertainment. However, just like with the landscape of the post-cable cutting world, I think disaggregation is going to lead to a vastly different outcome than expected.

TNSTAAFL

Let’s get one thing out of the way up front: This idea of “freedom” when it comes to disaggregation and cord cutting is almost always about money. Yes, you want the ability to decide what software runs on your system. You don’t want to have unnecessary features or channels in your lineup. But why? I think maybe 5% of the community is worried about code quality or attack surfaces. The rest? They want to pay less for the software or hardware by unbundling the two. Instead of getting better code for their switches they’re really just chasing a lower cost per unit to run things. If that weren’t the case, why do so many of these NOS vendors run on Linux?

Yes, that feels like a bit of a shot but reality speaks volumes over the pleasantries we often spout. The value of disaggregation is a smaller bottom line. Code quality can be improved over time with the proper controls in place. Hell, you could even write your own NOS given the right platform and development resources. However, people don’t want to build the perfect NOS or help vendors with the code issues. They want someone to build 90% of the perfect NOS and then sell it to them cheaply so they can run it on a cheap whitebox switch.

This is an issue that is faced by developers the world over. Look at the number of apps in the various mobile app stores that have a free entry point or are a “Freemium” business model. You don’t pay up front but as soon as you find a feature you really like it’s locked behind a subscription model. Why? Because one-time purchases don’t fund development. If everyone buys your app and then expects you to keep providing features for it and not just bug fixes, where does the investment for that development come from? Work requires resources – time or money. If you’re not getting paid for something you have to invest more time to make it work the way you want.

Vendors of disaggregated systems are finding themselves in a similar quandary. How do we charge enough for the various features we want to put into the system to be able to develop new features? The common way I see this done is to put in the most basic features that customers would want and then wait for someone to ask for something to be added. If the customer is asking for it the odds are good they’ll be willing to pay for it. You can even get them to buy your software now and sign an agreement that you’ll include the new feature in a few weeks in order to be sure your development resources aren’t wasted.

There are other ways, such as relying on single merchant silicon platforms or developing tight relationships with other vendors in the market, but ultimately it comes back to the question of resources. What are you willing to invest to make this happen? And what are you willing to accept as a cost that must be paid to get what you think you want?

The Buffet of Plenty…of Stuff You Don’t Want

The other aspect of this comparison is how the cable TV market responded to cord cutting. People started leaving cable TV for apps like Netflix and Hulu because they were cheaper than paying for a full cable subscription and had most of the content that people wanted. For the few pieces that weren’t available there were workarounds. By and large, you could find most of what you wanted in an auxiliary app when you occasionally wanted it.

So is this how things are today? Or did the market shift in response to customer behavior? I think you’ll find that you’re not paying a single lump sum for content if you cut the cord for your cable provider. However, you are paying a large portion of that investment in separate apps that offer a portion of the content on-demand. And that’s why separating things is going to lead to new market dynamics.

The first behavior we saw was every media company coming up with their own app to host content. Instead of having a Disney channel on cable you now had a number of Disney apps that replicated the content channels. Later they merged into a single app with all the content. But was it all the content you wanted? Or was it all the content they owned? The drive for companies to create apps was not to offer customers a way to consume content along with their existing subscriptions. It was to provide a landing page for content you couldn’t find anywhere else.

That’s where phase two kicks in. Once you’ve created the destination, you need to make it the only place to be. That means removing content from other locations. Netflix started losing content when the creators started taking control of their own content. Soon it was necessary to create custom content to replace what was lost. Now, instead of buying a cable subscription and getting all the channels you had to sign up for five different apps, each comprising one or two of the channels you used to watch. Disney content is in the Disney app. NBC content is in another. The idea of channel surfing is gone. The back catalog of content added to the apps served more to entice people to keep their subscriptions during droughts of fresh new content.

How does this whole model break down in the enterprise? Well, going back to our earlier discussion about features being added to devices, what are you going to have to do to get new functions in your operating system? Are you going to require the vendor to write them on their schedule? Are you going to use a separate app or platform? Why should the vendor support some random feature that might not get much adoption and would take a significant amount of resources to build? Why not just make you do it yourself?

The idea is that you gain freedom and cheaper software. The hope is that you can build an enterprise network for half of what it would normally cost. The reality is that you’re going to gain less functionality and spend more time integrating things together on your own instead of just putting in a turnkey solution. And yes, there are people out there that are nodding their heads and saying they would love to do this. They want the perfect network with the perfect cheap NOS and whitebox hardware. But do you want this to be your only job for the rest of your career?

Once you build things the way you want them you become the only person that can work on them. You become the only source of support for your solution. If it’s a custom snowflake of a network you are the only person that can fix the snow issues. Traditional software and hardware may be unwieldy and difficult to troubleshoot but you can also call a support line where people have been paid to get training on how to implement and fix issues. If you built it yourself you’re the person that has to pick up the phone to fix it. Unless you want to train your team to support it too. Which takes time and money. So your savings between the two solutions are going to evaporate. And if you want the NOS vendor or the hardware supplier to support more functions to make it all easier you’re going to drive the price of the equipment up. So instead of writing one big check to the old guard you’re writing a bunch of little ones to every part of the new infrastructure you helped create.


Tom’s Take

I know it sounds like I’m not a fan of all this disaggregation stuff. In fact, I am a huge proponent of it. I just don’t buy the “freedom” excuse. My business background helps me understand the resource contention issues. My history of supporting snowflake implementations reminds me that you have to be able to turn your work over to someone else at some point in the future. Disaggregation has a lot of positive effects. You can mix and match your software and hardware and make it much easier to support for your own purposes. You no longer have to take a completed project and find workarounds to fit it to your needs. You get what you want. But don’t think you’re going to be able to get exactly what you need without some work of your own. Just like the cable cord cutting craze, you’re going to find out that you’re getting something totally different in the short term and a much different consumption model when the market shifts to the demands of the consumers. Don’t get complacent with your solutions and be ready to adapt when the suppliers force your hand.

You Down with IoT? You Better Be!

Did you see the big announcement from AWS re:Invent that Amazon has a preview of a Private 5G service? It probably got buried under the 200 other announcements that came out on so many other things so I’ll forgive you for missing it. Especially if you also managed to miss a few of the “hot takes” that mentioned how Amazon was trying to become a cellular provider. If I had rolled my eyes any harder I might have caused permanent damage. Leave it to the professionals to screw up what seems to be the most cut-and-dried case of not reading the room.

Amazon doesn’t care about providing mobile service. How in the hell did we already forget about the Amazon (dumpster) Fire Phone? Amazon isn’t trying to supplant AT&T or Verizon. They are trying to provide additional connectivity for their IoT devices. It’s about as clear as it can get.

Remember all the flap about Amazon Sidewalk? How IoT devices were going to use 900 MHz to connect to each other if they had no other connectivity? Well, now it doesn’t matter because as long as one speaker or doorbell has a SIM slot for a private 5G or CBRS node then everything else can connect to it too. Who’s to say they aren’t going to start putting those slots in everything going forward? I’d be willing to bet the farm that they are. It’s cheap compared to upgrading everything to use 802.11ax radios or 6 GHz technology. And the benefits for Amazon are legion.

It’s Your Density

Have you ever designed a wireless network for a high-density deployment? Like a stadium or a lecture hall? The needs of your infrastructure look radically different compared to your home. You’re not planning for a couple of devices in a few dozen square feet. You’re thinking about dozens or even hundreds of devices in the most cramped space possible. To say that a stadium is one of the most hostile environments out there is underselling both the rabid loyalty of your average fan and the wireless airspace they’re using to post about how the other team sucks.

You know who does have a lot of experience designing high density deployments with hundreds of devices? Cellular and mobile providers. That’s because those devices were designed from the start to be more agreeable to hostile environments and have higher density deployments. Anyone that can think back to the halcyon days of 3G and how crazy it got when you went to Cisco Live and had no cell coverage in the hotel until you got to the wireless network in the convention center may disagree with me. But that exact scenario is why providers started focusing more on the number of deployed devices instead of the total throughput of the tower. It was more important in the long run to get devices connected at lower data rates than it was to pump up the wattage and get a few devices to shine at the expense of all the other ones that couldn’t get connected.

In today’s 5G landscape, it’s all about the clients. High density and good throughput. And that’s for devices with a human attached to them. Sure, we all carry a mobile phone and a laptop and maybe a tablet that are all connected to the Wi-Fi network. With IoT, the game changes significantly. Even in your consumer-focused IoT landscape you can probably think of ten devices around you right now that are connected to the network, from garage door openers to thermostats to light switches or light bulbs.

IoT at Work

In the enterprise it’s going to get crazy with industrial and operational IoT. Every building is going to have sensors packed all over the place. Temperature, humidity, occupancy, and more are going to be little tags on the walls sampling data and feeding it back to the system dashboard. Every piece of equipment you use on a factory floor is going to be connected, either by default with upgrade kits or with add-on networking gear that provides an interface to the control system. If it can talk to the Internet it’s going to be enabled to do it. And that’s going to crush your average Wi-Fi network unless you build it like a stadium.

On the other hand, private 5G and private LTE deployments are built for this scale. And because they’re lightly regulated compared to full-on provider setups you can do them easily without causing interference. As long as someone that owns a license for your frequency isn’t nearby you can just set things up and get moving. And as soon as you order the devices that have SIM slots you can plug in your cards and off you go!

I wouldn’t be shocked to see Amazon start offering a “new” lineup of enterprise-ready IoT devices with pre-installed SIMs for Amazon Private 5G service. Just buy these infrastructure devices from us and click the button on your AWS dashboard and you can have on-prem 5G. Hell, call it Network Outpost or something. Just install it and pay us and we’ll take care of the rest for you. And as soon as they get you locked in to their services they’ve got you hooked. Because if you’re already using those devices with 5G, why would you want to go through the pain of configuring them for the Wi-Fi?

This isn’t a play for consumers. Configuring a consumer-grade Wi-Fi router from a big box store is one thing. Private 5G is beyond most people, even if it’s a managed service. It also offers no advantages for Amazon. Because private 5G in the consumer space is just like hardware sales. Customers aren’t going to buy features as much as they’re shopping for the lowest sticker price. In the enterprise, Amazon can attach private 5G service to existing cloud spend and make a fortune while at the same time ensuring their IoT devices are connected at all times and possibly even streaming telemetry and collecting anonymized data, depending on how the operations contracts are written. But that’s a whole different mess of data privacy.


Tom’s Take

I’ve said it before but I’ll repeat it until we finally get the picture: IoT and 5G are now joined at the hip and will continue to grow together in the enterprise. Anyone out there that sees IoT as a hobby for home automation or sees 5G as a mere mobile phone feature will be enjoying their Betamax movies along with web apps on their mobile phones. This is bigger than the consumer space. The number of companies that are jumping into the private 5G arena should prove the smoke is hiding a fire that can signal that Gondor is calling for aid. It’s time you get on board with IoT and 5G and see that. The future isn’t a thick client with a Wi-Fi stack that you need to configure. It’s a small sensor with a SIM slot running on a private network someone else fixes for you. Are you down with that?

A Gift Guide for Sanity In Your Home IT Life

If you’re reading my blog you’re probably the designated IT person for your family or immediate friend group. Just like doctors that get called for every little scrape or plumbers that get the nod when something isn’t draining over the holidays, you are the one that gets an email or a text message when something pops up that isn’t “right” or has a weird error message. These kinds of engagements are hard because you can’t just walk away from them and you’re likely not getting paid. So how can you be the Designated Computer Friend and still keep your sanity this holiday season?

The answer, dear reader, is gifts. If you’re struggling to find something to give your friends that says “I like you but I also want to reduce the number of times that you call me about your computer problems” then you should definitely read on for more info! Note that I’m not going to fill this post with affiliate links or plug products that have sponsored anything. Instead, I’m going to just share the classes or types of devices that I think are the best way to get control of things.

Step 1: Infrastructure Upgrades

When you go visit your parents for Thanksgiving or some other holiday check in, are they still running the same wireless network they got when they got their high-speed Internet? Is their Wi-Fi SSID still the default with the password printed on the side of the router/modem combo? Then you’re going to want to upgrade their experience to keep your sanity for the next few holidays.

The first thing you need to do is get control of their wireless setup. You need to get some form of wireless access point that wasn’t manufactured in the early part of the century. Most of the models on the market have Wi-Fi 6 support now. You don’t need to go crazy with a Wi-Fi 6E model for your loved ones right now because none of their devices will support it. You just need something more modern with a user interface that wasn’t written to look like Windows 3.1.

You also need to see about an access point that is controlled via a cloud console. If you’re the IT person in the group you probably already use some form of control for your home equipment. You don’t need a full Meraki or Juniper Mist setup to lighten your load. That is, unless you already have one of those dashboards set up and you have spare capacity. Otherwise you could look at something like Ubiquiti as a middle ground.

Why a cloud controller AP? Because then you can log in and fix things or diagnose issues without needing to spend time talking to less technical users. You can find out if they have an unstable Internet connection or change SSID passwords at the drop of a hat. You can even set up notifications for those remote devices to let you know when a problem happens so you can be ready and waiting for the call. And you can keep tabs on necessary upgrades and such so you aren’t fielding calls when the next major exploit comes out and your parents call you asking if they’re going to get infected by this virus. You can just tell them they’re up-to-date and good to go. The other advantage of this method is that when you upgrade your own equipment at home you can just waterfall the old functional gear down to them and give them a “new to you” upgrade that they’ll appreciate.

Step 2: Device Upgrades

My dad was notorious for using everything long past the point of needing to be retired. It’s the way he was raised. If there’s a hole you patch it. If it breaks you fix it. If that fix doesn’t work you wrap it in duct tape and use it until it crumbles to dust. While that works for the majority of things out there it does cause issues with technology far too often.

He had an iPad that he loved. He didn’t use it all day, every day but he did use it frequently enough to say that it was his primary computing device. It was a fourth-generation device, so it fell out of fashion a few years ago. When he would call me and ask me questions about why it was behaving a certain way or why he couldn’t download some new app from the App Store I would always remind him that he had an older device that wasn’t fast enough or new enough to run the latest programs or even operating software. This would usually elicit a grumble or two and then we would move on.

If you’re the Designated IT Person and you spend half your time trying to figure out what versions of OS and software are running on a device, do yourself a favor and invest in a new device for your users just to ease the headaches. If they use a tablet as their primary computing device, which many people today do, then just buy a new one and help them migrate all the data across to the new one while you’re eating turkey or opening presents.

Being on later hardware ensures that the operating system is the latest version with all the patches for security that are needed to keep your users safe. It also means you’re not trying to figure out what the last supported version of the software was that works with the rest of the things. I’ve played this game trying to get an Apple Watch to connect to an older phone with mismatched software as well as trying to get support for newer wireless security on older laptops with very little capability to do much more than WPA1. The amount of hours I burned trying to make the old junk work with the new stuff would have been better served just buying a new version of the same old thing and getting all their software moved over. Problems seem to just disappear when you are running on something that was manufactured within the last five years.

Step 3: Help Them Remember

This is probably my biggest request: Forgotten passwords. Either it’s the forgotten Apple ID or maybe the wireless network password. My parents and in-laws forget the passwords they need to log into things all the time. I finally broke down and taught them how to use a password management tool a few years ago and it made all the difference in the world. Now, instead of them having to remember what their password was for a shopping site they can just set it to automatically fill everything in. And since they only need to remember the master password for their app they don’t have to change it.

Better yet, most of these apps have a secure section for notes. So all those other important non-password things that seem to come up all the time are great to put in here. Social Security Numbers, bank account numbers, and so much more can be put in one central location and made easy to access. The best part? If you make it a shared vault you can request access to help them out when they forget how to get in. Or you can be designated as a trusted party that can access the account in the event of a tragedy. Getting your loved ones used to using password vaults now makes it much easier to have them storing important info there in case something happens down the road that requires you to jump in without their interaction. Trust me on this.


Tom’s Take

Your loved ones don’t need knick knacks and useless junk. If you want to show them you love them, give them the gift of not having to call you every couple of days because they can’t remember the wireless password or because they keep getting this error that says their app isn’t supported on this device. Invest in your sanity and their happiness by giving them something that works and that has the ability for you to help manage it from the background. If you can make it stable and useful and magically work before they call you with a problem you’re going to find yourself a happier person in the years to come.

IP Class is Now in Session

You may have seen something making the rounds on Twitter this week about a couple of proposed drafts designed to alleviate the problems with IPv4 exhaustion by repurposing some old IP spaces that aren’t available for use right now. Specifically:

Ultimately, this is probably going to fail for a variety of reasons and looks like it’s more of a suggestion than anything else but I wanted to take a moment to talk about why this isn’t an effective way of fixing address issues.

Error Bearers

The first reason that the Schoen drafts are going to fail is because most of the operating systems in the world won’t allow you to use reserved spaces for a system address. Because we knew years ago that certain spaces were marked as non-usable the logic was configured into the system to disallow the use of those spaces. And even if the system isn’t configured to disallow that space there’s no guarantee the traffic is going to be transmitted.

Let’s take 127/8 as a good example. Was it a smart idea to mark 16 million addresses as loopback host-only space? Nope. But that ship has sailed and we aren’t going to be able to easily fix it. Too many systems will see any address starting with 127 in the first octet and assume it’s a loopback address. In much the same way as people have been known to assume the entire 192/8 address space is RFC1918 reserved instead of 192.168.0.0/16. Logic rules and people making decisions aren’t going to trust any space being used in that manner. Even if you did something creative like using NAT and only using it internally you’re not going to be able to patch every version of every operating system in your organization.
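
To see how deeply that logic is baked into modern stacks, here’s a minimal sketch using Python’s standard ipaddress module (the specific addresses are just examples I picked) showing how a host classifies these spaces before a packet ever hits the wire:

    import ipaddress

    # Sample addresses: two from spaces the drafts want to reclaim,
    # two ordinary ones for comparison
    for addr in ["127.66.0.1", "240.1.2.3", "10.1.2.3", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        print(f"{addr:>12}  loopback={ip.is_loopback}  "
              f"reserved={ip.is_reserved}  private={ip.is_private}")

Anything inside 127/8 comes back as loopback and anything inside 240/4 comes back as reserved, and that same classification is repeated in kernels, firewalls, and middleboxes you don’t control, which is exactly the patching problem described above.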

We modify rules all the time and then have to spend years updating those modifications. Take area codes in North America for example. The old rules used to say that an area code had to have a zero or a one for the middle digit – ([2-9][0-1][2-9]) to use the Cisco UCM parlance. If your middle digit was something other than a zero or a one it wasn’t a valid NANP area code. As we began to expand the phone system in 1995 we changed those rules and now have area codes with all manner of middle numbers.

What about prefixes? Those follow rules too. NANP prefixes must not start with a zero or a one – (area code) [2-9]XX-XXXX is the way they are coded. Prefixes that start with a zero or a one are invalid and can’t be used. If we suddenly decided that we needed to open up the numbers in existing area codes and include prefixes that start with those forbidden numbers we would need to reset all the dialing rules in systems all over the country. I know that I specifically programmed my CUCM servers to send an immediate error if you dialed a prefix with a zero or a one. And all of them would have to be manually reconfigured for such a change.
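
To make the dialing-rule analogy concrete, here’s a small sketch of how those NANP assumptions get hard-coded, using the same patterns quoted above (the sample numbers are just illustrations):

    import re

    # Pre-1995 area code rule: middle digit had to be 0 or 1
    OLD_AREA_CODE = re.compile(r"^[2-9][01][2-9]$")
    # Prefix rule that still holds: can't start with 0 or 1
    PREFIX = re.compile(r"^[2-9]\d{2}$")

    for area in ["405", "212", "580"]:  # 580 has a middle digit of 8
        print(area, "valid under the old area code rule:", bool(OLD_AREA_CODE.match(area)))

    for prefix in ["555", "099"]:       # 099 starts with a forbidden digit
        print(prefix, "valid prefix:", bool(PREFIX.match(prefix)))

Relaxing either rule means finding and changing every system that compiled it in, which is the same chore you’d face trying to un-reserve IPv4 space.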

In much the same way, the address spaces that are reserved today as invalid would need to be patched out of systems from home computers to phones to networking equipment. And even if you think you got it all you’re going to miss one and wonder why it isn’t working. Worse yet, it might even silently fail because you may be able to transmit data to 95% of the systems out there but some intermediate system may discard your packets as invalid and never tell you what happened. You’ll spend hours or days chasing a problem you may not even be able to fix.

Avoiding the Solutions

The easiest way to look at these proposals is by understanding that people are really, really, really in love with IPv4. Despite the fact that the effort needed to implement these reserved spaces would be better spent on IPv6 adoption, we still get these things being submitted. There is a solution but people don’t want to use it. The modern Internet relies so much on the cloud that it would be simple to enable IPv6 in your provider space and use your engineering talent to help provide better adoption for that instead. We’re already seeing that happen in places where address space has been depleted for a while now.

It may feel easier to spend more effort to revitalize the IPv4 space we all know and love. It may even feel triumphant when we’re able to reclaim address space that was wasted and use it for something productive instead of just teaching that you can’t configure devices with those spaces. And millions of devices will have IP address space to use, or more accurately there will be millions of addresses available to sell to people that will waste them anyway. Then what?

The short term gain from opening up IPv4 space at the expense of not developing IPv6 adoption is a fallacy that will end in pain. We can keep putting policy duct tape on the IPv4 exhaustion problem but we are eventually going to hit a wall we can’t overcome. The math doesn’t work when your address space is only 32 bits in total. That’s why IPv6 expanded the amount of information in the address space.
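
For a sense of scale, the arithmetic is trivial; a quick sketch:

    ipv4_total = 2 ** 32
    ipv6_total = 2 ** 128

    print(f"IPv4: {ipv4_total:,} addresses")   # about 4.3 billion, fewer than one per person on Earth
    print(f"IPv6: {ipv6_total:,} addresses")
    print(f"IPv6 is {ipv6_total // ipv4_total:,} times larger")   # 2**96

Reclaiming a few reserved blocks doesn’t change that math in any meaningful way.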

Sure, there have been mistakes in the way that IPv6 address space has been allocated and provisioned. Those mistakes would need to eventually be corrected and other configurations would need to be done in order to efficiently utilize the space. Again, the effort should be made to fix problems with a future-proof solution instead of trying our hardest to keep the lights on with the old system that’s falling apart for a few more years.


Tom’s Take

The race to find every last possible way to utilize the IPv4 space is exactly what I expected when we’re in the death throes of using it instead of IPv6. The easy solutions are done. The market and hunger for IPv4 space is only getting stronger. Instead of weaning the consumers off their existing setups and moving them to something future proof we’re feeding their needs for short term gains. If the purpose of this whole exercise was to get more address space to be rationed out for key systems to keep them online longer I might begrudgingly accept it. However, knowing that it would likely be opened up and fed to providers to be auctioned off in blocks to be ultimately wasted means all the extra effort is for no gain. These IETF drafts have a lot of issues and we’re better off letting them expire in May 2022. Because if we take up this cause and try to make them a reality we’re going to have to relearn a lot of lessons of the past we’ve forgotten.

The Process Will Save You

I had the opportunity to chat with my friend Chris Marget (@ChrisMarget) this week for the first time in a long while. It was good to catch up with all the things that have been going on and reminisce about the good old days. One of the topics that came up during our conversation was around working inside big organizations and the way that change processes are built.

I worked at IBM as an intern 20 years ago and the process to change things even back then was arduous. My experience with it was the deployment procedures to set up a new laptop. When I arrived the task took an hour and required something like five reboots. By the time I left we had changed that process and gotten it down to half an hour and only two reboots. However, before we could get the new directions approved as the procedure I had to test it and make sure that it was faster and produced the same result. I was frustrated but ultimately learned a lot about the glacial pace of improvements in big organizations.

Slow and Steady Finishes the Race

Change processes work to slow down the chaos that comes from having so many things conspiring to cause disaster. Probably the most famous change management framework is the Information Technology Infrastructure Library (ITIL). That little four-letter word has caused a massive amount of headaches in the IT space. Stage 3 of ITIL is the one that deals with changes in the infrastructure. There’s more to ITIL overall, including asset management and continual improvement, but usually anyone that takes ITIL’s name in vain is talking about the framework for change management.

This isn’t going to be a post about ITIL specifically but about process in general. What is your current change management process? If you’re in a medium to large sized shop you probably have a system that requires you to submit changes, get them evaluated and approved, and then implement them on a schedule during a downtime window. If you’re in a small shop you probably just make changes on the fly and hope for the best. If you work in DevOps you probably call them “deployments” and they happen whenever someone pushes code. Whatever the actual name for the process is you have one whether you realize it or not.

The true purpose of change management is to make sure what you’re doing to the infrastructure isn’t going to break anything. As frustrating as it is to have to go through the process every time, that friction is the whole point. You justify your changes and evaluate them for impact before scheduling them, as opposed to something that could be termed a “Change and find out” methodology.

Process is ugly and painful and keeps you from making simple mistakes. If every part of a change form needs to be filled out you’re going to complete it to make sure you have all the information that is needed. If the change requires you to validate things in a lab before implementation then it’s forcing you to confirm that it’s not going to break anything along the way. There’s even a process exception for emergency changes and such that are more focused on getting the system running as opposed to other concerns. But whatever the process is it is designed to save you.

ITIL isn’t a pain in the ass by accident. It’s purposely built to force you to justify and document at every step of the process. It’s built to keep you from creating disaster by helping you create the paper trail that will save you when everything goes wrong.

Saving Your Time But Not Your Sanity

I used to work with a great engineer named John Pross. John wrote up all the documentation for our migrations between versions of software, including Novell NetWare and Novell GroupWise. When it came time to upgrade our office GroupWise server there was some hesitation on the part of the executive suite because they were worried we were going to run into an error and lock them out of their email. The COO asked John if he had a process he followed for the migration. John’s response was perfect in my mind:

“Yes, and I treat every migration like the first one.”

What John meant is that he wasn’t going to skip steps or take shortcuts to make things go faster. Every part of the procedure was going to be followed to the letter. And if something came up that didn’t match what he thought the output should have been it was going to stop until he solved that issue. John was methodical like that.

People like to take shortcuts. It’s in our nature to save time and energy however we can. But shortcuts are where the change process starts falling apart. If you do something different this time compared to the last ten times you’ve done it because you’re in a hurry or you think this might be more efficient without testing it you’re opening yourself up for a world of trouble. Maybe not this time but certainly down the road when you try to build on your shortcut even more. Because that’s the nature of what we do.

As soon as you start cutting corners and ignoring process you’re going to increase the risk of creating massive issues rapidly. Think about something as simple as the Windows Server 2003 shutdown dialog box. People used to reboot a server on a whim. In Windows 2003, the server had a process that required you to type in a reason why you were manually shutting the server down from the console. Most people that rebooted the server fell into two camps: Those that followed their process and typed in the reason for the reboot and those that just typed “;Lea;lksjfa;ldkjfadfk” as the reason and then were confused six months later when doing the post-mortem on an issue and cursing their snarky attitude toward reboot documentation.

Saving the Day

Change process saves you in two ways. The first is really apparent: it keeps you from making mistakes. By forcing you to figure out what needs to happen along the way and document the whole process from start to finish you have all the info you need to make things successful. If there’s an opportunity to catch mistakes along the way you’re going to have every opportunity to do that.

The second way change process saves you is when it fails. Yes, no process is perfect and there are more than a few times when the best intentions coupled with a flaw in the process created a beautiful disaster that gave everyone lots of opportunity to learn. The question always comes back to what was learned in that process.

Bad change failures usually lead to a sewer pipe of blame being pointed in your direction. People use process failures as a chance to deflect blame and avoid repercussions for doing something they shouldn’t have or trying to increase their stock in the company. The truly honest failure analysis doesn’t blame anyone but the failed process and tries to find a way to fix it.

Chris told me in our conversation that he loved ITIL at one of his former jobs because every time it failed it led to a meaningful change in the process to avoid failure in the future. These are the reasons why blameless post-mortem discussions are so important. If the people followed the process and the process failed, the people aren’t at fault. The process is incorrect or flawed and needs to be adjusted.

It’s like a recipe. If the instructions tell you to cook something for a specific amount of time and it’s not right, who is to blame? Is it you because you did what you were told? Is it the recipe? Is it the instructions? If you start with the idea that you did the process right and start trying to figure out where the process is wrong you can fix the process for next time. Maybe you used a different kind of ingredient that needs more time. Or you made it thinner than normal and that meant cooking it too long this time. Whatever the result, you end up documenting the process and changing things for the future to prevent that mistake from happening again.

Of course, just like all good frameworks, change processes shouldn’t be changed without analysis. Because changing something just to save time or take a shortcut defeats the whole purpose! You need to justify why changes are necessary and prove they provide the same benefit with no additional exposure or potential loss. Otherwise you’re back to making changes and hoping you don’t get burned this time.


Tom’s Take

ITIL didn’t really become a popular thing until after I left IBM but I’m sure if I were still there I’d be up to my eyeballs in it right now. Because ITIL was designed to keep keyboard cowboys like me from doing things we really shouldn’t be doing. Change management processes are designed to save us at every step of the way and make us catch our errors before they become outages. The process doesn’t exist to make our lives problematic. That’s like saying a seat belt in a car only exists to get in my way. It may be a pain when you’re dealing with it regularly but when you need it you’re going to wish you’d been using it the whole time. Trust in the process and you will be saved.

Is the M1 MacBook Pro Wi-Fi Really Slower?

I ordered a new M1 MacBook Pro to upgrade my existing model from 2016. I’m still waiting on it to arrive but I managed to catch a sensationalist headline in the process:

“New MacBook Wi-Fi Slower than Intel Model!”

The article referenced this spec sheet from Apple listing the various cards and capabilities of the MacBook Pro line. I looked it over and found that, according to the tables, the wireless card in the M1 MacBook Pro is capable of a maximum data rate of 1200 Mbps. The wireless card in the older model Intel MacBook Pro all the way back to 2017 is capable of 1300 Mbps. Case closed! The older one is indeed faster. Except that’s not the case anywhere but on paper.

PHYs, Damned Lies, and Statistics

You’d be forgiven for jumping right to the numbers in the table and using your first grade inequality math to figure out that 1300 is bigger than 1200. I’m sure it’s what the authors of the article did. Me? I decided to dig in a little deeper to find some answers.

It only took me about 10 seconds to find the first answer as to one of the differences in the numbers. The older MacBook Pro used a Wi-Fi card that was capable of three spatial streams (3SS). Non-wireless nerds reading this post may wonder what a spatial stream is. The short answer is that it is a separate unique stream of data along a different path. Multiple spatial streams can be leveraged through Multiple Input, Multiple Output (MIMO) to increase the amount of data being sent to a wireless client.

The older MacBook Pro has support for 3SS. The new M1 MacBook Pro has a card that supports up to 2SS. Problem solved, right? Well, not exactly. You’re also talking about a client radio that supports different wireless protocols as well. The older model supported 802.11n (Wi-Fi 4) and 802.11ac (Wi-Fi 5) only. The newer model supports 802.11ax (Wi-Fi 6) as well. The quoted data rates on the Apple support page state that the maximum data rates for the cards are quoted in 11ac for the Intel MBP and 11ax for the M1 MBP.

Okay, so there are different Wi-Fi standards at play here. Can’t be too hard to figure out, right? Except that the move from Wi-Fi 5 to Wi-Fi 6 is more than just incrementing the number. There are a huge number of advances that have been included to increase efficiency of transmission and ensure that devices can get on and off the air quickly to help maximize throughput. It’s not unlike the difference between the M1 chip in the MacBook and its older Intel counterpart. They may both do processing but the way they do it is radically different.

You also have to understand something called the Modulation and Coding Scheme (MCS). The MCS defines the data rates possible for a given combination of signal-to-noise ratio (SNR), RSSI, and Quadrature Amplitude Modulation (QAM). Trying to define QAM could take all day, so I’ll just leave it to GT Hill to do it for me:

The MCS table for a given protocol will tell you what the maximum data rate for the client radio is. Let’s look at the older MacBook Pro first. Here’s a resource from NetBeez that has the 802.11ac MCS rates. If you look up the details from the Apple support doc for a 3SS radio using VHT MCS 9 and an 80MHz channel bandwidth you’ll find the rate is exactly 1300 Mbps.

Here’s the MCS table for 802.11ax courtesy of Francois Verges. WAY bigger, right? You’re likely going to want to click on the link to the Google Sheet in his post to be able to read it without a microscope. If you look at the table and find the row that equates to an 11ax client using 2SS, HE MCS 11, and 80MHz channel bandwidth you’ll see that the number is 1201. I’ll forgive Apple for rounding it down to keep the comparison consistent.
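
If you want to see where those two quoted maximums actually come from, here’s a rough sketch of the PHY math using the standard 80MHz subcarrier counts, modulation, coding rates, and symbol times for each protocol (short guard interval assumed for 11ac, 0.8 µs guard interval for 11ax):

    def phy_rate_mbps(streams, data_subcarriers, bits_per_subcarrier,
                      coding_rate, symbol_time_us, guard_interval_us):
        """Approximate maximum PHY data rate in Mbps."""
        bits_per_symbol = streams * data_subcarriers * bits_per_subcarrier * coding_rate
        return bits_per_symbol / (symbol_time_us + guard_interval_us)

    # Intel MBP: 802.11ac, 3SS, VHT MCS 9 (256-QAM, rate 5/6), 80 MHz
    print(phy_rate_mbps(3, 234, 8, 5/6, 3.2, 0.4))    # ~1300 Mbps

    # M1 MBP: 802.11ax, 2SS, HE MCS 11 (1024-QAM, rate 5/6), 80 MHz
    print(phy_rate_mbps(2, 980, 10, 5/6, 12.8, 0.8))  # ~1201 Mbps

Run the same formula with a single stream and you get roughly 433 Mbps for 11ac and 600 Mbps for 11ax, which is where the per-stream comparison later in this post comes from.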

Again, this all checks out. The Wi-Fi equivalent of actuarial tables says that the older one is faster. And it is, under absolutely perfect conditions. Because the quoted numbers in the Apple document are the maximums for those MCS rates. When’s the last time you got the maximum amount of throughput on a wired link? Now remember that in this case you’re going to need to have perfect airtime conditions to get there. Which usually means you’ve got to be right up against the AP or within a very short distance of it. And that 80MHz channel bandwidth? As my friend Sam Clements says, that’s like drag racing a school bus.

The World Isn’t Made Out Of Paper

If you are just taking the numbers off of a table and reproducing them and claiming one is better than the other then you’re probably the kind of person that makes buying decisions for your car based on what the highest number on the speedometer says. Why take into account other factors like cargo capacity, passenger size, or even convertible capability? The numbers on this one go higher!

In fact, when you unpack the numbers here as I did, you’ll see that the apparent 100 Mbps difference between the two radios isn’t likely to come into play at all in the real world. As soon as you move more than 15 feet away from the AP or put a wall between the client device and your AP you will see a reduction in the data rate. The top end of both of these protocols runs in the 5GHz spectrum, which isn’t as forgiving with walls as 2.4GHz is. Moreover, if there are other interfering sources in your environment you’re not going to get nearly the amount of throughput you’d like.

What about that difference in spatial streams? I wondered about that for the longest time. Why would you purposely put fewer spatial streams in a client device when you know you could max it out? The answer is that even with that many spatial streams, reality is a very different beast. Devin Akin wrote a post about why throughput numbers aren’t always the same as the tables. In that post he mentioned that a typical client mix in a network in 2018 was about 66% devices with 1SS, 33% with 2SS, and less than 1% with 3SS. While those numbers have probably changed in 2021 thanks to the iPhone and iPad now having 2SS radios, I don’t think the 3SS numbers have moved much. The only devices with 3SS radios are laptops and other bigger units. It’s harder for a smaller device to sustain the data rates of a 3SS radio, so most devices only include support for two streams.

The other thing to notice here is that the value a spatial stream brings is different between the two protocols. In 802.11ac, the max data rate for a single spatial stream is about 433 Mbps. For 802.11ax it’s about 600 Mbps. So a 2SS 11ac radio maxes out around 866 Mbps, while a 3SS 11ax radio would get you around 1800 Mbps. It’s far more likely that you’ll get good use out of the 2SS 11ax radio on a regular basis than it is that you’ll ever see the maximum throughput of a 3SS 11ac radio.
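You can see that per-stream difference with the same back-of-the-envelope math from the sketch above (again, my own rough numbers from the standard PHY formula, not anything from Apple’s doc):

```python
# Per-stream ceilings at 80MHz: each added stream is worth more under 11ax
ac_1ss = 234 * 8 * (5/6) / 3.6     # ~433 Mbps per 802.11ac stream (VHT MCS 9, short GI)
ax_1ss = 980 * 10 * (5/6) / 13.6   # ~600 Mbps per 802.11ax stream (HE MCS 11, 0.8us GI)
print(round(2 * ac_1ss), round(3 * ax_1ss))   # 867 vs 1801 -- roughly the 866 and 1800 above
```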


Tom’s Take

This whole tale is a cautionary example of why you need to do your own research, even if you aren’t a Wi-Fi pro. The headline was both technically correct and wildly inaccurate. Yes, the numbers were different. Yes, the numbers favored the older model. No, no one is going to see the maximum throughput under most normal conditions. Yes, having support for Wi-Fi 6 in the new MacBook Pro is the better choice overall. You’re not going to miss that 100 Mbps of throughput in your daily life. Instead you’re going to enjoy a better protocol with more responsiveness in the bands you use on a regular basis. You’re still faster than a gigabit Ethernet adapter, so enjoy the future of Wi-Fi. And don’t believe the numbers on paper.

Getting In Front of Future Regret

Yesterday I sat in on the keynote from Commvault Connections21 and participated in a live blog of it on Gestalt IT. There was a lot of interesting info around security, especially related to how backup and disaster recovery companies are trying to add value in addressing the growing ransomware issue in global commerce. One thing I took away from the conversation wasn’t specifically related to security, though, and I wanted to dive into it a bit more.

Reza Morakabati, CIO for Commvault, was asked what he thought teams needed to do to advance their data strategy. And his response was very insightful:

Ask your team to imagine waking up to hear some major incident has happened. What would their biggest regret be? Now, go to work tomorrow and fix it.

It’s a short, sweet, and powerful statement. Technology professionals are usually focused on implementing new things to improve productivity or introduce new features to users and customers. We focus on moving fast and making people happy. Security is often seen as running counter to this ideal. Security wants to keep people safe and secure. It’s not unlike the parents who hold on to their child’s bicycle after the training wheels come off just to make sure the kid is safe. The kid wants to ride and be free. The parents aren’t quite sure how secure that’s going to be just yet.

Regret Storming

Thought exercises make for entertaining ways to scare yourself to death some days. If you spend too much time thinking about all the ways that things can go wrong you’re going to spend far too much energy focused on the negative aspects of your work. However, you do need to occasionally open yourself up to the likelihood that things are going to go wrong at some point.

For the thought exercise above, it’s not crucial to think about how things could go wrong. It’s more important to think about the worst outcomes that could result from those failures and how much you’ll regret them. You need to identify those areas and figure out how they can be mitigated.

Let me give you a specific example from my area. In May 2013 a massive tornado ripped through Moore, OK, just north of where I live. It was a tragic event that included loss of life. People were displaced and homes and businesses were destroyed. One of the places that was damaged severely was the Moore Public Schools administration building. In the aftermath of trying to clean up the debris and find survivors, one of my friends who worked for an IT vendor told me he spent hours sifting through the rubble of the building looking for the hard drives from the district’s servers. Why? Because the tornado had struck just before the payroll for the district’s teachers and staff was due to be run. Without the drives they couldn’t run payroll or print paychecks for those employees. With an even greater need for funds to pay for food or start repairs on their homes, you can imagine that not getting paid was going to be a big deal for those educators and staff.

There are a lot of regrets that came out of the May 2013 tornado. Loss of life and loss of property are always at the top of the list. The psychological damage of enduring something like that is also a huge impact. But for the school district, one of the biggest regrets was not having a contingency plan for paying their employees to help them deal with the disaster. It sounds small in comparison to the millions of dollars of damage that happened, but it also represents something important that can be controlled. The school system can’t upgrade the warning system or construct buildings that can withstand the most powerful storms imaginable. But they can fix their systems to prevent teachers from going without resources in the event of an emergency.

In this case, the regret is not being able to pay teachers if the district data center goes down. How could we have fixed that regret if we had imagined it beforehand? We could have migrated the data center to the cloud so that no single weather event could take it out. Likewise, we could have moved to a service that provides payroll entry and check printing that can be accessed from anywhere. We could also have encouraged teachers and employees to use direct deposit to reduce the need to physically print checks. Technology today provides us with a number of solutions to the regret we face, and we can put together plans to implement any one of them quickly. We just need to identify the problem and build a resolution for it.

Building Your Future

It’s not easy to foresee every possible outcome. Nor should it be. But if you focus on the feelings those unknown outcomes could bring, you’ll have a much better sense of what’s important to protect and how to go about doing it. Are you worried your customer data is going to be stolen and shared on the Internet? Then you need to focus your efforts on protecting it. Are you concerned your AWS bill is going to skyrocket if someone steals your credentials and starts borrowing your resource pool? Then you need to have governance in place to prevent unauthorized users from doing exactly that.

You don’t have to have a solution for every possible regret. You may even find that some of the things you thought you might end up regretting are actually pretty mild. If you’re not concerned about what would happen to your testing environment because you can just clone it from a repository then you can put that to bed and not worry about it any longer. Likewise, you may discover some regrets you didn’t anticipate. For example, if you’re using Active Directory credentials to back up your server data, you need to make sure you’re backing up Active Directory as well. You’re going to find yourself infuriated if you have the data you need to get back to business but it’s locked behind cryptographic locks that you can’t open because someone forgot to back up a domain controller.


Tom’s Take

I’ve been told that I’m somewhat negative because I’m always worried about what could go wrong with a project or an event. It’s not that I’m a pessimist so much as I’ve got a track record for seeing how things can go off the rails. Thanks to Commvault, I’m going to spend more time thinking about my potential regrets and planning to mitigate them ahead of time so that all the possible ways things could fail won’t consume my thoughts. I don’t have to have a plan for everything. I just need to get in front of the regrets before I feel them for real.