Time to Talk

It’s a holiday week here in the US so most people are working lighter days or just taking the whole week off. They’re looking forward to spending time with family and friends. Perhaps they’re already plotting their best strategy for shopping during Black Friday and snagging a new TV or watch. Whatever the case may be, there are lots of things going on all over.

One thing that I feel needs to happen is conversation. Not just the kind of idle conversation that we make when we don’t know what to talk about. I also don’t mean the kinds of deep conversations that we need to prepare ourselves to have. I’m talking about the ones where we learn. The ones we have with friends and family where we pick up tidbits of stories and preserve them for the future.

It sounds rather morbid but these conversations aren’t going to be available forever. Our older loved ones are getting older every year. Time marches on and we never know when that time is going to run out. I have several friends that have lost loved ones this year and still others that have realized the time is growing shorter. Mortality is something that reminds us how important those experiences can be.

This year, talk to your friends. Listen to the stories of your family. Make the time to really hear them. That might mean turning off the football game or skipping that post-turkey nap. But trust me when I say that you’ll appreciate that time more when you realize you won’t have it any more.

Continuity is Not Recovery

It was a long weekend for me but it wasn’t quite as long as it could have been. The school district my son attends is in the middle of a ransomware attack. I got an email from them on Friday afternoon telling us to make sure that any district-owned assets are powered off until further notice to keep our home networks from being compromised. That’s pretty sound advice so we did it immediately.

I know that the folks working on the problem spent the whole weekend trying to clean it up and make sure there isn’t any chance of getting reinfected. However, I also wondered how that would impact school this week. The growing amount of coursework that happens online or is delivered via computer is large enough that going from that to a full stop of no devices is probably jarring. That got me to thinking once more about the difference between continuity and recovery.

Keeping The Lights On

We talk about disaster recovery a lot. Backups of any kind are designed to get back what was lost. Whether it’s a natural disaster or a security incident you want to be able to recover things back to the way they were before the disaster. We talk about making sure the data is protected and secured, whether from attackers or floods or accidental deletion. It’s a sound strategy but I feel it’s missing a key component.

Aside from how much data you can get back, which is governed by the recovery point objective (RPO), you also need to consider how long it’s going to take to get you there. That’s called the recovery time objective (RTO). RTO tells you how long it will be until you can get your stuff back. For a few files the RTO could be minutes. For an entire data center it could be weeks. The RTO can even change based on the nature of the disaster. If you lose power to the building due to a natural disaster you may not even be able to start recovery for days, which will extend the RTO due to circumstances outside your control.
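To make those two numbers concrete, here’s a minimal sketch in Python. Every figure in it is made up for illustration; the point is only that RPO and RTO are separate questions you answer for each incident.

```python
from datetime import timedelta

# Hypothetical targets -- both numbers are made up for illustration.
RPO = timedelta(hours=4)    # how much data, measured in time, we can afford to lose
RTO = timedelta(hours=24)   # how long we can afford to be down

def meets_objectives(time_since_last_good_backup, estimated_restore_time):
    """Return (rpo_ok, rto_ok) for a given incident."""
    return (time_since_last_good_backup <= RPO,
            estimated_restore_time <= RTO)

# A handful of lost files restored from last night's backup: fine on both counts.
print(meets_objectives(timedelta(hours=2), timedelta(minutes=45)))    # (True, True)

# A whole data center with no power for days before recovery can even start:
# the RPO may hold, but the RTO is blown by circumstances outside your control.
print(meets_objectives(timedelta(hours=2), timedelta(days=10)))       # (True, False)
```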

For a business or organization looking to stay up and running during a disaster, RTO is critical but so too is the need for business continuity. How critical is it? The category was renamed to “Disaster Recovery and Business Continuity” many years ago. It’s not enough to get your data back. You have to stay up and running as much as possible during the process. You’ve probably experienced this if you’ve ever been to a store that didn’t have working registers or the ability to process credit cards. How can you pay for something if you can’t ring it up or process a payment?

Business continuity isn’t about the data. It’s about keeping the lights on while you recover it. In the case of my son’s school they’re going to teach the old-fashioned way. Lectures and paper are going to replace videos and online quizzes. Teachers are thankfully very skilled in this manner. They’ve spent hundreds if not thousands of hours in a classroom instructing with a variety of techniques. Are your employees equally skilled when everything goes down? Could they get the job done if your Exchange Server goes down or they’re unable to log into Salesforce?

Back to Good, Eventually

In order to make sure you have a business left to recover you need to have some sort of a continuity plan. Especially in a world where cyberattacks are common you need to know what you have to do to keep things going while you work on fixing the damage. Most bad actors are counting on your inability to conduct business as the driver to pay the ransom. If you’re losing thousands of dollars per minute you’re more likely to cave in and pay than try to spend days or weeks recovering.

Your continuity plan needs to exist separately from your backup RTO. It may sound pessimistic, but you need a plan for what happens if the RTO is met as well as one for what happens if you miss it. You don’t want to count on a quick return to normal operations as your continuity plan only to find out you’re not going to get there.

The other important thing to keep in mind is that continuity plans need to be functional, not perfect. You use the systems you use for a reason. Credit card machines make processing payments quick and easy. If they’re down you’re not going to have the same functionality. Yes, using the old manual process with paper slips and carbon copies is a pain and takes time. It’s also the only way you’re going to be able to take those payments when you can’t use the computer.

You also need to plan for the logistics of your continuity plan. If you’re suddenly using more paper, such as invoices or credit card slips, where do you store those? How will you process them once the systems come back online? Will you need to destroy anything after it’s entered? Does that need to happen in a special way? All of these questions should be asked now so there is time to debate them instead of waiting until you’re in the middle of a disaster to solve them.


Tom’s Take

Disasters are never fun and we never really want them to happen. However, we need to make sure we’re ready when they do. You need to have a plan for how to get everything back as well as how to keep doing everything you can until that happens. You may not be able to do 100% of the things you could before, but if you don’t try to do at least some of them you’re going to lose a lot more in the long run. Have a plan and make sure everyone knows what to do when disaster strikes. Don’t count on getting everything back as your only path to recovery.

Cleaning Out The Cruft

I spent the weekend doing something I really should have done a long time ago. I went through my piles of technology that I was going to get around to using one day and finally got rid of anything I didn’t recognize. Old access points, old networking gear, and even older widgets that went to devices that I don’t even remember owning.

Do you have one of these piles? Boxes? Corners of your office or cave? The odds are good there’s a pile of stuff that you keep thinking you’re eventually going to get around to doing something with someday. Except someday hasn’t come yet. So maybe it’s time to get rid of that pile. Trust me, you’re going to feel better for getting rid of that stuff.

What to do with it? It needs to be properly recycled, so don’t just toss it in the trash can. Anything with electronic circuits needs to be properly disposed of, so look for an electronics recycling facility. Yes, there are stories that electronics recycling isn’t all it’s cracked up to be, but it’s better than spreading e-waste everywhere.

Consider donating the devices to a trade school or other maker space. Maybe they won’t work properly as intended but giving students the chance to take them apart is a much better option than just junking it all. Maybe you’ll inspire the next group of scientists and inventors because your old 802.11g access point fascinated them when they pulled it apart.

No matter whether you recycle or donate you should go through it all. Consolidate and be honest with yourself. If you don’t recognize it or haven’t used it in the last few months you’re not going to miss it.

Why Do We Accept Bad Wireless Clients?

We recorded a fun roundtable discussion last week during Mobility Field Day that talked about the challenges that wireless architects face in their daily lives. It’s about an hour but it’s packed with great discussions about hard things we deal with:

One of the surprises for me is that all the conversations came back to how terrible wireless clients can be. We kept returning to how hard it is to find quality clients and how we adjust our expectations for the bad ones.

Driven to Madness

Did you know that 70% of Windows crashes are caused by third-party drivers? That’s Microsoft’s own research saying it. That doesn’t mean that Windows is any better or more stable in its OS design compared to Linux or macOS. However, I’ve fiddled with drivers on Linux and I can tell you how horrible that experience can be1. Windows is quite tolerant of hardware that wouldn’t work anywhere else. As long as the manufacturer provides a driver you’re going to get something that works most of the time.

Apply that logic to a wireless networking card. You can buy just about anything and install it on your system and it will mostly work. Even with reputable companies like Intel you have challenges though. I have heard stories of driver updates working in one release and then breaking horribly in another. I’ve had to do the dance of installing beta software to make a function work at the expense of the stability of the networking stack. Anyone that has ever sent out an email cautioning users not to update any drivers on their system knows the pain that can be caused by bad drivers corrupting clients.

That’s just the software we can control. What if it’s an OS we can’t do anything about? More and more users are turning to phones and tablets for their workhorse devices. Just a casual glance at YouTube will reveal a cornucopia of videos about using a tablet as a daily driver machine. Those devices aren’t immune to driver challenges. They just come in a hidden package during system updates. Maybe the developers decided to roll out a new feature. Maybe they wanted to test a new power management algorithm. Maybe they’re just chaotic neutral and wanted to disrupt the world. Whatever the reason, you’re stuck with the results. If you can’t test it fast enough you may find your users have updated their devices chasing a feature. Most companies stop signing the code for the older version shortly after issuing an update so downgrading is impossible. Then what? You have a shiny brick? Maybe you have to create a special network that disables features for them? There are no solid answers.

Pushing Back

My comment in the roundtable boils down to something simple: Why do we allow this to happen? Why are we letting client manufacturers do this? The answer is probably simpler than you realize. We do it because users expect every device to work. Just like with the Windows driver issue, you wouldn’t plug something into a computer and then expect it not to work, right? Wireless is no different to the user. They want to walk in somewhere and connect. Whether it’s a coffee shop or their home office or the corporate network it needs to be seamless and friction-free.

Would you expect the same of an Ethernet cable? Or a PATA hard drive? Would you expect to be able to bring a phone from home and plug it into your corporate PBX? Of course not. Part of the issue is a lack of visible incompatibility. If you know the Ethernet cable won’t plug into a device you won’t try to connect it. If the cable for your disk drive isn’t compatible with your motherboard you get a different drive. With wireless we expect the nerds in the back to “make it work”. Wireless is one of the best protocols at making things work poorly just to say it is up and running. If you had an Ethernet network with 15% packet loss you’d claim it was broken. Yet Wi-Fi will connect and drop packets due to bad SNR and other factors because it’s designed to work under adverse conditions.
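As a rough illustration of why that 15% figure matters, here’s a toy calculation of my own (not anything from the roundtable, and nothing like real 802.11 behavior): even if the only penalty for loss were retransmission, every lost frame is airtime you pay for again.

```python
# Toy model: assume every lost frame is simply resent until it arrives, and
# ignore rate adaptation, aggregation, and backoff. Even this rough math shows
# why 15% loss would be called "broken" on a wired network.

def expected_transmissions(loss_rate: float) -> float:
    """Average number of times a frame must be sent for one successful delivery."""
    return 1.0 / (1.0 - loss_rate)

for loss in (0.001, 0.05, 0.15):
    print(f"{loss:6.1%} loss -> {expected_transmissions(loss):.2f}x airtime per frame")
```

And that is before you count the latency spikes and rate shifting that bad SNR brings along for the ride.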

Why do we tolerate bad clients? Why don’t we push back against the vendors and tell them to do better? The standard argument is that we don’t control the client manufacturing process. How are we supposed to tell vendors to support a function if we can’t make our voices heard? While we may not be able to convince Intel or Apple or Samsung to build in support for specific protocols, we can effect that change with consumption. If you work in an enterprise and you need support for something, say 802.11r, you can refuse to purchase a device until it’s supported.

But wait, you say, I don’t control that either. You may not control the devices but you control the network to which they attach. You can tell your users that the device isn’t supported. Just like a PATA hard disk or a floppy drive, you can tell users that what they want to do won’t work and they need to do something different. If they want to use their personal iPad for work or their ancient laptop to connect, they need to update it or use a different communications method. If your purchasing department wants to save $10 per laptop because they come with inferior wireless cards, you can push back and tell them that the specs aren’t compatible with the network setup. Period, full stop, end of sentence.


Tom’s Take

The power to solve bad clients won’t come from companies that make money doing the least amount of work possible. It won’t come from companies that don’t provide feedback in the form of lost sales. It will come when someone puts their foot down and refuses to support any more bad client hardware and software. If the Wi-Fi Alliance won’t enforce good client connectivity it’s time we do it for them.

If you disagree I’d love to hear what you think. Is there a solution I’m not seeing? Or are we just doomed to live with bad client devices forever?


  1. If you say Winmodem around me I will scream. ↩︎

Monday Mobility Quick Thoughts

I’m getting ready for Mobility Field Day 8 later this week and there’s been a lot of effort making sure we’re ready to go. That means I’ve spent lots of time thinking about event planning instead of writing. So I wanted to share some quick thoughts with you ahead of this week as well as WLPC Europe next week.

  • I remain convinced that half of the objections raised by oversight organizations when it comes to adopting new technology come from the fact that they got caught flat-footed and weren’t ready for it to be popular. Whether it’s the Wi-Fi 6E safety issue or the report earlier this year from the FAA about 5G and airports, it just seems like organizations spend less time doing actual investigation and more time writing press releases about how they haven’t figured it all out yet.
  • I also remain cautiously optimistic that the new Apple devices rumored to be coming out later this year, namely the iPad Pro and MacBook Pro with M2 chips, will have Wi-Fi 6E support. Yes, the iPhone didn’t. It’s also a smaller device with less room to add new hardware. The iPad and MacBook have historically gotten new chips before the smaller mobile device does. If I’m wrong then I guess we’ll get to see if 6E is enough of a factor to get people to ditch their Apple device for a Google or Samsung one.
  • As we rely more and more on software to expand the capabilities of our hardware I think we’re going to see more and more companies working toward the model of hardware-as-a-service. As in you lease the equipment from them for a monthly payment and, in return, you get to have a base level of features that can be expanded in higher “tiers” of service. Expect some more on this idea in the near future with the launch of solutions like Nile.

Tom’s Take

Make sure you tune in for Mobility Field Day 8 and don’t forget to tell us what you think! Maybe by next year we’ll have lots of Wi-Fi 7 content to discuss.

Brand Protection

I woke up at 5am this morning to order a new iPhone. I did this because I wanted the new camera upgrades along with some other nice-to-haves. Why did I get an iPhone and not a new Samsung? Why didn’t I look at any of the other phones on the market? It’s because I am a loyal Apple customer at this point. Does that mean I think the iPhone is perfect? Far from it! But I will choose it in spite of the flaws because I know it has room to be better.

That whole story is repeated time and again in technology. People find themselves drawn to particular companies or brands. They pick a new phone or computer or car based on their familiarity with the way they work or the design choices that are made. But does that mean they have to be loyal to that company no matter what?

Agree to Disagree

One of the things that I feel is absolutely paramount to being a trusted advisor in the technology space is the ability to be critical of a product or brand. If you look at a lot of the ambassador or influencer program agreements you’ll see language nestled toward the bottom of the legalese. That language usually states you are not allowed to criticize the brand for their decisions or talk about them in a disparaging way. In theory the idea is important because it prevents people from signing up for the program and then using the platform to harshly and unfairly criticize the company.

However, the dark side of those agreements usually outweighs the benefits. The first issue is that companies will wield the power to silence you to great effect. The worst offenders will have you removed from the program and potentially even sue you. Samsung almost stranded bloggers 10 years ago because of some brand issues. At the time it seemed crazy that a brand would do that. Today it doesn’t seem quite so far-fetched.

The second issue is that those agreements are written in such a way that they can cause issues for you even if you didn’t realize you were doing something you weren’t supposed to be doing. Think about celebrities that have tweeted about a new Android phone only for the tweet metadata to show it was sent with Twitter for iPhone. How about companies that get very upset when you discuss companies they see as competitors? Even if you don’t see them as competitors or don’t see the issue with it, you may find yourself running afoul of the brand when they get mad about you posting a pic of their product next to the supposed competition.

In my career I’ve worked at a value-added reseller (VAR) where I found myself bound by certain agreements to talk positively about brands. I’ve also found myself on the wrong side of the table when that brand went into a bidding process with another VAR and then tried to tell me I couldn’t say bad things about them in the process because I was also their partner. The situation was difficult because I was selling against a partner that went with another company but I also needed to do the work to put together the bid. Hamstringing me by claiming I had to play by some kind of weird rules ultimately made me very frustrated.

Blind Faith

Do companies really want ambassadors that only say positive things about the brand? Do they want people to regurgitate the marketing points with everyone and never discuss the downsides of the product? Would you trust someone that only ever had glowing things to say about something you were trying to buy?

The reality of our world today is that the way that people discuss products like this influences what we think about them. If the person doing the discussion never has a negative thing to say about a company then it creates issues with how they are perceived. It can create issues for a supposedly neutral or unbiased source if they only ever say positive things, especially if it later comes out they weren’t allowed to say something negative for fear they’d get silenced or sued.

Think about those that never say anything negative toward a brand or product. You probably know them by a familiar epithet: fanboys. Whether it’s Apple or Tesla or Android or Ford there are many people out there that aren’t just bound by agreement to always speak positively about something. They will go out of their way to attack those that speak ill of their favorite product. If you’ve ever had an interaction with a fan online that left you shaking your head because you can’t understand why they don’t see the issues, you know how difficult that conversation can be.

As a company, you want people discussing the challenges your product could potentially face. You want an honest opinion that it doesn’t fit in a particular vertical, for example. Imagine how upset a customer would be if they bought your product based on a review from a biased influencer only to find that it didn’t fit their need because the influencer couldn’t say anything negative. Would that customer be happy with your product? Would the community trust that influencer in the future?


Tom’s Take

Honesty isn’t negativity. You can be critical of something you enjoy and not insinuate you’re trying to destroy it. I’ll be the first person to point out the shortcomings of a product or company. I’ll be fair but honest. I’ll point out where the improvements need to be made. One of the joys of my day job at Tech Field Day is that I have the freedom to say what I want in my private life and not worry about my work agreements getting me in trouble as has happened with some in the past. I’ll always tell you straight up how I feel. That’s how you protect your brand. Not with glowing reviews but with honest discussion.

When Were You Last a Beginner?

In a couple of weeks I’m taking the opportunity to broaden my leadership horizons by attending the BSA leadership course known as Philmont Leadership Challenge. It’s a course that builds on a lot of the things that I’ve been learning and teaching for the past five years. It’s designed to be a sort of capstone for servant leadership and learning how to inspire others. I’m excited to be a part of it in large part because I get to participate for a change.

Being a member of the staff for my local council Wood Badge courses has given me a great opportunity to learn the material inside and out. I love being able to teach and see others grow into leaders. It’s also inspired me to share some of those lessons here to help others in the IT community that might not have the chance to attend a course like that. However, the past three years have also shown me the value of being a beginner at something from time to time.

Square One

Everyone is new at something. No one is born knowing every piece of information they’ll need to know for their entire lives. We learn language and history and social skills throughout our formative years. When we get to our career we learn skills and trades and figure out how to do complex things easily. Some of us also learn how to lead and manage others. It’s a process of building layer upon layer to be better at what we do. Those skills give us the chance to show how far we’ve come in a given area by the way we understand how the complex things we do interact.

One of my favorite stories about this process is when I first started studying for my CCIE back in 2008. I knew the first place I should look was the Cisco Press certification guide for the written exam. As I started reading through the copy I caught myself thinking, “This is easy. I already know this.” I even pondered why I bothered with those pesky CCNP routing books because everything I needed to know was right here!

The practitioners in the audience have already spotted the logical fallacy in my thinking. The CCIE certification guide was easy and remedial for me because I’d already spent so much time reading over those CCNP guides. And those CCNP guides only made sense to me because I’d studied for my CCNA beforehand. The advanced topics I was refreshing myself on could be expanded because I understood the rest of the information that was being presented already.

When you’re a beginner everything looks bigger. There’s so much to learn. It’s worrisome to try and figure out what you need to know. You spend your time categorizing things that might be important later. It can be an overwhelming process. But it’s necessary because it introduces you to the areas you have to understand. You can’t start off knowing everything. You need to work your way into it. You need to digest information and work with it before moving on to add more to what you’ve learned. Trying to drink from a firehose makes it impossible to do anything.

However, when you approach things from a perspective of an expert you lose some of the critical nature of being bad at something. You might think to yourself that you don’t need to remember a protocol number or a timer value because “they never worry about that anyway”. I’ve heard more than a few people in my time skip over valuable information at the start of a course because they want to get to the “good stuff” that they just know comes later. Of course, skipping over the early lessons means they’re going to be spending more time reviewing the later information because they missed the important stuff up front.

Those Who Teach

You might think to yourself that teaching something is a harder job. You need to understand the material well enough to instruct others and anticipate questions. You need to prep and practice. It’s not easy. But it also takes away some of the magic of learning.

Everyone has a moment in their journey with some technology or concept where everything just clicks. You can call it a Eureka moment or something similar but we all remember how it felt. Understanding how the pieces fit together and how you grasp that interconnection is one of the keys to how we process complex topics. If you don’t get it you may never remember it. Those moments mean a lot to someone at the start of their journey.

When you teach something you have to grasp it all. You may have had your Eureka moment already. You’re also hoping that you can inspire one in others. If you’re trying to find ways to impart the knowledge to others based on how you grasped it you may very well inspire that moment. But you also don’t have the opportunity to do it for yourself. We’re all familiar with the old adage that familiarity breeds contempt. It’s easy to fall into that trap with a topic you are intimately familiar with.

In your career have you ever asked a question about a technical subject to an expert that started their explanation with “it’s really easy…”? Most of us have. We’ve probably even said that phrase ourselves. But it’s important to remember that not everyone has had the same experiences. Not everyone knows the topic to the level that we know it. And not everyone is going to form the same connections to recall that information when they need it again. It may be simple to you but for a beginner it’s a difficult subject they’re struggling to understand. How they comprehend it relies heavily on how you impart that knowledge.

Wide Eyed Wonder

Lastly, the thing that I think is missing in the expert level of things is the wonder of learning something new for the first time. It’s easy to get jaded when you have to take in a new piece of information and integrate it into your existing view. It can be frustrating in cases where the new knowledge conflicts with old knowledge. We spent a lot of time learning the old way and now we have to change?

Part of the value of being a beginner is looking at things with fresh eyes. No doubt you’ve heard things like “this is the way we’ve always done it” in meetings before. I’ve written about challenging those assumptions in the past and how to go about doing it properly but having a beginner perspective helps. Pretend I’m new to this. Explain to me why we do it that way. Help me understand. By taking an approach of learning you can see the process and help fix the broken pieces or optimize the things that need to be improved.

Even if you know the subject inside and out it can be important to sit back and think through it from the perspective of a beginner. Why is a vanilla spanning tree timer 50 seconds? What can be improved in that process? Why should things not be hurried? What happens when things go wrong? How long does it take for them to get fixed? These are all valid beginner questions that help you understand how others look at something you’re very familiar with. You’ll find that being able to answer them as a beginner would will lead to even more understanding of the process and the way things are supposed to work.
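Taking the spanning tree question as a worked example, the answer a beginner eventually digs up is just arithmetic on the classic 802.1D defaults, sketched here for clarity:

```python
# Classic 802.1D spanning tree defaults -- the "why 50 seconds?" question answers
# itself once you add up the timers a blocked port has to wait through.
MAX_AGE = 20        # seconds before a switch gives up on the last-known root BPDU
FORWARD_DELAY = 15  # seconds spent in each of the listening and learning states

worst_case = MAX_AGE + 2 * FORWARD_DELAY
print(f"Worst-case 802.1D convergence: {worst_case} seconds")   # 50
```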


Tom’s Take

There are times when I desperately want to be new at something again. I struggle with finding the time to jump into a new technology or understand a new concept because my tendency is to want to learn everything about it and there are many times when I can’t. But the value of being new at something isn’t just acquiring new knowledge. It’s learning how a beginner thinks and seeing how they process something. It’s about those Eureka moments and integrating things into your process. It’s about chaos and change and eventually understanding. So if you find yourself burned out it’s important to stop and ask when you were last a beginner.

Why 2023 is the Year of Wi-Fi 6E

If you’re like me, you chuckle every time someone tells you that next year is the year of whatever technology is going to be hot. Don’t believe me? Which year was the Year of VDI again? I know that writing the title of this post probably made you shake your head in amusement but I truly believe that Wi-Fi 6E will hit its adoption point next year.

Device Support Blooms

There are rumors that the new iPhone 14 will adopt Wi-Fi 6E. There were the same rumors when the iPhone 13 was coming out and the iPhone rumor mill is always a mixed bag but I think we’re on track this time. Part of the reason for that is the advancements made in Wi-Fi 6 Release 2. The power management features for 6ER2 are something that should appeal to mobile device users, even if the name is confusing as can be.

Mobile phones don’t make a market. If they were the only driver for wireless adoption the Samsung handsets would have everyone on 6E by now. Instead, it’s the ecosystem. Apple putting a 6E radio in the iPhone wouldn’t be enough to tip the scales. It would take a concerted effort of adoption across the board, right? Well, what else does Apple have on deck that can drive the market?

The first thing is the rumored M2 iPad Pro. It’s expected to debut in October 2022 and feature upgrades aside from the CPU, like wireless charging. One of the biggest features would be the inclusion of a Wi-Fi 6E radio as well to match the new iPhone. That would mean both of Apple’s mobile devices could enjoy the faster and less congested bandwidth of 6 GHz. The iPad would also be easier to build a new chip around compared to the relatively cramped space inside the iPhone. Given the professional nature of the iPad Pro one might expect an enterprise-grade feature like 6E support to help move some units.

The second thing is the looming M2 MacBook Pro. Note for this specific example I’m talking about the 14” and 16” models that would feature the Pro and Max chips, not the 13” model running a base M2. Apple packed the M1 Pro and M1 Max models with new features last year, including more ports and a snazzy case redesign. What would drive people to adopt the new model so soon? How about faster connectivity? Given that people are already complaining that the M1 Pro has slow Wi-Fi, Apple could really silence their armchair critics with a Wi-Fi 6E radio.

You may notice that I’m relying heavily on Apple here as my reasoning behind the projected growth of 6E in 2023. It’s not because I’m a fanboy. It’s because Apple is one of the only companies that controls their own ecosystem to the point of being able to add support for a technology across the board and drive adoption among their user base. Sure, we’ve had 6E radios from Samsung and Dell and many others for the past year or so. Did they drive the sales of 6E radios in the enterprise? Or even in home routers? I’d argue they haven’t. But Apple isn’t the only reason why.

Oldie But Goodie

The last reason that 2023 is going to be the year of Wi-Fi 6E is because of timing. Specifically I’m talking about the timing of a refresh cycle in the enterprise. The first Wi-Fi 6 APs started coming into the market in 2019. Early adopters jumped at the chance to have more bandwidth across the board. But those APs are getting old by the standards of tech. They may still pass traffic but users that are going back to the office are going to want more than standard connectivity. Especially if those users splurged on a new iPhone or iPad for Christmas or are looking for a new work laptop of the Macintosh variety.

Enterprises may not have been packed with users for the past couple of years but that doesn’t mean the tech stood still. Faster and better is always the mantra of the cutting edge company. The revisions in the new standards would make life easier for those trying to deploy new IoT sensors or deal with congested buildings. If enterprise vendors adopt these new APs in the early part of the year it could even function as an incentive to get people back in the office instead of the slow, insecure coffee shop around the corner.

One other little quirky thing comes from a report that Intel is looking to adopt Wi-Fi 7. It may just be the cynic in me talking but as soon as we start talking about a new technology on the horizon people start assuming that the “current” cutting edge tech is ready for adoption. It’s the same as people that caution you not to install a new operating system until after the first patch or service release. Considering that Wi-Fi 6 Release 2 is effectively Wi-Fi 6E Service Pack 1, I think the cynics in the audience are going to think that it’s time to adopt Wi-Fi 6E since it’s ready for action.


Tom’s Take

Technology for the sake of tech is always going to fail. You need drivers for adoption and usage. If cool tech won the day we’d be watching Betamax movies or HD-DVD instead of streaming on Netflix. Instead, the real winners are the technologies that get used. So far that hasn’t been Wi-Fi 6E for a variety of reasons. However, with the projections of releases coming soon from Apple I think we’re going to see a massive wave of adoption of Wi-Fi 6E in 2023. And if you’re reading this in late 2023 or beyond and it didn’t happen, just mentally change the title to whatever next year is and that will be the truth.

Enforcing SLAs with Real Data

I’m sure by now you’ve probably seen tons of articles telling you about how important it is to travel with location devices in your luggage. The most common one I’ve seen is the Apple AirTag. The logic goes that if you have one in your checked suitcase that you’ll know if there are any issues with your luggage getting lost right away because you’ll be notified as soon as you’re separated from it. The advice is sound if you’re someone that checks your bag frequently or has it lost on a regular basis.

The idea behind using technology to enforce an agreement is a great one. We make these agreements all the time, especially in networking. These service level agreements (SLAs) are the way we know we’re getting what we pay for. Take a leased line, for example. You typically pay for a certain speed and a certain amount of availability. The faster the link or the more available it is the more it costs. Any good consumer is going to want to be sure they’re paying for the right service. How can you verify you’re getting what you’re paying for?

For a long time this was very hard to do. Sure, you could run constant speed tests to check the bandwidth. However, reliability was typically the more expensive thing to add to the service. Making a circuit more reliable means adding more resources on the provider side to ensure it stays up. Allocating those resources means someone needs to pay for them and that is usually on the customer side.

It’s easy to tell when a link goes down during working hours. You can see that you’re not getting traffic out of your network. But what about those times when the circuit isn’t being as heavily utilized? What about the middle of the night or holidays? How can you ensure that you’re getting the uptime you pay for?

Trust and Verify

This is actually one of the groundbreaking areas that SD-WAN pioneered for networking teams when it came out years ago. Because you have a more modern way of maintaining the network as well as a way to route application traffic based on more than source and destination address you can build in some fancy logic to determine the reliability of your circuits too.

One of the biggest reasons in the past to use a leased line over a broadband circuit was that reliability factor. You need MPLS because it’s more stable than a cable modem. You get guaranteed bandwidth. It’s way more reliable. Those are the claims that you might hear when you talk to the salesperson at your ISP. But do they really hold water? The issue for years was that you had no way of knowing because your analytics capabilities were rudimentary in most cases. You could monitor the link interface but that was usually from the central office and not the remote branch. And depending on the polling interval you could miss downtime events.

SD-WAN changed that because now you could put an intelligent device on the edge that constantly monitored the link for throughput and reliability. The first thought was to do this for the broadband links to see how congested and unreliable they could be to know when to switch traffic to the more stable link. Over time admins started putting those same monitors on the MPLS and leased line circuits as well. They found that while broadband, such as cable and DSL, was typically more reliable than the salespeople would have you believe, it was the relative unreliability of the leased circuits that surprised everyone.
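To show how little machinery the basic idea needs, here’s a minimal sketch of an edge probe in Python. The target address, SLA figure, and intervals are all assumptions for illustration; a real SD-WAN appliance measures far more (loss, latency, jitter) at much finer resolution.

```python
# A minimal sketch of the idea, not any vendor's implementation: probe a link on
# an interval, then compare measured availability against the contracted SLA.
import subprocess
import time

SLA_AVAILABILITY = 0.999      # what the contract promises (hypothetical)
PROBE_TARGET = "192.0.2.1"    # hypothetical far-end address (documentation range)
PROBE_COUNT = 5               # a real monitor runs continuously, not five times
INTERVAL_SECONDS = 10

def probe_once(target: str) -> bool:
    """One ICMP probe; success means an answer within a second (Linux ping flags)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "1", target],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

results = []
for _ in range(PROBE_COUNT):
    results.append(probe_once(PROBE_TARGET))
    time.sleep(INTERVAL_SECONDS)

measured = sum(results) / len(results)
verdict = "meets" if measured >= SLA_AVAILABILITY else "misses"
print(f"Measured availability {measured:.3%} {verdict} the {SLA_AVAILABILITY:.1%} SLA")
```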

Much like the above example of lost luggage you previously had to take the airlines at their word when they said something was lost. No details were available. Did your bag ever leave the airport when you flew out? Did it make it to the right location but get stuck somewhere? Did your bag get on the wrong plane and end up in Cleveland instead of Las Vegas? Because the airline reporting systems were so opaque you had no idea. Now, with the advent of Tile and AirTags and many others, you can see exactly where it is at all times.

The same thing happened with SD-WAN analytics. Now, armed with proof that links weren’t as reliable as the SLA, admins could choose to do one of two things. They could either force the provider to honor the agreement and provide better service to meet it. Or the company could reduce their contract to the level of service they are actually getting instead of the one that is promised but not delivered. Having the data and a way to prove the reliability helped with the negotiations.

Once the providers knew that organizations had the ability to verify things they also had to up their game by including ways to monitor their own performance to ensure that they were meeting their own metrics. Overall the situation led to better results because having technology to enforce agreements makes all sides aware of what’s going on.


Tom’s Take

I’ve only had a couple of bags misplaced in my travel time so I haven’t yet had the need to go down the AirTag route. I’ve done it on occasion for international travel because knowing when my bag is about to come out on the carousel helps me figure out how much time it’s going to take me to get through customs. The idea of using technology to make things more transparent is important to me on a bigger scale though. If you can’t verify the promises you make then you shouldn’t make them. Having the AirTag or SD-WAN software to ensure we’re getting what we pay for are just parts of a bigger opportunity to provide better experiences.

All Problems Are Hardware Problems

When I was a lad in high school I worked for Walmart. I learned quite a bit about retail at my early age but one of the fascinating things I used in the late 1990s was a wireless inventory unit, colloquially known as a Telxon. I was amazed by the ability to get inventory numbers on a device without a cable. Since this was prior to the adoption of IEEE 802.11 it was a proprietary device that only worked with that system.

Flash forward to the 2020s. I went to Walmart the other day to look for an item and I couldn’t find it. I asked one of the associates if it was in stock. They said they could check and pulled out their phone. To my surprise they were able to launch an app and see that it was in stock in the back. As I waited for them to return with the item I thought about how 25 years of progress had changed that hardware solution into something software focused.

Hardware Genesis

All problems start as hardware problems. If there’s a solution to an issue you’re going to build something first. Need to get somewhere fast? Trains or cars. Need to get there faster? Planes. Communications? Phones or the Internet. All problems start out by inventing a thing that does something.

Technology is all about overcoming challenges with novel solutions. Sometimes those leaps are major, like radio. Other times the tool is built on that technology, like wireless. However they are built they almost always take physical form first. The reasoning is simple in my view. You have to have the capability to build something before it can be optimized.

If you’re sitting there saying to yourself that a lot of our solutions today are software-focused you would be correct. However, you’re also making some assumptions about hardware ubiquity already. Sure, the smartphone is a marvel of software simplicity that provides many technological solutions. It’s also a platform that relies on wireless radio communications, Internet, and cloud computing. If you told someone in 2005 that cellular phones would be primarily used as compute devices they would have laughed at you. Because the hardware at the time was focused on audio communications and only starting to be used for other things, like texting or PDA functions.

Hardware exists first to solve the issue at hand. Once we’ve built something that can accomplish the goal we can then optimize it and make it more common. Servers seem like a mainstay of our tech world today but client/server architecture wasn’t always the king of the hill. Cloud computing seems like an obvious solution today but AWS and GCP haven’t always existed. Servers needed to become commonplace before the idea to cluster them together and offer their capacity as a rentable service emerged.

Software to the Rescue

Software doesn’t like unpredictability. Remember the Telxon example above? Those devices ran proprietary software that interfaced with a single server. There was no variation. If you wanted to use the inventory software on a different device you were out of luck. There was zero flexibility. It was a tool that was designed for a purpose. So many of the things in our lives are built the same way. Just look at your home phone, for example. That is, if you still have one. It’s a simple device that doesn’t even run software as we know it. Just a collection of electrical switches and transistors that accomplish a goal.

However, we have abstracted the interface of a telephone into a device that provides flexibility. Any smart phone you see is a computer running software with a familiar interface to make a phone call. There’s no specialized hardware involved. Just an interface to a system that performs the same functions that a traditional phone would. There are no wires. No switches like an old telecom engineer would recognize. Just a software platform that sends voice over the Internet to a receiving station.

Software becomes an option when we’ve built a hardware environment that is sufficiently predictable as to allow the functions to be abstracted. The Walmart Telxon can function as an app on a smartphone now because we can write an app to perform those same functions. We’ve also created interfaces into inventory systems that can be called by software apps and everyone that works for Walmart either carries a phone or has access to a device that can run the software.


Tom’s Take

Programs and platforms provide us with the flexibility to do things any time we want. But they only have that capability because of the infrastructure we’ve built out. We have to build the hardware before we can abstract the functions into software. The most complicated unsolved problem you can think of today will eventually be solved by a piece of kit that will eventually become a commodity and replicated as a series of functions that run on everything years later. That’s the way that things have always been and how they will always be.