Asking The Right Question About The Wireless Future

It wasn’t that long ago that I wrote a piece about how Wi-Fi 6E isn’t going to move the needle very much in terms of connectivity. I stand by my convictions that the technology is just too new and doesn’t provide a great impetus to force users to upgrade or augment systems that are already deployed. Thankfully, someone at the recent Mobility Field Day 10 went and did a great job of summarizing some of my objections in a much simpler way. Thanks to Nick Swiatecki for this amazing presentation:

He captured so many of my hesitations as he discussed the future of wireless connectivity. And he managed to expand on them perfectly!

New Isn’t Automatically Better

Any time I see someone telling me that Wi-Fi 7 is right around the corner and that we need to see what it brings I can’t help but laugh. There may be devices that have support for it right now, but as Nick points out in the above video, that’s only one part of the puzzle. We still have to wait for the clients and the regulatory bodies to catch up to the infrastructure technology. Could you imagine if we did the same thing with wired networks? If we deployed amazing new cables that ran four times the speed but didn’t interface with the existing Ethernet connections at the client? We’d be laughed out of the building.

Likewise, deploying pre-standard Wi-Fi 7 devices today doesn’t gain you much unless you have a way to access them with a client adapter. Yes, they do exist. Yes, they’re fast. However, they’re no more final than the Draft 802.11n cards that I deployed years and years ago. That doesn’t mean that we’re going to see a lot of benefit from them, however. Because the value of the first generation of a technology is rarely leaps and bounds above what came before it.

A couple of years ago I asked if the M1 MacBook wireless was really slower than the predecessor laptop. Spoiler alert: it is, but not so much you’d really notice. Since then we’ve gained two more generations of that hardware and the wireless has gotten faster. Not because the specs have changed in the standard. It’s because the manufacturers have gotten better at building the devices. We’ve squeezed more performance out of them instead of just slapping a label on the box and saying it’s a version number higher or it’s got more of the MHz things so it must be better.

Nick, in the above video, points this out perfectly. People keep asking about Wi-Fi 7 and they miss the fact that there’s a lot of technology that needs to run very smoothly in order to give us significant gains in speed over Wi-Fi 6 and Wi-Fi 6E. And those technologies probably aren’t going to be implemented well (if at all) in the first cards and APs that come off the line. In fact, given the history of 802.11 specifications, those important features are probably going to be marked as optional anyway to ensure the specification is ratified on time and the shipping hardware can be standardized.

Even in a perfect world you’re going to miss a lot of the advances in the first revision of the hardware. I remember a time when you had to be right under the AP to see the speed increases promised by the “next generation” of wireless. Adding more and more advanced technology to the AP and hoping the client adapters catch up quickly isn’t going to help sell your devices any faster either. Everything has to work together to ensure it all runs smoothly for the users. If you think for a minute that they aren’t going to call you to tell you that the wireless is running slow then you’re very mistaken. They’ll be upset they didn’t get the promised speeds on the box or that something along the line is making their experience difficult. That’s the nature of the beast.

Asking the Right Questions

The other part of this discussion is how to ensure that everyone has realistic ideas about what new technology brings. For that, we recorded a great roundtable discussion about Wi-Fi 7 promises and reality:

I think the biggest takeaway from this discussion is that, despite the hype, we’re not ready for Wi-Fi 7 just yet. The key to having this talk with your stakeholders is to remind them that spending the money on the new devices isn’t going to automatically mean increased speeds or enhanced performance. In fact, you can do a great job of talking them out of deploying cutting edge hardware simply by reminding them that they aren’t going to see anywhere near the vendors’ promises without investing even more in client hardware, and that those amazingly fast multi-spectrum speeds aren’t going to be possible on an iPhone.

We’re not even really touching on the reality that some of the best parts of 6GHz aren’t even available yet because of FCC restrictions. Or that we just assume that Wi-Fi 7 will include 6GHz when it doesn’t even have to. That’s especially true of IoT devices. Lower cost devices will likely have lower cost radios as components, which means the best speed increases are going to be reserved for the most expensive pieces of the puzzle. Are you ready to upgrade your brand new laptop in six months because a new version of the standard came out that’s just slightly faster?

Those are the questions you have to ask and answer with your stakeholders before you ever decide how the next part of the project is going to proceed. Because there is always going to be faster hardware or a newer revision of the specification for you to understand. And if the goalposts keep moving every time something new comes along you’re either going to be broke or extremely disappointed.


Tom’s Take

I’m glad that Nick from Cisco was able to present at Mobility Field Day. Not only did he confirm what a lot of professionals are thinking but he did it in a way that helped other viewers understand where the challenges with new wireless technologies lie. We may be a bit jaded in the wired world because Ethernet is such a bedrock standard. In the wireless world I promise that clients are always going to be getting more impressive and the amount of time between those leaps is going to shrink even more than it already has. The real question should be whether or not we need to chase that advantage.

AI Is Making Data Cost Too Much

You may recall that I wrote a piece almost six years ago comparing big data to nuclear power. Part of the purpose of that piece was to knock the wind out of the “data is oil” comparisons that were so popular. Today’s landscape is totally different thanks to the shifts that the IT industry has undergone in the past few years. I now believe that AI is going to cause a massive amount of wealth transfer away from the AI companies and cause startup economics to shift.

Can AI Really Work for Enterprises?

In this episode of Packet Pushers, Greg Ferro and Brad Casemore debate a lot of topics around the future of networking. One of the things Brad brought up, and Greg latched onto, is that the data being used for AI algorithm training is being stored in the cloud. That massive amount of data is sitting there waiting to be used between training runs, and it’s costing some AI startups a fortune in cloud bills.

AI algorithms need to be trained to be useful. When you use ChatGPT to write a term paper or ask it nonsensical questions, you’re consuming the output of the GPT training run. The real work happens when OpenAI is crunching data and feeding their monster. They have to give it a set of parameters and data to analyze in order to come up with the magic that you see in the prompt window. That data doesn’t just come out of nowhere. It has to be compiled and analyzed.

There are a lot of content creators that are angry that their words are being fed into the GPT algorithm runs and then being used in the results without proper credit. That means that OpenAI is scraping the content from the web and feeding it into the algorithm without care for what they’re looking at. It also creates issues where the validity and accuracy of the data aren’t verified ahead of time.

Now, this focuses on OpenAI and GPT specifically because everyone seems to think that’s AI right now. Much like every solution in the history of IT, GPT-based large language models (LLMs) are just a stepping stone along the way to greater understanding of what AI can do. The real value for organizations, as Greg pointed out in the podcast, can be something as simple as analyzing the trouble ticket a user has submitted and then offering directed questions to help clarify the ticket for the help desk so they spend less time chasing false leads.

No Free Lunchboxes

Where are organizations going to store that data? In the old days it was going to be collected in on-prem storage arrays that weren’t being used for anything else. The opportunity cost of using something you already owned was minimal. After all, you bought the capacity so why not use it? Organizations that took this approach decided to just save every data point they could find in an effort to “mine” for insights later. Hence the references to oil and other natural resources.

Today’s world is different. LLMs need massive resources to run. Unless you’re willing to drop several million dollars to build out your own cluster resources and hire engineers to keep them running at peak performance you’re probably going to be using a hosted cloud solution. That’s easy enough to set up and run. And you’re only paying for what you use. CPU and GPU time is expensive, so you want the job to complete as fast as possible in order to keep your costs low.

What about the data that you need to feed to the algorithm? Are you going to feed it from your on-prem storage? That’s way too slow, even with super fast WAN links. You need to get the data as close to the processors as possible. That means you need to migrate it into the cloud. You need to keep it there while the magic AI building machine does the work. Are you going to keep that valuable data in the cloud, incurring costs every hour it’s stored there? Or are you going to pay to have it moved back to your enterprise? Either way the sound of a cash register is deafening to your finance department and music to the ears of cloud providers and storage vendors selling them exabytes of data storage.
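To put rough numbers on that dilemma, here’s a minimal back-of-the-envelope sketch. The prices and the dataset size are illustrative assumptions, not quotes from any provider, but they show why the finance department hears that cash register either way:

```python
# Illustrative, assumed prices -- not quotes from any cloud provider.
STORAGE_PER_GB_MONTH = 0.023   # assumed object storage rate, $/GB-month
EGRESS_PER_GB = 0.09           # assumed data-transfer-out rate, $/GB

def monthly_storage_cost(dataset_tb: float) -> float:
    """Cost of leaving the training set parked in the cloud for a month."""
    return dataset_tb * 1024 * STORAGE_PER_GB_MONTH

def one_time_egress_cost(dataset_tb: float) -> float:
    """Cost of pulling the training set back on-prem once."""
    return dataset_tb * 1024 * EGRESS_PER_GB

dataset_tb = 500  # hypothetical half-petabyte training corpus
print(f"Keep it in the cloud: ${monthly_storage_cost(dataset_tb):,.0f} per month, forever")
print(f"Move it back home:    ${one_time_egress_cost(dataset_tb):,.0f} one time, plus local storage")
```

Neither line item is small, and the storage one never stops recurring as long as you hold on to the data.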

All those hopes of making tons of money from your AI insights are going to evaporate in a pile of cloud bills. The operations costs of keeping that data are now more than minimal. If you want to have good data to operate on you’re going to need to keep it. And if you can’t keep it locally in your organization you’re going to have to pay someone to keep it for you. That means writing big checks to the cloud providers that have effectively infinite storage, bounded only by the limit on your credit card or purchase order. That kind of wealth transfer makes investors a bit hesitant when they aren’t going to get the casino-like payouts they’d been hoping for.

The shift will cause AI startups to be very frugal in what they keep. They will either amass data only when they think their algorithm is ready for a run or keep only the critical data that they know they’re going to need to feed the monster. That means they’re going to be playing a game with the accuracy of the resulting software as well as giving up the chance that some insignificant piece of data ends up being the key to a huge shift. In essence, the software will all start looking and sounding the same after a while and there won’t be enough differentiation to make them competitive because no one will be able to afford it.


Tom’s Take

The relative ease with which data could be stored turned companies into data hoarders. They kept it forever hoping they could get some value out of it and create a return curve that soared to the moon. Instead, the application for that data mining finally came along and everyone realized that getting the value out of the data meant investing even more capital into refining it. That kind of investment makes those curves much flatter and makes investors more reluctant. That kind of shift means more work and less astronomical payout. All because your resources were more costly than you first thought.

Does Automation Require Reengineering?

During Networking Field Day 33 this week we had a great presentation from Graphiant around their solution. While the whole presentation was great (you should definitely check out the videos linked above), Ali Shaikh said something in one of the sessions that resonated with me quite a bit:

Automation of an existing system doesn’t change the system.

Seems simple, right? But it points to a major issue we’re seeing with automation. Making the existing stuff run faster doesn’t actually fix our issues. It just makes them less visible.

Rapid Rattletraps

Most systems don’t work according to plan. They’re an accumulation of years of work that doesn’t always fit well together. For instance, the classic XKCD comic:

When it comes to automation, the idea is that we want to make things run faster and reduce the likelihood of error. What we don’t talk about is how each individual system has its own quirks and may not even be a good candidate for automation at any point. Automation is all about making things work without intervention. It’s also dependent on making sure the process you’re trying to automate is well-documented and repeatable in the first place.

How many times have you seen or heard of someone spending hours trying to script a process that takes about five minutes to do once or even twice a year? The return on the time invested in automating something like that doesn’t really make sense, does it? Sure, it’s cool to automate everything but it’s not really useful, especially if the task changes every time it’s run in ways that require you to change the inputs. It’s like building a default query for data that needs to be rewritten every time the query is run.
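If you want to make that argument with arithmetic instead of intuition, here’s a tiny sketch in the spirit of the classic XKCD “Is It Worth the Time?” chart. The task durations are hypothetical inputs, not measurements:

```python
def break_even_runs(manual_minutes: float, scripting_hours: float) -> float:
    """How many runs before the script pays back the time spent writing it."""
    return (scripting_hours * 60) / manual_minutes

# A five-minute task done twice a year, scripted over eight hours:
runs = break_even_runs(manual_minutes=5, scripting_hours=8)
print(f"Breaks even after {runs:.0f} runs")          # 96 runs
print(f"At two runs a year: {runs / 2:.0f} years")   # roughly 48 years
```

Half a century to break even is the wall this paragraph describes.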

You’re probably laughing right now but you also have at least one or two things that would fit this bill. Rather than asking if you should be automating this task you should instead be asking why you’re doing it in the first place. Why are you looking to accomplish this goal if it only needs to be done on occasion? Is it something critical like a configuration backup? Or maybe just a sanity check to see that unused switch ports have been disabled or tagged with some kind of security configuration? Are you trying to do the task for safety or security? Or are you doing it as busywork?

Streamlining the System

In all of those cases we have to ask why the existing system exists. That’s because investing time and resources into automating a system can result in a big overrun in budget when you run into unintended side effects or issues that weren’t documented in the first place. Nothing defeats an automation project faster than hitting roadblocks out of nowhere.

If you shouldn’t invest time in automating something that is already there, what should you do instead? How about reengineering the whole process? If you occasionally run configuration backups to make sure you have good copies of the devices why not institute change controls or rolling automatic backups, as in the sketch below? Instead of solving an existing problem with a script why not change the way you do things in a manner that might have other hidden benefits? If you’re scripting changes to ports to verify security status why not have a system in place that creates configuration on those ports when they’re provisioned and requires change controls to enable them?
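As one illustration of what a rolling automatic backup might look like, here’s a minimal sketch. How you actually pull the config off the device is left as a config_text argument, and the retention count is an arbitrary assumption:

```python
from datetime import datetime
from pathlib import Path

def rolling_backup(device: str, config_text: str, keep: int = 30) -> Path:
    """Write a timestamped copy of a device config, pruning to the last `keep`."""
    backup_dir = Path("backups") / device
    backup_dir.mkdir(parents=True, exist_ok=True)

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = backup_dir / f"{device}-{stamp}.cfg"
    target.write_text(config_text)

    # Timestamped names sort chronologically; delete anything past retention
    for stale in sorted(backup_dir.glob("*.cfg"), reverse=True)[keep:]:
        stale.unlink()
    return target
```

Run on a schedule, this replaces the occasional panicked manual backup with a system that never needs to be remembered.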

It feels like extra work. It always seems easier to jump in from the bottom up with both feet and work on a problem until you solve it. Top down means you’re changing the way the system does things instead so the problems either disappear or change to something more manageable. The important question to ask is “where are my resources best spent?” If you see your time as a resource to invest in projects are you better served making something existing work slightly faster? Or would it be better for you to take the time to do something in a different, potentially better way?

If you believe your process is optimized as much as possible and just needs to run on its own that makes for an easy conversation. But if you’re thinking you need to change the way you do things this is a great time to make those changes and use your time investment to do things properly this time around. You may have to knock down a few walls to get there but it’s way better than building a house of cards that is just going to collapse faster.


Tom’s Take

I’m a fan of automation. Batch files and scripting and orchestration systems have a big place in the network to reduce error and multiply the capabilities of teams. Automation isn’t a magic solution. It requires investment of time and effort and a return for the stakeholders to see value. That means you may need to approach the problem from a different perspective to understand what really should be done instead of just doing the same old things a little faster. Future you will thank you for reengineering today.

Victims of Success

It feels like the cybersecurity space is getting more and more crowded with breaches in the modern era. I joke on our weekly Gestalt IT Rundown news show that we could include a breach story every week and still not cover them all. Even Risky Business can’t keep up. However, the defenders seem to be gaining on the attackers and that means the battle lines are shifting again.

Don’t Dwell

A recent article from The Register noted that dwell times for detection of ransomware and malware have dropped almost a full day in the last year. Dwell time is especially important because detecting the ransomware early means you can take preventative measures before it can be deployed. I’ve seen all manner of early detection systems, such as data protection companies measuring the entropy of data-at-rest to determine when it is no longer able to be compressed, meaning it likely has been encrypted and should be restored.
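A toy version of that entropy heuristic fits in a few lines of Python. Encrypted data is statistically indistinguishable from random noise, so its byte-level Shannon entropy sits near the 8 bits/byte ceiling, while compressible business data usually lands well below it. The 7.9 threshold here is an assumption for illustration, not any vendor’s tuned value:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; 8.0 is the ceiling for uniformly random data."""
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.9) -> bool:
    # High entropy means the block won't compress -- a hint it's ciphertext
    return shannon_entropy(data) >= threshold

print(shannon_entropy(b"AAAA" * 1024))    # ~0.0: trivially compressible
print(shannon_entropy(os.urandom(4096)))  # ~7.95: looks like ciphertext
print(looks_encrypted(os.urandom(4096)))  # True
```

A backup system watching that number climb across a volume has a strong early signal that ransomware is at work.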

Likewise, XDR companies are starting to reduce the time it takes to catch behaviors on the network that are out of the ordinary. When a user starts scanning for open file shares and doing recon on the network you can almost guarantee they’ve been compromised somehow. You can start limiting access and begin cleanup right away to ensure that they aren’t going to get much further. This is an area where zero trust network architecture (ZTNA) is shining. The less a particular user has access to without additional authentication, the less they can give up before the controls in place in the system catch them doing something out of the ordinary. This holds true even if the user hasn’t been tricked into giving up their credentials but instead is working with the attackers through monetary compensation or misguided ire toward the company.
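To make that “user suddenly scanning for open file shares” signal concrete, here’s a minimal sketch of the kind of behavioral check involved. The event format, window, and threshold are all hypothetical, and a real XDR product does far more than this:

```python
from collections import defaultdict

def find_share_scanners(events, window_secs=300, threshold=20):
    """events: iterable of (timestamp, src_user, dst_share) tuples.
    Flags any user touching `threshold` distinct shares within the window."""
    by_user = defaultdict(list)
    for ts, user, share in events:
        by_user[user].append((ts, share))

    flagged = set()
    for user, touches in by_user.items():
        touches.sort()  # order by timestamp
        for i, (start, _) in enumerate(touches):
            shares = {s for t, s in touches[i:] if t - start <= window_secs}
            if len(shares) >= threshold:
                flagged.add(user)  # recon-like burst: contain and investigate
                break
    return flagged
```

A normal user touches a handful of shares a day; twenty distinct shares in five minutes is the behavioral tell the paragraph above describes.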

Thanks to the advent of technologies like AI, machine learning, and automation we can now put controls in place quickly to prevent the spread of disaster. You might be forgiven for thinking that kind of response will eradicate this vector of attack. After all, killing off the nasty things floating in our systems means we’re healthier overall, right? It’s not like we’re breeding a stronger strain of disease, is it?

Breeding Grounds

Ask a frightening question and get a frightening answer, right? In the same linked Register article the researchers point out that while dwell times have been reduced, the time it takes attackers to capitalize on their efforts has also shrunk. In addition, attackers are looking at multiple vectors of persistence in order to accomplish their ultimate goal of getting paid.

Let’s assume for the moment that you are an attacker that knows the company you’re going after is going to notice your intrusion much more quickly than before. Do you try to sneak in and avoid detection for an extra day? Or do you crash in through the front door and cause as much chaos as possible before anyone notices? Rather than taking the sophisticated approach of persistence and massive system disruption, attackers are instead taking a more low-tech approach to grabbing whatever they can before they get spotted and neutralized.

If you look at the most successful attacks so far in 2023 you might notice they’ve gone for a “quantity over quality” approach. Sure, a heist like Ocean’s 11 is pretty impressive. But so is smashing the display case and running out with the jewels. Maybe it’s not as lucrative but when you hit twenty jewelry stores a week you’re going to make up the low per-heist take with volume.

Half of all intrusion attempts now start with stolen or compromised credentials. There are a number of impressive tools out there that can search for weak points in the system and expose bugs you never even dreamed could exist. But there are also much easier ways to phish knowledge workers for their passwords or just bribe them to gain access to restricted resources. Think of it like the crowbar approach to the heist scenario above.

Lock It Down

Luckily, even the fastest attackers still have to gain access to the system to do damage. I know we all harp on it constantly but the best way to prevent attacks is to minimize the ways that attack vectors get exploited in the first place. Rotate credentials frequently. Have knowledge workers use generated passwords in place of ones that can be tied back to them. Invest in password management systems or, more broadly, identity management solutions in the enterprise. You can’t leak what you don’t know or can’t figure out quickly.
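On the generated-passwords point, the standard library already does the heavy lifting. A minimal sketch using Python’s secrets module (the printed value is just an example of the format):

```python
import secrets

def generate_password(nbytes: int = 16) -> str:
    # token_urlsafe(16) yields roughly 128 bits of entropy in a
    # copy-paste-friendly string with no ties back to the user
    return secrets.token_urlsafe(nbytes)

print(generate_password())  # e.g. 'lDates9YzF3rkZx_QqWuvw'
```

A string like that can’t be guessed from a LinkedIn profile, which is exactly the property you want.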

After that, look at how attackers capitalize on leaks or collusion. I know it’s a tale as old as time but you shouldn’t be running anything with admin access that doesn’t absolutely need it. Yes, even YOUR account. You can’t be the vector for a breach if you are just as unimportant as everyone else. Have a separate account with a completely different password for doing those kinds of tasks. Regularly audit accounts that have system-level privilege and make sure they’re being rotated too. Another great reason for having an identity solution is that the passwords can be rotated quickly without disruption. Oh, and make sure the logins to the identity system are as protected as anything else.

Lastly, don’t make the mistake of thinking you’re an unappealing target. Just because you don’t deal with customer data or have personally identifiable information (PII) stored in your system doesn’t mean you’re not going to get swept up in the next major attack. With the quantity approach the attackers don’t care what they grab as long as they can get out with something. They can spend time analyzing it later to figure out how to best take advantage of what they’ve stolen. Don’t give them the chance. Security through obscurity doesn’t work well in an age where you can be targeted and exposed before you realize what’s going on.


Tom’s Take

Building a better mousetrap means you catch more mice. However, the ones that you don’t catch just get smarter and figure out how to avoid the bait. That’s the eternal game in security. You stamp out the low-level threats quickly but that means the ones that aren’t ensnared become more resistant to your efforts. You can’t assume every attack is going to be a sophisticated nation state attempt to steal classified info. You may just be the unlucky target of a smash-and-grab with stolen passwords. Don’t become a victim of your own success. Keep tightening the defenses and make sure you don’t wind up missing the likely while looking for the impossible.

The Essence of Cisco and Splunk

You no doubt noticed that Cisco bought Splunk last week for $28 billion. It was a deal that had been rumored for at least a year if not longer. The purchase makes a lot of sense from a number of angles. I’m going to focus on a couple of them here with some alliteration to help you understand why this may be one of the biggest signals of a shift in the way that Cisco does business.

The S Stands for Security

Cisco is now a premier security company. The addition of the most powerful SIEM on the market means that Cisco’s security strategy now has a completeness of vision. SecureX has been a very big part of the sales cycle for Cisco as of late and having all the parts to make it work top to bottom is a big win. XDR is a great thing for organizations but it doesn’t work without massive amounts of data to analyze. Guess where Splunk comes in?

Aside from some very specialized plays, Cisco now has an answer for just about everything a modern enterprise could want in a security vendor. They may not be number one in every market but they’re making a play for number two in as many places as possible. More importantly it’s a stack that is nominally integrated together to serve as a single source for customers. I’ll be the first person to say that the integration of Cisco software acquisitions isn’t seamless. However, when the SKUs all appear together on a bill of materials most organizations won’t look beyond that. Especially if there are professional services available to just make it work.

Cisco is building out their security presence in a big way. All thanks to a big investment in Splunk.

The S Stands for Software

When the pundits said that Cisco could never really transform themselves from a hardware vendor to a software company there was a lot of agreement. Looking back at the transition that Chuck Robbins has led since then what would you say now? Cisco has aggressively gone after software companies across the board to build up a portfolio of recurring revenue that isn’t dependent on refresh cycles or silicon innovations.

Software is the future for Cisco. Iteration on their core value products is going to increase their profit far beyond what they could hope to realize through continuing to push switches and routers. That doesn’t mean that Cisco is going to abandon the hardware market. It just means that Cisco is going to spend more time investing in things with better margins. The current market for software subscriptions and recurring licensing revenue is hot and investors want to see those periodic returns instead of a cycle-based push for companies to adopt new technologies.

What makes more sense to you? Betting on a model where customers need to pay per gigabyte of data stored, or on a technology which may be dead on the vine? Taking the nerd hat off for a moment means you need to see the value that companies want to realize, not the hope that something is going to be big in the future. Hardware will come along when the software is ready to support it. Blu-Ray didn’t win over HD-DVD because it was technically superior. It won because Sony supported it and convinced Disney to move their media onto it exclusively.

Software is the way forward for Cisco. Software that provides value for enterprises and drives upgrades and expansion. The hardware itself isn’t going to pull more software onto the bottom line.

The S Stands for Synergy

The word synergy is overused in business vernacular. Jack Welch burned us out on the idea that we can get more out of things together by finding hidden gems that aren’t readily apparent. I think that the real value in the synergy between Cisco and Splunk can be found in the value of creating better code and better programmers.

A bad example of synergy is Cisco’s purchase of Flip Video. When it became clear that the market for consumer video gear wasn’t going to blow up quite like Cisco had hoped they pivoted to talk about using the optics inside the cameras to improve video quality for their video collaboration products. Which never struck me as a good argument. They bet on something that didn’t pay off and had to salvage it the best they could. How many people went on to use Cisco cameras outside of the big telepresence rooms? How many are using them today instead of phone cameras or cheap webcams?

Real synergy comes when the underlying processes are examined and improved or augmented. Cisco is gaining a group of developers and product people that succeed based on the quality of their code. If Splunk didn’t work it wouldn’t be a market leader. The value of what that team can deliver to Cisco across the organization is important. The ongoing critiques of Cisco’s code quality have created a group of industry professionals that are guarded when it comes to using Cisco software on their devices or for their enterprises. I think adding the expertise of the Splunk teams to that group will go a long way toward helping Cisco write more stable code in the long term.


Tom’s Take

Cisco needed Splunk. They needed a SIEM to compete in the wider security market and rather than investing in R&D to build one they decided to pick up the best that was available. The long courtship between the two companies finally paid off for the investors. Now the key is to properly integrate Splunk into the wider Cisco Security strategy and also figure out how to get additional benefits from the development teams to offset the big purchase price. The essence of those S’s above is that Cisco is continuing their transformation away from a networking hardware company and becoming a more diversified software firm. It will take time for the market to see if that is the best course of action.

Wi-Fi 6E Won’t Make a Difference

It’s finally here. The vaunted day when the newest iPhone model has Wi-Fi 6E. You’d be forgiven for missing it. It wasn’t mentioned as a flagship feature in the keynote. I had to unearth it in the tech specs page linked above. The trumpets didn’t sound heralding the coming of a new paradigm shift. In fact, you’d be hard pressed to find anyone that even cares in the long run. Even the rumor mill had moved on before the iPhone 15 was even released. If this is the technological innovation we’ve all been waiting for, why does it sound like no one cares?

Newer Is Better

I might be overselling the importance of Wi-Fi 6E just a bit, but that’s because I talk to a lot of wireless engineers. More than a couple of them had said they weren’t even going to bother upgrading to the new USB-C wonder phone unless it had Wi-Fi 6E. Of course, I didn’t do a survey to find out how many of them had 6E-capable access points at home, either. I’d bet the number was 100%. I’d also be willing to bet that the share of people outside of that sphere looking to buy an iPhone 15 Pro who can tell me whether they have a 6E-capable access point at home is much, much lower.

The newest flagship device has cool stuff. Better cameras, faster processor, more RAM, and even titanium! The reasons to upgrade are legion depending on how old your device is. Are you really ready to sink it all because of a wireless chipset design? There are already a number of folks saying they won’t upgrade their amazing watch because Apple didn’t make it black this year. Are the minor technical achievements really deal breakers in the long run?

The fact of the matter is that the community of IT pros outside of the wireless space don’t actually care about the wireless chipset in their phone. Maybe it’s faster. Maybe it’s cooler. It could even be more about bragging rights than anything else. However, just like the M1 MacBook Wi-Fi, the real-world results are going to be a big pile of “it depends”. That’s because organizations don’t make buying decisions based on consumer tech.

Sure, the enterprise may have been pushed in certain directions in the past due to the adoption of smart phones. Go into any big box store and see how the employees are using phones instead of traditional scanners for inventory management. Go into your average bank or hospital and ask the CIO what their plans are to upgrade the wireless infrastructure to support Wi-Fi 6E now that Apple supports it across the board on their newest devices. I bet you get a very terse answer.

Gen Minus One

The buying patterns for enterprise IT don’t support bleeding edge technology. That’s because most enterprises don’t run on the bleeding edge. Their buying decisions are informed by the installation base of their users, not on their projected purchases. Enterprises aren’t going to take a risk on buying something that isn’t going to provide benefit for the investment. Trying to provide that benefit for a small number of users is even more suspect. Why spend big bucks for a new access point that a tenth of my workforce can properly use?

Buying decisions and deployment methodology follow a timeline that was decided upon months ago, even for projects that come up out of the blue. If you interview your average CIO with a good support team they can tell you how old their devices are, what order they are planned to be replaced, and roughly how much that will cost today. They have a plan ready to plug in when the executive team decides there is budget to spend. Strike while the funding iron is hot!

To upend the whole plan because some new device came out is not an easy sell to the team. Especially if it means reducing the number of devices that can be purchased because the newer ones cost more. If anything it will encourage the teams to hold on to that particular budget until the prices of those cutting edge devices falls to a point where they are more cost effective for a user base that has refreshed devices and has a need for faster connectivity.

Wi-Fi 6E suffers from a problem common to IT across the board. It’s not exciting enough to be important. The current generation of devices can utilize the connectivity it provides efficiently. The airspace in an enterprise is certainly crowded enough to need new bands for high performance devices to move into. But does the performance of Wi-Fi 6E create such a gap as to make it a “must have” in the budget? What would you be willing to sacrifice to get it? And would your average user notice the difference? If you can’t say for certain that incremental improvement will make that much of a difference for the non-wireless savvy person then you’re going to find yourself waiting for the next revision of the standard. Which, sadly, has the benefit of having a higher number. Which means it’s obviously better, right?


Tom’s Take

I like shiny new things. I didn’t upgrade my phone this year because my older one is good enough for my use case. If I were to rank all the reasons why I wanted to upgrade I’d put Wi-Fi 6E near the bottom of the list. It’s neat. I like the technology behind it. For the average CIO it doesn’t move the needle. It doesn’t have an impressive pie chart or cost savings associated with it. If you upgraded everyone to Wi-Fi 6E overnight no one would notice. And even if they did they’d be asking when Wi-Fi 7 was coming out because that one is really cool, even if they know zero about what it does. Wi-Fi 6E on a mobile device won’t matter in the long run because the technology isn’t cool enough to be noticed by people that aren’t looking for it.

Overcoming the Wall

I was watching a Youtube video this week that had a great quote. The creator was talking about sanding a woodworking project and said something about how much it needed to be sanded.

Whenever you think you’re done, that’s when you’ve just started.

That statement really resonated with me. I’ve found that it’s far too easy to think you’re finished with something right about the time you really need to hunker down and put in extra effort. In running they call it “hitting the wall” and it usually marks the point when your body is out of energy. There’s often another wall you hit mentally before you get there, though, and that’s the one that needs to be overcome with some tenacity.

The Looming Rise

If your brain is like mine you don’t like belaboring something. The mind craves completion and resolution. Once you’ve solved a problem it’s done and finished. No need to continue on with it once you’ve reached a point where it’s good enough. Time to move on to something else that’s new and exciting and a source of dopamine.

However, that feeling of being done with something early on is often a false sense of completion. I learned that the hard way when I was studying for my CCIE. Every question has an answer. Some questions have a couple of different answers. However, knowing the correct answer isn’t the same as knowing all the incorrect answers. Why would I want to take the time to learn all the wrong things instead of just learning what’s right and moving on to the next topic?

The reason to keep going even after you know what’s right is to recognize what the wrong thing looks like. When studying you’re often confronted with suboptimal situations or, especially with the CCIE, put into positions where you can make mistakes that will lead to disaster if you don’t recognize the pitfalls early. Maybe it’s creating a routing loop. It could be a choice between two methods of configuration that really only has one correct answer if you know why the other one will cause problems.

Persevering through that mental wall that says “you’ve done enough” is important because the extra value you gain when you do is critical to understanding the myriad ways that something can be broken. It’s not enough to know it’s not right. You have to recognize what isn’t right about it. That kind of understanding can come from practical experience, like making the mistake, or through careful study in controlled situations like learning all the wrong ways to work the problem.

The Challenging Ascent

Getting over that wall isn’t easy. Your brain doesn’t want to struggle past the right way to do things. It craves challenge and novelty. You’re going to have to work against your better nature to get to a point where you’re past the wall. Don’t be afraid to lie to yourself to get where you need to be.

When running I will trick myself when I hit my mental wall by saying “one more song” or “one more block” when I’m ready to give up. The idea that I can make it a short distance or short amount of time is comforting to my brain when it wants to stop. And by tricking it I can often push a little harder to another song or two more blocks before I get completely over the wall and have the mental toughness to continue.

Likewise, when you’re studying and you’ve found the correct answer you need to push yourself to find one incorrect way at first. Maybe a second. If it’s something that has configurable settings you should investigate a few wrong values to figure out what happens when things are outside of bounds or when they’re just a little bit off. Maybe convince yourself to figure out two or three and write down the results. If one of them ends up being really interesting it could spark you to do more investigation to find out what caused that particular outcome.

You’ll find that you can get past your mental blocks much easier with tricks like that. More importantly, you’ll also find that you can get them to pop up faster and be overcome with less effort as you understand when they happen. If you’ve ever sat down to study something and your brain immediately wants to give up you know that the wall is right in front of you. How you overcome it can mean the difference between truly understanding a topic and just knowing enough about the answer to regurgitate it later.


Tom’s Take

As always, your mileage may vary with skills like these. I’d wager that most people do hit a wall whether it’s running or doing math or studying the intricacies of how OSPF works over non-broadcast networks. Don’t settle for your brain telling you that you’re done. Seek to really put in the work and understand what’s going on. Write everything down so you know what you’ve discovered. And when that wall seems like it’s too high to climb just whisper to yourself you’re going to climb another foot. And then another. And pretty soon you’ll be over and in the clear.

Networking Is Fast Enough

Without looking up the specs, can you tell me the PHY differences between Gigabit Ethernet and 10GbE? How about 40GbE and 800GbE? Other than the numbers being different do you know how things change? Do you honestly care? Likewise for Wi-Fi 6, 6E, and 7. Can you tell me how the spectrum changes affect you or why the QAM changes are so important? Or do you want those technologies simply because the numbers are bigger?

The more time I spend in the networking space the more I realize that we’ve come to a comfortable point with our technology. You could call it a wall, but that carries negative connotations. Most of our end-user Ethernet connectivity is gigabit. Sure, there are the occasional 10GbE cards for desktop workstations that do lots of heavy lifting for video editing or more specialized workflows like medical imaging. The rest of the world has old-fashioned 1000Mb connections based on 802.3z, ratified back in 1998.

Wireless is similar. You’re probably running on a Wi-Fi 5 (802.11ac) or Wi-Fi 6 (802.11ax) access point right now. If you’re running on 11ac you might even be connected using Wi-Fi 4 (802.11n) if you’re running in 2.4GHz. Those technologies, while not quite as old as GigE, are still prevalent. Wi-Fi 6E isn’t really shipping in quantity due to FCC restrictions on outdoor use, and Wi-Fi 7 is a twinkle in hardware manufacturers’ eyes right now. Why aren’t we clamoring for more, faster, better, stronger all the time?

Speedometers

How fast can your car go? You might say you’ve had it up to 100 mph or above. You might take a look at your speedometer and say that it can go as high as 150 mph. But do you know for sure? Have you really driven it that fast? Or are you guessing? Would you be shocked to learn that even in Germany, where the Autobahn has an effectively unlimited speed limit, cars are often limited to 155 mph? Even though the speedometer may go higher the cars are limited through an agreement for safety reasons. Many US vehicles are also speed limited between 110 and 140 mph.

Why are we restricting the speeds for these vehicles? Safety is almost always the primary concern, driven by the desire of insurance companies to limit claims and reduce accidents. However, another good reason is also why the Autobahn has a higher effective speed limit: road conditions. My car may go 100 mph but there are very few roads in my part of the US that I would feel comfortable going that fast on. The Autobahn is a much better road surface for driving fast compared to some of the two-lane highways around here. Even if the limit was higher I would probably drive slower for safety reasons. The roads aren’t built for screaming speeds.

That same analogy applies to networking. Sure, you may have a 10GbE connection to your Mac Mini and you may be moving gigs of files back and forth between machines in your local network. What happens if you need to upload it to Youtube or back it up to cloud storage? Are you going to see those 10GbE speeds? Or are you going to be limited to your ISP’s data rates? The fastest engine can only go as fast as the pathways will permit. In essence, that hot little car is speed limited because of the pathway the data takes to the destination.
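That bottleneck logic is simple enough to write down: end-to-end throughput is the minimum of every hop in the path. A minimal sketch with hypothetical link speeds and file size:

```python
def transfer_minutes(file_gb: float, *link_gbps: float) -> float:
    """Transfer time when the slowest hop in the path sets the pace."""
    bottleneck = min(link_gbps)            # the slowest link wins
    seconds = (file_gb * 8) / bottleneck   # GB -> gigabits, divided by Gb/s
    return seconds / 60

video_gb = 50  # a chunky video project
print(f"LAN only (10GbE):          {transfer_minutes(video_gb, 10):.1f} min")
print(f"Through a 300Mb ISP link:  {transfer_minutes(video_gb, 10, 0.3):.1f} min")
```

Under a minute on the LAN, over twenty through the ISP. The 10GbE card never gets a say once the upload leaves the building.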

There’s been a lot of discussion in the space about ever-increasing connectivity from 400GbE to 800GbE and soon even into the terabit range. But most of it is specialized for AI workloads or other massive elephant flows that are delivered via a fabric. I doubt an ISP is going to put in an 800GbE cross connect to increase bandwidth for consumers any time soon. They won’t do it because they don’t need to. No consumer is going to be running quite that fast.

Likewise, increasing speeds on wireless APs to more than gigabit speeds is silly unless you want to run multiple cables or install expensive 10GbE cards that will require new expensive switches. Forgetting Multigig stuff for now, you’re not going to be able to plug a 10GbE AP into an older switch and get the same performance levels. And most companies aren’t making 10GbE campus switches. They’re still making 1GbE devices. Clients aren’t topping out their transfer rates over wireless. And even if they did they aren’t going to be going faster than the cable that plugs the AP into the rest of the network.

Innovation Idling

It’s silly, right? Why can’t we make things go faster?!? We need to use these super fast connections to make everything better. Yet somehow our world works just fine today. We’ve learned to work with the system we have. Streaming movies wouldn’t work on a dial-up connection but adding 10GbE connections to the home won’t make Netflix work any faster than it does today. That’s because the system is optimized to deliver content just fast enough to keep your attention. If the caching servers or the network degrades to the point where you have to buffer, your experience is poor. But so long as the client is getting streaming data ahead of you consuming it you never know the difference, right?
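A crude model makes the point: as long as the delivery rate stays at or above the playback rate, the buffer never runs dry and the viewer never notices. The rates here are hypothetical, though a 4K stream is commonly cited at around 25 Mbps:

```python
def stalls(delivery_mbps: float, playback_mbps: float = 25, seconds: int = 120) -> int:
    """Count seconds of visible rebuffering in a crude per-second model."""
    buffer_mb = 0.0
    stall_count = 0
    for _ in range(seconds):
        buffer_mb += delivery_mbps / 8      # megabits arriving -> megabytes buffered
        if buffer_mb >= playback_mbps / 8:
            buffer_mb -= playback_mbps / 8  # one second of playback consumed
        else:
            stall_count += 1                # buffer ran dry: the spinner appears
    return stall_count

print(stalls(delivery_mbps=30))   # 0 -- anything above 25 Mbps looks identical
print(stalls(delivery_mbps=20))   # > 0 -- the path, not your LAN, is the problem
```

Past the playback rate, extra bandwidth changes nothing the viewer can see. That’s why a 10GbE connection to the couch is wasted money.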

Our networks are optimized to deliver data to clients running on 1GbE. Without a massive change in the way that workloads are done in the coming years we’re never going to be faster than that. Our software programs might be more optimized to deliver content within that framework but I wouldn’t expect to see 10GbE become a huge demand in client devices. Frankly, we don’t need that much speed. We don’t need to run flat out all the time. Just like a car engine we’re more comfortable running at a certain safe speed that preserves our safety and the life of the equipment.


Tom’s Take

Be honest with yourself. Do you want 10GbE or Wi-Fi 7 because you actually need the performance? Or do you just want to say you have the latest and greatest? Would you pay extra for a V12 engine in a sports car that you never drive over 80 mph? Just to say you have it? Ironically enough, this is the same issue that cloud migrations face today. We buy more than we need and never use it because we don’t know what our workloads require. Instead, we buy the fastest biggest thing we can afford and complain that something is holding it back. Rather than rushing out to upgrade your Wi-Fi or Ethernet, ask yourself what you need, not what you want. I think you’ll realize the network is fast enough for the foreseeable future.

Argument Farming

The old standard.

I’m no stranger to disagreement with people on the Internet. Most of my popular posts grew from my disagreement with others around things like being called an engineer, being a 10x engineer, and something about IPv6 and NAT. I’ve always tried to explain my reasoning for my positions and discuss the relevant points with people that want to have a debate. I tend to avoid commenting on people that just accuse me of being wrong and tell me I need to grow up or work in the real world.

Buying the Farm

However, I’ve noticed recently that there have been some people in the realm of social media and influencing that have taken to posting so-called hot takes on things solely for the purpose of engagement. It’s less of a discussion and more of a post that outlines all the reasons why a particular thing that people might like is wrong.

For example, it would be like me posting something about how an apple is the dumbest fruit because it’s not perfectly round or orange or how the peel is ridiculous because you can eat it. While there are some opinions and points to be made, the goal isn’t to discuss the merits of the fruit hierarchy. Instead, it’s designed to draw in people that disagree to generate comments about how apples are, in fact, good fruits and maybe if I tried one some time I would understand. In this example, I would reply to the comment with something along the lines of “thanks for your perspective” or maybe even a flippant question about why you think that way to keep the chain going.

I’ve found that this is very prevalent on platforms that reward engagement over content. Facebook and LinkedIn chiefly spring to mind. The content of the message isn’t as important as how people react to it. The reward isn’t a well-reasoned discussion. It’s people sharing your post and telling you how stupid you are for making it. Or trying to change your mind.

Except I know what I’m doing. I may not even have strongly held beliefs on my post. I may even prefer apples to oranges. The point is to get you all in an uproar and make you drive my post to the top of someone’s feed. A contrarian way to look at things for sure. But it works. Because we’ve rewarded people for making a splash instead of making a case.

Crop Rotation

In the 10x engineer post I linked above, I had no intention of it blowing up. I noticed some things that irked me about the culture we’ve created around the people that do a lot and how we worship their aura without examining the downsides. Naturally, that meant that it got picked up on Hacker News and there were a raft of comments about how I was an idiot and how I’d get fired if I worked for a “real” company because I wasn’t pulling my weight.

I was horrified, to say the least. I didn’t want that kind of engagement. I wanted a reasoned discussion. I wanted people to see my points and engage in debate. I certainly wasn’t trying to specifically craft a post with a contrarian viewpoint explicitly designed to incense the community to drive them to my page or blog. Yet that is exactly how I’m seeing some members of the wider community acting today. The clicks are more important than the words. And if you end up being proven wrong? So be it. Whoops. On to the next hot take!

I wish I had a better method for dealing with this new angle other than just ignoring it. If it’s someone with a legitimately bad viewpoint that could use some guidance or education I am happy to chip in and provide a different perspective. However, there’s a difference between the occasional post and constant engagement farming for arguments in the comments to drive your view counts higher. The latter is disingenuous. Disagreeing with something is one thing. Writing 400 words about how it’s the “worst mistake you can make” or how “you should think about what that will mean for your career” is a bit heavy handed. And yes, I’ve seen both of those statements in recent months about something as innocuous as a training class.


Tom’s Take

Healthy disagreement and debate makes us improve. Honest mistakes happen and can be corrected. I have no issue with either of these, even if both sides will never agree. What I take issue with is people being deliberately disingenuous to manipulate algorithms or manufacture outrage for their own ends. I always come back to a simple question: Are you doing this to solve a problem? Or to become popular? If the answer is the latter it might be time to put down the plow and ask yourself if the crop you’re sowing is worth it.

Changing Diapers, Not Lives

When was the last time you heard a product pitch that included words like paradigm shift or disruptive or even game changing? Odds are good that covers the majority of them. Marketing teams love to sell people on the idea of radically shifting the way that they do something or revolutionizing an industry. How often do you feel that companies make something that accomplishes the goal of their marketing hype? Once a year? Once a decade? Of the things that really have changed the world, did they do it with a big splash? Or was it more of a gradual change?

Repetition and Routine

When children are small they are practically helpless. They need to be fed and held and have their diapers changed. Until they are old enough to move and have the motor functions to feed themselves they require constant care. In fact, potty training is usually one of the last things on that list above. Kids can feed themselves and walk places and still be wearing diapers. It’s just one of those things that we do as parents.

Yet, changing diapers represents a task that we usually have no issue with. Sure it’s not the most glamorous work. But it’s necessary. Children can’t do it themselves. Maybe they can take off a wet or soiled diaper on their own (my kids did on occasion), but they can’t quite put one on. We encourage them to conform to the societal norm of using a bathroom instead of using a disposable diaper.

I use changing diapers as a metaphor for something we do regularly that is thankless but necessary. Kids never thank you for changing their diapers when they get older but it needs to be done. You may not think it’s a life-changing experience at the time but you know it’s one small part of what needs to happen to make them better as people later on. As a company that is trying to change people’s lives with the products you’re selling you often aim toward the sky. You want a utopia of flying cars and automated homes and AI-driven everything. But do your customers want that?

Your customers don’t want self-driving cars. They want to not have to spend their time driving. They don’t want AI-powered dinner ordering. They want to not have to make dinner decisions. Your customers don’t want a magical dashboard that makes automatic configuration changes for them. They want to operate their systems without constant attention to every little detail to keep them from falling apart. They don’t want revolutionary. They want relief.

Aim Small, Miss Small

If your first thought when building a product is “we’re going to change the world!” then you need to step back because you missed the target. One of the smartest things I overheard regarding startups was “Don’t solve a problem. Solve a problem someone has every day.” People are so focused on making an impact and revolutionizing the world that they often miss the opportunity to do something that really does change things by simply solving common problems that happen all the time.

When you go back to your vision, think about changing diapers, not lives. Think about solving the problems people have every day. Take network automation, for example. You’re not going to create a paradigm shifting organizational restructuring in a day or a week or even a year. What you can do is automate things like password changes or switch deployments. You can solve that everyday problem so there is more time to work on other things. You can remove errors and create responsiveness where it didn’t exist before. Sure, your Ansible script that provisions a switch isn’t going to get your name etched in stone in Silicon Valley. But it can lead to changes in the organization that create efficiency and make your team happier and more focused on solving other hard problems.
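In that spirit, here’s a minimal sketch of the switch-provisioning idea: render a baseline config from a template instead of hand-typing it per closet. The template, hostname, and VLAN values are hypothetical, not any vendor’s recommended configuration, and a real shop might reach for Ansible or Jinja2 to do the same job:

```python
from string import Template

# Hypothetical baseline for an access switch; adjust to taste.
BASELINE = Template("""\
hostname $hostname
!
vlan $user_vlan
 name USERS
!
interface range $access_ports
 switchport access vlan $user_vlan
 switchport mode access
 spanning-tree portfast
""")

def render_switch_config(hostname: str, user_vlan: int, access_ports: str) -> str:
    """Fill the baseline template for one switch."""
    return BASELINE.substitute(
        hostname=hostname, user_vlan=user_vlan, access_ports=access_ports
    )

print(render_switch_config("idf3-sw01", 110, "Gi1/0/1-48"))
```

Twenty lines of template beats twenty hand-typed configs with twenty chances for a typo. That’s a changed diaper, not a changed life, and that’s the point.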

Likewise, if you tell someone your product is going to change their life they will probably laugh at you or shake their head in disbelief. After all, everything promises to change their lives. However, if you tell them your product will solve a specific issue they have then they are very likely to take you up on it. Your target market will identify what you do and respond positively. Rather than trying to boil an ocean with hype you’re providing clear messaging on what you can do and how it can help. People want that clarity over hype.


Tom’s Take

If you try to promise me a life-changing experience with an app or a piece of hardware I’m going to make sure you understand what that means and what it takes. On the other hand, if you come to me with a proposal to change something I dislike doing every day or simplifying it in some way I’m more likely to listen to your pitch. Changing lives is hard. Changing diapers is not fun but it is necessary and repetitive. Focus on the small things and make those easier to do before you take on the rest of the world. Your customers will be happier and you will too.