Production Reductions

You’ve probably noticed that I haven’t been writing quite as much this year as I have in years past. I finally hit the wall that comes for all content creators. A combination of my job and the state of the industry meant that I found myself slipping off my self-appointed weekly posting schedule more and more often in 2023. In fact, there were several times I skipped a whole week and only put something out every other week, especially in the latter half of the year.

I’ve always wanted to keep the content level high around here and give my audience things to think about. As the year wore on I found myself running out of those ideas as portions of the industry slowed down. If other people aren’t getting excited about tech, why should I? Sure, I could probably write about Wi-Fi 7 or SD-WAN or any number of topics over and over again, but it’s harder to repeat yourself for an audience that casts a critical eye on your writing than it is for someone that just wants to churn out material.

My Bruce Wayne job kept me busy this year. I’m proud of all the content that we created through Tech Field Day and Gestalt IT, especially things like the weekly Rundown show. Writing a post every week is hard. Writing a snarky news show script is just as taxing. If I can find a way to do that I can find a way to write, right?

Moving Targets

Alas, in order to keep up with content creation you have to make a plan and then stick to it. I did that last year with my Tomversations pieces and it succeeded. This year? I managed to make one. Granted, it was a good one but it was still only one. Is it because I didn’t plan ahead far enough? Or because I didn’t feel like I had much to say?

Part of the secret behind writing is to jot down your ideas right away, no matter how small they might be. You can develop an idea that has merit. You can’t develop a lack of an idea. I have a note where I add quotes and suggestions and random things that I overhear that give me inspiration. Sometimes those ideas pan out. Other times they don’t. I won’t know either way if I don’t write them down and do something about them. If you don’t create the ground for your ideas to flourish you’ll have nothing to reap when it’s time.

The other thing that causes falloffs in content creation is timing. I always knew that leaving my posts until Friday mornings was going to eventually bite me and this year was the year with teeth. Forcing myself to come up with something in a couple of hours’ time initially led to some pretty rushed ideas and later pushed posts into the following Monday (or beyond). While creating a schedule for my thoughts has helped me stay consistent throughout the years, the pressures on my schedule this year have meant letting some things slip when they weren’t critical. It’s hard to prioritize a personal post over a work video that needs to be edited or a paper that needs to be written first.

One other thing that I feel merits some mention is the idea of using tools to help the creative process. I am personally against using a GPT algorithm to write for me. It just doesn’t sound like me and I feel that having something approximating who I am doesn’t have the same feel. Likewise, one of the other things I’ve been fighting with this year is word prediction in writing tools. It’s not as bad as full-on content creation, merely “suggestions” about what word I want to use next. I’ve disabled them for the most part because, while helpful in certain situations, they annoy me more than anything when writing. Seeing a tool suggest a word for me while I’m in the flow of writing a post is like hearing a note a half step out of tune in a piece of music. It’s just jarring enough to take you out of the whole experience. Stop trying to anticipate what I’m going to say and let me say it!

Producing Ahead

Does all this mean I’m giving up on my writing? Not hardly. I still feel like writing is my best form of communication. Even a simple post complaining about my ability to write this year is going to be wordy. I feel it’s because written words give us more opportunity to work at our own pace. When we watch videos we work at someone else’s idea of a learning pace. If you make a ten-minute video to get across a point that could have been read in three minutes you’re either doing a very good job of explaining everything or you’re padding out your work. I prefer to skim, condense, and study the parts that are important to me. I can’t really do that with a video.

I feel the written form of content is still going to be king for years to come. You can search words. You can rephrase words. You can get a sense for how dense a topic is by word count. There’s value in seeing the entire body of knowledge in front of you before you begin. Besides, hitting the backspace key is a whole lot easier than doing another take and remembering to edit out the bad one afterward.


Tom’s Take

Writing is practically meditation for me at this point. I can find a topic I’m interested in and write. Empty my brain of thoughts and ideas and let them take shape here. AI can’t approximate that for me. Video has too many other variables to worry about. That’s why I’m a writer. I love the way the process works with just a keyboard, a couple of references, and my brain doing the heavy lifting. I’m not sure what my schedule for posting is going to look like in 2024 and beyond but trust me when I say it’s not going away any time soon.

Routing Through the Forest of Trees

Some friends shared a Reddit post the other day that made me both shake my head and ponder the state of the networking industry. Here is the locked post for your viewing pleasure. It was locked because the comments were going to devolve into a mess eventually. The person making the comment seems to be honest and sincere in their approach to “layer 3 going away”. The post generated a lot of amusement from the networking side of IT about how this person doesn’t understand the basics but I think there’s a deeper issue going on.

Trails To Nowhere

Our visibility of the state of the network below the application interface is very general in today’s world. That’s because things “just work,” to borrow an overused phrase. Aside from the occasional troubleshooting exercise to find out why packets destined for Azure or AWS are failing along the way, when is the last time you had to get really creative in finding a routing issue in someone else’s equipment? We spend more time now trying to figure out how to make our own networks operate efficiently and less time worrying about what happens to the packets when they leave our organization. Provided, of course, that the users don’t start complaining about latency or service outages.

That means that visibility of the network functions below the interface of the application doesn’t really exist. As pointed out in the post, applications have security infrastructure that communicates with other applications and everything is nicely taken care of. Kind of like ordering packages from your favorite online store. The app places the order with a storefront and things arrive at your house. You don’t have to worry about picking the best shipping method or trying to find a storefront with availability or any of the older ways that we had to deal with weirdness.

That doesn’t mean that the processes that enable that kind of service are going away though. Optimizing transport networks is a skill that is highly specialized but isn’t a solved issue. You’ve probably heard by now that UPS trucks avoid left turns whenever possible to optimize safety and efficiency. The kind of route planning that needs to be done in order to eliminate as many left turns as possible from the route is massive. It’s on the order of a very highly specialized routing protocol. What OSPF and BGP are doing is akin to removing the “left turns” from the network. They find the best path for packets and keep it up to date as the information changes. That doesn’t mean the network is going away. It means we’re finding the most efficient route through it for a given set of circumstances. If a shipping company decides tomorrow that they can no longer guarantee overnight delivery or even two-day shipping, that would drastically change the nature of the applications and services built on that guarantee. The network still matters.
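
To make that analogy a little more concrete, here’s a minimal sketch in Python of the shortest-path-first calculation that a link-state protocol like OSPF runs against its view of the network. The topology, node names, and link costs are all made up for illustration; this is the idea, not any vendor’s implementation.

```python
import heapq

# Hypothetical topology: node -> {neighbor: link cost}
# Roughly what a link-state database boils down to after flooding.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 7},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 7, "C": 3},
}

def shortest_paths(source):
    """Dijkstra's SPF: cheapest total cost from source to every other node."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

print(shortest_paths("A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

When a link cost changes, the updated information gets flooded and the calculation runs again, which is the “keep it up to date” part of the job.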

OSI Has to Die

The other thing that jumped out at me about the post was the title, which refers to Layer 3 of the OSI model as a routing function. The timing was fortuitous because I had just finished reading Robert Graham’s excellent treatise on getting rid of the OSI model and I couldn’t agree more with him. Confining routing and addressing functions to a single layer of an obsolete model gives people the wrong ideas. At the very least it encourages them to form bad opinions about those ideas.

Let’s look at the post as an example. Taking a stance like “we don’t need layer three because applications will connect to each other” is bad. So is “we don’t need layer two because all devices can just broadcast for the destination.” It’s wrong to say those things, but if you don’t know why they’re wrong then they don’t sound so bad. Why spend time standing up routing protocols if applications can just find their endpoints? Why bother putting higher order addresses on devices when the nature of Ethernet means things can just be found easily with a broadcast or neighbor discovery transmission? Except you know that’s wrong if you understand how remote networks operate and why having a broadcast domain of millions of devices would be chaos.

Graham has some very compelling points about relegating the OSI model to history and teaching how networks really operate. It helps people understand that there are multiple networks that exist at one time to get traffic to where it belongs. While we may see the Internet and Ethernet LAN as a single network they have different purposes. One is for local traffic delivery and the other is for remote traffic delivery. The closest analog for certain generations is the phone system. There was a time when you had local calls and long distance calls that required different dialing instructions. You still have that distinction today but it’s less noticeable thanks to mobile devices not requiring long distance dialing instructions.

It might be more appropriate to think of the local/remote dichotomy like a private branch exchange (PBX) phone network. Phones inside the PBX have locally significant extensions that have no meaning outside of the system. Likewise, remote traffic can only enter the system through entry points created by administrators, like a main dial-in number that terminates on an extension or direct inward dial (DID) numbers that have significance outside the system. Extensions only matter for the local users and have no way to communicate outside without addressing rules. Outside addresses have no way of communicating into the local system without creating rules that allow it to happen. It’s a much better metaphor than the OSI model.
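
If you want to see that split expressed in code rather than metaphor, Python’s standard ipaddress module can tell you whether an address is only locally significant (RFC 1918 space, the “extension”) or globally routable (the “DID”). The addresses below are purely illustrative.

```python
import ipaddress

# Illustrative addresses: an RFC 1918 "extension" and a well-known public resolver
for addr in ("10.12.34.56", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    scope = "locally significant (private)" if ip.is_private else "globally routable"
    print(f"{addr}: {scope}")
```

Nothing in that 10.0.0.0/8 block means anything outside your own system until an administrator builds the equivalent of a DID, whether that’s a translation rule or some other explicit entry point.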


Tom’s Take

I don’t blame our intrepid poster for misunderstanding the way network addresses operate. I blame IT for obfuscating it because it doesn’t matter anymore to application developers. Sure, we’ve finally hit the point where the network has merged into a single entity with almost no distinction between remote WAN and local LAN. But we’ve also created a system where people forget the dependencies of the system at lower levels. You can’t encode signals without a destination and you can’t determine the right destination without knowing where it’s supposed to be. That’s true if you’re running a simple app in an RFC 1918 private space or the public cloud. Forgetting that little detail means you could end up lost in a forest, unable to route yourself out of it again.

Asking The Right Question About The Wireless Future

It wasn’t that long ago that I wrote a piece about how Wi-Fi 6E isn’t going to move the needle very much in terms of connectivity. I stand by my conviction that the technology is just too new and doesn’t provide a great impetus to force users to upgrade or augment systems that are already deployed. Thankfully, someone at the recent Mobility Field Day 10 went and did a great job of summarizing some of my objections in a much simpler way. Thanks to Nick Swiatecki for this amazing presentation:

He captured so many of my hesitations as he discussed the future of wireless connectivity. And he managed to expand on them perfectly!

New Isn’t Automatically Better

Any time I see someone telling me that Wi-Fi 7 is right around the corner and that we need to see what it brings I can’t help but laugh. There may be devices that have support for it right now, but as Nick points out in the above video, that’s only one part of the puzzle. We still have to wait for the clients and the regulatory bodies to catch up to the infrastructure technology. Could you imagine if we did the same thing with wired networks? If we deployed amazing new cables that ran four times the speed but didn’t interface with the existing Ethernet connections at the client? We’d be laughed out of the building.

Likewise, deploying pre-standard Wi-Fi 7 devices today doesn’t gain you much unless you have a way to access them with a client adapter. Yes, they do exist. Yes, they’re “final.” However, they’re no more final than the Draft 802.11n cards that I deployed years and years ago. That doesn’t mean that we’re going to see a lot of benefit from them, however. Because the value of the first generation of a technology is rarely leaps and bounds above what came before it.

A couple of years ago I asked if the M1 MacBook wireless was really slower than the predecessor laptop. Spoiler alert: it is, but not so much that you’d really notice. Since then we’ve gained two more generations of that hardware and the wireless has gotten faster. Not because the specs have changed in the standard. It’s because the manufacturers have gotten better about building the devices. We’ve squeezed more performance out of them instead of just slapping a label on the box and saying it’s a version number higher or it’s got more of the MHz things so it must be better.

Nick, in the above video, points this out perfectly. People keep asking about Wi-Fi 7 and they miss out on the fact that there’s a lot of technology that needs to run very smoothly in order to give us significant gains in speed over Wi-Fi 6 and Wi-Fi 6E. And those technologies probably aren’t going to be implemented well (if at all) in the first cards and APs that come off the line. In fact, given the history of 802.11 specifications those important features are probably going to be marked as optional anyway to ensure the specifications get passed on time to allow the shipping hardware to be standardized.

Even in a perfect world you’re going to miss a lot of the advances in the first revision of the hardware. I remember a time when you had to be right under the AP to see the speed increases promised by the “next generation” of wireless. Adding more and more advanced technology to the AP and hoping the client adapters catch up quickly isn’t going to help sell your devices any faster either. Everything has to work together to ensure it all runs smoothly for the users. If you think for a minute that they aren’t going to call you to tell you that the wireless is running slow then you’re very mistaken. They’re upset they didn’t get the promised speeds on the box or that something along the line is making their experience difficult. That’s the nature of the beast.

Asking the Right Questions

The other part of this discussion is how to ensure that everyone has realistic ideas about what new technology brings. For that, we recorded a great roundtable discussion about Wi-Fi 7 promises and reality:

I think the biggest takeaway from this discussion is that, despite the hype, we’re not ready for Wi-Fi 7 just yet. The key to having this talk with your stakeholders is to remind them that spending the money on the new devices isn’t going to automatically mean increased speeds or enhanced performance. In fact, you’re going to do a great job of talking them out of deploying cutting edge hardware simply by reminding them that they aren’t going to see anywhere near the speeds the vendors promise without investing even more in client hardware, or that those amazingly fast multi-spectrum speeds aren’t going to be possible on an iPhone.

We’re not even really touching on the reality that some of the best parts of 6GHz aren’t even available yet because of FCC restrictions. Or that we just assume that Wi-Fi 7 will include 6GHz when it doesn’t even have to. That’s especially true of IoT devices. Lower cost devices will likely have lower cost radios for components which means the best speed increases are going to be for the most expensive pieces of the puzzle. Are you ready to upgrade your brand new laptop in six months because a new version of the standard came out that’s just slightly faster?

Those are the questions you have to ask and answer with your stakeholders before you ever decide how the next part of the project is going to proceed. Because there is always going to be faster hardware or newer revisions of the specification for you to understand. And if the goalposts keep moving every time something new comes along you’re either going to be broke or extremely disappointed.


Tom’s Take

I’m glad that Nick from Cisco was able to present at Mobility Field Day. Not only did he confirm what a lot of professionals are thinking but he did it in a way that helped other viewers understand where the challenges with new wireless technologies lie. We may be a bit jaded in the wired world because Ethernet is such a bedrock standard. In the wireless world I promise that clients are always going to be getting more impressive and the amount of time between those leaps is going to shrink even more than it already has. The real question should be whether or not we need to chase that advantage.

AI Is Making Data Cost Too Much

You may recall that I wrote a piece almost six years ago comparing big data to nuclear power. Part of the purpose of that piece was to knock the wind out of the “data is oil” comparisons that were so popular. Today’s landscape is totally different now thanks to the shifts that the IT industry has undergone in the past few years. I now believe that AI is going to cause a massive amount of wealth transfer away from the AI companies and cause startup economics to shift.

Can AI Really Work for Enterprises?

In this episode of Packet Pushers, Greg Ferro and Brad Casemore debate a lot of topics around the future of networking. One of the things that Brad brought up, and that Greg pointed out as well, is that the data being used for AI algorithm training is being stored in the cloud. That massive amount of data is sitting there waiting to be used between training runs and it’s costing some AI startups a fortune in cloud costs.

AI algorithms need to be trained to be useful. When someone uses ChatGPT to write a term paper or ask nonsensical questions, they’re using the output of the GPT training run. The real work happens when OpenAI is crunching data and feeding their monster. They have to give it a set of parameters and data to analyze in order to come up with the magic that you see in the prompt window. That data doesn’t just come out of nowhere. It has to be compiled and analyzed.

There are a lot of content creators who are angry that their words are being fed into the GPT training runs and then used in the results without proper credit. That means that OpenAI is scraping content from the web and feeding it into the algorithm without much care for what they’re looking at. It also creates issues where the validity and the accuracy of the data isn’t verified ahead of time.

Now, this focuses on OpenAI and GPT specifically because everyone seems to think that’s AI right now. Much like every solution in the history of IT, GPT-based large language models (LLMs) are just a stepping stone along the way to greater understanding of what AI can do. The real value for organizations, as Greg pointed out in the podcast, can be something as simple as analyzing the trouble ticket a user has submitted and then offering directed questions to help clarify the ticket for the help desk so they spend less time chasing false leads.

No Free Lunchboxes

Where are organizations going to store that data? In the old days it was going to be collected in on-prem storage arrays that weren’t being used for anything else. The opportunity cost of using something you already owned was minimal. After all, you bought the capacity so why not use it? Organizations that took this approach decided to just save every data point they could find in an effort to “mine” for insights later. Hence the references to oil and other natural resources.

Today’s world is different. LLMs need massive resources to run. Unless you’re willing to drop several million dollars to build out your own cluster resources and hire engineers to keep them running at peak performance you’re probably going to be using a hosted cloud solution. That’s easy enough to set up and run. And you’re only paying for what you use. CPU and GPU time is expensive, so you want the job to complete as fast as possible in order to keep your costs low.

What about the data that you need to feed to the algorithm? Are you going to feed it from your on-prem storage? That’s way too slow, even with super fast WAN links. You need to get the data as close to the processors as possible. That means you need to migrate it into the cloud. You need to keep it there while the magic AI building machine does the work. Are you going to keep that valuable data in the cloud, incurring costs every hour it’s stored there? Or are you going to pay to have it moved back to your enterprise? Either way the sound of a cash register is deafening to your finance department and music to the ears of cloud providers and storage vendors selling them exabytes of data storage.
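
As a back-of-the-envelope sketch of why the finance department hears that cash register, here’s a quick calculation with completely made-up per-gigabyte prices (real rates vary by provider, region, and storage tier) comparing what it costs to leave a training corpus parked in cloud object storage versus pulling it back out over egress.

```python
# All prices below are hypothetical placeholders, not any provider's actual rates.
STORAGE_PER_GB_MONTH = 0.023   # $ per GB-month to keep data in object storage
EGRESS_PER_GB = 0.09           # $ per GB to move the data back to your own site

corpus_tb = 500                # hypothetical training corpus size
corpus_gb = corpus_tb * 1000   # using decimal TB for simplicity

monthly_storage = corpus_gb * STORAGE_PER_GB_MONTH
one_time_egress = corpus_gb * EGRESS_PER_GB

print(f"Keeping {corpus_tb} TB parked: ~${monthly_storage:,.0f} per month, every month")
print(f"Pulling {corpus_tb} TB back on-prem once: ~${one_time_egress:,.0f}")
```

Either bill keeps showing up for as long as you keep the data, which is exactly the opportunity cost the old “use the array you already bought” model never had.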

All those hopes of making tons of money from your AI insights are going to evaporate in a pile of cloud bills. The operations costs of keeping that data are now more than minimal. If you want to have good data to operate on you’re going to need to keep it. And if you can’t keep it locally in your organization you’re going to have to pay someone to keep it for you. That means writing big checks to the cloud providers that have effectively infinite storage, bounded only by the limit on your credit card or purchase order. That kind of wealth transfer makes investors a bit hesitant when they aren’t going to get the casino-like payouts they’d been hoping for.

The shift will cause AI startups to be very frugal in what they keep. They will either amass data only when they think their algorithm is ready for a run or keep only critical data that they know they’re going to need to feed the monster. That means they’re going to be playing a game with the accuracy of the resulting software as well as giving up the chance that some insignificant piece of data ends up being the key to a huge shift. In essence, the software will all start looking and sounding the same after a while and there won’t be enough differentiation to make it competitive because no one will be able to afford it.


Tom’s Take

The relative ease with which data could be stored turned companies into data hoarders. They kept it forever hoping they could get some value out of it and create a return curve that soared to the moon. Instead, the application for that data mining finally came along and everyone realized that getting the value out of the data meant investing even more capital into refining it. That kind of investment makes those curves much flatter and makes investors more reluctant. That kind of shift means more work and less astronomical payout. All because your resources were more costly than you first thought.

Does Automation Require Reengineering?

During Networking Field Day 33 this week we had a great presentation from Graphiant around their solution. While the whole presentation was great, and you should definitely check out the videos linked above, Ali Shaikh said something in one of the sessions that resonated with me quite a bit:

Automation of an existing system doesn’t change the system.

Seems simple, right? It belies a major issue we’re seeing with automation. Making the existing stuff run faster doesn’t actually fix our issues. It just makes them less visible.

Rapid Rattletraps

Most systems don’t work according to plan. They’re an accumulation of years of work that doesn’t always fit well together. For instance, the classic XKCD comic:

When it comes to automation, the idea is that we want to make things run faster and reduce the likelihood of error. What we don’t talk about is how each individual system has its own quirks and may not even be a good candidate for automation at any point. Automation is all about making things work without intervention. It’s also dependent on making sure the process you’re trying to automate is well-documented and repeatable in the first place.

How many times have you seen or heard of someone spending hours trying to script a process that takes about five minutes to do once or even twice a year? The return on the time invested in automating something like that doesn’t really make sense, does it? Sure, it’s cool to automate everything but it’s not really useful, especially if the task changes every time it’s run and requires you to change the inputs. It’s like building a default query for data that needs to be rewritten every time the query is run.

You’re probably laughing right now but you also have at least one or two things that would fit this bill. Rather than asking if you should be automating this task you should instead be asking why you’re doing it in the first place. Why are we looking to accomplish this goal if it only needs to be done on occasion? Is it something critical like a configuration backup? Or maybe just a sanity check to see that unused switch ports have been disabled or tagged with some kind of security configuration? Are you trying to do the task for safety or security? Or are you doing it for busywork purposes?

Streamlining the System

In all of those cases we have to ask why the existing system exists. That’s because investing time and resources into automating a system can result in a big overrun in budget when you run into unintended side effects or issues that weren’t documented in the first place. Nothing defeats an automation project faster than hitting roadblocks out of nowhere.

If you shouldn’t invest time in automating something that is already there, what should you do instead? How about reengineering the whole process? If you occasionally run configuration backups to make sure you have good copies of the device configurations, why not institute change controls or rolling automatic backups? Instead of solving an existing problem with a script why shouldn’t you change the way you do things in a manner that might have other hidden benefits? If you’re scripting changes to ports to verify security status why not have a system in place that creates the configuration on those ports when they’re provisioned and requires change controls to enable them?
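
As an illustration of what a “rolling automatic backup” might look like instead of a run-it-when-you-remember script, here’s a minimal sketch using the Netmiko library. The device list, credentials, and paths are placeholders; a real version would pull its inventory from a source of truth, keep secrets in a vault, and run from a scheduler.

```python
from datetime import datetime
from pathlib import Path

from netmiko import ConnectHandler  # pip install netmiko

# Placeholder inventory; in practice this comes from your source of truth.
DEVICES = [
    {"device_type": "cisco_ios", "host": "192.0.2.11", "username": "backup", "password": "changeme"},
    {"device_type": "cisco_ios", "host": "192.0.2.12", "username": "backup", "password": "changeme"},
]

BACKUP_DIR = Path("config-backups")

def backup_device(device: dict) -> Path:
    """Pull the running config and write it to a timestamped file."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    conn = ConnectHandler(**device)
    try:
        config = conn.send_command("show running-config")
    finally:
        conn.disconnect()
    BACKUP_DIR.mkdir(exist_ok=True)
    outfile = BACKUP_DIR / f"{device['host']}-{stamp}.cfg"
    outfile.write_text(config)
    return outfile

if __name__ == "__main__":
    for dev in DEVICES:
        print(f"Saved {backup_device(dev)}")
```

Run something like this from cron or your orchestration platform on whatever cadence your change controls call for, and the “did we back that up?” question goes away.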

It feels like extra work. It always seems easier to jump in from the bottom up with both feet and work on a problem until you solve it. Top down means you’re changing the way the system does things instead so the problems either disappear or change to something more manageable. The important question to ask is “where are my resources best spent?” If you see your time as a resource to invest in projects are you better served making something existing work slightly faster? Or would it be better for you to take the time to do something in a different, potentially better way?

If you believe your process is optimized as much as possible and just needs to run on its own that makes for an easy conversation. But if you’re thinking you need to change the way you do things this is a great time to make those changes and use your time investment to do things properly this time around. You may have to knock down a few walls to get there but it’s way better than building a house of cards that is just going to collapse faster.


Tom’s Take

I’m a fan of automation. Batch files and scripting and orchestration systems have a big place in the network to reduce error and multiply the capabilities of teams. Automation isn’t a magic solution. It requires investment of time and effort and a return for the stakeholders to see value. That means you may need to approach the problem from a different perspective to understand what really should be done instead of just doing the same old things a little faster. Future you will thank you for reengineering today.

Victims of Success

It feels like the cybersecurity space is getting more and more crowded with breaches in the modern era. I joke on our weekly Gestalt IT Rundown news show that we could include a breach story every week and still not cover them all. Even Risky Business can’t keep up. However, the defenders seem to be gaining on the attackers and that means the battle lines are shifting again.

Don’t Dwell

A recent article from The Register noted that dwell times for detection of ransomware and malware have dropped by almost a full day in the last year. Dwell time is especially important because detecting the ransomware early means you can take preventative measures before it can be deployed. I’ve seen all manner of early detection systems, such as data protection companies measuring the entropy of data-at-rest to determine when it is no longer able to be compressed, meaning it has likely been encrypted and should be restored.
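
If you’re curious what “measuring the entropy of data-at-rest” can look like under the hood, here’s a minimal sketch that computes the Shannon entropy of a file’s bytes. Encrypted or already-compressed data sits close to the 8 bits-per-byte ceiling, while ordinary documents and configs score noticeably lower. The threshold here is an illustrative guess, not a tuned value from any product.

```python
import math
import sys
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte, from 0.0 (all identical) to 8.0 (uniformly random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path, "rb") as f:
        entropy = shannon_entropy(f.read())
    # 7.9 is an arbitrary illustrative cutoff, not a vendor-tuned detection threshold.
    verdict = "likely encrypted or compressed" if entropy > 7.9 else "looks like ordinary data"
    print(f"{path}: {entropy:.3f} bits/byte ({verdict})")
```

The products doing this at scale are watching how those scores shift over time across the whole estate rather than judging one file in isolation, but the underlying math is about this simple.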

Likewise, XDR companies are starting to reduce the time it takes to catch behaviors on the network that are out of the ordinary. When a user starts scanning for open file shares and doing recon on the network you can almost guarantee they’ve been compromised somehow. You can start limiting access and begin cleanup right away to ensure that they aren’t going to get much further. This is an area where zero trust network architecture (ZTNA) is shining. The less a particular user has access to without additional authentication, the less they can give up before the controls in place in the system catch them doing something out of the ordinary. This holds true even if the user hasn’t been tricked into giving up their credentials but instead is working with the attackers through monetary compensation or misguided ire toward the company.

Thanks to the advent of technologies like AI, machine learning, and automation we can now put controls in place quickly to prevent the spread of disaster. You might be forgiven for thinking that kind of response will eradicate this vector of attack. After all, killing off the nasty things floating in our systems means we’re healthier overall, right? It’s not like we’re breeding a stronger strain of disease, is it?

Breeding Grounds

Ask a frightening question and get a frightening answer, right? In the same linked Register article the researchers point out that while dwell times have been reduced the time it takes attackers to capitalize on their efforts has also been accelerated. In addition, attackers are looking at multiple vectors of persistence in order to accomplish their ultimate goal of getting paid.

Let’s assume for the moment that you are an attacker that knows the company you’re going after is going to notice your intrusion much more quickly than before. Do you try to sneak in and avoid detection for an extra day? Or do you crash in through the front door and cause as much chaos as possible before anyone notices? Rather than taking the sophisticated approach of persistence and massive system disruption, attackers are instead taking a more low-tech approach to grabbing whatever they can before they get spotted and neutralized.

If you look at the most successful attacks so far in 2023 you might notice they’ve gone for a “quantity over quality” approach. Sure, a heist like Ocean’s Eleven is pretty impressive. But so is smashing the display case and running out with the jewels. Maybe it’s not as lucrative but when you hit twenty jewelry stores a week you’re going to make up for the smaller per-job take with volume.

Half of all intrusion attempts now come by way of stolen or compromised credentials. There are a number of impressive tools out there that can search for weak points in the system and expose bugs you never even dreamed could exist. There are also much easier ways to phish knowledge workers for their passwords or just bribe them to gain access to restricted resources. Think of it like the crowbar approach to the heist scenario above.

Lock It Down

Luckily, even the fastest attackers still have to gain access to the system to do damage. I know we all harp on it constantly but the best way to prevent attacks is to minimize the ways that attack vectors get exploited in the first place. Rotate credentials frequently. Have knowledge workers use generated passwords in place of ones that can be tied back to them. Invest in password management systems or, more broadly, identity management solutions in the enterprise. You can’t leak what you don’t know or can’t figure out quickly.
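
On the “generated passwords” point, it takes very little code to do this with a cryptographically secure generator; the length and alphabet below are arbitrary choices, and in practice your password manager or identity platform would handle this for you.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Random password drawn from a mixed alphabet using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```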

After that, look at how attackers capitalize on leaks or collusion. I know it’s a tale as old as time but you shouldn’t be running anything with admin access that doesn’t absolutely need it. Yes, even YOUR account. You can’t be the vector for a breach if you are just as unimportant as everyone else. Have a separate account with a completely different password for doing those kinds of tasks. Regularly audit accounts that have system-level privilege and make sure they’re being rotated too. Another great reason for having an identity solution is that the passwords can be rotated quickly without disruption. Oh, and make sure the logins to the identity system are as protected as anything else.

Lastly, don’t make the mistake of thinking you’re an unappealing target. Just because you don’t deal with customer data or have personally identifiable information (PII) stored in your system doesn’t mean you’re not going to get swept up in the next major attack. With the quantity approach the attackers don’t care what they grab as long as they can get out with something. They can spend time analyzing it later to figure out how to best take advantage of what they’ve stolen. Don’t give them the chance. Security through obscurity doesn’t work well in an age where you can be targeted and exposed before you realize what’s going on.


Tom’s Take

Building a better mousetrap means you catch more mice. However, the ones that you don’t catch just get smarter and figure out how to avoid the bait. That’s the eternal game in security. You stamp out the low-level threats quickly but that means the ones that aren’t ensnared become more resistant to your efforts. You can’t assume every attack is going to be a sophisticated nation state attempt to steal classified info. You may just be the unlucky target of a smash-and-grab with stolen passwords. Don’t become a victim of your own success. Keep tightening the defenses and make sure you don’t wind up missing the likely while looking for the impossible.

The Essence of Cisco and Splunk

You no doubt noticed that Cisco bought Splunk last week for $28 billion. It was a deal that had been rumored for at least a year if not longer. The purchase makes a lot of sense from a number of angles. I’m going to focus on a couple of them here with some alliteration to help you understand why this may be one of the biggest signals of a shift in the way that Cisco does business.

The S Stands for Security

Cisco is now a premier security company. The addition of the most powerful SIEM on the market means that Cisco’s security strategy now has a completeness of vision. SecureX has been a very big part of the sales cycle for Cisco as of late and having all the parts to make it work top to bottom is a big win. XDR is a great thing for organizations but it doesn’t work without massive amounts of data to analyze. Guess where Splunk comes in?

Aside from some very specialized plays, Cisco now has an answer for just about everything a modern enterprise could want in a security vendor. They may not be number one in every market but they’re making a play for number two in as many places as possible. More importantly it’s a stack that is nominally integrated together to serve as a single source for customers. I’ll be the first person to say that the integration of Cisco software acquisitions isn’t seamless. However, when the SKUs all appear together on a bill of materials most organizations won’t look beyond that. Especially if there are professional services available to just make it work.

Cisco is building out their security presence in a big way. All thanks to a big investment in Splunk.

The S Stands for Software

When the pundits said that Cisco could never really transform themselves from a hardware vendor to a software company there was a lot of agreement. Looking back at the transition that Chuck Robbins has led since then what would you say now? Cisco has aggressively gone after software companies across the board to build up a portfolio of recurring revenue that isn’t dependent on refresh cycles or silicon innovations.

Software is the future for Cisco. Iteration on their core value products is going to increase their profit far beyond what they could hope to realize through continuing to push switches and routers. That doesn’t mean that Cisco is going to abandon the hardware market. It just means that Cisco is going to spend more time investing in things with better margins. The current market for software subscriptions and recurring licensing revenue is hot and investors want to see those periodic returns instead of a cycle-based push for companies to adopt new technologies.

What makes more sense to you? Betting on a model where customers need to pay per gigabyte of data stored, or on a technology which may be dead on the vine? Taking the nerd hat off for a moment means you need to see the value that companies want to realize, not the hope that something is going to be big in the future. Hardware will come along when the software is ready to support it. Blu-Ray didn’t win over HD-DVD because it was technically superior. It won because Sony supported it and convinced Disney to move their media onto it exclusively.

Software is the way forward for Cisco. Software that provides value for enterprises and drives upgrades and expansion. The hardware itself isn’t going to pull more software onto the bottom line.

The S Stands for Synergy

The word synergy is overused in business vernacular. Jack Welch burned us out on the idea that we can get more out of things together by finding hidden gems that aren’t readily apparent. I think that the real value in the synergy between Cisco and Splunk can be found in the value of creating better code and better programmers.

A bad example of synergy is Cisco’s purchase of Flip Video. When it became clear that the market for consumer video gear wasn’t going to blow up quite like Cisco had hoped they pivoted to talk about using the optics inside the cameras to improve video quality for their video collaboration products. Which never struck me as a good argument. They bet on something that didn’t pay off and had to salvage it the best they could. How many people went on to use Cisco cameras outside of the big telepresence rooms? How many are using them today instead of phone cameras or cheap webcams?

Real synergy comes when the underlying processes are examined and improved or augmented. Cisco is gaining a group of developers and product people that succeed based on the quality of their code. If Splunk didn’t work it wouldn’t be a market leader. The value of what that team can deliver to Cisco across the organization is important. The ongoing critiques of Cisco’s code quality have created a group of industry professionals that are guarded when it comes to using Cisco software on their devices or for their enterprises. I think adding the expertise of the Splunk teams to that group will go a long way toward helping Cisco write more stable code in the long term.


Tom’s Take

Cisco needed Splunk. They needed a SIEM to compete in the wider security market and rather than investing in R&D to build one they decided to pick up the best that was available. The long courtship between the two companies finally paid off for the investors. Now the key is to properly integrate Splunk into the wider Cisco Security strategy and also figure out how to get additional benefits from the development teams to offset the big purchase price. The essence of those S’s above is that Cisco is continuing their transformation away from a networking hardware company and becoming a more diversified software firm. It will take time for the market to see if that is the best course of action.

Wi-Fi 6E Won’t Make a Difference

It’s finally here. The vaunted day when the newest iPhone model has Wi-Fi 6E. You’d be forgiven for missing it. It wasn’t mentioned as a flagship feature in the keynote. I had to unearth it in the tech specs page linked above. The trumpets didn’t sound heralding the coming of a new paradigm shift. In fact, you’d be hard pressed to find anyone that even cares in the long run. Even the rumor mill had moved on before the iPhone 15 was even released. If this is the technological innovation we’ve all been waiting for, why does it sound like no one cares?

Newer Is Better

I might be overselling the importance of Wi-Fi 6E just a bit, but that’s because I talk to a lot of wireless engineers. More than a couple of them had said they weren’t even going to bother upgrading to the new USB-C wonder phone unless it had Wi-Fi 6E. Of course, I didn’t do a survey to find out how many of them had 6E-capable access points at home, either. I’d bet the number was 100%. I’d also be willing to bet that a survey of people outside of that sphere looking to buy an iPhone 15 Pro who can tell me if they have a 6E-capable chipset at home would come back much, much lower.

The newest flagship device has cool stuff. Better cameras, faster processor, more RAM, and even titanium! The reasons to upgrade are legion depending on how old your device is. Are you really ready to sink it all because of a wireless chipset design? There are already a number of folks saying they won’t upgrade their amazing watch because Apple didn’t make it black this year. Are the minor technical achievements really deal breakers in the long run?

The fact of the matter is that the community of IT pros outside of the wireless space don’t actually care about the wireless chipset in their phone. Maybe it’s faster. Maybe it’s cooler. It could even be more about bragging rights than anything else. However, just like the M1 MacBook Wi-Fi, the real-world results are going to be a big pile of “it depends”. That’s because organizations don’t make buying decisions based on consumer tech.

Sure, the enterprise may have been pushed in certain directions in the past due to the adoption of smart phones. Go into any big box store and see how the employees are using phones instead of traditional scanners for inventory management. Go into your average bank or hospital and ask the CIO what their plans are to upgrade the wireless infrastructure to support Wi-Fi 6E now that Apple supports it across the board on their newest devices. I bet you get a very terse answer.

Gen Minus One

The buying patterns for enterprise IT don’t support bleeding edge technology. That’s because most enterprises don’t run on the bleeding edge. Their buying decisions are informed by the installation base of their users, not on their projected purchases. Enterprises aren’t going to take a risk on buying something that isn’t going to provide benefit for the investment. Trying to provide that benefit for a small number of users is even more suspect. Why spend big bucks for a new access point that a tenth of my workforce can properly use?

Buying decisions and deployment methodology follow a timeline that was decided upon months ago, even for projects that come up out of the blue. If you interview your average CIO with a good support team they can tell you how old their devices are, what order they are planned to be replaced, and roughly how much that will cost today. They have a plan ready to plug in when the executive team decides there is budget to spend. Strike while the funding iron is hot!

To upend the whole plan because some new device came out is not an easy sell to the team. Especially if it means reducing the number of devices that can be purchased because the newer ones cost more. If anything it will encourage the teams to hold on to that particular budget until the prices of those cutting edge devices falls to a point where they are more cost effective for a user base that has refreshed devices and has a need for faster connectivity.

Wi-Fi 6E suffers from a problem common to IT across the board. It’s not exciting enough to be important. The current generation of devices can utilize the connectivity it provides efficiently. The airspace in an enterprise is certainly crowded enough to need new bands for high performance devices to move into. But does the performance of Wi-Fi 6E create such a gap as to make it a “must have” in the budget? What would you be willing to sacrifice to get it? And would your average user notice the difference? If you can’t say for certain that the incremental improvement will make that much of a difference for the non-wireless-savvy person then you’re going to find yourself waiting for the next revision of the standard. Which, sadly, has the benefit of a higher number. Which means it’s obviously better, right?


Tom’s Take

I like shiny new things. I didn’t upgrade my phone this year because my older one is good enough for my use case. If I were to rank all the reasons why I wanted to upgrade I’d put Wi-Fi 6E near the bottom of the list. It’s neat. I like the technology behind it. For the average CIO it doesn’t move the needle. It doesn’t have an impressive pie chart or cost savings associated with it. If you upgraded everyone to Wi-Fi 6E overnight no one would notice. And even if they did they’d be asking when Wi-Fi 7 was coming out because that one is really cool, even if they know zero about what it does. Wi-Fi 6E on a mobile device won’t matter in the long run because the technology isn’t cool enough to be noticed by people that aren’t looking for it.

Overcoming the Wall

I was watching a YouTube video this week that had a great quote. The creator was talking about sanding a woodworking project and said something about how much it needed to be sanded.

Whenever you think you’re done, that’s when you’ve just started.

That statement really resonated with me. I’ve found that it’s far too easy to think you’re finished with something right about the time you really need to hunker down and put in extra effort. In running they call it “hitting the wall” and it usually marks the point when your body is out of energy. There’s often another wall you hit mentally before you get there, though, and that’s the one that needs to be overcome with some tenacity.

The Looming Rise

If your brain is like mine you don’t like belaboring something. The mind craves completion and resolution. Once you’ve solved a problem it’s done and finished. No need to continue on with it once you’ve reached a point where it’s good enough. Time to move on to something else that’s new and exciting and a source of dopamine.

However, that feeling of being done with something early on is often a false sense of completion. I learned that the hard way when I was studying for my CCIE. Every question has an answer. Some questions have a couple of different answers. However, knowing the correct answer isn’t the same as knowing all the incorrect answers. Why would I want to take the time to learn all the wrong things instead of just learning what’s right and moving on to the next topic?

The reason to keep going even after you know what’s right is to recognize what the wrong thing looks like. When studying you’re often confronted with suboptimal situations or, especially with the CCIE, put into positions where you can make mistakes that will lead to disaster if you don’t recognize the pitfalls early. Maybe it’s creating a routing loop. It could be a choice between two methods of configuration that really only has one correct answer if you know why the other one will cause problems.

Persevering through that mental wall that says “you’ve done enough” is important because the extra value you gain when you do is critical to understanding the myriad ways that something can be broken. It’s not enough to know it’s not right. You have to recognize what isn’t right about it. That kind of understanding can come from practical experience, like making the mistake yourself, or through careful study in controlled situations, like learning all the wrong ways to work the problem.

The Challenging Ascent

Getting over that wall isn’t easy. Your brain doesn’t want to struggle past the right way to do things. It craves challenge and novelty. You’re going to have to work against your better nature to get to a point where you’re past the wall. Don’t be afraid to lie to yourself to get where you need to be.

When running I will trick myself when I hit my mental wall by saying “one more song” or “one more block” when I’m ready to give up. The idea that I can make it a short distance or short amount of time is comforting to my brain when it wants to stop. And by tricking it I can often push a little harder to another song or two more blocks before I get completely over the wall and have the mental toughness to continue.

Likewise, when you’re studying and you’ve found the correct answer you need to push yourself to find one incorrect way at first. Maybe a second. If it’s something that has configurable settings you should investigate a few wrong values to figure out what happens when things are outside of bounds or when they’re just a little bit off. Maybe convince yourself to figure out two or three and write down the results. If one of them ends up being really interesting it could spark you to do more investigation to find out what caused that particular outcome.

You’ll find that you can get past your mental blocks much easier with tricks like that. More importantly, you’ll also find that you can get them to pop up faster and be overcome with less effort as you understand when they happen. If you’ve ever sat down to study something and your brain immediately wants to give up you know that the wall is right in front of you. How you overcome it can mean the difference between truly understanding a topic and just knowing enough about the answer to regurgitate it later.


Tom’s Take

As always, your mileage may vary with skills like these. I’d wager that most people do hit a wall whether it’s running or doing math or studying the intricacies of how OSPF works over non-broadcast networks. Don’t settle for your brain telling you that you’re done. Seek to really put in the work and understand what’s going on. Write everything down so you know what you’ve discovered. And when that wall seems like it’s too high to climb just whisper to yourself you’re going to climb another foot. And then another. And pretty soon you’ll be over and in the clear.

Networking Is Fast Enough

Without looking up the specs, can you tell me the PHY differences between Gigabit Ethernet and 10GbE? How about 40GbE and 800GbE? Other than the numbers being different do you know how things change? Do you honestly care? Likewise for Wi-Fi 6, 6E, and 7. Can you tell me how the spectrum changes affect you or why the QAM changes are so important? Or do you want those technologies simply because the numbers are bigger?

The more time I spend in the networking space the more I realize that we’ve come to a comfortable point with our technology. You could call it a wall but that provides negative connotations to things. Most of our end-user Ethernet connectivity is gigabit. Sure, there are the occasional 10GbE cards for desktop workstations that do lots of heavy lifting for video editing or more specialized workflows like medical imaging. The rest of the world has old fashioned 1000Mb connections based on 802.3z ratified in 1998.

Wireless is similar. You’re probably running on a Wi-Fi 5 (802.11ac) or Wi-Fi 6 (802.11ax) access point right now. If you’re running on 11ac you might even be connected using Wi-Fi 4 (802.11n) if you’re running in 2.4GHz. Those technologies, while not quite as old as GigE, are still prevalent. Wi-Fi 6E isn’t really shipping in quantity right now due to FCC restrictions on outdoor use and Wi-Fi 7 is a twinkle in hardware manufacturers’ eye right now. Why aren’t we clamoring for more, faster, better, stronger all the time?

Speedometers

How fast can your car go? You might say you’ve had it up to 100 mph or above. You might take a look at your speedometer and say that it can go as high as 150 mph. But do you know for sure? Have you really driven it that fast? Or are you guessing? Would you be shocked to learn that even in Germany, where the Autobahn has an effectively unlimited speed limit, cars are often limited to 155 mph? Even though the speedometer may go higher, the cars are limited through an agreement for safety reasons. Many US vehicles are also speed limited between 110 and 140 mph.

Why are we restricting the speeds for these vehicles? Safety is almost always the primary concern, driven by the desire for insurance companies to limit claims and reduce accidents. However, another good reason is also why the Autobahn has a higher effective speed limit: road conditions. My car may go 100 mph but there are very few roads in my part of the US that I would feel comfortable going that fast on. The Autobahn is a much better road surface for driving fast compared to some of the two-lane highways around here. Even if the limit was higher I would probably drive slower for safety reasons. The roads aren’t built for screaming speeds.

That same analogy applies to networking. Sure, you may have a 10GbE connection to your Mac Mini and you may be moving gigs of files back and forth between machines in your local network. What happens if you need to upload it to YouTube or back it up to cloud storage? Are you going to see those 10GbE speeds? Or are you going to be limited to your ISP’s data rates? The fastest engine can only go as fast as the pathways will permit. In essence, that hot little car is speed limited because of the pathway the data takes to the destination.
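
To put rough numbers on that bottleneck, here’s a quick sketch comparing how long the same hypothetical 50 GB of footage takes to move over a local link versus an ISP uplink. The uplink speed is an assumed example, and the math ignores protocol overhead.

```python
FILE_GB = 50                        # hypothetical project folder
file_bits = FILE_GB * 8 * 10**9     # decimal gigabytes to bits

links_mbps = {
    "10GbE LAN": 10_000,
    "1GbE LAN": 1_000,
    "ISP upload (assumed)": 35,     # example residential upload speed
}

for name, mbps in links_mbps.items():
    minutes = file_bits / (mbps * 10**6) / 60
    print(f"{name:>21}: {minutes:6.1f} minutes")
```

The local copy finishes before you can refill your coffee; the trip to the cloud is where the rest of the afternoon goes, no matter what NIC is in the workstation.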

There’s been a lot of discussion in the space about ever-increasing connectivity from 400GbE to 800GbE and soon even into the terabit range. But most of it is specialized for AI workloads or other massive elephant flows that are delivered via a fabric. I doubt an ISP is going to put in an 800GbE cross connect to increase bandwidth for consumers any time soon. They won’t do it because they don’t need to. No consumer is going to be running quite that fast.

Likewise, increasing speeds on wireless APs to more than gigabit speeds is silly unless you want to run multiple cables or install expensive 10GbE cards that will require new expensive switches. Forgetting Multigig stuff for now, you’re not going to be able to plug a 10GbE AP into an older switch and get the same performance levels. And most companies aren’t making 10GbE campus switches. They’re still making 1GbE devices. Clients aren’t topping out their transfer rates over wireless. And even if they did they aren’t going to be going faster than the cable that plugs the AP into the rest of the network.

Innovation Idling

It’s silly, right? Why can’t we make things go faster?!? We need to use these super fast connections to make everything better. Yet somehow our world works just fine today. We’ve learned to work with the system we have. Streaming movies wouldn’t work on a dial-up connection but adding 10GbE connections to the home won’t make Netflix work any faster than it does today. That’s because the system is optimized to deliver content just fast enough to keep your attention. If the caching servers or the network degrades to the point where you have to buffer, your experience is poor. But so long as the client is getting streaming data ahead of you consuming it you never know the difference, right?

Our networks are optimized to deliver data to clients running on 1GbE. Without a massive change in the way that workloads are done in the coming years we’re never going to be faster than that. Our software programs might be more optimized to deliver content within that framework but I wouldn’t expect to see 10GbE become a huge demand in client devices. Frankly, we don’t need that much speed. We don’t need to run flat out all the time. Just like a car engine we’re more comfortable running at a certain safe speed that preserves our safety and the life of the equipment.


Tom’s Take

Be honest with yourself. Do you want 10GbE or Wi-Fi 7 because you actually need the performance? Or do you just want to say you have the latest and greatest? Would you pay extra for a v12 engine in a sports car that you never drive over 80 mph? Just to say you have it? Ironically enough, this is the same issue that cloud migrations face today. We buy more than we need and never use it because we don’t know what our workloads require. Instead, we buy the fastest biggest thing we can afford and complain that something is holding it back. Rather than rushing out to upgrade your Wi-Fi or Ethernet, ask yourself what you need, not what you want. I think you’ll realize the network is fast enough for the foreseeable future.