The Heat is On

One of the things I like to do in my twenty-eight minutes of spare time per week is play Battletech. It’s a tabletop wargame that involves big robots and lots of weapons. Some of them are familiar, like missiles and artillery. Because it’s science fiction there are also lasers and other crazy stuff. It’s a game of resource allocation. Can my ammunition last through this fight? You might be asking yourself “why not just carry lots of lasers?” After all, they don’t need ammo. Except the game designers thought of that too. Lasers produce heat. And heat, like ammunition, must be managed. Generate too much and you will shut down. Or boil your pilot alive in the cockpit. Rewind a thousand years and the modern network in a data center is facing a similar issue.

Watt Are You Talking About?

The average AI rack is expected to consume 600 kilowatts of power by next year. GPUs and CPUs are hungry beasts. They need to be fed as much power as possible in order to do whatever math makes AI happen. Data center operators have to come up with creative ways to cool those devices as well. We’re quickly reaching the limits of air cooling, with new designs using liquid cooling and even immersion in mineral oil to keep these machines from frying everything.

Networking is no slouch in the energy consumption department either. The latest generation of networking devices is ramping up to 800 GbE and consuming a significant amount of energy to keep the bandwidth flowing. Some of the biggest consumers of power in these racks are the digital signal processors (DSPs) clustered inside fiber optic modules. You need a lot of DSPs to condition the light for operations at the top end of the scale. Additionally, the CPUs and ASICs inside the units themselves are running at peak performance to provide the resources AI clusters need to return value.

Worse yet, the very design of the network switch works against traditional air cooling. Servers have pretty face plates with air channels that can pull in air and vent it out the back. Nothing in the way except for maybe a USB stick when you’re loading the operating system. Switches, on the other hand, plug in all the cables up front. Those cables and modules impede airflow and, thanks to simple physics, reduce the amount of cooling the switch is capable of providing. If you’ve ever put your hand in front of a modern switch you know that it can pull in a lot of air. But every connection reduces that airflow and feeds a vicious cycle: more heat, harder-working fans, and even more heat.

Chill Out

Just like with the AI cluster servers, we have to deal with the heat generation. Unlike the AI servers, we can’t just dunk them in oil or build a bigger chassis with more fans to bleed it off. I mean, we technically could if necessary. But when is the last time you saw a top-of-rack (ToR) switch bigger than 2U? That’s because the real value in the rack isn’t the network. It’s the server that the network connects. Every rack unit of space you give to the networking gear or anything that isn’t a server is a wasted RU that could have been generating revenue.
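To make the rack-economics argument concrete, here’s a toy calculation. Every figure in it (especially the revenue-per-RU number) is a hypothetical placeholder I made up for illustration, not real pricing; the point is only that every rack unit given to networking is a rack unit not earning server revenue:

```python
# Toy model of rack-unit economics: every RU spent on networking
# is an RU not spent on revenue-generating servers.
# All dollar figures are made-up placeholders for illustration.

RACK_UNITS = 42
REVENUE_PER_SERVER_RU = 1000.0  # hypothetical monthly revenue per server RU


def server_revenue(network_rus: int) -> float:
    """Monthly revenue left after giving network_rus to switching."""
    return (RACK_UNITS - network_rus) * REVENUE_PER_SERVER_RU


# A 2U ToR switch vs. a hypothetical 4U chassis with bigger fans:
print(server_revenue(2))  # 40000.0
print(server_revenue(4))  # 38000.0
```

Under these made-up numbers, doubling the switch from 2U to 4U just to fit more fans costs the rack owner two servers’ worth of space every month, which is why nobody builds a bigger chassis to solve the heat problem.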

Modern networking devices have hit 800 GbE but AI wants more. There are designs looking to move to 1.6 TbE in the near future. But can they handle the load of DSPs that will be required to condition the laser light? Will they be able to shrink these switches down to accommodate rack economics? Can they keep the whole mess cool enough that anyone who walks into the hot aisle to check on a cable doesn’t end up roasted as if by a flamethrower?

Plug It In

One of the solutions that looks to reduce this power draw is Linear Pluggable Optics (LPO). LPO works to solve these issues by moving the DSPs out of the pluggable module itself and into the switch ASIC. This does a good job of reducing the power draw of the optical module while moving the complexity of signal conditioning into the switch. The net result is less power consumption, but the tradeoff is more complexity. To quote Battletech YouTuber Mechanical Frog, “Opportunity cost spares no one.”

Likewise, a competing solution like Co-packaged Optics (CPO) trades the module itself for wiring the optical portion of the connection right into the switch. This eliminates the need for DSPs to condition the signal and significantly reduces power and heat generation, but at the cost of the flexibility of using modules in the first place. I’ll have more to say on CPO in the future.
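A rough back-of-the-napkin sketch of the tradeoff the last two paragraphs describe. All wattage figures here are hypothetical placeholders for illustration only, not vendor specs; the relationships between the three approaches are what matter:

```python
# Back-of-the-napkin per-port power comparison for 800G optics.
# Every wattage figure below is a hypothetical placeholder --
# check real vendor datasheets before drawing any conclusions.

PORTS = 64  # hypothetical 64-port 800 GbE switch

optics = {
    # approach: (watts in the module, watts moved to the switch/ASIC side)
    "traditional_dsp": (15.0, 0.0),  # DSP lives in the pluggable
    "lpo":             (9.0, 3.0),   # signal conditioning shifts to ASIC
    "cpo":             (0.0, 7.0),   # optics co-packaged with the ASIC
}

for name, (module_w, asic_w) in optics.items():
    total = PORTS * (module_w + asic_w)
    print(f"{name:>15}: {total:7.1f} W total optical power")
```

Even with invented numbers, the shape of the tradeoff is visible: each step toward the ASIC shaves watts off the faceplate but gives up some of the plug-and-swap flexibility that made pluggable modules popular in the first place.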


Tom’s Take

The thing to think about is that the technology we’re using creates the bounds that other technology has to build within. We have to deliver networking that never stops so AI can function the way its designers want. To them, the value of AI is not being more power efficient or generating less heat. Instead, it’s about crunching numbers faster or making better pictures that people want to generate. Data centers have an unlimited power budget and heat removal capabilities as far as AI companies are concerned. Reality dictates that nothing is unlimited, but it’s up to the manufacturers to scale those limits before someone hits them. Otherwise, the network is just going to get blamed again. And the heat really gets turned up.

Wi-Fi 8 Already?

The ASUS ROG Wi-Fi 8 AP

Did you see the big news from CES? Wi-Fi 8 is here! Broadcom was talking about it. ASUS rolled out a d20 Wi-Fi 8 AP. MediaTek made sure they had one too. I guess I should probably take down all these Wi-Fi 7 APs and just install the new stuff, right?

Nuts and Bolts

Before my favorite people start jumping out there to talk about how the newest iPhone and Samsung Galaxy phones are trash because they don’t support Wi-Fi 8, we need to make sure we’re all on the same page here. The standard behind the Wi-Fi 8 marketing term, 802.11bn, is a proposed standard that is still in the draft phases, with final approval not expected until September 2028.

The focus of 802.11bn is not speed. It’s reliability. When you look at the way that vendors have been pushing more and more throughput for the past several revisions you might ask yourself how much faster things could get. For the eighth release of the Wi-Fi standard the answer is “not any faster.” Wi-Fi 8 is keeping the same speed numbers that Wi-Fi 7 has right now. Instead, the developers are looking to add features like reduced power consumption, extended range, and quality of service (QoS) enhancements.

The goals are important because speed isn’t going to solve all your problems. Faster doesn’t mean better when you’re dropping too many packets or your streaming video gets caught behind file transfers like patches for your PlayStation games. Likewise, the extended range capabilities are designed to keep asymmetric connections from causing issues when your upload is paltry compared to the downlink. Despite the marketing mantra of “bigger number means better”, having a new protocol focused on consistency over expansion is important. Look no further than OS X Snow Leopard for the value of a maintenance release focused on fixing problems over creating new ones.

Carts and Horses

So, why are we talking about Wi-Fi 8 right now? What is it about the industry that gets people so excited about tech that is barely baked at this point? If you’re already feeling like you’re falling behind on the treadmill of upgrades, you’re not alone. I would gather that my readers here aren’t the target market for these announcements.

Where did these announcements happen? CES isn’t known as a hotbed for enterprise technology. Aside from the crazy focus on AI this year you’d be hard pressed to find firewalls and SANs and cloud computing advancements at CES. However, wireless technology crosses both markets. The same Wi-Fi you use in your enterprise works at your house too. And the real target market for these announcements is the gamer market.

How often do you upgrade your home wireless? I’d venture it’s not on the same 3-5 year cycle that your enterprise APs are on. I’d wager your upgrade cycle is more replacement of broken gear over exciting new features. That’s hard for manufacturers to predict. They like consistent cycles because their investors like consistent payouts. That means they need to get you to upgrade more often. If your devices are also on a similar upgrade trajectory you’re likely not going to be upgrading soon. Why upgrade to a Wi-Fi 7 AP when my laptop only supports Wi-Fi 6?

ASUS and MediaTek really want you to jump now. Sure, the gear is pre-pre-standard. Yeah, the advances aren’t totally baked right now and all that fancy QoS to prioritize streaming video is going to get marked down at the provider edge. What’s really important is that you buy it now because they need your cash to keep the product lines moving. Research and Development isn’t easy or cheap. And buying a new polyhedral AP means they can keep developing it when the standard inevitably changes later this year. More importantly, if you buy a pre-standard product in the consumer space now you can guarantee you’re not going to get an upgrade path to make it standard in two years. What you get instead is a device that’s cheap enough to replace in 2029 when the standard is finalized and devices are actually supporting those fancy new features in the protocol. The vendors get to double dip into your wallet.


Tom’s Take

Unless you’re buying one of these things for a review unit, don’t buy Wi-Fi 8. You don’t need it. It’s not ready. More importantly, not buying these units now when they are barely out of alpha means that companies, especially MediaTek, need to learn to put some more effort into the product instead of just trying to get you to jump at the newest biggest number. That’s just for the consumer side of the house. Please do not buy this to put into an enterprise, no matter how much your CEO begs you to do it for their office. You’re rewarding bad behavior and bad marketing. Take a little joy in making your executives wait for a technology to be fully baked before you implement it.

Focus is In for 2026

Hey everyone. It’s January 1 again, which means it’s time for me to own up to the fact that I wrote five posts in 2025. Two of those were about AI. Not surprising given that everyone was talking about it. But that seemed to be all I was talking about. What else was I doing instead?

  • I upped my running amount drastically. I covered over 1,600 miles this year. I ran another half marathon distance for the first time in four years. I feel a lot better about my health and my consistency because now running is something I prioritize. I don’t think I’m going to run quite so much in 2026 but you never know.
  • I revitalized a podcast. We relaunched Security Boulevard with big help from my coworker Corey Dirrig. We’ve got a great group of hosts that discuss weekly security topics. You should totally check it out.
  • I’m also doing more with things like Techstrong Gang and other Futurum Group media. That’s in addition to the weekly Tech Field Day Rundown I host with Alastair Cooke. Lots of video!
  • For those that follow my Scouting journey, I was asked to be an Assistant District Commissioner with the goal of becoming the District Commissioner in 2026.

Finding Focus

In total, that means my focus has been elsewhere. It’s what happens in life. We find things that are important to us and we put our energy there. My blog has always been a place where I collect my thoughts and explore ideas, both tech and non-tech related. It’s an authentic part of what’s going on in my head and how I see the world.

One thing I have resisted is the temptation to use generative AI to “fill” the gaps in posting. It’s way too easy to let Claude or Gemini do the hard work of creating content and then just posting the edited version. That’s not going to sound like me. That’s not going to be my insight. It’s an algorithm that has been trained poorly to kind of sound like me. Trust me when I say that there isn’t enough technology right now to generate the kind of snark and witty repartee that you get in a single post here!

Something else that has come up in the past year is that I find myself looking more at the interactions between the people and the tech. The fascinating stories to me weren’t always how many billions had been invested into the Great AI Circle or how many gigawatts are being proposed for new data centers in the middle of South Dakota. Instead, it’s how the people are affected. It’s seeing jobs being cut because we need more capital for GPUs. It’s seeing how people are adjusting to a world where they can just make up ideas on the fly and generate anything they can think of, provided they can refine the prompt enough. That’s where the action is going to be for me in the coming year.


Tom’s Take

Where does that leave this place in 2026? It means I’m going to get back to more writing. Yeah, I have said that the last couple of years. But I realize how important it is to me. Once I write it down it sticks in my head. More importantly, it becomes a part of the larger body of knowledge. I still get traffic on some very old posts here. That means someone is searching for answers to the same questions I used to want answered. That also means that people are still using written words over videos. If that’s the case I’m going to do my part to keep things going.

With all the podcasts and Field Day events and other things going on I might not get back to the magical land of 50 posts in the year but I promise I’m going to do better than five. It’s going to be a long year of AI, quantum computing, security breaches, networking, wireless, industry consolidation, and more. I’m going to do my best to cover it authentically and ensure you have my snarky take on it all.