About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

What Happens When The Internet Breaks?

It’s a crazy idea to think that a network built to be completely decentralized and resilient can be so easily knocked offline in a matter of minutes. But that basically happened twice in the past couple of weeks. CloudFlare is a service provider that offers to sit in front of your website and provide all kinds of important services. They can prevent smaller sites from being knocked offline by an influx of traffic. They can provide security and DNS services for you. They’re quickly becoming an indispensable part of the way the Internet functions. So what happens when we all start to rely on one service too much?

Bad BGP Behavior

The first outage on June 24, 2019 wasn’t the fault of CloudFlare. A small service provider in Pennsylvania decided to use a BGP Optimizer from Noction to do some route optimization inside their autonomous system (AS). That in and of itself shouldn’t have caused a problem. At least, not until someone leaked those routes to the greater internet.

It was a comedy of errors. The provider in question announced their more specific routes to a customer, who in turn announced them to Verizon. After that, all bets were off. Because those routes were more specific than the aggregates, they became the preferred routes. And when the whole world beats a path to your door to get to the rest of the world, you see issues.
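To see why the leak was so destructive, remember that IP routing always prefers the longest matching prefix, no matter who announced it. Here’s a minimal Python sketch of that selection logic, using made-up documentation prefixes rather than any real announcements:

```python
# Minimal sketch: longest-prefix match is why a leaked more-specific route
# beats the legitimate aggregate. Prefixes below are illustrative only.
import ipaddress

routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate aggregate announcement",
    ipaddress.ip_network("203.0.113.128/25"): "more-specific leaked by the optimizer",
}

def best_route(destination: str) -> ipaddress.IPv4Network:
    ip = ipaddress.ip_address(destination)
    candidates = [net for net in routes if ip in net]
    return max(candidates, key=lambda net: net.prefixlen)  # longest prefix wins

chosen = best_route("203.0.113.200")
print(chosen, "->", routes[chosen])
# 203.0.113.128/25 -> more-specific leaked by the optimizer
```

Every router that heard the leaked more-specific made the same choice, which is how a small provider in Pennsylvania briefly became the preferred path to a big chunk of the Internet.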

Those issues caused CloudFlare to go offline. And when CloudFlare goes offline, everyone starts having issues. The sites they front go offline, even if the path to those sites is still valid. That’s because CloudFlare is acting as the front end for your site when you use their service. It’s great, because it means that when someone knocks your system offline or hits you with a ton of traffic, you’re safe: CloudFlare can support a lot more bandwidth than you can, especially if you’re self-hosted. But if CloudFlare is out, you’re out of luck.

There was a pretty important lesson to be learned in all this and CloudFlare did an okay job of explaining some of those lessons. But the tone of their article was a bit standoffish and seemed to imply that the people whose responsibility it was to keep the Internet running should do a better job of keeping their house in order. For those of you playing along at home, you’ll realize that the irony overlords were immediately summoned to mete out justice to CloudFlare.

Irregular Expression

On July 2nd, CloudFlare went down again. This time, instead of seeing routing issues or delays, users of the service were greeted with 502 Bad Gateway errors. Again, when CloudFlare is down, your site is down, even if your servers are fine. And then the speculation started. Was this another BGP hijack? Was CloudFlare being attacked? No one knew, and most of the places you could go look were offline, including one of the biggest outage-detection sites, which was itself a user of CloudFlare services.

CloudFlare eventually posted a blog owning up to the fact that it wasn’t an attack or a BGP issue, but instead the result of a bad web application firewall (WAF) rule being deployed globally in one go. A single regular expression (regex) was responsible for spiking the CPU utilization of the entire CloudFlare network. And when all your CPUs are cranking along at 100% utilization across the board, you are effectively offline.
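If you’ve never watched a single regex pin a CPU, the failure mode is backtracking: stacked unbounded wildcards force the engine to retry a huge number of split points before admitting defeat. CloudFlare’s write-up pointed to an expression whose problematic core behaved like `.*.*=.*`. Here’s a small sketch of the blow-up; the pattern shape comes from their post-mortem, but the test harness and input sizes are mine:

```python
# Sketch of catastrophic backtracking: on a string with no '=', the two
# stacked .* wildcards force roughly n^2/2 retries before the match fails.
import re
import time

pattern = re.compile(r".*.*=.*")  # same shape as the reported culprit

for n in (2_000, 4_000, 8_000):
    s = "x" * n  # no '=' anywhere, so every split point must be tried
    start = time.perf_counter()
    pattern.match(s)  # match() pins the start; search() would be even worse
    print(f"n={n}: {time.perf_counter() - start:.3f}s")
# Runtime roughly quadruples each time n doubles: quadratic, not linear.
```

Now imagine that pattern running against every request crossing every CloudFlare edge server at once.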

In the post-mortem, CloudFlare had to eat a little crow and admit that their testing procedures for catching this particular issue were inadequate. To see the stance they took with Verizon and Noction just a week or so before, and then to see them admit that this one was all on them, was humbling for sure. But, more importantly, it shows that you have to be vigilant in every part of your organization to ensure that something you deploy isn’t going to cause havoc somewhere else. Especially if you’re responsible for a large percentage of the traffic on the web.


Tom’s Take

I think CloudFlare is doing good work with their services. But I also think that too many people are relying on them to provide services that should be planned out and documented. It’s important to realize that no one service is going to provide all the things you need to stay resilient. You need to know how you’re keeping your site online and what your backup plan is when things go down.

And, if you’re running one of those services, you’d better be careful about running your mouth on the Internet.

Extremely Hive Minded

I must admit that I was wrong. After almost six years, I was mistaken about who would end up buying Aerohive. You may recall back in 2013 I made a prediction that Aerohive would end up being bought by Dell. I recall it frequently because quite a few people still point out that post and wonder if it’s come true yet.

Alas, June 26, 2019 is the date when I was finally proven wrong, when Extreme Networks announced plans to purchase Aerohive for $4.45/share, which equates to around $272 million, to be adjusted for some cash on hand. Aerohive is the latest addition to the Extreme portfolio, which now includes pieces of Brocade, Avaya, Enterasys, and Motorola/Zebra.

Why did Extreme buy Aerohive? I know that several people in the industry told me they called this months ago, but that doesn’t explain the reasoning behind spending almost $300 million right before the end of the fiscal year. What was the draw that has Extreme buzzing about this particular company?

Flying Through The Clouds

The most apparent answer is HiveManager. Why? Because it’s really the only thing unique to Aerohive that Extreme didn’t have already. Aerohive’s APs aren’t custom-built. Aerohive’s switching line was rebadged from an ODM in order to meet the requirements to be included in Gartner’s Wired and Wireless Magic Quadrant. So the real draw was the software: the cloud management platform that Aerohive has pushed as their crown jewel for a number of years.

I’ll admit that HiveManager is a very nice piece of management software. It’s easy to use and has a lot of power behind the scenes. It’s also capable of being tuned for very specific vertical requirements, such as education. You can set up self-service portals and Private Pre-Shared Keys (PPSKs) fairly easily for your users. You can also build a lot of policy around the pieces of your network, both hardware and users. That’s a place to start your journey.

Why does that matter? Because Extreme is all about automation! I talked to their team a few weeks ago and the story was all about building automation platforms. Extreme wants to have systems that are highly integrated and capable of doing things to make life easier for administrators. That means having the control pieces in place. And I’m not sure that what Extreme had already was in the same league as HiveManager. I doubt Extreme has put as much effort into their software as Aerohive has invested in theirs over the past eight years.

For Extreme to really build out the edge network of the future, they need to have a cloud-based management system that has easy policy creation and can be extended to include not only wireless access points but wired switches and other data center automation. If you look at what is happening with intent-based networking from other networking companies, you know how important policy definition is to the schema of your network going forward. In order to get that policy engine up and running quickly to feed the automation engine, Extreme made the call to buy it.

Part of the Colony

More important than the software piece, to me at least, are the people. Sure, you can have a bunch of people hacking away at code for a lot of hours to build something great. You can even choose to buy that something great from someone else and just start modifying it to your needs. Extreme knew that adapting HiveManager to fulfill the needs of their platform wasn’t going to be a walk in the park. So bringing the Aerohive team on board makes the most sense to me.

But it’s also important to realize who had a big hand in making the call. Abby Strong (@WiFi_Princess) is the VP of Product Marketing at Extreme. Before that she held the same role at Aerohive in some fashion for a number of years. She drove Aerohive to where they were before moving over to Extreme to do something similar.

When you’re building a team, how do you do it? Do you run out and find random people that you think are the best for the job and hope they gel quickly? Do you just throw darts at a stack of resumes and hope random chance favors your bold strategy? Or do you look at existing teams that work well together and can pull off amazing feats of technical talent with the right motivation? I’d say the third option is the most successful, wouldn’t you?

It’s not unheard of in the wireless industry for an entire team to move back and forth between companies. There’s a hospitality team that’s moved back and forth between Ruckus, Aerohive, and Ubiquiti. There are other teams, like some working on 802.11u, that bounced around a couple of times before they found a home. Which makes me wonder: did Extreme buy Aerohive for HiveManager and end up with the development team as a bonus? Or did they decide to buy the development team and get the software for “free”?


Tom’s Take

We all knew Aerohive was putting itself on the market. You don’t shed sales staff and middle management unless you’re making yourself a very attractive target for acquisition. I still held out hope that maybe Dell would come through for me and make my six-year-old prediction prescient. Instead, the right company snapped up Aerohive for next to nothing and will start integrating HiveManager into their stack in earnest in the coming months. I don’t know what the future plans for further integration look like, but the wireless world is buzzing right now, and that should make life extremely sweet for the Aerohive team.

Cisco Live 2019 – Rededicating Community

The 2019 Cisco Live Sign Photo

Another Cisco Live is in the books for me. I was a bit shocked to realize this was my 14th event in a row. I’ve been going to Cisco Live half of the time it’s been around! This year was back in San Diego, which has good and bad points. I’d like to discuss a few of them here and get the thoughts of the community.

Good: The Social Media Hub Has Been Freed! – After last year’s issues with the Social Media Hub being locked behind the World of Solutions, someone at Cisco woke up and realized that social people don’t keep the same hours as the show floor. So the Hub was located in a breezeway between the Sails Pavilion and the rest of the convention center. And it was great. People congregated. Couches were used. Discussions were had. And the community was able to come together again. Not just during the hours when the show floor was open, but all day long. This picture of the big meeting on Thursday just solidifies in my mind why the Social Media Hub has to be in a common area:

You don’t get this kind of interaction anywhere else!

Good: Community Leaders Step Forward – Not gonna lie. I feel disconnected sometimes. My job at Tech Field Day takes me away from the action. I spend more time in special sessions than I do in the Social Media Hub. For any other event, that could spell disaster. But not for Cisco Live. When the community needs a leader, someone steps forward to fill the role. This year, I was happy to see my good friend Denise Fishburne filling that role. The session above was filled with people paying rapt attention to Fish’s stories and watching her bring people into the community. She’s a master at this kind of interaction. I was proud to sit on the edge and watch her work her craft.

Fish is the d’Artagnan of the group. She may be part of the Musketeers of Social Media, but Fish is undoubtedly the leader. A community should hope to have a leader as passionate and involved as she is, especially given her prominent role at Cisco. She knows exactly what the people in the Social Media Hub need. And I’m happy to call her my friend.

Bad: Passes Still Suck – You don’t have to do the math to figure out that $700 is bigger than $200. And that $600/night is worse than $200/night. And yet, for some reason, we find ourselves in San Diego, where the Gaslamp hotels are beyond insane, wondering what exactly we’re getting with our $700 event pass. Sessions? Nope. Lunch? Well, sort of. Access to the show floor? Only during the scattered hours it’s open during the week. Compelling content? That’s the most subjective piece of all. And yet Cisco is still trying to tell us that the idea of a $200 social-only pass doesn’t make sense.

Fine. I get it. Cisco wants to keep the budgets for Cisco Live high. They got the Foo Fighters after all, right? They also don’t have to worry about policing the snacks and food everywhere. Or at least not ordering the lowest line items on the menu. Which means less fussing about piddly things inside the convention center. And for the next two years it’s going to work out just great in Las Vegas. Because Vegas is affordable with the right setup. People are already booking rooms at the surrounding hotels. You can stay at the Luxor or the Excalibur for nothing. But if the pass situation is still $700 (or more) in a couple of years you’re going to see a lot of people dropping out. Because….

Bad: WTF?!? San Francisco?!? – I’ve covered this before. My distaste for Moscone is documented. I thought we were going to avoid it this time around. And yet, I found out we’re going back to SF in 2022.

WHY?!?!?!?

Moscone isn’t any bigger. We didn’t magically find seating for 10,000 extra people. More importantly, the hotel situation in San Francisco is worse than ever before. You seriously can’t find a good room this year for VMworld. People are paying upwards of $500/night for a non-air-conditioned shoebox! And why would you do this to yourself, Cisco?

Sure, it’s cheap. Your employees don’t need hotel rooms. You can truck everything up. But your cost savings are being passed along to the customer, because you would rather they pay through the nose instead of footing the bill yourself. And Moscone still won’t hold the whole conference. We’ll be spilled over into 8 different hotels, walking from who knows where to get to the slightly nicer shack of a convention center.

I’m not saying that Cisco Live needs to be in Vegas every year. But it’s time for Cisco to start understanding that their conference needs a real convention center. And Moscone ain’t it.

Better: Going Back to Orlando – As you can see above, I’ve edited this post to include new information about Cisco Live 2022. I have been informed by multiple people, including internal Cisco folks, that Live 2022 is going to Orlando and not SF. My original discussion about Cisco Live in SF came from other sources with no hard confirmation. I believe now it was floated as a trial balloon to see how the community would respond. Which means all my statements above still stand regarding SF. Now it just means that there’s a different date attached to it.

Orlando is a better town for conventions than SF. It’s on-par with San Diego with the benefit that hotels are way cheaper for people because of the large amount of tourism. I think it’s time that Cisco did some serious soul searching to find a new venue that isn’t in California or Florida for Cisco Live. Because if all we’re going to do is bounce back and forth between San Diego and Orlando and Vegas over and over again, maybe it’s time to just move Cisco Live to Vegas and be done with the moving.


Tom’s Take

Cisco Live is something important to me. It has been for years, especially with the community that’s been created. There’s nothing like it anywhere else. Sure, there have been some questionable decisions and changes here and there. But the community survives because it rededicates itself every year to being about the people. I wasn’t kidding when I tweeted this:

Because the real heart of the community is each and every one of the people that get on a plane and make the choice time and again to be a part of something special. That kind of dedication makes us all better in every possible way.

The CCIE Times Are A Changing

Today is the day that the CCIE changes. A little, at least. The news hit just a little while ago that there are some changes to the way the CCIE certification and recertification process happens. Some of these are positive. Some of these are going to cause some insightful discussion. Let’s take a quick look at what’s changing and how it affects you. Note that these changes are not taking effect until February 24, 2020, which is in about 8 months.

Starting Your Engines

The first big change comes from the test that you take to get yourself ready for the lab. Historically, this has been a CCIE written exam. It’s a test of knowledge designed to make sure you’re ready to take the big lab. It’s also the test that has been used to recertify your CCIE status.

With the new change on Feb. 24th, the old CCIE written will go away. The test that is going to be used to qualify candidates to take the CCIE lab exam is the Core Technology exam from the CCNP track. The Core Technology exam in each CCNP track serves a dual purpose in the new Cisco certification program. If you’re going for your CCNP you need the Core Technology exam and one other exam from a specific list. That Core Technology exam also qualifies you to schedule a CCIE lab attempt within 18 months.

This means that the CCNP is going to get just a little harder now. Instead of taking multiple tests covering routing, switching, or voice, you’re going to have all those technologies lumped together into one long exam. There are also going to be more practical questions on the Core Technologies exam. That’s great if you’re good at configuring devices. But the amount of content on the individual exam is going to increase.

Keeping The Home Fires Burning

Now that we’ve talked about qualification to take the lab exam, let’s discuss the changes to recertification. The really good news is that the Continuing Education program is expanding and giving more options for recertification.

The CCIE has always required you to recertify every two years. But if you miss your recertification date, you have a one-year “grace period”. Your CCIE status is suspended, but you don’t lose your number until the end of that year. This grace period has informally been called the “penalty box” by several people in the industry. Think of it like a time-out to focus on getting your certification current.

Starting February 24, 2020, this grace period is formalized as an extra year of certification. The CCIE will now be valid for 3 years instead of just 2. However, if you have not recertified by the end of the 3rd year, you lose your number. There is no grace period any longer. This means you need to recertify within the 3-year period.

As far as how to recertify, you now have some additional options. You can still recertify using CE credits. The amount has gone up from 100 to 120 credits to reflect the additional year that CCIEs now get. There are also new ways to recertify using a combination of CE credits and tests. You can take the Core Technologies exam and use 40 CE credits to recertify. You can pass two Specialist exams and use 40 CE credits to recertify, which is a great way to pick up skills in a new discipline and learn new technologies. Or you can pass a single Specialist exam and use 80 CE credits to recertify within the three-year period. This change is huge for those of us that need to recertify. It’s a great option that we don’t have today. The hybrid model offers great flexibility for those that are taking tests but also doing e-learning or classroom training.

The biggest change, however, is in the test-only option. Historically, all you needed to do was pass the CCIE written every two years to recertify. With the changes to the written exam used to qualify you for the lab, that is no longer an option. As noted above, simply passing the Core Technologies exam is not enough. You must also earn 40 CE credits.

So, what tests will recertify you? The first is the CCIE lab. If you take and pass a lab exam within the recertification period you’ll be recertified. You can also take three Specialist exams. The combination of three will qualify you for recertification. You can also take the Core Technologies exam and another professional exam to recertify. This means that passing the test required for the CCNP will recertify your CCIE. There is still one Expert-level exam that will work to recertify your CCIE – the CCDE written. Because no changes were made to the CCDE program in this project, the CCDE written exam will still recertify your CCIE.
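If it helps to see all of that in one place, here’s a quick sketch that encodes the recertification paths from the last two paragraphs as a simple checker. The numbers come straight from the policy described above; the function and argument names are just labels I made up for illustration:

```python
# Sketch: CCIE recertification paths effective Feb 24, 2020, as described above.
def recertifies(ce_credits=0, core_exam=False, specialist_exams=0,
                professional_exams=0, lab_pass=False, ccde_written=False):
    """Return True if any one recertification path is satisfied."""
    paths = [
        ce_credits >= 120,                            # CE credits alone
        core_exam and ce_credits >= 40,               # Core Technologies + 40 CE
        specialist_exams >= 2 and ce_credits >= 40,   # two Specialists + 40 CE
        specialist_exams >= 1 and ce_credits >= 80,   # one Specialist + 80 CE
        lab_pass,                                     # pass a CCIE lab exam
        specialist_exams >= 3,                        # three Specialist exams
        core_exam and professional_exams >= 1,        # Core Tech + Professional exam
        ccde_written,                                 # CCDE written still counts
    ]
    return any(paths)

print(recertifies(core_exam=True))                  # False: Core Tech alone is not enough
print(recertifies(core_exam=True, ce_credits=40))   # True: the new hybrid path
```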

Also, your recertification date is no longer dependent on your lab date. Historically, your recert date was based on the date you took your lab. Now, it’s going to be whatever date you pass your exam or submit your CEs. The good news is that all your certifications are going to line up. Because CCNA and CCNP certifications have always been valid for 3 years as well, recertifying your CCIE will sync up all your certifications to that same date. It’s a very welcome quality-of-life change.

Another welcome change is that there will no longer be a program fee when submitting your CE credits. As soon as you have amassed the right combination you just submit them and you’re good to go. No $300 fee. There’s also a great change for anyone that has been a CCIE for 20 years or more. If you choose to “retire” to Emeritus status you no longer have to pay the program fee. You will be a CCIE forever. Even if you are an active CCIE and you choose not to recertify after 20 years you will be automatically enrolled in the Emeritus program.

Managing Change

So, this is a big change. A single test will no longer recertify your number. You’re going to have to expand your horizons by investing in continuing education. You’re going to have to take a class or do some outside study on a new topic like wireless or security. That’s the encouragement from Cisco going forward. You’re not going to be able to just keep learning the same BGP and OSPF-related topics over and over again and hope to keep your certification relevant.

This is going to work out in favor of the people that complain the CCIE isn’t relevant to the IT world of today. Because you can learn about things like network automation and programmability and such from Cisco DevNet and have it count for CCIE recertification, you have no excuse not to bring yourself current to modern network architecture. You also have every opportunity to learn about new technologies like SD-WAN, ACI, and many other things. Increasing your knowledge takes care of keeping your CCIE status current.

Yes, you’re going to lose the ability to panic after two and a half years and cram for a single test once or twice to reset for the next three years. You also need to stay on top of your CCIE CE credits and your recert date. You can’t be lazy any longer and just assume you need to recertify every odd or even year. But it means your life will be easier without tons of cramming. The way things used to be isn’t the way they’re going to stay.


Tom’s Take

Change is hard. But it’s inevitable. The CCIE is the most venerable certification in the networking world and one of the longest-lived certifications in the IT space. But that doesn’t mean it’s carved in stone as only being a certain way forever. The CCIE must change to stay relevant. And that means forcing CCIEs to stay relevant. The addition of the continuing education piece a couple of years ago is the biggest and best thing to happen in years. Expanding the ability for us to learn new technologies and making them eligible for us to recertify is a huge gift. What we need to do is embrace it and keep the CCIE relevant. We need to keep the people who hold those certifications relevant. Because the fastest way to fade into obscurity is to keep things the way they’ve always been.

You can find more information about all the changes in the Cisco Certification Program at http://Cisco.com/nextlevel

Home on the Palo Alto Networks Cyber Range

You’ve probably heard many horror stories by now about the crazy interviews that companies in Silicon Valley put you through. Sure, some of the questions are downright silly. How would I know how to weigh the moon? But the most insidious are the ones designed to look like skills tests. You may have to spend an hour optimizing a bubble sort or writing some crazy code that honestly won’t have much impact on the outcome of what you’ll be doing for the company.

Practical skills tests have always been the joy and the bane of people the world over. Many disciplines require you to pass a practical examination before you can be certified. Medicine is one. The Cisco CCIE is probably the most well-known in IT. But what is the test really quizzing you on? Most people will admit that the CCIE is an imperfect representation of a network at best. It’s a test designed to get people to think about networks in different ways. But what about other disciplines? What about the ones where time is even more of the essence than it was in the CCIE lab?

Red Team Go!

I was at Palo Alto Networks Ignite19 this past week and I got a chance to sit down with Pamela Warren. She’s the Director of Government and Industry Initiatives at Palo Alto Networks. She and her team have built a very interesting concept that I loved to see in action. They call it the Cyber Range.

The idea is simple enough on the surface. You take a classroom setting with some workstations and some security devices racked up in the back. You have your students log into a dashboard to a sandbox environment. Then you have your instructors at the front start throwing everything they can at the students. And you see how they respond.

The idea for the Cyber Range came out of military exercises that NATO used to run for their members. They wanted to teach their cyberwarfare people how to stop sophisticated attacks and see what their skill levels were with regard to stopping the people that could do potential harm to nation-state infrastructure or, worse, to critical military assets during a war. Palo Alto Networks got involved in helping years ago, and Pamela grew the idea into something that could be offered as a class.

Cyber Range has a couple of different levels of interaction. Level 1 is basic stuff. It’s designed to teach people how to respond to incidents and stop common exploits from happening. The students play the role of a security operations team member from a fictitious company that’s having a very bad week. You learn how to read the log files, collect forensic data, and ultimately how to identify and stop attackers across a wide range of exploits.

If Level 1 is the undergrad work, Cyber Range Level 2 is postgrad in spades. You dig into some very specific and complicated exploits, some of which have only recently been discovered. During my visit the instructors were teaching everyone about the exploits used by OilRig, a persistent group of criminals that love to steal data through things like DNS exfiltration tunnels. Level 2 of the Cyber Range takes you deep down the rabbit hole to see inside specific attacks and learn how to combat them. It’s a great way to keep up with current trends in malware and exploitive behavior.
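If you haven’t run into a DNS exfiltration tunnel before, the trick is depressingly simple: stolen data gets encoded into the hostname labels of otherwise ordinary DNS lookups, which most perimeters happily forward. Here’s a toy sketch of the encoding side; the domain and the “secret” are entirely made up, and real tools layer compression and encryption on top:

```python
# Toy sketch of DNS-tunnel exfiltration encoding. The attacker controls the
# nameserver for exfil.example.com and simply logs every query it receives.
secret = b"db_password=hunter2"          # made-up data for illustration
hex_blob = secret.hex()                  # hex keeps every label DNS-safe

# DNS labels max out at 63 characters, so split the payload into chunks
# and add a sequence number so the far end can reassemble them in order.
chunks = [hex_blob[i:i + 60] for i in range(0, len(hex_blob), 60)]
queries = [f"{chunk}.{seq}.exfil.example.com" for seq, chunk in enumerate(chunks)]

for q in queries:
    print(q)  # in a real attack, each of these would be resolved via normal DNS
```

Spotting that pattern buried in a flood of legitimate lookups, under time pressure, is exactly the kind of skill the Level 2 exercises are built to teach.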

Putting Your Money Where Your Firewall Is

To me, the most impressive part of this whole endeavor is how Palo Alto Networks realizes that security isn’t just about sitting back and watching an alert screen. It’s about knowing how to recognize the signs that something isn’t right. And it’s about putting an action plan into place as soon as that happens.

We talk a lot about automation of alerts and automated incident response. But at the end of the day, we still need a human being to take a look at the information and make a decision. We can winnow that decision down to a simple Yes or No with all the software in the world, but we need a brain doing the hard work after the automation and data analytics pieces have given us all the information they can find.

More importantly, this kind of pressure-cooker testing is a great way to learn how to spot the important things without failing in reality. Sure, we’ve all heard the horror stories about CCIE candidates that typed debug ip packet detail on a core switch in production and watched it melt down. But what about watching an attacker recon your entire enterprise and start exfiltrating data while you’re unable to stop them, because you either don’t recognize the attack vector or you don’t know where to find the right info to lock everything down? That’s the value of training like the Cyber Range.

The best part for me? Palo Alto Networks will bring a Cyber Range to your facility to run the experience for your group! There are details on the page above about how to set this up, but I got a great pic of everything that’s involved here (sans tables to sit at):

How can you turn down something like this? I would have loved to put something like this on for some of my education customers back in the day!


Tom’s Take

I really wish I’d had something like the Cyber Range back when I was fighting virus outbreaks and trying to tame Conficker infections. Having a sandbox to test yourself against scripted scenarios, with variations run by live people, beats watching a video about how to “easily” fix a problem you may never see in that form. I applaud Palo Alto Networks for their approach to teaching security, and I can’t wait to see how Pamela grows the Cyber Range program!

For more information about Palo Alto Networks and Cyber Range, make sure to visit http://Paloaltonetworks.com/CyberRange/

The Good, The Bad, and The Questionable: Acquisition Activities

Sometimes I read the headlines when a company gets acquired and think to myself, “Wow, that was a great move!” Other times I can’t really speak after reading because I’m shaking my head too much about what I see to really make any kind of judgement. With that being said, I think it’s time to look at three recent acquisitions through the lens of everyone’s favorite spaghetti western.

The Good – Palo Alto Networks Buys Twistlock: This one was kind of a no-brainer to me. If you want to stay relevant in the infrastructure security space, you’re going to need some kind of visibility into containers. If you want to stay solvent after The Cloud destroys all infrastructure spending forevermore, you’re going to need to learn how to look into containers. And when you’re ready and waiting for the collapse of the cloud, containers are probably still going to be relevant.

Joking aside, this is a great move for Palo Alto Networks. They’re getting a lot of container talent and can start looking at all kinds of ways to integrate that into their solution sets. It lets people in the organization justify the spend they have for security solutions by allowing them to work alongside the new constructs that the DevOps visionaries are using this week.

By the way, you can check out more from Palo Alto Networks on June 19th at Security Field Day 2.

The Bad – HPE Buys Cray?: Hands up if you were waiting for Cray to get purchased. Um, okay. Hands up if you thought Cray was actually still in business? Wow. No hands. Hmmm…

HPE has a love affair with HPC. And not just because they share a lot of letters in their acronyms. HPE has wanted to prove it has the biggest, baddest CPUs on the block. From all their press about The Machine to all the work they’ve done to build huge compute platforms, it is very clear that HPE thinks the future of HPC involves building big things. And who has the best reputation for having amazingly awesome supercomputers?

Here’s my issue with this purchase: Why does HPE think that the future of compute lies outside the cloud? Are they secretly hoping to build a supercomputer cluster and offer it for rent via a cloud service? Or are they realizing they have no hope of catching up in the cloud race and they’re just conceding that they need to position themselves in a niche market to drive revenue from the kinds of customers that can’t use the cloud for whatever reason? There isn’t a lot of room for buggy whip manufacturers any more, but I guess if you make the best one of the lot you win by default.

Given the HPE track record of questionable acquisitions (Aruba aside), I’m really taking a wait-and-see approach to this. I’d rather it be an Aruba success and not an Autonomy debacle.

The Questionable – NXP Buys Marvell Wi-Fi: This one was the head-scratcher of the bunch for me. Why is this making headlines now? Well, in part because NXP is scrambling to fill out their portfolio. As mentioned in the linked article, NXP had been resting on their laurels a bit, in hopes that Qualcomm’s pending acquisition of the company last year would give them access to the pieces they needed to move into new markets like industrial and communications infrastructure.

Alas, the Qualcomm deal fell apart for political reasons. Which means people are picking up the pieces. And NXP is getting one of the pieces they desperately needed for just shy of $2 billion. But what’s the roadmap? Sure, Marvell has a lot of customers that already use their wireless and Bluetooth chipsets in a wide range of devices. But you don’t make an acquisition like that just for an existing customer base. You need synergy. You need expansion. You need to boost revenues across both companies to justify paying a huge price. So where’s the additional market going to come from? Are they going to double down on industrial and automotive connectivity? Or are they thinking about different expansion plans?


Tom’s Take

Acquisitions in the tech sector are no different from blockbuster trades in the sports world. Sometimes you cheer about a big pickup for a team and other times you boo at the crazy decisions that otherwise sane people made. But if you follow things closely enough you can usually work out which people are crazy like a fox as opposed to just plain crazy.

Will Spectrum Hunger Kill Weather Forecasting?

If you are a fan of the work we do each week with our Gestalt IT Rundown on Facebook, you probably saw a story in this week’s episode about the race for 5G spectrum causing some potential problems with weather forecasting. I didn’t have the time to dig into the details behind the story on that episode, so I wanted to take a few minutes and explain why it’s such a big deal.

First, you have to know that 5G (and many other) speeds are entirely dependent upon the amount of spectrum available for communication. The more spectrum available, the more channels devices have to communicate on, which increases the speed at which they can exchange information and reduces the amount of interference between devices. Sounds simple, right?
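If you want the math behind that claim, the Shannon-Hartley theorem says a channel’s capacity scales linearly with its bandwidth: C = B·log2(1 + SNR). A quick sketch with illustrative numbers (the channel widths and the 20 dB SNR below are mine, not any carrier’s specs) shows why carriers are so hungry for wide swaths of spectrum:

```python
# Sketch of Shannon-Hartley: capacity C = B * log2(1 + SNR).
# Channel widths and the fixed 20 dB SNR are illustrative, not carrier specs.
from math import log2

def capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)             # convert dB to a linear power ratio
    return bandwidth_mhz * log2(1 + snr)  # MHz in -> Mbit/s out

for bw in (20, 100, 400):
    print(f"{bw} MHz channel -> {capacity_mbps(bw, 20):.0f} Mbit/s ceiling")
# 20 MHz  ->  ~133 Mbit/s
# 100 MHz ->  ~666 Mbit/s
# 400 MHz -> ~2663 Mbit/s
```

Double the spectrum and you double the ceiling, which is exactly why wide millimeter-wave bands like the ones around 24 GHz look so tempting to carriers.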

Except mobile devices aren’t the only things using the spectrum. We have all kinds of other devices out there that use radio waves to communicate. We’ve known for several years that there are a lot of devices operating in the 5 GHz spectrum used by 802.11 that interfere with wireless LANs. Things like ISM radios for industrial and medical applications or government radar systems. The government has instituted many regulations in those frequency ranges to ensure that critical infrastructure isn’t interfered with.

When Nature Calls

However, sometimes you can’t regulate away interference. According to this Wired article, the FCC opened up auctions for the 24 GHz frequency band back in March. This was over strenuous objections from NASA, NOAA, and the American Meteorological Society (AMS). Why is 24 GHz so important? Well, as it turns out, there’s a natural phenomenon that exists at that range.

Recall your kitchen microwave. How does it work? Basically, it pumps microwave radiation (at roughly 2.45 GHz) into your food, and the water molecules inside absorb that energy and heat up. Water interacts with microwaves at other frequencies too, and it turns out that water vapor strongly absorbs and emits microwave energy right around 23.8 GHz. Which means that anything that broadcasts at 23.8 GHz will have issues with water, such as water in tree leaves or in water pipes.

So, why are NOAA and the AMS freaking out about auctioning off spectrum in the 23.8 GHz range? Because anything broadcasting in that range is not only going to have issues with water interference, it’s also going to look like water to sensitive equipment. Orbiting weather satellites detect water vapor in the air by measuring its natural microwave emissions at 23.8 GHz, and they’re going to encounter co-channel interference from 5G radio sources transmitting at the same frequency.

You might say to yourself, “So what? It’s just a little buzz, right?” Well, except that little buzz creates interference in the data being fed into forecast prediction models. Those models are the basis for the weather forecasts we have today. And if you haven’t noticed, the reliability of our long-range forecasts has been steadily improving for the past 30 years or so. Today’s 7-day forecasts are almost 80% accurate, which is pretty good compared to how bad things were in the 80s, when you could only guarantee 80% accuracy from a 3-day forecast.

Guess what? NOAA says that if the 24 GHz spectrum gets auctioned off for 5G use, we could see the accuracy of our forecasting regress almost 30%, which would push our models back to where they were in the 80s. Now, for those of you that live in places fortunate enough to only get sun and the occasional rain shower, that doesn’t sound too bad, right? Just make sure to pack an umbrella. But for those that live in places where there is a significant chance of severe weather, it’s a bit more problematic.

I live in Oklahoma. We’re right in the middle of Tornado Alley. In the spring, between April 1 and June 1, my state becomes a fun place full of nasty weather that can destroy homes and cause widespread devastation. It’s never boring, for sure. But in the last 30 years we’ve managed to go from being able to give people a few minutes’ warning about an impending tornado to being able to issue Particularly Dangerous Situation (PDS) Tornado Watches up to 48 hours in advance. While a PDS Tornado Watch doesn’t mean that a specific area is going to get a tornado, it does mean that you need to be on the lookout that day. It gives enough warning to make sure you’re not going to get caught flat-footed when things get nasty.

Yes Man

The easiest way to avoid this problem is probably the least likely to happen. The FCC needs to hold back the auction of the spectrum range identified by NOAA and NASA until it can be proven that there won’t be any interference or that forecast accuracy won’t be impacted. 5G rollouts are still far enough in the future that leaving one part of the spectrum out of the equation isn’t going to cause huge issues for the next few years. But if we have to start writing rules forcing device manufacturers to change power settings, or pushing updates to fixed-position sensors and old satellites, we’re going to have a lot more issues down the road than just slightly slower mobile devices.

The reason this is hard is that an FCC focused on opening things up for corporations doesn’t care about the forecast accuracy of a farmer in Iowa. They care about money. They care about progress. And ultimately they care about looking good over saving lives. There’s no guarantee that reduced forecast accuracy will cost lives, but the odds are that better forecasts help people make better decisions. And ultimately, when you boil it down to the actual choices, the appearance is that the FCC is picking money over lives. That’s a pretty easy call for most people to judge.


Tom’s Take

If I’m a bit passionate about weather tech, it’s because I live in one of the most weather-active places on the planet. The National Severe Storms Laboratory and the National Weather Center are both about 5-6 miles from my house. I see the use of this tech every day. And I know that it saves lives. It’s saved lives for years for people that need to know when dangerous weather is headed their way.