About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

The End of SD-WAN’s Party In China

As I was listening to Network Break Episode 257 from my friends at Packet Pushers, I heard Greg and Drew talking about a new development in China that could be the end of SD-WAN’s big influence there.

China has a new policy in place, according to Axios, that enforces a stricter cybersecurity stance for companies. Companies doing business in China or with offices in China must now allow Chinese officials to access their networks to check for security issues and to verify the supply chain for network security.

In essence, this is saying that Chinese officials can have access to your networks at any time to check for security threats. But the subtext is a little less clear. Do they get to control the CPE as well? What about security constructs like VPNs? This article seems to indicate that as of January 1, 2020, no intra-company VPNs will be authorized for any company in China, whether Chinese or foreign.

Tunnel Collapse

I talked with a company doing global SD-WAN rollouts, including China, back in 2018. One of the things that came up in that interview was that China was an unknown for American companies because of the likelihood that the model would change in the future. MPLS is the current go-to connectivity for branch offices. However, because you can put an SD-WAN head-end unit there and build an encrypted tunnel back to your overseas HQ, it wasn't a huge deal.

SD-WAN is a wonderful way to ensure your branches are secure by default. Since CPE devices “phone home” and automatically build encrypted tunnels back to a central location, such as an HQ, you can be sure that as soon as the device powers on and establishes global connectivity, all traffic will be secure over your VPN until you change that policy.
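That phone-home flow is simple enough to sketch. The Python toy model below (all class and endpoint names here are illustrative, not any vendor's actual API) shows the shape of the logic: out of the box, the device knows only how to reach the orchestrator, and it forwards no user traffic until the encrypted tunnel is up.

```python
# Hypothetical sketch of SD-WAN zero-touch "phone home" provisioning.
# Controller, EdgeDevice, and "hq.example.com" are made up for illustration.

class Controller:
    """Central orchestrator that hands out tunnel policy to new CPE."""
    def __init__(self, hub_endpoint):
        self.hub_endpoint = hub_endpoint
        self.registered = {}

    def register(self, serial):
        # In a real deployment the device would authenticate with a
        # factory-installed certificate before any policy is released.
        policy = {
            "peer": self.hub_endpoint,
            "encrypt": "ipsec",
            "default_route_via_tunnel": True,
        }
        self.registered[serial] = policy
        return policy


class EdgeDevice:
    """Branch CPE that builds its VPN automatically at power-on."""
    def __init__(self, serial):
        self.serial = serial
        self.tunnel_up = False
        self.policy = None

    def power_on(self, controller):
        # Phone home: fetch policy, then bring up the encrypted tunnel
        # before passing any user traffic.
        self.policy = controller.register(self.serial)
        self.tunnel_up = self.policy["encrypt"] == "ipsec"
        return self.tunnel_up


controller = Controller(hub_endpoint="hq.example.com")
branch = EdgeDevice(serial="CPE-0001")
secure = branch.power_on(controller)  # True: traffic is tunneled from first boot
```

The point of the sketch is the ordering: policy arrives before forwarding begins, which is why the branch is "secure by default" rather than secured after the fact.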

Now, what happens with China’s new policy? All traffic must transit outside of a VPN. Things like web traffic aren’t as bad but what about email? Or traffic destined for places like AWS or Azure? It was an unmentioned fact that using SD-WAN VPNs to transit through the content filters in place in China was a way around issues that might arise from accessing resources inside of a very well secured country-wide network.

With the policy change and enforcement guidelines set forth to be enacted in 2020, this could be a very big deal for companies hoping to use SD-WAN in China. First and foremost, you can’t use your intra-company VPN functions any longer. That effectively means that your branch office can’t connect to the HQ or the rest of your corporate network. Given some of the questions around intellectual property issues in China, that might not be a bad thing. However, it is going to cause issues for your users trying to access their mail and other support services, especially if those services are hosted somewhere that will draw additional scrutiny.

The other potential issue is whether or not Chinese officials are even going to allow you to use CPE of your own choosing in the future. If the mandate is that officials should have access to your network for security concerns, who is to say they can’t just dictate what CPE you should use in order to facilitate that access? Larger companies can probably negotiate for some kind of on-site server that does network scanning. But smaller branches are likely going to need an all-in-one device at the head end doing all the work. The additional benefit for the Chinese is that control of the head-end CPE ensures that you can’t build a site-to-site VPN anywhere.

Peering Into The Future

Greg and Drew pontificate a bit on what this means for foreign organizations doing business in China going forward. I tend to agree with them on a few points. I think you’re going to see major companies treating their Chinese offices like zero-trust endpoints. All communications will carry minimal information. Networks won’t be directly connected, either by VPN substitute or otherwise.

Looking further down the road makes the plans even more murky. Is there a way that you can certify yourself to have a standard for cybersecurity? We have something similar with regulations here in the US, where we can submit compliance reports for various agencies and submit to audits or have audits performed by third parties. But if the government won’t take that as an answer, how do you even go about providing the level of detail they want? If the answer is “you can’t”, then the larger discussion becomes whether or not you can comply with their regulations and reduce your business exposure while still making money in this market. And that’s a conversation no technology can solve.


Tom’s Take

SD-WAN gives us a wonderful set of features included in the package. Things like application inspection are wonderful to look at on a dashboard but I’ve always been a bigger fan of the automatic VPN service. I like knowing that as soon as I turn up my devices they become secure endpoints for all my traffic. Alas, all the technology in the world can be defeated by business or government regulation. If the rules say you can’t have a feature, you either have to play by the rules or quit playing the game. It’s up to businesses to decide how they’ll react going forward. But SD-WAN’s greatest feature may now have to be an unchecked box on that dashboard.

Locked Up By Lock-In

When you start evaluating a solution, you are going to get a laundry list of features and functionality that you are supposed to use as criteria for selection. Some are important, like the ones that give you the feature set you need to get your job done. Others are less important for the majority of use cases. One thing tends to stand out for me though.

Since the dawn of platforms, I believe the first piece of comparison marketing has been “avoids lock-in”. You know you’ve seen it too. For those that may not be completely familiar with the term, “lock-in” describes a platform where all the components need to come from the same manufacturer or group of manufacturers in order to work properly. An example would be if a networking solution required you to purchase routers, switches, access points, and firewalls from a single vendor in order to work properly.

Chain of Fools

Lock-in is the greatest asset a platform company has. The more devices they can sell you, the more money they can get from you at every turn. That’s what they want. So they’re going to do everything they can to keep you in their ecosystem. That includes things like file formats and architectures that require the use of their technology or of partner technologies to operate correctly.

So, the biggest question here is “What’s wrong with that?” Note that I’m not a proponent of lock-in. Rather, I’m against the false appearance of choices. Sure, offering a platform with the promise of “no lock-in” is a great marketing tool. But how likely are you to actually follow through on that promise?

I touched on this a little bit earlier this year at Aruba Atmosphere 2019 when I talked about the promise of OpenConfig allowing hardware from different vendors to all be programmed in a similar way. The promise is grand: you’ll be able to buy an access point from Extreme and run it on an Aruba controller while the access layer policies are programmed into Cisco switches. It’s the dream of interoperability!
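To make that dream concrete, here’s a minimal sketch of what vendor-neutral configuration looks like. The structure follows the published openconfig-interfaces YANG model; the port name and description are made up for illustration. In practice you’d push a payload like this to each switch over something like a gNMI Set RPC, regardless of who built the box.

```python
import json

# Sketch of a vendor-neutral interface config, shaped after the
# openconfig-interfaces YANG model. "Ethernet1" and the description
# are illustrative values, not from any real deployment.
interface_config = {
    "openconfig-interfaces:interfaces": {
        "interface": [
            {
                "name": "Ethernet1",
                "config": {
                    "name": "Ethernet1",
                    "type": "iana-if-type:ethernetCsmacd",
                    "description": "access-port-vlan-100",
                    "enabled": True,
                },
            }
        ]
    }
}

# The same JSON payload could be sent to any device that implements
# the model, which is the whole "no lock-in" pitch in a nutshell.
payload = json.dumps(interface_config, indent=2)
```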

More realistically, though, you’ll find that most people aren’t that concerned about lock-in. The false choice of being open to new systems generally comes down to one single thing: price. The people that I know that complain the most about vendor lock-in almost always follow it up with a complaint about pricing or licensing costs. For example:

The list could go on for three or four more pages. And the odds are good you’ve looked at one of those solutions already or you’re currently dealing with something along those lines. So, ask yourself: how much pain does vendor lock-in bring you, aside from your checkbook?

The most common complaint, aside from price, is that the vendor solution isn’t “best of breed”. Which has always been code for “this particular piece sucks and I really wish I could use something else”. But there’s every possibility that the solution sucks because it has to integrate tightly with the rest of the platform. It’s easy to innovate when you’re the only game in town and trying to get people to buy things from you. But if you’re a piece of a larger puzzle and you’re trying to have eight other teams tell you what your software needs to do in order to work well with the platform, I think you can see where this is going.

How many times have you actually wished you could pull out a piece and sub in another one? Again, aside from just buying the cheapest thing off the shelf? Have you ever really hoped that you could sub in an Aerohive AP630 802.11ax (Wi-Fi 6) AP into your Cisco wireless network because they were first to market? Have you ever really wanted to rip out Cisco ISE from your integrated platform and try to put Aruba ClearPass in its place? Early adopters and frustrated users are some of the biggest opponents of vendor lock-in.

Those Three Words

I’m about to tell you why lock-in isn’t the demon you think it is. And I can do it in three words:

It. Just. Works.

Granted, that’s a huge stretch and we all know it. It really should be “well built software that meets all my functionality goals should just work”. But in reality, the reason why we all like to scream at lock-in when we’re writing checks for it is because the alternative to paying big bucks for making it “all just work” is for us to invest our time and effort into integrating two solutions together. And ask your nearest VAR or vendor partner how much fun that can be?

I used to spend a LOT of my time trying to get pieces and parts to integrate because schools don’t like to spend a lot of money on platforms. In Oklahoma they’re even allowed to get out of license agreements every year. Know what that means? A ton of legacy software that’s already paid for sitting around waiting to run on brand new hardware they just bought. And oh, by the way, can you make all that work together in this new solution we were sold by the Vendor of the Month Club?

And because they got the best deal or the best package, I had to spend my time and effort putting things together. So, in a way, the customer was trading money from the hardware and software to me for my wetware — the brain power needed to make it all work together. So, in a way, I was doing something even worse. I was creating my own lock-in. Not because I was building an integrated solution with one vendor’s pieces. But because I was building a solution that was custom and difficult to troubleshoot, even with proper documentation.


Tom’s Take

Lock-in isn’t the devil. You’re essentially trading flexibility for ease-of-use. You’re trading the ability to switch out pieces of a solution for the ability to integrate the pieces together without expending much effort. And yes, I realize that some lock-in solutions are harder to integrate than others. I’m looking at you, Cisco ISE. But, that just speaks to how hard it is to create the kind of environment that you get when you are trying to create all kinds of integrations. You’ll find that the idea of freedom of choice is much more appealing than actually having the ability to swap things out at will.

Procrastination Party

“I’ll get to that later.”

“I’m not feeling it right now.”

“I have to find an angle.”

“It will be there tomorrow.”

Any of those sound familiar? I know they do for me. That’s because procrastination is the beast that lives inside all of us. Slumbering until a time when it awakes and persuades us to just put things off until later. Can’t hurt, right?

Brain Games

The human brain is an amazing thing. It is the single largest consumer of nutrients and oxygen in the human body. It’s the reason why human babies are born practically helpless, given the brain’s size in relation to the rest of an infant. It’s the reason why we can make tools, ponder the existence of life in the universe, and write kick-ass rock and roll music.

But the human brain is lazy. It doesn’t like thinking. It prefers simple patterns and easy work. Given a choice, the human brain would rather do some kind of mindless repetitive task ad nauseam instead of creating. When you think about it, that makes a lot of sense from a biological perspective. Tasks that are easy don’t engage many resources. Which means the brain doesn’t have to operate as much and can conserve energy.

But we don’t want our brains to be lazy. We want to create and learn and do amazing things with our thoughts. Which means we have to overcome the inertia of procrastination. We have to force ourselves to move past the proclivity to be lazy in our thoughts. It is said that the hardest part of running isn’t the mileage but is instead getting up in the morning and putting on your shoes. Likewise, the hardest part of being creative isn’t the actual thinking but is instead starting the process of it in the first place.

Strategies for Anti-Procrastination

I have some methods I use to fight my tendency to procrastinate. I can’t promise they’ll work for you, but the idea is to build your own strategies around these ideas and go from there.

  • Make Yourself Uncomfortable – I’m not talking about lying on a bed of nails or working in the freezing cold. What I mean is take yourself out of your comfort zone. Instead of sitting in my office when I need to write a few things, I intentionally go somewhere public, like a coffee shop. Why? Because putting myself in a place with noise and uncomfortable chairs makes me focus on what I’m supposed to be doing. My brain can’t get lazy when it’s being stimulated from all sides. I have to apply some effort to drown out the conversation and that extra effort pushes me into action.
  • Set a Small Goal to Relax – This one works wonders for your brain. If it thinks there’s even a remote possibility in the near future that it can relax it’s going to race to that finish line as fast as possible. If you’re familiar with the Pomodoro Technique that’s basically what’s going on. Your brain sees the opportunity to be lazy five minutes out of every 30 so it pushes to get there. Except you’re tricking it by forcing it to do work to get there. You become more productive because you’re thinking you get to relax in ten or fifteen minutes when in fact you’re much more productive because you’ve secretly been focused the whole time.
  • Create a Zone For Yourself – This is kind of the opposite of the first point above but it works just as well. Your brain likes to do mindless repetitive tasks because they require very little energy. So why not use that against your lazy brain and trick it into thinking whatever you’re doing is actually “easy”? There’s a ton of ways to do this. My two favorites involve aural stimulation. There are a lot of folks that have a Coding Playlist or a Writing Setlist that they use to zone out and accomplish tasks that require some focus but are mindless in nature. Likewise, I often use noise generators to do the same thing. My current favorite is A Soft Murmur because it allows me to customize the noise that I want to help me shut off the distractions and focus on what I’m doing. I’ll often pair it with a sprint from the second point above to help me really dial in what I’m trying to work on.
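For the curious, the sprint trick from the second point above fits in a few lines of Python. This is just an illustrative timer, not a productivity product; the sleep call is injectable so you can dry-run the schedule without actually waiting:

```python
import time

def pomodoro(work_minutes=25, break_minutes=5, cycles=4, tick=time.sleep):
    """Alternate focused work sprints with short breaks.

    `tick` is the waiting function; passing a no-op lets you inspect
    the schedule in tests without real delays."""
    log = []
    for n in range(1, cycles + 1):
        log.append(f"sprint {n}: work {work_minutes} min")
        tick(work_minutes * 60)   # heads-down work block
        log.append(f"sprint {n}: break {break_minutes} min")
        tick(break_minutes * 60)  # the reward the lazy brain races toward
    return log

# Dry run: no real sleeping, just the generated schedule.
schedule = pomodoro(cycles=2, tick=lambda seconds: None)
```

The trick the post describes is all in the second `tick`: the brain gets its promised break, but only after the work block runs.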

Tom’s Take

Your mileage may vary greatly on the items above. Your brain doesn’t work like mine. You know how you think and what it takes to motivate you. Maybe it’s massive amounts of coffee or a TV playing in the background. But knowing that your mind wants to shut off active processing and do repetitive things over and over again does wonders to help figure out how best to engage it and work smarter. You can’t always stop procrastination. But, with a little planning, you can put it off until tomorrow.

Fast Friday Thoughts – Networking Field Day 21

This week has been completely full at Networking Field Day 21 with lots of great presentations. As usual, I wanted to throw out some quick thoughts that fit more with some observations and also to set up some topics for deeper discussion at a later time.

  • SD-WAN isn’t just a thing for branches. It’s an application-focused solution now. We’ve moved past the technical reasons for implementation and finally moved the needle on “getting rid of MPLS” and instead realized that the quality of service and other aspects of the software allow us to do more. That’s especially true for cloud initiatives. If you can guarantee QoS through your SD-WAN setup you’re already light years ahead of where we’ve been already.
  • Automation isn’t just figuring out how to make hardware and software do things really fast. It’s also about helping humans understand which things need to be done fast and automatically and which things don’t. For all the amazing stuff that you can do with scripting and orchestration there are still tasks that should be done manually. And there are plenty of people problems in the process. Really smart companies that want to solve these issues should stop focusing on using technology to eliminate the people and instead learn how to integrate the people into the process.
  • If you’re hoping to own the entire stack from top to bottom, you’re going to end up disappointed. Customers aren’t looking for mediocre solution from a single vendor. They’re not even looking for a great solution from a couple. They want the best and they’re not afraid to go where they want to get them. And when you realize that more and more engineering and architecture talent is focused on integrating already you will see that they don’t have any issues making your solution work with someone else’s instead of counting on you to integrate your platforms together at some point in the future.

Tom’s Take

Networking Field Day is always a great time for me to talk to some of the best people in the world and hear from some of the greatest companies out there about what’s coming down the pipeline. The world of networking is changing even more than it has in the last few software-defined years.

The Certification Ladder

Are you climbing the certification ladder? If you’re in IT, the odds are good that you are. Some people are just starting out and see certifications as a way to get the knowledge they need to do their job. Others see certs as a way to get out of a job they don’t like. Still others have plenty of certifications but want to get the ones at the top of their field. That last group is the one I want to spend some time talking about.

Pushing The Limit

Expert-level certifications aren’t easy on purpose. They’re supposed to represent the gap between being good at something and going above and beyond. For some that involves some kind of practical test of skills like the CCIE. For others it involves a board interview process like the VCDX. Or it could even involve a combination of things like the CWNE does with board review and documentation submissions.

Expert certifications aren’t designed to be powered through in a short amount of time. That’s because it’s difficult to become an expert at something without putting in the practice time. For some tests, that means meeting some minimum requirements. You can only attempt your VCDX when you have already passed the VCAP-DCA and VCAP-DCD test, for example. Or you may have a minimum requirement of time in the industry, such as the CISSP requirement of four years in the security industry.

But, more importantly, the requirement is that you truly are an expert. How many times have you bumped into someone with a certification and thought to yourself, “How on earth did they pass that?” It should be fairly uncommon to run into a CCIE that makes you feel that way. The test is rigorous and requires everyone to pass a very similar version of the practical exam. Sure, you still run into people that say the old 2-day exam was harder. But by and large, most CCIEs have had to endure the same kind of certification requirements.

Now, what people do after they get there is an entirely different matter altogether. There are a lot of people that get to the pinnacle of their certification journey and sit there on top of their mountain. They take time to survey the lands that they now watch over and they relax. They don’t see any need in going any further. They’ve done what they came to do. And for many that’s the way to go. Congratulations on your ride.

Still others use this opportunity negatively. They expect people to kiss the brass certificate and pay deference to them because of it. This can affect almost anyone. I remember a time years ago when I had just gotten my CCIE lab out of the way. I was working on a proposal for a customer. We had just gotten an email from the customer asking why we didn’t include a particular switch in the design. I told our team that we didn’t need it because the requirements of the design didn’t call for something that cost three times what we recommended. The customer’s response was, “Well, this other partner guy is a CCIE and he says we need that switch.” I replied back with, “Well, I’m a CCIE too, so let’s cut that crap and talk about the hardware.”

I’m not sure how many times that person had used the “I’m a CCIE” justification for their recommendations, but it shows me that some people believe a piece of paper speaks louder than their track record. Those people are usually the ones that fall back into the pattern of “listen to me because I passed tests” not “listen to me because I did the studying”. It’s important to ascribe value to passing a test, but remember that the test is a way to prove you have knowledge. It reminds me of this scene from Tommy Boy:

Throwing up a certification as justification for a recommendation is no different than just tossing a worthless guarantee on a box. Prove you know what you’re talking about instead of just saying you do.

Exceeding Your Reach

The last type of person that climbs the certification ladder is like the one in this tweet from my friend Hank Yeomans:

https://twitter.com/HankYeomans/status/1177237065509064705

He looks at the ascent to the top of his certification ladder as a chance to do more. To build more. It’s not the end of the journey. It’s not bad to stop and look around at the new view from the top of your ladder when you’ve climbed it. But if you look at the journey as the start of something that you need to finish, you’re going to start immediately looking around to find the next thing that you need to do. Perhaps it’s learning a new technology related to the one that you just finished. Or maybe it’s that you want to figure out how to get even better at what you do.

People that never rest in their attempts to be better are the ones that ultimately change the way things are done. They don’t just accept that this is the way that things need to be. Instead, they use the top of their ladder to stretch out and see what they can reach. They realize that everything we do in life is just building on something else we’ve already done. We use Crawl, Walk, Run as a metaphor for building through a project or a process all the time. That’s because we know that you have to keep making steps to progress. But what if someone just said, “You know what, I’ve mastered walking. I don’t need to run. All you people who only crawl listen to me because I’m better than you!” It would show how short-sighted they are when it comes to continuing the journey.


Tom’s Take

Many times, I’ve talked about the fact that I relaxed after I passed my CCIE and enjoyed not studying into the wee hours of the night. But after a while I started getting uncomfortable around 8-9pm. Because there was a little voice in the back of my head that kept telling me “You should be studying for something.” Instinctively, that voice knew that I needed to continue my journey. I would never be content resting on my laurels and I could never bring myself to use my certification as a crutch to make myself look important to others. Instead, I needed to push myself to build on what I’ve already done and make myself better. As Hank said, it’s just a foothill on a greater journey. Once you’ve learned how to use your ladder to increase your reach, even the sky isn’t the limit any longer.

It’s A Wireless Problem, Right?

How many times have your users come to your office and told you the wireless was down? Or maybe you get a phone call or a text message sent from their phone. If there’s a way for people to figure out that the wireless isn’t working they will not hesitate to tell you about it. But is it always the wireless?

Path of Destruction

During CWNP Wi-Fi Trek 2019, Keith Parsons (@KeithRParsons) gave a great talk about Tips, Techniques, and Tools for Troubleshooting Wireless LAN. It went into a lot of detail about how many things you have to look at when you start troubleshooting wireless issues. It makes your head spin when you try and figure out exactly where the issues all lie.

However, I did have to put up a point that I didn’t necessarily agree with Keith on:

I spent a lot of time in the past working with irate customers in schools. And you’d better believe that every time there was an issue it was the network’s fault. Or the wireless. Or something. But no matter what the issue ended up being, someone always made sure to remind me the next time, “You know, the wireless is flaky.”

I spent a lot of time trying to educate users about the nuances between signal strength, throughput, Internet uplinks, application performance (in the days before cloud!) and all the various things that could go wrong in the pathway between the user’s workstation and wherever the data was that they were trying to access. Wanna know how many times it really worked?

Very few.

Because your users don’t really care. It’s not all that dissimilar to the utility companies that you rely on in your daily life. If there is a water main break halfway across town that reduces your water pressure or causes you to not have water at all, you likely don’t spend your time troubleshooting the water system. Instead, you complain that the water is out and you move on with your day until it somehow magically gets fixed. Because you don’t have any visibility into the way the water system works. And, honestly, even if you did it might not make that much difference.

A Little Q&A

But educating your users about issues like this isn’t a lost cause. Because instead of trying to avail them of all the possible problems that you could be dealing with, you need to help them start to understand the initial steps of the troubleshooting process. Honestly, if you can train them to answer the following two questions you’ll likely get a head start on the process and make them very happy.

  1. When Did The Problem Start? One of the biggest issues with troubleshooting is figuring out when the problem started in the first place. How many times have you started troubleshooting something only to learn that the real issue has been going on for days or weeks and they don’t really remember why it started in the first place? Especially with wireless you have to know when things started. Because of the ephemeral nature of things like RSSI your issue could be caused by something crazy that you have no idea about. So you have to get your users to start writing down when things are going wrong. That way you have a place to start.
  2. What Were You Doing When It Happened? This one usually takes a little more coaching. People don’t think about what they’re doing when a problem happens outside of what is causing them to not be able to get anything done. Rarely do they even think that something they did caused the issue. And if they do, they’re going to try and hide it from you every time. So you need to get them to start detailing what was going on. Maybe they were up walking around and roamed to a different AP in a corner of the office. Maybe they were trying to work from the break room during lunch and the microwave was giving them fits. You have to figure out what they were doing as well as what they were trying to accomplish before you can really be sure that you are on the right track to solving a problem.
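If you want to make that intake formal, a tiny record like this hypothetical one (the field names and the sample values are mine, not from any ticketing product) captures both answers at report time instead of days later:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical trouble-report structure built around the two questions
# above: when did it start, and what was the user doing at the time?
@dataclass
class WirelessReport:
    reported_by: str
    started_at: datetime   # "When did the problem start?"
    activity: str          # "What were you doing when it happened?"
    notes: list = field(default_factory=list)  # follow-up observations

# Example intake from a coached user; every value here is made up.
report = WirelessReport(
    reported_by="user@example.com",
    started_at=datetime(2019, 10, 14, 12, 5),
    activity="working from the break room during lunch, near the microwave",
)
report.notes.append("recurs weekdays around noon; fine at their desk")
```

Even two fields like these turn “the wireless is flaky” into something you can correlate with roaming events, RF interference, or upstream outages.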

When you think about the insane number of things that you can start troubleshooting in any scenario, you might just be tempted to give up. But, as I mentioned almost a decade ago, you have to start isolating factors before you can really fix issues. As Keith mentioned in his talk, it’s way too easy to pick a rabbit hole issue and start troubleshooting to make it fit instead of actually fixing what’s ailing the user. Just like the scientific method, you have to make your conclusions fit the data instead of making the data fit your hypothesis. If you are dead set on moving an AP or turning off a feature you’re going to be disappointed when that doesn’t actually fix what the users are trying to tell you is wrong.


Tom’s Take

Troubleshooting is the magic that makes technical people look like wizards to regular folks. But it’s not because we have magical powers. Instead, it’s because we know instinctively what to start looking for and what to discard when it becomes apparent that we need to move on to a new issue. That’s what really separates the best of the best: being able to focus on the issue and not on the next solution that may not work. Keith Parsons is one such wizard. Having him speak at Wi-Fi Trek 2019 was a huge success and should really help people understand how to look at these issues and keep them on the right troubleshooting track.

Keynote Hate – Celebrity Edition

We all know by now that I’m not a huge fan of keynotes. While I’ve pulled back in recent years from the all-out snark during industry keynotes, it’s nice to see that friends like Justin Warren (@JPWarren) and Corey Quinn (@QuinnyPig) have stepped up their game. Instead, I try to pull nuggets of importance from a speech designed to rally investors instead of the users. However, there is one thing I really have to stand my ground against.

Celebrity Keynotes.

We’ve seen these a hundred times at dozens of events. After the cheers and adulation of the CEO giving a big speech and again after the technical stuff happens with the CTO or product teams, it’s time to talk about…nothing.

Celebrity keynotes break down into two distinct categories. The first is when your celebrity is actually well-spoken and can write a speech that enthralls the audience. This means they get the stage to talk about whatever they want, like their accomplishments in their career or the charity work they’re pushing this week. I don’t mind these as much because they feel like a real talk that I might want to attend. Generally the celebrity talking about charity or about other things knows how to keep the conversation moving and peppers the speech with anecdotes or amusing tales that keep the audience riveted. These aren’t my favorite speeches but they are miles ahead of the second kind of celebrity keynote.

The Interview.

These. Are. The. Worst. Nothing like a sports star or an actor getting on stage for 45 minutes of forced banter with an interviewer. Often, the person on stage is a C-level person that has a personal relationship with the celebrity and called in a favor to get them to the event. Maybe it’s a chance to talk about their favorite charity or some of the humanitarian work they’re doing. Or maybe the celebrity donated a bunch of their earnings to the interviewer’s pet project.

No matter the reasons, most of these are always the same. A highlight reel of the celebrity in case someone in the audience forgot they played sports ball or invented time travel. Discussion of what they’ve been doing recently or what their life was like after they retired. A quirky game where the celebrity tries to guess what the company does or tries to show they know something about IT. Then the plug for the charity or the fund they really want to talk about. Then a posed picture on stage with lots of smiles as the rank and file shuffle out of the room to the next session.

Why is this so irritating? Well, for one, no one cares what a quarterback thinks about enterprise storage. Most people rarely care about what the CEO thinks about enterprise storage as long as they aren’t going to shut down the product line or sell it to a competitor. Being an actor in a movie doesn’t qualify you to talk about hacking things on the Internet any more than being in Top Gun qualifies you to fly a fighter jet. Forcing “regular” people to talk about things outside their knowledge base is painful at best.

So why do it then? Well, prestige is one thing. Notice how the C-level execs flock to the stage to get pics with the celebrity after the speech? More posters for the power wall in their office. As if having a posed pic with a celebrity you paid to come to your conference makes you friends. Or perhaps it's a chance to get some extra star power later on for a big launch. I can't tell you the number of times that I've made a big IT purchasing decision based on how my favorite actor feels about using merchant silicon over custom ASICs. Wait, I can tell you. Hint: it's a number that rhymes with "Nero".

Getting Your Fix

As I’ve said many times, “Complaining without a solution is whining.” So, how do we fix the celebrity keynote conundrum?

Well, my first suggestion would be to get someone that we actually want to listen to. Instead of a quarterback or an actor, how about we get someone that can teach us something we need to learn? A self-help expert or a writer that's done research into some kind of skill that everyone can find important? Motivational speakers are always a good touch too. Anyone that can make us feel better about ourselves or let us walk away from the presentation with a new skill or idea is especially welcome.

Going back to the earlier storyteller keynotes, these are also a big draw for people. Why? Because people want to be entertained. And who better to entertain than someone that does it for their job? Instead of letting some C-level exec spend another keynote dominating half the conversation, why not let your guest do that instead? Of course, it’s not easy to find these celebrities. And more often than not they cost more in terms of speaker fees or donations. And they may not highlight all the themes of your conference like you’d want with someone guiding them. But I promise your audience will walk away happier and better off.


Tom’s Take

Keynotes are a necessary evil of conferences. You can’t have a big event without some direction from the higher-ups. You need to have some kind of rally where everyone can share their wins and talk about strategy. But you can totally do away with the celebrity keynotes. Instead of grandstanding about how you know someone famous you should do your attendees a favor and help them out somehow. Let them hear a great story or give them some new ideas or skill to help them out. They’ll remember that long after the end of an actor’s career or a sports star’s run in the big leagues.

IT Hero Culture

I’ve written before about rock stars and IT super heroes. We all know or have worked with someone like this in the past. Perhaps we still do have someone in the organization that fits the description. But have you ever stopped to consider how it could be our culture that breeds the very people we don’t want around?

Keeping The Lights On

When’s the last time you got recognition for the network operating smoothly? Unless it was in response to a huge traffic spike or an attack that tried to knock you offline, the answer is probably never or rarely. Despite the fact that networks are hard to build and even harder to operate, we rarely get recognized for keeping the lights on day after day.

It’s not all that uncommon. The accounting department doesn’t get recognized when the books are balanced. The janitorial staff doesn’t get an exceptional call out when the floors are mopped. And the electric company doesn’t get a gold star because they really did keep the lights on. All of these things are examples of expected operation. When we plug something into a power socket, we expect it to work. When we plug a router in, we expect it to work as well. It may take more configuration to get the router working than the electrical outlet, but that’s just because the hard work of the wiring has already been done.

The only time we start to notice things is when they’re outside our expectation. When the accounting department’s books are wrong. When the floors are dirty. When the lights aren’t on. We’re very quick to notice failure. And, often, we have to work very hard to minimize the culture that lays blame for failure. I’ve already talked a lot about things like blameless post-mortems and other ways to attack problems instead of people. Companies are embracing the idea that we need to fix issues with our systems and not shame our people into submission for things they might not have had complete control over.

Put On The Cape

Have you ever thought about what happens in the other direction, though? I can speak from experience because I spent a lot of time in that role. As a senior engineer from a VAR, I was often called upon to ride in and save the day. Maybe it was after some other company had tried to install something and failed. Or perhaps it was after one of my own technicians had created an issue that needed to be resolved. I was ready on my white horse to ride in and save the day.

And it felt nice to be recognized for doing it! Everyone feels a bit of pride when they're the one to fix an issue or get a site back up and running after an outage. Adulation is a much better feeling than shame without a doubt. But it also beats apathy. People don't get those warm fuzzy feelings from just keeping the lights on, after all.

The culture we create that worships those that resolve issues with superhuman skill reinforces the idea that those traits are desirable in engineers. Think about which person you’d rather have working on your network:

  • Engineer A takes two months to plan the cutover and wants to make sure everything goes smoothly before making it happen.
  • Engineer B cuts over with very little planning and then spends three hours of the maintenance window getting all the systems back online after a bug causes an outage. Everything is back up and running before the end of the window.

Almost everyone will say they want Engineer A working for them, right? Planning and methodical reasoning beats a YOLO attitude any day of the week. But who do we recognize as the rockstar with special skills? Probably Engineer B. Whether or not they created their own issue, they're the one that went above and beyond to fix it.

We don’t reward people for producing great Disaster Recovery documentation. We laud them for pulling a 36-hour shift to rebuild everything because there wasn’t a document in the first place. We don’t recognize people that spend an extra day during a wireless site survey to make sure they didn’t miss anything in a warehouse. But we really love the people that come in after-the-fact and spend countless hours fixing it.

Acknowledging Averages

Should we stop thanking people for all their hard work in solving problems? No. Because failure to appreciate true skills in a technical resource will sour them on the job quickly. But, if we truly want to stop the hero worshipping behavior that grows from IT, we have to start acknowledging the people that put in hard work day after day to stay invisible.

We need to give a pat on the back to an engineer that built a good script to upgrade switches. Or to someone that spent a little extra time making sure the site survey report covered everything in detail. We need to help people understand that it’s okay to get your job done and not make a scene. And we have to make sure that we average out the good and the bad when trying to ascertain root cause in outages.

Instead of lauding rock stars for spending 18 hours fixing a routing issue, let’s examine why the issue occurred in the first place. This kind of analysis often happens when it’s a consultant that has to fix the issue since a cost is associated with the fix, but it rarely happens in IT departments in-house. We have to start thinking of the cost of this rock star or white knight behavior as being something akin to money or capital in the environment.


Tom’s Take

Rock star culture and hero worship in IT isn't going to stop tomorrow. It's because we want to recognize the people that do the work. We want to hold up those that go above and beyond as the examples we want to emulate. But we should also be asking hard questions about why it was necessary for there to be a hero in the first place. And we have to be willing to share some of the adulation with those that keep the lights on between disasters that need heroes.

Positioning Policy Properly

Who owns the network policy for your organization? How about the security policy? Identity policy? These sound like easy questions, don't they? The first two are pretty standard. The last generally comes down to one or two different teams depending upon how much Active Directory you have deployed. But have you ever really thought about why?

During Future:NET this week, those poll questions were asked to an audience of advanced networking community members. The answers pretty much fell in line with what I was expecting to see. But then I started to wonder about the reasons behind those decisions. And I realized that in a world full of cloud and DevOps/SecOps/OpsOps people, we need to get away from teams owning policy and have policy owned by a separate team.

Specters of the Past

Where does the networking policy live? Most people will jump right in with a list of networking gear. Port profiles live on switches. Routing tables live on routers. Networking policy is executed in hardware. Even if the policy is programmed somewhere else.

What about security policy? Firewalls are probably the first thing that come to mind. More advanced organizations have a ton of software that scans for security issues. Those policy decisions are dictated by teams that understand the way their tools work. You don’t want someone that doesn’t know how traffic flows through a firewall to be trying to manage that device, right?

Let's consider the identity question. For a multitude of years the identity policy has been owned by the Active Directory (AD) admins. Because identity was always closely tied to the server directory system. Novell (now NetIQ) eDirectory and Microsoft AD were the kings of the hill when it came to identity. Today's world has so much distributed identity that it's been handed to the security teams to help manage. AD doesn't control the VPN concentrator or the cloud-enabled email services all the time. There are identity products specifically designed to aggregate all this information and manage it.

But let's take a step back and ask that important question: why? Why is it that the ownership of a policy must be by a hardware team? Why must the implementors of policy be the owners? The answer is generally that they are the best arbiters of how to implement those policies. The network teams know how to translate applications into ports. Security teams know how to create firewall rules to implement connection needs. But are they really the best people to do this?

Look at modern policy tools designed to “simplify” networking. I’ll use Cisco ACI as an example but VMware NSX certainly qualifies as well. At a very high level, these tools take into account the needs of applications to create connectivity between software and hardware. You create a policy that allows a database to talk to a front-end server, for example. That policy knows what connections need to happen to get through the firewall and also how to handle redundancy to other members of the cluster. The policy is then implemented automatically in the network by ACI or NSX and magically no one needs to touch anything. The hardware just works because policy automatically does the heavy lifting.
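To make the idea concrete, here's a minimal Python sketch of that application-centric model. This is not the ACI or NSX API; the `Policy` class, `render_firewall_rules` function, tier names, and addresses are all hypothetical. The point it illustrates is that an engineer states the intent once ("web may reach db on 5432") and the controller, not a human on an SSH session, renders it into per-device rules:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """One statement of intent between two application tiers."""
    src_tier: str
    dst_tier: str
    port: int

def render_firewall_rules(policy, endpoints):
    """Expand a single intent into the per-host permit rules a
    controller would push down to firewalls and switches."""
    rules = []
    for src in endpoints[policy.src_tier]:
        for dst in endpoints[policy.dst_tier]:
            rules.append(f"permit tcp {src} {dst} eq {policy.port}")
    return rules

# Hypothetical inventory: two web front ends, one database server.
endpoints = {
    "web": ["10.0.1.10", "10.0.1.11"],
    "db": ["10.0.2.20"],
}

policy = Policy("web", "db", 5432)
rules = render_firewall_rules(policy, endpoints)
# One intent expands into a rule per src/dst pair. Adding a cluster
# member means re-rendering the policy, not hand-editing every box.
```

Notice that nothing in the intent names a switch or a firewall. That's the abstraction the territorial argument ignores: the policy decision and the hardware that enforces it are already separate things.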

So let's step back for a moment and discuss this. Why does the networking team need to operate ACI or NSX? Sure, it's because those devices still touch hardware at some point like switches or routers. But we've abstracted the need for anyone to actually connect to a single box or a series of boxes and type in lines of configuration that implement the policy. Why does it need to be owned by that team? You might say something about troubleshooting. That's a common argument: whoever needs to fix it when it breaks is who needs to be the gatekeeper implementing it. But why? Is a network engineer really going to SSH into every switch and correct a bad application tag? Or is that same engineer just going to log into a web console and fix the tag once and propagate that info across the domain?

Ownership of policy isn’t about troubleshooting. It’s about territory. The tug-of-war to classify a device when it needs to be configured is all about collecting and consolidating power in an organization. If I’m the gatekeeper of implementing workloads then you need to pay tribute to me in order to make things happen.

If you don't believe that, ask yourself this: If there was a Routing team and a Switching team in an organization, who would own the routed SVI interface on a layer 3 switch? The switching team has rights because it's on their box. The routing team should own it because it's a layer 3 construct. Both are going to claim it. And both are going to fight over it. And those are teams that do essentially the same job. When you start pulling in the security team or the storage team or the virtualization team you can see how this spirals out of control.

Vision of the Future

Let’s change the argument. Instead of assigning policy to the proper hardware team, let’s create a team of people focused on policy. Let’s make sure we have proper representation from every hardware stack: Networking, Security, Storage, and Virtualization. Everyone brings their expertise to the team for the purpose of making policy interactions better.

Now, when someone needs to roll out a new application, the policy team owns that decision tree. The Policy Team can have a meeting about which hardware is affected. Maybe we need to touch the firewall, two routers, a switch, and perhaps a SAN somewhere along the way. The team can coordinate the policy changes and propose an implementation plan. If there is a construct like ACI or NSX to automate that deployment then that’s the end of it. The policy is implemented and everything is good. Perhaps some older hardware exists that needs manual configuration of the policy. The Policy Team then contacts the hardware owner to implement the policy needs on those devices. But the Policy Team still owns that policy decision. The hardware team is just working to fulfill a request.

Extend the metaphor past hardware now. Who owns the AWS network when your workloads move to the cloud? Is it still the networking team? They’re the best team to own the network, right? Except there are no switches or routers. They’re all software as far as the instance is concerned. Does that mean your cloud team is now your networking team as well? Moving to the cloud muddies the waters immensely.

Let's step back into the discussion about the Policy Team. Because they own the policy decisions, they also own that policy when it changes hardware or location. If those workloads for email or a productivity suite move from on-prem to the cloud then the policy team moves right along with them. Maybe they add a public cloud person to the team to help them interface with AWS but they still own everything. That way, there is no argument about who owns what.

The other beautiful thing about this Policy Team concept is that it also allows the rest of the hardware to behave as a utility in your environment. Because the teams that operate networking or security or storage are just fulfilling requests from the policy team they don’t need to worry about anything other than making their hardware work. They don’t need to get bogged down in policy arguments and territorial disputes. They work on their stuff and everyone stays happy!


Tom’s Take

I know it's a bit of a stretch to think about pulling all of the policy decisions out of the hardware teams and into a separate team. But as we start automating and streamlining the processes we use to implement application policy, the need for it to be owned by a particular hardware team is hard to justify. Cutting down on cross-faction warfare over who gets to be the one to manage the new application policy means enhanced productivity and reduced tension in the workplace. And that can only lead to happy users in the long run. And that's a policy worth implementing.

The Sky is Not Falling For Ekahau

Ekahau Hat (photo courtesy of Sam Clements)

You may have noticed quite a few high profile departures from Ekahau recently. A lot of very visible community members, including Joel Crane (@PotatoFi), Jerry Olla (@JOlla), and Jussi Kiviniemi (@JussiKiviniemi), have all decided to move on. This has generated quite a bit of discussion among the members of the wireless community as to what this really means for the company and the product that is so beloved by so many wireless engineers and architects.

Putting the people aside for a moment, I want to talk about the Ekahau product line specifically. There was an undercurrent of worry in the community about what would happen to Ekahau Site Survey (ESS) and other tools in the absence of the people we’ve seen working on them for so long. I think this tweet from Drew Lentz (@WirelessNerd) best exemplifies that perspective:

So, naturally, I decided to poke back:

That last tweet is where I really want to focus this post.

The More Things Change

Let's think about where Ekahau is with regards to the wireless site survey market right now. With no exaggeration, they are on top and clearly head and shoulders above the rest. What other product out there has the marketshare and mindshare they enjoy? AirMagnet is the former king of the hill but the future for that tool is still in flux with all of the recent movement of the tool between Netscout and now with NetAlly. IBWave is coming up fast but they're still not quite ready to go head-to-head in the same large enterprise space. I rarely hear TamoGraph Site Survey brought up in conversation. And as for NetSpot, they don't check enough boxes for real site survey to even be a strong contender in the enterprise.

So, Ekahau really is the 800lb gorilla of the site survey market. This market is theirs to lose. They have a commanding lead. And speaking to the above tweets from Drew, are they really in danger of losing their customer base after just 12 months? Honestly? I don’t think so. Ekahau has a top-notch offering that works just great today. If there was zero development done on the platform for the next two years it would still be one of the best enterprise site survey tools on the market. How long did AirMagnet flounder under Fluke and still retain the title of “the best” back in the early 2010s?

Here Comes A Challenger

So, if the only real competitor that's up-and-coming to Ekahau right now is IBWave, does that mean this is a market ripe for disruption? I don't think that's the case either. When you look at all the offerings out there, no one is really rushing to build a bigger, better survey tool. You tend to see this in markets where someone has a clear advantage. Without a gap to exploit there is no room for growth. NetSpot gives their tool away so you can't really win on price. IBWave and AirMagnet are fighting near the top so you don't have a way to break in beside them.

What features could you offer that aren’t present in ESS today? You’d have to spend 18-24 months to even build something comparable to what is present in the software today. So, you dedicate resources to build something that is going to be the tool that people wanted to use two years ago? Good luck selling that idea to a VC firm. Investors want return on their money today.

And if you’re a vendor that’s trying to break into the market, why even consider it? Companies focused on building APs and wireless control solutions don’t always play nice with each other. If you’re going to build a tool to help survey your own deployments you’re going to be unconsciously biased against others and give yourself some breaks. You might even bias your survey results in favor of your own products. I’m not saying it would be intentional. But it has been known to happen in the past.

Here’s the other thing to keep in mind: inertia. Remember how we posed this question with the idea that Ekahau wouldn’t improve the product at all? We all know that’s not the case. Sure, there are some pretty big names on that list that aren’t there any more. But those people aren’t all the people at Ekahau. Development teams continue to work on the product roadmap. There are still plans in place to look at new technologies. Nothing stopped because someone left. Even if the only thing the people on the development side of the house did was finish the plans in place there would still be another 12-18 months of new features on the horizon. That means trying to develop a competitor to ESS means developing technology to replace what is going to be outdated by the time you finish!

People Matter

That brings me back to the people. It’s a sad fact that everyone leaves a company sooner or later. Bill Gates left Microsoft. Steve Jobs left Apple. You can’t count on people being around forever. You have to plan for their departure and hope that, even if you did catch lightning in a bottle, you have to find a way to do it again.

I’m proud to see some of the people that Ekahau has picked up in the last few months. Folks like Shamree Howard (@Shamree_Howard) and Stew Goumans (@WirelessStew) are going to go a long way to keeping the community engagement alive that Ekahau is so known for. There will be others that are brought in to fill the shoes of those that have left. And don’t forget that for every face we see publicly in the community there is an army of development people behind the scenes working diligently on the tools. They may not be the people that we always associate with the brand but they will try hard to put their own stamp on things. Just remember that we have to be patient and let them grow into their role. They have a lot to live up to, so give them the chance. It may take more than 12 months for them to really understand what they got themselves into.


Tom’s Take

No company goes out of business overnight without massive problems under the hood. Even the biggest corporate failures of the last 40 years took a long time to unfold. I don’t see that happening to Ekahau. Their tools are the best. Their reputation is sterling. And they have a bit of a cushion of goodwill to get the next release right. And there will be a next release. And one after that. Because what Ekahau is doing isn’t so much scaling the mountain they climbed to unseat AirMagnet. It’s proving they can keep going no matter what.