Fast Friday Thoughts – Networking Field Day 21

This week has been completely full at Networking Field Day 21, with lots of great presentations. As usual, I wanted to throw out some quick thoughts that are more observations than conclusions, and to set up some topics for deeper discussion at a later time.

  • SD-WAN isn’t just a thing for branches. It’s an application-focused solution now. We’ve moved past the purely technical reasons for implementation and the push toward “getting rid of MPLS” and instead realized that the quality of service and other aspects of the software allow us to do more. That’s especially true for cloud initiatives. If you can guarantee QoS through your SD-WAN setup, you’re light years ahead of where we’ve been.
  • Automation isn’t just figuring out how to make hardware and software do things really fast. It’s also about helping humans understand which things need to be done fast and automatically and which things don’t. For all the amazing stuff that you can do with scripting and orchestration there are still tasks that should be done manually. And there are plenty of people problems in the process. Really smart companies that want to solve these issues should stop focusing on using technology to eliminate the people and instead learn how to integrate the people into the process.
  • If you’re hoping to own the entire stack from top to bottom, you’re going to end up disappointed. Customers aren’t looking for a mediocre solution from a single vendor. They’re not even looking for a great solution from a couple. They want the best, and they’re not afraid to go wherever they need to get it. And when you realize that more and more engineering and architecture talent is already focused on integration, you’ll see that customers have no issue making your solution work with someone else’s instead of counting on you to integrate your platforms together at some point in the future.

Tom’s Take

Networking Field Day is always a great time for me to talk to some of the best people in the world and hear from some of the greatest companies out there about what’s coming down the pipeline. The world of networking is changing even more than it has in the last few software-defined years.

Positioning Policy Properly

Who owns the network policy for your organization? How about the security policy? Identity policy? They sound like easy questions, don’t they? The first two are pretty standard. The last generally comes down to one or two different teams depending upon how much Active Directory you have deployed. But have you ever really thought about why?

During Future:NET this week, those poll questions were put to an audience of advanced networking community members. The answers pretty much fell in line with what I was expecting to see. But then I started to wonder about the reasons behind those decisions. And I realized that in a world full of cloud and DevOps/SecOps/OpsOps people, we need to get away from hardware teams owning policy and instead have policy owned by a dedicated, separate team.

Specters of the Past

Where does the networking policy live? Most people will jump right in with a list of networking gear. Port profiles live on switches. Routing tables live on routers. Networking policy is executed in hardware. Even if the policy is programmed somewhere else.

What about security policy? Firewalls are probably the first things that come to mind. More advanced organizations have a ton of software that scans for security issues. Those policy decisions are dictated by teams that understand the way their tools work. You don’t want someone that doesn’t know how traffic flows through a firewall trying to manage that device, right?

Let’s consider the identity question. For many years the identity policy has been owned by the Active Directory (AD) admins, because identity was always closely tied to the server directory system. Novell (now NetIQ) eDirectory and Microsoft AD were the kings of the hill when it came to identity. Today’s world has so much distributed identity that it’s been handed to the security teams to help manage. AD doesn’t control the VPN concentrator or the cloud-enabled email services all the time. There are identity products specifically designed to aggregate all this information and manage it.

But let’s take a step back and ask that important question: why? Why is it that the ownership of a policy must rest with a hardware team? Why must the implementors of policy be its owners? The answer is generally that they are the best arbiters of how to implement those policies. The network teams know how to translate applications into ports. Security teams know how to create firewall rules to implement connection needs. But are they really the best people to do this?

Look at modern policy tools designed to “simplify” networking. I’ll use Cisco ACI as an example but VMware NSX certainly qualifies as well. At a very high level, these tools take into account the needs of applications to create connectivity between software and hardware. You create a policy that allows a database to talk to a front-end server, for example. That policy knows what connections need to happen to get through the firewall and also how to handle redundancy to other members of the cluster. The policy is then implemented automatically in the network by ACI or NSX and magically no one needs to touch anything. The hardware just works because policy automatically does the heavy lifting.
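To make that idea concrete, here’s a minimal sketch of what application-centric policy looks like once you strip away the vendor specifics. To be clear, the Policy and Rule classes and the endpoint group names below are purely illustrative, not actual ACI contract or NSX API objects:

```python
# A hypothetical, vendor-neutral model of application-centric policy.
# Nothing here is a real ACI or NSX object; it only illustrates the idea
# that policy describes who talks to whom, not which boxes get configured.
from dataclasses import dataclass, field

@dataclass
class Rule:
    protocol: str  # e.g. "tcp"
    port: int      # destination port the consumer may reach

@dataclass
class Policy:
    name: str
    consumer: str            # the endpoint group that opens connections
    provider: str            # the endpoint group that answers them
    rules: list[Rule] = field(default_factory=list)

# Intent: the web front end may reach the database tier on TCP/3306 only.
web_to_db = Policy(
    name="web-to-db",
    consumer="epg-web-frontend",
    provider="epg-database",
    rules=[Rule(protocol="tcp", port=3306)],
)

def render(policy: Policy) -> list[str]:
    """Stand-in for the controller: turn intent into device-level rules."""
    return [
        f"permit {r.protocol} {policy.consumer} to {policy.provider} port {r.port}"
        for r in policy.rules
    ]

print(render(web_to_db))
```

The point of the sketch is that nothing in the policy mentions a switch, a router, or a firewall. The controller owns the translation, which is exactly why the policy itself doesn’t have to be owned by the team that owns the hardware.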

So let’s step back for a moment and discuss this. Why does the networking team need to operate ACI or NSX? Sure, it’s because those tools still touch hardware at some point, like switches or routers. But we’ve abstracted away the need for anyone to actually connect to a single box or a series of boxes and type in lines of configuration that implement the policy. Why does it need to be owned by that team? You might say something about troubleshooting. That’s a common argument: whoever needs to fix it when it breaks should be the gatekeeper implementing it. But why? Is a network engineer really going to SSH into every switch and correct a bad application tag? Or is that same engineer just going to log into a web console, fix the tag once, and propagate that info across the domain?

Ownership of policy isn’t about troubleshooting. It’s about territory. The tug-of-war to classify a device when it needs to be configured is all about collecting and consolidating power in an organization. If I’m the gatekeeper of implementing workloads then you need to pay tribute to me in order to make things happen.

If you don’t believe that, ask yourself this: if there were a Routing team and a Switching team in an organization, who would own the routed SVI interface on a layer 3 switch? The switching team has rights because it’s on their box. The routing team should own it because it’s a layer 3 construct. Both are going to claim it. And both are going to fight over it. And those are teams that do essentially the same job. When you start pulling in the security team or the storage team or the virtualization team, you can see how this spirals out of control.

Vision of the Future

Let’s change the argument. Instead of assigning policy to the proper hardware team, let’s create a team of people focused on policy. Let’s make sure we have proper representation from every hardware stack: Networking, Security, Storage, and Virtualization. Everyone brings their expertise to the team for the purpose of making policy interactions better.

Now, when someone needs to roll out a new application, the policy team owns that decision tree. The Policy Team can have a meeting about which hardware is affected. Maybe we need to touch the firewall, two routers, a switch, and perhaps a SAN somewhere along the way. The team can coordinate the policy changes and propose an implementation plan. If there is a construct like ACI or NSX to automate that deployment then that’s the end of it. The policy is implemented and everything is good. Perhaps some older hardware exists that needs manual configuration of the policy. The Policy Team then contacts the hardware owner to implement the policy needs on those devices. But the Policy Team still owns that policy decision. The hardware team is just working to fulfill a request.

Extend the metaphor past hardware now. Who owns the AWS network when your workloads move to the cloud? Is it still the networking team? They’re the best team to own the network, right? Except there are no switches or routers. They’re all software as far as the instance is concerned. Does that mean your cloud team is now your networking team as well? Moving to the cloud muddies the waters immensely.

Let’s step back into the discussion about the Policy Team. Because they own the policy decisions, they also own that policy when it changes hardware or location. If the workloads for email or a productivity suite move from on-prem to the cloud, then the policy team moves right along with them. Maybe they add a public cloud person to the team to help them interface with AWS, but they still own everything. That way, there is no argument about who owns what.

The other beautiful thing about this Policy Team concept is that it also allows the rest of the hardware to behave as a utility in your environment. Because the teams that operate networking or security or storage are just fulfilling requests from the policy team they don’t need to worry about anything other than making their hardware work. They don’t need to get bogged down in policy arguments and territorial disputes. They work on their stuff and everyone stays happy!


Tom’s Take

I know it’s a bit of a stretch to think about pulling all of the policy decisions out of the hardware teams and into a separate team. But as we start automating and streamlining the processes we use to implement application policy, the need for it to be owned by a particular hardware team is hard to justify. Cutting down on cross-faction warfare over who gets to manage the new application policy means enhanced productivity and reduced tension in the workplace. And that can only lead to happy users in the long run. And that’s a policy worth implementing.

The Development of DevNet’s Future

You’re probably familiar with Cisco DevNet. If not, DevNet is the place where Cisco has embraced outreach to the developer community building for software-defined networking (SDN). Though initially cautious about getting into the software developer community, Cisco has settled into its new role and really opened up to help networking professionals embrace the new software normal in networking. But where is DevNet going to go from here?

Humble Beginnings

DevNet wasn’t always the darling of Cisco’s offerings. I can remember sitting in on some of the first discussions around Cisco OnePK and thinking to myself, “This is never going to work.”

My hesitation with Cisco’s first attempts to focus on software platforms came from two places. The first was what I saw as Cisco trying to figure out how to extend the platforms to include some programmability. It was more about saying they could do software and less about making that software easy to use or program against. The second was the lack of a place to store all of this software knowledge. Programmers and developers are a fickle lot, and you have to have a repository where they can get access to the pieces they need.

DevNet was the place that Cisco should have built from the start. It was a way to get people excited and involved in the process. But it wasn’t for everyone at first. If you didn’t speak developer, you were going to feel lost, even if you were completely fluent in networking and knew what you wanted to accomplish, just not how to get there. DevNet started off as the place to let the curious learn how to combine networking and programming.

The Ascent

DevNet really came into its own about three years ago. I use that timeline because that’s when I first heard that people wanted to spend more time at Cisco Live in the DevNet Zone learning programming and other techniques and less time in traditional sessions. Considering the long history of Cisco Live, that’s an impressive feat.

More importantly, DevNet changed the conversation for professionals. Instead of just being for the curious, DevNet became a place where anyone could go and find the information they needed. It became a resource, not just a playground. Instead of poking around and playing with things, it became a place to go and figure things out, or a place to learn more about a new technology that you wanted to implement, like automation. If the regular sessions at Cisco Live were what you had to learn, DevNet was where you wanted to go and learn.

Susie Wee (@SusieWee) deserves all the credit in the world here. She has seen what the developer community needs to thrive inside of Cisco and she’s delivered it. She’s the kind of ambassador that can go between the various Cisco business units (BUs) and foster the kind of attitudes that people need to have to succeed. It’s no longer about turf wars or fiefdoms. Instead, it’s about leveraging a common platform for developers and networkers alike to find a common ground to build from. But even that’s not enough to complete the vision.

Narrow of Purpose, Wide of Vision

During Cisco Live 2019, I talked quite a bit with Susie and her team. And one of the things that struck me from our conversations was not how DevNet was an open and amazing place, or how they were adding sessions as fast as they could find instructors. It was that so many people weren’t taking advantage of it. That’s when I realized that DevNet needs to shift its focus. Instead of just providing a place for networking people to learn, they’re going to have to go on the offensive.

DevNet needs to enhance and increase their outreach programs. Being a static resource is fine when your audience is eager to learn and looking for answers. But those people have already flocked to the DevNet banner. For things to grow, DevNet needs to pull the laggards along. The people who think automation is just a fad. Or that SDN is in the Trough of Disillusionment from a pretty Gartner graphic. DevNet has momentum, and soon will have the certification program needed to help networking people show off their transformation to developers.


Tom’s Take

For DevNet to really succeed, they need to be grabbing people by the collar and dragging them to the new reality of networking. It’s not enough to give people a place to do research on nice-to-have projects. You’re going to have to get people engaged and motivated. That means committing resources to entry-level outreach. Maybe even building a DevNet Academy similar to the Cisco Academy. But it has to happen. Because the people that aren’t already at DevNet aren’t going to get there on their own. They need a push (or a pull) to find out what they don’t know that they don’t know.

OpenConfig and Wi-Fi – The Winning Combo

Wireless isn’t easy by any stretch of the imagination. Most people fixate on the spectrum analysis part of the equation when they think about how hard wireless is. But there are many other moving parts in the whole architecture that make it difficult to manage and maintain. Not the least of which is how the devices talk to each other.

This week at Aruba Atmosphere 2019, I had the opportunity to moderate a panel of wireless and security experts for Mobility Field Day Exclusive. It was a fun discussion, as you can see from the above video. As the moderator, I didn’t really get a chance to explain my thoughts on OpenConfig, but I figured now would be a great time to jump in with some color on my side of the conversation.

Yin and YANG

One of the most exciting ideas behind OpenConfig for wireless people should be the common YANG data models. They mean you can use NETCONF as a common programming interface against specific YANG models, with no more fumbling around to remember esoteric commands. You just tell the system what you want it to do and the rest is easy.

As outlined in the video, this has a huge impact on trying to keep configurations standard across different types of APs. Imagine the nightmare of trying to configure power settings or radio thresholds with three or more AP manufacturers in your building. Now, imagine being able to do it across your building, or dozens of others, with a few simple commands and some programming know-how. Doesn’t seem quite as daunting now, does it? It’s easy because you’re speaking the same language across all those APs.
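Here’s a hedged sketch of what that looks like in practice. The ncclient library and its connect/edit_config calls are real, but the XML below is a simplified approximation of the OpenConfig access-points model, so treat the exact element names, namespace, and host details as assumptions:

```python
# Push a common radio setting over NETCONF using an OpenConfig-style model.
# The XML payload is a simplified approximation of openconfig-access-points;
# verify the actual paths against the published YANG before relying on it.
from ncclient import manager

CONFIG = """
<config>
  <access-points xmlns="http://openconfig.net/yang/wifi/access-points">
    <access-point>
      <hostname>ap-bldg1-floor2</hostname>
      <radios>
        <radio>
          <id>0</id>
          <config>
            <transmit-power>14</transmit-power>
          </config>
        </radio>
      </radios>
    </access-point>
  </access-points>
</config>
"""

# The same call works regardless of who built the AP, as long as the
# vendor implements the model. Host and credentials are placeholders.
with manager.connect(
    host="ap-controller.example.com",
    port=830,
    username="admin",
    password="secret",
    hostkey_verify=False,
) as m:
    m.edit_config(target="running", config=CONFIG)
```

Notice that the vendor-specific CLI syntax disappears entirely; the script only knows about the model.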

So, what if you don’t care, as Richard McIntosh (@802TopHat) points out? What if your vendor doesn’t support OpenConfig? Well, that’s fine. Not everyone has to support it today. But if you work on building a model system and setting up the automation and API interfaces, are you just going to throw it all out the window during your refresh cycle because the new APs that you’re buying don’t support OpenConfig? Or is the need for OpenConfig going to be a huge driver for you and part of the selection process?

Companies are motivated by their customers. If you tell them that they need to develop OpenConfig for their devices, they will do it because they run the risk of losing sales. And if the industry moves toward adopting a standard northbound API, what happens to those that get left out in the cold after a few missed refresh cycles? I bet they’ll quickly realize that the lost opportunities more than cover the development costs of supporting OpenConfig.

Telemetry Short-Cuts

The other big piece of OpenConfig and wireless is telemetry. SNMP-based monitoring doesn’t work well in today’s wired networks and it’s downright broken in wireless. There are too many variables out there in the average deployment to be able to account for them with anything other than telemetry. Many vendors are starting to adopt the idea of streaming the data directly to collectors via a subscription model. OpenConfig makes this easy with the ability to subscribe to exactly the data you want using the same common models.
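As a sketch of what subscription-based telemetry might look like, here’s an example using pygnmi, a Python gNMI client. The target address, credentials, and the exact OpenConfig path are assumptions on my part; substitute whatever your APs or controller actually expose:

```python
# Subscribe to streamed wireless telemetry instead of polling with SNMP.
# Path, target, and credentials are placeholders for illustration.
from pygnmi.client import gNMIclient

subscription = {
    "subscription": [
        {
            "path": "openconfig-access-points:access-points",
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # nanoseconds, i.e. every 10s
        }
    ],
    "mode": "stream",
    "encoding": "json",
}

with gNMIclient(target=("ap-controller.example.com", 57400),
                username="admin", password="secret",
                insecure=True) as gc:
    # Each update is pushed by the device on the sample interval;
    # the collector never has to walk a MIB.
    for update in gc.subscribe2(subscribe=subscription):
        print(update)
```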

From a manufacturer perspective, this is a huge chance to get into telemetry and offer it as a differentiator. If you’re not tied to an archaic platform with proprietary data models, you can embrace OpenConfig and deliver a modern telemetry infrastructure that users will want to adopt. And if the radio performance is the same between all of the offerings, telemetry could be the piece that tips the scales in your favor.

Single-Vendor Isn’t So Single

I remember doing a deployment for a wireless system once that was “state of the art” when we put it in. I had my challenges, but I made everything work well and the customer was happy. Until a month later, when the supporting vendor announced they were buying a competing company and using that company’s product as their offering going forward. My customer was upset, naturally, but so was I. I’d spent a lot of time working out how to build and deploy that system, and now it was on the scrap heap.

It’s even worse when you keep buying from a single vendor and suddenly find that the new products don’t quite conform to the same syntax or capabilities. Maybe the new model of router or AP has a new board that is 95% compatible with the old one, except for that one command you use all the time.

OpenConfig can change that. Because the capabilities of the device have to be outlined you can easily figure out if there are any missing parts and pieces. You can also be sure that your provisioning scripts and programs aren’t going to break or cause problems because a critical command was deprecated. And since you can keep the models around for old hardware as well as new you aren’t going to be duplicating efforts everywhere trying to keep things moving between the platforms.
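Capability discovery is the mechanism that makes this possible. As a small sketch (again with placeholder host details), a NETCONF session advertises the models a device supports, so a provisioning script can check before it acts:

```python
# Check which OpenConfig wifi models a device advertises before
# running any provisioning logic against it. Host details are placeholders.
from ncclient import manager

with manager.connect(host="ap1.example.com", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    wifi_models = [cap for cap in m.server_capabilities
                   if "openconfig" in cap and "wifi" in cap]

if wifi_models:
    for cap in wifi_models:
        print("supported:", cap)
else:
    print("No OpenConfig wifi models advertised; fall back to older tooling")
```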


Tom’s Take

OpenConfig is a great idea for any system that has distributed components. Even if it never takes off in Wi-Fi, it will force the manufacturers to do a bit better job of documenting their capabilities and making them easy to consume via API. And anything that exposes more functionality to be consumed by automation and programmability is a good thing indeed.

The Long And Winding Network Road

How do you see your network? Odds are good it looks like a big collection of devices and protocols that you use to connect everything. It doesn’t matter what those devices are. They’re just another source of packets that you have to deal with. Sometimes those devices are more needy than others. Maybe it’s a phone server that needs QoS. Or a storage device that needs a dedicated transport to guarantee that nothing is lost.

But what does the network look like to the developers who use it?

Work Is A Highway

When is the last time you thought about how the network looks to people? Here’s a thought exercise for you:

Think about a highway. Think about all the engineering that goes into building a highway. How many companies are involved in building it. How many resources are required. Now, think of that every time you want to go to the store.

It’s a bit overwhelming. There are dozens, if not hundreds, of companies that are dedicated to building highways and other surface streets. Perhaps they are architects or construction crews or even just maintenance workers. But all of them have a function. All for the sake of letting us drive on roads to get places. To us, the road isn’t the primary thing. It’s just a way to go somewhere that we want to be. In fact, the only time we really notice the road is when it is in disrepair or under construction. We only see the road when it impacts our ability to do the things it enables.

Now, think about the network. Networking professionals spend their entire careers building bigger, faster networks. We take weeks to decide how best to handle routing decisions or agonize over which routing protocols are necessary to make things work the way we want. We make it our mission to build something that will stand the test of time and be the Eighth Wonder of the World. At least until it’s time to refresh it again for slightly faster hardware.

The difference between these two examples is the way that the creators approach their creation. Highway workers may be proud of their creation but they don’t spend hours each day extolling the virtues of the asphalt they used or the creative way they routed a particular curve. They don’t demand respect from drivers every time someone jumps on the highway to go to the department store.

Networking people have a visibility problem. They’re too close to their creation to have the vision to understand that it’s just another road to developers. Developers spend all their time worrying about memory allocation and UI design. They don’t care if the network actually works at 10GbE or 100GbE. They want a service that transports packets back and forth to their destination.

The Old New Network

We’ve had discussions in the last few years about everything under the sun that is designed to make networking easier. VXLAN, NFV, Service Mesh, Analytics, ZTP, and on and on. But these things don’t make networking easier for users. They make networking easier for networking professionals. All of these constructs are really just designed to help us do our jobs a little faster and make things work just a little bit better than they did before.

Imagine all the work that goes into electrical power generation. Imagine the level of automation and monitoring necessary to make sure that power gets from the generation point to your house. It’s impressive. And yet, you don’t know anything about it. It’s all hidden away somewhere unimpressive. You don’t need to describe the operation of a transformer to be able to plug in a toaster. And no matter how much that technology changes it doesn’t impact your life until the power goes out.

Networking needs to be a utility. It needs to move away from the old methods of worrying about how we create VLANs and routing protocols and instead focus on disappearing, just like the power grid. We should be proud of what we build. But we shouldn’t let that pride become hubris about what we do. Networking professionals are like highway workers or electrical company employees. We work hard behind the scenes to provide transport for services. The cloud has changed the way we look at the destination for those services. And it’s high time we examine our role in things as well.


Tom’s Take

Telco workers. COBOL programmers. Data entry specialists. All of these people used to be the kings and queens of their fields. They were the people with the respect of hundreds. They were the gatekeepers because their technology and job roles were critical. Until they weren’t any more. Networking is getting there quickly. We’ve been so focused on making our job easy to do that we’ve missed the point. We need to be invisible. Just like a well-built road or a functioning electrical grid. We are not the goal of infrastructure. We’re just a part of it. And the sooner we realize that and get out of our own way, the sooner we’ll find that the world is a much better place for everyone involved in IT, from developers to users.

Are We Seeing SD-WAN Washing?

You may have seen a tweet from me last week referencing a news story that Fortinet was now in the SD-WAN market:

It came as a shock to me because Fortinet wasn’t even on my radar as an SD-WAN vendor. I knew they were doing brisk business in the firewall and security space, but SD-WAN? What does it really mean?

SD Boxes

Fortinet’s claim to be a player in the SD-WAN space brings the number of vendors doing SD-WAN to well over 50. That’s a lot of players. But how did they come out of left field to land a deal rumored to be over a million dollars in a space they weren’t even really playing in six months ago?

Fortinet makes edge firewalls. They make decent edge firewalls. When I used to work for a VAR we used them quite a bit. We even used their smaller units as remote appliances to allow us to connect to remote networks and do managed maintenance services. At no time during that whole engagement did I ever consider them to be anything other than a firewall.

Fast forward to 2018. Fortinet is still selling firewalls. Their website still focuses on security as the primary driver for their lines of business. They do talk about SD-WAN and have a section for it with links to whitepapers going all the way back to May. They even have a contributed article for SDxCentral from back in February. However, from that far back the article reads more like a security company saying their secure endpoints could be considered SD-WAN.

This reminds me of stories of Oracle counting database licenses as cloud licenses so they could claim to be the fourth largest cloud provider. Or if a company suddenly decided that every box they sold counted as an IPS because it had a function that could be enabled for a fee. The numbers look great when you start counting them creatively but they’re almost always a bit of a fib.

Part Time Job

Imagine if Cisco suddenly decided to start counting ASA firewalls as container engines because of a software update that allowed you to run Kubernetes on the box. People would lose their minds. Because no one buys an ASA to run containers. So for a company like Cisco to count them as part of a container deployment would be absurd.

The same can be said for any company that has a line of business focused on one specific area and then suddenly decides that the same line of business can be double-counted for a new emerging market. It may very well be the case that Fortinet has a huge deployment of SD-WAN devices that customers are very happy with. But if those edge devices were originally sold as firewalls or UTM devices that just so happen to be able to run SD-WAN software, it shouldn’t really count, should it? If a customer thought they were buying a firewall, they wouldn’t really believe it was actually an SD-WAN router.

The problem with this math is that everything gets inflated. Maybe those SD-WAN edge devices are dedicated. But if they run Fortinet’s security suite, are they also being counted in the UTM numbers? Is Cisco going to start counting every ISR sold in the last five years as a Viptela deployment after the news this week that Viptela software can run on all of them? Where exactly are we going to draw the line? Is it fair to say that every x86 chip sold in the last 10 years should count toward a VMware license because you could conceivably run a hypervisor on it? It sounds ridiculous when you put it like that, but only because of the timelines involved. Some crazier ideas have been put forward in the past.

The only way this whole thing really works is if the devices are dedicated to their function and are only counted for the purpose they were installed and configured for. You shouldn’t get to add a UTM firewall to both the security side and the SD-WAN side. Cisco routers should only count as traditional layer 3 or SD-WAN, not both. If you try to push the envelope to put up big numbers designed to wow potential customers and get a seat at the big table, you need to be ready to defend those numbers when people ask tough questions about the math behind them.


Tom’s Take

If you had told me last year that Fortinet would sell a million dollars worth of SD-WAN in one deal, I’d ask you who they bought to get that expertise. Today, it appears they are content with saying their UTM boxes with a central controller count as SD-WAN. I’d love to put them up against Viptela or VeloCloud or even CloudGenix and see what kind of advanced feature sets they produce. If it’s merely a WAN aggregation box with some central control and a security suite I don’t think it’s fair to call it true SD-WAN. Just a rinse and repeat of some washed up marketing ideas.

Is Training The Enemy of Progress?

Peyton Maynard-Koran was the keynote speaker at InteropITX this year. If you want to catch the video, check this out:

Readers of my blog may remember that Peyton and I don’t see eye-to-eye on a few things. Last year I even wrote up some thoughts about vendors and VARs that were a direct counterpoint to many of the things that have been said. It has even gone further with a post from Greg Ferro (@EtherealMind) about the intelligence level of the average enterprise IT customer. I want to take a few moments and explore one piece of this puzzle that keeps being brought up: You.

Protein Robots

You are a critical piece of the IT puzzle. Why? You’re a thinking person. You can intuit facts and extrapolate cause from nothing. You are NI – natural intelligence. There’s an entire industry of programmers chasing what you have. They are trying to build it into everything that blinks or runs code. The first time that any company has a real breakthrough in true artificial intelligence (AI) beyond complicated regression models will be a watershed day for us all.

However, you are also the problem. You have requirements. You need a salary. You need vacation time. You need benefits and work/life balance to keep your loved ones happy. You sometimes don’t pick up the phone at 3am when the data center blinks out of existence. It’s a challenge to get you what you need while still extracting the value that is needed from you.

Another awesome friend, Andrew von Nagy (@RevolutionWiFi), coined the term “protein robots”. That’s basically what we are. Meatbags. Walking brains that solve the problems that cause BGP to fall over when presented with the wrong routing info or cause wireless signals to dissipate into thin air. We’re a necessary part of the IT equation.

Sure, people are trying to replace us with automation and orchestration. It’s the most common complaint about SDN that I’ve heard to date: automation is going to steal my job. I’ve railed against that for years in my talks. Automation isn’t going to steal your job, but it will get you a better one. It will get you a place in the organization to direct and delegate and not debug and destroy. In the end, automation isn’t coming for your job as long as you’re trying to get a better one.

All Aboard The Train!

The unseen force that’s opposing upward mobility is training. In order to get a better job, you need to be properly trained to do it. Maybe it’s a lot of experience from running a network for years. Perhaps it’s a training class you get to go to or a presentation online you can watch. No matter what, you need to have new skills to handle new job responsibilities. Even if you’re breaking new ground in something like AI development, you’re still acquiring new skills along the way. Hopefully, if you’re in a new field, you’re writing them all down for people to follow in your footsteps one day.

However, training is in opposition to what your employer wants for you. It sounds silly, doesn’t it? Yet, we still hear the quote attributed to W. Edwards Deming – “What happens if we train our people and they leave? What happens if we don’t and they stay?” Remember, as we said above you are a protein robot that needs things like time off and benefits. All of those things are seen as an investment in you.

Training is another investment that companies like to tout. When I worked at a VAR, we considered ourselves some of the most highly trained people around. The owner told me when I started that he was “going to put half a million dollars into training me.” When I asked him about that number after five years he told me it felt like he put a kid through college. And that was before my CCIE. The more trained people you have, the easier your job becomes.

But an investment in training can also backfire. Professionals can take that training and put it to use elsewhere. They can go to a different job and make more money. They can refuse to do a job until they are properly trained. The investments that companies make in training often go unrealized relative to the amount of money it costs to make them happen.

It doesn’t help that training prices have skyrocketed. It used to be that I just needed to go down to the local bookstore and find a copy of a CCNA study guide to get certified. I knew I’d reached a new point in my career when I couldn’t buy my books at the bookstore. Instead, I had to seek out the knowledge that I needed elsewhere. And yes, sometimes that knowledge came in the form of bootcamps that cost thousands of dollars. Lucky for me that my employer at the time looked at that investment and said that it was something they would pick up. But I know plenty of folks that can’t get that kind of consideration. Especially when the training budget for the whole department won’t cover one VMware ICM class.

Employers don’t want employees to be too trained. Because you have legs. Because you can get fed up. Because you can leave. The nice thing about making investments in hardware and software is that they’re stuck at a location. A switch is an asset. A license for a feature can’t be transferred. Objects are tied to the company. And their investments can be realized and recovered. Through depreciation, or listing as an asset with competitive advantage, companies can recover the value they put into a physical thing.

Professionals, on the other hand, aren’t as easy to deal with. Sure, you can list a CCIE as an important part of your business. But what happens if they leave? What happens when they decide they need a raise? Or, worse yet, when they need to spend six months studying for a recertification? The time taken to learn things is often not factored into the equation when we discuss how much training costs. Some of my old coworkers outright refused to get certified if they had to study on their own time. They didn’t want their free non-work time taken up by reading over MCSE books or CCNA guides. Likewise, the company didn’t want to waste billable hours from someone not providing value. It was a huge catch-22.

Running In Place

Your value is in your skillset at the company you work for. They derive that value from you. So, they want you to stay where you are. They want you trained just enough to do your job. They don’t want you to have your own skillset that could be valuable before they get their value from you. And they definitely don’t want you to take your skillset to a competitor. How can you fix that?

  1. Don’t rely on your company to pay for your training. There are a lot of great resources out there that you can use to learn without needing to drop big bucks for a bootcamp. Use bootcamps to solidify your learning after the fact. Honestly, if you’re in a bootcamp to learn something, you’re in the wrong class. Read blogs. Buy books from Amazon. Get your skills the old-fashioned way.
  2. Be ready to invest time. Your company doesn’t want you using their billable time for learning. So that means you are going to make an investment instead. The best part is that even an hour of studying instead of binge watching another episode of House is time well-spent on getting another important skill. And if it happened on your own time, you’re not going to have to pay that back.
  3. Be ready to be uncomfortable. It’s going to happen. You’re going to feel lost when you learn something new. You’re going to make mistakes while you’re practicing it. And, honestly, you’re going to feel uncomfortable going to your boss to ask for more money once you’re really good at what you’re doing. If you’re totally comfortable when learning something new, you’re doing it wrong.

Tom’s Take

Companies want protein robots. They want workers that give 125% at all times and can offer a wide variety of skills. They want compliant workers that never want to go anywhere else. Don’t be that robot. Push back. Learn what you can on your time. Be an asset but not an immobile one. You need to know more to escape the SDN Langoliers that are trying to eat your job. That means you need to be on the move to get where you need to be. And if you sit still you risk becoming a permanent asset. Just like the hardware you work on.