Fast Friday – Keeping Up With The Times

We’re at the end of the 2010s. It’s almost time to start making posts about 2020 and somehow working vision or eyesight into the theme so you can look just like everyone else. But I want to look back for a moment on how much things have changed for networking in the last ten years.

It’s true that networking wasn’t too exciting for most of the 2000s. Things got faster and more complicated. Nothing really got better except the bottom lines of people pushing bigger hardware. And that’s honestly how we liked it. Because the idea that we were all special people that needed to be at the top of our game to get things done resonated with us. We weren’t just mechanics. We were the automobile designers of the future!

But if there’s something that the mobile revolution of the late 2000s taught us, it was that operators don’t need to be programmers to enjoy using technology. Likewise, enterprise users don’t need to be CCIEs or VCDXs to make things work. That’s the real secret behind all of the advances in networking technology in the 2010s. We’re not making networking harder any more. We’re not adding complexity for the sake of making our lives more important.

The rapid pace of change that we’ve had over the last ten years is the driver for so many technologies that are changing the role of networking engineers. Automation, orchestration, and software-driven networking aren’t just fads. They’re necessities. That’s because of the focus on new features, not in spite of them. I can remember administering CallManager years ago and not realizing what half of the checkboxes on a line appearance did. That’s not helping things.

Complexity Closet

There are those that would say that what we’re doing is just hiding the complexity behind another layer of abstraction, which is a favorite saying of Russ White. I’d argue that we’re not hiding the complexity as much as we’re putting it back where it belongs – out of sight. We don’t need the added complexity for most operations. This flies in the face of what we want networking engineers to know. If you want to be part of the club you have to know every LSA type and how they interact and what happens during an OSPF DR election. That’s the barrier for entry, right?

Except not everyone needs to know that stuff. They need to know what it looks like when routing is down. More likely, they just need to recognize more basic stuff like DNS being down or something being wrong in the service provider. The odds are way better that something else is causing the problem somewhere outside of your control. And that means your skills are good for very little once you’ve figured out that the problem is somewhere you can’t help.

Hiding complexity behind summary screens or simple policy decisions isn’t bad. In fact, it tends to keep people from diving down rabbit holes when fixing the problems. How many times have we tried to figure out some complicated timer issue when it was really a client with a tenuous connection or a DNS issue? We want the problems to be complicated so we can flex our knowledge to others when in fact we should be applying Occam’s Razor much earlier in the process. Instead of trying to find the most complicated solution to the problem so we can justify our learning, we should instead try to make it as simple as possible to conserve that energy for a time when it’s really needed.


Tom’s Take

We need to leverage the tools that have been developed to make our lives easier in the 2020s instead of railing against them because they’re upsetting our view of how things should be. Maybe networking in 2010 needed complexity and syntax and command lines. But networking in 2022 might need automated scripts and telemetry to figure things out faster since there are ten times the moving parts. It’s like memorizing phone numbers. It works well when you only need to know seven or eight numbers with a few digits each. But when you need to know several hundred, each with ten or more digits, it’s impossible. Yet I still hear people complain about contact lists or phone books because “they used to be good at memorizing numbers”. Instead of focusing on what we used to be good at, let’s try to keep up with the times and be good at what we need to be good at now and in the future.

The Development of DevNet’s Future

You’re probably familiar with Cisco DevNet. If not, DevNet is the place where Cisco has embraced outreach to the developer community building for software-defined networking (SDN). Though initially cautious about getting into the software developer community, Cisco has grown into their new role and really opened up to help networking professionals embrace the new software normal in networking. But where is DevNet going to go from here?

Humble Beginnings

DevNet wasn’t always the darling of Cisco’s offerings. I can remember sitting in on some of the first discussions around Cisco onePK and thinking to myself, “This is never going to work.”

My hesitation with Cisco’s first attempts to focus on software platforms came from two places. The first was what I saw as Cisco trying to figure out how to extend the platforms to include some programmability. It was more about saying they could do software and less about making that software easy to use or program against. The second was the lack of a place to store all of this software knowledge. Programmers and developers are a fickle lot, and you have to have a repository where they can get access to the pieces they need.

DevNet was that place that Cisco should have built from the start. It was a way to get people excited and involved in the process. But it wasn’t for everyone at first. If you didn’t speak developer, you were going to feel lost, even if you were completely fluent in networking and knew what you wanted to accomplish, just not how to get there. DevNet started off as the place to let the curious learn how to combine networking and programming.

The Ascent

DevNet really came into their own about 3 years ago. I use that timeline because that’s when I first heard that people were wanting to spend more time at Cisco Live in the DevNet Zone learning programming and other techniques and less time in traditional sessions. Considering the long history of Cisco Live that’s an impressive feat.

More importantly, DevNet changed the conversation for professionals. Instead of just being for the curious, DevNet became a place where anyone could go and find the information they needed. It became a resource. Not just a playground. Instead of poking around and playing with things it became a place to go and figure things out. Or a place to learn more about a new technology that you wanted to implement, like automation. If the regular sessions at Cisco Live were what you had to learn, DevNet is where you wanted to go and learn.

Susie Wee (@SusieWee) deserves all the credit in the world here. She has seen what the developer community needs to thrive inside of Cisco and she’s delivered it. She’s the kind of ambassador that can go between the various Cisco business units (BUs) and foster the kind of attitudes that people need to have to succeed. It’s no longer about turf wars or fiefdoms. Instead, it’s about leveraging a common platform for developers and networkers alike to find a common ground to build from. But even that’s not enough to complete the vision.

Narrow of Purpose, Wide of Vision

During Cisco Live 2019, I talked quite a bit with Susie and her team. And one of the things that struck me from our conversations was not how DevNet was an open and amazing place. Or how they were adding sessions as fast as they could find instructors. It was that so many people weren’t taking advantage of it. That’s when I realized that DevNet needs to shift their focus. Instead of just providing a place for networking people to learn, they’re going to have to go on the offensive.

DevNet needs to enhance and increase their outreach programs. Being a static resource is fine when your audience is eager to learn and looking for answers. But those people have already flocked to the DevNet banner. For things to grow, DevNet needs to pull the laggards along. The people who think automation is just a fad. Or that SDN is in the Trough of Disillusionment from a pretty Gartner graphic. DevNet has momentum, and soon will have the certification program needed to help networking people show off their transformation to developers.


Tom’s Take

For DevNet to really succeed, they need to be grabbing people by the collar and dragging them to the new reality of networking. It’s not enough to give people a place to do research on nice-to-have projects. You’re going to have to get people engaged and motivated. That means committing resources to entry-level outreach. Maybe even building a DevNet Academy similar to the Cisco Academy. But it has to happen. Because the people that aren’t already at DevNet aren’t going to get there on their own. They need a push (or a pull) to find out what they don’t know that they don’t know.

Automating Documentation

Tedium is the enemy of productivity. The fastest way for a task to not be done is to make it long, boring, and somewhat complicated. People who feel that something is tedious or repetitive are the ones more likely to marginalize a task. And I think I speak for the entire industry when I say that there is no task more tedious and boring than documentation. So how can we fix it?

Tell Me What You Did

I’m not a huge fan of documentation. When I decide on a plan of action, I rarely write it down step-by-step unless I’m trying to train someone. Even then, it looks more like notes with keywords instead of a narrative to follow. It’s a habit that has been borne out of years of firefighting in networks and calls to “do it faster”. The essential items of a task are refined and reduced until all that remains is the work and none of the ancillary items, like documentation.

Based on my previous life as a network engineer, I can honestly say that I’m not alone in this either. My old company made lots of money doing network discovery engagements. Sometimes these came because the previous admins walked out the door with no documentation. Other times, it was simply because the network had changed so much since the last person made any notes that what was going on didn’t resemble anything like what they thought it was supposed to look like.

This happens everywhere. It doesn’t take many instances of a network or systems professional telling themselves, “Oh, I’ll write it down later…” for later to never come. Devices get added, settings get changed, and not one word is ever written down. That’s the kind of chaos that causes disorganization at best and outages at worst. And I doubt there’s any networking pro out there that hasn’t been affected by bad documentation at one time or another.

So, how do we fix documentation? It’s tedious for sure. Requiring it as part of the process just invites people to find ways around it. And good documentation takes time. Is there a way to overcome the lack of time, the lack of requirements, and the tedium, and make documentation something that actually gets done? I think there is. And it requires a little help from process.

Not Too Late To Automate

Automation is a big thing right now. SDN is driving it. Network complexity is practically requiring it. Yet networking professionals are having a hard time embracing it. Why?

In part, networking pros don’t like to spend hours solving a problem that can be done in minutes. If you don’t believe me, watch one of the old SNL Nick Burns sketches. Nick is more likely to tell you to move than tell you how to fix your problem. Likewise, if a network pro is spending four hours writing an automation script that is supposed to execute a change that can be made in 20 minutes, they’re not going to want to do it. It’s just the nature of the job and the desire of the network professional to make every minute count.

So, how can we drive adoption of automation? As it turns out, automating documentation can be a huge driver. Tedious tasks are exactly the thing that scripting and automation were designed to take care of. Instead of focusing on the automation of the task itself, like adding VLANs to a set of switches, focus on the ability of the system to create documentation on the fly from the change.

Let’s walk through an example. In order for documentation to matter, it has to answer the 5 Ws. How can we automate that?

Let’s start with Who. Automation can create documentation saying user Hollingsworth made a change through an automated process. That helps the accounting side of the house figure out the person making changes in the network. If that person is actually a script, the Who can be changed to reflect that it was an automated process called by a person related to a change ticket. That gives everyone the ability to track the changes back to a given problem. And it can all be pulled in without user intervention.

What is also an easy automation task. List the configuration being applied. At first, the system can simply list the configuration to be programmed. But for menial and repetitive tasks like VLAN additions you can program the system with a real description like “Adding VLANs to $Switch to support $ticket”. Those variables can be autopopulated based on the work to be done. Again, we reference a ticket number in order to prove that these changes are coming from somewhere.
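
Here’s a minimal Python sketch of how that kind of description could be autopopulated. The template wording, field names, and sample values are placeholders I’ve made up, not output from any particular automation platform:

    from string import Template

    # Template for the "What" of the change; variables get filled in from the task data.
    description_template = Template("Adding VLANs $vlans to $switch to support $ticket")

    # In a real workflow these values would come from the change ticket and the job itself.
    change = {
        "vlans": "110, 120",
        "switch": "branch-sw-01",
        "ticket": "CHG0042317",
    }

    # Render the documentation line with no user intervention required.
    print(description_template.substitute(change))
    # Adding VLANs 110, 120 to branch-sw-01 to support CHG0042317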

When is also critical. Are these changes happening in a maintenance window? Or did someone check them in in the middle of the day because they won’t cause any problems? (SPOILER ALERT: They will) By requiring a timestamp for changes, you can track which professionals are being cavalier with their change management. You can also find out if someone is getting into the system after hours to cause problems or attempt to compromise things. Even if the cause of the change is “immediately” due to downtime or emergency, knowing why it had to be checked in right away is a clue to finding problems that recur in the network.

Where is a two-pronged question. It’s important to check where the changes are going to be applied. Is it going to be done to all switches in the organization? Or just a set of switches in a remote office? Sanity checking via documentation will keep you from bricking your entire organization in one fell swoop. Likewise, knowing where the change is being checked in from is important. Is a remote office trying to change config on HQ switches? Is a remote engineer dialed in making changes related to an open support case? Is someone from a foreign nation making changes via VPN at 4:30am local time? In every case, you’d really want to know what’s going on before those changes get made.

Why is the one that will trip up everything. If you don’t believe me, I’d like to give you the top two reasons why Windows Server 2003 is shut down and rebooted with the shutdown justification dialog box:

  1. a;lkdjfalkdflasdfkjadlf;kja;d
  2. JUST ****ing SHUT DOWN!!!!!

People don’t like justifying their decisions. Even when I worked for Gateway 2000 on their national help desk, our required call documentation was a bit spotty when it came to justification for changes. Why did you decide to FDISK and reload? Why are you going into the registry to fix the icon colors? Change justification is half of documentation. It gives people something to audit. It gives people a way to look at things and figure out why you started down the path of a particular reasoning for problem solving. It also provides context for you after the fact when you can’t figure out why you did it the way you did.
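
To tie the five Ws together, here’s a rough Python sketch of a change record that could be written automatically at the moment a change is pushed. The field names, ticket format, and log layout are all assumptions for illustration; the point is that none of this requires the engineer to stop and write documentation by hand:

    import getpass
    import json
    import socket
    from datetime import datetime, timezone

    def build_change_record(targets, config_lines, ticket, justification):
        """Assemble a documentation entry alongside the change itself."""
        return {
            "who": getpass.getuser(),                        # Who pushed the change
            "what": config_lines,                            # What was applied
            "when": datetime.now(timezone.utc).isoformat(),  # When it happened
            "where": {
                "targets": targets,                          # Where it was applied
                "source_host": socket.gethostname(),         # Where it was pushed from
            },
            "why": {
                "ticket": ticket,                            # Tie the change to a ticket
                "justification": justification,              # Force a real reason
            },
        }

    record = build_change_record(
        targets=["branch-sw-01", "branch-sw-02"],
        config_lines=["vlan 110", "name GUEST-WIFI"],
        ticket="CHG0042317",
        justification="Add guest VLAN for new branch wireless",
    )

    # Append the record to an audit log that humans (and auditors) can read later.
    with open("change_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")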


Tom’s Take

Automation isn’t going to take away your job. Automation is going to do the jobs you hate doing. It’s going to make your life easier to concentrate on the tasks that need to be done by freeing you from the tasks that should be done and aren’t. If we can make automation document our networks for just six months, I think you’ll find the value in programming things to work this way. I also think you’ll be happier with the level of detail on your network. And once you can prove the value of automating just one task to your teams, I’m sure they’ll see the value of increasing automation all around.

Subscription Defined Networking

Cisco’s big announcement this week ahead of Cisco Live was their new Intent-based Networking push. This new portfolio does include new switching platforms in the form of the Catalyst 9000 series, but the majority of the innovation is coming in the software layer. Articles released so far tout the ability of the network to sense context, provide additional security based on advanced heuristics, and more. But the one thing that seems to be getting little publicity is the way you’re going to be paying for software going forward.

The Bottom Line

Cisco licensing has always been an all-or-nothing affair for the most part. You buy a switch and you have two options – basic L2 switching or everything the switch supports. Routers are similar. Through the early 15.x releases, Cisco routers could be loaded with an advanced image that ran every service imaginable. Those early 15.x releases gave us some attempts at role-based licensing for packet, voice, and security device routers. However, those efforts were rolled back due to customer response.

Shockingly, voice licensing has been the most progressive part of Cisco’s licensing model for a while now. CallManager 4.x didn’t even bother. Hook things up and they work. 5.x through 9.x used Device License Units (DLUs) to help normalize the cost of expensive phones versus their cheaper lobby and break room brethren. But even this model soon gave way to the current Unified Licensing models that attempt to bundle phones with software applications to mimic how people actually communicate in today’s offices.

So where does that leave Cisco? Should they charge for every little thing you could want when you purchase the device? Or should Cisco leave it wide open to the world and give users the right to decide how best to use their software? If John Chambers had still been in charge of Cisco, I know the answer would have been very similar to what we’ve seen in the past. Uncle John hated the idea of software revenue cannibalizing their hardware sales. Like many stalwarts of the IT industry, Chambers believed that hardware was king and software was an afterthought.

Pay As You Go

But Chuck Robbins has different ideas. Alongside the new capabilities of Cisco’s Intuitive Network plan, they have also introduced a software subscription model. Now, if you want to use all these awesome new features for Cisco’s vision of the future of the network, you are going to pay for them. And you’re going to pay every year you use them.

It’s not that radical of a shift in mindset if you look at the market today. Cable subscriptions are going away in favor of specialized subscriptions to specific content. Custom box companies will charge you a monthly fee to ship you random (and not-so-random) items. You can even set up a subscription to buy essential items from Amazon and Walmart and have them shipped to your home regularly.

People don’t mind paying for things that they use regularly. And moving the cost model away from capital expenditure (CapEx) to an operational expenditure (OpEx) model makes all the sense in the world for Cisco. Studies from industry companies like Infinity Research have said that Infrastructure as a Service (IaaS) growth is going to be around 46% over the next 5 years. That growth money is coming from organizations shifting CapEx budget to OpEx budget. For traditional vendors like Cisco, EMC, and Dell, it’s increasingly important for them to capture that budget revenue as it moves into a new pool designed to be spent a month or year at a time instead of once every five to seven years.

The end goal for Cisco is to replace those somewhat frequent hardware expenditures with more regular revenue streams from OpEx budgets. If you’re nodding your head and saying, “That’s pretty obvious…” you are likely from the crowd that couldn’t understand why Cisco kept doubling down on bigger, badder switching during the formative years of SDN. Cisco’s revenue model has always looked a lot like IBM and EMC. They need to sell more boxes more frequently to hit targets. However, SDN is moving the innovation away from the hardware, where Cisco is comfortable, and into the software, where Cisco has struggled as of late.

Software development doesn’t happen in a vacuum. It doesn’t occur because you give away features designed to entice customers into buying a Nexus 9000 instead of a Nexus 6000. Software development only happens when people are paying money for the things you are developing. Sometimes that means that you get bonus features that they figure out in the process of making the main feature. But it surely means that the people focused on making the software want to get it right the first time instead of having to ship endless patches to make it work right eventually. Because if your entire revenue model comes from software, it had better be good software that people want to buy and continue to pay for.


Tom’s Take

I think Chuck Robbins is dragging Cisco into the future kicking and screaming. He’s streamlined the organization by getting rid of the multitude of “pretenders to the throne” and tightening up the rest of the organization from a collection of competing business units into a logically organized group of product lines that can be marketed. The shift toward a forward-looking software strategy built on recurring revenue that isn’t dependent on hardware is the master stroke. If you ever had any doubts about what kind of ship Chuck was going to sail, this is your indicator.

In seven years, we’re not going to be talking about Cisco in the same way we did before. Much like we don’t talk about IBM like we used to. The IBM that exists today bears little resemblance to Tom Watson’s company of the past. I think that the Cisco of the future will bear the same superficial resemblance to John Chambers’ Cisco as well. And that’s for the better.

Cisco and Viptela – The Price of Development Debt

Cisco finally pulled themselves into the SD-WAN market by acquiring Viptela on Monday. Viptela was considered to be one of the leading SD-WAN vendors in the market, if not the leader. That Cisco decided to pick them as an acquisition target isn’t completely surprising. But one might wonder: why?

IWANna New Debt

Cisco’s premier strategy for SD-WAN up until last week was IWAN. This is their catch-all solution designed to take the various component pieces being offered by SD-WAN solutions and replicate them on Cisco hardware. IWAN has served as a vehicle for Cisco to push things like the APIC-EM solution, Cisco ONE licensing, and a variety of other enhanced technologies like NBAR and PfR.

Cisco has packaged these technologies together because they have spent a couple of decades building these protocols up to be the best at what they do in the industry. NBAR was the key to application QoS years ago. PfR and OER were the genesis of Cisco having the ability to intelligently route packets to destinations. These protocols have formed the cornerstone of their platform for many, many years.

So why is IWAN such a mess? If you have the best of breed technology built into a router that makes the packets fly across the Internet at lightning speeds how is it that companies like Viptela were eating Cisco’s lunch in the SD-WAN space? It’s because those same best-of-breed protocols are to blame for the jigsaw puzzle of IWAN.

If you are the product manager for a protocol like NBAR or PfR, you want it to be adopted by as many people as possible. Wide adoption guarantees you’re going to have a job tomorrow or even next year. The people working on EIGRP and OSPF are safe. But if you get left behind technologically, you’re in for rough seas. Just ask the folks that managed LANE. But if you can attach yourself to a movement that’s got some steam, you’re in the driver’s seat.

At the same time, you want your protocol or product to be the best at what it does. And sometimes being the best means you don’t compromise. That’s great when you are the only thing running on the system. But when you’re trying to get protocols to work together to create something bigger, you often find that compromises are not just a good idea, they’re necessary. But how do you handle it when the product manager for NBAR and the product manager for IP SLA get into a screaming match over who is going to blink first?

Using existing protocols and products is a great idea because it means you don’t have to reinvent the wheel every time you design something. But, with that wheel comes the technical debt of development. Given the chance to reuse something that thousands, if not millions, of dollars of R&D has gone into, companies like Cisco will jump at the chance to get some more longevity out of a protocol.

Not Pokey, But Gumby

Now, let’s look at a scrappy startup like Viptela. They have to build their protocols from the ground up. Maybe they have the opportunity to leverage some open source projects or some basic protocol implementations to get off the ground. That means that they are starting from essentially square one. It also means they are starting off with very little technical and development debt.

When Viptela builds their application monitoring stack or their IPSec VPN stack, they aren’t trying to build the best protocol for every possible situation that could ever be encountered by a wide variety of customers. They are just trying to build a protocol that works. And not just a protocol that works on its own. They want a protocol that works with everything else they are building.

When you’re forced to do everything from scratch, you find that you avoid making some of the same choices that you were forced to make years ago. The lack of technical and development debt also means you can take a new direction with things. Don’t want to support pre-shared key IPSec VPNs? Don’t build it into the protocol. Don’t care to have some of the quirks of PfR? Build something different that meets your needs. You have complete control.

Flexibility is why SD-WAN vendors were able to dominate the market for the past two years. They were able to adapt and change quickly because they didn’t need to keep trying to make systems integrate on top of the tech and dev debt they incurred during the product lifecycle. That lets them concentrate on features that customers want, not on trying to integrate features that management has decreed must be included because the product manager was convincing in the last QBR.


Tom’s Take

In the end, the acquisition of Viptela by Cisco was as much about reduction of technical and development debt in their SD-WAN offerings as it was trying to get ahead in the game. They needed something that could be used as-is without the need to rely on any internal development processes. I alluded to this during our Network Collective Off-The-Cuff show. Without the spin-out model available any longer, Cisco is going to have to start making tough decisions to get things like this done. Either those decisions are made via reduction of business units without integration or through larger dollar signs to acquire solutions to provide the cohesion they need.

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer them in terms of their network. Things like network slicing were the first big pieces to be put up before things like orchestration, programmability, and APIs were really brought to the fore. People were trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying the applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean making sure web browsing traffic is sent through local internet links while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.
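
Here’s a small Python sketch of what that application-aware path selection might look like in principle. The application categories and path names are invented for the example; a real SD-WAN platform drives this from its own classification engine and live SLA measurements rather than a static table:

    # Map classified applications to forwarding paths at the branch edge.
    APP_POLICY = {
        "web-browsing":   "local-internet",    # Break out directly at the branch
        "voice":          "encrypted-tunnel",  # Backhaul and monitor SLA closely
        "enterprise-app": "encrypted-tunnel",
    }

    DEFAULT_PATH = "local-internet"

    def select_path(app_name: str) -> str:
        """Return the forwarding path for a classified application."""
        return APP_POLICY.get(app_name, DEFAULT_PATH)

    print(select_path("voice"))         # encrypted-tunnel
    print(select_path("web-browsing"))  # local-internet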

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of technology by increasing the capabilities of these devices while not requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. Instead, you can enjoy the benefits of using SDN and SD-WAN to accelerate your user’s applications.


Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?

Do Network Professionals Need To Be Programmers?

With the advent of software defined networking (SDN) and the move to incorporate automation, orchestration, and extensive programmability into modern network design, it could easily be argued that programming is a must-have skill. Many networking professionals are asking themselves if it’s time to pick up Python, Ruby or some other language to create programs in the network. But is it a necessity?

Interfaces In Your Faces

The move toward using APIs is one of the more striking aspects of SDN that has been picked up quickly. Instead of forcing information to be input via CLI, or information to be collected from the network by scraping that same CLI, APIs have unlocked more power than we ever imagined. RESTful APIs have given nascent programmers the ability to query devices and push configurations without the need to learn cumbersome syntax. The ability to grab this information and feed it to a network management system and analytics platform has extended the capabilities of the systems that support these architectures.
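
For a sense of what that looks like, here’s a minimal Python sketch that pulls interface data over a RESTful API instead of scraping the CLI. It assumes a device that supports RESTCONF (RFC 8040) and the standard ietf-interfaces model; the address and credentials are placeholders:

    import requests

    DEVICE = "https://192.0.2.1"    # placeholder management address
    AUTH = ("admin", "password")    # placeholder credentials
    HEADERS = {"Accept": "application/yang-data+json"}

    url = f"{DEVICE}/restconf/data/ietf-interfaces:interfaces"
    # verify=False only because lab devices often use self-signed certificates.
    resp = requests.get(url, auth=AUTH, headers=HEADERS, verify=False)
    resp.raise_for_status()

    # Structured data comes back ready to hand to a management or analytics
    # system, with no screen scraping required.
    for intf in resp.json()["ietf-interfaces:interfaces"]["interface"]:
        print(intf["name"], intf.get("description", ""))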

The syntaxes that power these new APIs aren’t the copyrighted CLIs that networking professionals spend their waking hours memorizing in excruciating detail. JUNOS and Cisco’s “standard” CLI are as much relics of the past as CatOS. At least, that’s the refrain that comes from both sides of the discussion. The traditional networking professionals hold tight to the access methods they have experience with and can tune like a fine instrument. More progressive networkers argue that standardizing around programming languages is the way to go. Why learn a proprietary access method when Python can do it for you?

Who is right here? Is there a middle ground? Is the issue really about programming? Is the prattle from programming proponents posturing about potential pitfalls in the perfect positioning of professional progress? Or are anti-programmers arguing against attacks, aghast at an area absent archetypical architecture?

Who You Gonna Call?

One clue in this discussion comes from the world of the smartphone. The very first devices that could be called “smartphones” were really very dumb. They were computing devices with strict user interfaces designed to mimic phone functions. Only when the device potential was recognized did phone manufacturers start to realize that things other than address books and phone dialers could be created. Even the initial plans for application development weren’t straightforward. It took time for smartphone developers to understand how to create smartphone apps.

Today, it’s difficult to imagine using a phone without social media, augmented reality, and other important applications. But do you need to be a programmer to use a phone with all these functions? There is a huge market for smartphone apps and a ton of courses that can teach someone how to write apps in very little time. People can create simple apps in their spare time or dedicate themselves to make something truly spectacular. However, users of these phones don’t need to have any specific programming knowledge. Operators can just use their devices and install applications as needed without the requirement to learn Swift or Java or Objective C.

That doesn’t mean that programming isn’t important to the mobile device community. It does mean that programming isn’t a requirement for all mobile device users. Programming is something that can be used to extend the device and provide additional functionality. But no one in an AT&T or Verizon store is going to give an average user a programming test before they sell them the phone.

This, to me, is the argument for network programmability in a nutshell. Network operators aren’t going to learn programming. They don’t need to. Programmers can create software that gathers information and provides interfaces to make configuration changes. But the rank-and-file administrator isn’t going to need to pull out a Java manual to do their job. Instead, they can leverage the experience and intelligence of people that do know how to program in order to extend their network functionality.


Tom’s Take

It seems like this should be a fairly open-and-shut case, but there is a bit of debate yet left to have on the subject. I’m going to be moderating a discussion between Truman Boyes of Bloomberg and Vijay Gill of Salesforce around this topic on April 25th. Will they agree that networking professionals don’t need to be programmers? Will we find a middle ground? Or is there some aspect to this discussion that will surprise us all? I’ll make sure to keep you updated!