Back In The Saddle Of A Horse Of A Different Color

I’ve been asked a few times in the past year if I miss being behind a CLI screen or if I ever get a hankering to configure some networking gear. The answer is a guarded “yes”, but not for the reason you might think.

Type Casting

CCIEs are keyboard jockeys. Well, the R&S folks are, for sure. Every exam has its quirks, but the R&S folks have quirky QWERTY keyboard madness. We spend a lot of time not just learning commands but learning how to input them quickly without typos. So we spend most of our time on the keys and far less time with the mouse poking around in a GUI.

However, the trend in networking has been to move away from these kinds of input methods. Take the new Aruba 8400, for instance. The ArubaOS-CX platform that runs it seems to have been built to require the least amount of keyboard input possible. The whole system runs with an API backend and presents a GUI that is a series of API calls. There is a CLI, but anything that you can do there can easily be replicated elsewhere by some other function.
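To make the idea concrete, here is a minimal sketch of what “a GUI that is a series of API calls” looks like from the client side. The endpoint path and field names below are purely illustrative assumptions, not the actual ArubaOS-CX REST schema; the point is that a VLAN becomes a JSON payload sent to a URL rather than a string of CLI keystrokes.

```python
import json

def build_vlan_request(switch, vlan_id, name):
    """Build the URL and JSON body for a hypothetical REST call
    that creates a VLAN. Path and fields are assumptions for
    illustration, not a real vendor schema."""
    url = f"https://{switch}/rest/v1/system/vlans"
    body = {"id": vlan_id, "name": name, "admin": "up"}
    return url, json.dumps(body)

# The same call a GUI button or a script would make:
url, body = build_vlan_request("10.0.0.1", 100, "engineering")
```

Whether the request comes from a web dashboard, a Python script, or an orchestration tool, the switch sees the same call, which is exactly why the CLI stops being the one true input method.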

Why would a company do this? To eliminate wasted effort. Think to yourself how many times you’ve typed the same series of commands into a switch. VLAN configuration, vty configs, PortFast settings. The list goes on and on. Most of us even have some kind of notepad that we keep the skeleton configs in so we can paste them into a console port to get a switch up and running quickly. That’s what Puppet was designed to replace!
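The notepad-full-of-skeleton-configs workflow can be sketched in a few lines of Python: keep one template, render it per switch, and the retyping disappears. The config snippet and hostnames below are made-up examples, not a recommendation of any particular tool.

```python
from string import Template

# One skeleton config instead of a notepad full of paste buffers.
# The commands and values here are illustrative examples only.
SKELETON = Template("""hostname $hostname
vlan $vlan_id
 name $vlan_name
interface range gi1/0/1-24
 spanning-tree portfast
""")

def render_config(hostname, vlan_id, vlan_name):
    """Render the skeleton for one switch."""
    return SKELETON.substitute(hostname=hostname,
                               vlan_id=vlan_id,
                               vlan_name=vlan_name)

# Generate configs for a whole closet of switches in one loop.
configs = [render_config(f"sw-access-{n:02d}", 10, "users")
           for n in range(1, 4)]
```

Tools like Puppet, Ansible, and vendor APIs are essentially this idea scaled up, with the added benefit of pushing the result to the device for you.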

By using APIs and other input methods, Aruba and other companies are hoping that we can build tools that either accept the minimum input necessary to configure switches or that eliminate a large portion of the retyping needed to build them in the first place. It’s not the first command you type into a switch that kills you. It’s the 45th time you paste the command in. It’s the 68th time you get bored typing the same set of arguments from a remote terminal and mess one up in a way that requires a physical presence on site to undo your mistake.

Typing is boring, error prone, and costs significant time for little gain. Building scripts, programs, and platforms that take care of all that messy input for us makes us more productive. But it also changes the way we look at systems.

Bird’s Eye Views

The other reason why my fondness for keyboard jockeying isn’t as great as it could be is because of the way that my perspective has shifted thanks to the new aspects of networking technology that I focus on. I tell people that I’m less of an engineer now and more of an architect. I see how the technologies fit together. I see why they need to complement each other. I may not be able to configure a virtual link without documentation or turn up a storage LUN like I used to, but I understand why flash SSDs are important and how APIs are going to change things.

This goes all the way back to my conversations at VMunderground years ago about shifting the focus of networking and where people will go. You remember? The “ditch digger” discussion?


This is more true now than ever before. There will always be people racking and stacking, or doing basic kinds of configuration. These folks are usually trained in the basics of their task, with little vision outside their job role. Networking apprentices, or journeymen as the case may be. Maybe one out of ten or one out of twenty of them will want to move up to something bigger or better.

But for the people that read blogs like this regularly, the shift has already happened. We don’t think in single switches or routers. We don’t worry about a single access point in a closet. We think in terms of systems. We configure routing protocols across multiple devices. We don’t worry about a single-port VLAN issue. Instead, we’re trying to configure Layer 2 DCI extensions or bring racks and pods online at the same time. Our visibility matters more than our typing skills.

That’s why the next wave of devices like the Aruba 8400 and the Software Defined Access products coming from Cisco are more important than simple checkboxes on a feature sheet. They hide the details of individual protocols and products and instead give us platforms that need to be configured for maximum effect. The gap between the people that “rack and stack” and those that build the architecture that runs the organization has grown, but only because the middle ground of administration is changing so fast that it’s tough to keep up.

Tom’s Take

If I were to change jobs tomorrow, I’m sure that I could get back in the saddle with a couple of weeks of hard study. But the question I keep asking myself is “Why would I want to?” I’ve learned that my value no longer comes from my typing speed or my encyclopedic knowledge of networking command arguments. It comes from a greater knowledge of making networking work better and integrate more tightly into the organization. I’m a resource, not a reactionary. And so when I look to what I would end up doing in a new role, I see myself learning more and more about Python and automation and less about what new features were added in the latest OSPF release on Cisco IOS. Because knowing how to integrate technology at a high level is more valuable to everyone than just knowing the commands to type to turn the lights on.




Network programmability is a very hot topic.  Developers are looking to a future when REST APIs and Python replace the traditional command line interface (CLI).  The ability to write programs to interface with the network and build on its functionality is spurring people to integrate networking with DevOps.  But what happens if the foundation of the programmable network, the API, isn’t the rock we all hope it will be?

Shiny API People

APIs enable the world we live in today.  Whether you’re writing against POSIX, a JSON web service, or even the Microsoft Windows API, you’re interacting with software to accomplish a goal.  The ability to use these standard interfaces makes software predictable and repeatable.  Think of an API as interchangeable parts for software.  By giving developers a way to extract information or interact the same way every time, we can write applications that just work.
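The “interchangeable parts” idea can be shown in a short sketch: if two backends expose the same interface, the calling code never has to change. The vendor classes and VLAN lists below are invented for illustration.

```python
# Sketch of APIs as interchangeable parts: any backend implementing
# the same method can be swapped in without touching the caller.
# VendorA and VendorB are hypothetical stand-ins, not real products.
class VendorA:
    def get_vlans(self):
        return [1, 10, 20]

class VendorB:
    def get_vlans(self):
        return [99, 1]

def audit_vlans(device):
    """Works with any device exposing get_vlans(); the caller
    depends only on the shared interface, not the vendor."""
    return sorted(device.get_vlans())

report_a = audit_vlans(VendorA())
report_b = audit_vlans(VendorB())
```

As long as both vendors honor the contract, `audit_vlans` never needs to know which box it is talking to. That predictability is the whole value of a published API, and it is also exactly what breaks when a vendor changes the contract underneath you.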

APIs are hard work though.  Writing and documenting those functions takes time and effort.  The API guidelines from Microsoft and Apple can be hundreds or even thousands of pages long depending on which parts you are looking at.  They can cover exciting features like media services or mundane options like buttons and toolbars.  But each of these APIs has to be maintained or chaos will rule the day.

APIs are ever-changing things.  New functions are added.  Old functions are deprecated.  Applications that used to work with the old version need to be updated to use new calls and methods.  That’s just the way things are done.  But what happens if the API changes aren’t above board?  What happens when API suddenly comes to stand for “antagonistic programming interface”?
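One common way developers absorb this churn is an adapter that normalizes old and new payload shapes, so only one small function changes when a call is deprecated. The v1 and v2 payload shapes below are hypothetical examples, not any real vendor’s schema.

```python
def parse_interfaces(response):
    """Normalize interface data across two hypothetical API versions.
    v1 returned a flat list; v2 nests objects under 'data'. Only this
    adapter needs updating when the vendor deprecates a payload shape."""
    if "interfaces" in response:      # assumed old v1 shape
        return list(response["interfaces"])
    if "data" in response:            # assumed new v2 shape
        return [item["name"] for item in response["data"]]
    raise ValueError("unrecognized API payload")

# Both versions yield the same normalized result for the caller:
v1_payload = {"interfaces": ["eth0", "eth1"]}
v2_payload = {"data": [{"name": "eth0"}, {"name": "eth1"}]}
```

The adapter pattern only helps when deprecations are announced and documented, though; it can’t save you from the antagonistic changes described below.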

Losing My Religion

The most obvious example of a company pulling a fast one with API changes comes from Twitter.  When they moved from version 1.0 to version 1.1 of their API, they made some procedural changes on the backend that enraged their partners.  They limited the number of user tokens that third party clients could have.  They changed the guidelines for the way that things were to be presented or requested.  And they were the final arbiters of the appeals process for getting more access.  If they thought that an application’s functionality was duplicating their client functionality, they summarily dismissed any requests for more user tokens.

Twitter has taken this to a new level lately.  They’ve introduced new functionality, like pictures in direct messages and cards, that may never be supported in the API.  They are manipulating the market to allow their own first party apps to have the most functionality.  They are locking out developers left and right and driving users to their own products at the expense of developers they previously worked arm-in-arm with.  If Twitter doesn’t outright buy your client and bury it, they just wait until you’ve hit your token limit and let you suffocate from your own popularity.

How does this translate to the world of network programmability?  Well, we are taking it for granted that the networking vendors exposing APIs to developers are doing it for all the right reasons.  Extending network flexibility is a very critical thing.  So is reducing complexity and spurring automation.  But what happens when a vendor starts deprecating functions and forcing developers to go back to the drawing board?

Networking can move at a snail’s pace, then fire right up to Ludicrous Speed.  The OpenFlow release cycle is a great example of software outpacing the rest of technology.  What happens when API development hits the same pace?  Even the most agile development team can’t keep pace with a 3-6 month cycle when old API calls are deprecated left and right.  They would just throw their hands up and stop working on apps until things settled down.

And what if the impetus is more sinister?  What if a vendor decides to say, “We’re changing the API calls around.  If you want some help rewriting your apps to function with the new ones, you can always buy our services.” Or if they decide to implement your functionality in their base system?  What happens when a networking app gets Sherlocked?

Tom’s Take

APIs are a good and noble thing.  We need them or else things don’t work correctly.  But those same APIs can cause problems if they aren’t documented correctly or if the vendor starts doing silly things with them.  What we need is a guarantee from vendors that their APIs are going to be around for a while so we can develop apps to help their networking gear work properly.  Microsoft wouldn’t be where it is today without robust support for APIs that has been consistent for years.  Networking needs to follow the same path.  The fewer hijinks with your APIs, the better your community will be.