Network programmability is a very hot topic. Developers are looking to a future in which REST APIs and Python replace the traditional command-line interface (CLI). The ability to write programs that interface with the network and build on its functionality is spurring people to integrate networking with DevOps. But what happens if the foundation of the programmable network, the API, isn’t the rock we all hope it will be?
Shiny API People
APIs enable the world we live in today. Whether you’re writing against POSIX, consuming JSON from a REST endpoint, or calling the Microsoft Windows API, you’re interacting with software to accomplish a goal. The ability to use these standard interfaces makes software predictable and repeatable. Think of an API as interchangeable parts for software. By giving developers a way to extract information or interact with a system the same way every time, we can write applications that just work.
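The “interchangeable parts” idea is easy to see with a stable, documented interface like Python’s `json` module: the same call behaves the same way every time, so callers can build on it without caring about the implementation underneath. A minimal sketch (the device data here is made up for illustration):

```python
import json

# The contract is predictable: serializing and then parsing a value
# round-trips it exactly, on any conforming implementation.
interface_state = {"hostname": "sw1", "vlans": [10, 20]}

encoded = json.dumps(interface_state, sort_keys=True)
decoded = json.loads(encoded)

assert decoded == interface_state  # guaranteed by the documented interface
```

That predictability is the whole value proposition: an application written against the contract keeps working no matter who supplies the parts.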
APIs are hard work though. Writing and documenting those functions takes time and effort. The API guidelines from Microsoft and Apple can be hundreds or even thousands of pages long depending on which parts you are looking at. They can cover exciting features like media services or mundane options like buttons and toolbars. But each of these APIs has to be maintained or chaos will rule the day.
APIs are ever-changing things. New functions are added. Old functions are deprecated. Applications that used to work with the old version need to be updated to use the new calls and methods. That’s just the way things are done. But what happens if the API changes aren’t above board? What happens when the API suddenly becomes an “antagonistic programming interface”?
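Well-behaved deprecation at least gives client code a migration path. A minimal sketch of what version-aware client code looks like; the call names and version numbers here are hypothetical, not any real vendor’s API:

```python
# Hypothetical mapping of deprecated call names to their replacements,
# introduced in the (made-up) v1.1 of a vendor's API.
DEPRECATED_CALLS = {
    "get_interfaces": "list_interfaces",
}

def resolve_call(name, api_version):
    """Return the call name to use against the given API version."""
    if api_version >= (1, 1) and name in DEPRECATED_CALLS:
        return DEPRECATED_CALLS[name]
    return name

print(resolve_call("get_interfaces", (1, 0)))  # get_interfaces
print(resolve_call("get_interfaces", (1, 1)))  # list_interfaces
```

Even this small shim is work the developer has to do, and it only works when the vendor documents the old-to-new mapping instead of silently removing calls.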
Losing My Religion
The most obvious example of a company pulling a fast one with API changes comes from Twitter. When they moved from version 1.0 to version 1.1 of their API, they made procedural changes on the backend that enraged their partners. They limited the number of user tokens that third-party clients could have. They changed the guidelines for the way that things were to be presented or requested. And they were the final arbiters of the appeals process for getting more access. If they thought that an application’s functionality was duplicating their own client functionality, they summarily dismissed any requests for more user tokens.
Twitter has taken this to a new level lately. They’ve introduced new functionality, like pictures in direct messages and cards, that may never be supported in the API. They are manipulating the market to give their own first-party apps the most functionality. They are locking out developers left and right and driving users to their own products at the expense of developers they previously worked arm-in-arm with. If Twitter doesn’t outright buy your client and bury it, they just wait until you’ve hit your token limit and let you suffocate from your own popularity.
How does this translate to the world of network programmability? Well, we are taking for granted that networking vendors that are exposing APIs to developers are doing it for all the right reasons. Extending network flexibility is a very critical thing. So is reducing complexity and spurring automation. But what happens when a vendor starts deprecating functions and forcing developers to go back to the drawing board?
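When a vendor does deprecate endpoints, application code ends up littered with defensive plumbing. A minimal sketch of that burden, assuming a hypothetical device REST API where a removed endpoint returns HTTP 404 or 410; the paths and version scheme are illustrative, not any specific vendor’s:

```python
# Hypothetical map of retired endpoint paths to their replacements.
FALLBACK_PATHS = {
    "/api/v1/interfaces": "/api/v2/interfaces",
}

def next_path(path, status_code):
    """Given a response status, return a replacement path to retry,
    or None if the call succeeded or no replacement is known."""
    if status_code in (404, 410):  # endpoint missing or explicitly gone
        return FALLBACK_PATHS.get(path)
    return None

print(next_path("/api/v1/interfaces", 410))  # /api/v2/interfaces
print(next_path("/api/v1/interfaces", 200))  # None
```

Every deprecated call a vendor churns through means another entry in a table like this, multiplied across every application that touches the device.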
Networking can move at a snail’s pace then fire right up to Ludicrous Speed. The OpenFlow release cycle is a great example of software outpacing the rest of technology. What happens when API development hits the same pace? Even the most agile development team can’t keep pace with a 3-6 month cycle when old API calls are deprecated left and right. They would just throw their hands up and stop working on apps until things settled down.
And what if the impetus is more sinister? What if a vendor decides to say, “We’re changing the API calls around. If you want some help rewriting your apps to function with the new ones, you can always buy our services.” Or if they decide to implement your functionality in their base system? What happens when a networking app gets Sherlocked?
APIs are a good and noble thing. We need them, or else things don’t work correctly. But those same APIs can cause problems if they aren’t documented correctly or if the vendor starts doing silly things with them. What we need is a guarantee from vendors that their APIs are going to be around for a while, so we can develop apps that help their networking gear work properly. Microsoft wouldn’t be where it is today without robust API support that has stayed consistent for years. Networking needs to follow the same path. The fewer hijinks with your APIs, the better your community will be.
Great post, Tom; it highlights the potential pitfalls of the market landscape. A good “abstraction” can be applied to smooth out these types of issues, but that means the “abstraction writer” accepts the burden and responsibility of managing the risks over time. As you point out, this topic is not unique to network automation. I believe a good approach here is to leverage network automation framework tools (shameless self-plug for Schprokits). Regardless of the framework used, *someone* has to do the work, and that someone will be the network vendor, the framework company, or both.