The Complexity Conundrum

Complexity is the enemy of understanding. Think about how much time you spend in your day trying to simplify things. Complexity is the reason why things like Reddit’s Explain Like I’m Five exist. We strive in our daily lives to find ways to simplify the way things are done. Well, except in networking.

Building On Shifting Sands

Networking hasn’t always been a super complex thing. Back when bridges tied together two sections of Ethernet, networking was fairly simple. We’ve spent years trying to make the network do bigger and better things faster with less input. Routing protocols have become more complicated. Network topologies grow and become harder to understand. Protocols do magical things with very little documentation beyond “Pure Freaking Magic”.

Part of this comes from applications. I’ve made my feelings on application development clear. Ivan Pepelnjak’s response to that post drew some great comments as well, from Steve Chalmers and Derick Winkworth (@CloudToad). I especially like this one:

Derick is right. Application developers have forced us to make networking do more and more, faster, with less human effort, to meet breakneck continuous improvement and continuous development goalposts. Networking, when built properly, is a static object like the electrical grid or a plumbing system. Application developers want it to move and change and breathe with their needs when they need to spin up 10,000 containers for three minutes to run a test, or increase bandwidth 100x to support the rollout of a video streaming app or a sticker-based IM program designed to run during a sports championship.

We’ve risen to meet this challenge with what we’ve had to work with. In part, it’s because we don’t like being the scapegoat for every problem in the data center. We tire of sitting next to the storage admins and complaining about the breakneck pace of IT changes. We have embraced software enhancements and tried to find ways to automate, orchestrate, and accelerate. Which is great in theory. But in reality, we’re just covering over the problem.

Abstract Complexity

The solution to our software networking issues seems simple on the surface. Want to automate? Add a layer to abstract away the complexity. Want to build an orchestration system on top of that? Easy to do with another layer of abstraction to tie automation systems together. Want to make it all go faster? Abstract away!

“All problems in computer science can be solved with another layer of indirection.”

Butler Lampson popularized this quote, attributing it to David Wheeler. It’s absolutely true. Developers, engineers, and systems builders keep adding layers of abstraction and indirection on top of complex systems and proclaiming that everything is now easier because it looks simple. But what happens when the abstraction breaks down?

Automobiles are a perfect example of this. Not too many years ago, automobiles were relatively simple things. Sure, internal combustion engines aren’t toys. But most mechanics could disassemble the engine and fix most issues with a wrench and some knowledge. Today’s cars have computers and diagnostics systems, and they require lots and lots of dedicated tools to even diagnose a problem, let alone fix it. We’ve traded simplicity and ease of repair for the appearance of “simple,” which conceals a huge amount of complexity under the surface.

To refer back to the Lampson/Wheeler quote, its completion is, “Except, of course, for the problem of too many indirections.” Even forty years ago it was understood that too many layers of abstraction would eventually lead to problems. We are quickly reaching this point in networking today. Complex tools now generate an overwhelming amount of data about every point of the network, and we find ourselves forced to use dashboards and data lakes to keep up with a pace of change dictated by developer desires rather than sound network systems thinking.

Networking professionals can’t keep up. Just as other systems must now be maintained by algorithms to keep pace, so too does the network find itself being run by software instead of merely augmented by it. Even if people wanted to make a change by hand, they couldn’t: validating those changes manually is no longer feasible, and an unvalidated change could create interactions that wreak havoc later on.

Simple Solutions

So how do we fix the issues? Can we just scrap it all and start over? Sadly, the answer here is a resounding “no”. We have to keep moving the network forward to match pace with the rest of IT. But we can do our part to cut down on the amount of complexity and abstraction being created in the process. Documentation is as critical as ever. Engineers and architects need to make sure to write down all the changes they make as well as their proposed designs for adding services and creating new features. Developers writing for the network need to document their APIs and their programs liberally so that troubleshooting and extension are easily accomplished instead of just guessing about what something is or isn’t supposed to be doing.

When the time comes to build something new, instead of trying to plaster over it with an abstraction, we need to break things down into their basic components and understand what we’re trying to accomplish. We need to augment existing systems instead of building new ones on top of the old to make things look easy. When we can extend existing ideas or augment them in such a way as to coexist, we can worry less about hiding problems and more about solving them.


Tom’s Take

Abstraction has a place, just like NAT. It’s when things spiral out of control and hide the very problems we’re trying to fix that it becomes an abomination. Rather than piling things on the top of the issue and trying to hide it away until the inevitable day when everything comes crashing down, we should instead do the opposite. Don’t hide it, expose it instead. Understand the complexity and solve the problem with simplicity. Yes, the solution itself may require some hard thinking and some pretty elegant programming. But in the end that means that you will really understand things and solve the complexity conundrum.

SDN Myths Revisited

I had a great time at TECHunplugged a couple of weeks ago. I learned a lot about emerging topics in technology, including a great talk about the death of disk from Chris Mellor of the Register. All in all, it was a great event. Even with a presentation from the token (ring) networking guy:

I had a great time talking about SDN myths and truths and doing some investigation behind the scenes. What we see and hear about SDN is only a small part of what people think about it.

SDN Myths

Myths emerge because people can’t understand or won’t understand something. Myths perpetuate because they are larger than life. Lumberjacks and blue oxen clearing forests. Cowboys roping tornadoes. That kind of thing. With technology, those myths exist because people don’t want to believe reality.

SDN is going to take the jobs of people that can’t face the reality that technology changes rapidly. There is a segment of the tech worker populace that just moves from new job to new job doing the same old things. We leave technology behind all the time without a care in the world. But we worry when people can’t work on that technology.

I want you to put your hands on a floppy disk. Go on, I’ll wait. Not so easy, is it? Removable disk technology is on the way out the door. Not just magnetic disk either. I had a hard time finding a CD-ROM drive the other day to read an old disc with some pictures. I’ve taken to downloading digital copies of films because my kids don’t like operating a DVD player any longer. We don’t mourn the passing of disks, we celebrate it.

Look at COBOL. It’s a venerable programming language that still runs a large percentage of insurance agency computer systems. It’s safe to say that the amount of money it would cost to migrate away from COBOL to something relatively modern would be in the millions, if not billions, of dollars. Much easier to take a green programmer and teach them an all-but-dead language and pay them several thousand dollars to maintain this out-of-date system.

It’s like the old story of buggy whip manufacturers. There’s still a market for them out there. Not as big as it was before the introduction of the automobile. But it’s there. You probably can’t break into that market, and you had better be very good (or really cheap) at making them if you want to get a job doing it. The job that a new technology replaced is still available for those that need that technology to work. But most of the rest of society has moved on, and the old technology fills a niche role.

SDN Truths

I wasn’t kidding when I said that Gartner not having an SDN quadrant was the smartest thing they ever did (aside from the shot at stretched layer 2 DCI). I say this because it will finally force customers to stop asking for a magic bullet SDN solution and it will force traditional networking vendors to stop packaging a bunch of crap and selling it as a magic bullet.

When SDN becomes a part of the entire solution and not some mystical hammer that fixes all the nails in your environment, then the real transformation can happen. Then people that are obstructing real change can be marginalized and removed. And the technology can be the driver for advancement instead of someone coming down the hall complaining about things not working.

We spend so much time reacting to problems that we’ve forgotten how to solve them for good. We’re not being malicious. We just can’t get past the triage. That’s the heart of the fire fighter problem. Ivan wrote a great response to my fire fighter post and his points were spot on. Especially the ones about people standing in the way, whether through outright obstruction or by taking away the power to effect real change. We can’t hold networking people responsible for the architecture and simultaneously keep them from solving the root issues. That’s the ham-handed kind of organizational roadblock that needs to change to move networking forward.


Tom’s Take

Talks like this don’t happen overnight. They take careful planning and thought, followed by panic when you realize your 45-minute talk is actually 20 minutes. So you cut out the boring stuff and get right to the meat of the issue. In this case, that meat is the continued misperception of SDN no matter how much education we throw at the networking community. We’re not going to end up jobless programmers being lied to by silver-tongued marketing wonks. But we are going to have to face the need for organizational change and process reevaluation on a scale that will take months, if not years, to implement correctly. And then do it all over again as technology evolves to fit the new mold we created when we broke the old one.

I would rather see the easy money flee to a new startup slot machine and all of the fair weather professionals move on to a new career in whatever is the hot new thing. That means those of us left behind in the newly-transformed traditional networking space will be grizzled veterans willing to learn and implement the changes we need to make to stop being blamed for the problems of IT and be a model for how it should be run. That’s a future to look forward to.

 

This WAN Is Your WAN, This WAN Is My WAN

Straw Bales on Hill Landscape, Tuscany, Italy

Ideas coalesce all the time in every vertical. You don’t really notice it until you wake up one day and suddenly everything around you looks identical. Wireless becoming the new access layer. Flash storage taking hold of the high end performance crown. And in networking we have the dominance of all things software defined. One recent development has come along much faster than anyone could have predicted: Software Defined Wide Area Networking (SD-WAN).

Automatic For The People

SD-WAN is a force in modern networking because people want simplicity. While Ivan does a great job of decoupling marketing from reality, people still believe that SD-WAN is the silver bullet that will fix all of their WAN woes. Even during the original discussions of SD-WAN technology at conferences like ONUG, the overriding idea wasn’t around tying sites together or driving down costs to the point of feasibility. It was all about making life easier.

How does SD-WAN manage to accomplish this? It’s all black box networking. Just like the fuel injector in your car. There’s no crying about interoperability or standards-based protocols. You just plug things in and it all works, even if you can’t exactly plug one vendor’s solution into a competitor’s. Lock-in wins again.

The ideas behind SD-WAN aren’t exactly new. Cisco talked about SD-WAN quite a bit at Networking Field Day 10. Here’s Jeff Reed on it:

The rest of the two-hour session details how Cisco is using their Intelligent WAN (IWAN) product to drive SD-WAN. The names of the components all sound very familiar to networkers: DMVPN, NBAR, PfR, and so on. That’s because SD-WAN uses a lot of tried-and-true techniques to tie the concept together. There’s nothing earth-shattering about SD-WAN under the hood. In fact, a fair number of people that work at the “pioneering” SD-WAN startups seem to have their roots in one or more traditional networking companies.

Fables of Reconstruction

Look at the other presenters at Networking Field Day 10. Two of them announced SD-WAN solutions even though they aren’t really known for expertise in SD-WAN. One of them wasn’t even known as a branch office acceleration solution. So why the SD-WAN land rush all of a sudden? What’s behind the need to have a solution?

You probably wouldn’t be surprised to learn that a lot of investors are backing expansion into SD-WAN technologies. It’s a hot property. But why? As above, customers aren’t interested in the technical wizardry that goes into SD-WAN. They aren’t clamoring for it to supplant their current WAN solution and offer a Rosetta Stone of inter-vendor WAN cooperation. What’s behind the push?

It probably goes something like this:

  1. Technologist needs to implement WAN architecture. Is dismayed that things are so difficult.
  2. Technologist starts searching for solutions about WAN. They probably start asking friends about it.
  3. Analyst firm hears that technologists are asking about WAN solutions. Releases a questionnaire asking which technologies you’d like to learn more about.
  4. Responses to questionnaires are loaded into a graph or report that people buy because they don’t know who to talk to.
  5. Companies realize customers want WAN solutions. They break their necks to offer those solutions to keep up with demand.
  6. Investors see companies beginning to offer WAN solutions and think there’s a huge untapped market. They start funding anyone that mentions WAN in a meeting.

By the way, you can replace “WAN” with any technology above and it still works.

Thanks to customers needing a solution for something they can’t configure easily, they are going to be inundated with SD-WAN options by the time they turn around. And the biggest concern is no longer “Who has the easiest solution?” but instead, “Who is still going to be here in six months?”

Collapse Into Now

The reckoning is coming in the SD-WAN market. If a company doesn’t already have an SD-WAN solution in development or if their solution won’t see daylight for another nine months, they are going to exercise the second “B” of innovation and buy it. And they have a lot of prime targets to choose from.

Investors get cagey without an exit strategy. How are they going to win at this game? They either have to get paid with an IPO, with a later round of funding, or by having someone buy out the investment. If an investor thinks they can get their money back (plus a bit of interest) by having this little startup bought by a traditional networking vendor you can better believe they will be advising the startup to sell.

The customers are the real losers in the case of a buyout, or worse a bankruptcy. Those highly proprietary solutions become dead weight if there isn’t any support for them any longer. Black box networking falls apart when the little magical creatures inside the box go away. Which means customers will be skittish of supporting a solution that is likely to go away any time soon.

Who will you support? An established vendor slow to roll out a solution? Or an up-and-coming company with new ideas but at risk of being snapped up by a big bank account?


Tom’s Take

I loved seeing all the SD-WAN discussion at Networking Field Day 10. SD-WAN is no longer magic sauce that aggregates DSL and MPLS circuits with encryption. Nuage Networks showed off deploying Docker apps to remote sites. Riverbed talked about using their WAN optimization experience to deploy SaaS solutions through SD-WAN.

We’ve heard from SD-WAN companies in the past at Networking Field Day. It’s interesting to hear the comparisons between the upstarts and the old geezers. It’s clear there is a ton of money that is being invested in SD-WAN. The trick is to find out your needs and pick the best solution for you. Otherwise you may find yourself losing your SD-WAN religion.

 

SDN and the Trough Of Understanding

Gartner Networking Hype Cycle, 2015

An article published this week referenced a recent Hype Cycle diagram (pictured above) from the oracle of IT – Gartner. While the lede talked a lot about the apparent “death” of Fibre Channel over Ethernet (FCoE), there was also a lot of time devoted to discussing SDN’s arrival at the Trough of Disillusionment. Quoting directly from the oracle:

Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.

As SDN approaches this dip in the Hype Cycle it would seem that the steam is finally being let out of the Software Defined Bubble. The Register article mentions how people are going to leave SDN by the wayside and jump on the next hype-filled networking idea, likely SD-WAN given the amount of discussion it has been getting recently. Do you know what this means for SDN? Nothing but good things.

Software Defined Hammers

Engineers have a chronic case of Software Defined Overload. SD-anything ranks right up there with Fat Free and New And Improved as the Most Overused Marketing Terms. Every solution release in the last two years has been software defined somehow. Why? Because that’s what marketing people think engineers want. Put Software Defined in the product and people will buy it hand over fist. Guess what Little Tommy Callahan has to say about that?

There isn’t any disillusionment in this little bump in the road. Quite the contrary. This is where the rubber meets the road, so to speak. This is where all the pretenders to the SDN crown find out that their solutions aren’t suited for mass production. Or that their much-vaunted hammer doesn’t have any nails to drive. Or that their hammer can’t drive a customer’s screws or rivets. And those pretenders will move on to the next hype bubble, leaving the real work to companies that have working solutions and real products that customers want.

This is no different than every other “hammer and nail” problem from the past few decades of networking. Whether it be ATM, MPLS, or any one of a dozen “game changing” technologies, the reality is that each of these solutions went from being the answer to every problem to being a specific solution for specific problems. Hopefully we’ve gotten SDN to this point before someone develops the software defined equivalent of LANE.

The Software Defined Road Ahead

Where does SD-technology go from here? Well, without marketing whipping everyone into a Software Defined Frenzy, the future is whatever developers want to make of it. Developers that come up with solutions. Developers that integrate SDN ideas into products and quietly sell them for specific needs. People that play the long game rather than hope that they can take over the world in a day.

Look at IPv6. It solves so many problems we have with today’s Internet. Not just IP exhaustion issues either. It solves issues with security, availability, and reachability. Yet we are just now starting to deploy it widely thanks to the panic of the IPocalypse. IPv6 did get a fair amount of hype twenty years ago when it was unveiled as the solution to every IP problem. After years of mediocrity and being derided as unnecessary, IPv6 is poised to finally assume its role.

SDN isn’t going to take nearly as long as IPv6 to come into play. What is going to happen is a transition away from Software Defined as the selling point. Even today we’re starting to see companies move away from SD labeling and instead use more specific terms to help customers understand what’s important about the solution and how it will help customers. That’s what is needed to clarify the confusion and reduce fatigue.

 

The Light On The Fiber Mountain

Fabric switching systems have been a popular solution for many companies in the past few years. Juniper has QFabric and Brocade has VCS. For those not invested in fabrics, the trend has been to collapse the traditional three tier network model down into a spine-leaf architecture to optimize east-west traffic flows. One must wonder how much more optimized that solution can be. As it turns out, there is a bit more that can be coaxed out of it.

Shine A Light On Me

During Interop, I had a chance to speak with the folks over at Fiber Mountain (@FiberMountain) about what they’ve been up to in their solution space. I had heard about their revolutionary SDN offering for fiber. At first, I was a bit doubtful. SDN gets thrown around a lot on new technology as a way to sell it to people that buy buzzwords. I wondered how a fiber networking solution could even take advantage of software.

My chat with M. H. Raza started out with a prop. He showed me one of the new Multifiber Push On (MPO) connectors that represent the new wave of high-density fiber. Each cable, which is roughly the size and shape of a SATA cable, contains 12 or 24 fiber connections. These are very small and pre-configured in a standardized connector. This connector can plug into a server network card and provide several light paths to a server. This connector and the fibers it terminates are the building block for Fiber Mountain’s solution.

With so many fibers running to a server, Fiber Mountain can use their software intelligence to start doing interesting things. They can begin to build dedicated traffic lanes for applications and other traffic by isolating that traffic onto fibers already terminated on a server. The connectivity already exists on the server; Fiber Mountain just takes advantage of it. It feels very similar to the way we add additional gigabit network ports when we need to expand things like VMkernel ports or dedicated traffic lanes for other data.

Quilting Circle

Where this solution starts looking more like a fabric is what happens when you put Fiber Mountain Optical Exchange devices in the middle. These switching devices act like aggregation ports in the “spine” of the network. They can aggregate fibers from top-of-rack switches or from individual servers. These exchanges tag each incoming fiber and add them to the Alpine Orchestration System (AOS), which keeps track of the connections just like the interconnections in a fabric.

Once AOS knows about all the connections in the system, you can use it to start building pathways between east-west traffic flows. You can ensure that traffic between a web server and backend database has dedicated connectivity. You can add additional resources between systems that are currently engaged in heavy processing. You can also dedicate traffic lanes for backup jobs. You can do quite a bit from the AOS console.
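
As a thought experiment, programming a path through AOS might look something like the sketch below. Fiber Mountain hasn’t published an API that I’ve seen, so every endpoint and field here is an assumption meant only to show the layer 1 path-building concept.

```python
import requests

AOS = "https://aos.example.com/api"  # hypothetical orchestrator address

def build_light_path(src_port, dst_port, lanes=1):
    """Ask AOS to cross-connect fibers between two tagged endpoints."""
    resp = requests.post(f"{AOS}/paths", json={
        "source": src_port,        # e.g. "rack12-web01:mpo1:fiber3"
        "destination": dst_port,   # e.g. "rack14-db01:mpo1:fiber3"
        "lanes": lanes,            # widen the path by adding fibers
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Dedicated layer 1 connectivity between a web server and its database:
build_light_path("rack12-web01:mpo1:fiber3", "rack14-db01:mpo1:fiber3")
```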

Now you have a layer 1 switching fabric without any additional pieces in the middle. The exchanges function almost like a passthrough device. The brains of the system exist in AOS. Remember when Ivan Pepelnjak (@IOSHints) spent all his time pulling QFabric apart to find out what made it tick? The Fiber Mountain solution doesn’t use BGP or MPLS or any other magic protocol sauce. It runs at layer 1. The light paths are programmed by AOS and the packets are switched across the dense fiber connections. It’s almost elegant in its simplicity.

Future Illumination

The Fiber Mountain solution has some great promise. Today, most of the operations of the system require manual intervention. You must build out the light paths between servers based on educated guesses. You must manually add additional light paths when extra bandwidth is needed.

Where they can really improve their offering in the future is to add intelligence to AOS to automatically make those decisions based on predefined thresholds and inputs. If the system can detect bigger “elephant” traffic flows and automatically provision more bandwidth, or isolate these high-volume packet generators, it will go a long way toward making things much easier on network admins. It would also be great to provide a way to interface that “top talker” data into other systems to alert network admins when traffic flows get high and need additional resources.
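
That automation could start as simply as the hypothetical threshold check below. The flow records, the threshold, and the provision_lane() stub are all invented for illustration; nothing here reflects a shipping AOS feature.

```python
ELEPHANT_THRESHOLD_GBPS = 8.0  # invented threshold for illustration

def provision_lane(src, dst):
    # Stand-in for an AOS call that adds a fiber lane between endpoints.
    print(f"provisioning extra lane: {src} -> {dst}")

def rebalance(flows):
    """Widen light paths for any flow that crosses the elephant threshold."""
    for flow in flows:
        if flow["gbps"] > ELEPHANT_THRESHOLD_GBPS:
            provision_lane(flow["src"], flow["dst"])

rebalance([{"src": "rack12-web01", "dst": "rack14-db01", "gbps": 9.2}])
```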


Tom’s Take

I like the Fiber Mountain solution. They’ve built a layer 1 fabric that performs similarly to the ones from Juniper and Brocade. They are taking full advantage of the resources provided by the MPO fiber connectors. By adding a new network card to a server, you can test this system without impacting other traffic flows. Fiber Mountain even told me that they are looking at trial installations for customers to bring their technology in at lower costs as a project to show the value to decision makers.

Fiber Mountain has a great start on building a low latency fiber fabric with intelligence. I’ll be keeping a close eye on where the technology goes in the future to see how it integrates into the entire network and brings SDN features we all need in our networks.

 

HP Networking – Hitting The Right Notes

HP has quietly been making waves recently with their networking strategies.  They recently showed off their technology around software defined networking (SDN) applications at Interop New York.  Here’s a video:

It would seem that HP has been doing a lot of hard work on the back end with SDN.  So why haven’t we heard about it?

Trumpet and Bugle

HP Networking hasn’t been in the news as much as Cisco and VMware as of late.  When you consider that both of those companies are pushing agendas related to redefining the paradigm of networking around policy and virtualization their trumpeting of those agendas makes total sense.  But even members of the League of Non-Aligned Vendors like Brocade are talking a lot about their SDN strategy with the Vyatta Controller and OpenStack integrations.  Vendors have layers and layers of plans for the “new” networking.  But HP has actually been doing it!  Why haven’t we known until now?

HP has been content to play the role of the bugler to the trumpeters of the bigger organizations.  Rather than talking over and over again about what they are planning on doing, HP waits until they’ve actually done it to talk about it.  It’s a sound strategy.  I love making everything work first and then discussing what you’ve done rather than spending week after week, month after month, talking about a plan that may or may not come to fruition.

The issue with HP is that they need to bugle a little more often to stay afloat in the space.  Only making announcements won’t cut it.  The breakneck pace of innovation and adoption is disrupting the ability of laggard developers to stay afloat.  New technologies are being supplanted by upstarts.  Docker is old news.  Now we’re talking about SocketPlane and Rocket.  You’d be forgiven if you haven’t been keeping up as a blogger or engineer.  But if you’ve missed the boat as a vendor, you’re going to have a hard time treading water.

The Tijuana Brass

How can HP solve their problem?  Technically, they need to keep doing what they’ve been doing all along.  They are making good decisions and innovating around ideas like the HP SDN App Store.  What they need to do is tell more people about it.  Get the word out.  Start some discussions around what you’re doing.  Don’t be afraid to engage.  The more you talk to people about your solutions, the more your name will come up in conversation.  You need to be loud and on-key.  Herb Alpert and the Tijuana Brass weren’t popular right away.  It took years of recording and playing before the mainstream “discovered” them and popularized their music.

HP Networking has spent considerable time building SDN infrastructure.  The fact that there are OpenFlow images for a wide variety of their existing switch infrastructure is proof they are concerned about making everything fit together.  Now it’s time to tell the story.  With the impending divestiture of HP’s enterprise businesses from the consumer line, it will be far too easy to get lost in the shuffle of reorganization.  The way to prevent that is to step out and make yourself known.  Write blogs, record podcasts, and interact with the community.  Don’t be afraid to toot your own horn a little.


Disclaimer

HP invited me to attend HP Discover Barcelona as their guest.  They provided travel and lodging expenses during my time in Europe.  They did not require any blog posts or consideration for this invitation, nor were any offered on my part.  The opinions and analysis expressed herein represent my thoughts alone.

Riding the SD-WAN Wave

Software Defined Networking has changed the way that organizations think about their network infrastructure.  Companies are looking at increasing automation of mundane tasks, orchestration of policy, and even using white box switches with the help of new unbound operating systems.  A new class of technologies that is coming to market hopes to reduce complexity and cost for the Achilles Heel of many enterprises: the Wide Area Network (WAN).

Do You WANt To Build A Snowman?

The WAN has always been a sore spot for enterprise networks.  It’s necessary to connect your organization to the world.  If you have remote sites or branch locations, it is critical for daily operations.  If you have an e-commerce footprint your WAN connection needs to be able to handle the generated traffic.  But good WAN connectivity costs money.  Lots of money.

WAN protocols are constantly being refined to come up with the fastest possible transmission and the highest possible uptime.  Frame Relay, Asynchronous Transfer Mode (ATM) and Multi-Protocol Label Switching (MPLS) are a succession of technologies that have shaped enterprise WAN connectivity for over a decade.  They have their strengths and weaknesses.  But it is difficult to build an enterprise WAN without one.

Some customers can’t get MPLS connectivity.  Or even Frame Relay, for that matter.  Their locations are too remote or the cost of having the connection installed is far above the return on investment.  These customers are often forced to resort to consumer-class connections, like cable modems, Digital Subscriber Line (DSL), or even 4G/LTE modem uplinks.  While cheaper and easier to install, these solutions are often not as robust as their business-grade counterparts.  And when it comes to support on a down circuit…

Redefining the WAN

How does Software Defined WAN (SD-WAN) help?  SD-WAN technologies from companies like Silver Peak, CloudGenix, and Viptela function like overlay networks for the WAN.  They take the various inputs that you have, such as MPLS, cable, and 4G/LTE networks.  These inputs are then arranged in such a way as to allow you to intelligently program how traffic will behave on the links.  If you want only critical business traffic on the MPLS circuit during business hours you can do that.  If you want to ensure the 4G/LTE uplink is only used in the event of an emergency outage, you can do that too.  You can even program various costs and metrics into the system to help you make decisions about when a given link would be a better economic decision given the time of day or amount of transferred data.

You’re probably saying to yourself, “But I can do all of that today.” And you would be right.  But all of this has to happen manually, or at the very least requires a lot of programming.  If you’ve ever tried to configure OER/PfR on a Cisco router you know what I’m talking about.  And that’s just one vendor’s equipment.  What if there are multiple devices in play?  How do you configure the edge routers for fifty sites?  What happens when a circuit goes down at 3 a.m.?  Having a simple interface for making decisions, or even the ability to script actions based on inputs, makes the system much more flexible and responsive.
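
To put that in concrete terms, here’s a minimal sketch in Python of the kind of link-selection logic an SD-WAN controller hides behind its interface.  Every link name, cost, and threshold here is invented for illustration; real products do this with far richer telemetry and policy engines.

```python
from datetime import datetime

# Hypothetical uplinks with invented costs; "emergency_only" marks the
# 4G/LTE circuit that should carry traffic only when all else fails.
LINKS = [
    {"name": "mpls", "up": True, "cost_per_gb": 8.00},
    {"name": "cable", "up": True, "cost_per_gb": 0.50},
    {"name": "lte", "up": True, "cost_per_gb": 9.00, "emergency_only": True},
]

def business_hours(now=None):
    now = now or datetime.now()
    return now.weekday() < 5 and 8 <= now.hour < 18

def select_link(traffic_class):
    """Pick an uplink for a flow based on class, time of day, and cost."""
    usable = [l for l in LINKS if l["up"]]
    # The LTE modem is a last resort: use it only if nothing else is up.
    candidates = [l for l in usable if not l.get("emergency_only")] or usable
    # Critical business traffic rides the MPLS circuit during the day.
    if traffic_class == "critical" and business_hours():
        for link in candidates:
            if link["name"] == "mpls":
                return link
    # Everything else takes the cheapest healthy link.
    return min(candidates, key=lambda l: l["cost_per_gb"])

print(select_link("critical")["name"])  # "mpls" during business hours
print(select_link("bulk")["name"])      # "cable" (cheapest link)
```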

It all comes down to a simple number for all parties involved.  For engineering, the amount of time spent configuring and maintaining complex WAN connectivity will be reduced.  Engineers love not needing to spend time on things.  For the decision makers (and bean counters), it all comes down to money.  SD-WAN technologies reduce costs by better utilizing existing infrastructure.  Eventually, their analysis can allow you to reduce or remove unnecessary connectivity.  That means more money in the pockets of the people that want the money.


Tom’s Take

I’ve referred to WAN applications as the “hello world” for SDN.  That’s because I saw so many people demoing them when SDN was first being talked about.  Cisco did this at Cisco Live 2012 in San Diego.  SD-WAN didn’t really become a concrete thing in my mind until it was the topic of discussion at the Spring ONUG meeting.  Those are the people with the money.  And they are looking at the cost savings and optimization from SD-WAN technologies.  You can better believe that the first wave of SD-WAN that you’ve seen in the last couple of months is just the precursor to a wider look at connectivity in general.  Better get ready to surf.

API-jinks

Network programmability is a very hot topic.  Developers are looking to the future when REST APIs and Python replace the traditional command line interface (CLI).  The ability to write programs to interface with the network and build on functionality is spurring people to integrate networking with DevOps.  But what happens if the foundation of the programmable network, the API, isn’t the rock we all hope it will be?

Shiny API People

APIs enable the world we live in today.  Whether you’re writing for POSIX or JSON or even the Microsoft Windows API, you’re interacting with software to accomplish a goal.  The ability to use these standard interfaces makes software predictable and repeatable.  Think of an API as interchangeable parts for software.  By giving developers a way to extract information or interact the same way every time, we can write applications that just work.

APIs are hard work though.  Writing and documenting those functions takes time and effort.  The API guidelines from Microsoft and Apple can be hundreds or even thousands of pages long depending on which parts you are looking at.  They can cover exciting features like media services or mundane options like buttons and toolbars.  But each of these APIs has to be maintained or chaos will rule the day.

APIs are ever-changing things.  New functions are added.  Old functions are deprecated.  Applications that used to work with the old version need to be updated to use new calls and methods.  That’s just the way things are done.  But what happens if the API changes aren’t above board?  What happens when API suddenly becomes “antagonistic programming interface”?
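
To make the risk concrete, here’s a hedged sketch of defensive API consumption: pin a version and watch for deprecation signals rather than trusting the interface to stay still.  The controller URL and endpoint shape are hypothetical; the Deprecation and Sunset headers are real conventions some HTTP APIs use to warn consumers, though nothing forces a vendor to send them.

```python
import requests

BASE = "https://controller.example.com/api"  # hypothetical vendor API
PINNED_VERSION = "1.1"

def get_flows(switch_id):
    """Fetch flow entries while pinned to a known-good API version."""
    resp = requests.get(
        f"{BASE}/v{PINNED_VERSION}/switches/{switch_id}/flows",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    # Some APIs signal trouble ahead with Deprecation/Sunset headers;
    # checking for them beats finding out from a 404 after an upgrade.
    for header in ("Deprecation", "Sunset"):
        if header in resp.headers:
            print(f"warning: API v{PINNED_VERSION} flagged {header}: "
                  f"{resp.headers[header]}")
    resp.raise_for_status()
    return resp.json()
```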

Losing My Religion

The most obvious example of a company pulling a fast one with API changes comes from Twitter.  When they moved from version 1.0 to version 1.1 of their API, they made some procedural changes on the backend that enraged their partners.  They limited the number of user tokens that third party clients could have.  They changed the guidelines for the way that things were to be presented or requested.  And they were the final arbiters of the appeals process for getting more access.  If they thought that an application’s functionality was duplicating their client functionality, they summarily dismissed any requests for more user tokens.

Twitter has taken this to a new level lately.  They’ve introduced new functionality, like pictures in direct messages and cards, that may never be supported in the API.  They are manipulating the market to allow their own first party apps to have the most functionality.  They are locking out developers left and right and driving users to their own products at the expense of developers they previously worked arm-in-arm with.  If Twitter doesn’t outright buy your client and bury it, they just wait until you’ve hit your token limit and let you suffocate from your own popularity.

How does this translate to the world of network programmability?  Well, we are taking for granted that networking vendors that are exposing APIs to developers are doing it for all the right reasons.  Extending network flexibility is a very critical thing.  So is reducing complexity and spurring automation.  But what happens when a vendor starts deprecating functions and forcing developers to go back to the drawing board?

Networking can move at a snail’s pace then fire right up to Ludicrous Speed.  The OpenFlow release cycle is a great example of software outpacing the rest of technology.  What happens when API development hits the same pace?  Even the most agile development team can’t keep pace with a 3-6 month cycle when old API calls are deprecated left and right.  They would just throw their hands up and stop working on apps until things settled down.

And what if the impetus is more sinister?  What if a vendor decides to say, “We’re changing the API calls around.  If you want some help rewriting your apps to function with the new ones, you can always buy our services.” Or if they decide to implement your functionality in their base system?  What happens when a networking app gets Sherlocked?


Tom’s Take

APIs are a good and noble thing.  We need them or else things don’t work correctly.  But those same APIs can cause problems if they aren’t documented correctly or if the vendor starts doing silly things with them.  What we need is a guarantee from vendors that their APIs are going to be around for a while so we can develop apps to help their networking gear work properly.  Microsoft wouldn’t be where it is today without robust support for APIs that has been consistent for years.  Networking needs to follow the same path.  The fewer hijinks with your APIs, the better your community will be.

Rome Wasn’t Software Defined In A Day

Everywhere you turn, people are talking about software defined networking.  The influence can be felt in every facet of the industry.  Major players are trying to come to grips with the shift in power.  Small vendors are ramping up around ideas and looking to the future.  Professionals are simultaneously excited for change and fearful of upsetting the status quo.  But will all of these things happen overnight?

Not Built In A Day, But Laying Bricks Every Hour

The truth of SDN is that it’s going to take some time for all the pieces to fall into place.  Take a look at the recent Apple Pay launch.  Inside of a week, it has risen to become a very significant part of the mobile payment industry, even if the installed base of users is exclusive to iPhone [6,6+] owners.  But did this revolution happen in the span of a couple of days?

Apple Pay works because Apple spent months, if not years, designing the best way to provide transactions from a phone.  It leverages TouchID for security, a concept introduced last year.  It uses Near Field Communication (NFC) readers, which have been in place for a couple of years.  I even talked about NFC three years ago.  That means the technology to support Apple Pay has been in place for a while.

That kind of support structure is needed to make SDN work the way we want it to.  There’s no magic wand that will convert your infrastructure to SDN overnight.  There is no SDNecronomicon to reference for solving scaling issues or interoperability concerns.  What’s required is the hard work of taking the ideas and processes around SDN and implementing them today.

SDN feels like a radical shift to traditional networking because it’s a foreign concept.  If you had told first-generation iPhone users their device would be an application computer with the capability to pay for purchases wirelessly they would have laughed at you and told you it was a fantasy.  That sufficiently advanced technology was beyond their understanding at the time.

SDN is no different.  The steps being taken today to solve traditional networking problems will feel antiquated in four to five years.  But that foundation must be laid in order to make SDN work in the future.  SDN won’t transform the industry overnight, but we have to keep making advances and pushing forward to make the important gains no matter how small they are.

Not Built In A Day, But It Burned In One

The fear of SDN leads to the dark side of standards adoption.  Arguments. In-fighting. Posturing. Interests making decisions not because they are right for customers but because they protect market share.  If SDN fails in the long term, it will be because of these dark elements and not a technological constraint.

Nothing is immune to politics.  Linux has been more or less standardized for years.  Yet tech advances are still hotly debated.  Go mention systemd to your local Linux hacker and prepare for the onslaught of discussion.  Linux has had much less pressure from these kinds of discussions by virtue of the core kernel being very stable and maintained by a small team.  SDN is very different.

The competing ideas around SDN drive innovation, but also threaten it.  The industry will eventually standardize on OpenDaylight for their controller, much like the server industry standardized on Linux for appliances.  But will that same consensus lead to stagnation? Will innovation simply happen as vendors attempt to modify ODL just enough to make their offering look superior?  Before you say that it’s impossible go and find a reference TRILL implementation.

SDN will succeed because the momentum behind it won’t allow it to fail.  But much like Rome, we need to build SDN with the proper architecture.  Simply laying bricks haphazardly won’t fix our problems.  If the infrastructure is bad enough, we may even need our own Nero to “fix” things again.  Momentum without direction is a useless force.  We need to ensure that SDN is headed in the right direction where it benefits customers and users first.  Profit margins are secondary to that.


Tom’s Take

An idea can transform an industry.  A simple thought about making things better can drag the community out of stagnation and into a Renaissance.  That we are witness to an industry shift is undeniable at this point, especially given that so many things are becoming “software defined”.  However, we must face the truth that this little hobby project won’t produce results overnight.  Hard work and planning will win the day.  Rome went from being a village among hills to the largest power in the Western world.  But that didn’t happen overnight.  The long game of SDN needs to be played one move at a time.  And the building of the SDN empire will take more than a single day.

The Atomic Weight of Policy

The OpenDaylight project put out a new element this week with their Helium release.  The second release is usually the most important, as it shows that you have a real project on your hands and not just a bunch of people coding in the back room to no avail.  Not that something like that was going to happen to ODL.  The group of people involved in the project have the force of will to change the networking world.

Helium is already having an effect on the market.  Brocade announced their Vyatta Controller last week, which is based on Helium code.  Here’s a handy video as well.  The other thing that Helium has brought forth is the ongoing debate about network policy.  And I think that little gem is going to have more weight in the long run than anything else.

The Best Policy

Helium contains group-based policies for making groups of network objects talk to each other.  It’s a crucial step to bring ODL from an engineering hobby project to a full-fledged product that can be installed by someone that isn’t a code wizard.  That’s because most of the rest of the world, including IT people, don’t speak in specific terms with devices.  They have an idea of what needs to be done and they rely on the devices to implement that idea.

Think about a firewall.  When is the last time you wrote a firewall rule by hand? Unless you are a CLI masochist, you use the GUI to craft a policy that says something like “prevent DNS queries from any device that isn’t a DNS server (referenced by this DNS Server object list)”.  We create those kinds of policies because we can’t account for every new server appearing on the network that wants to make a DNS query.  We block the ones that don’t need to be doing it, and we modify the DNS Server object list to add new servers when needed.
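
Here’s a tiny sketch of that DNS policy expressed as intent against an object list instead of per-device rules.  The addresses and match logic are invented for illustration; no vendor’s schema looks exactly like this.

```python
# The "DNS Server object list" from the policy above, expressed as data.
DNS_SERVERS = {"10.0.1.53", "10.0.2.53"}

def allowed(src_ip, proto, dst_port):
    """Evaluate a flow against 'only DNS servers may send DNS queries'."""
    if proto == "udp" and dst_port == 53:
        return src_ip in DNS_SERVERS
    return True  # everything else is outside this policy's scope

# A new DNS server appears? Update the object list, not fifty devices.
DNS_SERVERS.add("10.0.3.53")
print(allowed("10.0.3.53", "udp", 53))  # True
print(allowed("10.0.9.9", "udp", 53))   # False
```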

Yet, in networking we’re still playing with access lists and VLAN databases.  We must manually propagate this information throughout the network when updates occur.  Because no one relies on VTP to add and prune information in the network.  The tools we’ve been using to do this work are archaic and failure-prone at best.

Staying Neutral

The inclusion of policy components in Helium will go a long way toward paving the road for more of the same in future releases.  Of course, there is already talk about Cisco’s OpFlex and how it didn’t make the cut for Helium despite being one of the most dense pieces of code proposed for ODL.  It’s good that Cisco and ODL both realized that putting out code that wasn’t ready was only going to hurt in the long run.  When you’ve had seven or eight major releases, you can lay an egg with a poorly implemented feature and it won’t be the end of the world.  But if you put out a stinker in the second release, you may not make it to the third.

But this isn’t about OpFlex.  Or Congress. Or any other policy language that might be proposed for ODL in the future.  It’s about making sure that ODL is a policy-driven controller infrastructure.  Markus Nispel of Extreme talked about SDN at Wireless Field Day 7.  In that presentation, he said that he thinks the industry will standardize on ODL as the northbound interface in the network.  For those not familiar, the northbound interface is the one that is user-facing.  We interact with the northbound controller interface while the southbound controller interface programs the devices.

ODL is making sure the southbound interface is OpenFlow.  What they need to do now is ensure the northbound interface can speak policy to the users configuring it.  We’ve all heard the rhetoric of “network engineers need to learn coding” or “non-programmers will be out of a job soon”.  But the harsher reality is that while network programmers are going to be very busy people on the backend, the day-to-day operations of the network will be handled by different teams.  Those teams don’t speak IOS, Junos, or OpenFlow.  They think in policy-based thoughts.

Ops teams don’t want to know how something is going to work when implemented.  They don’t want to spend hours troubleshooting why a VLAN didn’t populate only to find they typoed the number.  They want to plug information into a policy and let the controller do the rest.  That’s what Helium has started and what ODL represents: an interface into the network for mortal Ops teams.  A way to make that work for everyone, whether it be an OpFlex interface into Cisco APIC programming ACI or a Congress interface to an NSX layer.  If you are the standard controller, you need to talk to everyone, no matter their language.
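
As an illustration of what “speaking policy” to a northbound interface could look like, here’s a sketch loosely shaped like OpenDaylight’s RESTCONF interface.  The resource path and JSON body are simplified assumptions for this example, not the actual group-based policy schema.

```python
import requests

ODL = "http://odl-controller:8181/restconf/config"  # RESTCONF-style base

# A simplified, invented policy body: "web servers may talk to database
# servers on TCP/3306." The real group-based policy schema is far richer.
policy = {
    "contract": {
        "name": "web-to-db",
        "consumer-group": "web-servers",
        "provider-group": "db-servers",
        "allow": [{"protocol": "tcp", "port": 3306}],
    }
}

resp = requests.put(
    f"{ODL}/policy:contracts/contract/web-to-db",
    json=policy,
    auth=("admin", "admin"),  # ODL's well-known development credentials
    timeout=10,
)
resp.raise_for_status()
```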

Tom’s Take

ODL is going to be the controller in SDN.  There is too much behind it to let it fail.  Vendors are going to adopt it as their base standard for SDN.  They may add bells and whistles but it will still be ODL underneath.  That means that ODL needs to set the standard for network interaction.  And that means policy.  Network engineers complain about how fragile and difficult networking is, and in the very same breath say they don’t want the CLI to go away.  What they need to say is that policy gives everyone the flexibility to create robust, fault-tolerant configurations with a minimum of effort while still allowing for other interface options, like CLI and API.  If you are the standard, you can’t play favorites.  Policy is going to be the future of networking interfaces.  If you don’t believe that, you’ll find yourself quickly crushed under the weight of reality.