I was pondering the dichotomy between Software Defined Networking (SDN) and Network Function Virtualization (NFV) the other day. I’ve heard a lot of vendors and bloggers talking about how one inevitably leads to the other. I’ve also seen a lot of folks saying that the two couldn’t be further apart on the scale of software networking. The more I thought about these topics, the more I realized they are two sides of the same coin. The problem, at least in my mind, is one of perspective.
SDN – Planning The Paradigm
Software Defined Networking telegraphs everything about what it is trying to accomplish right there in the name. Specifically, the “Definition” part of the phrase. I’ve made jokes in the past about the lack of definition in SDN as vendors try to adapt their solutions to fit the buzzword mold. What I finally came to realize is that the SDN folks are all about definition. SDN is the Top Down approach to planning. SDN seeks to decompose the network into subsystems that can be replaced or reprogrammed to suit the needs of those things which utilize the network.
As an example, SDN breaks the idea of a switch down into things like the “forwarding plane” and the “control plane” and seeks to replace the control plane with alternative software, whether it be a controller-based architecture using OpenFlow or an overlay network similar to that of VMware/Nicira. Replacing the OS of a switch with something like OpenFlow is easy in this model. It’s just a mechanism for determining which entries are populated in the Content Addressable Memory (CAM) tables of the forwarding plane. In top down design, it’s easy to create a stub entry or “black box” to hold information that flows into it. We don’t particularly care how the black box works from the top of the design, just that it does its job when called upon.
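To make the decomposition concrete, here’s a minimal sketch of the split described above. This is hypothetical toy code, not real OpenFlow — the class names, the dict standing in for a CAM table, and the miss-handling flow are all illustrative assumptions — but it shows the shape of the idea: the forwarding plane is a black box that just looks up entries, and the replaceable control plane decides what those entries are.

```python
# Hypothetical sketch of the control/forwarding plane split -- not a real
# OpenFlow implementation, just the "black box" decomposition in miniature.

class ForwardingPlane:
    """The black box: forwards packets using whatever CAM entries it holds."""
    def __init__(self):
        self.cam_table = {}  # dst MAC -> egress port (stand-in for real CAM)

    def forward(self, dst_mac):
        # Fast path: a CAM hit means the hardware can forward at line rate.
        # A miss (None) would be punted up to the control plane.
        return self.cam_table.get(dst_mac)

class ControlPlane:
    """The replaceable brain: decides which entries populate the CAM table."""
    def __init__(self, plane):
        self.plane = plane

    def handle_miss(self, dst_mac, learned_port):
        # All the policy lives here; swapping this class out changes the
        # switch's behavior without touching the forwarding plane at all.
        self.plane.cam_table[dst_mac] = learned_port

fp = ForwardingPlane()
cp = ControlPlane(fp)

if fp.forward("aa:bb:cc:dd:ee:ff") is None:      # miss -- table starts empty
    cp.handle_miss("aa:bb:cc:dd:ee:ff", learned_port=3)
print(fp.forward("aa:bb:cc:dd:ee:ff"))           # CAM hit now: prints 3
```

The design point is that the top of the design only cares about the `forward()` contract, not how the entries got there — which is exactly why the control plane can be ripped out and replaced.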
Top Down designs tend to run into issues when those black boxes lack detail or are missing some critical functionality. What happens when OpenFlow isn’t capable of processing flows fast enough to keep the CAM table of a campus switch populated with entries? Is the switch going to fall back to process switching the packets? That could be a big issue. Top Down designs are usually very academic and elegant. They also have a tendency to lack concrete examples and real world experience. When you think about it, that says a lot about the early days of SDN – lots of definition of terminology and technology, but a severe lack of actual packet forwarding.
NFV – Working From The Ground Up
Network Function Virtualization takes a very different approach to the idea of turning hardware networks into software networks. The driving principle behind NFV is replication of existing technology in a software state. This is classic Bottom Up design. Rather than spending a large amount of time planning and assembling the perfect system, Bottom Up designers tend to build as they go. They concentrate on making something work first, then on making those pieces work together second.
NFV is great for hands-on folks because it gives concrete, real results almost immediately. Once you’ve converted a load balancer or a router to a purely software-based construct, you can see right away how it works and what the limitations might be. Does it consume too many resources on the hypervisor? Does it excel at forwarding small packets? Does switching a large packet locally cause a fault? These are problems that can be corrected in the individual system rapidly rather than waiting to modify the overall plan to account for difficulties in the virtualization process.
Bottom Up design does suffer from some issues as well. The focus in Bottom Up is on getting things done on a case-by-case basis. What do you do when you’ve converted all your hardware to software? Do your NFV systems need to talk to one another? That’s usually where Bottom Up design starts breaking down. Without a grand plan at a higher level to ensure that systems can talk to each other, this design methodology falls back to a series of “hacks” to get them connected. Units developed in isolation aren’t required to play nice with everyone else until they are forced to do so. That leads to increasingly complex and fragile interconnection systems that could fail spectacularly should the wrong thread be yanked with sufficient force.
Which method is better? Should we spend all our time planning the system and hope that our PowerPoint designs work the right way when someone codes them in a few months? Or should we say “damn the torpedoes” and start building things left and right and hope that someone will figure out a way to tie all these individual pieces together at some point?
Surprisingly, the most successful design requires elements of both. People need at least a basic plan when they set out to change the networking world. Once the ideas are sketched out, you need a team of folks willing to burn the midnight oil and implement those ideas in real life to ensure that the plan works the right way. The guidance from the top is essential to making sure everything works together in the end.
Whether you are leading from the top or the bottom, remember that everything has to meet in the middle sooner or later.