An Educational SDN Use Case

During the VMUnderground Networking Panel, we had a great discussion about software defined networking (SDN), among other topics. It seems that SDN is a big unknown for many out there. One of the reasons for this is the lack of specific applications of the technology. OSPF and SQL are things that solve problems. Can the same be said of SDN? One specific question regarded how to use SDN in small-to-medium enterprise shops. I fired off an answer from my own experience.

Since then, I’ve had a few people cite my example as a great use case for SDN. I decided that I needed to develop it a bit more now that I’ve had time to think about it.

Schools are a great example of the kinds of “do more with less” organizations that are becoming more common. They have enterprise-class networks and needs and live off budgets that wouldn’t buy janitorial supplies. In fact, if it weren’t for E-Rate, most schools would have technology from the Stone Age. But all this new tech doesn’t help if you can’t find a way for it to be used to the fullest for the purposes of educating students.

In my example, I talked about the shift from written standardized testing to online assessments. Oklahoma and Indiana are leading the way in getting rid of Scantrons and #2 pencils in favor of keyboards and monitors. The process works well for the most part with proper planning. My old job saw lots of scrambling to prep laptops, tablets, and lab machines for the rigors of running the test. But no amount of pre-config could prepare for the day when it was time to go live. On those days, the network was squarely in the sights of the administration.

I’ve seen emails go around banning non-testing students from the computers. I’ve seen hard-coded DNS entries on testing machines while the rest of the school had DNS taken offline to keep them from surfing the web. Redundant circuits. QoS policies that would make voice engineers cry. All in the hope of keeping the online test bandwidth free to get things completed in the testing window. All the while, I was thinking to myself, “There has got to be an easier way to do this…”

Redefining with Software

Enter SDN. The original use case for SDN at Stanford was network slicing. The Next-Gen Network Team wanted to use the spare capacity of the network for testing without crashing the whole system. Being able to reconfigure the network on the fly is a huge leap forward. Pushing policy into devices without CLI cuts down on the resume-generating events (RGE) in production equipment. So how can we apply these network slicing principles to my example?

On the day of the test, have the configuration system push down a new policy that gives the testing machines a guaranteed amount of bandwidth. This reservation will ensure each machine is able to get what it needs without being starved out. With SDN, we can set this policy on a per-IP basis to ensure it is enforced. This slice will exist separately from the production network to ensure that no one starting a huge FTP transfer or video upload will disrupt testing. By leaving the remaining bandwidth intact for the rest of the school’s production network, administrators can ensure that the rest of the student body isn’t impacted during testing. With the move toward flipped classrooms and online curriculum augmentation, having available bandwidth is crucial.
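
To put a rough shape on that idea, here’s a minimal sketch of what “push down a policy on test day” might look like in code. Everything in it is an assumption for illustration: the controller address, the /policies REST endpoint, and the policy schema are invented stand-ins rather than any real product’s API.

```python
# A minimal sketch, assuming a hypothetical SDN controller REST API.
# The address, the /policies endpoint, and the JSON schema are invented.
import requests

CONTROLLER = "http://sdn-controller.district.local:8080"

def reserve_testing_bandwidth(testing_ips, guaranteed_kbps):
    """Give each testing machine a per-IP bandwidth floor for the day."""
    for ip in testing_ips:
        policy = {
            "name": f"state-testing-{ip}",
            "match": {"src-ip": ip},
            "guarantee-kbps": guaranteed_kbps,  # a floor, not a cap
            "priority": 100,                    # wins over the default slice
        }
        requests.post(f"{CONTROLLER}/policies", json=policy).raise_for_status()

def release_testing_bandwidth(testing_ips):
    """Tear the slice down the moment the testing window closes."""
    for ip in testing_ips:
        requests.delete(f"{CONTROLLER}/policies/state-testing-{ip}")

# Thirty lab machines, 2 Mbps guaranteed each, gone again after the test:
lab = [f"10.20.30.{host}" for host in range(10, 40)]
reserve_testing_bandwidth(lab, guaranteed_kbps=2000)
```

The real point is the lifecycle: the same script that builds the slice tears it down when the window closes, which is exactly the step that gets missed when this is done by hand.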

Could this be done via non-SDN means? Sure. Granted, you’ll have to plan the QoS policy ahead of time and find a way to classify your end-user workstations properly. You’ll also have to tune things to make sure no one is dominating the test machine pool. And you have to get it right on every switch you program. And remove it when you’re done. Unless you missed a student or a window, in which case you’ll need to reprogram everything again. SDN certainly makes this process much easier and less disruptive.


Tom’s Take

SDN isn’t a product. You can’t order a part number for SDN-001 and get a box labeled SDN. Instead, it’s a process. You apply SDN to an existing environment and extend its capabilities through new processes. Those processes need use cases. Use cases drive business cases. Business cases provide buy-in from the stakeholders. That’s why discussing cases like the one above is so important. When you can find a use for SDN, you can get people to accept it. And that’s half the battle.

Do We Need To Redefine Open?

There’s a new term floating around that seems to be confusing people left and right.  It’s been used to describe a methodology as well as thrown around in marketing everywhere.  People are using it without really knowing what it means.  And this isn’t the first time that’s happened.  Let’s look at the word “open” and why it has become so confusing.

Talking Beer

For those at home familiar with Linux, “open” isn’t the first loaded term to come to mind.  “Free” is another word that has been used in the past with a multitude of loaded meanings.  The original idea around “free” in relation to the Open Source movement is that the software is freely available.  There are no restrictions on use, and the source is always available.  The source code for the Linux kernel can be searched and viewed at any time.

Free describes the fact that the Linux kernel is available for no cost.  That’s great for people that want to try it out.  It’s not so great for companies that want to try and build a business around it, yet Red Hat has managed to do just that.  How can they sell something that doesn’t cost anything?  It’s because they keep the notion of free sharing of code alive while charging people for support and special packages that interface with popular non-free software.

The dichotomy between software that is unencumbered and software that costs nothing is so confusing that the movement coined a phrase to describe it:

Free as in freedom, not free as in beer.

When you talk about freedom, you are unrestricted.  You can use the software as the basis for anything.  You can rewrite it to your heart’s content.  That’s your right for free software. When you talk about free beer, you set the expectation that whatever you create will be available at no charge.  Many popular Linux distributions are available at no cost.  That’s like getting beer for nothing.

Open, But Not Open Open

The word “open” is starting to take on aspects of the “free” argument.  Originally, the meaning of open came from the Open Source community.  Open Source means that you can see everything about the project.  You can modify anything.  You can submit code and improve something.  Look at the OpenDaylight project as an example.  You can sign up, download the source for a given module, and start creating working code.  That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect.  They are creating the network of the future and allowing the community to do the same.

But “open” is being redefined by vendors.  Open for some means “you can work with our software via an API, but you can’t see how everything works”.  This is much like the binary-only NVIDIA driver.  The proprietary driver is pre-compiled and available to download at no cost, but you can’t see or modify the source at all.  While it works with open source software, it’s not open.

A conversation I had during Wireless Field Day 7 drove home the idea of this new “open” in relation to software defined networking.  Vendors tout open systems to their customers. They standardize on northbound interfaces that talk to orchestration platforms and have API support for other systems to call them.  But the southbound interface is proprietary.  That means that only their controller can talk to the network hardware attached to it.  Many of these systems have “open” in the name somewhere, as if to project the idea that they work with any component makeup.

This new “open” definition of having proprietary components with an API interface feels very disingenuous.  It also makes for some very awkward conversations:

$VendorA: Our system is open!

ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?

$VendorA: What exactly do you mean by “full”?


Tom’s Take

Using “open” to market these systems is wrong.  Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong.  But we don’t have a word to describe this new idea of “open”.  It’s not exactly closed.  Perhaps we can call it something else.  Maybe “ajar”.  That might make the marketing people a bit upset.  “Try our new AjarNetworking controller.  As open as we wanted to make it without closing everything.”

“Open” will probably be dominated by marketing in the next couple of years.  Vendors will try to tell you how well they interoperate with everyone.  And I will always remember how open a protocol like SIP is, and how everyone uses that openness against it.  If we can’t keep the definition of “open” clean, we need to find a new term.

Who Wants A Free Puppy?

Years ago, my wife was out on a shopping trip. She called me excitedly to tell me about a blonde shih-tzu puppy she found and just had to have. As she talked, I thought about all the things that this puppy would need to thrive. Regular walks, food, and love are paramount on the list. I told her to use her best judgement rather than flat out saying “no”. Which is also how I came to be a dog owner. Today, I’ve learned there is a lot more to puppies (and dogs) than walks and feeding. There is puppy-proofing your house. And cleaning up after accidents. And teaching the kids that puppies should be treated gently.

An article from Martin Glassborow last week made me start thinking about our puppy again. Scott McNealy is famous for having told the community that “Open Source is free like a puppy” back in 2005. While this was a dig at the community in regards to the investment that open source takes, I think Scott was right on the mark. I also think Martin’s recent article illustrates some of the issues that management and stakeholders don’t see with community projects.

Open software today takes care and feeding. Only instead of a single OS on a server in the back of the data center, it’s all about new networking paradigms (OpenFlow) or cloud platform plays (OpenStack). This means there are many more moving parts. Engineers and programmers get it. But go to the stakeholders and try to explain what that means. The decision makers love the price of open software. They are ambivalent about the benefits to the community. However, the cost of open projects is usually much higher than the price. People have to invest to see benefits.

TNSTAAFL

At the recent SolidFire Summit, two cloud providers were talking about their software. One was hooked in to the OpenStack community. He talked about having an entire team dedicated to pulling nightly builds and validating them. They hacked their own improvements and pushed them back upstream for the good of the community. He seemed to love what he was talking about. The provider next to him was just a little bit larger. When asked what his platform was he answered “CloudStack”. When I asked why, he didn’t hesitate. “They have support options. I can have them fix all my issues.”

Open projects appeal to the hobbyist in all of us. It’s exciting to build something from the ground up. It’s a labor of love in many cases. Labors of love don’t work well for some enterprises though. And that’s the part that most decision makers need to know. Support for this awesome new thing may not always be immediate or complete. To bring this back to the puppy metaphor, you have to have patience as your puppy grows up and learns not to chew on slippers.

The reward for all this attention? A loving pet in the case of the puppy. In the case of open software, you have a workable framework all your own that is customized to your needs and very much a part of your DNA. Supported by your staff and hopefully loved as much or more than any other solution. Just like dog owners that look forward to walking the dog or playing catch at the dog park, your IT organization should look forward to the new and exciting challenges that can be solved with the investment of time.


Tom’s Take

Nothing is free. You either pay for it with money or with time. Free puppies require the latter, just as free software projects do. If the stakeholders in the company look at it as an investment of time and energy then you have the right frame of mind from the outset. If everything isn’t clear up front, you will find yourself needing to defend all the time you’ve spent on your no-cost project. Hopefully your stakeholders are dog people so they understand that the payoff isn’t in the price, but the experience.

SDN and Elephants

There is a parable about six blind men that try to describe an elephant based on touch alone.  One feels the tail and thinks the elephant is a rope.  One feels the trunk and thinks it’s a tree branch.  Another feels the leg and thinks it is a pillar.  Eventually, the men realize they have to discuss what they have learned to get the bigger picture of what an elephant truly is.  As you have likely guessed, there is a lot here that applies to SDN as well.

Point of View

Your experience with IT is going to dictate your view on SDN.  If you are a DevOps person, you are going to see all the programmability aspects of SDN as important.  If you are a network administrator, you are going to latch on to the automation and orchestration pieces as a way to alleviate the busy work of your day.  If you are a person that likes to have a big picture of your network, you will gravitate to the controller-based aspects of protocols like OpenFlow and ACI.

However, if you concentrate on the parts of SDN that are most familiar, you will miss out on the bigger picture.  Just as an elephant is more than just a trunk or a tail, SDN is more than the individual pieces.  Programmable interfaces and orchestration hooks are means to an end.  The goal is to take all the pieces and make them into something greater.  That takes vision.

The Best Policy

I like what’s going on both with Cisco’s ACI and the OpenStack Congress project.  They’ve moved past the individual parts of SDN and instead are working on creating a new framework to apply those parts. Who cares how orchestration works in a vacuum?  Now, if that orchestration allows a switch to be auto-provisioned on boot, that’s something.  APIs are a part of a bigger push to program networks.  The APIs themselves aren’t the important part.  It’s the interface that interacts with the API that’s crucial.  Congress and ACI are that interface.

Policy is something we all understand.  Think for a moment about quality of service (QoS).  Most of the time, engineers don’t know LLQ from CBWFQ or PQ.  But if you describe what you want to do, you get it quickly.  If you want a queue that guarantees a certain amount of bandwidth but has a cap, you know which implementation you need to use.  With policies, you don’t need to know even that.  You can create the policy that says to reserve a certain amount of bandwidth but not to go past a hard limit.  But you don’t need to know if that’s LLQ or some other form of queuing.  You also don’t need to worry about implementing it on specific hardware or via a specific vendor.
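
As a sketch of that separation, imagine a small translation layer that takes the outcome you described and picks the implementation for you.  The policy schema and the mapping below are invented for illustration, and the rendered commands are simplified IOS-style LLQ syntax rather than a drop-in config.

```python
# A sketch under an assumed schema: describe the outcome, let a
# translation layer choose the queuing mechanism and platform syntax.
def make_policy(guarantee_mbps, cap_mbps):
    return {"guarantee_mbps": guarantee_mbps, "cap_mbps": cap_mbps}

def render(pol, platform):
    if platform == "cisco-ios":
        # A guaranteed floor with a hard ceiling maps naturally onto an
        # LLQ-style priority queue with a policer (simplified syntax).
        return [
            "policy-map TESTING",
            " class TESTING-TRAFFIC",
            f"  priority {pol['guarantee_mbps'] * 1000}",   # kbps
            f"  police {pol['cap_mbps'] * 1000000}",        # bps
        ]
    # Another platform would render the same intent in its own syntax.
    raise NotImplementedError(platform)

print("\n".join(render(make_policy(10, 20), "cisco-ios")))
```

The author of the policy only ever touches the first function.  Which queuing discipline falls out the other end is the translation layer’s problem.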

Policies are agnostic.  So long as the API has the right descriptors you can program a Cisco switch just as easily as a Juniper router.  The policy will do all the work.  You just have to figure out the policy.  To tie it back to the elephant example, you find someone with the vision to recognize the individual pieces of the elephant and make them into something greater than a rope and pillar.  You then realize that the elephant isn’t quite like what you were thinking, but instead has applications above and beyond what you could envision before you saw the whole picture.


Tom’s Take

As far as I’m concerned, vendors that are still carrying on about individual aspects of SDN are just trying to distract from a failure to have vision.  VMware and Cisco know where that vision needs to be.  That’s why policy is the coin of the realm.  Congress is the entry point for the League of Non-Aligned Vendors.  If you want to fight the bigger powers, you need to back the solution to your problems.  If Congress could program Brocade, Arista, and HP switches effortlessly then many small-to-medium enterprise shops would rush to put those devices into production.  That would force the superpowers to take a look at Congress and interface with it.  That’s how you build a better elephant – by making sure all the pieces work together.

End Of The CLI? Or Just The Start?

Windows 8.1 Update 1 launches today. The latest chapter in Microsoft’s newest OS includes a feature people have been asking for since release: the Start Menu. The biggest single UI change in Windows 8 was the removal of the familiar Start button in favor of a combined dashboard / Start screen. While the new screen is much better for touch devices, the desktop population has been screaming for the return of the Start Menu. Windows 8.1 brought the button back, although it only linked to the Start screen. Update 1 promises to add functionality to the button once more. As I thought about it, I realized there are parallels here that we in the networking world can learn from as well.

Some very smart people out there, like Colin McNamara (@ColinMcNamara) and Matt Oswalt (@Mierdin) have been talking about the end of the command line interface (CLI). With the advent of programmable networks and API-driven configuration the CLI is archaic and unnecessary, or so the argument goes. Yet, there is a very strong contingent of the networking world that is clinging to the comfortable glow of a terminal screen and 80-column text entry.

Command The Line

API-driven interfaces provide flexibility that we can’t hope to match in a human interface. There is no doubt that a large portion of the configuration of future devices will be done via API call or some sort of centralized interface that programs the end device automatically. Yet, as I’ve said before, engineers don’t like losing visibility into a system. Getting rid of the CLI for the sake of streamlining a device is a bad idea.

I’ve worked with many devices that don’t have a CLI. Cisco Catalyst Express switches leap immediately to mind. Other devices, like the Cisco UC500 SMB phone system, have a CLI, but its use is discouraged. In fact, when you configure the UC500 using the CLI, you start getting warnings about not being able to use the GUI tool any longer. Yet there are functions that are only visible through the CLI.

Non-Starter

Will the programmable networking world make the same mistake Microsoft did with Windows 8? Even a token CLI is better than cutting it out entirely. Programmable networking will allow all kinds of neat tricks. For instance, we can present a Cisco-like CLI for one group of users and a Juniper-like CLI for a different group, both accomplishing the same results. We don’t need to have these CLIs sitting around in resident memory. We should be able to generate them on the fly or call the appropriate interfaces from a centralized library. Extensibility, even in the archaic interface of last resort.
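
Here’s a toy sketch of that trick: two familiar-looking dialects parsing down to one backend call. The command grammars and the api_set_description() function are invented stand-ins, not any vendor’s actual parser or API.

```python
# A toy sketch: two CLI "skins" generated over one backend call. The
# grammars and api_set_description() are invented, not a vendor parser.
def api_set_description(interface, text):
    # Stand-in for the real controller or device API call.
    print(f"[api] {interface} description -> {text!r}")

def parse_cisco(line):
    # e.g. "interface Gi0/1 description uplink to core"
    words = line.split()
    if words[:1] == ["interface"] and "description" in words:
        i = words.index("description")
        return api_set_description(words[1], " ".join(words[i + 1:]))
    raise ValueError("unrecognized command")

def parse_junos(line):
    # e.g. "set interfaces ge-0/0/1 description 'uplink to core'"
    words = line.split()
    if words[:2] == ["set", "interfaces"] and words[3:4] == ["description"]:
        return api_set_description(words[2], " ".join(words[4:]).strip("'"))
    raise ValueError("unrecognized command")

# Two engineers, two comfortable dialects, one result:
parse_cisco("interface Gi0/1 description uplink to core")
parse_junos("set interfaces ge-0/0/1 description 'uplink to core'")
```

Neither “CLI” is real. They’re skins generated over the same library call, which is the point.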

If all our talk revolves around removing the tool people have been using for decades to program devices, we will make enemies quickly. The talk needs to shift away from the death of the CLI and more toward the advantages gained through adding API interfaces to your programming. Even if our interface for calling those APIs looks similar to a comfortable CLI, we’re going to win more converts up front if we give them something they recognize as a transition mechanism.


Tom’s Take

Microsoft bit off more than they could chew when they exiled the Start Menu to the same pile as DOSShell and Microsoft Bob. People have spent almost 20 years humming the Rolling Stones’ “Start Me Up” as they click on that menu. Microsoft drove users to this approach. To pull it out from under them all at once with no transition plan made for unhappy users. Networking advocates need to be just as cognizant of the fact that we’re headed down the same path. We need to provide transition options for the die-hard engineers out there so they can learn how to program devices via non-traditional interfaces. If we try to make them quit cold turkey you can be sure the Start Menu discussion will pale in comparison.

The Visibility Event Horizon

I’ve always been a science nerd, especially when it comes to astronomy.  I’ve always been fascinated by stellar objects, and the most fascinating has to be the black hole: a region of space with intense gravity formed from the collapse of a stellar body.  I’ve read all about the peculiar properties of classical black holes.  Of particular interest to the networking field is the idea of the event horizon.

An event horizon is a boundary beyond which events no longer affect observers.  In layman’s terms, it is the point of no return for things falling into a black hole.  Anything that falls below the event horizon disappears from the perspective of the observer.  From the point of view of someone falling into the black hole, they never reach the event horizon yet are unable to contact the outside world.  The event horizon marks the point at which information disappears in a system.

How does this apply to the networking world?  Well, every system has a visibility boundary.  We tend to summarize information heading in both directions.  To a network engineer, everything above the transport layer of the OSI model doesn’t really matter.  Those are application decisions that are made by programmers that don’t affect the system other than to be a drain on resources.  To the programmers and admins, anything below the session layer of the OSI model is of little importance.  As long as a call can be made to utilize network resources, who cares what’s going on down there?

Looking Down

Software Defined Networking (SDN) vendors are enforcing these event horizons.  VMware NSX and the Microsoft Hyper-V virtual networking solution both function in a similar manner.  They both create overlay networks that virtualize resources below the level of the host.  Tunnels are created between devices or systems that ride on top of the physical network beneath.  This means that the overlay can function no matter the state of the underlay, provided a path between the two hosts exists.  However, it also means that the overlay obscures the details of the physical network.

There are many that would call this abstraction.  Why should the hosts care about the network state?  All they really want is a path to reach a destination.  It’s better to give them what they want and leave the gory details up to routing protocols and layer 2 loop avoidance mechanisms.  But, that abstraction becomes an event horizon when the overlay is unwilling (or unable) to process information from the underlay network.

Applications and hosts should be aware enough to listen to network conditions.  Overlays should not rely on OSPF or BGP to make tunnel endpoint rerouting decisions.  Putting undue strain on network processing is part of what has led to the situation we have now, where network operating systems need to be complex and intensive to calculate solutions to problems that could be better solved at a higher level.

If the network reports a traffic condition, like a failed link or a congested WAN circuit, that information should be able to flow back up to the overlay and act as a data point to trigger an alternate solution or path.  Breaking the event horizon for information flowing back up toward the overlay is crucial to allow the complex network constructs we’ve created, such as fabrics, to utilize the best possible solutions for application traffic.
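
A toy sketch of that information flow, with the event format and rerouting hook as invented assumptions: the overlay demotes a path the moment the underlay reports trouble, rather than waiting on OSPF or BGP below to sort it out.

```python
# A toy sketch: underlay telemetry flowing "up" to an overlay controller.
# The event format and the reroute hook are invented assumptions.
from dataclasses import dataclass

@dataclass
class UnderlayEvent:
    kind: str       # e.g. "link-down" or "congestion"
    path_id: str    # which physical path the event affects
    detail: str = ""

class OverlayController:
    def __init__(self, tunnels):
        # tunnel name -> ordered list of candidate underlay paths
        self.tunnels = tunnels

    def on_underlay_event(self, event):
        """React to underlay news instead of waiting for reconvergence."""
        for name, paths in self.tunnels.items():
            if paths and paths[0] == event.path_id:
                paths.pop(0)  # demote the troubled path
                if paths:
                    print(f"{name}: moved off {event.path_id} "
                          f"({event.kind}: {event.detail}) onto {paths[0]}")

ctrl = OverlayController({"web-to-db": ["wan-mpls", "wan-broadband"]})
ctrl.on_underlay_event(UnderlayEvent("congestion", "wan-mpls", "95% util"))
```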

Gazing Up

That’s not to say the event horizon doesn’t exist in the other direction as well.  The network has historically been ignorant of the needs of applications at a higher layer.  Network engineers have spent thousands of hours creating things like Quality of Service in an attempt to meet the unique needs of higher-level programs.  Sometimes this works in a vacuum with no problems, provided we’ve guessed accurately enough to predict traffic patterns.  Other times, it fails spectacularly when the information changes too quickly.

The underlay network needs to destroy the event horizon that prevents information at higher layers from flowing down into the network.  Companies that have historically concentrated on networking alone have started to see how important this intelligence can be.  If the network can respond quickly to the needs of applications, developers can provide enough information to ensure their programs are treated fairly under changing network conditions, without having to monitor those conditions themselves.  In this way, the application people can no longer claim the network is a “black hole”.


Tom’s Take

Even as I was writing this, a number of news stories came out from a paper by Professor Stephen Hawking that stated that the classical event horizon doesn’t exist.  The short, short version is that the conditions close to a quantum singularity preclude a well-defined boundary that prevents the escape of all information, including light.  Pretty heady stuff.

In networking, we do have the luxury of a well-defined boundary between underlay networks and overlay networks.  We’ve seen the damage caused by the apparent event horizon for years.  Critical information wasn’t flowing back and forth as needed to help each side provide the best experience for users and engineers.  We need to ensure that this barrier is removed going forward.  The networking people can’t exist in a vacuum pretending that applications don’t have needs.  The overlay admins need to understand that the underlay is a storehouse of critical information and shouldn’t be ignored simply because tunnels are awesome.  Knowing about the event horizon is the first step to finding a way to blast through it.

SDN and the Appeal of Abstraction

During the recent Storage Field Day 4, I was listening to Howard Marks (@DeepStorageNet) talking about storage Quality of Service (QoS).  He was describing something he wanted his storage array to do that sounded suspiciously similar to strict priority queuing (or Low Latency Queuing if you speak Cisco).

As I explained how we solved the problem of allowing a specific amount of priority for a given packet stream, it finally dawned on me how software defined networking had the ability to increase productivity in organizations.  It’s not via automation, although that is a very alluring feature.  It’s because SDN can act as a language interpreter for those that don’t speak the right syntax.

It’s said that a picture is worth a thousand words.  As true as that is, I would also argue that a few words can paint a thousand pictures.  It’s all in the interpretation.  If I told you I wanted a picture of a “meadow at sunrise”, you’ve probably come up with an idea in your head of what that would look like.  And the odds are good that it’s totally different from my picture.  These simple words are open to interpretation on a wide scale.  Now, let’s constrain the pictures a bit based on the recipient.  Someone that lives in France would have a totally different idea of a meadow than someone that lives in San Francisco.  Each view is totally valid, but the construction of the picture will be dictated by the thought process of the person.  They each know what the meadow looks like, they’re just a bit different.

Painting with SDN

Extend this metaphor to software.  Specifically to networking and storage.  I have a concept that I need to implement.  User A needs guaranteed access to a specific resource for a period of time that should not exceed a given threshold.  A phone may need priority queue access to a slow WAN link.  A backup client may need guaranteed access to a target server without overrunning the link bandwidth.  Two totally different use cases that can be described in the same general language.  Today, I would need two people to implement those instructions on different systems.  Programming a storage array is much different than programming a router or a switch.

But the abstraction of SDN allows me to input a given command and have it executed on dissimilar hardware via automation and integration.  The SDN management system doesn’t care that I want LLQ on a router or strict priority QoS on a backup client.  It knows that there should be an implementation of the given commands on a set of attached systems.  It’s up to the API integration with the systems to determine the syntax needed to execute the commands.  As a network engineer, I don’t need to know the commands to create the QoS construct on the storage array.  I just need to know what I want to accomplish.  The software takes care of the rest.

SDN could usher in a new age for natural language programming.  I often hear my friends in the CCIE community complaining that people only learn the commands to make something happen.  They ignore the basic concepts behind why something works the way it does.  If I can’t explain the concept to my 8-year old son, I don’t know it very well.  I can’t abstract it to the point where a simpler mind can understand.  What if that simple mind had the capability to translate those instructions into actionable programming?  What if I just had to state what I wanted to do?  “Ensure that all traffic on Link A is given priority treatment, except if it is for the backup server.”  The system can create API calls to program the access control lists and storage arrays to take care of all that without needing to fire up a CLI session or a browser.

This is going to require a lot of programming on the front end.  The management system needs to be able to interpret keywords in the instruction set to pick out the right execution items.  Then the system needs to know where to dispatch the commands to make them land on the right APIs.  In the case of the above example, the system would need to know to send two different sets of commands – one to the storage array to provide metered access during backup windows and another set to the networking gear to ensure the right QoS policies were enforced during the given time window.  Oh, and you’re going to want to have the system go back and clean up those ACLs when it’s time for production hours to start again.
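
As a sketch of that dispatch, here’s the backup example boiled down to code.  The intent schema and the three backend functions are invented placeholders; a real system would render actual ACLs, array settings, and scheduled cleanup jobs behind each one.

```python
# A sketch of the dispatch idea: one statement of intent fans out to two
# very different backends plus a cleanup job. Every name here is an
# invented placeholder, not a real product API.
intent = {
    "action": "prioritize",
    "scope": "Link A",
    "except": "backup-server",
    "window": ("22:00", "04:00"),  # the backup window; revert afterward
}

def program_network_qos(job):
    # Would render ACLs and a priority policy on the routers around Link A.
    print(f"[network] priority on {job['scope']}, exempting {job['except']}")

def program_storage_array(job):
    # Would set metered access on the array during the backup window.
    print(f"[storage] meter {job['except']} between {job['window'][0]} "
          f"and {job['window'][1]}")

def schedule_cleanup(job):
    # Both policies get unwound when production hours start again.
    print(f"[scheduler] revert both policies at {job['window'][1]}")

for step in (program_network_qos, program_storage_array, schedule_cleanup):
    step(intent)
```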


Tom’s Take

This isn’t going to be easy by any means.  But adding all the value into the front-end management system means that any other system attached on the backend via API is going to benefit.  I’d rather do the work in the right place than spend all our time on the backend while neglecting the interface to the whole system.  Engineers are notorious for writing terrible GUIs.  Let’s take the time to abstract the bad things away and instead make something that’s really going to change the way we program systems.

The OpenFlow Longbow

The label of disruption seems to be thrown around quite a bit.  I’ve heard tablets being called the disruptive technology when it comes to PCs.  I’ve also heard people talking about software defined networking (SDN) as the disruptive technology to the way that we’ve been doing networking for the last decade.  Nowhere is that more true than with OpenFlow.  But what does this disruption mean?  Aren’t we still essentially forwarding packets the same way we have in the past?  To get a frame of reference, let’s look at one of my favorite disruptive technologies – the longbow.

Who Are Yew?

The Welsh yew longbow had been a staple of the English military as far back as 600 AD.  The recurve shape of the bow provided for a faster, longer arrow flight than the shorter bows used by foot soldiers and mounted cavalry.  The wood was especially important, as the yew tree was only really found in abundance in Wales.  Longbows existed in one form or another in many armies in Europe, but the Welsh bow was the only one that was feared.

The advantage was never more apparent than during the Hundred Years War between England and France.  The longbow was deployed as mid-range artillery to harass advancing troops.  There are questions about whether or not the arrows were able to pierce the plate armor used by knights at the time, but the results of the archery corps can’t be denied.  Especially in the Battle of Agincourt.  The English used longbows to slow the advance of French forces in armor, tiring them out as they crossed the field and holding them at bay until the heavier foot soldiers of the English army could be repositioned to take the advancing enemy apart.  Agincourt was a win for the English, and five years later the war was over.

The longbow proved itself to be a very disruptive technology.  Not because it killed soldiers better or faster than a mace or broadsword.  It was disruptive because it changed the way generals composed their armies.  Instead of relying on heavy assault troops in armor to punch a hole in the enemy lines, the longbow forced a soldier to become more mobile, with less armor, to be able to cross the range of the bow much more quickly and close to a range where the technology advantage was negated.  Bowmen were at a distinct disadvantage at point-blank range.  Armies grew more mobile all the way up to the point where a new technology disrupted the reign of the longbow: gunpowder.  Once musketeers became more prevalent, they replaced the role traditionally held by the longbow archer.

Disrupting the Flow

How does this history lesson apply to OpenFlow?  OpenFlow is poised to disrupt networking in the same way as the longbow forced people to take a new look at their armies.  OpenFlow takes something we know about and turns it on its head.  It gives us much more control over how a switch forwards packets.  It also makes us ask questions about how we build our networks.

Questions about big core switches, spine-and-leaf topologies, and the intelligence of edge devices all become very pertinent in an OpenFlow design.  Should I put a larger switch at the edge since it’s going to be doing a lot of heavy lifting?  Should I use a fabric in place of a three-tier design?  Will my controller allow me to use different interconnects to ensure high-speed traffic flows east and west in the data center?
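
To see what the new unit of control actually looks like, here’s a single flow rule composed in software.  The JSON follows the general shape accepted by Ryu’s ofctl_rest API, but treat the specifics (the dpid, ports, queue IDs, and controller URL) as assumptions for illustration.

```python
# One explicit forwarding decision, composed in software and pushed to a
# switch. The JSON shape follows the general flow-mod structure used by
# Ryu's ofctl_rest API; dpid, ports, queue IDs, and URL are assumptions.
import json

flow_rule = {
    "dpid": 1,                       # which switch (datapath) gets the rule
    "priority": 200,                 # beats the default "act normal" rules
    "match": {
        "eth_type": 0x0800,          # IPv4 only
        "ipv4_src": "10.20.30.0/24", # say, the east-west storage subnet
    },
    "actions": [
        {"type": "SET_QUEUE", "queue_id": 1},  # a guaranteed-bandwidth queue
        {"type": "OUTPUT", "port": 2},         # pinned to a specific uplink
    ],
}

print(json.dumps(flow_rule, indent=2))
# With a controller listening, installing it is one REST call:
# requests.post("http://controller:8080/stats/flowentry/add", json=flow_rule)
```

Forwarding stops being a side effect of MAC learning and becomes an entry you compose, push, and delete on your own schedule.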

OpenFlow is taking over some vendors’ offerings.  NEC and HP have already committed to OpenFlow designs.  Even companies that haven’t really embraced OpenFlow have decided to offer it rather than dismiss it.  Arista and Cisco are offering new switches that have support for OpenFlow, even if that support may not extend to more proprietary enhancements right now.  Just like the longbow, OpenFlow is forcing the opposition to reconfigure the way they fight the battle.  They may not like it.  They may even say in private that they’re just doing it to mollify a part of the customer base looking for specific points in a proposal.  But they are still dedicating time and effort to OpenFlow all the same.


Tom’s Take

Disruption happens all the time.  We don’t use cell phones in bags hardwired into our vehicles any more.  Our computers are no longer the size of a broom closet, running off punch cards.  Just like weapons in the ancient world, whoever comes up with a more effective way of winning battles enjoys a distinct advantage for a time.  Eventually, something comes along that disrupts the disruption.  OpenFlow is currently the king of the SDN battlefield.  It holds that title by virtue of how many people are racing to interoperate with it.  Eventually, it will be dethroned just as the longbow was.  The key will be recognizing the next new thing first and using it to your advantage.  And arming your archers with it.

Should Microsoft Buy Big Switch?

Network virtualization is getting more press than ever.  The current trend seems to be pitting the traditional networking companies, like Cisco and Juniper, against the upstarts in the server virtualization companies, like VMware and OpenStack.  To hear the press and analysts talk about it makes one think that these companies represent all there is in the industry.

Whither Microsoft?

One company that seems to have been left out of the conversation is Microsoft.  The stalwarts of Redmond have been turning heads with their rapid pace of innovation to reach parity with VMware’s offerings.  However, when the conversation turns to networking, Microsoft is usually left out in the cold.  That’s because their efforts at networking in the past have been…problematic.  They are very service oriented and care little for the world outside their comfortable servers.  That won’t last forever.  VMware will be able to easily shift the conversation away from feature parity with Hyper-V and concentrate on all the networking expertise it now has that its competitor is missing.

Microsoft can fix that problem with a small investment.  If you can’t innovate by building it, you need to buy it.  Microsoft has the cash to buy several startups, even after sinking a load of it into Nokia.  But which SDN-focused company makes the most sense for Microsoft?  I spent a lot of time thinking about this very question and the answer became clear for me:  Microsoft needs to buy Big Switch Networks.

A Window On The Future

Microsoft needs SDN expertise.  They have no current networking experience outside of creating DHCP and DNS services on their platforms.  I mean, did anyone ever use their Network Access Protection solution as a NAC option?  Microsoft has traditionally created bare-bones network constructs to please their server customers.  They think networking is a resource outside their domain, which coincidentally is just how their competitors used to look at it as well.  At least until Martin Casado changed their minds.

Big Switch is a perfect fit for Microsoft.  They have the chops to talk OpenFlow.  Their recent shift away from overlays to software on bare metal would play well as a marketing point against VMware and their “overlays are the best way” message.  They could also help Microsoft do more development on NV-GRE, the also-ran to VxLAN.  Ivan Pepelnjak (@IOSHints) was pretty impressed with NV-GRE last December, but it’s dropped off the radar in the wake of VMware embracing VxLAN in NSX.  I think having a bit more development work from the minds at Big Switch would put it back into the minds of some smaller network virtualization companies looking to support something other than the de facto standard.  I know that Big Switch has moved away from the overlay model, but if NV-GRE can easily be adapted to the work Big Switch was doing a few months ago, it would be a great additional offering to the idea of running everything in an SDN-enabled switch OS.

Microsoft will also benefit from the pile of SDN applications that Big Switch is rumored to have sitting around, festering for lack of attention.  Applications like network taps sell Big Switch products now.  With NSX introducing the ideas of integrated load balancers and firewalls into the base product, Big Switch is going to be hard pressed to charge extra for them.  Instead, they’re going to have to go out on a limb and finish developing them past the alpha stage and hope that they are enough to sell more product and recoup the development costs.  With the deep pockets in Redmond, finishing those applications would be a drop in the bucket if it means that the new product can compete directly on an even field with VMware.

Building A Bigger Switch

Big Switch gains in this partnership also.  They get to take some pressure off their overworked development team.  It can’t be easy switching horses in mid-stream, especially when it involves changing your entire outlook on how SDN should be done.  Adding a few dozen more people to the project would allow them to branch out and investigate how integrating software into their ideas could be done.  Big Switch has already done a great job developing Project Floodlight.  Why not let some big brains chew on other ideas in the same vein for a while?

Big Switch could also use the stability of working for an established company.  They have a pretty big target on their backs now that everyone is developing an SDN strategy.  Writing an OS for bare metal switches is going to bring them into contention with Cumulus Networks.  Why not let an OS vendor do some of the heavy lifting?  It would also allow Microsoft’s well established partner program to offer incentives to partners that want to sell white label switches with software from Big Switch to get into networking much more cheaply than before.  Think about federal or educational discounts that Microsoft already gives to customers.  Do you think they’d be excited to see the same kind of consideration when it comes to networking hardware?

Tom’s Take

Little fish either get eaten by bigger ones or they have to be agile enough to avoid being snapped up.  The smartest little fish in the ocean may be the remora.  It survives by attaching itself to a bigger fish and providing a benefit for them both.  The remora gets the protection of not being eaten while also not taking too much from the host.  Microsoft would do well to set up some kind of similar arrangement with Big Switch.  They could fund future development into NV-GRE compatible options, or they could just buy the company outright.  Both parties get something out of the deal: Microsoft gets the SDN component they need.  Big Switch gets a backer with so much industry clout that they can no longer be dismissed.

HP Networking and the Software Defined Store

HP has had a pretty good track record with SDN, even if it’s not very well-known.  HP has embraced OpenFlow on a good number of its Procurve switches.  Given the age of these devices, there’s a good chance you can find them lying around in labs or in retired network closets to test with.  But where is that going to lead in the long run?

HP Networking was kind enough to come to Interop New York and participate in a Tech Field Day roundtable.  It had been a while since I talked to their team.  I wanted to see how they were handling the battle being waged between OpenFlow proponents like NEC and Brocade, Cisco and their hardware focus, and VMware with NSX.  Jacob Rapp and Chris Young (@NetManChris) stepped up to the plate to talk about SDN and HP’s vision.

They cover a lot of ground in here.  Probably the most important piece to me is the SDN app store.

The press picked up on this quickly.  HP has an interesting idea here.  I should know.  I mentioned it in passing in an article I wrote a month ago.  The more I think about the app store model, the more I realize that many vendors are going to go down this road.  Just not in the way HP is thinking.

HP wants to curate content for enterprises.  They want to ensure that software works with their controller to be sure that there aren’t any hiccups in implementation.  Given their apparent distaste for open source efforts, it’s safe to say that their efforts will only benefit HP customers.  That’s not to say that those same programs won’t work on other controllers.  So long as they operate according to the guidelines laid down by the Open Networking Foundation, all should be good.

Show Me The Money

Where’s the value then?  That’s in positioning the apps in the store.  Yes, you’re going to have some developers come to HP wanting to put simple apps in the store.  Odds are better that you’re going to see more recognizable vendors coming to the HP SDN store.  People are more likely to buy software from a name they recognize, like TippingPoint or F5.  That means that those companies are going to want to have a prime spot in the store.  HP is going to make something from hosting those folks.

The real revenue doesn’t come from an SMB buying a load balancer once.  It comes from a company offering it as a service with a recurring fee.  The vendor gets a revenue stream. HP would be wise to work out a recurring fee as well.  It won’t be the juicy 30% cut that Apple enjoys from their walled garden, but anything would be great for the bottom line.  Vendors win from additional sales.  Customers win from having curated apps that work every time that are easy to purchase, install, and configure.  HP wins because everyone comes to them.

Fragmentation As A Service

Now that HP has jumped on the idea of an enterprise-focused SDN app store, I wonder which company will be the next to offer one?  I also worry that having multiple app stores won’t end up being cumbersome in the long run.  Small developers won’t like submitting their app to four or five different vendor-affiliated stores.  More likely they’ll resort to releasing code on their own rather than jump through hoops.  That will eventually lead to support fragmentation.  Fragmentation helps no one.


Tom’s Take

HP Networking did a great job showcasing what they’ve been doing in SDN.  It was also nice to hear about their announcements the day before they broke wide to the press.  I think HP is going to do well with OpenFlow on their devices.  Integrating OpenFlow visibility into their management tools is also going to do wonders for people worried about keeping up with all the confusing things that SDN can do to a traditional network.  The app store is a very intriguing concept that bears watching.  We can only hope that it ends up being a well-respected entry in a long line of tools easing customers into the greater SDN world.

Tech Field Day Disclaimer

HP was a presenter at the Tech Field Day Interop Roundtable.  In addition, they also provided the delegates a 1TB USB3 hard disk drive.  They did not ask for any consideration in the writing of this review nor were they promised any.  The conclusions and analysis contained in this post are mine and mine alone.