Sorting Through SD-WAN


SD-WAN has finally arrived. We’re no longer talking about it in terms of whether or not it is a thing that’s going to happen, but a thing that will happen provided the budgets are right. But while the concept of SD-WAN is certain, one must start to wonder about what’s going to happen to the providers of SD-WAN services.

Any Which Way You Can

I’ve written a lot about SDN and SD-WAN. SD-WAN is the best example of how SDN should be marketed to people. Instead of talking about features like APIs, orchestration, and programmability, you need to focus on the right hook. Do you sell a food processor by talking about how many attachments it has? Or do you sell a Swiss Army knife by talking about all the crazy screwdrivers it holds? Or do you simply boil it down to “This thing makes your life easier”?

The most successful companies have made the “easier” pitch the way forward. Throwing a kitchen sink at people doesn’t make them buy a whole kitchen. But showing them how easy and automated you can make installation and management will sell boxes by the truckload. You have to appeal to the very contrast that SD-WAN was created to resolve: WANs are hard, SD-WAN makes them easy.

But that only works if your SD-WAN solution is easy in the first place. The biggest, most obvious target is Cisco IWAN. I will be the first to argue that the reason Cisco hasn’t captured the SD-WAN market is that IWAN isn’t SD-WAN. It’s a series of existing technologies that were brought together to try and make an SD-WAN competitor. IWAN has all the technical credibility of a laboratory full of parts for amazing machines. What it lacks is any ability to tie all of that together easily.

IWAN is a moving target. Which platform should I use? Do I need this software to make it run correctly? How do I do zero-touch deployments? Or traffic control? How do I plug a 4G/LTE modem into the router? The answer to each of these questions involves typing commands or buying additional software features. That’s not the way to attack the complexity of WANs. In fact, it feeds into that complexity even more.

Cisco needs to look at a true SD-WAN technology. That likely means acquisition. Sure, it’s going to be a huge pain to integrate an acquisition with other components like APIC-EM, but given the lead that other competitors have right now, it’s time for Cisco to come up with a solution that knocks the socks off their longtime customers. Or face the very real possibility of not having longtime customers any longer.

Every Which Way But Loose

The first-generation providers of SD-WAN bounced onto the scene to pick up the pieces from IWAN. Names like Viptela, VeloCloud, CloudGenix, Versa Networks, and more. But, aside from all managing to build roughly the same platform with very similar features, they’ve hit a mighty big wall. They need to start making money in order for these gambles to pay off. Some have customers. Others are managing the migration into other services, like catering their offerings toward service providers. Still others are ripe acquisition targets for companies that lack an SD-WAN strategy, like HPE or Dell. I expect to see some fallout from the first-generation providers consolidating this year.

The second-generation providers, like Riverbed and Silver Peak, all have something in common. They are building on a business they’ve already proven. It’s no coincidence that both Riverbed and Silver Peak are the best-known names in WAN optimization. How well known? Even major Cisco partners will argue that they sell these two “best of breed” offerings over Cisco’s own WAAS solution. Riverbed and Silver Peak have a definite advantage because they have a lot of existing customers that rely on WAN optimization. That market alone is going to net them a significant number of customers over the next few years. They can easily sell SD-WAN as the perfect addition to make WAN optimization even easier.

The third category of SD-WAN providers is the latecomers. I still can’t believe it, but I’ve been reading about providers that aren’t traditional companies trying to get into the space. Talk about being the ninth horse in an eight-horse race. Honestly, at this point you’re better off plowing your investment money into something else, like the Internet of Things or Virtual Reality. There’s precious little room among the existing first-generation providers and the second-generation stalwarts. At best, all you can hope for is a quick exit. At worst, your “novel” technology will be snapped up for pennies after you’re bankrupt and liquidating everything but the standing desks.


Tom’s Take

Why am I excited about the arrival of SD-WAN? Because now I can finally stop talking about it! In all seriousness, when the boardroom starts talking about something, it’s past the point of being a hobby project and has become a real debate. SD-WAN is going to change one of the most irritating aspects of networking technology for us. I can remember studying for my CCNP and cramming all the DSL and T1 knowledge a brain could hold into my head. Now, it’s all point-and-click and done. IPSec VPNs, traffic analytics, and application identification are so easy it’s scary. That’s the power of SD-WAN to me. Easy to use and easy to extend. I think the landscape of SD-WAN providers is going to look vastly different by the end of 2017. But SD-WAN is going to be here for the long haul.

Two Takes On ASIC Design

Making ASICs is a tough task. We learned this last year at Cisco Live Berlin from this conversation with Dave Zacks:

Cisco spent 6 years building the UADP ASIC that powers their next generation switches. They solved a lot of the issues with ASIC design and re-spins by creating some programmability in the development process.

Now, watch this video from Nick McKeown at Barefoot Networks:

Nick says many of the same things that Dave said in his video. But Nick and Barefoot took a totally different approach from Cisco. Instead of creating programmable elements in the ASIC design, they abstracted the entire language of function definition away from the ASIC. By using P4 as the high-level language and making the system compile the instruction sets down to run on the ASIC, they reduced the complexity, increased the speed, and managed to make the system flexible and capable of implementing new technologies even after the ASIC design is set in stone.

Oh, and they managed to do it in 3 years.

Sometimes, you have to think outside the box in order to come up with some new ideas. Even if that means you have to pull everything out of the box. By abstracting the language from the ASIC, Barefoot not only managed to find a way to increase performance but also to add feature sets to the switch quickly without huge engineering costs.
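If the idea of abstracting the forwarding program away from the hardware feels fuzzy, here’s a deliberately toy sketch in Python. It is not P4 and has nothing to do with Barefoot’s actual compiler; it only shows the shape of the idea: the “program” is a set of match-action rules expressed as data, and the “engine” that executes them never has to change when new behavior is added.

```python
# Toy illustration of a match-action abstraction: the "program" is just data
# (match fields -> action), and the "engine" that applies it never changes.
# Conceptual sketch only; real P4 programs are compiled down to ASIC-specific
# instruction sets, not interpreted like this.
from dataclasses import dataclass, field

@dataclass
class Rule:
    match: dict          # e.g. {"eth_dst": "aa:bb:cc:dd:ee:ff"}
    action: str          # e.g. "forward", "drop"
    params: dict = field(default_factory=dict)

class Pipeline:
    """A fixed 'engine' that executes whatever rules are loaded into it."""
    def __init__(self):
        self.table: list[Rule] = []

    def load(self, rules):
        # New behavior is added by loading new rules, not by changing the engine.
        self.table = list(rules)

    def process(self, packet: dict) -> str:
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "drop"    # default action when nothing matches

# The "program": forwarding behavior expressed at a high level.
rules = [
    Rule({"eth_dst": "aa:bb:cc:dd:ee:ff"}, "forward", {"port": 3}),
    Rule({"ip_proto": 17, "udp_dst": 4789}, "decap_vxlan"),  # new feature, same engine
]

pipeline = Pipeline()
pipeline.load(rules)
print(pipeline.process({"eth_dst": "aa:bb:cc:dd:ee:ff"}))    # -> forward
```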

Some food for thought.

Automating Your Job Away Isn’t Easy


One of the most common complaints about SDN that comes from entry-level networking folks is that SDN is going to take their job away. People fear what SDN represents because it has the ability to replace their everyday tasks and put them out of a job. While this is nowhere close to reality, it’s a common enough argument that I hear it at almost every Q&A session. How is it that SDN has the ability to ruin so many jobs? And how is it that we have only just now found a way to do this?

Measure Twice

One of the biggest reasons that the automation portion of SDN has become so effective in today’s IT environment is that we can finally measure what it is that networks are supposed to be doing and how best to configure them. Think about the work that was done in the past to configure and troubleshoot networks. It was often a very difficult task that involved a lot of intuition and guesswork. If you tried to explain to someone the best way to do things, you’d likely find yourself at a loss for words.

However, we’ve had boring, predictable standards for many years. Instead of cobbling together half-built networks and integrating them in the most obscene ways possible, we’ve instead worked toward planning and architecting things properly so they are built correctly from the ground up. No more guesswork. No more last-minute decisions that come back to haunt us years down the road. Those kinds of things are the basic building blocks for automation.

When something is built along the lines of predictable rules with proper adherence to standards, it’s something that can be understood by a non-human. Going all the way back to Basic Computing 101, the inputs of a system determine the outputs. More simply, Garbage In, Garbage Out. If your network configuration looks like a messy pile of barely operational commands it will only really work when a human can understand what’s going on. Machines don’t guess. They do exactly what they are told to do. Which means that they tend to break when the decisions aren’t clear.

Cut Once

When a system, script, or program can read inputs and make procedural decisions on those inputs, you can make some very powerful things happen. Provided, that is, that your chosen language is powerful enough to do those things. I’m reminded of a problem I worked on fifteen years ago during my internship at IBM. I needed to change the MTU size for a network adapter in the Windows 2000 registry. My programming language of choice wasn’t powerful enough for me to say something like, “Read these values into an array and change the last 2 or 3 to the following MTU”. So instead, I built a nested if statement that was about 15 levels deep to ensure I caught every possible permutation of the adapter binding order. It was messy. It was ugly. And it worked. But there was no way it would scale.
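For what it’s worth, the “read the values into an array and change the last few” idea is trivial to express today. Below is a minimal, hedged sketch in modern Python against the standard Tcpip Interfaces registry path; it is not the original code, it sidesteps the adapter binding order question entirely, and the values at the bottom are purely illustrative.

```python
# A minimal sketch (assuming the standard Tcpip Interfaces registry path) of the
# "read into an array, change the last few" approach the original tooling couldn't
# express. Run as Administrator on Windows; winreg is in the standard library.
import winreg

INTERFACES = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

def set_mtu_on_last(n: int, mtu: int) -> None:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, INTERFACES) as root:
        # Read every interface GUID into a list...
        count = winreg.QueryInfoKey(root)[0]
        guids = [winreg.EnumKey(root, i) for i in range(count)]

    # ...then change only the last n entries, no nested-if gymnastics required.
    for guid in guids[-n:]:
        path = f"{INTERFACES}\\{guid}"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "MTU", 0, winreg.REG_DWORD, mtu)

if __name__ == "__main__":
    set_mtu_on_last(2, 1400)   # illustrative values only
```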

The most important thing to realize about SDN and automation is that we’ve moved past simply understanding basic values. We’ve finally graduated from simple if-then-else constructs to a point where programs can take a number of inputs and make complex decisions based on them. Sure, in many cases the inputs are simple little things like tags or labels. But what we’re gaining is the ability to process more and more of those labels. We can create provisioning scripts that ensure that prod never talks to dev. We can automate turn-up of a new switch with multiple VLANs on different ports through the use of labels and object classes. We can even extrapolate this to a policy-based network language that we can use to build a task once and execute it over and over again on different hardware, because we’re doing higher-level processing instead of being hamstrung by specific device syntax.
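As a small, hedged illustration of that label-driven approach, the sketch below turns nothing more than “prod” and “dev” tags into abstract deny rules. The `Endpoint` structure and the rule dictionary format are invented for the example; a real controller or configuration tool would have its own objects and rendering layer.

```python
# Illustrative only: turning high-level labels into device-agnostic policy,
# instead of hand-typing vendor-specific ACL syntax per switch.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Endpoint:
    name: str
    ip: str
    env: str          # label, e.g. "prod" or "dev"

def build_isolation_rules(endpoints, env_a="prod", env_b="dev"):
    """Emit abstract deny rules so env_a and env_b can never talk."""
    a = [e for e in endpoints if e.env == env_a]
    b = [e for e in endpoints if e.env == env_b]
    rules = []
    for src, dst in product(a, b):
        rules.append({"action": "deny", "src": src.ip, "dst": dst.ip})
        rules.append({"action": "deny", "src": dst.ip, "dst": src.ip})
    return rules

endpoints = [
    Endpoint("web01", "10.1.1.10", "prod"),
    Endpoint("build01", "10.2.2.20", "dev"),
]

for rule in build_isolation_rules(endpoints):
    # A per-platform renderer would translate each abstract rule into the
    # specific syntax of whatever hardware it lands on.
    print(rule)
```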

Automation is going to cost some people their jobs. That’s a given. Just like in every other manufacturing industry, the menial tasks of assembling simple pieces or performing repetitive work can easily be accomplished by a machine or software construct. But writing those programs and working on those machines is a new kind of job in and of itself. A humorous anecdote from the auto industry says that the introduction of robots onto assembly lines caused many workers to complain and threaten to walk off the job. However, one worker picked up the manual for the robot and realized that he could easily start working on the robots instead of the assembly line.


Tom’s Take

Automation isn’t a magic bullet to fix all your problems. It only works if things are ordered and structured in such a way that you can predictably repeat tasks over and over. And it’s not going to stop with one script or process. You need to continue to build, change, and extend your environment. Which means that your job of programming switches should now be looked at in light of building the programs that program switches. Does it mean that you need to forget the basics of networking? No, but it does mean that the way in which you think about them will change.

Is The Rise Of SD-WAN Thanks To Ethernet?


SD-WAN has exploded in the market. Everywhere I turn, I see companies touting their new strategy for reducing WAN complexity, encrypting data in flight, and even doing analytics on traffic to help build QoS policies and traffic shaping for critical links. The first demo I ever watched for SDN was a WAN routing demo that chose best paths based on cost and time-of-day. It was simple then, but that kind of thinking has exploded in the last 5 years. And it’s all thanks to our lovable old friend, Ethernet.

Those Old Serials

When I started in networking, my knowledge was pretty limited to switches and other layer 2 devices. I plugged in the cables, and the things all worked. As I expanded up the OSI model, I started understanding how routers worked. I knew about moving packets between different layer 3 areas and how they controlled broadcast storms. This was also around the time when layer 3 switching was becoming a big thing in the campus. How was I supposed to figure out the difference between when I should be using a big router with 2-3 interfaces versus a switch that had lots of interfaces and could route just as well?

The key for me was media types. Layer 3 switching worked very well as long as you were only connecting Ethernet cables to the device. Switches were purpose built for UTP cable connectivity. That works really well for campus networks with Cat 5/5e/6 cabling. Switched Virtual Interfaces (SVIs) can handle a large amount of the routing traffic.

For WAN connectivity, routers were a must, because only routers were modular in a way that accepted cards for different media types. When I started my journey on WAN connectivity, I was setting up T1 lines. Sometimes they had an old-fashioned serial connector like this:

[Image: old-fashioned serial connector]

Those connected to external CSU/DSU modules. Those were a pain to configure and had multiple points of failure. Eventually, we moved up in the world to integrated CSU/DSU modules that looked like this:

[Image: EHWIC 2-port T1/E1 integrated CSU/DSU module]

Those are really awesome because all the configuration is done on the interface. They also take regular UTP cables instead of those crazy V.35 monsters.

[Image: Cisco V.35 serial cable]

But those UTP cables weren’t Ethernet. Those were still designed to be used as serial connections.

It wasn’t until the rise of MPLS circuits and Transparent LAN services that Ethernet became the dominant force in WAN connectivity. I can still remember turning up my first managed circuit and thinking, “You mean I can use both FastEthernet interfaces? No cards? Wow!”.

Today, Ethernet dominates the landscape of connectivity. Serial WAN interfaces are relegated to backwater areas where you can’t get “real WAN connectivity”. And in most of those cases, the desire to use an old, slow serial circuit can be superseded by a 4G/LTE USB modem that can be purchased from almost any carrier. It would appear that serial has joined the same Heap of History as token ring, ARCnet, and other venerable connectivity options.

Rise, Ethernet

The ubiquity of Ethernet is a huge boon to SD-WAN vendors. They no longer have to create custom connectivity options for their appliances. They can provide 3-4 Ethernet interfaces and 2-3 USB slots and cover a wide range of options. This also allows them to simplify their board designs. No more modular chassis. No more requirements for WIC slots, NM slots, or any of the other crazy terminology that Cisco WAN engineers are all too familiar with.

Ethernet makes sense for SD-WAN vendors because they aren’t concerned with media types. All their intelligence resides in the software running on the box. They’d rather focus on creating automatic certificate-based IPsec VPNs than figuring out the clock rate on a T1 line. Hardware is not their end goal. It is much easier to order a reference board from Intel and plug it into a box than trying to configure a serial connector and make a custom integration.

Even SD-WAN vendors that are chasing after the service provider market are benefiting from Ethernet ubiquity. Service providers may still run serial connections in their networks, but managing those interfaces at the customer side is a huge pain. They require specialized technical skills, are expensive to manage, and are difficult to troubleshoot remotely. Putting Ethernet handoffs at the CPE side makes life much easier. In addition, making those handoffs Ethernet makes it much easier to offer in-line service appliances, like those of SD-WAN vendors. It’s a good choice all around.

Serial connectivity isn’t going away any time soon. It fills an important purpose for high-speed connectivity where fiber isn’t an option. It’s also still a huge part of the install base for circuits, especially in rural areas or places where new WAN circuits aren’t easily run. Traditional routers with modular interfaces are still going to service a large number of customers. But Ethernet connectivity is quickly growing to levels where it will eclipse these legacy serial circuits soon. And the advantage for SD-WAN vendors can only grow with it.


Tom’s Take

Ethernet isn’t the only reason SD-WAN has succeeded. Ease of use, a huge feature set, and flexibility are the real reasons SD-WAN has moved past the concept stage and into deployment. WAN optimization now has SD-WAN components. Service providers are looking to offer it as a value-added service. SD-WAN has won out on the merits of the technology. But the underlying hardware and connectivity were radically simplified in the last 5-7 years to allow SD-WAN architects and designers to focus on the software side of things instead of the difficulties of building complicated serial interfaces. SD-WAN may not owe its entire existence to Ethernet, but it got a huge push in the right direction for sure.

HPE Networking: Past, Present, and Future


I had the chance to attend HPE Discover last week by invitation from their influencer team. I wanted to see how HPE Networking had been getting along since the acquisition of Aruba Networks last year. There have been some moves and changes, including a new partnership with Arista Networks announced in September. What follows is my analysis of HPE’s Networking portfolio after HPE Discover London and where they are headed in the future.

Campus and Data Center Divisions

Recently, HPE reorganized their networking division along two different lines. The first is the Aruba brand that contains all the wireless assets along with the campus networking portfolio. This is where the campus belongs. The edge of the network is an ever-changing area where connectivity is king. Reallocating the campus assets to the capable Aruba team means that they will do the most good there.

The rest of the data center networking assets were loaded into the Data Center Infrastructure Group (DCIG). This group is headed up by Dominick Wilde and contains things like FlexFabric and Altoline. The partnership with Arista rounds out the rest of the switch portfolio. This helps HPE position their offerings across a wide range of potential clients, from existing data center infrastructure to newer cloud-ready shops focusing on DevOps and rapid application development.

After hearing Dom Wilde speak to us about the networking portfolio goals, I think I can see where HPE is headed going forward.

The Past: HPE FlexFabric

As Dom Wilde said during our session, “I have a market for FlexFabric and can sell it for the next ten years.” FlexFabric represents traditional data center networking. There is a huge market of existing infrastructure among customers that have made significant investments in HPE in the past. Dom is absolutely right when he says the market for FlexFabric isn’t going to shrink in the foreseeable future. Even though the migration to the cloud is underway, there are a significant number of existing applications that will never be cloud ready.

FlexFabric represents the market segment that will persist on existing solutions until a rewrite of critical applications can be undertaken to get them moved to the cloud. Think of FlexFabric as the vaunted buggy whip manufacturer. They may be the last one left, but for the people that need their products they are the only option in town. DCIG may have eyes on the future, but that plan will be financed by FlexFabric.

The Present: HPE Altoline

Altoline is where HPE has been pouring their research for the past year. Altoline is a product line that benefits from the latest in software-defined and webscale technologies. It utilizes OpenSwitch as the operating system. HPE initially developed OpenSwitch as an open, vendor-neutral platform before turning it over to the Linux Foundation this summer to continue development with a variety of different partners.

Dom brought up a couple of great use cases for Altoline during our discussion that struck me as brilliant. One of them was using it as an out-of-band monitoring solution. These switches don’t need to be big or redundant. They need to have ports and a management interface. They don’t need complexity. They need simplicity. That’s where Altoline comes into play. It’s never going to be as complex as FlexFabric or as programmable as Arista. But it doesn’t have to be. In a workshop full of table saws and drill presses, Altoline is a basic screwdriver. It’s a tool you can count on to get the easy jobs done in a pinch.

The Future: Arista

The Arista partnership, according to Dom Wilde, is all about getting ready for the cloud. For those customers that are looking at moving workloads to the cloud or creating a hybrid environment, Arista is the perfect choice. All of Arista’s recent solution sets have been focused on providing high-speed, programmable networking that can integrate a number of development tools. EOS is the most extensible operating system on the market and is a favorite for developers. Positioning Arista at the top of the food chain is a great play for customers that don’t have a huge investment in cloud-ready networking right now.

The question that I keep coming back to is…when does this Arista partnership become an acquisition? There is a significant integration between the two companies. Arista has essentially displaced the top of the line for HPE. How long will it take for Arista to make the partnership more permanent? I can easily foresee HPE making a play for the potential revenues produced by Arista and the help they provide moving things to the cloud.


Tom’s Take

I was the only networking person at HPE Discover this year because the HPE networking story has been simplified quite a bit. On the one hand, you have the campus tied up with Aruba. They have their own story to tell in a different area early next year. On the other hand, you have the simplification of the portfolio with DCIG and the inclusion of the Arista partnership. I think that Altoline is going to find a niche for specific use cases but will never really take off as a separate platform. FlexFabric is in maintenance mode as far as development is concerned. It may get faster, but it isn’t likely to get smarter. Not that it really needs to. FlexFabric will support legacy architecture. The real path forward is Arista and all the flexibility it represents. The question is whether HPE will try to make Arista a business unit before Arista takes off and becomes too expensive to buy.

Disclaimer

I was an invited guest of HPE for HPE Discover London. They paid for my travel and lodging costs as well as covering event transportation and meals. They did not ask for nor were they promised any kind of consideration in the coverage provided here. The opinions and analysis contained in this article represent my thoughts alone.

OpenFlow Is Dead. Long Live OpenFlow.


Remember OpenFlow? The hammer that was set to solve all of our vaguely nail-like problems? Remember how everything was going to be based on OpenFlow going forward and the world was going to be a better place? Or how heretics like Ivan Pepelnjak (@IOSHints) who dared to ask questions about scalability or practical value were derided and laughed at? Yeah, good times. Today, I stand here to eulogize OpenFlow, but not to bury it. And perhaps to find out that OpenFlow has a much happier life after death.

OpenFlow Is The Viagra Of Networking

OpenFlow is not that much different from Sildenafil, the active ingredient in Viagra. Both were initially developed to solve a problem they didn’t end up actually solving. In the case of Sildenafil, it was high blood pressure. The “side effect” of increasing blood flow to a specific body part wasn’t even realized until after the trials of the drug. That side effect became the primary focus of the medication, which eventually developed into a billion-dollar industry.

In the same way, OpenFlow failed at its stated mission of replacing the forwarding plane programming method of switches. As pointed out by folks like Ivan, it had huge scalability issues. It was a bit clunky when it came to handling flow programming. The race from 1.0 to 1.3 spec finalization left vendors in the dust, but the freeze on 1.3 for the past few years has really hurt innovation. Objectively, the fact that almost no major shipping product uses OpenFlow as a forwarding paradigm should be evidence of its failure.

The side effect of OpenFlow is that it proved that networking could be done in software just as easily as it could be done in hardware. Things that we historically thought required ASICs and FPGAs could be done by a software construct. OpenFlow proved the viability of Software Defined Networking in a way that no one else could. Yet, even as people abandoned it for faster protocols or rewrote their stacks to take advantage of other methods, OpenFlow still had a great number of uses.

OpenFlow Is a Garlic Press, Not A Hammer

OpenFlow isn’t really designed to solve every problem. It’s not a generic tool that can be used in a variety of situations. It does have some very specific use cases that it excels at, though. Think of it more like a garlic press: a single-purpose tool that does one thing and does it very well.

This video from Networking Field Day 13 is a great example of OpenFlow being used for a specific task. NEC’s flavor of OpenFlow, ProgrammableFlow, is used in conjunction with higher layer services like firewalls and security appliances to mitigate the spread of infections. That’s a huge win for networking professionals. Think about how hard it would be to track down these systems in a network of thousands of devices. Even worse, with the level of virulence of modern malware, it doesn’t take long before the infected system has infected others. It’s not enough to shut down the payload. The infection behavior must be removed as well.

What NEC is showing is the ultimate way to stop this from happening. By interrogating the flows against a security policy, the flow entries can be removed from switches across the network or have deny entries written to prevent communications. Imagine being able to block a specific workstation from talking to anything on the network until it can be cleaned. And have that happen automatically without human interaction. What if a security service could get new malware or virus definitions and install those flow entries on the fly? Malware could be stopped before it became a problem.
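Conceptually, the quarantine step is just a high-priority drop entry pushed to every switch the host can reach. The sketch below shows what that might look like against a hypothetical controller REST API; the URL, payload fields, and authentication are invented for illustration and are not NEC ProgrammableFlow’s actual interface.

```python
# Hypothetical sketch: quarantining an infected host by pushing drop flows to
# every switch via a controller's northbound REST API. The endpoint and payload
# shown here are invented for illustration only.
import requests

CONTROLLER = "https://controller.example.com/api/flows"   # hypothetical URL
HEADERS = {"Authorization": "Bearer <token>"}              # hypothetical auth

def quarantine_host(mac: str, switch_ids: list[str]) -> None:
    for dpid in switch_ids:
        flow = {
            "switch": dpid,
            "priority": 40000,          # override normal forwarding entries
            "match": {"eth_src": mac},  # match everything the host sends
            "actions": [],              # an empty action list drops the traffic
        }
        resp = requests.post(CONTROLLER, json=flow, headers=HEADERS, timeout=5)
        resp.raise_for_status()

# A security service that receives an IDS alert could call this automatically:
# quarantine_host("aa:bb:cc:dd:ee:ff", ["switch-01", "switch-02"])
```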

This is where OpenFlow will be headed in the future. It’s no longer about adapting the problems to fit the protocol. We can’t keep trying to frame the problem around how much it resembles a nail just so we can use the hammer in our toolbox. Instead, OpenFlow will live on as a point protocol in a larger toolbox that can do a few things really well. That means we’ll use it when we need to and use a different tool when needed that better suits the problem we’re actually trying to solve. That will ensure that the best tool is used for the right job in every case.


Tom’s Take

OpenFlow is still useful. Look at what Coho Data is using it for. Or NEC. Or any one of a number of companies that are still developing on it. But the fact that these are the only companies putting significant investment and time into the protocol should tell you what the larger industry thinks. They believe that OpenFlow is a dead end that can’t magically solve the problems they have with their systems. So they’ve moved to a different hammer to bang away with. I think that OpenFlow is going to live a very happy life now that people are leaving it to solve the problems it’s good at solving. Maybe one day we’ll look back on the first life of OpenFlow not as a failure, but instead as the end of the beginning of it becoming what it was always meant to be.

The Tortoise and the Austin Hare


Dell announced today the release of their newest network operating system, OS10 (note the lack of an X). This is an OS that is slated to build on the success that Dell has had selling 3rd party solutions from vendors like Cumulus Networks and Big Switch. OS10’s base core will be built on an unmodified Debian distro that will have a “premium” feature set that includes layer 2 and layer 3 functionality. The aim to have a fully open-source base OS in the networking space is lofty indeed, but the bigger question to me is “what happens to Cumulus”?

Storm Clouds

As of right this moment, before the release of Dell OS10, the only way to buy Linux on a Dell switch is to purchase it with Cumulus. In the coming months, Dell will begin to phase in OS10 as an option in parallel with Cumulus. This is especially attractive to large environments that are running custom-built networking today. If your enterprise is running Quagga or sFlow or some other software that has been tweaked to meet your unique needs you don’t really need a ton of features wrapped in an OS with a CLI you will barely use.

So why introduce an OS that directly competes with your partners? Why upset that apple cart? It comes down to licenses. Every time someone buys a Dell data center switch, they have to pick an OS to run on it. You can’t just order naked hardware and install your own custom kernel and apps. You have to order some kind of software. When you look at the drop-down menu today, you can choose from FTOS, Cumulus, or Big Switch. For the kinds of environments that are going to erase and reload anyway, the choice is pointless. It boils down to the cheapest option. But what about those customers that choose Cumulus because it’s Linux?

Customers want Linux because they can customize it to their heart’s content. They need access to the switching hardware and other system components. So long as the interface is somewhat familiar they don’t really care what engine is driving it. But every time a customer orders a switch today with Cumulus as the OS option, Dell has to pay Cumulus for that software license. It costs Dell to ship Linux on a switch that isn’t made by Dell.

OS10 erases that fee. By ordering a base image that can only boot and address hardware, Dell puts a clean box in the hands of developers that are going to be hacking the system anyway. When the new feature sets are released later in the year that increase the functionality of OS10, you will likely see more customers beginning to experiment with running Linux development environments. You’ll also see Dell beginning to embrace a model that loads features on a switch as software modules instead of purpose-built appliances.

Silver Lining

Dell’s future is in Linux. Rebuilding their OS from the ground up to utilize Linux only makes sense given industry trends. Junos, EOS, and other OSes from upstarts like Pluribus and Big Switch are all based on Linux or BSD. Reinventing the wheel makes very little sense there. But utilizing the Switch Abstraction Interface (SAI) developed for OpenCompute gives them an edge: they can focus on northbound feature development while leaving the gory details of addressing hardware to the abstraction layer below.

Dell isn’t going to cannibalize their Cumulus partnership immediately. There are still a large number of shops running Cumulus that are going to want support from their vendor of choice in the coming months. Also, there are a large number of Dell customers that aren’t ready to radically disaggregate hardware from software. Those customers will require some monitoring, as they are likely to buy the cheapest option as opposed to the best fit and wind up with a switch that will boot and do little else to solve network problems.

In the long term, Cumulus will continue to be a fit for Dell as long as OS10 isn’t ported to the Campus LAN. Once that occurs, you will likely see a distancing of these two partners as Dell embraces their own Linux OS options and Cumulus moves on to focus on using whitebox hardware instead of bundling themselves with existing vendors. Once the support contracts expire on the Cumulus systems supported by Dell, I would expect to see a professional services offering to help those users of Cumulus-on-Dell migrate to a “truly open and unmodified kernel”.


Tom’s Take

Dell is making strides in opening up their networking with Linux and open source components. Juniper has been doing it forever, and HP recently jumped into the fray with OpenSwitch. Making yourself open doesn’t solve your problems or conjure customers out of thin air. But it does give you a story to tell about your goals and your direction. Dell needs to keep their Cumulus partnerships going forward until they can achieve feature parity with the OS that currently runs on their data center switches. After that happens and the migration plans are in place, expect to see a bit of jousting between the two partners about which approach is best. Time will tell who wins that argument.