Penny Pinching With Open Source

You might have seen this Register article this week which summarized a Future:Net talk from Peyton Koran. In the article and the talk, Peyton talks about how the network vendor and reseller market has trapped organizations in a needless cycle of bad hardware and buggy software. He suggests that organizations should focus on their new “core competency” of software development and run whitebox or merchant hardware on top of open source networking stacks. He says that developers can build on code that has broad community contributions and shared, useful functionality. It’s a lofty goal. However, I think the open source part of the equation is going to cause some issues.

A Penny For Your Thoughts

The idea behind open source isn’t that hard to comprehend. Everything is available to see and build on. Anyone can contribute and give back to the project and make the world a better place. At least, that’s the theory. Reality is sometimes a bit different.

Many times, I’ve had off-the-record conversations with organizations that are consuming open source resources and projects as a starting point for building something that will end up containing many proprietary resources. When I ask them about contributing back to those projects or finding ways to advance things, the response is usually silence. Very rarely, I hear that the organization sees their proprietary developments as a “competitive advantage” that they are going to use to either beat a competitor or build a product that saves them a significant amount of money.

The analogy I like to use for open source is the “Take A Penny” dish that many businesses have next to their cash register. The idea is that people can contribute something small to help out others. If someone needs a penny or two to help make payment for a good or service, they can take one. If they have a few left over they can give back. It’s a way to give back a little now and then.

However, there are a couple of types of people that skew the trend for the penny dish. The first is the person that gives back much more than the norm. That person might put a quarter in the dish or put in four or five pennies at every dish they find. They contribute above the norm often and give a lot. The second type of person takes quite a few pennies from the dish and never gives back. They may see the dish as “free money” to be used to augment their own. They don’t care if all the pennies are gone when they finish just as long as they got what they needed out of it.

Extend this metaphor into the open source community. There are quite a few contributors who put significant time and effort into their projects. They may have found a way to do it full time, or they may be paid by their company to participate in projects. These folks are dedicated to the cause.

On the other side of the metaphor are the people and organizations that take what they want from open source and never give back. They never contribute to the project, even if their enhancements are needed and welcome. They take a free or inexpensive starting point and use it to build a product that could be used internally to give the organization an advantage. The key here is that it’s something used internally. The GPL’s obligations apply when you distribute software based on GPL code, but they don’t clearly cover software that is only consumed internally. An enterprising developer may say to themselves, “As long as I don’t sell it, I don’t have to give my code back.”

Networking For Pocket Change

There are quite a few open source networking projects out there. Quagga, BIRD, and Open vSwitch are great examples of projects that have significant reach and are used by a lot of companies to build great products. However, imagine what would happen if no one gave back to these projects. Imagine what would happen if contributors decided to make their own BGP daemons or OVS-like programs and use them without regard for helping others in the community.

Open source software needs developers willing to contribute back to the project. If networking is going to embrace open source projects as Peyton suggested in his talk, it’s going to take a lot more contribution than quiet consumption. Whether or not you agree with the premise that networking vendors are corrupt and evil, you do have to concede that they’ve given us mostly stable protocols like BGP and OSPF. These same vendors have contributed ideas back to the standardization process to improve protocols like spanning tree and Power over Ethernet. Their contributions helped shape what networking is today.

If the next generation of software-based network developers wants to embrace and extend these contributions with open source, they’re going to need to be transparent and communicate with the project leads. They’re also going to need to push back when someone high up the food chain sees the development process as a way to gain an advantage and tries to keep it all secret. If developers aren’t going to give back to the community, it negates the advantages of open source and instead takes us back to the days of networks being one-off creations that have no interoperability beyond a few protocols. Islands in a sea of home-grown lava.


Tom’s Take

As anyone that attended Future:Net within earshot of me can attest, I wasn’t overly thrilled with Peyton’s take on the future of networking. I have some deep-seated reservations about the “screw the vendors, build it all yourself” mentality that is pervading organizations today. Not everyone is a development shop. Law firms and schools don’t regularly employ software engineers. If you want to transform those types of users into open source adherents, you need to lead the pack by giving back and talking about what you’re doing with open source. If you’re not willing to lead the way, stop telling people to take the fork in the road.


Network Longevity – Think Car, Not iPhone

One of the many takeaways I got from Future:Net last week was the desire for networks to do more. The presenters were talking about their hypothesized networks being able to make intelligent decisions based on intent and other factors. I say “hypothesized” because almost everyone admitted that we aren’t quite there. Yet. But the more I thought about it, the more I realized that perhaps the timeline for these mythical networks is a bit skewed in favor of refresh cycles that are shorter than we expect.

Software Eats The World

SDN has changed the way we look at things. Yes, it’s a lot of hype. Yes, it’s an overloaded term. But it’s also the promise of getting devices to do much more than we had ever dreamed. It’s about automation and programmability and, now, deriving intent from plain language. It’s everything we could ever want a simple box of ASICs to do for us and more.

But why are we asking so much? Why do we now believe that the network is capable of so much more than it was just five years ago? Is it because we’ve developed a revolutionary new method for making chips that are ten times smarter and a hundred times faster? No. In fact, the opposite is true. Chips are a little faster and a little cheaper, but they aren’t custom construction. Most companies are embracing merchant silicon from companies like Broadcom and Mellanox. So, where’s all this functionality coming from?

It’s the “S” in SDN. We’re seeing software driving this development. Yes, it’s the same thing we’ve been saying for years now. But it’s something more now. Admins and engineers aren’t just excited that network devices are more software-driven now. They’re expecting it. They’re looking to the software to drive the development. Gone are the days of CLI-only access. Now, the interfaces are practically built on API-driven capabilities. The Aruba 8400 seems like it was built for the API first, with the GUI layered on top of the API functions. People getting into the networking world are focusing on things like Python first and CLI syntax second. Software is winning again.
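To make the shift concrete, here’s a minimal sketch of what API-first configuration can look like in Python. The endpoint, token, and payload shape are hypothetical placeholders for illustration, not any particular vendor’s actual API:

    # Minimal sketch of API-first network configuration: a VLAN is created by
    # POSTing structured data to a REST endpoint instead of pasting CLI commands.
    # The URL, token, and payload are hypothetical, not a real vendor API.
    import requests

    SWITCH_API = "https://switch.example.com/api/v1"
    TOKEN = "replace-with-a-real-token"

    def create_vlan(vlan_id: int, name: str) -> None:
        """Create a VLAN through the switch's REST API."""
        response = requests.post(
            f"{SWITCH_API}/vlans",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"id": vlan_id, "name": name},
            timeout=10,
        )
        response.raise_for_status()  # fail loudly instead of silently misconfiguring

    create_vlan(100, "users-floor-1")

The point isn’t the specific call. It’s that the configuration is structured data a program can generate, validate, and document rather than text typed into a terminal.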

It’s like the iPhone revolution. The hardware is completely divorced from the software. I can upgrade to the latest version of Apple’s iOS and get most of the functionality I want in the OS. Aside from some hardware-specific features, the majority of the functions work no matter what. Every year, I get new toys to play with. Every 12 months I can enable new functions and have new avenues available to me. If you think back to the first iterations of the iPhone ten years ago, you can see how far software development has driven this device.

Hardware For The Long Haul

However, for all the grandeur and amazement that the iPhone (and Android, for that matter) shows, it’s also created a side effect that is causing even more problems in the mobile device world and bleeding over into other electronics: the rapid replacement cycle. I mentioned above how awesome it is that you can get new functions every year in your mobile device. But I didn’t mention how people are already looking forward to the newest hardware even though they may have purchased a new phone just 10 months ago.

The desire to get faster by leaps and bounds every year is almost comical at this point. When the new device is delivered and it’s just a bit faster than the last, people are disappointed. Even I was guilty of this when I moved from a 2013 MacBook to a 2016 model. It was only 20% faster. Even with everything else I got in the new package, I found myself chasing performance over every other feature.

The desire to get better performance isn’t just limited to phones and tablets and laptops. When network designers look to increase network performance, they want to do so in a way that makes their users happy. Gone are the days when a simple gigabit network connection would keep someone pleased. We now have to develop gigabit wireless connections to the backbone network, where data is passed to servers connected by 25Gig connections to spines and super spines running at 100Gig (for now). In some cases, that data doesn’t even move any more, and instead short-lived containers are spun up next to it to make things run faster.

Networks aren’t designed like iPhones. They aren’t built to be ripped out every year and replaced with something a little faster and a little shinier. They’re designed more like cars in that regard. They are large purchases that should last for years upon years, with their performance profile dictated by the design at the time. We’ve gone through the phases of running 10Mbit hubs. We upgraded to FastEthernet. We’re now living in a cycle where Gigabit and 10/40 Gigabit switches are ruling the roost. Some of the more forward-thinking folks are adopting 25/50Gigabit and even looking to faster technologies like 400Gig connections for spines.

However, the longevity of hardware remains. Capital expenditure isn’t the easiest thing in the world to accomplish. You need to budget for new devices in the networking world. You need to justify cost savings or new applications. You can’t just buy a fast new switch or two every year and claim that it makes everything better without some kind of justification. Even if you move to the cloud with your strategy, you’re not changing the purchasing model. You’re just trading capital expenditure for operational expenditure. You’re leasing a car instead of buying it. Because if you leave the cloud, you’re back to your old switching model with nothing to show for it.


Tom’s Take

I was pretty hard on the Future:Net presenters because I felt that their talk of ditching money-grubbing vendors for the purity of whitebox switching running open source software was a bit heavy handed. Most organizations are in the middle of a refresh cycle and can’t afford to rip everything out and replace it today. Moreover, the value in whitebox switching isn’t realized when you install new hardware. It’s realized when you turn your switch into an iPhone, where software development rules the day and your hardware fades into the background. We’re not there yet. We’re still buying hardware for the sake of hardware while the software catches up. Networking equipment is still built and bought like a car and will be for a few years to come. Maybe we can revisit this topic in 3 years and see how far software will have driven us by then.

Resource Contention In IT – Time Is Never Enough

I’m at Future:NET this week and there’s a lot of talk about the future of what networking is going to look like from the perspective of vendors like Apstra, Veriflow, and Forward Networks. There’s also a great deal of discussion from customers and end users as well. One of the things that I think is being missed in all this talk is resources.

Time Is Not On Your Side

Many of the presenters, like Truman Boyes of Bloomberg and Peyton Maynard-Koran of EA, discussed the idea of building boxes from existing components instead of buying them from established networking vendors like Cisco and Arista. The argument does hold some valid ideas. If you can get your hardware from someone like EdgeCore or Accton and get your software from someone else like Pluribus Networks or Pica8 it looks like a slam dunk. You get 90% to 95% of a solution that you could get from Cisco with much less cost to you overall.

Companies like Facebook and Google have really pioneered this solution. Facebook’s OCP movement is really helping networking professionals understand the development that goes into building their own switches. Facebook’s commitment is also helping reduce the price of the components when an eager person wants to go build an OCP switch from parts they find at Radio Shack or from Amazon.

But, for Facebook, the development of a switch like this or the development of a platform is a sunk cost. That’s because the important resource to Facebook isn’t time. Facebook has teams of engineers sitting around developing things. For them, time is the least important resource; it’s something they have in abundance. Why is that? Because their development is entirely focused on their product. Google can afford to have 500 people working on a product with an IT focus like Google Reader or Google Wave because that’s what Google hires people to do.

Contrast that with the typical IT department at an enterprise. Even with thousands of users in Marketing, Management, and Finance, there are usually only a handful of IT professionals. And those people have to cover storage, compute, networking, wireless, and software. The focus of the average law firm is not using IT resources to create a product. The focus of the business is leveraging IT to provide a service. A finance firm doesn’t have the time resources to commit to developing in-house solutions or creating IT hardware from components and freely available software.

Money, Money, Money.

Let’s look at the other side of the coin. Facebook and Google have oodles and oodles of time to build and develop things. They can get their developers to work together to build the hardware and software to integrate at a deep level. And because they understand it at that level, they can easily debug it instead of asking someone to solve their problem. What Facebook and Google don’t have is money.

To large firms like these, money is more important than time. When you have to purchase networking or storage equipment by the thousands or tens of thousands of units, money becomes a huge issue. If you can save a few dollars per switch, that can translate to huge savings in the long run. Even Facebook is doing this with OCP. By creating a demand for specific components for these devices, they can drive costs down across the board and save themselves money. For Facebook, money is the resource that gets tracked when building their infrastructure. The more that is saved, the more they can do with it.

In the enterprise, money isn’t quite as important as time. Money is important to businesses, for sure. You don’t keep the lights on if you aren’t making money. But because IT supports the business and isn’t the entire business, money can be more easily allocated to projects from budgets. There are pools of money that can be used to purchase office furniture, catering services, or IT hardware. These resources can be reallocated efficiently, much like Facebook allocates time for projects. If the storage array needs to be upgraded or the wireless needs to be refreshed, there can be discussions about how to accomplish it. Maybe the CEO doesn’t get a new desk this quarter. Or maybe there needs to be a few new sales discussions to create capital. But money isn’t as valuable as time. If you think I’m crazy, try getting 10 minutes on a CEO’s calendar versus getting him to sign off on a purchase.

That’s the real value of cloud computing for IT professionals. They aren’t paying for scale or for availability. They’re really paying for time. They’re paying for a process that reduces the amount of time that they spend configuring low level tasks that are menial and time consuming. Building systems takes time. Automation reduces the time it takes. Process reduces it even further. So organizations looking to move to the cloud are essentially trading one resource, money, for a more important resource, time. Likewise, the large cloud providers that are building these systems are trading their resource, time, for a more valuable resource, money.


Tom’s Take

I don’t believe that smaller enterprises will ever truly embrace the idea of building their own OCP switches running custom Linux distros and custom-built routing processes. Because to them, time is way too important. Time to focus on the business. Time to focus on supporting the way that the money is made. Time to do more. Likewise, I expect that large enterprises and providers like Facebook will continue to push the envelope of development and create new solutions. Because they have the time to play and test and build. And use those skills to make money. But I never see a world where those two places meet. Because resource contention is different between these two groups and it causes different outcomes. And the value of those resources is unlikely to change without massive disruption.

Automating Documentation

Tedium is the enemy of productivity. The fastest way for a task to not be done is to make it long, boring, and somewhat complicated. People who feel that something is tedious or repetitive are the ones more likely to marginalize a task. And I think I speak for the entire industry when I say that there is no task more tedious and boring than documentation. So how can we fix it?

Tell Me What You Did

I’m not a huge fan of documentation. When I decide on a plan of action, I rarely write it down step-by-step unless I’m trying to train someone. Even then, it looks more like notes with keywords instead of a narrative to follow. It’s a habit that has been borne out of years of firefighting in networks and calls to “do it faster”. The essential items of a task are refined and reduced until all that remains is the work and none of the ancillary items, like documentation.

Based on my previous life as a network engineer, I can honestly say that I’m not alone in this either. My old company made lots of money doing network discovery engagements. Sometimes these came because the previous admins walked out the door with no documentation. Other times, it was simply because the network had changed so much since the last person made any notes that what was going on didn’t resemble anything like what they thought it was supposed to look like.

This happens everywhere. It doesn’t take many instances of a network or systems professional telling themselves, “Oh, I’ll write it down later…” for later to never come. Devices get added, settings get changed, and not one word is ever written down. That’s the kind of chaos that causes disorganization at best and outages at worst. And I doubt there’s any networking pro out there that hasn’t been affected by bad documentation at one time or another.

So, how do we fix documentation? It’s tedious for sure. Requiring it as part of the process just invites people to find ways around it. And good documentation takes time. Is there a way to get around the lack of time, the resistance to requirements, and the sheer repetition, and make documentation something that gets done again? I think there is. And it requires a little help from process.

Not Too Late To Automate

Automation is a big thing right now. SDN is driving it. Network complexity is practically requiring it. Yet networking professionals are having a hard time embracing it. Why?

In part, networking pros don’t like to spend hours solving a problem that can be done in minutes. If you don’t believe me, watch one of the old SNL Nick Burns sketches. Nick is more likely to tell you to move than tell you how to fix your problem. Likewise, if a network pro is spending four hours writing an automation script that is supposed to execute a change that can be made in 20 minutes, they’re not going to want to do it. It’s just the nature of the job and the desire of the network professional to make every minute count.

So, how can we drive adoption of automation? As it turns out, automating documentation can be a huge driver. Tedious tasks are exactly what scripting and automation were designed to eliminate. Instead of focusing on the automation of the task, like adding VLANs to a set of switches, focus on the ability of the system to create documentation on the fly from the change.
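Here’s a minimal sketch, in Python, of what that could look like: the same function that pushes a VLAN also appends a structured record answering the 5 Ws discussed below. The switch names, ticket number, and log file are made-up examples, and the actual push to the device is left as a placeholder:

    # A sketch of documentation as a byproduct of the change itself. The part
    # that actually touches the switch (API call, NETCONF, Ansible, etc.) is a
    # placeholder comment; the record written afterward is the point.
    import getpass
    import json
    from datetime import datetime, timezone

    CHANGELOG = "network_changes.jsonl"

    def add_vlan_with_docs(switches: list[str], vlan_id: int, ticket: str) -> None:
        for switch in switches:
            # ... push the VLAN to the switch here ...
            record = {
                "who": getpass.getuser(),
                "what": f"Added VLAN {vlan_id} to {switch}",
                "when": datetime.now(timezone.utc).isoformat(),
                "where": switch,
                "why": ticket,
            }
            with open(CHANGELOG, "a") as log:
                log.write(json.dumps(record) + "\n")

    add_vlan_with_docs(["sw-hq-01", "sw-hq-02"], vlan_id=100, ticket="CHG-1234")

No one had to stop and write anything down; the documentation exists because the change ran.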

Let’s walk through an example. In order for documentation to matter, it has to answer the 5 Ws. How can we automate that?

Let’s start with Who. Automation can create documentation saying user Hollingsworth made a change through an automated process. That helps the accounting side of the house figure out the person making changes in the network. If that person is actually a script, the Who can be changed to reflect that it was an automated process called by a person related to a change ticket. That gives everyone the ability to track the changes back to a given problem. And it can all be pulled in without user intervention.

What is also an easy automation task: list the configuration being applied. At first, the system can simply list the configuration to be programmed. But for menial and repetitive tasks like VLAN additions, you can program the system with a real description like “Adding VLANs to $Switch to support $ticket”. Those variables can be autopopulated based on the work to be done. Again, we reference a ticket number in order to prove that these changes are coming from somewhere.
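A tiny sketch of that autopopulation using Python’s string templating; the variable names mirror the $Switch and $ticket placeholders above, and the values are stand-ins for whatever the ticketing system would provide:

    # Autopopulating the "What" from variables instead of free-form typing.
    from string import Template

    WHAT_TEMPLATE = Template("Adding VLANs $vlans to $switch to support $ticket")

    description = WHAT_TEMPLATE.substitute(
        vlans="100,110",
        switch="sw-branch-07",
        ticket="CHG-1234",
    )
    print(description)
    # Adding VLANs 100,110 to sw-branch-07 to support CHG-1234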

When is also critical. Are these changes happening in a maintenance window? Or did someone check them in in the middle of the day because they won’t cause any problems? (SPOILER ALERT: They will) By requiring a timestamp for changes, you can track which professionals are being cavalier with their change management. You can also find out if someone is getting into the system after hours to cause problems or attempt to compromise things. Even if a change has to go in “immediately” due to downtime or an emergency, knowing why it had to be checked in right away is a clue to finding problems that recur in the network.
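Once every change carries a timestamp, flagging an out-of-window change only takes a few lines. A sketch, with the 10pm-to-2am window as an assumed example rather than a recommendation:

    # Flag changes stamped outside an agreed maintenance window.
    from datetime import datetime, time

    MAINT_START = time(22, 0)   # 10:00 pm local
    MAINT_END = time(2, 0)      #  2:00 am local

    def in_maintenance_window(stamp: datetime) -> bool:
        t = stamp.time()
        # The window wraps past midnight, so it's two comparisons, not one range.
        return t >= MAINT_START or t <= MAINT_END

    change_time = datetime(2017, 9, 20, 14, 30)  # 2:30 pm -- someone being cavalier
    if not in_maintenance_window(change_time):
        print(f"Change at {change_time} was made outside the maintenance window")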

Where is a two-pronged question. It’s important to check where the changes are going to be applied. Is it going to be done to all switches in the organization? Or just a set in a remote office? Sanity checking via documentation will keep you from bricking your entire organization in one fell swoop. Likewise, knowing where the change is being checked in from is important. Is a remote office trying to change config on HQ switches? Is a remote engineer dialed in making changes related to an open support case? Is someone from a foreign nation making changes via VPN at 4:30am local time? In every case, you’d really want to know what’s going on before those changes get made.
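Both prongs are easy to check automatically once the change record carries a target list and a source address. A sketch, with the switch names, admin network, and source IP all made up for illustration:

    # Two "Where" checks: is the change touching more devices than the ticket
    # says, and did it come from a source address we expect?
    import ipaddress

    TICKET_SCOPE = {"sw-branch-07"}                          # what the ticket says
    TARGET_SWITCHES = {"sw-branch-07", "sw-hq-01"}           # what the job will touch
    TRUSTED_SOURCES = ipaddress.ip_network("10.10.0.0/16")   # assumed admin network

    def sanity_check(source_ip: str) -> list[str]:
        warnings = []
        extra = TARGET_SWITCHES - TICKET_SCOPE
        if extra:
            warnings.append(f"Change touches switches not in the ticket: {sorted(extra)}")
        if ipaddress.ip_address(source_ip) not in TRUSTED_SOURCES:
            warnings.append(f"Change submitted from untrusted address {source_ip}")
        return warnings

    for warning in sanity_check("203.0.113.50"):
        print("WARNING:", warning)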

Why is the one that will trip up everything. If you don’t believe me, I’d like to give you the top two reasons why Windows Server 2003 is shut down and rebooted with the shutdown justification dialog box:

  1. a;lkdjfalkdflasdfkjadlf;kja;d
  2. JUST ****ing SHUT DOWN!!!!!

People don’t like justifying their decisions. Even when I worked for Gateway 2000 on their national help desk, our required call documentation was a bit spotty when it came to justification for changes. Why did you decide to FDISK and reload? Why are you going into the registry to fix the icon colors? Change justification is half of documentation. It gives people something to audit. It gives people a way to look back and figure out why you started down a particular path of reasoning when solving a problem. It also provides context for you after the fact when you can’t figure out why you did it the way you did.


Tom’s Take

Automation isn’t going to take away your job. Automation is going to do the jobs you hate doing. It’s going to make your life easier to concentrate on the tasks that need to be done by freeing you from the tasks that should be done and aren’t. If we can make automation document our networks for just six months, I think you’ll find the value in programming things to work this way. I also think you’ll be happier with the level of detail on your network. And once you can prove the value of automating just one task to your teams, I’m sure they’ll see the value of increasing automation all around.

Subscription Defined Networking

Cisco’s big announcement this week ahead of Cisco Live was their new Intent-based Networking push. This new portfolio does include new switching platforms in the guise of the Catalyst 9000 series, but the majority of the innovation is coming in the software layer. Articles released so far tout the ability of the network to sense context, provide additional security based on advanced heuristics, and more. But the one thing that seems to be getting little publicity is the way you’re going to be paying for software going forward.

The Bottom Line

Cisco licensing has always been an all-or-nothing affair for the most part. You buy a switch and you have two options – basic L2 switching or everything the switch supports. Routers are similar. Through the early 15.x releases, Cisco routers could be loaded with an advanced image that ran every service imaginable. Those early 15.x releases gave us some attempts at role-based licensing for packet, voice, and security device routers. However, those efforts were rolled back due to customer response.

Shockingly, voice licensing has been the most progressive part of Cisco’s licensing model for a while now. CallManager 4.x didn’t even bother. Hook things up and they work. 5.x through 9.x used Device License Units (DLUs) to help normalize the cost of expensive phones versus their cheaper lobby and break room brethren. But even this model soon gave way to the current Unified Licensing models that attempt to bundle phones with software applications to mimic how people actually communicate in today’s offices.

So where does that leave Cisco? Should they charge for every little thing you could want when you purchase the device? Or should Cisco leave it wide open to the world and give users the right to decide how best to use their software? If John Chambers had still been in charge of Cisco, I know the answer would have been very similar to what we’ve seen in the past. Uncle John hated the idea of software revenue cannibalizing their hardware sales. Like many stalwarts of the IT industry, Chambers believed that hardware was king and software was an afterthought.

Pay As You Go

But Chuck Robbins has different ideas. Alongside the new capabilities of Cisco’s Intuitive Network plan, they have also introduced a software subscription model. Now, if you want to use all these awesome new features for the future of the network according to Cisco, you are going to pay for them. And you’re going to pay every year you use them.

It’s not that radical of a shift in mindset if you look at the market today. Cable subscriptions are going away in favor of specialized subscriptions to specific content. Custom box companies will charge you a monthly fee to ship you random (and not-so-random) items. You can even set up a subscription to buy essential items from Amazon and Walmart and have them shipped to your home regularly.

People don’t mind paying for things that they use regularly. And moving the cost model away from capital expenditure (CapEx) to an operational expenditure (OpEx) model makes all the sense in the world for Cisco. Studies from industry companies like Infinity Research have said that Infrastructure as a Service (IaaS) growth is going to be around 46% over the next 5 years. That growth money is coming from organizations shifting CapEx budgets to OpEx budgets. For traditional vendors like Cisco, EMC, and Dell, it’s increasingly important for them to capture that budget revenue as it moves into a new pool designed to be spent a month or year at a time instead of once every five to seven years.

The end goal for Cisco is to replace those somewhat frequent hardware expenditures with more regular revenue streams from OpEx budgets. If you’re nodding your head and saying, “That’s pretty obvious…” you are likely from the crowd that couldn’t understand why Cisco kept doubling down on bigger, badder switching during the formative years of SDN. Cisco’s revenue model has always looked a lot like IBM and EMC. They need to sell more boxes more frequently to hit targets. However, SDN is moving the innovation away from the hardware, where Cisco is comfortable, and into the software, where Cisco has struggled as of late.

Software development doesn’t happen in a vacuum. It doesn’t occur because you give away features designed to entice customers into buying a Nexus 9000 instead of a Nexus 6000. Software development only happens when people are paying money for the things you are developing. Sometimes that means that you get bonus features that they figure out in the process of making the main feature. But it surely means that the people focused on making the software want to get it right the first time instead of having to ship endless patches to make it work right eventually. Because if your entire revenue model comes from software, it had better be good software that people want to buy and continue to pay for.


Tom’s Take

I think Chuck Robbins is dragging Cisco into the future kicking and screaming. He’s streamlined the organization by getting rid of the multitude of “pretenders to the throne” and tightening up the rest of the organization from a collection of competing business units into a logically organized group of product lines that can be marketed. The shift toward a forward-looking software strategy built on recurring revenue that isn’t dependent on hardware is the master stroke. If you ever had any doubts about what kind of ship Chuck was going to sail, this is your indicator.

In seven years, we’re not going to be talking about Cisco in the same way we did before. Much like we don’t talk about IBM like we used to. The IBM that exists today bears little resemblance to Tom Watson’s company of the past. I think that the Cisco of the future will bear the same superficial resemblance to John Chambers’s Cisco as well. And that’s for the better.

Cisco and Viptela – The Price of Development Debt

Cisco finally pulled themselves into the SD-WAN market by acquiring Viptela on Monday. Viptela was considered to be one of, if not the, leading SD-WAN vendors in the market. That Cisco decided to pick them as an acquisition target isn’t completely surprising. But one might wonder: why?

IWANna New Debt

Cisco’s premier strategy for SD-WAN up until last week was IWAN. This is their catch-all solution designed to take the various component pieces being offered by SD-WAN solutions and replicate them on Cisco hardware. IWAN has served as a vehicle for Cisco to push things like the APIC-EM solution, Cisco ONE licensing, and a variety of other enhanced technologies like NBAR and PfR.

Cisco has packaged these technologies together because they have spent a couple of decades building these protocols up to be the best at what they do in the industry. NBAR was the key to application QoS years ago. PfR and OER were the genesis of Cisco having the ability to intelligently route packets to destinations. These protocols have formed the cornerstone of their platform for many, many years.

So why is IWAN such a mess? If you have the best of breed technology built into a router that makes the packets fly across the Internet at lightning speeds how is it that companies like Viptela were eating Cisco’s lunch in the SD-WAN space? It’s because those same best-of-breed protocols are to blame for the jigsaw puzzle of IWAN.

If you are the product manager for a protocol like NBAR or PfR, you want it to be adopted by as many people as possible. Wide adoption guarantees you’re going to have a job tomorrow or even next year. The people working on EIGRP and OSPF are safe. But if you get left behind technologically, you’re in for rough seas. Just ask the folks that managed LANE. But if you can attach yourself to a movement that’s got some steam, you’re in the driver’s seat.

At the same time, you want your protocol or product to be the best at what it does. And sometimes being the best means you don’t compromise. That’s great when you are the only thing running on the system. But when you’re trying to get protocols to work together to create something bigger, you often find that compromises are not just a good idea, they’re necessary. But how do you handle it when the product manager for NBAR and the product manager for IP SLA get into a screaming match over who is going to blink first?

Using existing protocols and products is a great idea because it means you don’t have to reinvent the wheel every time you design something. But, with that wheel comes the technical debt of development. Given the chance to reuse something that thousands, if not millions, of dollars of R&D have gone into, companies like Cisco will jump at the chance to get some more longevity out of a protocol.

Not Pokey, But Gumby

Now, let’s look at a scrappy startup like Viptela. They have to build their protocols from the ground up. Maybe they have the opportunity to leverage some open source projects or some basic protocol implementations to get off the ground. That means they are starting from essentially square one. It also means they are starting off with very little technical and development debt.

When Viptela builds their application monitoring stack or their IPSec VPN stack, they aren’t trying to build the best protocol for every possible situation that could ever be encountered by a wide variety of customers. They are just trying to build a protocol that works. And not just a protocol that works on its own. They want a protocol that works with everything else they are building.

When you’re forced to do everything from scratch, you find that you avoid making some of the same choices that you were forced to make years ago. The lack of technical and development debt also means you can take a new direction with things. Don’t want to support pre-shared key IPSec VPNs? Don’t build it into the protocol. Don’t care to have some of the quirks of PfR? Build something different that meets your needs. You have complete control.

Flexibility is why SD-WAN vendors were able to dominate the market for the past two years. They were able to adapt and change quickly because they didn’t need to keep trying to make systems integrate on top of the tech and dev debt they incurred during the product lifecycle. That lets them concentrate on features that customers want, not on trying to integrate features that management has decreed must be included because the product manager was convincing in the last QBR.


Tom’s Take

In the end, the acquisition of Viptela by Cisco was as much about reduction of technical and development debt in their SD-WAN offerings as it was trying to get ahead in the game. They needed something that could be used as-is without the need to rely on any internal development processes. I alluded to this during our Network Collective Off-The-Cuff show. Without the spin-out model available any longer, Cisco is going to have to start making tough decisions to get things like this done. Either those decisions are made via reduction of business units without integration or through larger dollar signs to acquire solutions to provide the cohesion they need.

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer them in terms of their network. Things like network slicing were the first big pieces to be put up before things like orchestration, programmability, and APIs were really brought to the fore. People were trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.
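As a rough illustration of classification close to the origin, here’s a Python sketch that maps traffic to a class and a DSCP marking. The port-to-application table is deliberately simplistic and the values are common conventions, not any product’s actual behavior:

    # Classify traffic by destination port and pick a DSCP value to mark it with.
    DSCP = {"voice": 46, "video": 34, "web": 0}  # EF, AF41, best effort

    APP_CLASSES = {
        5060: "voice",   # SIP signaling (illustrative)
        443: "web",
        80: "web",
    }

    def classify(dst_port: int) -> str:
        return APP_CLASSES.get(dst_port, "web")

    def dscp_for(dst_port: int) -> int:
        """Return the DSCP value to stamp on traffic toward dst_port."""
        return DSCP[classify(dst_port)]

    print(dscp_for(5060))  # 46 -> treated as voice for the rest of its journey
    print(dscp_for(443))   # 0  -> plain web browsing, best effort

Real classification engines look far deeper than ports, but the principle is the same: decide what the traffic is as early as possible and carry that decision with the packet.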

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying the applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean making sure web browsing traffic is sent through local internet links while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.
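A sketch of that decision in Python; the application list, SLA thresholds, and path names are assumptions for illustration, not any SD-WAN product’s defaults:

    # Pick a WAN path: web traffic breaks out locally, corporate applications
    # ride the encrypted tunnel as long as it is meeting its SLA.
    CORPORATE_APPS = {"voip", "erp", "email"}
    TUNNEL_SLA = {"max_latency_ms": 150.0, "max_loss_pct": 1.0}

    def pick_path(app: str, tunnel_latency_ms: float, tunnel_loss_pct: float) -> str:
        if app not in CORPORATE_APPS:
            return "local-internet"  # plain web browsing breaks out locally
        tunnel_ok = (tunnel_latency_ms <= TUNNEL_SLA["max_latency_ms"]
                     and tunnel_loss_pct <= TUNNEL_SLA["max_loss_pct"])
        return "encrypted-tunnel" if tunnel_ok else "backup-tunnel"

    print(pick_path("web-browsing", 80, 0.1))  # local-internet
    print(pick_path("voip", 80, 0.1))          # encrypted-tunnel
    print(pick_path("voip", 300, 0.1))         # backup-tunnel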

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of technology by increasing the capabilities of these devices while not requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. Instead, you can enjoy the benefits of using SDN and SD-WAN to accelerate your user’s applications.


Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?