The Long And Winding Network Road

How do you see your network? Odds are good it looks like a big collection of devices and protocols that you use to connect everything. It doesn’t matter what those devices are. They’re just another source of packets that you have to deal with. Sometimes those devices are more needy than others. Maybe it’s a phone server that needs QoS. Or a storage device that needs a dedicated transport to guarantee that nothing is lost.

But what does the network look like to the developers and users riding on top of it?

Work Is A Highway

When is the last time you thought about how the network looks to people? Here’s a thought exercise for you:

Think about a highway. Think about all the engineering that goes into building a highway. How many companies are involved in building it. How many resources are required. Now, think of that every time you want to go to the store.

It’s a bit overwhelming. There are dozens, if not hundreds, of companies that are dedicated to building highways and other surface streets. Perhaps they are architects or construction crews or even just maintenance workers. But all of them have a function. All for the sake of letting us drive on roads to get places. To us, the road isn’t the primary thing. It’s just a way to go somewhere that we want to be. In fact, the only time we really notice the road is when it is in disrepair or under construction. We only see the road when it impacts our ability to do the things it enables.

Now, think about the network. Networking professionals spend their entire careers building bigger, faster networks. We take weeks to decide how best to handle routing decisions or agonize over which routing protocols are necessary to make things work the way we want. We make it our mission to build something that will stand the test of time and be the Eighth Wonder of the World. At least until it’s time to refresh it again for slightly faster hardware.

The difference between these two examples is the way that the creators approach their creation. Highway workers may be proud of their creation but they don’t spend hours each day extolling the virtues of the asphalt they used or the creative way they routed a particular curve. They don’t demand respect from drivers every time someone jumps on the highway to go to the department store.

Networking people have a visibility problem. They’re too close to their creation to see that it’s just another road to developers. Developers spend all their time worrying about memory allocation and UI design. They don’t care whether the network runs at 10GbE or 100GbE. They want a service that transports packets back and forth to their destination.

The Old New Network

We’ve had discussions in the last few years about everything under the sun that is designed to make networking easier. VXLAN, NFV, Service Mesh, Analytics, ZTP, and on and on. But these things don’t make networking easier for users. They make networking easier for networking professionals. All of these constructs are really just designed to help us do our jobs a little faster and make things work just a little bit better than they did before.

Imagine all the work that goes into electrical power generation. Imagine the level of automation and monitoring necessary to make sure that power gets from the generation point to your house. It’s impressive. And yet, you don’t know anything about it. It’s all hidden away somewhere unimpressive. You don’t need to describe the operation of a transformer to be able to plug in a toaster. And no matter how much that technology changes it doesn’t impact your life until the power goes out.

Networking needs to be a utility. It needs to move away from the old methods of worrying about how we create VLANs and routing protocols and instead focus on disappearing, just like the power grid. We should be proud of what we build. But we shouldn’t let that pride curdle into hubris about what we do. Networking professionals are like highway workers or electrical company employees. We work hard behind the scenes to provide transport for services. The cloud has changed the way we look at the destination for those services. And it’s high time we examined our role in things as well.


Tom’s Take

Telco workers. COBOL programmers. Data entry specialists. All of these people used to be the kings and queens of their field. They were the people with the respect of hundreds. They were the gatekeepers because their technology and job roles were critical. Until they weren’t any more. Networking is getting there quickly. We’ve been so focused on making our job easy to do that we’ve missed the point. We need to be invisible. Just like a well-built road or a functioning electrical grid. We are not the goal of infrastructure. We’re just a part of it. And the sooner we realize that and get out of our own way, the sooner we’ll find that the world is a much better place for everyone involved in IT, from developers to users.


Are We Seeing SD-WAN Washing?

You may have seen a tweet from me last week referencing a news story that Fortinet was now in the SD-WAN market:

It came as a shock to me because Fortinet wasn’t even on my radar as an SD-WAN vendor. I knew they were doing brisk business in the firewall and security space, but SD-WAN? What does it really mean?

SD Boxes

Fortinet’s claim to be a player in the SD-WAN space brings the number of vendors doing SD-WAN to well over 50. That’s a lot of players. But how did they come out of left field to land a deal rumored to be worth over a million dollars in a space they weren’t even really playing in six months ago?

Fortinet makes edge firewalls. They make decent edge firewalls. When I used to work for a VAR we used them quite a bit. We even used their smaller units as remote appliances to allow us to connect to remote networks and do managed maintenance services. At no time during that whole engagement did I ever consider them to be anything other than a firewall.

Fast forward to 2018. Fortinet is still selling firewalls. Their website still focuses on security as the primary driver for their lines of business. They do talk about SD-WAN and have a section for it with links to whitepapers going all the way back to May. They even have a contributed article on SDxCentral from back in February. However, that far back the article reads more like a security company saying its secure endpoints could be considered SD-WAN.

This reminds me of stories of Oracle counting database licenses as cloud licenses so they could claim to be the fourth largest cloud provider. Or if a company suddenly decided that every box they sold counted as an IPS because it had a function that could be enabled for a fee. The numbers look great when you start counting them creatively but they’re almost always a bit of a fib.

Part Time Job

Imagine if Cisco suddenly decided to start counting ASA firewalls as container engines because of a software update that allowed you to run Kubernetes on the box. People would lose their minds. Because no one buys an ASA to run containers. So for a company like Cisco to count them as part of a container deployment would be absurd.

The same can be said for any company that has a line of business that is focused on one specific area and then suddenly decides that the same line of business can be double-counted for a new emerging market. It may very well be the case that Fortinet has a huge deployment of SD-WAN devices that customers are very happy with. But if those edge devices were originally sold as firewalls or UTM devices that just so happened to be able to run SD-WAN software, it shouldn’t really count, should it? A customer who thought they were buying a firewall wouldn’t really believe it was actually an SD-WAN router.

The problem with this math is that everything gets inflated. Maybe those SD-WAN edge devices are dedicated. But if they run Fortinet’s security suite, are they also being counted in the UTM numbers? Is Cisco going to start counting every ISR sold in the last five years as a Viptela deployment after the news this week that Viptela software can run on all of them? Where exactly are we going to draw the line? Is it fair to say that every x86 chip sold in the last 10 years should count as a VMware license because you could conceivably run a hypervisor on it? It sounds ridiculous when you put it like that, but only because of the timelines involved. Some crazier ideas have been put forward in the past.

The only way that this whole thing really works is if the devices are dedicated to their function and are only counted for the purpose they were installed and configured for. You shouldn’t get to add a UTM firewall to both the security side and the SD-WAN side. Cisco routers should only count as traditional layer 3 or SD-WAN, not both. If you try to push the envelope to put up big numbers designed to wow potential customers and get a seat at the big table, you need to be ready to defend those numbers when people ask tough questions about the math behind them.


Tom’s Take

If you had told me last year that Fortinet would sell a million dollars worth of SD-WAN in one deal, I’d have asked you who they bought to get that expertise. Today, it appears they are content with saying their UTM boxes with a central controller count as SD-WAN. I’d love to put them up against Viptela or VeloCloud or even CloudGenix and see what kind of advanced feature sets they produce. If it’s merely a WAN aggregation box with some central control and a security suite, I don’t think it’s fair to call it true SD-WAN. Just a rinse and repeat of some washed-up marketing ideas.

Is Training The Enemy of Progress?

Peyton Maynard-Koran was the keynote speaker at InteropITX this year. If you want to catch the video, check this out:

Readers of my blog may remember that Peyton and I don’t see eye-to-eye on a few things. Last year I even wrote up some thoughts about vendors and VARs that were a direct counterpoint to many of the things that have been said. It has even gone further with a post from Greg Ferro (@EtherealMind) about the intelligence level of the average enterprise IT customer. I want to take a few moments and explore one piece of this puzzle that keeps being brought up: You.

Protein Robots

You are a critical piece of the IT puzzle. Why? You’re a thinking person. You can intuit facts and extrapolate cause from nothing. You are NI – natural intelligence. There’s an entire industry of programmers chasing what you have. They are trying to build it into everything that blinks or runs code. The first time that any company has a real breakthrough in true artificial intelligence (AI) beyond complicated regression models will be a watershed day for us all.

However, you are also the problem. You have requirements. You need a salary. You need vacation time. You need benefits and work/life balance to keep your loved ones happy. You sometimes don’t pick up the phone at 3am when the data center blinks out of existence. It’s a challenge to get you what you need while still extracting the value that is needed from you.

Another awesome friend, Andrew von Nagy (@RevolutionWiFi), coined the term “protein robots”. That’s basically what we are. Meatbags. Walking brains that solve the problems that make BGP fall over when it’s fed the wrong routing info or make wireless signals dissipate into thin air. We’re a necessary part of the IT equation.

Sure, people are trying to replace us with automation and orchestration. It’s the most common complaint about SDN that I’ve heard to date: automation is going to steal my job. I’ve railed against that for years in my talks. Automation isn’t going to steal your job, but it will get you a better one. It will get you a place in the organization to direct and delegate and not debug and destroy. In the end, automation isn’t coming for your job as long as you’re trying to get a better one.

All Aboard The Train!

The unseen force that’s opposing upward mobility is training. In order to get a better job, you need to be properly trained to do it. Maybe it’s a lot of experience from running a network for years. Perhaps it’s a training class you get to go to or a presentation online you can watch. No matter what, you need new skills to handle new job responsibilities. Even if you’re breaking new ground in something like AI development you’re still acquiring new skills along the way. Hopefully, if you’re in a new field, you’re writing them all down for people to follow in your footsteps one day.

However, training is in opposition to what your employer wants for you. It sounds silly, doesn’t it? Yet, we still hear the quote attributed to W. Edwards Deming – “What happens if we train our people and they leave? What happens if we don’t and they stay?” Remember, as we said above you are a protein robot that needs things like time off and benefits. All of those things are seen as an investment in you.

Training is another investment that companies like to tout. When I worked at a VAR, we considered ourselves some of the most highly trained people around. The owner told me when I started that he was “going to put half a million dollars into training me.” When I asked him about that number after five years he told me it felt like he put a kid through college. And that was before my CCIE. The more trained people you have, the easier your job becomes.

But an investment in training can also backfire. Professionals can take that training and put it to use elsewhere. They can go to a different job and take more money. They can refuse to do a job until they are properly trained. The investment a company makes in training often goes unrealized relative to the amount of money it costs to provide.

It doesn’t help that training prices have skyrocketed. It used to be that I just needed to go down to the local bookstore and find a copy of a CCNA study guide to get certified. I knew I’d reached a new point in my career when I couldn’t buy my books at the bookstore. Instead, I had to seek out the knowledge that I needed elsewhere. And yes, sometimes that knowledge came in the form of bootcamps that cost thousands of dollars. Lucky for me that my employer at the time looked at that investment and said that it was something they would pick up. But I know plenty of folks that can’t get that kind of consideration. Especially when the training budget for the whole department won’t cover one VMware ICM class.

Employers don’t want employees to be too trained. Because you have legs. Because you can get fed up. Because you can leave. The nice thing about making investments in hardware and software is that they’re stuck at a location. A switch is an asset. A license for a feature can’t be transferred. Objects are tied to the company. And their investments can be realized and recovered. Through depreciation, or by listing something as an asset that provides competitive advantage, companies can recover the value they put into a physical thing.

Professionals, on the other hand, aren’t as easy to deal with. Sure, you can list a CCIE as an important part of your business. But what happens if they leave? What happens when they decide they need a raise? Or, worse yet, when they need to spend six months studying for a recertification? The time taken to learn things is often not factored into the equation when we discuss how much training costs. Some of my old coworkers outright refused to get certified if they had to study on their own time. They didn’t want their free non-work time taken up by reading over MCSE books or CCNA guides. Likewise, the company didn’t want to waste billable hours from someone not providing value. It was a huge catch-22.

Running In Place

Your value is in your skillset at the company you work for. They derive that value from you. So, they want you to stay where you are. They want you trained just enough to do your job. They don’t want you building a skillset that could be valuable elsewhere before they’ve gotten their value out of you. And they definitely don’t want you to take your skillset to a competitor. How can you fix that?

  1. Don’t rely on your company to pay for your training. There are a lot of great resources out there that you can use to learn without needing to drop big bucks for a bootcamp. Use bootcamps to solidify your learning after the fact. Honestly, if you’re in a bootcamp to learn something, you’re in the wrong class. Read blogs. Buy books from Amazon. Get your skills the old-fashioned way.
  2. Be ready to invest time. Your company doesn’t want you using their billable time for learning. So that means you are going to make an investment instead. The best part is that even an hour of studying instead of binge-watching another episode of House is time well spent on gaining another important skill. And if it happened on your own time, you’re not going to have to pay that back.
  3. Be ready to be uncomfortable. It’s going to happen. You’re going to feel lost when you learn something new. You’re going to make mistakes while you’re practicing it. And, honestly, you’re going to feel uncomfortable going to your boss to ask for more money once you’re really good at what you’re doing. If you’re totally comfortable when learning something new, you’re doing it wrong.

Tom’s Take

Companies want protein robots. They want workers that give 125% at all times and can offer a wide variety of skills. They want compliant workers that never want to go anywhere else. Don’t be that robot. Push back. Learn what you can on your time. Be an asset but not an immobile one. You need to know more to escape the SDN Langoliers that are trying to eat your job. That means you need to be on the move to get where you need to be. And if you sit still you risk becoming a permanent asset. Just like the hardware you work on.

VMware and VeloCloud: A Hedge Against Hyperconvergence?

VMware announced on Thursday that they are buying VeloCloud. This was a big move in the market that immediately set off a huge discussion about the implications. I had originally thought AT&T would buy VeloCloud based on their past relationship, but AT&T’s acquisition of Vyatta from Brocade over the summer should have been a hint that wasn’t going to happen. Instead, VMware swooped in and picked up the company for an undisclosed amount.

The conversations have been going wild so far. Everyone wants to know how this is going to affect the relationship with Cisco, especially given that Cisco put money into VeloCloud in both 2016 and 2017. Given the acquisition of Viptela by Cisco earlier this year, it’s easy to see that these two companies might find themselves competing for market share in the SD-WAN space. However, I think that this is actually a different play from VMware. One that’s striking back at hyperconverged vendors.

Adding The Value

If you look at the marketing coming out of hyperconvergence vendors right now, you’ll see there’s a lot of discussion around platform. Fast storage, small footprints, and the ability to deploy anywhere. Hyperconverged solutions are also starting to focus on the hot new trends in compute, like containers. Along the way this means that traditional workloads that run on VMware ESX hypervisors aren’t getting the spotlight they once did.

In fact, the leading hyperconvergence vendor, Nutanix, has been aggressively selling their own hypervisor, Acropolis, as a competitor to VMware. They tout new features and easy configuration as the major reasons to use Acropolis over ESX. The push by Nutanix is to get their customers off of ESX and onto Acropolis to capture a share of the budget those companies are currently paying to VMware.

For VMware, it’s a tough sell to keep their customers on ESX. There’s a very big ecosystem of software out there that runs on ESX, but if you can replicate a large portion of it natively, as Acropolis and other hypervisors do, there’s not much of a reason to stick with ESX. And if the VMware solution is more expensive over time, you will find yourself choosing the cheaper alternative when the negotiations come up for renewal.

For VMware NSX, it’s an even harder road. Most of the organizations that I’ve seen deploying hyperconverged solutions are not huge enterprises with massive centralized data centers. Instead, they are the kind of small-to-medium businesses that need some of those functions but are very budget-conscious. They’re also very geographically diverse, with smaller branch offices taking the place of a few massive headquarters locations. While NSX has some advantages for these companies, it’s not the best fit for them. NSX works optimally in a data center with high-speed links and a well-built underlay network.

vWAN with VeloCloud

So how is VeloCloud going to play into this? VeloCloud already has a lot of advantages that make them a great complement to VMware’s model. They have built-in multi-tenancy. Their service delivery is virtualized. They were already looking to move toward service providers as their primary market, both network service providers and managed service providers. Their interests were already aligning quite well with VMware’s.

The key advantage for VMware with VeloCloud is how it will allow NSX to extend into the branch. Remember how I said that NSX loves an environment with a stable underlay? That’s what VeloCloud can deliver. A stable, encrypted VPN underlay. An underlay that can be managed from one central location, or in the future, perhaps even a vCenter plugin. That gives VeloCloud a huge advantage to build the underlay to get connectivity between branches.

Now, with an underlay built out, NSX can be pushed down into the branch. Branches can now use all the great features of NSX like analytics, some of which will be bolstered by VeloCloud, as well as microsegmentation and other heretofore unseen features in the branch. The large headquarters data center is now available in a smaller remote size for branches. That’s a huge advantage for organizations that need those features in places that don’t have data centers.

And the pitch against using other hypervisors with your hyperconverged solution? NSX works best with ESX. And the real value in keeping ESX in your remote branches isn’t cost savings or features you may one day hope to use if your WAN connection gets upgraded to ludicrous speed. Instead, VeloCloud can be deployed between your HQ or main office and your remote sites to bring those NSX functions down into your environment over a secure tunnel.

While this does compete a bit with Cisco from a delivery standpoint, the overlap isn’t complete. In this scenario, VeloCloud is a service delivery platform for NSX and not a piece of hardware at the edge. Absent VeloCloud, this kind of setup could still be replicated with a Cisco Viptela box running the underlay and NSX riding on top in the overlay. But I think the market that VMware is going after is going to be building this from the ground up with VMware solutions from the start.


Tom’s Take

Not every issue is “Us vs. Them”. I get that VMware and Cisco seem to be spending more time moving closer together on the networking side of things. SD-WAN is a technology that was inevitably going to bring Cisco into conflict with someone. The third generation of SD-WAN vendors is really companies that didn’t have a proper offering buying up all the first-generation startups. Viptela and VeloCloud are now off the market, and they’ll soon be integral parts of their respective parents’ strategies going forward. Whether VeloCloud is focused on enabling cloud connectivity for VMware or retaking the branch from the hyperconverged vendors is going to play out in the next few months. But instead of focusing on conflict with anyone else, VeloCloud should be judged by the value it brings to VMware in the near term.

Back In The Saddle Of A Horse Of A Different Color

I’ve been asked a few times in the past year if I missed being behind a CLI screen or if I ever got a hankering to configure some networking gear. The answer is a guarded “yes”, but not for the reason that you think.

Type Casting

CCIEs are keyboard jockeys. Well, the R&S folks are for sure. Every exam has quirks, but the R&S folks have quirky QWERTY keyboard madness. We spend a lot of time not just learning commands but learning how to input them quickly without typos. So we spend a lot of time with keys and a lot less time with the mouse poking around in a GUI.

However, the trend in networking has been to move away from these kinds of input methods. Take the new Aruba 8400, for instance. The ArubaOS-CX platform that runs it seems to have been built to require the least amount of keyboard input possible. The whole system runs with an API backend and presents a GUI that is a series of API calls. There is a CLI, but anything that you can do there can easily be replicated elsewhere by some other function.
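To make that concrete, here’s a rough sketch of what an API-first workflow looks like from Python. To be clear, this is not the actual ArubaOS-CX REST API; the endpoint paths and payload fields below are hypothetical stand-ins. But the pattern of authenticating, POSTing a JSON description of what you want, and logging out is exactly the kind of plumbing those GUIs are built on.

import requests

SWITCH = "https://10.0.0.1"

# Hypothetical REST workflow. The endpoint paths and payload fields
# are illustrative stand-ins, not the documented ArubaOS-CX API.
session = requests.Session()
session.verify = False  # lab switch with a self-signed certificate

# Authenticate once; the session object carries the cookie afterward
session.post(f"{SWITCH}/rest/login",
             data={"username": "admin", "password": "secret"})

# One JSON object describes the VLAN instead of a series of CLI lines
session.post(f"{SWITCH}/rest/vlans",
             json={"id": 100, "name": "users", "admin_state": "up"})

session.post(f"{SWITCH}/rest/logout")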

Why would a company do this? To eliminate wasted effort. Think to yourself how many times you’ve typed the same series of commands into a switch. VLAN configuration, vty configs, PortFast settings. The list goes on and on. Most of us even have some kind of notepad that we keep the skeleton configs in so we can paste them into a console port to get a switch up and running quickly. That’s what Puppet was designed to replace!
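In fact, that notepad skeleton is already half of a template. Here’s a minimal sketch of the same idea with Python and Jinja2. The VLAN numbers, names, and uplink port are invented for illustration, but the pattern is the point: write the skeleton once, render it with different values forever.

from jinja2 import Template

# The notepad skeleton, with the parts that change pulled out as variables
SKELETON = Template("""\
{% for vlan in vlans %}vlan {{ vlan.id }}
 name {{ vlan.name }}
{% endfor %}interface {{ uplink }}
 switchport mode trunk
spanning-tree portfast default
""")

# Fill in the blanks per switch instead of retyping the whole block
print(SKELETON.render(
    vlans=[{"id": 10, "name": "users"}, {"id": 20, "name": "voice"}],
    uplink="GigabitEthernet0/1",
))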

By using APIs and other input methods, Aruba and other companies are hoping that we can either build tools that accept the minimum input necessary to configure switches or eliminate a large portion of the retyping needed to build them in the first place. It’s not the first command you type into a switch that kills you. It’s the 45th time you paste the command in. It’s the 68th time you get bored typing the same set of arguments from a remote terminal and mess one up in a way that requires a physical presence on site to undo your mistake.

Typing is boring, error-prone, and costs significant time for little gain. Building scripts, programs, and platforms that take care of all that messy input for us makes us more productive. But it also changes the way we look at systems.

Bird’s Eye Views

The other reason why my fondness for keyboard jockeying isn’t as great as it could be is because of the way that my perspective has shifted thanks to the new aspects of networking technology that I focus on. I tell people that I’m less of an engineer now and more of an architect. I see how the technologies fit together. I see why they need to complement each other. I may not be able to configure a virtual link without documentation or turn up a storage LUN like I used to, but I understand why flash SSDs are important and how APIs are going to change things.

This goes all the way back to my conversations at VMunderground years ago about shifting the focus of networking and where people will go. You remember? The “ditch digger” discussion?


This is more true now than ever before. There are always going to be people racking and stacking. Or doing basic types of configuration. These folks are usually trained with basic knowledge of their task with no vision outside of their job role. Networking apprentices or journeymen as the case may be. Maybe one out of ten or one out of twenty of them are going to want to move up to something bigger or better.

But for the people that read blogs like this regularly the shift has happened. We don’t think in single switches or routers. We don’t worry about a single access point in a closet. We think in terms of systems. We configure routing protocols across multiple systems. We don’t worry about a single port VLAN issue. Instead, we’re trying to configure layer 2 DCI extensions or bring racks and pods online at the same time. Our visibility matters more than our typing skills.

That’s why the next wave of devices like the Aruba 8400 and the Software Defined Access things coming from Cisco are more important than simple checkboxes on a feature sheet. They remove the visibility of protocols and products and instead give us platforms that need to be configured for maximum effect. The gap between the people that “rack and stack” and those that build the architecture that runs the organization has grown, but only because the middle ground of administration is changing so fast that it’s tough to keep up.


Tom’s Take

If I were to change jobs tomorrow I’m sure that I could get back in the saddle with a couple of weeks of hard study. But the question I keep asking myself is “Why would I want to?” I’ve learned that my value doesn’t come from my typing speed or my encyclopedia of networking command arguments any more. It comes from a greater knowledge of making networking work better and integrate more tightly into the organization. I’m a resource, not a reactionary. And so when I look to what I would end up doing in a new role I see myself learning more and more about Python and automation and less about what new features were added in the latest OSPF release on Cisco IOS. Because knowing how to integrate technology at a high level is more valuable to everyone than just knowing the commands to type to turn the lights on.

Penny Pinching With Open Source

You might have seen this Register article this week which summarized a Future:Net talk from Peyton Maynard-Koran. In the article and the talk, Peyton talks about how the network vendor and reseller market has trapped organizations in a needless cycle of bad hardware and buggy software. He suggests that organizations should focus on their new “core competency” of software development and run whitebox or merchant hardware on top of open source networking stacks. He says that developers can use code that has a lot of community contributions and shares useful functionality. It’s a high and mighty goal. However, I think the open source part of the equation is going to cause some issues.

A Penny For Your Thoughts

The idea behind open source isn’t that hard to comprehend. Everything is available to see and build on. Anyone can contribute and give back to the project and make the world a better place. At least, that’s the theory. Reality is sometimes a bit different.

Many times, I’ve had off-the-record conversations with organizations that are consuming open source resources and projects as a starting point for building something that will end up containing many proprietary pieces. When I ask them about contributing back to those projects or finding ways to advance things, the response is usually silence. Very rarely, I hear that the organization sees their proprietary developments as a “competitive advantage” that they are going to use to either beat a competitor or build a product that saves them a significant amount of money.

The analogy I like to use for open source is the “Take A Penny” dish that many businesses have next to their cash register. The idea is that people can contribute something small to help out others. If someone needs a penny or two to help make payment for a good or service, they can take one. If they have a few left over they can give back. It’s a way to give back a little now and then.

However, there are a couple of types of people that skew the trend for the penny dish. The first is the person that gives back much more than the norm. That person might put a quarter in the dish or put in four or five pennies at every dish they find. They contribute above the norm often and give a lot. The second type of person takes quite a few pennies from the dish and never gives back. They may see the dish as “free money” to be used to augment their own. They don’t care if all the pennies are gone when they finish just as long as they got what they needed out of it.

Extend this metaphor into the open source community. There are quite a few contributors who put significant time and effort into their projects. They may have found a way to do it full time or may be paid by their company to participate in projects. These folks are dedicated to the cause.

On the other side of the metaphor are the people and organizations that take what they want from open source and never give back. They never contribute to the project, even if their enhancements are needed and welcome. They take a free or inexpensive starting point and use it to build a product that could be used internally to give the organization an advantage. The key here is that it’s something used internally. The GPL covers distribution of software that is based on GPL code, but purely internal use generally doesn’t count as distribution at all. An enterprising developer may say to themselves, “As long as I don’t sell it, I don’t have to give my code back.”

Networking For Pocket Change

There are quite a few open source networking projects out there. Quagga, BIRD, and Open vSwitch are great examples of projects that have significant reach and are used by a lot of companies to build great products. However, imagine what would happen if no one gave back to these projects. Imagine what would happen if contributors decided to make their own BGP daemons or OVS-like programs and use them without regard for helping others in the community.
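To see how easy quiet consumption is, consider how little code it takes to build on one of these projects. The sketch below drives Open vSwitch by shelling out to ovs-vsctl (those commands are real, though the bridge and port names are made up). Glue like this piles up inside organizations that never send a single patch upstream.

import subprocess

def ovs(*args: str) -> str:
    """Run an ovs-vsctl command and return its output."""
    result = subprocess.run(["ovs-vsctl", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Build a bridge and attach a port, the kind of glue code that
# quietly accumulates on top of an open source project
ovs("add-br", "br0")
ovs("add-port", "br0", "eth1")
print(ovs("show"))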

Open source software needs developers willing to contribute back to the project. If networking is going to embrace open source projects as Peyton suggested in his talk, it’s going to take a lot more contribution than quiet consumption. Whether or not you agree with the premise that networking vendors are corrupt and evil, you do have to concede that they’ve given us mostly stable protocols like BGP and OSPF. These same vendors have contributed ideas back to the standardization process to improve protocols like spanning tree and Power over Ethernet. Their contributions helped shape what networking is today.

If the next generation of software-based network developers wants to embrace and extend these contributions with open source, they’re going to need to be transparent and communicate with the project leads. They’re also going to need to push back when someone high up the food chain sees the development process as a way to gain an advantage and tries to keep it all secret. If developers aren’t going to give back to the community, it negates the advantages of open source and instead takes us back to the days of networks being one-off creations with no interoperability beyond a few protocols. Islands in a sea of home-grown lava.


Tom’s Take

As anyone that attended Future:Net within earshot of me can attest, I wasn’t overly thrilled with Peyton’s take on the future of networking. I have some deep-seated reservations about the “screw the vendors, build it all yourself” mentality that is pervading organizations today. Not everyone is a development shop. Law firms and schools don’t employ software engineers regularly. If you want to transform those types of users into open source adherents, you need to lead the pack by giving back and talking about what you’re doing with open source. If you’re not willing to lead the way, stop telling people to take the fork in the road.

Network Longevity – Think Car, Not iPhone

One of the many takeaways I got from Future:Net last week was the desire for networks to do more. The presenters were talking about their hypothesized networks being able to make intelligent decisions based on intent and other factors. I say “hypothesized” because almost everyone admitted that we aren’t quite there. Yet. But the more I thought about it, the more I realized that perhaps the timeline for these mythical networks is a bit skewed in favor of refresh cycles that are shorter than we expect.

Software Eats The World

SDN has changed the way we look at things. Yes, it’s a lot of hype. Yes, it’s an overloaded term. But it’s also the promise of getting devices to do much more than we had ever dreamed. It’s about automation and programmability and, now, deriving intent from plain language. It’s everything we could ever want a simple box of ASICs to do for us and more.

But why are we asking so much? Why do we now believe that the network is capable of so much more than it was just five years ago? Is it because we’ve developed a revolutionary new method for making chips that are ten times smarter and a hundred times faster? No. In fact, the opposite is true. Chips are a little faster and a little cheaper, but they aren’t custom construction. Most companies are embracing merchant silicon from companies like Broadcom and Mellanox. So, where’s all this functionality coming from?

It’s the “S” in SDN. We’re seeing software driving this development. Yes, it’s the same thing we’ve been saying for years now. But it’s something more now. Admins and engineers aren’t just excited that network devices are more software-driven now. They’re expecting it. They’re looking to the software to drive the development. Gone are the days of CLI-only access. Now, the interfaces are practically built on API-driven capabilities. The Aruba 8400 seems like it was built for the API first, with the GUI being a superset of API functions. People getting into the networking world are focusing on things like Python first and CLI syntax second. Software is winning again.
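What does “Python first” look like in practice? Something like this hedged sketch using the Netmiko library, pushing one change to a list of switches. The addresses and credentials are placeholders, but the shape of the workflow is the point: describe the change once and let the code do the typing.

from netmiko import ConnectHandler

SWITCHES = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]  # placeholder addresses
CHANGE = ["vlan 200", "name cameras"]  # lines you would otherwise paste by hand

for host in SWITCHES:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="admin", password="secret")
    # Push the same change everywhere, no 68th bored retype required
    print(conn.send_config_set(CHANGE))
    conn.disconnect()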

It’s like the iPhone revolution. The hardware is completely divorced from the software. I can upgrade to the latest version of Apple’s iOS and get most of the functionality I want in the OS. Aside from some hardware-specific things, the majority of the functions work no matter what. Every year, I get new toys to play with. Every 12 months I can enable new functions and have new avenues available to me. If you think back to the first iterations of the iPhone ten years ago, you can see how far software development has driven this device.

Hardware For The Long Haul

However, for all the grandeur and amazement that the iPhone (and Android, for that matter) shows, it has also created a side effect that is causing even more problems in the mobile device world and bleeding over into other electronics. That is the rapid replacement cycle. I mentioned above how awesome it is that you can get new functions every year in your mobile device. But I didn’t mention how people are already looking forward to the newest hardware even though they may have purchased a new phone just 10 months ago.

The desire to get faster by leaps and bounds every year is almost comical at this point. When the new device is delivered and it’s just a bit faster than the last, people are disappointed. Even I was guilty of this when I moved from a 2013 MacBook to a 2016 model. It was only 20% faster. Even with everything else I got in the new package, I found myself chasing performance over every other feature.

The desire to get better performance isn’t just limited to phones and tablets and laptops. When network designers look to increase network performance, they want to do so in a way that makes their users happy. Gone are the days when a simple gigabit network connection would keep someone pleased. We now have to develop gigabit wireless connections to the backbone network, where data is passed to servers connected by 25Gig connections to spines and super spines running at 100Gig (for now). In some cases, that data doesn’t even move any more, and instead short-lived containers are spun up next to it to make things run faster.

Networks aren’t designed like iPhones. They aren’t built to be ripped out every year and replaced with something a little faster and a little shinier. They’re designed more like cars in that regard. They are large purchases that should last for years upon years, with their performance profile dictated by the design at the time. We’ve gone through the phases of running 10Mbit hubs. We upgraded to FastEthernet. We’re now living in a cycle where Gigabit and 10/40 Gigabit switches are ruling the roost. Some of the more forward-thinking folks are adopting 25/50Gigabit and even looking to faster technologies like 400Gig connections for spines.

However, the longevity of hardware remains. Capital expenditure isn’t the easiest thing in the world to accomplish. You need to budget for new devices in the networking world. You need to justify cost savings or new applications. You can’t just buy a fast new switch or two every year and claim that it makes everything better without some kind of justification. Even if you move to the cloud with your strategy, you’re not changing the purchasing model. You’re just trading capital expenditure for operational expenditure. You’re leasing a car instead of buying it. Because if you leave the cloud, you’re back to your old switching model with nothing to show for it.


Tom’s Take

I was pretty hard on the Future:Net presenters because I felt that their talk of ditching money-grubbing vendors for the purity of whitebox switching running open source software was a bit heavy-handed. Most organizations are in the middle of a refresh cycle and can’t afford to rip everything out and replace it today. Moreover, the value in whitebox switching isn’t realized when you install new hardware. It’s realized when you turn your switch into an iPhone. Where software development rules the day and your hardware fades away into the background. We’re not there yet. We’re still buying hardware for the sake of hardware while the software is catching up. Networking equipment is still built and bought like a car and will be for a few years to come. Maybe we can revisit this topic in 3 years and see how far software will have driven us by then.