The Silver (Peak) Lining For HPE and Cloud

You no doubt saw the news this week that HPE announced that they’re buying Silver Peak for just shy of $1 billion. It’s a good exit for Silver Peak and should provide some great benefits for both companies. There was a bit of interesting discussion around where this fits in the bigger picture for HPE, Aruba, and the cloud. I figured I’d throw my hat in the ring and take a turn discussing it.

Counting Your Chickens

First and foremost, let’s discuss where this acquisition is headed. HPE announced it and they’re the ones holding the purse strings. But the acquisition post was courtesy of Keerti Melkote, who runs the Aruba (a Hewlett Packard Enterprise company) side of the house. Why is that? It’s because HPE “reverse acquired” Aruba and sent all their networking expertise and hardware down to the Arubans to get things done.

I would venture to say that acquiring Aruba was the best decision HPE could have made. It gave them immediate expertise in an area where they sorely needed help. It gave Aruba a platform to build on and innovate from. And it ultimately allowed HPE to shore up their campus networking story while trying to figure out how they wanted to extend into the data center.

Aruba was one of the last major networking players to announce a strategy based on SD-WAN. We’ve seen a lot of major acquisitions on that front, including Cisco buying Viptela, VMware buying VeloCloud, Palo Alto Networks buying CloudGenix, and Oracle buying Talari. That last one is going to be important later in this story. Aruba didn’t go out and buy their own SD-WAN solution. Instead, they developed it in-house leveraging the expertise they had with ClearPass. Instead of calling it SD-WAN and focusing on connecting islands together, they used the term SD-Branch to denote the connectivity was more about the users in the branch and not the office itself.

I know that SD-Branch isn’t a term that’s in vogue with most analysts. But it’s important to realize that Aruba was trying to say more about the users than anything else. Hardware was an afterthought in this solution. It wasn’t about an edge gateway, although Aruba had those. It wasn’t about connectivity, even though Aruba could help on that front too. Instead, it was about pushing policy down to the edge and sorting out connectivity for devices. That’s the focus that Aruba had for many years with their wireless roots. It only made sense to leverage those tools to get where they wanted to be on the SD-Whatever front.

Coming Home To Roost

The world of SD-WAN isn’t just about the branch any longer, though. Now, SD-WAN drives cloud on-ramp and edge security. Ironically enough, the drive to include Secure Access Service Edge (SASE) in SD-WAN is way more in line with the original concept of SD-Branch as defined by Aruba a couple of years ago. But you have to have a more well-rounded solution that includes cloud.

Why do you need to worry about the cloud? If you still have to ask that question in 2020 you’re not gonna make it in IT much longer. The cloud operationalizes a lot of things in IT that we have just accepted for years. The way that we do workloads and provisioning has changed on-site as well. If you don’t believe me then listen to HPE. Their biggest focus coming out of HPE Discover this year has been highlighting GreenLake, their on-premises IT-as-a-Service offering. It’s designed to give you cloud-like performance in your data center with cloud-like billing options. And, as marketed, the ability to move workloads back and forth between on-site and in-cloud as needed.

Hopefully, you’re starting to see why Silver Peak was such an important pickup for HPE now. SD-Branch is focused on devices but not services. It’s not designed to provide cloud on-ramp. HPE and Aruba need a solution that gives you the ability to accelerate user adoption of software-as-a-service no matter where it lives, be it in GreenLake or AWS or Azure. Essentially, HPE and Aruba needed their own Talari for their cloud offerings. And that’s what they’re going to get with Silver Peak.

Silver Peak has focused on cloud intelligence to accelerate SaaS for a few years now. They also have a multi-cloud networking solution. Multi-cloud is the way that you get people working between two different clouds, like AWS and GreenLake for example.

When you tie in Silver Peak’s DC-focused SD-WAN solution with Aruba’s existing SD-Branch capabilities, you see how holistic a solution you have now. And because it’s all based on software you don’t have to worry about clunky integration. It can run side-by-side for now, and with the next revision of Aruba Central and its integration with the new Edge Services Platform (ESP), it’s all going to be seamless to use however you want.


Tom’s Take

I think Silver Peak is a good pickup for HPE and Aruba. Just remember that when you hear HPE and networking in the same sentence, you need to think about Aruba. Ultimately, the integration of Silver Peak into Aruba’s SD-Branch solution is going to benefit everyone from users to cloud to software and back again. And it’s going to help position Aruba as a major player in the SD-WAN market. Which is a silver lining on any cloud.

SD-WAN and Technical Debt

Back during Networking Field Day 22, I was having a fun conversation with Phil Gervasi (@Network_Phil) and Carl Fugate (@CarlFugate) about SD-WAN and innovation. I mentioned that it was fascinating to see how SD-WAN companies kept innovating but that bigger, more established companies that had bought into SD-WAN seemed to be having issues catching up. As our conversation continued I realized that technical debt plays a huge role in startup culture across the board, not just with SD-WAN. However, SD-WAN is a great example of technical debt to talk about here.

Any Color You Want In Black

Big companies have investments in supply chains. They have products that are designed in a certain way because it’s the least expensive way to develop the project or it involves using technology developed by the company that gives them a competitive advantage. Think about something like the Cisco Nexus 9000-series switches that launched with Cisco ACI. Every one of them came with the Insieme ASIC that was built to accelerate the policy component of ACI. Whether or not you wanted to use ACI or Insieme in your deployment, you were getting the ASIC in the switch.

Policies like this lead to unintentional constraints in development. Think back five years to Cisco’s IWAN solution. It was very much the precursor to SD-WAN. It was a collection of technologies like Performance Routing (PfR), Application Visibility and Control (AVC), Policy Based Routing (PBR), and Network Based Application Recognition (NBAR). If that alphabet soup of acronyms makes you break out in hives, you’re not alone. Cisco IWAN was a platform very much marked by potential and complexity.

Let’s step back and ask ourselves an important question: “Why?” Why was IWAN so complicated? Why was IWAN hard to deploy? Why did IWAN fail to capture a lot of market share and ride the wave that eventually became SD-WAN? Looking back, a lot of the choices that eventually doomed IWAN come down to existing technical debt. Cisco is a company that makes design decisions based on what they’ve been doing for a while.

I’m sure that the design criteria for IWAN came down to two points:

  1. It needs to run on IOS.
  2. It needs to be an ISR router.

That doesn’t sound like much. But imagine the constraints you run into with just those two limitations. You have a hardware platform that may not be suited for the kind of work you want to do. Maybe you want to take advantage of x86 chipset acceleration. Too bad. You have to run what’s in the ISR. Which means it could be underpowered. Or incapable of doing things like crypto acceleration for VPNs, which is important for building a mesh of encrypted tunnels. Or maybe you need some flexibility to build a better detection platform for applications. Except you have to use IOS. Which uses NBAR. And anything you write to extend NBAR has to run on their platforms going forward. Which means you need to account for every possible permutation of hardware that IOS runs on. Which is problematic at best.

See how technical debt can creep in from the most simplistic of sources? All we wanted to do was build a platform to connect WANs together easily. Now we’re mired in a years-old hardware choice and an aging software platform that can’t help us do what needs to be done. Is it any wonder why IWAN didn’t succeed in its original form? Or why so many people involved with the first generation of SD-WAN startups were involved with IWAN, even if just tangentially?

Debt-Free Development

Now, let’s look at a startup like CloudGenix, who was a presenter at Networking Field Day 22 and was recently acquired by Palo Alto Networks. They started off on a different path when they founded the startup. They knew what they wanted to accomplish. They had a vision for what would later be called SD-WAN. But instead of shoehorning it into an existing platform, they had the freedom to build what they wanted.

No need to keep the ISR platform? Great. That means you can build on x86 hardware to make your software more universally deployable on a variety of boxes. Speaking of boxes, using commercial off-the-shelf (COTS) equipment means you can buy some very small devices to run the software. You don’t need a system designed to use ATM modules or T1 connections. If all your little system is ever going to use is Ethernet there’s no reason to include expansion at all. Maybe USB for something like a 4G/LTE modem. But those USB ports are baked into the board already.

A little side note here that came from Olivier Huynh Van of Gluware. You know the USB capabilities on a Cisco ISR? Yeah, the ISR chipset didn’t support USB natively. And it’s almost impossible to find USB that isn’t baked into an x86 board today. So Cisco had to add it to the ISR in a way that wasn’t 100% spec-supported. It’s essentially emulated in the OS. Which is why not every USB drive works in an ISR. Take that for what it’s worth.

Back to CloudGenix. Okay, so you have a platform you can build on. And you can build software that can run on any x86 device with Ethernet ports and USB devices. That means your software doesn’t need to do complicated things. It also means there are a lot of methods already out there for programming network operating systems for x86 hardware, such as Intel’s Data Plane Development Kit (DPDK). However CloudGenix chose to build their OS, they didn’t need to build everything completely from scratch. Even if they chose to do it there are still a ton of resources out there to help them get started. Which means you don’t have to restart your development every time you need to add a feature.

Also, the focus on building the functions you want into an OS you can bend to your needs means you don’t need to rely on other teams to build pieces of it. You can build your own GUI. You can make it look however you want. You can also make it operate in a manner that is easiest for your customer base. You don’t need to include every knob or button or bell and whistle. You can expose or hide functions as you wish. Don’t want customers to have tons of control over VPN creation or certificate authentication? You don’t need to worry about the GUI team exposing it without your permission. Simple and easy.

One other benefit of developing on platforms without technical debt? It’s easy to port your software from physical to virtual. CloudGenix was already successful in porting their software from physical hardware to the cloud thanks to CloudBlades. Could you imagine trying to get the original Cisco IWAN running in a cloud package for AWS or Azure? If those hives aren’t going crazy right now I’m sure you must have nerves of steel.


Tom’s Take

Technical debt is no joke. Every decision you make has consequences. And they may not be apparent for this generation of products. People you may never meet may have to live with your decisions as they try to build their vision. Sometimes you can work with those constraints. But more often than not brilliant people are going to jump ship and do it on their own. Not everyone is going to succeed. But for those that have the vision and drive and turn out something that works the rewards are legion. And that’s more than enough to pay off any debts, technical or not.

Fast Friday – Keeping Up With The Times

We’re at the end of the 2010s. It’s almost time to start making posts about 2020 and somehow working vision or eyesight into the theme so you can look just like everyone else. But I want to look back for a moment on how much things have changed for networking in the last ten years.

It’s true that networking wasn’t too exciting for most of the 2000s. Things got faster and more complicated. Nothing really got better except the bottom lines of people pushing bigger hardware. And that’s honestly how we liked it. Because the idea that we were all special people that needed to be at the top of our game to get things done resonated with us. We weren’t just mechanics. We were the automobile designers of the future!

But if there’s something that the mobile revolution of the late 2000s taught us, it was that operators don’t need to be programmers to enjoy using technology. Likewise, enterprise users don’t need to be CCIEs or VCDXs to make things work. That’s the real secret behind all of the advances in networking technology in the 2010s. We’re not making networking harder any more. We’re not adding complexity for the sake of making our lives more important.

The rapid pace of change that we’ve had over the last ten years is the driver for so many technologies that are changing the role of networking engineers. Automation, orchestration, and software-driven networking aren’t just fads. They’re necessities. That’s because of the focus on new features, not in spite of them. I can remember administering CallManager years ago and not realizing what half of the checkboxes on a line appearance did. That’s not helping things.

Complexity Closet

There are those that would say that what we’re doing is just hiding the complexity behind another layer of abstraction, which is a favorite saying of Russ White. I’d argue that we’re not hiding the complexity as much as we’re putting it back where it belongs – out of sight. We don’t need the added complexity for most operations. This flies in the face of what we want networking engineers to know. If you want to be part of the club you have to know every LSA type and how they interact and what happens during an OSPF DR election. That’s the barrier for entry, right?

Except not everyone needs to know that stuff. They need to know what it looks like when routing is down. More likely, they just need to recognize more basic stuff like DNS being down or something being wrong with the service provider. The odds are way better that something else is causing the problem somewhere outside of your control. And that means your skills are good for very little once you’ve figured out that the problem is somewhere you can’t help.

Hiding complexity behind summary screens or simple policy decisions isn’t bad. In fact, it tends to keep people from diving down rabbit holes when fixing the problems. How many times have we tried to figure out some complicated timer issue when it was really a client with a tenuous connection or a DNS issue? We want the problems to be complicated so we can flex our knowledge to others when in fact we should be applying Occam’s Razor much earlier in the process. Instead of trying to find the most complicated solution to the problem so we can justify our learning, we should try to make it as simple as possible and conserve that energy for a time when it’s really needed.


Tom’s Take

We need to leverage the tools that have been developed to make our lives easier in the 2020s instead of railing against them because they’re upsetting our view of how things should be. Maybe networking in 2010 needed complexity and syntax and command lines. But networking in 2022 might need automated scripts and telemetry to figure things out faster since there are ten times the moving parts. It’s like memorizing phone numbers. It works well when you only need to know seven or eight numbers with a few digits each. But when you need to know several hundred, each with ten or more digits, it’s impossible. Yet I still hear people complain about contact lists or phone books because “they used to be good at memorizing numbers”. Instead of focusing on what we used to be good at, let’s try to keep up with the times and be good at what we need to be good at now and in the future.

Magical Mechanics

If you’re a fan of this blog, you’ve probably read my last post about the new SD-WAN magic quadrant that’s been making the rounds and generating discussion. Some people are smiling that this report places Cisco in an area other than leadership in the SD-WAN space. Others are decrying the report as being unfair and contradictory. I wanted to take another look at it given some new information and some additional thoughts on the results.

Fair and Square

The first thing I wanted to do is make sure that I was completely transparent with the way the Gartner Magic Quadrant (MQ) works. I have a very good idea thanks to a conversation with Andrew Lerner (@Fast_Lerner), who is the Research VP of Networking at Gartner. Andrew was nice enough to clarify my understanding of the MQ and accompanying documentation. I’ll quote him here to make sure I don’t get anything wrong:

In an MQ, we assess the overall vendors’ behavior and offering in the market. Product, service/support sales, marketing, innovation, etc. if a vendor has multiple products in a market and sells them regularly to the enterprise, they are part of the MQ assessment. Viable products are not “excluded”.

As you can see from Andrew’s explanation, the MQ takes into account all the aspects of a company. It’s not just a product. It’s the sales, marketing, and other aspects of the company that give the overall score for a company. So how does Gartner evaluate the products and services themselves? That’s where their Critical Capabilities documents come into play. They are focused exclusively on products and services. They don’t take marketing or sales or anything else into account.

According to Andrew, when Gartner did their Critical Capabilities document on Cisco, they looked at Meraki MX and IOS-XE only. Viptela vEdge was not examined. So, the CC documents give us the Gartner overview of the technology behind the MQ analysis. While the CC documents are focused solely on the Meraki MX and IOS-XE SD-WAN technology, they are components of the overall analysis in the MQ that was published.

What does that all mean in the long run?

Axis and Allies

In order to break this down a bit further, let’s ignore the actual quadrants in the MQ for the moment. Instead, let’s think about this picture as a graph. One axis of the graph is “Ability to Execute,” or in other words can the company do what they say they’re going to do? The other axis is “Completeness of Vision.” This is a question of how the company has rounded out their understanding of the market forces and direction. I want to use this sample MQ with some labels thrown in to help my readers understand how each axis can affect the ranking a company gets:

So, let’s look at where Cisco was situated on those graphs. They are not in the upper right part of the graph, which is the “good” part according to what most people will tell you when they glance at it. Cisco was ranked almost directly in the middle of the graph. Why would that be?

Let’s look at the Execution axis. Why would Cisco have some issues with execution on SD-WAN? Well, the biggest one is probably the shift to IOS-XE and the issues with code quality. Almost everyone involved deploying IOS-XE has told me that Cisco had significant issues in the early releases. Daniel Dib (@DanielDibSwe) had a great conversation with me about the breakdown between the router code and the controller code a week ago. Here’s the first tweet in the chain:

So, there were issues that have been addressed. But is the code completely stable right now? I can’t say, since I haven’t deployed it. But ask around and see what the common wisdom is. I would be genuinely interested to hear how much better things have gotten in the last six months. But, those code quality issues from months ago are going to be a concern in the report. And you can’t guarantee that every box is going to be running on the latest code. Having issues with stable code is going to impact your ability to execute on your vision.

Now, let’s look at the Completeness of Vision axis. If you reference the above picture you’ll see that the Challengers square represents companies that don’t yet understand the market direction. Given that this was the location that Cisco was placed in (barely), let’s examine why that might be. Let’s start by asking “which Cisco product is best for my SD-WAN solution?” Do you know for sure? Which one are you going to be offered?

In my last post, I said that Cisco had been deemphasizing vEdge deployments in favor of IOS-XE. But I completely forgot about Meraki as an additional offering for SMBs and smaller deployments. Where does Meraki make the most sense? And how big does your network need to be before it outgrows a Meraki deployment? Are you chasing a feature? Or maybe you need a service that isn’t offered natively on the Meraki platform? All of these questions need to be answered when you look at what you’re going to do with SD-WAN.

The other companies that Cisco will tell you are their biggest competitors are probably VMware and Silver Peak. How many SD-WAN platforms do they sell? How about companies ranked closer to Cisco in the MQ like Citrix, CloudGenix, or Versa Networks? How many SD-WAN solutions do they offer? How about HPE, Juniper and Aryaka?

In almost every case, the answer is “one”. Each of these companies has settled on a single solution for SD-WAN or SD-Branch. They don’t split those deployments across different product lines. They may have different sized boxes for things, but they all run the same common software. Can you integrate an IOS-XE SD-WAN appliance into a Meraki deployment? Can you take a Meraki MX and make it work with vEdge?

You may be starting to see that what’s lacking isn’t Cisco’s vision for SD-WAN but rather the clarity of how they’re going to accomplish it. Rather than having one solution that can be scaled to fit all needs, Cisco is choosing to offer two different solutions for SMBs and enterprises. And if you count vEdge as a separate product from IOS-XE, as Cisco has suggested in some of their internal reports, then you have three products! I’m not saying that Cisco doesn’t have a vision. But it really looks like that vision is hazier than it should be.

If Cisco had a unified vision with stable code that was integrated up and down the stack, I have no doubts they would have been rated higher. When Cisco was deploying Viptela vEdge as their sole SD-WAN offering, it was much easier to figure out how everything was going to integrate together. But, just like all transitions, the devil is in the details here as Cisco tries to move to IOS-XE. Code quality is going to be a potential source of problems no matter what. But if you are staking your reputation on moving everyone to a single code base from a different, more stable one, you had better get it right quickly. Otherwise it’s going to get counted against you.


Tom’s Take

I really appreciate Andrew Lerner for reaching out regarding my analysis of the MQ. I will admit I wasn’t exactly spot on with the differences between the MQ and the Critical Capabilities documents. But the results are close. The CC analyzes Cisco’s big platforms and tells people how they work. The MQ takes everything as a whole and gives Cisco a ranking of where they stand in the market. Sales numbers aside, do you think Cisco should be a leader in the SD-WAN space? Do you feel that where they are today with IOS-XE and Meraki MX is a complete vision? If you do then you likely won’t care about the MQ either way. But for those that have questions about execution and vision, maybe it’s time to figure out what might have caused Cisco to be right in the middle this time around.

SD-WAN Squares and Perplexing Planes

The latest arcane polygon is out in the SD-WAN space. Normally, my fortune telling skills don’t involve geometry. I like to talk to real people about their concerns and their successes. Yes, I know that the gardening people do that too. It’s just that no one really bothers to read their reports and instead makes all their decisions based on boring wall art.

Speaking of which, I’m going to summarize that particular piece of art here. Note this isn’t the same for copyright reasons but close enough for you to get the point:

[Image: a rough sketch of the SD-WAN Magic Quadrant, with Cisco sitting in the lower half of the graphic]

So, if you can’t tell by the colors here, the big news is that Cisco has slipped out of the top Good part of the polygon and is now in the bottom Bad part (denoted by the red) and is in danger of going out of business and being the laughing stock of the networking community. Well, no, not so much that last part. But their implementation has slipped into the lower part of the quadrant where first-stage startups and cash-strapped companies live and wish they could build something.

Cisco released a report rebutting those claims and it talks about how Viptela is a huge part of their deployments and that they have mindshare and marketshare. But after talking with some colleagues in the networking industry and looking a bit deeper into the issue, I think Cisco has the same problem that Boeing had this year. Their assurances are inconsistent with reality.

A WANderful World

When you deploy Cisco SD-WAN, what exactly are you deploying? In the past, the answer was IWAN. IWAN was very much an initial attempt to create SD-WAN. It wasn’t perfect and it was soon surpassed by other technologies, many of which were built by IWAN team members who left to found their own startups. But Cisco quickly realized they needed to jump in the fray and get some help. That’s when they bought Viptela.

Here’s where things start getting murky. The integration of Viptela SD-WAN has been problematic. The initial sales model after the acquisition was to continue to sell Viptela vEdge to the customer. The plan was to first integrate the software onto a Cisco ISR and then fold the Viptela software into IOS-XE. They’ve been selling vEdge boxes for a long while and are supporting them still. But the transition plan to IOS-XE is in full effect. The handwriting on the wall is that soon Cisco will only offer IOS-XE SD-WAN for sale, not vEdge.

Flash forward to 2019. A report is released about the impact and forward outlook for SD-WAN. It has a big analyst name attached to it. In that report, they leave out the references to vEdge and instead grade Cisco on their IOS-XE offering which isn’t as mature as vEdge. Or as deployed as vEdge. Or, as stated by even Cisco, as stable as vEdge, at least at first. That means that Cisco is getting graded on their newest offering. Which, for Cisco, means they can’t talk about how widely deployed and stable vEdge is. Why would this particular analyst firm do that?

Same is Same

The common wisdom is that Gartner graded Cisco this way because Cisco’s own sales message is that IOS-XE is the way and the future of SD-WAN at Cisco. Why talk about what came before when the new hot thing is going to be IOS-XE? Don’t buy this old vEdge when this new ISR is the best thing ever.

Don’t believe me? Go call your Cisco account team and try to buy a vEdge. I bet you get “upsold” to an ISR. The path forward is to let vEdge fade away. So it only makes sense that you should be grading Cisco on what their plans are, not on what they’ve already sold. I’ll admit that I don’t put together analyst reports with graphics very often, so maybe my thinking is flawed. But if I call your company and they won’t sell me the product that you want me to grade you on, I’m going to guess I shouldn’t grade you on it.

So Cisco is mad that Gartner isn’t grading them on their old solution and is instead looking at their new solution in a vacuum. And since they can’t draw on the sales numbers or the stability of their existing solution that you aren’t going to be able to buy without a fight, they slipped down in the square to a place that doesn’t show them as the 800-lb gorilla. Where have I heard something like this before?

Plane to See

The situation that immediately sprung to mind was the issue with Boeing’s 737-MAX airliner. In a nutshell, Boeing introduced a new airliner with a different engine and configuration that changed its flight profile. Rather than try to get the airframe recertified, which could take months or years, they claimed it was the same as the old one and just updated training manuals. They also didn’t disclose there was a new software program that tried to override the new flight characteristics and that particular “flaw” caused the tragic crashes of two flights full of people.

I’m not trying to compare analyst reports to tragedies in any way. But I do find it curious that companies want their new stuff to be treated just like their old stuff with regards to approvals and regulation and analysis. They know that the new product is going to have issues and concerns but they don’t want you to count those because remember how awesome our old thing was?

Likewise, they don’t want you to count the old problems against them. Like the Pinto or the Edsel from Ford, they don’t want you to think their old issues should matter. Just look at how far we’ve come! But that’s not how these things work. We can’t take the old product into account when trying to figure out how the new one works. We can’t assume the new solution is the same as the old one without testing, no matter how much people would like us to do that. It’s like claiming the Space Shuttle was as good as the Saturn V rocket because it went into space and came from NASA.

If your platform has bugs in the first version, those count against you too. You have to work it all out and realize people are going to remember that instability when they grade your product going forward. Remember that common wisdom says not to install an operating system update until the first service patch. You have to consider that reputation every time you release a patch or an update.


Tom’s Take

The SD-WAN MQ tells me that Cisco isn’t getting any more favors from the analysts. The marketing message not lining up with the install base is at the heart of a transition away from hardware and software that Cisco doesn’t own. The hope that they could just smile and shrug their shoulders and hope that no one caught on has been dashed. Instead, Cisco now has to realize they’re going to have to earn that spot back through good code releases and consistent support and licensing. No one is going to give them any slack with SD-WAN like they have with switches and unified communications. If Cisco thinks that they’re just going to be able to bluff their way through this product transition, that idea just won’t fly.

Fast Friday Thoughts – Networking Field Day 21

This week has been completely full at Networking Field Day 21 with lots of great presentations. As usual, I wanted to throw out some quick thoughts that fit more with some observations and also to set up some topics for deeper discussion at a later time.

  • SD-WAN isn’t just a thing for branches. It’s an application-focused solution now. We’ve moved past the technical reasons for implementation, finally moved the needle on “getting rid of MPLS”, and instead realized that the quality of service and other aspects of the software allow us to do more. That’s especially true for cloud initiatives. If you can guarantee QoS through your SD-WAN setup you’re already light years ahead of where we’ve been.
  • Automation isn’t just figuring out how to make hardware and software do things really fast. It’s also about helping humans understand which things need to be done fast and automatically and which things don’t. For all the amazing stuff that you can do with scripting and orchestration there are still tasks that should be done manually. And there are plenty of people problems in the process. Really smart companies that want to solve these issues should stop focusing on using technology to eliminate the people and instead learn how to integrate the people into the process.
  • If you’re hoping to own the entire stack from top to bottom, you’re going to end up disappointed. Customers aren’t looking for a mediocre solution from a single vendor. They’re not even looking for a great solution from a couple. They want the best, and they’re not afraid to go wherever they need to go to get it. And when you realize that more and more engineering and architecture talent is already focused on integration, you will see that they don’t have any issues making your solution work with someone else’s instead of counting on you to integrate your platforms together at some point in the future.

Tom’s Take

Networking Field Day is always a great time for me to talk to some of the best people in the world and hear from some of the greatest companies out there about what’s coming down the pipeline. The world of networking is changing even more than it has in the last few software-defined years.

Positioning Policy Properly

Who owns the network policy for your organization? How about the security policy? Identity policy? Sound like easy questions, don’t they? The first two are pretty standard. The last generally comes down to one or two different teams depending upon how much Active Directory you have deployed. But have you ever really thought about why?

During Future:NET this week, those poll questions were asked to an audience of advanced networking community members. The answers pretty much fell in line with what I was expecting to see. But then I started to wonder about the reasons behind those decisions. And I realized that in a world full of cloud and DevOps/SecOps/OpsOps people, we need to get away from hardware teams owning policy and instead have policy owned by a dedicated team.

Specters of the Past

Where does the networking policy live? Most people will jump right in with a list of networking gear. Port profiles live on switches. Routing tables live on routers. Networking policy is executed in hardware. Even if the policy is programmed somewhere else.

What about security policy? Firewalls are probably the first thing that come to mind. More advanced organizations have a ton of software that scans for security issues. Those policy decisions are dictated by teams that understand the way their tools work. You don’t want someone that doesn’t know how traffic flows through a firewall to be trying to manage that device, right?

Let’s consider the identity question. For a multitude of years the identity policy has been owned by the Active Directory (AD) admins. Because identity was always closely tied to the server directory system. Novell (now NetIQ) eDirectory and Microsoft AD were the kings of the hill when it came to identity. Today’s world has so much distributed identity that it’s been handed to the security teams to help manage. AD doesn’t control the VPN concentrator or the cloud-enabled email services all the time. There are identity products specifically designed to aggregate all this information and manage it.

But let’s take a step back and ask that important question: why? Why is it that the ownership of a policy must be by a hardware team? Why must the implementors of policy be the owners? The answer is generally that they are the best arbiters of how to implement those policies. The network teams know how to translate applications into ports. Security teams know how to create firewall rules to implement connection needs. But are they really the best people to do this?

Look at modern policy tools designed to “simplify” networking. I’ll use Cisco ACI as an example but VMware NSX certainly qualifies as well. At a very high level, these tools take into account the needs of applications to create connectivity between software and hardware. You create a policy that allows a database to talk to a front-end server, for example. That policy knows what connections need to happen to get through the firewall and also how to handle redundancy to other members of the cluster. The policy is then implemented automatically in the network by ACI or NSX and magically no one needs to touch anything. The hardware just works because policy automatically does the heavy lifting.
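
To make the idea concrete, here’s a purely hypothetical sketch of what that kind of intent might look like as data. This is not ACI’s or NSX’s actual schema, just an illustration of the shape of the thing a controller renders into contracts, firewall rules, and routes.

```python
# Purely hypothetical intent definition -- not the real ACI or NSX schema.
# The point: the policy captures *what* should talk to what, and the
# controller figures out *where* to implement it in the hardware.
app_policy = {
    "application": "storefront",
    "tiers": {
        "web": {"members": "tag:storefront-web"},
        "db":  {"members": "tag:storefront-db", "redundancy": "cluster"},
    },
    "contracts": [
        # The front-end tier may reach the database tier on TCP/5432.
        {"from": "web", "to": "db", "protocol": "tcp", "port": 5432},
    ],
}

# A controller consumes this once and pushes the equivalent switch, router,
# and firewall configuration everywhere it's needed -- nobody logs into
# individual boxes to implement it.
```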

So let’s step back for a moment and discuss this. Why does the networking team need to operate ACI or NSX? Sure, it’s because those devices still touch hardware at some point like switches or routers. But we’ve abstracted the need for anyone to actually connect to a single box or a series of boxes and type in lines of configuration that implement the policy. Why does it need to be owned by that team? You might say something about troubleshooting. That’s a common argument: whoever needs to fix it when it breaks is who needs to be the gatekeeper implementing it. But why? Is a network engineer really going to SSH into every switch and correct a bad application tag? Or is that same engineer just going to log into a web console and fix the tag once and propagate that info across the domain?

Ownership of policy isn’t about troubleshooting. It’s about territory. The tug-of-war to classify a device when it needs to be configured is all about collecting and consolidating power in an organization. If I’m the gatekeeper of implementing workloads then you need to pay tribute to me in order to make things happen.

If you don’t believe that, ask yourself this: If there was a Routing team and a Switching team in an organization, who would own the routed SVI interface on a layer 3 switch? The switching team has rights because it’s on their box. The routing team should own it because it’s a layer 3 construct. Both are going to claim it. And both are going to fight over it. And those are teams that do essentially the same job. When you start pulling in the security team or the storage team or the virtualization team you can see how this spirals out of control.

Vision of the Future

Let’s change the argument. Instead of assigning policy to the proper hardware team, let’s create a team of people focused on policy. Let’s make sure we have proper representation from every hardware stack: Networking, Security, Storage, and Virtualization. Everyone brings their expertise to the team for the purpose of making policy interactions better.

Now, when someone needs to roll out a new application, the policy team owns that decision tree. The Policy Team can have a meeting about which hardware is affected. Maybe we need to touch the firewall, two routers, a switch, and perhaps a SAN somewhere along the way. The team can coordinate the policy changes and propose an implementation plan. If there is a construct like ACI or NSX to automate that deployment then that’s the end of it. The policy is implemented and everything is good. Perhaps some older hardware exists that needs manual configuration of the policy. The Policy Team then contacts the hardware owner to implement the policy needs on those devices. But the Policy Team still owns that policy decision. The hardware team is just working to fulfill a request.

Extend the metaphor past hardware now. Who owns the AWS network when your workloads move to the cloud? Is it still the networking team? They’re the best team to own the network, right? Except there are no switches or routers. They’re all software as far as the instance is concerned. Does that mean your cloud team is now your networking team as well? Moving to the cloud muddies the waters immensely.

Let’s step back into the discussion about the Policy Team. Because they own the policy decisions, they also own that policy when it changes hardware or location. If those workloads for email or a productivity suite move from on-prem to the cloud then the policy team moves right along with them. Maybe they add a public cloud person to the team to help them interface with AWS but they still own everything. That way, there is no argument about who owns what.

The other beautiful thing about this Policy Team concept is that it also allows the rest of the hardware to behave as a utility in your environment. Because the teams that operate networking or security or storage are just fulfilling requests from the policy team they don’t need to worry about anything other than making their hardware work. They don’t need to get bogged down in policy arguments and territorial disputes. They work on their stuff and everyone stays happy!


Tom’s Take

I know it’s a bit of a stretch to think about pulling all of the policy decisions out of the hardware teams and into a separate team. But as we start automating and streamlining the processes we use to implement application policy, the need for it to be owned by a particular hardware team is hard to justify. Cutting down on cross-faction warfare over who gets to be the one to manage the new application policy means enhanced productivity and reduced tension in the workplace. And that can only lead to happy users in the long run. And that’s a policy worth implementing.

The Development of DevNet’s Future

You’re probably familiar with Cisco DevNet. If not, DevNet is the place where Cisco has embraced outreach to the developer community building for software-defined networking (SDN). Though initially cautious in getting into the software developer community, Cisco has embraced their new role and really opened up to help networking professionals embrace the new software normal in networking. But where is DevNet going to go from here?

Humble Beginnings

DevNet wasn’t always the darling of Cisco’s offerings. I can remember sitting in on some of the first discussions around Cisco onePK and thinking to myself, “This is never going to work.”

My hesitation with Cisco’s first attempts to focus on software platforms came from two places. The first was what I saw as Cisco trying to figure out how to extend the platforms to include some programmability. It was more about saying they could do software and less about making that software easy to use or program against. The second was the lack of a place to store all of this software knowledge. Programmers and developers are a fickle lot and you have to have a repository where they can get access to the pieces they need.

DevNet was the place that Cisco should have built from the start. It was a way to get people excited and involved in the process. But it wasn’t for everyone at first. If you didn’t speak developer you were going to feel lost, even if you were completely fluent in networking and knew what you wanted to accomplish, just not how to get there. DevNet started off as the place to let the curious learn how to combine networking and programming.

The Ascent

DevNet really came into their own about 3 years ago. I use that timeline because that’s when I first heard that people were wanting to spend more time at Cisco Live in the DevNet Zone learning programming and other techniques and less time in traditional sessions. Considering the long history of Cisco Live that’s an impressive feat.

More importantly, DevNet changed the conversation for professionals. Instead of just being for the curious, DevNet became a place where anyone could go and find the information they needed. It became a resource. Not just a playground. Instead of poking around and playing with things it became a place to go and figure things out. Or a place to learn more about a new technology that you wanted to implement, like automation. If the regular sessions at Cisco Live were what you had to learn, DevNet was where you wanted to go to learn.

Susie Wee (@SusieWee) deserves all the credit in the world here. She has seen what the developer community needs to thrive inside of Cisco and she’s delivered it. She’s the kind of ambassador that can go between the various Cisco business units (BUs) and foster the kind of attitudes that people need to have to succeed. It’s no longer about turf wars or fiefdoms. Instead, it’s about leveraging a common platform for developers and networkers alike to find a common ground to build from. But even that’s not enough to complete the vision.

Narrow of Purpose, Wide of Vision

During Cisco Live 2019, I talked quite a bit with Susie and her team. And one of the things that struck me from our conversations was not how DevNet was an open and amazing place. Or how they were adding sessions as fast as they could find instructors. It was that so many people weren’t taking advantage of it. That’s when I realized that DevNet needs to shift their focus. Instead of just providing a place for networking people to learn, they’re going to have to go on the offensive.

DevNet needs to enhance and increase their outreach programs. Being a static resource is fine when your audience is eager to learn and looking for answers. But those people have already flocked to the DevNet banner. For things to grow, DevNet needs to pull the laggards along. The people who think automation is just a fad. Or that SDN is in the Trough of Disillusionment from a pretty Gartner graphic. DevNet has momentum, and soon will have the certification program needed to help networking people show off their transformation to developers.


Tom’s Take

For DevNet to really succeed, they need to be grabbing people by the collar and dragging them to the new reality of networking. It’s not enough to give people a place to do research on nice-to-have projects. You’re going to have to get the people engaged and motivated. That means committing resources to entry-level outreach. Maybe even building a DevNet Academy similar to the Cisco Academy. But it has to happen. Because the people that aren’t already at DevNet aren’t going to get there on their own. They need a push (or a pull) to find out what they don’t know that they don’t know.

OpenConfig and Wi-Fi – The Winning Combo

Wireless isn’t easy by any stretch of the imagination. Most people fixate on the spectrum analysis part of the equation when they think about how hard wireless is. But there are many other moving parts in the whole architecture that make it difficult to manage and maintain. Not the least of which is how the devices talk to each other.

This week at Aruba Atmosphere 2019, I had the opportunity to moderate a panel of wireless and security experts for Mobility Field Day Exclusive. It was a fun discussion, as you can see from the above video. As the moderator, I didn’t really get a chance to explain my thoughts on OpenConfig, but I figured now would be a great time to jump in with some color on my side of the conversation.

Yin and YANG

One of the most exciting ideas behind OpenConfig for wireless people should be the common YANG data models. This means that you can use NETCONF to program against the same YANG models regardless of vendor. That means no more fumbling around to remember esoteric commands. You just tell the system what you want it to do and the rest is easy.

As outlined in the video, this has a huge impact on trying to keep configurations standard across different types of APs. Imagine the nightmare of trying to configure power settings or radio thresholds with 3 or more AP manufacturers in your building. Now, imagine being able to do it across your building or dozens of others with a few simple commands and some programming know-how. Doesn’t seem quite as daunting now, does it? It’s easy because you’re speaking the same language across all those APs.
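
As a rough sketch of what that looks like in practice, here’s how you might push the same radio setting to every controller over NETCONF with a tool like ncclient. The hostnames, credentials, and the exact OpenConfig Wi-Fi paths and leaf names are illustrative assumptions, not any vendor’s documented interface, but the shape of the workflow is the point: one model, one payload, many boxes.

```python
# Minimal sketch, not production code. Assumes the APs (or their controllers)
# speak NETCONF and implement an OpenConfig-style Wi-Fi model. Hostnames,
# credentials, and the exact model paths below are illustrative assumptions.
from ncclient import manager

RADIO_CONFIG = """
<config>
  <access-points xmlns="http://openconfig.net/yang/wifi/access-points">
    <access-point>
      <radios>
        <radio>
          <id>1</id>
          <config>
            <operating-frequency>FREQ_5GHZ</operating-frequency>
            <transmit-power>14</transmit-power>
          </config>
        </radio>
      </radios>
    </access-point>
  </access-points>
</config>
"""

CONTROLLERS = ["ap-ctrl-1.example.com", "ap-ctrl-2.example.com"]  # placeholders

for host in CONTROLLERS:
    # Same payload, same call, no matter who built the AP underneath.
    with manager.connect(host=host, port=830, username="netops",
                         password="secret", hostkey_verify=False) as conn:
        # Some platforms want target="candidate" plus a commit() instead.
        conn.edit_config(target="running", config=RADIO_CONFIG)
        print(f"Pushed radio policy to {host}")
```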

So, what if you don’t care, like Richard McIntosh (@802TopHat) points out? What if your vendor doesn’t support OpenConfig? Well, that’s fine. Not everyone has to support it today. But if you work on building a model system and setting up the automation and API interfaces, are you just going to throw it out the window during your refresh cycle because the new APs that you’re buying don’t support OpenConfig? Or is the need for OpenConfig going to be a huge driver for you and part of the selection process?

Companies are motivated by their customers. If you tell them that they need to develop OpenConfig for their devices, they will do it because they run the risk of losing sales. And if the industry moves toward adopting a standard northbound API, what happens to those that get left out in the cold after a few missed refresh cycles? I bet they’ll quickly realize that the lost opportunities more than cover the development costs of supporting OpenConfig.

Telemetry Short-Cuts

The other big piece of OpenConfig and wireless is telemetry. SNMP-based monitoring doesn’t work well in today’s wired networks and it’s downright broken in wireless. There are too many variables out there in the average deployment to be able to account for them with anything other than telemetry. Many vendors are starting to adopt the idea of streaming the data directly to collectors via a subscription model. OpenConfig makes this easy with the ability to subscribe to the data you want using OpenConfig models.

From a manufacturer perspective, this is a huge chance to get into telemetry and offer it as a differentiator. If you’re not tied to using an archaic platform with proprietary data models you can embrace OpenConfig and deliver a modern telemetry infrastructure that users will want to adopt. And if the radio performance is the same between all of the offerings, telemetry could be the piece that tips the scales in your favor.
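
Here’s a small sketch of what that subscription model can look like from the collector side, using a gNMI client library such as pygnmi. The target, credentials, and the sensor path are assumptions for illustration; the real paths would come from whichever OpenConfig Wi-Fi models the vendor implements.

```python
# Minimal sketch of model-driven streaming telemetry. Assumes the AP or its
# controller exposes gNMI and an OpenConfig-style Wi-Fi state tree. The
# target, credentials, and sensor path are illustrative assumptions.
from pygnmi.client import gNMIclient

subscription = {
    "subscription": [
        {
            # Hypothetical path: per-radio operational state (channel, power, utilization).
            "path": "access-points/access-point/radios/radio/state",
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # nanoseconds, so every 10 seconds
        }
    ],
    "mode": "stream",
    "encoding": "json",
}

with gNMIclient(target=("ap-ctrl-1.example.com", 57400),
                username="netops", password="secret", insecure=True) as gc:
    # The device streams updates to us -- no SNMP polling loop required.
    for update in gc.subscribe2(subscribe=subscription):
        print(update)  # hand this off to your collector or time-series database
```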

Single-Vendor Isn’t So Single

I remember doing a deployment for a wireless system once that was “state of the art” when we put it in. I had my challenges, but I made everything work well and the customer was happy. Until a month later, when the supporting vendor announced they were buying a competing company and using that company as their offering going forward. My customer was upset, naturally, but so was I. I spent a lot of time working out how to build and deploy that system and now it was on the scrap heap.

It’s even worse when you keep buying from a single vendor and suddenly find that the new products don’t quite conform to the same syntax or capabilities. Maybe the new model of router or AP has a new board that is 95% compatible with the old one, except for that one command you use all the time.

OpenConfig can change that. Because the capabilities of the device have to be outlined you can easily figure out if there are any missing parts and pieces. You can also be sure that your provisioning scripts and programs aren’t going to break or cause problems because a critical command was deprecated. And since you can keep the models around for old hardware as well as new you aren’t going to be duplicating efforts everywhere trying to keep things moving between the platforms.
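
In practice that check can be a few lines of code. Here’s a small sketch, again over NETCONF with ncclient, that asks each box which models it advertises before you trust your provisioning scripts to it; the hostnames, credentials, and model names are placeholders.

```python
# Small sketch: ask each device which YANG models it actually advertises so a
# refresh never silently drops a model your provisioning scripts depend on.
# Hostnames, credentials, and the model names are placeholders.
from ncclient import manager

REQUIRED_MODELS = ["openconfig-access-points", "openconfig-wifi-mac"]  # illustrative

def missing_models(host):
    with manager.connect(host=host, port=830, username="netops",
                         password="secret", hostkey_verify=False) as conn:
        # Each advertised capability URI names a module the device supports.
        advertised = "\n".join(conn.server_capabilities)
    return [model for model in REQUIRED_MODELS if model not in advertised]

for host in ["old-ap.example.com", "new-ap.example.com"]:
    gaps = missing_models(host)
    print(f"{host}: {'OK' if not gaps else 'missing ' + ', '.join(gaps)}")
```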


Tom’s Take

OpenConfig is a great idea for any system that has distributed components. Even if it never takes off in Wi-Fi, it will force the manufacturers to do a bit better job of documenting their capabilities and making them easy to consume via API. And anything that exposes more functionality to be consumed by automation and programmability is a good thing indeed.

The Long And Winding Network Road

How do you see your network? Odds are good it looks like a big collection of devices and protocols that you use to connect everything. It doesn’t matter what those devices are. They’re just another source of packets that you have to deal with. Sometimes those devices are more needy than others. Maybe it’s a phone server that needs QoS. Or a storage device that needs a dedicated transport to guarantee that nothing is lost.

But what does the network look like to those developers?

Work Is A Highway

When is the last time you thought about how the network looks to people? Here’s a thought exercise for you:

Think about a highway. Think about all the engineering that goes into building a highway. How many companies are involved in building it. How many resources are required. Now, think of that every time you want to go to the store.

It’s a bit overwhelming. There are dozens, if not hundreds, of companies that are dedicated to building highways and other surface streets. Perhaps they are architects or construction crews or even just maintenance workers. But all of them have a function. All for the sake of letting us drive on roads to get places. To us, the road isn’t the primary thing. It’s just a way to go somewhere that we want to be. In fact, the only time we really notice the road is when it is in disrepair or under construction. We only see the road when it impacts our ability to do the things it enables.

Now, think about the network. Networking professionals spend their entire careers building bigger, faster networks. We take weeks to decide how best to handle routing decisions or agonize over which routing protocols are necessary to make things work the way we want. We make it our mission to build something that will stand the test of time and be the Eighth Wonder of the World. At least until it’s time to refresh it again for slightly faster hardware.

The difference between these two examples is the way that the creators approach their creation. Highway workers may be proud of their creation but they don’t spend hours each day extolling the virtues of the asphalt they used or the creative way they routed a particular curve. They don’t demand respect from drivers every time someone jumps on the highway to go to the department store.

Networking people have a visibility problem. They’re too close to their creation to have the vision to understand that it’s just another road to developers. Developers spend all their time worrying about memory allocation and UI design. They don’t care if the network actually works at 10GbE or 100GbE. They want a service that transports packets back and forth to their destination.

The Old New Network

We’ve had discussions in the last few years about everything under the sun that is designed to make networking easier. VXLAN, NFV, Service Mesh, Analytics, ZTP, and on and on. But these things don’t make networking easier for users. They make networking easier for networking professionals. All of these constructs are really just designed to help us do our jobs a little faster and make things work just a little bit better than they did before.

Imagine all the work that goes into electrical power generation. Imagine the level of automation and monitoring necessary to make sure that power gets from the generation point to your house. It’s impressive. And yet, you don’t know anything about it. It’s all hidden away somewhere unimpressive. You don’t need to describe the operation of a transformer to be able to plug in a toaster. And no matter how much that technology changes it doesn’t impact your life until the power goes out.

Networking needs to be a utility. It needs to move away from the old methods of worrying about how we create VLANs and routing protocols and instead needs to focus on disappearing just like the power grid. We should be proud of what we build. But we shouldn’t let that pride turn into hubris about what we do. Networking professionals are like highway workers or electrical company employees. We work hard behind the scenes to provide transport for services. The cloud has changed the way we look at the destination for those services. And it’s high time we examine our role in things as well.


Tom’s Take

Telco workers. COBOL programmers. Data entry specialists. All of these people used to be the kings and queens of their field. They were the people with the respect of hundreds. They were the gatekeepers because their technology and job roles were critical. Until they weren’t any more. Networking is getting there quickly. We’ve been so focused on making our job easy to do that we’ve missed the point. We need to be invisible. Just like a well built road or a functioning electrical grid. We are not the goal of infrastructure. We’re just a part of it. And the sooner we realize that and get out of our own way, the sooner we’re going to find that the world is a much better place for everyone involved in IT, from developers to users.