Are We Seeing SD-WAN Washing?

You may have seen a tweet from me last week referencing a news story that Fortinet was now in the SD-WAN market:

It came as a shock to me because Fortinet wasn’t even on my radar as an SD-WAN vendor. I knew they were doing brisk business in the firewall and security space, but SD-WAN? What does it really mean?

SD Boxes

Fortinet’s claim to be a player in the SD-WAN space brings the number of vendors doing SD-WAN to well over 50. That’s a lot of players. But how did they come out of left field to land a deal rumored to be over a million dollars in a space they weren’t even really playing in six months ago?

Fortinet makes edge firewalls. They make decent edge firewalls. When I used to work for a VAR we used them quite a bit. We even used their smaller units as remote appliances to allow us to connect to remote networks and do managed maintenance services. At no time during that whole engagement did I ever consider them to be anything other than a firewall.

Fast forward to 2018. Fortinet is still selling firewalls. Their website still focuses on security as the primary driver for their lines of business. They do talk about SD-WAN and have a section for it with links to whitepapers going all the way back to May. They even have a contributed article for SDxCentral back in February. However, going back that far, the article reads more like a security company saying their secure endpoints could be considered SD-WAN.

This reminds me of stories of Oracle counting database licenses as cloud licenses so they could claim to be the fourth largest cloud provider. Or of a company suddenly deciding that every box it sold counted as an IPS because it had a function that could be enabled for a fee. The numbers look great when you start counting them creatively, but they’re almost always a bit of a fib.

Part Time Job

Imagine if Cisco suddenly decided to start counting ASA firewalls as container engines because of a software update that allowed you to run Kubernetes on the box. People would lose their minds. Because no one buys an ASA to run containers. So for a company like Cisco to count them as part of a container deployment would be absurd.

The same can be said for any company that has a line of business focused on one specific area and then suddenly decides that the same line of business can be double-counted for a new emerging market. It may very well be the case that Fortinet has a huge deployment of SD-WAN devices that customers are very happy with. But if those edge devices were originally sold as firewalls or UTM devices that just so happened to be able to run SD-WAN software, it shouldn’t really count, should it? If a customer thought they were buying a firewall, they wouldn’t really believe it was actually an SD-WAN router.

The problem with this math is that everything gets inflated. Maybe those SD-WAN edge devices are dedicated. But if they run Fortinet’s security suite, are they also being counted in the UTM numbers? Is Cisco going to start counting every ISR sold in the last five years as a Viptela deployment after the news this week that Viptela software can run on all of them? Where exactly are we going to draw the line? Is it fair to say that every x86 chip sold in the last 10 years should count toward a VMware license because you could conceivably run a hypervisor on it? It sounds ridiculous when you put it like that, but only because of the timelines involved. Crazier ideas have been put forward in the past.

The only way this whole thing really works is if the devices are dedicated to their function and are only counted for the purpose they were installed and configured for. You shouldn’t get to add a UTM firewall to both the security side and the SD-WAN side. Cisco routers should count as either traditional layer 3 or SD-WAN, not both. If you push the envelope to put up big numbers designed to wow potential customers and get a seat at the big table, you need to be ready to defend those numbers when people ask tough questions about the math behind them.


Tom’s Take

If you had told me last year that Fortinet would sell a million dollars worth of SD-WAN in one deal, I’d ask you who they bought to get that expertise. Today, it appears they are content with saying their UTM boxes with a central controller count as SD-WAN. I’d love to put them up against Viptela or VeloCloud or even CloudGenix and see what kind of advanced feature sets they produce. If it’s merely a WAN aggregation box with some central control and a security suite I don’t think it’s fair to call it true SD-WAN. Just a rinse and repeat of some washed up marketing ideas.


Cisco and the Two-Factor Two-Step

In case you missed the news, Cisco announced yesterday that they are buying Duo Security. This is a great move on Cisco’s part. They need to beef up their security portfolio to compete against not only Palo Alto Networks but also against all the up-and-coming startups that are trying to solve problems that are largely being ignored by large enterprise security vendors. But how does an authentication vendor help Cisco?

Who Are You?

The world relies on passwords to run. Banks, email, and even your mobile device all have some kind of passcode. We memorize them, write them down, or sometimes just use a password manager (like 1Password) to keep them safe. But passwords can be guessed. Trivial passwords are especially vulnerable. And when you factor in things like rainbow tables, it gets even scarier.
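
As a rough aside on why rainbow tables are so scary for unsalted password hashes, here’s a minimal Python sketch of salted hashing. The passwords, salt size, and iteration count are made-up example values, not anything from a real system:

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Derive a salted hash with PBKDF2. A unique random salt per user
        defeats precomputed rainbow tables, since an attacker would need a
        separate table for every possible salt value."""
        if salt is None:
            salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, expected):
        """Recompute the hash and compare in constant time."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)

    # Store (salt, digest), never the plaintext password.
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("hunter2", salt, stored))                       # False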

The most secure systems require you to have some additional form of authentication. You may have heard this termed Two-Factor Authentication (2FA). 2FA makes sure that no one is just going to be able to guess your password. The most commonly accepted forms of multi-factor authentication are:

  • Something You Know – Password, PIN, etc
  • Something You Have – Credit Card, Auth token, etc
  • Something You Are – Biometrics

You need at least two of these in order to successfully log into a system. Not having an additional form means you’re locked out. And that also means that the individual components of the scheme are useless in isolation. Knowing someone’s password without having their security token means little. Stealing a token without having their fingerprint is worthless.

But people are getting more and more sophisticated with their attacks. One of the most popular forms of 2FA is SMS authentication. It combines Something You Know, in this case the password for your account, with Something You Have, which is a phone capable of receiving an SMS text message. When you log in, the authentication system sends an SMS to the authorized number and you have to type in the short-lived code to get into the system.

Ask Reddit how that worked out for them recently. A hacker (or group) was able to intercept the 2FA SMS codes for certain accounts and use both factors to log in and gather account data. It’s actually not as trivial as one might think to intercept SMS codes. It’s much, much harder to crack the algorithm of something like a security token. You’d need access to the source code and months to download everything. Like exactly what happened in 2011 to RSA.

In order for 2FA to work effectively, it needs to be something like an app on your mobile device that can be updated and changed when necessary to validate new algorithms and expire old credentials. It needs to be modern. It needs to be something that people don’t think twice about. That’s what Duo Security is all about. And, judging from their customer base and the fact that Cisco paid about $2.3 billion for them, they must do it well.
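
Duo itself is largely push-based and proprietary, so this is not how Duo works under the hood. But as a sketch of the kind of short-lived, algorithm-driven codes an authenticator app produces, here is a minimal TOTP (RFC 6238) example in Python; the shared secret is a made-up value for illustration only:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Generate a time-based one-time password (RFC 6238). The server and
        the app share the secret; the code rolls over every `interval` seconds,
        so a stolen code is only useful for a very short window."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Hypothetical shared secret for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))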

Won’t Get Fooled Again

How does Duo help Cisco? Well, first and foremost I hope that Duo puts an end to telnet access to routers forever. Telnet is the lazy way we enable remote access to devices. SSH is ten times better and a thousand times more secure. But setting it up properly to authenticate with certificates is a huge pain. People want it to work when they need it to work. And tying it to a specific machine or location isn’t the easiest or most convenient thing.

Duo can give Cisco the ability to introduce real 2FA login security to their devices. IOS could be modified to require Duo Security app login authentication. That means that only users authorized to log into that device would get the login codes. No more guessed remote passwords!

Think about integrating Duo with Cisco ISE. That could be a huge boon for systems that need additional security. You could have groups of systems that need 2FA and others that don’t. You could easily manage those lists and move systems in and out as needed. Or you could start with a policy that all systems need 2FA and phase in the requirements over time to make people understand how important it is and give them time to download the app and get it set up. The ISE possibilities are endless.

One caveat is that Duo works with a large number of third-party programs right now, including integrations with Juniper Networks. As you can imagine, that list might change once Cisco takes control of the company. Some organizations that use Duo will probably see a price increase but will continue to offer the service to their users. Others, possibly Juniper as an example, may be frozen out as Cisco tries to keep the best parts of the company for their own use. If Cisco is smart, they’ll keep Duo available for any third party that wants to use or integrate with the platform. It’s the best solution out there for solving this problem, and everyone deserves to have good security.


Tom’s Take

Cisco buying a security company is no shock. They need the horsepower to compete in a world where firewalls are impediments at best and hackers have long since figured out how to get around static defenses. They need to get involved in software too. Security isn’t fought in silicon any more. It’s all in code and beefing up the software side of the equation. Duo gives them a component to compete in the broader authentication market. And the acquisition strategy is straight out of the Chambers playbook.

A plea to Cisco: Don’t lock everyone out of the best parts of Duo because you want to bundle them with recurring Cisco software revenue. Let people integrate. Take a page from the Samsung playbook. Just because you compete with Apple doesn’t mean you can’t make chips for them. Keep your competitors close and make them use your software, and you’ll make more money than by freezing everyone out and claiming your software is the best and least used of the bunch.

It’s About Time and Project Management

I stumbled across a Reddit thread today from /u/Magician_Hiker that posed a question I’ve always found fascinating. When we work on projects, it always seems like there is a disconnect between the project management team and the engineering team doing the work. The statement posted at the top of this thread is as follows:

Project Managers only plan for when things go right.

Engineers always plan for when things go wrong.

How did we get here? And can anything be done about it?

Projecting Management

I’ve had a turn or two at project management. I got my Project+ many years back, and even more years before that I had to learn all about project management in college. The science behind project management is storied and deep. The idea of having someone assigned to keep things running on task and making sure all the little details get taken care of is a huge boon as the size of projects grow.

As an engineer, can you imagine trying to juggle three different installations across 5 different sites that all need to be coordinated together? Can you think about the effort needed to make sure that everything works together and is done on time? The thought alone probably gives you hives.

Project managers are capable of juggling a lot of things as part of their job. That means keeping all the dishes cooking at the same time and making sure that everything is done in time for dinner. It also means that people need to know about timelines and how those timelines intersect and can impact the execution of multiple phases of a project. Sure, it’s easy to figure out that we can’t start installing the equipment until it arrives on the dock. But how about coordinating the installers to be on-site on the right day knowing that the company is drop shipping the equipment to three different receiving docks? That’s a bit harder.

Project managers need to know timelines for things because they have to juggle everything together. If you’ve ever had the misfortune to need to use a Gantt chart, you’ll know what I’m talking about. These little jewels have all the timeline needs of a project visualized for everyone to figure out how to make things happen. Stable timelines are key to a project. Estimates need to make sense. You can’t just spitball something and hope it works. If part of your project timeline is off in either direction, you’re going to get messed up further down the line.

Predictability

Project timelines need to be consistent. Most people try to err on the side of caution when trying to make them work. They fudge the numbers and pad things out a bit so that everything will work out in the end. Even if that means that there may be a few hours when someone is sitting around with nothing to do.
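
For what it’s worth, there are more structured ways to pad an estimate than gut feel. One common technique (not necessarily what any project manager in this story used) is the PERT three-point estimate, sketched here in Python with made-up task numbers:

    def pert_estimate(optimistic, most_likely, pessimistic):
        """Classic PERT three-point estimate: weight the most likely case,
        but let the best and worst cases pull the number around."""
        return (optimistic + 4 * most_likely + pessimistic) / 6

    # Hypothetical install tasks, in hours: (best case, most likely, worst case)
    tasks = {
        "rack and stack": (2, 4, 8),
        "switch configuration": (3, 5, 12),
        "cutover and testing": (2, 6, 16),
    }

    for name, estimate in tasks.items():
        print(f"{name}: {pert_estimate(*estimate):.1f} hours")
    print(f"Project estimate: {sum(pert_estimate(*e) for e in tasks.values()):.1f} hours")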

I worked with a project manager who jokingly told me that the way he figured out the timing for an installation project was to take the units from his engineers, double the number, and move to the next time unit. So hours became days, and days became weeks. We chuckled about this at the time, but it also wasn’t surprising when his projects always seemed to take a LOT longer than most people budgeted for.

The problem with inflated numbers is that no customer is going to want to pay for wasted time. If you think it’s hard to get a customer to buy off on an installation that might take 30 hours, try getting them to pay when they’re telling you your engineers were sitting around for 10 of those hours. Customers only want to pay for the hours worked, not the hours spent on the phone trying to locate shipments or figure out what some weird error message means.

Likewise, trying to go the other direction and get things done more quickly than the estimate is a recipe for disaster too. There’s even a specific term for it: crashing (sounds great, eh?). Crashing a project means adding resources to a project or removing items from the critical execution path to make a deadline or complete something earlier. If you want a textbook example of why messing with a project timeline is a bad idea, go read or watch The Martian. The first resupply mission is a prime example of this practice in action and why it can go horribly wrong.

These are all great reasons why cloud is so appealing to people. Justin Warren (@JPWarren) did a great presentation a couple of years ago about what happens when projects run late and why cloud fixes that:

Watch that whole video and you’ll understand things from a project manager’s point of view. Cloud is predictable and stable and it always works the same way. The variance on things is tight. You don’t have to worry about projects slipping up or taking too much time. Cloud removes uncertainty and doubt about execution. That’s something that project managers love.


Tom’s Take

I used to get asked to quote my projected installation times to the sales managers for projects. Most of the time, I’d give them an estimate that I felt comfortable with and that would be the end of it. One day, I asked them why a 10-hour project was quoted as 14 on an order. The sales manager told me that they’d developed “Tom Time”, which was 1.4 times whatever I quoted. So 10 hours became 14, 20 hours became 28, and so on. When I asked why, I was told that engineers often run into problems and don’t think to account for them. So project managers need to build in the time somehow. Perhaps that’s one of the reasons why software defined and cloud are more attractive. Because there isn’t any Tom Time involved.

Friday Musings on Network Analytics

I’ve been at Networking Field Day this week, and as always the conversations have been great and focused around a variety of networking topics. One that keeps jumping out at me is network analytics. There’s a few things that have come up that were especially interesting to me:

  • Don’t talk yourself into thinking network monitoring isn’t worth your time. Odds are good you’re already monitoring stuff in your network and you don’t even realize it. Many networking vendors enable basic analytics for troubleshooting purposes. You need to figure out how to build that into a bigger part of your workflows.
  • Remember that analytics can integrate with platforms you’re already using. If you’re using ServiceNow, you can integrate everything into it. There’s no better way to learn how analytics can help you than to set up some kind of ticket generation for down networks (see the sketch after this list). And if that automation causes you to get overloaded with link-flap tickets, you’ll have even more motivation to figure out why your provider can’t keep things running.
  • Don’t discount open source tools. The world has come a long way since MRTG and Cacti. In fact, a lot of the flagship analytics platforms are built with open source tools as a starting point. If you can figure out how to use the “free” versions, you can figure out how to implement the bigger stuff too. The paid versions may look nicer or have deeper integrations, but you can bet that they all work mostly the same under the hood.
  • Finally, remember that you can’t possibly deal with all this data yourself. You can collect it, but parsing it is like trying to drink from a firehose of pond water. You need to treat the data and then analyze the result. Find tools (probably open source) that help you understand what you’re seeing. If it saves you 10 minutes of looking, it’s worth it.
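
To make the ServiceNow point above concrete, here’s a rough sketch of raising an incident through ServiceNow’s Table API when a monitored link goes down. The instance name, credentials, and device details are placeholders; a real deployment would more likely use your monitoring platform’s own webhook or connector than a hand-rolled script:

    import requests

    def open_incident(instance, user, password, short_description, description):
        """Create an incident through the ServiceNow Table API."""
        url = f"https://{instance}.service-now.com/api/now/table/incident"
        payload = {
            "short_description": short_description,
            "description": description,
            "category": "network",
            "urgency": "2",
        }
        resp = requests.post(
            url,
            auth=(user, password),
            headers={"Accept": "application/json"},
            json=payload,
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["result"]["number"]

    # Placeholder instance, credentials, and device names for illustration only.
    ticket = open_incident(
        "example-instance", "monitoring.bot", "not-a-real-password",
        "Link down: edge-router-01 Gi0/0/1",
        "Interface flapped 5 times in 10 minutes; provider circuit suspected.",
    )
    print(f"Opened {ticket}")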

Tom’s Take

Be sure to stay tuned to our Gestalt IT On-Premise IT Roundtable podcast in the coming weeks for more great discussion on the analytics topic. We’ve got an episode that should be out soon that will take the discussion of the “expense” of network analytics in a new direction.

Independence, Impartiality, and Perspective

In case you haven’t noticed recently, there are a lot of people that have been going to work for vendors and manufacturers of computer equipment. Microsoft has scored more than a few of them, along with Cohesity, Rubrik, and many others. This is something that I see frequently from my position at Tech Field Day. We typically hear the rumblings of a person looking to move on to a different position early on because we talk to a great number of companies. We also hear about it because it represents a big shift for those who are potential delegates for our events. Because going to a vendor means loss of their independence. But what does that really mean?

Undeclaring Independence

When people go to work for a manufacturer of a computing product, they necessarily lose their independence. But that’s not the only case where that happens. You also can’t be truly independent if you work for a reseller. If your company specializes in Cisco and EMC, are you truly independent when discussing Juniper and NetApp? If you make your money by selling one group of products, you’re going to be unconsciously biased toward them. If you’ve been burned or had the rug pulled out from under you by a vendor, you may be biased against them.

Likewise, if you run your own independent consulting business, are you honestly and truly independent? That would mean making no money from any vendor as your customer. That means no drafting of whitepapers, no testing, and no interaction with them that involves compensation. This falls into the same line of thinking as the reseller employee. Strictly speaking, you aren’t totally independent.

Independence would really mean not even being involved in the market. A public defender is truly independent of IT. A plumber is independent of IT. They don’t have biases. They don’t care either way what type of computer or network solution they use. But you don’t consider a public defender or a plumber to be an expert on IT. Because in order to know anything about IT you have to understand the companies. You have to talk to them and get what they’re doing. And that means you’re going to start forming opinions about them. But, truthfully, real independence is not what you’re looking for here.

Partial Neutrality

Instead of independence, which is a tricky subject, what you should look for is impartiality. Being impartial means treating everything fairly. You accept that you have bias and you try to overcome it. For example, judges are impartial. They have their own opinions and thoughts about subjects. They rule based on the law. They aren’t independent of the justice system. Instead, they do their best to use logic and rules to decide matters.

Likewise, people in IT should strive to be impartial. Instead of forming an opinion about something without thought we should try to look at all aspects of the ideas before we form our opinions. I used to be a huge fan of IBM Thinkpad laptops. I carried one for many years at my old job. Today, I’m a huge fan of MacBooks. I use one every day. Does that mean that I don’t like Windows laptops any more? It means that I don’t use them enough to have a solid opinion. I may compare some of the features I have on my current laptop against the ones that I see, but I also recognize that I am making that comparison and that I need to take it into account when I arrive at my decision.

Extending this further, when making decisions about how we analyze a solution from a company, we have to take into account how impartial we’re going to be in every direction. When you work for a company that makes a thing, whether it be a switch or a laptop or a car, you’re going to be partial to the thing you make. That’s just how people are. We like things we are involved in making. Does that mean we are incapable of admiring other things? No, but it does mean that we have some things to overcome to truly be impartial.

The hardest part of trying to be impartial is realizing when you aren’t. Unconscious bias creeps in all the time. The memory of the one time you used a bad product or ate bad food at a restaurant makes you immediately start looking for something else. Even when it’s unfounded, we still do it. And we have to recognize when we’re doing it in order to counter it.

Being impartial doesn’t happen by accident; it takes deliberate effort. Being able to look at all sides of an issue, both good and bad, takes a lot of extra work. We don’t always like seeing the good in something we dislike or finding the bad parts of an opinion we agree with. But in order to be good judges of things we need to find the balance.


Tom’s Take

I try my best to be impartial. I know it’s hard to do. I have the fortune to be independent. Sure, I’m still a CCIE so I have a lot of Cisco knowledge. I also have a lot of relationships with people in the industry. A lot of those same people work at companies that are my customers. In the end, I have to realize that I need to work hard to be impartial every day. That’s the only way I can say that I’m independent and capable of evaluating things on their own merits. It’s just a matter of having the right perspective.

The Privacy Pickle

I recorded a fantastic episode of The Network Collective last night with some great friends from the industry. The topic was privacy. Originally I thought we were just going to discuss how NAT both was and wasn’t a form of privacy and how EUI-64 addressing wasn’t the end of days for people worried about being tracked. But as the show wore on, I realized a few things about privacy.

Booming In Peace

My mom is a Baby Boomer. We learn about them as a generation based on some of their characteristics, most notably their rejection of the values of their parents. One of the things they hold most dear is their privacy. They grew up in a world where they could be private people. They weren’t living in a one- or two-room house with multiple siblings. They had a right to privacy. They could have a room all to themselves if they so chose.

Baby Boomers, like my mom, are intensely private adults. They marvel at the idea that targeted advertisements can work for them. When Amazon shows them an ad for something they just searched for they feel like it’s a form of dark magic. They also aren’t trusting of “new” things. I can still remember how shocked my mother was that I would actively get into someone else’s car instead of a taxi. When I explained that Uber and Lyft do a similar job of vetting their drivers it still took some convincing to make her realize that it was safe.

Likewise, the Boomer generation’s sense of personal privacy doesn’t mesh well with today’s technology. While there are always exceptions to every rule, the number of people in their mid-50s and older that use Twitter and Snapchat is far, far lower than the number in each service’s target demographic. I used to wonder if it was because older people didn’t understand the technology. But over time I started to realize that it was more about the fact that older people just don’t like sharing that kind of information about themselves. They’re not open books. Instead, Baby Boomers take a lot of studying to understand.

Zee Newest

On the opposite side of the spectrum is my son’s generation, Generation Z. GenZ is the opposite of the Boomer generation when it comes to privacy. They have grown up in a world that has never known anything but the ever-present connectivity of the Internet. They don’t understand that people can live a life without being watched by cameras and having everything they do uploaded to YouTube. Their idea of celebrity isn’t just TV and movie stars but also extends to video game streamers on Twitch or Instagram models.

Likewise, this generation is more open about their privacy. They understand that the world is built on data collection. They sign away their information. But they tend to be crafty about it. Rather than acting like previous generations that would fill out every detail of a form, this generation only fills out the necessary pieces. And they have been known to put in totally incorrect information for no other reason than to throw people off.

GenZ realizes that the privacy genie is out of the bottle. They have to deal with the world they were born into, just like the Baby Boomers and the other generations that came before them. But the way they choose to deal with it is not through legislation but through self-regulation. They choose what information they expose so as not to create a trail or a profile that big-data-consuming companies can use to fingerprint them. And in most cases, they don’t even realize they’re doing it! My son is twelve and he instinctively knows that you don’t share everything about yourself everywhere. He knows how to navigate his virtual neighborhood just as surely as I knew how to ride my bike around my physical one back when I was his age.

Tom’s Take

Where does that leave me and my generation? Well, we’re a weird mashup of Generation X and Generation Y/Millennials. We aren’t as private as our parents and we aren’t as open as our children. We’re cynical. We’re rebelling against what we see as our parents’ generation and their complete privacy. Likewise, just like our parents, we are almost aghast at the idea that our children could be so open. We’re coming to live in a world where Big Data is learning everything about us. And our children are growing up in that world. Their children, the generation after GenZ, will only know a world where everyone knows everything already. Will it be like Minority Report, where advertising works with retinal patterns? Or will it be a generation where we know everything but really know nothing because no one tells the whole truth about who they are?

Tale From The Trenches: The Debug Of Damocles

My good friend and colleague Rich Stroffolino (@MrAnthropology) is collecting Tales from the Trenches about times when we did things that we didn’t expect to cause problems. I wanted to share one of my own here about the time I knocked a school offline with a debug command.

I Got Your Number

The setup for this is pretty simple. I was deploying a CallManager setup for a multi-site school system deployment. I was using local gateways at every site to hook up fax lines and fire alarms with FXS/FXO ports for those systems to dial out. Everything else got backhauled to a voice gateway at the high school with a PRI running MGCP.

I was trying to figure out why the station IDs being sent by the sites weren’t going out over caller ID. Everything was showing up as the high school number. I needed to figure out what was being sent. I was at the middle school location across town and trying to debug via telnet. I logged into the router and figured I would make a change, dial my cell phone from the VoIP phone next to me, and see what happened. Simple troubleshooting, right?

I did just that. My cell phone got the wrong caller ID. And my debug command didn’t show me anything. So I figured I must be debugging the wrong thing. Not debug isdn q931 after all. Maybe it’s a problem with MGCP. I’ll check that. But I just need to make sure I’m getting everything. I’ll just debug it all.

debug mgcp packet detail

Can You Hear Me Now?

Veterans of voice are probably screaming at me right now. For those who don’t know, adding the detail keyword to any debug generates a ton of messages. And I wasn’t consoled into the router. I was remote. And I didn’t realize how big of a problem that was until my telnet window started scrolling at 100 miles an hour. And then froze.

Turns out, when you overwhelm a router CPU with debug messages, it stops servicing the telnet session. It shuts off the console as well, but I wouldn’t have known that because I was a long way away from that port. What I did start hearing was people down the hall saying, “Hello? Hey, are you still there? Weird, it just went dead.”

Guess what else a router isn’t doing when it’s processing a tidal wave of debug messages? It’s not processing calls. At all. For five school sites. I looked down at my watch. It was 2:00pm. That’s bad. Elementary schools get a ton of phone calls within the last hour of being in session. Parents calling to tell kids to wait in a pickup line or ride a certain bus home. Parents wanting to check kids out early. All kinds of things. That need phones.

I raced out of my back room. I acknowledged the receptionist’s comment about the phones not working. I jumped in my car and raced across town to the high school. I managed not to break any speed limits, but I also didn’t loiter one bit. I jumped out of my car and raced into the building. The look on my face must have warded off any comments about phone system issues because no one stopped me before I got to the physical location of the voice gateway.

I knew things were bad. I didn’t have time to console in and remove the debug command. I did what every good CCIE has been taught since the beginning of time to do when they need to remove a bad configuration that broke their entire lab.

I pulled the power cable and cycled the whole thing.

I was already neck deep in it. It would have taken me at least five minutes to get my laptop ready and consoled in. In hindsight, that would have been five wasted minutes since the MGCP debugger would have locked out the console anyway. As the router was coming back up, I nervously looked at the terminal screen for a login prompt. Now that the debugger wasn’t running, everything looked normal. I waited impatiently for the MGCP process to register with CallManager once more.

I kept repeating the same status CLI command while I refreshed the gateway page in CallManager over and over. After a few more tense minutes, everything was back to normal. I picked up a phone next to the rack and dialed my cell phone. It rang. I was happy. I walked back to the main high school office and told them that everything was back to normal.

Tom’s Take

My post-mortem was simple. I did dumb things. I shouldn’t have debugged remotely. I shouldn’t have used the detail keyword for something so simple. In fact, watching my screen fill up with five sites worth of phone calls in a fraction of a second told me there was too much going on behind the scenes for me to comprehend anyway.

That was the last time I ever debugged anything in detail. I made sure from that point forward to start out small and then build from there to find my answers. I also made sure that I did all my debugging from the console and not a remote access window. And the next couple of times I did it were always outside of production hours with a hand ready to yank the power cable just in case.

I didn’t save the day. At best, all I did was cover for my mistake. If it had been a support call center or a hospital I probably would have been fired for it. I made a bad decision and managed to get back to operational without costing money or safety.

Remember when you’re doing your job that you need to keep an eye on how your job will affect everything else. We don’t troubleshoot in a vacuum after all.