Fast Friday – Networking Field Day 22 Thoughts

Since I’m on the road again at Networking Field Day this week, I’ve had some great conversations with the delegates and presenters. Here are a few stray thoughts that may develop into full-blown blog posts at some point, but I figured I could get some of them out here for some quick entertainment.

  • The startup model means flexibility. That also means you can think about problems in a new light. So it follows that you get to develop new ideas without a mountain of technical debt: things like archaic platforms and crusty old user interfaces. You’d be surprised how much stuff gets carried forward as technical debt.
  • Integrating products isn’t easy. Even if you think you’ve got the right slot for your newest acquisition you may find it isn’t the best fit overall. Or, even better, you may find a synergy you didn’t know existed because of a forgotten tool. Very rarely does anything just neatly fit into all your plans.
  • The more guest Wi-Fi I have to register for, the more I long for the days of Passpoint and OpenRoaming. If you already know who I am, why oh why must I continually register? Who wants to create Envoy, but for Wi-Fi?
  • There are days when I miss the CLI and doing stuff. Then I look at how complicated networks are now with the cloud and I realize I’d be in over my head. Also, no one wants to parse thousands of lines of log files. Even when I have insomnia.

Tom’s Take

I’ll have more good stuff soon. Don’t forget to check out the stuff I write for Gestalt IT, which includes posts from previous Field Day events and some briefings I’ve taken.

Agility vs. Flexibility

When you’re looking at moving to a new technology, whether it be SD-WAN or cloud, you’re going to be told all about the capabilities it has and all the shiny new stuff it can do for you. I would almost guarantee that you’re going to hear the words “agile” and “flexible” at some point during the conversation. Now, obviously those two things are different; otherwise we wouldn’t need two different words to describe them. But I’ve also heard people use them interchangeably. What does it mean to be agile? And is it better to be flexible too?

Agile Profile

Agility is the ability to move quickly and easily. It’s a quality displayed by athletes and fighters the world over. It’s a combination of reflexes and skill. Agility gives you the ability to react quickly to situations.

What does that mean in a technology sense? Mostly, agile solutions or methodologies are able to react to changing conditions or requirements quickly and adapt to meet those needs. Imagine a platform that can react to the changing needs of users. Or add new functions on demand. That’s the kind of agility that comes from software functionality or programmability. It’s a development team that can react without technical debt weighing it down.

But agile doesn’t always mean extensible. Just because you can react quickly doesn’t mean you can extend the platform beyond what it was designed to do. Agile solutions can be rebuilt quickly but they have limitations. Usually, with technology, those limitations revolve around hardware. Agile solutions have to be built that way from the start. But that often means sacrifices must be made. Perhaps it didn’t ship with an interface that allows hardware to be added. Maybe the form factor is a limitation. A Raspberry Pi is a very agile platform, within reason. But you’re never going to be able to build them into a GPU farm. Because they are locked into a specific kind of agility.

Flex Specs

Flexibility is the ability to adapt to new environments or changing requirements. That definition sounds an awful lot like the one above for agility, doesn’t it? They both sort of mean that you can change what you’re capable of. Flexibility is a characteristic usually used to describe gymnasts or dancers. Would you confuse a ballerina with a boxing champion? Likely not. Even though they can both react to different situations, they’re different in many ways.

First and foremost, flexibility doesn’t require speed. Agility implicitly requires quick reactions. Flexibility can take time to adapt to things. Maybe that means adding new hardware to a server to expand GPU capabilities. Maybe it means adding modules to a software program to add new functions, like financial tracking added to a roster program. It may not be available right away but it is something that can be built in.

Flexibility on a hardware platform can take many directions. I always think of SD-WAN appliances as the ultimate form of flexibility. The more advanced units can run 4G/LTE modems in USB form. Or they can even run in the cloud without any specific hardware. The software platform isn’t tied to one specific hardware configuration or even form factor. It’s truly flexible because it doesn’t have hardware prerequisites locking it in.

But, as mentioned, flexibility isn’t always equated to agility. You can have a very flexible platform that requires a lot of time to build out. A classic example would be a desktop computer. It’s a very flexible platform but it takes time to install expansion cards and optional hardware. It’s also something that has to be configured and built to be flexible from the start. ATX motherboards have a certain kind of flexibility. Micro-ATX boards trade expansion flexibility for size flexibility. I can’t add two extra graphics cards to them but I can put the board into a case the size of a toaster.


Tom’s Take

What’s better? Agile or flexible? It depends on what kind of solution you need. Do you want to build on something? Or be able to upgrade it quickly? Is speed more important than creativity? There are so many dimensions that need to be considered. Most modern platforms have a few elements of each in their design. SD-WAN is both agile and flexible. Some solutions are more one than the other and that’s fine. Just remember to be very specific about your criteria, because if you’re looking for one you may end up with the other and not realize it until it’s too late.

Meraki Is Almost An Enterprise Solution

You may remember three or so years ago when I famously declared that Meraki is not a good solution for enterprises. I know the folks at Meraki certainly haven’t forgotten. The profile for the hardware and services has slowly been rising inside of Cisco. More than just wireless with the requisite networking components, Meraki has now embraced security, SD-WAN, and even security cameras. They’ve moved into a lot of areas that customers have been asking about while still trying to maintain the simplicity that Meraki is known for.

Having just finished up a Meraki presentation during Tech Field Day Extra at Cisco Live Europe, I thought it would be a good time to take a look at the progress that Meraki has been making toward embracing their enterprise customer base. I’m not entirely convinced that they’ve made it yet, but the progress is starting to look good.

Playing for Scale

The first area where Meraki is starting to really make strides is in the scalability department. This video from Tech Field Day Extra is all about new security features in the platform, specifically with firewalls. Take a quick look:

Toward the end of the video is one of the big things I got excited about. Meraki has introduced rule groups into their firewall platform. Sounds like a strange thing to get excited about, right? Kind of funny how the little things end up mattering in the long run. The real reason I’m getting excited about it has nothing to do with the firewall or even the interface. It has everything to do with being scalable.

One of the reasons why I’ve always seen Meraki as a solution more appropriate for small businesses is the lack of ability to scale. Meraki’s roots as a platform built for small deployments mean that the interface has always been focused on making things easy to configure. You may remember from my last post that I wasn’t a fan of the way everything felt like it was driven through deployment wizards. Hand-holding me through my first firewall deployment is awesome. Doing it for my 65th deployment is annoying. In enterprise solutions I can easily script or configure this via the command line to avoid the interface. But Meraki makes me use the UI to get it done.

Enterprises don’t run on wizards. They don’t work with assistance turned on. Enterprises need scalability. They need to be able to configure things to run across dozens or hundreds of devices quickly and predictably. Sure, it may only take four minutes to configure something on the firewall. Now, multiply that by 400 devices. That one little setting is going to take over 26 hours to configure (400 devices × 4 minutes is 1,600 minutes, or almost 27 hours). And that’s assuming you don’t need to take time for a break or to sleep. When you’re working at the scale of an enterprise, those little shortcuts matter.

You might be saying right now, “But what about policies and groups for devices?” You would be right that groups can definitely speed up the process. But how many groups do you think the average enterprise would have for devices? I doubt all routers or switches or firewalls would conveniently fit into a single group. Or even ten groups. And there’s always the possibility that a policy change among those groups gets implemented correctly nine times out of ten. The tenth time it hits an error that could still affect hundreds of devices. You can see how this could get out of hand.

That’s why I’m excited about the little things like firewall rule groups. It means that Meraki is starting to see that these things need to be done programmatically. Building a series of policies in software makes it easy to deploy over and over again through scripting or enhanced device updating. Policies are good for rules. They’re not so good for devices. So the progress means that Meraki needs to keep building toward letting us script these deployments and updates across the entire organization.
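To make that concrete, here’s a minimal sketch of what organization-wide deployment could look like, assuming the Meraki Dashboard API’s networks and L3 firewall rules endpoints work roughly the way they’ve been documented. The API key, organization ID, and rule contents are placeholders, so treat this as an illustration of the idea rather than a tested deployment script:

```python
import requests

API_KEY = "your-dashboard-api-key"   # placeholder credential
BASE = "https://api.meraki.com/api/v0"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"}

# The policy is defined once, as data, instead of 400 trips through a wizard.
RULES = {"rules": [
    {"comment": "Block guest VLAN from corp", "policy": "deny",
     "protocol": "any", "srcCidr": "10.99.0.0/16", "destCidr": "10.0.0.0/8",
     "srcPort": "Any", "destPort": "Any", "syslogEnabled": True},
]}

def network_ids(org_id):
    """Return every network in the org; filter however you group your sites."""
    resp = requests.get(f"{BASE}/organizations/{org_id}/networks", headers=HEADERS)
    resp.raise_for_status()
    return [net["id"] for net in resp.json()]

for net_id in network_ids("123456"):  # placeholder org ID
    resp = requests.put(f"{BASE}/networks/{net_id}/l3FirewallRules",
                        headers=HEADERS, json=RULES)
    print(net_id, "updated" if resp.ok else f"failed ({resp.status_code})")
```

Four minutes per device collapses into one loop, and the 400th network gets exactly the same rules as the first. That predictability is what enterprises are really asking for.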

Hextuple Option

The other thing that’s neatly buried at the end of the video is courtesy of a question from my friend Jody Lemoine (@GhostInTheNet). He points out that there are IPv6 addresses on the dashboard. The Meraki presenters confirm that they are testing IPv6 support natively and not just in bridged mode. Depending on when you read this post in the future, it may even be out already. You know that I’m an IPv6 fan and I’ve been tough on Meraki in the past about their support for it. So I’m happy to see that it’s in the works.

But more importantly I’m pleased that Meraki has jumped into a complex technical solution with both feet. Enterprises don’t need just a basic set of services. They don’t want you to just turn on the twenty most common settings. Enterprises need odd things sometimes. They need longer VPN lifetimes or support for weird routing LSAs. Sometimes they need to do the really odd things because their thousand-odd devices really have to have this one feature turned on to make it all work.

Now, I’ve rightfully decried the idea that you should just do whatever your customers want, but the truth is that doing something silly for one customer isn’t the same as doing it for a dozen or more that are asking for a feature. Meraki has always felt hesitant to me about the way they implement features in their software. It’s almost the opposite of Cisco, in a way. Cisco is happy to include corner-case options in software releases on a whim to satisfy million-dollar customers. Meraki, on the other hand, has seemed to wait until well past critical mass to turn something on. It almost feels like you have to break down their door to get something advanced enabled.

To me, IPv6 is the watershed. It’s something that the general public doesn’t think they need or doesn’t know they really should have. Cisco has had IPv6 support in IOS for years. Meraki has been dragging their feet until they feel the need to implement it. But implementing it in 2020 makes me feel they will finally start implementing features in a way that makes sense for users. Hopefully that also means they’ll be more responsive to their Make A Wish feature requests and start tracking how many customers really want a certain feature or option enabled.

Napoleon Complex

The last thing that I’ll say about the transformation of Meraki is about their drive to embrace complexity. I know that Russ White and I don’t always see eye-to-eye about complexity. But I feel that hiding it is ultimately detrimental to IT staff members. Sure, you don’t want the CEO or the janitor in the wireless system deploying new SSIDs on a daily basis or disabling low data rates on APs. But you also need to have access to those features when the time comes. That was one of my big takeaways in my previous Meraki post.

I know that Meraki prides themselves on having a clean, easy-to-use interface. I know that it’s going to take a while before Meraki starts exposing their interface to power users. But it also took Microsoft a long time to let people start doing massive modifications via PowerShell. Or Apple letting users go wild under the hood. These platforms finally opened a little and let people do some creative things. Sure, Apple iOS is still about as locked down as Meraki is, but every WWDC brings some new features that can be tinkered with here and there. I’m not expecting a fully complexity-embracing model in the next couple of years from Meraki, but I feel that the right people internally are starting to understand that growth comes in the form of enterprise customers. Enterprises don’t shy away from complexity. They don’t want it hidden. They want to see it and understand it and plan for it. And, ultimately, embrace it.


Tom’s Take

I will freely admit that I’m hard on the Meraki team. I do it because I see potential. I remember seeing them for the first time all the way back at Wireless Field Day 2 in their cramped San Francisco townhome office. In the years since the Cisco acquisition they’ve grown immensely with talent and technology. The road to becoming something more than you start out doing isn’t easy. And sometimes you need someone willing to stop you now and then and tell you which direction makes more sense. I don’t believe for one moment that my armchair quarterbacking has really had a significant impact on the drive that Meraki has to get into larger enterprises. But I hope that my perspective has shown them how the practitioners of the world think and how they’re slowly transforming to meet those needs and goals. Hopefully in the next couple of years I can write the final blog post in this trilogy when Meraki embraces the enterprise completely.

Really Late Company Christmas Shopping

I’m headed out to Cisco Live Europe today, so I’m trying to get everything packed before I head to the airport. I also realize I need to go buy a few things for my suitcase. Which must be the same thing that a bunch of companies thought this week as they went on a buying spree! Seriously:

I don’t think we’re quite done yet, either. An oblique tweet from a friend with some inside sources leads me to believe that the reason this is happening right now is that some of the venture funds are getting antsy and are calling in their markers. Maybe they need the funds to cash out investors? Maybe they’re looking to reduce their exposure to other things? Maybe they’re ready to jump on a plane to an uncharted island somewhere?

This is one of the challenges when you’re beholden to investors. Sure, not all of us are independently wealthy and capable of bootstrapping our own startup. We need some kind of funding to make that happen. But as soon as we do we are going to find ourselves at the mercy of their decisions and be forced to play by their rules.

If it’s time for them to get out of the position they have in a company, you’d better have the money. And if you don’t, they’re going to get it one way or another. I don’t know for sure what the situation is in either of those cases, but no one had really been talking publicly about buying Nyansa or Big Switch in the last few months. I had always figured that Nyansa would go to a bigger company, much like Aruba buying Rasa Networks in 2016. VMware is an interesting fit for them and a much better enterprise use of the technology in the long term.

Big Switch is puzzling for sure. From what I’ve heard they were profitable last quarter and bullish on the entire outlook for 2020. Did something change? Did the investors decide they wanted out? Or did some other market force push Big Switch to find a new home? When you look at the list of companies that were interested in buying them it’s not surprising. Dell Technologies would have been my first guess given their close working relationship. VMware would have been the second. Juniper and Extreme were interesting options but I’m not quite sure where the fit would be with them. And Cisco would have purchased them as a purely defensive measure. So Arista is an interesting fit. I’m still waiting to hear some more details given how fresh this story is.

We’re into Q1 for most companies now. Or at least the ones that don’t have an odd FY schedule. So they’re realizing they either need to catch up on some R&D or that they have enough cash or equity lying around to go shopping. And if some of the companies on the market are selling at lower prices, it only makes sense to snap them up. Even if the integration pieces are going to take a while. Nyansa has great analytics, but it’s focused on the endpoint side. It’s going to take some work to make it all play nice with the other analytics pieces of VMware. That’s not cheap, but if acquiring the technology is cheaper than building it in-house then buying your way in looks better in the long run. And if some venture fund is looking for cash at the same time, it could be a match made in heaven.


Tom’s Take

I’m a tech person. Even through the stuff I’ve done with Tech Field Day, where I’ve had to learn more about financing and such, I still consider myself a tech grunt first and foremost. When the talk turns to preferred share options and funding rounds and other such stuff I tend to look back at technology and figure out where that stuff is going. People that work with money for a living have a much different opinion of technology than tech people do. If that weren’t the case, we’d be talking about Betamax and HD-DVD more than we do now. But money is still the way that tech gets done. And sometimes you need to do a little shopping to get the tech you need to keep building.

Why Do You Need NAT66?

It’s hard to believe that it’s been eight years since I wrote my most controversial post ever. I get all kinds of comments on my NAT66 post even to this day. I’ve been told I’m a moron, an elitist, and someone that doesn’t understand how the Internet works. I’ve also had some good comments that highlight a specific need for tools like NAT66. I wanted to catch up with everything and ask a very important question.

WHY?

Every Tool Has A Purpose

APNIC had a great post about NAT66 back in 2018. You should totally read it. I consider it a fair review of the questions surrounding NAT’s use in 2020. Again, NAT has a purpose, and when used properly and sparingly for that purpose it works well. In the case of the article, Marco Cilloni (@MCilloni) lays out the need to use NAT66 to get IPv6 working at his house due to ISP insanity and the latency overhead of using tunnels with Hurricane Electric. In this specific case, NAT66 was a good tool for him to use to translate his /128 address to something usable in his network.

If you’re brave, you should delve into the comments. A couple of my favorite passages:

People from your side completely fail to understand that while NAT was not designed for security, it did bring security, in particular for home users.

Either the IPv6 community sobers up and starts actively supporting NAT or you can kiss the IPv6 protocol goodbye. I’ve put many many hours into IPv6 integration and I’m starting to realize it’s a doomed protocol and should be scraped.

It’s obvious to me a decade later that there are two camps of people that discuss NAT66: those that are using it for a specific purpose, and those that think it has to be enabled because they use it with IPv4 networks. An example of the former:

Pieter knew what he needed to do to make it work and he did it. Not because it was something he configured on his home router just to make the Internet work. And he knew this wasn’t the optimal solution. When you can’t change the ISP router to accept RAs you need a workaround. There are a ton of stories I get from people just like Pieter that involve a workaround or a specific limitation, like provider-independent address space not being available. These are the kinds of situations that tools like NAT were designed to solve.

X, Why, Z

Let’s get back to my earlier question: WHY?

For those that believe that NAT66 needs to be used everywhere, why? Is it because you’re used to using RFC 1918 address space to conserve addresses? Maybe you don’t like the idea of your MAC address “leaking” onto the Internet? How about providing enhanced security, as the commenters above mentioned? There’s even a comment on my original post from late last year about using NAT to force redirects for DNS entries to keep Google from overriding DNS on his Android phone. Why?

For me, this always comes back to the same answer I give again and again. NAT, used for a purpose, isn’t inherently evil or wrong. It’s when people start using it for things it wasn’t designed for and breaking every other thing that it becomes a crutch and a problem. And those problematic solutions always cause issues somewhere down the line.

NAT44 broke FTP. It broke end-to-end communications. It created the need for big, hungry middleboxes to track the state of connections. It conflated addressing and firewall functions. Anyone that screams at me and tells me that NAT provides security by obscuring addresses usually has an edge firewall doing NAT for them. In a true one-to-one NAT configuration, accessing the public or global IP address of the host in question does nothing to halt the traffic from being passed through. People who talk to me about address obfuscation with NAT44 or NAT66 usually mean PAT, not NAT. They want one address masquerading as all the addresses in their organization to “hide” themselves from the Internet.
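To see why that distinction matters, here’s a toy sketch of the state a PAT device has to carry. This is purely illustrative, not any real implementation, but it shows why one public address masquerading for many inside hosts requires a big, stateful middlebox: lose the table and inbound replies have nowhere to go.

```python
from collections import namedtuple

Flow = namedtuple("Flow", "inside_ip inside_port")

class ToyPAT:
    """A toy port-address translator: many inside hosts, one public IP."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 49152        # start of the ephemeral port range
        self.table = {}               # (inside_ip, inside_port) -> public port
        self.reverse = {}             # public port -> (inside_ip, inside_port)

    def outbound(self, inside_ip, inside_port):
        """Rewrite an inside source to the shared public address."""
        flow = Flow(inside_ip, inside_port)
        if flow not in self.table:
            self.table[flow] = self.next_port
            self.reverse[self.next_port] = flow
            self.next_port += 1
        return self.public_ip, self.table[flow]

    def inbound(self, public_port):
        """Without stored state, a reply simply can't be delivered."""
        return self.reverse.get(public_port)   # None means drop

pat = ToyPAT("203.0.113.1")
print(pat.outbound("10.0.0.5", 33000))   # ('203.0.113.1', 49152)
print(pat.inbound(49152))                # maps back to the inside host
print(pat.inbound(50000))                # None -- no state, no delivery
```

A true one-to-one NAT needs none of this per-flow state, which is exactly why it also does nothing to “hide” the host behind it.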

Why does NAT need to solve those problems? I get the complexity of using provider-independent (PI) space internally and the need to configure BGP to avoid asymmetric routing. Especially if your upstream provider isn’t paying attention to the communities or attributes you’re using to avoid creating a transit network or to weight traffic to prefer one link over the other. NAT may be a good short-term solution for you in these cases. But do you really want to run NAT66 for the next decade because of a policy issue with your ISP? That, to me, is the ultimate in passive-aggressive configuration. Why jump through hoops instead of hammering out a real solution?

I may sound like a 5-year-old, but “WHY” is really the most important question you can ask. Why do you need NAT66? Why do you even need IPv6? Is it a requirement for a contract? Maybe you have to enable it to allow your application to be uploaded to the walled-garden store for your mobile provider. Maybe you just want to play around with it and get a Hurricane Electric Sage t-shirt. But if you can’t answer “WHY” then all the other things you want aren’t going to make sense.

I don’t run my HE.net tunnel at home any longer. I didn’t have an advantage in running IPv6 and I had more than a few headaches that had to be addressed. There will come a day when I want to do more with IPv6, but that’s going to require more bandwidth than I have right now. I still listen to IPv6 podcasts all the time, like the excellent IPv6 Buzz featuring my friend Ed Horley. Even the experts are bullish about the adoption of IPv6 but not ignorant of the challenges. And these guys run a business doing IPv6.

For those of you that are already limbering up your fingers to leave me a comment, stop and ask yourself “WHY” first. Why do you need NAT66? Is it because you have a specific problem you can’t solve any other way? Or is it because you need NAT66 to be there just like ISDN dialer maps and reserved VLANs on switches? To me, even past my days in the trenches as an engineer, the days of needing NAT everywhere are long gone. The IPv4 Internet relies on NAT. We are hobbled by that fact. VPNs need NAT traversal. Game consoles and VoIP devices need to be NAT-aware, which increases complexity.

The IPv6 Internet doesn’t need to be like that. Let’s teach the “right” way to do things. You don’t need NAT66 for privacy. RFC 4941 exists for that. You don’t need to think NAT66 provides security. That’s what perimeter devices are for. Anything more complicated than those two “simple” cases is going to be an exercise in frustration. You don’t need to bellow from the rooftops that NAT is a necessary and mandatory piece of all Internet traffic. Instead, come back to “WHY”. Why do two devices need a middlebox to translate for them and hold state information? Why can’t my ISP provide me the address space I want or the connectivity options that make this work easily? The more “WHY” questions you ask, the more the answers will come. If you just want to fold your arms and claim that NAT is needed because “this is the way,” you may find yourself alone on the Island of NAT sooner than you think.
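On the privacy point, here’s a small sketch of the idea behind RFC 4941 temporary addresses: pair the advertised /64 prefix with a randomized interface identifier instead of one derived from the MAC address. Real stacks also handle lifetimes, regeneration, and duplicate address detection, so this only shows the core trick:

```python
import os
from ipaddress import IPv6Address, IPv6Network

def temporary_address(prefix: IPv6Network) -> IPv6Address:
    """Combine a /64 prefix with a random 64-bit interface identifier."""
    assert prefix.prefixlen == 64, "SLAAC-style prefixes are /64"
    random_iid = int.from_bytes(os.urandom(8), "big")
    return IPv6Address(int(prefix.network_address) | random_iid)

lan = IPv6Network("2001:db8:1234:5678::/64")
print(temporary_address(lan))   # a different global address every run
```

On Linux the real mechanism is a one-line sysctl (net.ipv6.conf.all.use_tempaddr=2, if memory serves). No translation device required.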


Tom’s Take

My identity as the “I Hate NAT” guy is pretty solid at this point in my life. It’s too late to change now. Sure, I don’t hate NAT completely. It’s like a vulture to me. It serves a specific purpose but having it everywhere is almost always problematic. By now, the only way I can work against the needless expansion of NAT is to ask hard questions. Ironically enough, the hard questions aren’t multi-part essays that require an open-book test to resolve. Often, the hardest questions can just be a single word that forces you to question what you need and why you need it.

The Art of Saying “No”

No.

It’s the shortest sentence in the English language. It requires no other parts of speech. It’s an answer, a statement, and a command all at once. It’s a phrase that some people have zero issues saying over and over again. And yet, some others have an extremely difficult time answering anything in the negative.

I had a fun discussion on Twitter yesterday with some friends about the idea behind saying “no” to people. It started with this tweet:

Coincidentally, I tweeted something very similar to what Bob Plankers had tweeted just hours before:

The gist is the same though. Crazy features and other things get included in software and hardware because someone couldn’t tell another person “no”. Sadly, it’s something that happens a lot in the IT industry. As bad as IT’s reputation for being the Department of No is, we often find ourselves backed into a corner by saying “yes” way too much. I wanted to examine a couple of specific situations when we really should be saying “no” to people instead of just agreeing to keep the conversation moving.

Whatever You Need, We Do

When I worked at a VAR, I did both pre- and post-sales. I would go out to the customer site with the account managers to discuss technologies and try to get the potential customer what they needed. One of the AMs I worked with loved to introduce me and imply my skill level by saying, “Tom is the guy that makes all my lies come true.” It was his favorite icebreaker. We would all chuckle and get the conversation started.

Sadly, that icebreaker was true more often than it should have been. Because he (and some other AMs) would very often tell the customer whatever they wanted to hear to close the sale. Promise we could install the whole system in three hours? Easy. Tell them it will fix all their crazy Internet speed problems? You got it. Even something as bad as telling them it will make their applications run so much faster and keep them super secure the whole time. Whatever it takes to make you sign the check.

When I arrived on site with a pile of equipment and a list of things that I needed to configure, I was quite often struck with frustration because of the way my AMs had fibbed to the customer about the capabilities of the solution. Maybe they sold the wrong licenses to keep the costs down. Or, in some cases, they sold a feature that was much harder to implement than anyone let on. I seriously couldn’t count on both hands and feet the number of times I was forced to go to the customer and ask them what they were expecting from the solution based on what was sold to them.

Sometimes, you have to say “no”. That’s a hard phrase to say when you work in sales. You want the customer to get your product or service instead of your competitors. You want to book revenue. You want to keep your boss happy and keep yourself employed. You want to meet your goals. But you also don’t want to burn your bridges when it comes to being a good resource instead of someone just looking to make a buck.

I always tried to position myself as someone that could offer impartial advice about a subject. If the customer wanted something that I couldn’t deliver I would say, “That’s not a good idea” or “Have you thought about why you want that?” I wanted to make sure that the customer really did want the thing they were asking for. Anyone that’s ever had a CEO or CIO clamor to implement a thing they saw in an airport ad after coming back from a conference trip will attest to the power of wanting cool, shiny things.

Being a truly trusted advisor to your client means you have to be honest. No, that open source project won’t get you what you’re looking for just because it’s free. No, you can’t make your old intercom system work with a new VoIP UC solution. No, you can’t just keep running this server on Windows Server 2003 for another three years so you can avoid the upgrade fees for your new clients. Saying “no” isn’t just about making them avoid things they don’t want to do. It’s about helping them understand a strategy and vision for what they need to be doing. Customers don’t always need to be told what they want to hear. They really do need to be told what they need to hear though.

Managing Products, I Think

The other side of the equation comes from the vendor side with product managers. I’ll admit that I have a limited view here, but the people that I’ve talked to seem to back up my thoughts on the matter. As stated above, I’ve always wondered how crazy random features made it into a software product. My supposition is that someone wanted to close a million-dollar deal somewhere and that feature was one of the things that it took to make that happen.

I also know that crazy things like this happen more often than you might realize. For example, ever wonder why wireless access points come configured with 80 MHz channels out of the box when everyone you know, vendors included, tells you to configure them for 20 MHz or even 40 MHz instead? Could it be that when testing companies pull the APs out of the box they don’t reconfigure the channels? Or perhaps it’s because those APs with 80 MHz defaults seem “faster” on those same tests? It’s a silly default configuration but it wins contests and reports. That’s the kind of decision that gets made by a product manager that wants to win customers or awards.

I would hope that the people that make products understand that people don’t really need insane corner-case features to make products work. Worse yet, having those crazy features included to support a random solution that is likely going to be replaced in a few years anyway cuts into partner revenue. The vendor shouldn’t be the one making their equipment compatible with every piece of hardware under the sun. Microsoft doesn’t write all the drivers for hardware to work with Windows, for example. They just write the specs for interfacing with the OS and leave the driver writing up to the people that make the webcams or Bluetooth coffee mugs.

Vendors need to let the integration work happen with the integrators. Maybe they get access to some kind of advanced API or toolkit that assists with writing the “glue” that ties systems together. But building in basic support for everything under the sun from the outset creates support nightmares and unforeseen interactions with things that you will own for the next decade. Save yourself the trouble down the road and tell people “no” and that they need to find someone to help them instead of just begging to have that crazy feature request included in a one-off build. Or, worse yet, included in the main release and enabled by default.


Tom’s Take

I will admit that I have a really hard time saying no to things. It increases my workload and makes me so distracted that I can barely see straight most of the time. But there are times that I know I need to respond in the negative to something. It’s usually when I see that the person making the request either doesn’t know what they’re asking for or will end up regretting it later on. The key is to help them understand that you have the experience they lack and the vision to see this isn’t going to work the way they are planning. Hopefully they’ll come around to your way of thinking. But if not, just remember that “No.” is a complete sentence.

Time For Improvement

Welcome to 2020! First and foremost, no posts from me involving vision or eyesight or any other optometrist puns for this year. I promise 366 days free of anything having to do with eyeballs. That does mean a whole world of other puns that I’m going to be focusing on!

Now, let’s look back at 2019. The word I’d use to describe it is “hectic”. It felt like everything was in overdrive all year long. There were several times that I got to the end of the week and realized that I didn’t have any kind of post ready to go. I’m the kind of person that likes to write when the inspiration hits me. Instead, I found myself scrambling to write up some thoughts. And that was something I told myself that I was going to get away from. So we’re going to call that one a miss and get back to trying to post on a day other than Friday.

That also means that, given all the other content I’ve been working on with Gestalt IT, I’m going to have to schedule some time to actually work on that content instead of hoping that some idea is going to fly out of the blue at 11:30pm the night before I’m supposed to put a post up. The good news is that also means I’m going to be upping the amount of content that I’m consuming for inspiration. Since I spent a good chunk of the year going on a morning walk, I had a lot more time to consume podcast episodes and wash those ideas around. I’m sure that means I’m going to find the time and the motivation to keep turning out content.

Part of the reason for that is because of something that Stephen Foskett (@SFoskett) told me during a call this past year. He pointed out that I’ve been consistently turning out content on a weekly basis for the last 9 years. I’m proud of that fact. Sure, there have been a couple of times in the last year or two when I’ve missed and had to publish something on a Saturday or the Monday after. But overall I’m happy with the amount of content that I’ve been writing here. And because you all keep on reading it, I’m going to keep writing it. There’s a lot of value in what I do here and I hope you all continue to value it too.

IA Writing

Last January I switched over to using IA Writer for my posts on my iPad. I wrote primarily on that platform all year long. I can say that it’s very handy to be able to grab your mobile device and hammer out a post. Given that I can do split screen and reference my hand-written notes from briefings, it’s a huge advantage in keeping my thoughts organized and ready to put down on paper.

Between IA Writer for writing, Notability for taking notes during briefings, and Things to keep me on track for the posts that I need to cover, I’ve gotten my workflow down to something that works for me. I’m going to keep tweaking it for sure, but I’m happy that I can get information to a place where I can refer to it later and have reminders about what I need to cover. It makes everything seamless and consistent. There are still some things that I need Microsoft Word to write, but those are long-form projects. Overall, I’m going to keep refining my process to make it better and more appropriate for me.

Ultimately, that’s a big goal for me in 2020 and something that I’ve finally realized that I do regularly without conscious thought. If you’ve read any books on process or project management you’ve probably heard of kaizen, the Japanese concept of continuous improvement of processes. It’s something that drives companies like Toyota to get better at everything they do and never accept anything as “complete”.

I’ve read about kaizen before, but I never really understood that it could mean any kind of improvement. I had it in my head that the process was about change all the time. It wasn’t until I sat down this year and analyzed what I was doing that I found I’m always trying to optimize what I do. It’s not about finding shortcuts for the sake of saving time. It’s about optimizing what I do to save effort and the investment of time. For me it’s not about spending 8 hours to write a script that will automate a one-time 30-minute task. It’s about breaking down the task, figuring out how many times I’ll do it, and deciding how to optimize the process to spend less time on it. If the answer is a script or an automation routine then I’m all for it. But the key is recognizing the kaizen process and putting a name to my behavior.
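That trade-off is easy to sanity-check with arithmetic. Here’s the back-of-the-envelope version I carry in my head, written out as a quick function (my own framing, not a formal kaizen tool):

```python
def automation_pays_off(build_hours, manual_minutes, runs):
    """True if scripting a task costs less time than repeating it by hand."""
    hours_saved = manual_minutes * runs / 60
    return build_hours < hours_saved

# The example above: 8 hours of scripting for a 30-minute task is a loss
# if you'll only ever do the task once...
print(automation_pays_off(8, 30, 1))    # False -- just do it by hand
# ...but a clear win once the task recurs, say weekly for a year.
print(automation_pays_off(8, 30, 52))   # True -- 26 hours saved for 8 spent
```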


Tom’s Take

2020 is going to be busy. Tech Field Day is going to be busy. I’m going to be at a lot of events checking out what’s going on and how to make new things happen. I’m also going to be writing a lot. And when you factor in my roles outside of work with Wood Badge, plus a high-adventure trip to Philmont, NM with my son and his scout troop, you can see I’m going to be quite occupied even when I’m not writing. But I’m not going to remove anything from my process. As I said above, I’m going to kaizen everything and fit it all in. That might mean having a couple of posts queued up when I’m in the back country or taking some extra time after dinner to write. But 2020 is going to be a big year of optimizing my workflows and improving in every way.

Fast Friday – Keeping Up With The Times

We’re at the end of the 2010s. It’s almost time to start making posts about 2020 and somehow working vision or eyesight into the theme so you can look just like everyone else. But I want to look back for a moment on how much things have changed for networking in the last ten years.

It’s true that networking wasn’t too exciting for most of the 2000s. Things got faster and more complicated. Nothing really got better except the bottom lines of people pushing bigger hardware. And that’s honestly how we liked it. Because the idea that we were all special people that needed to be at the top of our game to get things done resonated with us. We weren’t just mechanics. We were the automobile designers of the future!

But if there’s something that the mobile revolution of the late 2000s taught us, it was that operators don’t need to be programmers to enjoy using technology. Likewise, enterprise users don’t need to be CCIEs or VCDXs to make things work. That’s the real secret behind all of the advances in networking technology in the 2010s. We’re not making networking harder any more. We’re not adding complexity for the sake of making our lives more important.

The rapid pace of change that we’ve had over the last ten years is the driver for so many technologies that are changing the role of networking engineers. Automation, orchestration, and software-driven networking aren’t just fads. They’re necessities. That’s because of the focus on new features, not in spite of them. I can remember administering CallManager years ago and not realizing what half of the checkboxes on a line appearance did. That’s not helping things.

Complexity Closet

There are those that would say that what we’re doing is just hiding the complexity behind another layer of abstraction, a favorite saying of Russ White’s. I’d argue that we’re not hiding the complexity as much as we’re putting it back where it belongs: out of sight. We don’t need the added complexity for most operations. This flies in the face of what we expect networking engineers to know. If you want to be part of the club you have to know every LSA type, how they interact, and what happens during an OSPF DR election. That’s the barrier for entry, right?

Except not everyone needs to know that stuff. They need to know what it looks like when routing is down. More likely, they just need to recognize more basic stuff like DNS being down or something being wrong in the service provider. The odds are way better that something else is causing the problem somewhere outside of your control. And that means your skills are good for very little once you’ve figured out that the problem is somewhere you can’t help.

Hiding complexity behind summary screens or simple policy decisions isn’t bad. In fact, it tends to keep people from diving down rabbit holes when fixing the problems. How many times have we tried to figure out some complicated timer issue when it was really a client with a tenuous connection or a DNS issue? We want the problems to be complicated so we can flex our knowledge to others when in fact we should be applying Occam’s Razor much earlier in the process. Instead of trying to find the most complicated solution to the problem so we can justify our learning, we should instead try to make it as simple as possible to conserve that energy for a time when it’s really needed.


Tom’s Take

We need to leverage the tools that have been developed to make our lives easier in the 2020s instead of railing against them because they’re upsetting our view of how things should be. Maybe networking in 2010 needed complexity and syntax and command lines. But networking in 2022 might need automated scripts and telemetry to figure things out faster since there are ten times the moving parts. It’s like memorizing phone numbers. It works well when you only need to know seven or eight numbers with a few digits each. But when you need to know several hundred, each with ten or more digits, it’s impossible. Yet I still hear people complain about contact lists or phone books because “they used to be good at memorizing numbers”. Instead of focusing on what we used to be good at, let’s try to keep up with the times and be good at what we need to be good at now and in the future.

Stop SIS – Self-Inflicted Spam

Last month I ran across a great blog post by Jed Casey (@WaxTrax) about letting go of the digital hoard that he had slowly been collecting over the years. It’s not easy to declare bankruptcy because you’ve hit your limit of things that you can learn and process. Jed’s point in the article is that whatever he had been saving to get to someday was probably out of date or past its prime anyway. But it got me thinking about a little project that I’ve been working on over the past few months.

Incoming!

One of the easy ways to stay on top of things in the industry is to sign up for updates. A digest email here and a notification there about new posts or conversations is a great way to stay in-the-know about information or the latest, greatest thing. But before you know it you’re going to find yourself swamped with incoming emails and notifications.

I’ve noticed it quite a bit in my inbox this year. What was once a message that I would read to catch up became a message I would scan for content. That then became a message that I skipped past after reading the subject line, and eventually settled into something I deleted after seeing the sender. It’s not that the information contained within wasn’t important in some way. Instead, my capacity for processing email had settled into a mode where it took a lot to break me out of the pattern. Before I knew it, I was deleting dozens of messages a day simply because they were updates and digests that I didn’t have the time or mental capacity to process.

That’s when I realized that I had a problem. I needed to reduce the amount of email I was getting. But this wasn’t an issue like normal spam. Unsolicited Bulk Email is a huge issue for the Internet but the systems we have in place now do a good job of stopping it before it lands in my inbox. Instead, my bigger issue was with the tide of email that I had specifically chosen to sign up for. The newsletters and release updates and breaking news that was flooding my email client and competing for my attention on an hourly basis. It was too much.

Geronimo!

So what did I do?

  1. The first thing I did was stop. I stopped signing up for every newsletter and notification that interested me. Instead, I added them all to a list and let them sit for a week. If they still appealed to me after that week then I would sign up for them. Otherwise, they went into the garbage pile before I ever had a chance to start getting inundated by them.
  2. The next thing was to unsubscribe from every email that I got that was deleted without reading (the sketch after this list shows one way to find those senders). Sure, I could just keep deleting them. But that process still took my attention away from what I needed to be working on. Instead, I wanted to stop that at the source. Keeping it from being sent in the first place might only save the two seconds it takes to delete it out of my inbox, but those two seconds per email can really add up.
  3. Summaries are your friend. For every email I did actually read, I looked to see if there was a summary or digest option instead of constant updates. Did I really need to know instantly when someone had posted or replied? Nope. But finding out on a more set schedule, such as every other day or once a week, was a big improvement in the way that I could process information. I could dedicate time to reading a longer digest instead of scanning a notification that quickly blurred into the background.
  4. I turned off instant updates on my mail client for a time. I didn’t need to jump every time something came in. Instead, I set my client to check every 15 minutes. I knew there would be some lag in my replies in a lot of cases, but it honestly wasn’t that different from before. What changed is that I could deal with the email in bulk instead of trying to process messages one at a time as they arrived. And, ironically enough, the amount of time it took to deal with email went down significantly as I reduced the amount coming in. Eventually, I was able to set my mail client back to instant updates and keep a better pace of getting to the important emails as they came in.
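As an example of putting step 2 into practice, here’s a rough sketch that tallies the senders of messages you’ve left unread, which is a decent proxy for where the unsubscribe links should be aimed. The server, account, and date cutoff are placeholders, and it only reads the mailbox, but treat it as a starting point rather than a finished tool:

```python
import imaplib
from collections import Counter
from email.header import decode_header, make_header
from email.parser import BytesHeaderParser

M = imaplib.IMAP4_SSL("imap.example.com")      # placeholder server
M.login("user@example.com", "app-password")    # placeholder credentials
M.select("INBOX", readonly=True)               # read-only, just counting

# Messages that have sat unread for a while are unsubscribe candidates.
_, data = M.search(None, "UNSEEN", "BEFORE", "01-Jan-2020")
senders = Counter()
parser = BytesHeaderParser()
for num in data[0].split():
    _, msg = M.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM)])")
    headers = parser.parsebytes(msg[0][1])
    sender = str(make_header(decode_header(headers.get("From", ""))))
    senders[sender] += 1

for sender, count in senders.most_common(10):
    print(f"{count:4d}  {sender}")
M.logout()
```

The ten names at the top of that list are doing the most damage to your attention. Unsubscribe there first.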

Once I went through this process I was able to reduce the amount of email that I was getting blasted with. And being able to keep my head above water helped me deal with the stuff that still ended up coming in and needed to be stopped: the mailing lists that you get subscribed to out of nowhere, the vendor press release schedules that I needed to categorize and process. Having the ability to catch my breath helped immensely. Once I realized that I was the cause of my spam issues I was able to make headway.


Tom’s Take

Don’t do what I did. Don’t let yourself get to the point where I was. Your inbox isn’t a garbage pile for every random email or update. Don’t sign up for stuff you aren’t planning on reading. Audit your newsletters frequently to see how much you’re actually reading them. If you find yourself deleting stuff without ever opening it then you know you’ve reached a point where you need to stop and reassess what you’re trying to accomplish. If you aren’t even bothering to open things then you aren’t really staying on top of the game. Instead, make sure you have the attention span left to look at the things that matter. And stop inflicting spam on yourself.

Magical Mechanics

If you’re a fan of this blog, you’ve probably read my last post about the new SD-WAN magic quadrant that’s been making the rounds and generating discussion. Some people are smiling that this report places Cisco in an area other than leadership in the SD-WAN space. Others are decrying the report as being unfair and contradictory. I wanted to take another look at it given some new information and some additional thoughts on the results.

Fair and Square

The first thing I wanted to do is make sure that I was completely transparent with the way the Gartner Magic Quadrant (MQ) works. I have a very good idea thanks to a conversation with Andrew Lerner (@Fast_Lerner), who is the Research VP of Networking at Gartner. Andrew was nice enough to clarify my understanding of the MQ and accompanying documentation. I’ll quote him here to make sure I don’t get anything wrong:

In an MQ, we assess the overall vendors’ behavior and offering in the market. Product, service/support sales, marketing, innovation, etc. if a vendor has multiple products in a market and sells them regularly to the enterprise, they are part of the MQ assessment. Viable products are not “excluded”.

As you can see from Andrew’s explanation, the MQ takes into account all the aspects of a company. It’s not just a product. It’s the sales, marketing, and other aspects of the company that give the overall score. So how does Gartner evaluate the products and services themselves? That’s where their Critical Capabilities documents come into play. They are focused exclusively on products and services. They don’t take marketing or sales or anything else into account.

According to Andrew, when Gartner did their Critical Capabilities document on Cisco, they looked at Meraki MX and IOS-XE only. Viptela vEdge was not examined. So, the CC documents give us the Gartner overview of the technology behind the MQ analysis. While the CC documents are focused solely on the Meraki MX and IOS-XE SD-WAN technology, they are components of the overall analysis in the MQ that was published.

What does that all mean in the long run?

Axis and Allies

In order to break this down a bit further, let’s ignore the actual quadrants in the MQ for the moment. Instead, let’s think about this picture as a graph. One axis of the graph is “Ability to Execute,” or in other words can the company do what they say they’re going to do? The other axis is “Completeness of Vision.” This is a question of how the company has rounded out their understanding of the market forces and direction. I want to use this sample MQ with some labels thrown in to help my readers understand how each axis can affect the ranking a company gets:

So, let’s look at where Cisco was situated on those graphs. They are not in the upper right part of the graph, which is the “good” part according to what most people will tell you when they glance at it. Cisco was ranked almost directly in the middle of the graph. Why would that be?

Let’s look at the Execution axis. Why would Cisco have some issues with execution on SD-WAN? Well, the biggest one is probably the shift to IOS-XE and the issues with code quality. Almost everyone involved deploying IOS-XE has told me that Cisco had significant issues in the early releases. Daniel Dib (@DanielDibSwe) had a great conversation with me about the breakdown between the router code and the controller code a week ago. Here’s the first tweet in the chain:

So, there were issues that have been addressed. But is the code completely stable right now? I can’t say, since I haven’t deployed it. But ask around and see what the common wisdom is. I would be genuinely interested to hear how much better things have gotten in the last six months. But those code quality issues from months ago are going to be a concern in the report. And you can’t guarantee that every box is going to be running the latest code. Having issues with stable code is going to impact your ability to execute on your vision.

Now, let’s look at the Completeness of Vision axis. If you reference the above picture you’ll see that the Challengers square represents companies that don’t yet understand the market direction. Given that this was the location that Cisco was placed in (barely), let’s examine why that might be. Let’s start by asking “which Cisco product is best for my SD-WAN solution?” Do you know for sure? Which one are you going to be offered?

In my last post, I said that Cisco had been deemphasizing vEdge deployments in favor of IOS-XE. But I completely forgot about Meraki as an additional offering for SMBs and smaller deployments. Where does Meraki make the most sense? And how big does your network need to be before it outgrows a Meraki deployment? Are you chasing a feature? Or maybe you need a service that isn’t offered natively on the Meraki platform? All of these questions need to be answered when you look at what you’re going to do with SD-WAN.

The other companies that Cisco will tell you are their biggest competitors are probably VMware and Silver Peak. How many SD-WAN platforms do they sell? How about companies ranked closer to Cisco in the MQ like Citrix, CloudGenix, or Versa Networks? How many SD-WAN solutions do they offer? How about HPE, Juniper and Aryaka?

In almost every case, the answer is “one”. Each of these companies has settled on a single solution for SD-WAN or SD-Branch. They don’t split those deployments across different product lines. They may have different-sized boxes for things, but they all run the same common software. Can you integrate an IOS-XE SD-WAN appliance into a Meraki deployment? Can you take a Meraki MX and make it work with vEdge?

You may be starting to see that what’s lacking isn’t the completeness of Cisco’s SD-WAN vision but rather how they’re going to accomplish it. Rather than having one solution that can be scaled to fit all needs, Cisco is choosing to offer two different solutions for SMBs and enterprises. And if you count vEdge as a separate product from IOS-XE, as Cisco has suggested in some of their internal reports, then you have three products! I’m not saying that Cisco doesn’t have a vision. But it really looks like that vision is hazier than it should be.

If Cisco had a unified vision with stable code that was integrated up and down the stack, I have no doubt they would have been rated higher. When Cisco was deploying Viptela vEdge as their sole SD-WAN offering, it was much easier to figure out how everything was going to integrate together. But, just like all transitions, the devil is in the details here as Cisco tries to move to IOS-XE. Code quality is going to be a potential source of problems no matter what. But if you are staking your reputation on moving everyone to a single code base from a different, more stable one, you had better get it right quickly. Otherwise it’s going to get counted against you.


Tom’s Take

I really appreciate Andrew Lerner reaching out about my analysis of the MQ. I will admit I wasn’t exactly spot-on with the differences between the MQ and the Critical Capabilities documents. But the results are close. The CC analyzes Cisco’s big platforms and tells people how they work. The MQ takes everything as a whole and gives Cisco a ranking of where they stand in the market. Sales numbers aside, do you think Cisco should be a leader in the SD-WAN space? Do you feel that where they are today with IOS-XE and Meraki MX is a complete vision? If you do, then you likely won’t care about the MQ either way. But for those that have questions about execution and vision, maybe it’s time to figure out what might have caused Cisco to land right in the middle this time around.