Predictions As A Service

It’s getting close to the end of the year, which means it’s time once again for the December flood of posts predicting what’s coming in 2018. Long-time readers of my blog know that I don’t do these kinds of posts. My New Year’s Day posts are almost always introspective in nature and forward-looking from my own personal perspective. But I also get asked quite a bit to contribute to other posts about the future. And I want to tell you why I think the prediction business is a house of cards built on quicksand.

The Layups

It’s far too tempting in the prediction business to play it safe. Absent a ton of research, it’s just easier to toss out some not-so-bold predictions. For instance, here’s what I could say about 2018 right now:

  • Whitebox switching will grow in revenue.
  • Software will continue to transform networking.
  • Cisco is going to buy companies.

Those are 100% true. Even without having spent one day in 2018. They’re also things I didn’t need to tell you at all. You already knew them. They’re almost common sense at this point. If I needed to point out that Cisco is going to buy at least two companies next year, you are either very new to networking or you’ve been studying for your CCIE lab and haven’t seen the sun in eight months.

Safe predictions have a great success rate. But they say nothing. However, they are used quite a bit for the lovely marketing fodder we see everywhere. In three months, you could see a presentation from an SD-WAN vendor that says, “Industry analyst Tom Hollingsworth predicts that 2018 is going to be a big year for software networking.” It’s absolutely true. But I didn’t say SD-WAN. I didn’t name any specific vendors. So that prediction could be used by anyone for any purpose and I’d still be able to say in December 2018 that I was 100% right.

Playing it safe is the most useless kind of prediction there is. Because all you’re doing is reinforcing the status quo and offering up soundbites to people that like it that way.

Out On A Limb

The other kind of prediction that people love to get into is the crazy, far out bold prediction. These are the ones that get people gasping and following your every word to see if it pays off. But these predictions are prone to failure and distraction.

Let’s run another example. Here are four bold sample predictions for 2018:

  • Cisco will buy Arista.
  • VMware will cease to be a separate brand inside Dell.
  • Hackers will release a tool to compromise iPhones remotely.
  • HPE will go out of business.

Those predictions are more exciting! They name companies like Cisco and VMware and Apple. They have very bold statements like huge purchases or going out of business. But guess what? They’re completely made up. I have no insight or research that tells me anything even close to those being true or not.

However, those bold predictions just sit out there and fester. People point to them and say, “Tom thinks Cisco will buy Arista in 2018!” And no one will ever call me on the carpet if I’m wrong. If Cisco does end up buying Arista in 2020 or later, people will just say I was ahead of my time. If it never comes to pass, people will forget and just focus on my next bold prediction of VMware buying Cisco. It’s a hype train with no end.

And on the off chance that I do nail a big one, people are going to think I have the inside track. My little predictions will be more important. And if I hit half of my bold ones, I would probably start getting job offers from analyst firms and such. These people are the prediction masters extraordinaire. If they aren’t telling you something you already know, they’re pitching you something they have no idea about.

Apple has a cottage industry built around crazy predictions. Just look back to August to see how many crazy ideas were out there about the iPhone X. Fingerprint sensor under the glass? 3D rear camera? Even crazier stuff? All reported on as pseudo-fact and eaten up by the readers of “news” sites. Even the people who do a great job of prediction based on solid research missed a few key details in the final launch. It just goes to show that no one is 100% accurate in bold predictions.


Tom’s Take

I still do predictions for other people. Sometimes I try to make tongue-in-cheek ones for fun. Other times I try to be serious and do a little research. But I also think that figuring out what’s coming 5 years from now is a waste of my time. I’d rather try to figure out how to use what I have today and build that toward the future. I’d rather be a happy iPhone user than one of the people who predicted that Apple’s move into the mobile market would fail miserably. Because that’s a headline you’ll never live down.

I’d like to thank my friends at Network Collective for inspiring this post. Make sure you check out their video podcast!

An Opinion On Offense Against NAT

It’s been a long time since I’ve gotten to rant against Network Address Translation (NAT). At first, I had hoped that was because IPv6 transitions were happening and people were adopting it rapidly enough that NAT would eventually slide into the past alongside SAN and DOS. Alas, it appears that IPv6 adoption is getting better but still not great.

Geoff Huston, on the other hand, seems to think that NAT is a good thing. In a recent article, he took up the shield to defend NAT against those that believe it is an abomination. He rightfully pointed out that NAT has extended the life of the modern Internet and also correctly pointed out that the slow pace of IPv6 deployment was due in part to the lack of urgency of address depletion. Even with companies like Microsoft buying large sections of IP address space to fuel Azure, we’re still not quite at the point of the game when IP addresses are hard to come by.

So, with Mr. Huston taking up the shield, let me find my +5 Sword of NAT Slaying and try to point out a couple of issues in his defense.

Relationship Status: NAT’s…Complicated

The first point that Mr. Huston brings up in his article is that the modern Internet doesn’t resemble the one built by DARPA in the 70s and 80s. That’s very true. As more devices are added to the infrastructure, the simple packet switching concept goes away. We need to add hierarchy to the system to handle the millions of devices we have now. And if we add a couple billion more, we’re going to need even more structure.

Mr. Huston’s argument for NAT says that it creates a layer of abstraction that allows devices to be more mobile and not be tied to a specific address in one spot. That is especially important for things like mobile phones, which move between networks frequently. But instead of NAT providing a simple way to do this, NAT is increasing the complexity of the network by this abstraction.

When a device “roams” to a new network, whether it be cellular, wireless, wired, or otherwise, it is going to get a new address. If that address needs to be NATed for some reason, it’s going to create a new entry in a NAT state table somewhere. Any device behind a NAT that needs to talk to another device somewhere is going to create twice as many device entries as needed. Tracking those state tables is complicated. It takes memory and CPU power to do this. There is no ASIC that allows a device to do high-speed NATing. It has to be done by a general purpose CPU.

Adding to the complexity of NAT is the state we’re in today, where we overload addresses to get connectivity. It’s not just a matter of creating a singular one-to-one NAT. That type of translation isn’t what most people think of as NAT. Instead, they think of Port Address Translation (PAT), which allows hundreds or thousands of devices to share the same IP address. How many thousands? Well, as it turns out, about 65,000 give or take. You can only PAT devices if you have free ports to PAT them on, and there are only 65,536 ports available. So you hit a hard limit there.
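
To put a number on that limit, here’s a minimal sketch of a PAT table in Python. The addresses and port range are my own illustrative assumptions, not any vendor’s implementation; the takeaway is simply that state has to be tracked per flow, and the 16-bit port space caps how many flows one public address can carry.

```python
# Illustrative PAT table sketch -- not any vendor's implementation.
PUBLIC_IP = "203.0.113.1"                 # the one outside address being shared
free_ports = iter(range(1024, 65536))     # roughly 64,500 usable outside ports

nat_table = {}  # (inside_ip, inside_port) -> (PUBLIC_IP, outside_port)

def translate(inside_ip, inside_port):
    """Allocate (or reuse) an outside port for an inside flow."""
    key = (inside_ip, inside_port)
    if key in nat_table:
        return nat_table[key]             # existing state entry, reuse it
    try:
        outside_port = next(free_ports)   # claim the next free port
    except StopIteration:
        raise RuntimeError("PAT pool exhausted: no free ports left")
    nat_table[key] = (PUBLIC_IP, outside_port)
    return nat_table[key]

print(translate("10.0.0.5", 51000))       # ('203.0.113.1', 1024)
print(translate("10.0.0.6", 51000))       # ('203.0.113.1', 1025)
print(len(nat_table), "state entries tracked so far")
```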

Mr. Huston talks in his article about extending the number of bits that can be used for NAT to increase the number of hosts that can be successfully NATed. That’s going to explode the tables of the NATing device and cause traffic to slow considerably if there are hundreds of thousands of IP translations going on. Mr. Huston argues that since the Internet is full of “middle boxes” anyway that are doing packet inspection and getting in the way of true end-to-end communications that we should utilize them and provide more space for NAT to occur instead of implementing IPv6 as an addressing space.

I’ll be the first to admit that chopping the IPv6 address space right in the middle to allow MAC addresses to auto-configure might not have been the best decision. But back in the 90s, before DHCPv6 existed, it was a great idea in theory. And yes, assigning a /48 to a network does waste quite a bit of IP space. However, it does a great job of shrinking the size of the routing table, since that network can be summarized a lot better than having a bunch of /64 host routes floating around. This “waste” echoes the argument for and against using a /64 for a point-to-point link. If you’re worried about wasting several thousand addresses out of a practically limitless pool, there might be other solutions you should look at instead.
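
As a quick illustration of the summarization point, here’s a sketch using Python’s ipaddress module. The prefix comes from the IPv6 documentation range and is just an example.

```python
import ipaddress

# One site gets a /48; every LAN inside it gets a /64.
site = ipaddress.ip_network("2001:db8:abcd::/48")   # documentation prefix, example only

# A /48 holds 2^(64-48) = 65,536 possible /64 segments...
print(2 ** (64 - site.prefixlen))                   # 65536

# ...and any one of them is covered by the single /48 route the rest of the
# network needs to carry, instead of thousands of longer prefixes.
lan = ipaddress.ip_network("2001:db8:abcd:42::/64")
print(lan.subnet_of(site))                          # True
```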

Say My Name

One of the points that gets buried in the article, and that might shed some light on this defense of NAT, is Mr. Huston’s championing of Named Data Networking. The concept of NDN is that everything on the Internet should stop being referred to by an address and instead should be tagged with a name. Then, when you want to look for a specific thing, you send a packet with that name and the Internet routes your packet to the thing you want. You then set up a communication between you and the data source. Sounds simple, right?

If you’re following along at home, this also sounds suspiciously like object storage. Instead of a piece of data living on a LUN or other SAN construct, we make every piece of data an object of a larger system and index them for easy retrieval. This idea works wonders for cloud providers, where object storage provides an overlay that hides the underlying infrastructure.

NDN is a great idea in theory. According to the Wikipedia article, address space is unbounded because you just keep coming up with new names for things. And since you’re using a name and not an address, you don’t have to NAT anything. That last point kind of blows up Mr. Huston’s defense of NAT in favor of NDN, right?

One question I have makes me go back to the object storage model and how it relates to NDN. In an object store, every piece of data has an Object ID, usually a UUID, which is really just a big 128-bit number. We do this because, as it turns out, computers are horrible at finding things by name. We need to convert those names into numbers because computers still only understand zeros and ones at their most basic level. So, if we’re going to convert those names to some kind of numeric form anyway, why should we completely get rid of addresses? I mean, if we can find a huge address space that allows us to enumerate resources like an object store, we could duplicate a lot of NDN today, right? And, for the sake of argument, what if that huge address space was already based on hexadecimal?
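
Just to make that rhetorical question concrete, here’s a hypothetical sketch that hashes a content name into a 128-bit number and prints it in hexadecimal, IPv6-style notation. This is my own illustration of “names become numbers anyway,” not how NDN or any object store actually assigns identifiers.

```python
import hashlib
import ipaddress

def name_to_id(name):
    """Map an arbitrary content name to a 128-bit identifier, shown as hex."""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    return ipaddress.IPv6Address(int.from_bytes(digest[:16], "big"))

# A name goes in, a big hexadecimal number comes out.
print(name_to_id("movies/wrath-of-khan/1080p"))
print(name_to_id("movies/search-for-spock/1080p"))
```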

Hello, Is It Me URLooking For?

To put this in a slightly different perspective, let’s look at the situation with phone numbers. In the US, we’ve had an explosion of mobile phones and other devices that has forced us to add new area codes to cover all the numbers being added. These area codes are usually geographically specific. Sometimes they’re specific to one city, like 212 for New York. Other times they can cover a whole state or a portion of one, like 580 does for Oklahoma.

It would be a whole lot easier for us to just refer to people by name instead of adding new numbers, right? I mean, we already do that in our mobile phones. We have a contact that has a phone number and an email address. If we want to contact John Smith, we look up the John Smith we want and choose our contact preference. We can call, email, or send a message through text or other communications method.

What address we use depends on our communication method. Calls use a phone number. If you’re on an iPhone like me, you can text via phone or AppleID (email address). You can also set up a video call the same way. Each of these methods of contact uses a different address for the name.
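
In code form, the contact-card lookup above is just a name resolving to a different address depending on the method. The entries below are made up.

```python
# One name, several addresses -- which one gets used depends on how you want
# to reach the person. All entries are fictional.
contacts = {
    "John Smith": {
        "call":  "+1-405-555-0142",
        "text":  "+1-405-555-0142",
        "email": "john.smith@example.com",
        "video": "john.smith@example.com",   # AppleID-style identifier
    }
}

def resolve(name, method):
    """Pick the right 'address' for a name based on the communication method."""
    return contacts[name][method]

print(resolve("John Smith", "call"))    # an address in the phone number space
print(resolve("John Smith", "video"))   # an address in a completely different space
```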

With Named Data Networking, are we going to have different addresses for each resource? If we’re doing away with addresses, how are we going to name things? Is there a name registry? Are we going to be allowed to name things whatever we want? Think about all the names of videos on YouTube if you want an idea of the nightmare that might be. And if you add some kind of rigid structure into the mix, you’re going to have to keep a database of names somewhere. As we’ve found with DNS, having a repository of information in a central place would make an awfully tempting target. Not to mention causing issues if it ever goes offline for some reason.


Tom’s Take

I don’t think there’s anything that could be said to defend NAT in my eyes. It’s the duct tape temporary solution that never seems to go away completely. Even with depletion and IPv6 adoption, NAT is still getting people riled up and ready to say that it’s the best option in a world of imperfect solutions. However, I think that IPv6 is the best way forward, with more room to grow and the opportunity to create unique IDs for objects in your network. Even if we end up going down the road of Named Data Networking, I don’t think NAT is the solution you want to go with in the long run. Drive a sword through the heart of NAT and let it die.

VMware and VeloCloud: A Hedge Against Hyperconvergence?

VMware announced on Thursday that they are buying VeloCloud. This was a big move in the market that immediately set off a huge discussion about the implications. I had originally thought AT&T would buy VeloCloud based on their past relationship, but the acquisition of Vyatta from Brocade over the summer should have been a hint that wasn’t going to happen. Instead, VMware swooped in and picked up the company for an undisclosed amount.

The conversations have been going wild so far. Everyone wants to know how this is going to affect the relationship with Cisco, especially given that Cisco put money into VeloCloud in both 2016 and 2017. Given Cisco’s acquisition of Viptela earlier this year, it’s easy to see that these two companies might find themselves competing for market share in the SD-WAN space. However, I think that this is actually a different play from VMware. One that’s striking back at hyperconverged vendors.

Adding The Value

If you look at the marketing coming out of hyperconvergence vendors right now, you’ll see there’s a lot of discussion around platform. Fast storage, small footprints, and the ability to deploy anywhere. Hyperconverged solutions are also starting to focus on the hot new trends in compute, like containers. Along the way this means that traditional workloads that run on VMware ESX hypervisors aren’t getting the spotlight they once did.

In fact, the leading hyperconvergence vendor, Nutanix, has been aggressively selling their own hypervisor, Acropolis, as a competitor to VMware. They tout new features and easy configuration as the major reasons to use Acropolis over ESX. The push by Nutanix is to get their customers off of ESX and onto Acropolis to get a share of the budget those companies are currently paying to VMware.

For VMware, it’s a tough sell to keep their customers on ESX. There’s a very big ecosystem of software out there that runs on ESX, but if you can replicate a large portion of it natively, like Acropolis and other hypervisors do, there’s not much of a reason to stick with ESX. And if the VMware solution is more expensive over time, you will find yourself choosing the cheaper alternative when the negotiations come up for renewal.

For VMware NSX, it’s an even harder road. Most of the organizations that I’ve seen deploying hyperconverged solutions are not huge enterprises with massive centralized data centers. Instead, they are the kind of small-to-medium businesses that need some of those functions but are very budget conscious. They’re also very geographically diverse, with smaller branch offices taking the place of a few massive headquarters locations. While NSX has some advantages for these companies, it’s not the best fit for them. NSX works optimally in a data center with high-speed links and a well-built underlay network.

vWAN with VeloCloud

So how is VeloCloud going to play into this? VeloCloud already has a lot of advantages that make them a great complement to VMware’s model. They have built-in multi-tenancy. Their service delivery is virtualized. They were already looking to move toward service providers as their primary market, both network service providers and managed service providers. This sounds like their interests are aligning quite well with VMware already.

The key advantage for VMware with VeloCloud is how it will allow NSX to extend into the branch. Remember how I said that NSX loves an environment with a stable underlay? That’s what VeloCloud can deliver. A stable, encrypted VPN underlay. An underlay that can be managed from one central location, or in the future, perhaps even a vCenter plugin. That gives VeloCloud a huge advantage to build the underlay to get connectivity between branches.

Now, with an underlay built out, NSX can be pushed down into the branch. Branches can now use all the great features of NSX like analytics, some of which will be bolstered by VeloCloud, as well as microsegmentation and other features heretofore unseen in the branch. The feature set of a large headquarters data center is now available in a smaller package for remote branches. That’s a huge advantage for organizations that need those features in places that don’t have data centers.

And the pitch against using other hypervisors with your hyperconverged solution? NSX works best with ESX. Now you can argue that the real value in keeping ESX in your remote branches isn’t cost, or features that you may one day hope to use if your WAN connection gets upgraded to ludicrous speed. Instead, VeloCloud can be deployed between your HQ or main office and your remote site to bring those NSX functions down into your environment over a secure tunnel.

While this does compete a bit with Cisco from a delivery standpoint, it still doesn’t affect them with complete overlap. In this scenario, VeloCloud is a service delivery platform for NSX and not a piece of hardware at the edge. Absent VeloCloud, this kind of setup could still be replicated with a Cisco Viptela box running the underlay and NSX riding on top in the overlay. But I think that the market that VMware is going after is going to be building this from the ground up with VMware solutions from the start.


Tom’s Take

Not every issue is “Us vs. Them”. I get that VMware and Cisco seem to be spending more time moving closer together on the networking side of things. SD-WAN is a technology that was inevitably going to bring Cisco into conflict with someone. The third generation of SD-WAN vendors are really companies that didn’t have a proper offering buying up all the first-generation startups. Viptela and VeloCloud are now off the market, and they’ll soon be integral parts of their respective parents’ strategies going forward. Whether VeloCloud is focused on enabling cloud connectivity for VMware or retaking the branch from the hyperconverged vendors is going to play out in the next few months. But instead of focusing on conflict with anyone else, VeloCloud should be judged by the value it brings to VMware in the near term.

Devaluing Data Exposures

I had a great time this week recording the first episode of a new series with my co-worker Rich Stroffolino. The Gestalt IT Rundown is hopefully the start of some fun news stories with a hint of snark and humor thrown in.

One of the things I discussed in this episode was my belief that no data is truly secure any more. Thanks to recent attacks like WannaCry and Bad Rabbit and the rise of other state-sponsored hacking and malware attacks, I’m totally behind the idea that soon everyone will know everything about me and there’s nothing that anyone can do about it.

Just Pick Up The Phone

Personal data is important. Some pieces of personal data are sacrificed for the greater good. Anyone who is in IT or works in an area where they deal with spam emails and robocalls has probably paused for a moment before putting contact information down on a form. I have an old Hotmail address I use to catch spam if I’m relatively certain that something looks shady. I give out my home phone number freely because I never answer it. These pieces of personal data have been sacrificed in order to provide me a modicum of privacy.

But what about other things that we guard jealously? How about our mobile phone numbers? When I worked for a VAR, that was the single most closely guarded piece of information I owned. No one, aside from my coworkers, had my mobile number. In part, it’s because I wanted to make sure that it got used properly. But also because I knew that as soon as one person at a customer site had it, soon everyone would. I would be spending my time answering phone calls instead of working on tickets.

That’s the world we live in today. So many pieces of information about us are being stored. Our Social Security Number, which has truthfully been misappropriated as an identification number. US Driver’s Licenses, which are also used as identification. Passport numbers, credit ratings, mother’s maiden name (which is very handy for opening accounts in your name). The list could be a blog post in and of itself. But why is all of this data being stored?

Data Is The New Oil

The first time I heard someone in a keynote use the phrase “big data is the new oil”, I almost puked. Not because it’s a platitude that underscores the value of data. I lost it because I know what people do with vital resources like oil, gold, and diamonds. They hoard them. Stockpiling the resources until they can be refined. Until every ounce of value can be extracted. Then the shell is discarded and left to become a hazard.

Don’t believe me? I live in a state that is legally required to run radio and television advertisements telling children not to play around old oilfield equipment that hasn’t been operational in decades. It’s cheaper for them to buy commercials than it is to clean up their mess. And that precious resource? It’s old news. Companies that extract resources just move on to the next easy source instead of cleaning up their leftovers.

Why does that matter to you? Think about all the pieces of data that are stored somewhere that could possibly leak out about you. Phone numbers, date of birth, names of children or spouses. And those are the easy ones. Imagine how many places your SSN is currently stored. Now, imagine half of those companies go out of business in the next three years. What happens to your data then? You’d better believe that it’s not going to get destroyed or encrypted in such a way as to prevent exposure. It’s going to lie fallow on some forgotten server until someone finds it and plunders it. Your only real hope is that it was being stored on a cloud provider that destroys the storage buckets after the bill isn’t paid for six months.

Devaluing Data

How do we fix all this? Can it be fixed? Well, it might be doable, but it’s not going to be fun, cheap, or easy. It all starts by making discrete pieces of data less valuable. An SSN is worthless without a name attached to it, for instance. If all I have are 9 random digits with no context, I can’t tell what they’re supposed to be. The value only comes when those 9 digits can be matched to a name.

We’ve got to stop using the SSN as a unique identifier for a person. It was never designed for that purpose. In fact, storing SSNs at all is a really bad idea. Users should be assigned a new, random ID number when creating an account or filling out a form. The SSN shouldn’t be stored unless absolutely necessary. And when it is, it should be treated like a nuclear launch code. It should take special authority to query it, and the database that holds it shouldn’t be directly attached to anything else.
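
Here’s a minimal sketch of that random-ID idea, with hypothetical table and function names. The application only ever stores and passes around an opaque token; the SSN itself sits in a separate vault that nothing else joins against.

```python
import secrets

ssn_vault = {}   # token -> SSN; isolated store with tightly controlled access
accounts  = {}   # the application data never contains the SSN itself

def enroll(name, ssn):
    """Create an account keyed by a random token rather than the SSN."""
    token = secrets.token_hex(16)        # 128-bit random identifier
    ssn_vault[token] = ssn               # only the vault holds the mapping
    accounts[token] = {"name": name}     # everything else references the token
    return token

token = enroll("Jane Doe", "078-05-1120")   # famous sample SSN, not a real one
print(accounts[token])                      # {'name': 'Jane Doe'} -- no SSN here
```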

Critical data should be stored in a vault that can only be accessed in certain ways and never exposed. A prime example is the Secure Enclave in an iPhone. The enclave, when used for TouchID or FaceID, stores your fingerprints and your face map. Pretty important stuff, yes? However, even as biometric ID systems become more prevalent, there isn’t any way to extract that data from the enclave. It’s stored in such a way that it can only be queried in a specific manner, with a yes/no result returned from the query. If you stole my iPhone tomorrow, there’s no way for you to reconstruct my fingerprints from it. That’s the template we need to use going forward to protect our data.
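
The enclave pattern boils down to “write once, never read back, answer only yes or no.” Here’s a rough sketch of that interface; using a salted hash is my own simplification, since real biometric matching is fuzzy and happens inside dedicated hardware.

```python
import hashlib
import hmac
import os

class Enclave:
    """Holds a sensitive template that can be matched against but never read."""

    def __init__(self):
        self._salt = os.urandom(16)
        self._template = None            # write-only from the outside

    def enroll(self, secret: bytes):
        self._template = hashlib.sha256(self._salt + secret).digest()

    def matches(self, candidate: bytes) -> bool:
        """The only query allowed: does this candidate match? Yes or no."""
        probe = hashlib.sha256(self._salt + candidate).digest()
        return hmac.compare_digest(probe, self._template)

enclave = Enclave()
enclave.enroll(b"fingerprint-template")
print(enclave.matches(b"fingerprint-template"))  # True
print(enclave.matches(b"someone-else"))          # False
# Note there is no method that returns the stored template itself.
```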


Tom’s Take

I’m getting tired of being told that my data is being spread to the four winds thanks to it lying around waiting to be used for both legitimate and nefarious purposes. We can’t build fences high enough around critical data to keep it from being broken into. We can’t keep people out, so we need to start making the data less valuable. Instead of keeping it all together where it can be reconstructed into something of immense value, we need to make it hard to get all the pieces together at any one time. That means it’s going to be tough for us to build systems that put it all together too. But wouldn’t you rather spend your time solving a fun problem like that rather than making phone calls telling people your SSN got exposed on the open market?

Scotty Isn’t DevOps

I was listening to the most recent episode of our Gestalt IT On-Premise IT Roundtable, where Stephen Foskett mentioned one of our first episodes, in which we discussed whether or not DevOps was a disaster, or as I put it, a “dumpster fire”. Take a listen here:

Around 13 minutes in, I have an exchange with Nigel Poulton where I mention that the ultimate operations guy is Chief Engineer Montgomery Scott of the USS Enterprise. Nigel countered that Scotty was the epitome of the DevOps mentality because his crazy ideas are what kept the Enterprise going. In this post, I hope to show that not only was Scott not a DevOps person, he should be considered the antithesis of DevOps.

Engineering As Operations

In the fictional biography of Mr. Scott, all he ever wanted to do was be an engineer. He begrudgingly took promotions but found ways to get back to the engine room on the Enterprise. He liked working on starships. He hated building them. His time working on the transwarp drive of the USS Excelsior proved that in the third Star Trek film.

Scotty wasn’t developing new ideas to implement on the Enterprise. He didn’t spend his time figuring out how to make the warp engines run at increased efficiency. He didn’t experiment with the shields or the phasers. Most of his “miraculous” moments didn’t come from deploying new features to the Enterprise. Instead, they were the fruits of his ability to streamline operations to combat unforeseen circumstances.

In The Apple, Scott was forced to figure out a way to get the antimatter system back online after it was drained by an unseen force. Everything he did in the episode was focused on restoring functions to the Enterprise. This wasn’t the result of a failed upgrade or a continuous deployment scenario. The operation of his ship was impacted. In Is There in Truth No Beauty?, Mr. Scott even challenges the designer of the Enterprise’s engines, insisting the man can’t handle them as well as Scotty can. Mr. Scott was boasting that he was better at operations than a developer. Plain and simple.

In the first Star Trek movie, Admiral Kirk is pushing Scotty to get the Enterprise ready to depart within hours after an eighteen-month refit. Scotty keeps pushing back that they need more time to work out the new systems and go on a shakedown cruise. Does that sound like a person that wants to do CI/CD to a starship? Or does it sound more like the caution of an operations person wanting to make sure patches are deployed in a controlled way? Every time someone in the series or movies suggested doing major upgrades or redesigns to the Enterprise, Scotty always warned against doing it in the field unless absolutely necessary.

Montgomery Scott isn’t the King of DevOps. He’s a poster child for simple operations. Keep the systems running. Deal with problems as they arise. Make changes only if necessary. And don’t monkey with the systems! These are the tried-and-true refrains of a person that knows that his expertise isn’t in building things but in making them run.

Engineering As DevOps

That’s not to say that Star Trek doesn’t have DevOps engineers. The Enterprise-D had two of the best examples of DevOps that I’ve ever seen – Geordi LaForge and Data. These two operations officers spent most of their time trying new things with the Enterprise. And more than a few crises arose because of their development aspirations.

LaForge and Data were constantly experimenting on the Enterprise in an attempt to make it run better. Given that the mission of the Enterprise-D did not have the same five-year limit as the original, they were expected to keep the technology on the Enterprise current while deep in space. However, their experiments often led to problems. Destabilizing the warp core, causing shield harmonics failures, and even infecting the Enterprise’s computer with viruses were somewhat commonplace during Geordi’s tenure as Chief Engineer.

Commander Data was also rather fond of finding out about new technology that was being developed and trying to integrate it into the Enterprise’s systems. Many times, he mentioned finding out about something being developed at the Daystrom Institute and wanting to see if it would work for them. Which leads me to think that the Daystrom Institute is the Star Trek version of Stack Overflow – copy some things you think will make everything better and hope it doesn’t blow up because you didn’t understand it.

Even if it was a plot convenience device, it felt like the Enterprise was often caught in the middle of applying a patch or an upgrade right when the action started. An exploding star or an enemy vessel always waited until just the right moment to put the Enterprise in harm’s way. Even Starfleet seemed to decide the Enterprise was the only vessel that could help after the DevOps team took the warp core offline to make it run 0.1% faster.

Perhaps instead of pushing forward with an aggressive DevOps mentality for the flagship of the Federation, Geordi and Data would have done better to take lessons from Mr. Scott: wait for appropriate windows to make changes and upgrades, and quit tinkering with their ship so often that it felt like it was being held together by duct tape and hope.


Tom’s Take

Despite being fictional characters, Scotty, Geordi, and Data all represent different aspects of the technology we look at today. Scotty is the tried-and-true operations person. Geordi and Data are leading the charge to keep the technology fresh. Each of them has their strong points, but it’s hard not to see Scotty as a bastion of the simple operations mentality. Even when they all met together in Relics, Scotty was thinking more about making things work and less about making them fast or pretty or efficient. I think the people pushing the DevOps mentality would do well to take a seat and listen to the venerable chief engineer of the original Enterprise.

More Accurate IT Acronyms

IT is flooded with acronyms. It takes a third of our working life to figure out what they all mean. Protocols aren’t any easier to figure out if it’s just a string of three or four letters that look vaguely like a word. Which, by the way, you should never pronounce.

But what if the acronyms of our favorite protocols didn’t describe what the designers wanted but instead described what they actually do?

  • Sporadic Network Mangling Protocol

  • Obscurity Sends Packets Flying

  • Expensive Invention Gets Routers Puzzled

  • Vexing Router Firmware

  • Really Intensive Protocol

  • Someone Doesn’t Worry About Networking

  • Somewhat Quixotic Language

  • Blame It oN DNS

  • Cisco’s Universal Call Misdirector

  • Some Mail’s Thrown Places

  • Mangles Packets, Looks Silly

  • Amazingly Convoluted Lists

  • ImProperly SECured

  • May Push Lingering Sanity To Expire

Are there any other ones you can think of? Leave them in the comments.

Back In The Saddle Of A Horse Of A Different Color

I’ve been asked a few times in the past year if I missed being behind a CLI screen or if I ever got a hankering to configure some networking gear. The answer is a guarded “yes”, but not for the reason that you think.

Type Casting

CCIEs are keyboard jockeys. Well, the R&S folks are for sure. Every exam has quirks, but the R&S folks have quirky QWERTY keyboard madness. We spend a lot of time not just learning commands but learning how to input them quickly without typos. So we spend a lot of time with keys and a lot less time with the mouse poking around in a GUI.

However, the trend in networking has been to move away from these kinds of input methods. Take the new Aruba 8400, for instance. The ArubaOS-CX platform that runs it seems to have been built to require the least amount of keyboard input possible. The whole system runs with an API backend and presents a GUI that is a series of API calls. There is a CLI, but anything that you can do there can easily be replicated elsewhere by some other function.

Why would a company do this? To eliminate wasted effort. Think to yourself how many times you’ve typed the same series of commands into a switch. VLAN configuration, vty configs, PortFast settings. The list goes on and on. Most of us even have some kind of notepad that we keep the skeleton configs in so we can paste them into a console port to get a switch up and running quickly. That’s what Puppet was designed to replace!

By using APIs and other input methods, Aruba and other companies are hoping that we can build tools that either accept the minimum input necessary to configure switches or eliminate a large portion of the retyping needed to build them in the first place. It’s not the first command you type into a switch that kills you. It’s the 45th time you paste the command in. It’s the 68th time you get bored typing the same set of arguments from a remote terminal and accidentally mess one up in a way that requires a physical presence on site to reset your mistake.
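
As a sketch of what that looks like in practice, here’s a loop that pushes one VLAN definition to a list of switches over a REST API. The endpoint, payload, and token handling are hypothetical, not the actual ArubaOS-CX (or any vendor’s) schema; the point is that the repetitive typing happens zero times instead of dozens.

```python
import requests   # third-party HTTP library

SWITCHES = ["10.1.1.10", "10.1.1.11", "10.1.1.12"]   # inventory, not hand-typed
VLAN = {"id": 42, "name": "USERS"}                   # defined once

def push_vlan(switch, vlan, token):
    """POST one VLAN definition to a (hypothetical) switch REST endpoint."""
    resp = requests.post(
        f"https://{switch}/api/v1/vlans",            # assumed URL, not a real schema
        json=vlan,
        headers={"Authorization": f"Bearer {token}"},
        verify=False,                                # lab sketch only; use real certs
        timeout=10,
    )
    resp.raise_for_status()

for switch in SWITCHES:
    push_vlan(switch, VLAN, token="REPLACE_ME")
    print(f"VLAN {VLAN['id']} pushed to {switch}")
```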

Typing is boring, error prone, and costs significant time for little gain. Building scripts, programs, and platforms that take care of all that messy input for us makes us more productive. But it also changes the way we look at systems.

Bird’s Eye Views

The other reason why my fondness for keyboard jockeying isn’t as great as it could be is because of the way that my perspective has shifted thanks to the new aspects of networking technology that I focus on. I tell people that I’m less of an engineer now and more of an architect. I see how the technologies fit together. I see why they need to complement each other. I may not be able to configure a virtual link without documentation or turn up a storage LUN like I used to, but I understand why flash SSDs are important and how APIs are going to change things.

This goes all the way back to my conversations at VMunderground years ago about shifting the focus of networking and where people will go. You remember? The “ditch digger” discussion?

This is more true now than ever before. There are always going to be people racking and stacking. Or doing basic types of configuration. These folks are usually trained with basic knowledge of their task but little vision outside of their job role. Networking apprentices or journeymen, as the case may be. Maybe one out of ten or one out of twenty of them are going to want to move up to something bigger or better.

But for the people that read blogs like this regularly, the shift has happened. We don’t think in single switches or routers. We don’t worry about a single access point in a closet. We think in terms of systems. We configure routing protocols across multiple systems. We don’t worry about a single-port VLAN issue. Instead, we’re trying to configure layer 2 DCI extensions or bring racks and pods online at the same time. Our visibility matters more than our typing skills.

That’s why the next wave of devices like the Aruba 8400 and the Software Defined Access things coming from Cisco are more important than simple checkboxes on a feature sheet. They remove the visibility of protocols and products and instead give us platforms that need to be configured for maximum effect. The gap between the people that “rack and stack” and those that build the architecture that runs the organization has grown, but only because the middle ground of administration is changing so fast that it’s tough to keep up.


Tom’s Take

If I were to change jobs tomorrow I’m sure that I could get back in the saddle with a couple of weeks of hard study. But the question I keep asking myself is “Why would I want to?” I’ve learned that my value doesn’t come from my typing speed or my encyclopedia of networking command arguments any more. It comes from a greater knowledge of making networking work better and integrate more tightly into the organization. I’m a resource, not a reactionary. And so when I look to what I would end up doing in a new role I see myself learning more and more about Python and automation and less about what new features were added in the latest OSPF release on Cisco IOS. Because knowing how to integrate technology at a high level is more valuable to everyone than just knowing the commands to type to turn the lights on.