Scotty Isn’t DevOps

I was listening to the most recent episode of our Gestalt IT On-Premise IT Roundtable, where Stephen Foskett mentioned one of our first episodes, in which we discussed whether or not DevOps was a disaster, or, as I put it, a “dumpster fire”. Take a listen here:

Around 13 minutes in, I have an exchange with Nigel Poulton where I mention that the ultimate operations guy is Chief Engineer Montgomery Scott of the USS Enterprise. Nigel countered that Scotty was the epitome of the DevOps mentality because his crazy ideas are what kept the Enterprise going. In this post, I hope to show that not only was Scott not a DevOps person, he should be considered the antithesis of DevOps.

Engineering As Operations

In the fictional biography of Mr. Scott, all he ever wanted to do was be an engineer. He begrudgingly took promotions but always found ways to get back to the engine room of the Enterprise. He liked working on starships. He hated building them. His time working on the transwarp drive of the USS Excelsior in the third Star Trek film proved that.

Scotty wasn’t developing new ideas to implement on the Enterprise. He didn’t spend his time figuring out how to make the warp engines run at increased efficiency. He didn’t experiment with the shields or the phasers. Most of his “miraculous” moments didn’t come from deploying new features to the Enterprise. Instead, they were the fruits of his ability to streamline operations to combat unforeseen circumstances.

In The Apple, Scott was forced to figure out a way to get the antimatter system back online after it was drained by an unseen force. Everything he did in that episode was focused on restoring functions to the Enterprise. This wasn’t the result of a failed upgrade or a continuous deployment scenario. The operation of his ship was impacted. In Is There in Truth No Beauty?, Mr. Scott even challenges the designer of the Enterprise’s engines, boasting that the man can’t handle them as well as Scotty can. Mr. Scott was claiming to be better at operations than a developer. Plain and simple.

In the first Star Trek movie, Admiral Kirk is pushing Scotty to get the Enterprise ready to depart in a matter of hours after an eighteen-month refit. Scotty keeps pushing back that they need more time to work out the new systems and go on a shakedown cruise. Does that sound like a person that wants to do CI/CD to a starship? Or does it sound more like the caution of an operations person wanting to make sure patches are deployed in a controlled way? Every time someone in the series or movies suggested doing major upgrades or redesigns to the Enterprise, Scotty warned against doing it in the field unless absolutely necessary.

Montgomery Scott isn’t the King of DevOps. He’s a poster child for simple operations. Keep the systems running. Deal with problems as they arise. Make changes only if necessary. And don’t monkey with the systems! These are the tried-and-true refrains of a person that knows that his expertise isn’t in building things but in making them run.

Engineering as DevOps

That’s not to say that Star Trek doesn’t have DevOps engineers. The Enterprise-D had two of the best examples of DevOps that I’ve ever seen – Geordi LaForge and Data. These two operations officers spent most of their time trying new things with the Enterprise. And more than a few crises arose because of their development aspirations.

LaForge and Data were constantly experimenting on the Enterprise in an attempt to make it run better. Given that the mission of the Enterprise-D did not have the same five-year limit as the original, they were expected to keep the ship’s technology current while deployed in space. However, their experiments often led to problems. Destabilizing the warp core, causing shield harmonics failures, and even infecting the Enterprise’s computer with viruses were somewhat commonplace during Geordi’s tenure as Chief Engineer.

Commander Data was also rather fond of finding out about new technology that was being developed and trying to integrate it into the Enterprise’s systems. Many times, he mentioned finding out about something being developed at the Daystrom Institute and wanting to see if it would work for them. Which leads me to think that the Daystrom Institute is the Star Trek version of Stack Overflow – copy some things you think will make everything better and hope it doesn’t blow up because you didn’t understand it.

Even if it was a plot convenience device, it felt like the Enterprise was often caught in the middle of applying a patch or an upgrade right when the action started. An exploding star or an enemy vessel always waited until just the right moment to put the Enterprise in harm’s way. Even Starfleet seemed to decide the Enterprise was the only vessel that could help after the DevOps team took the warp core offline to make it run 0.1% faster.

Perhaps, instead of pushing forward with an aggressive DevOps mentality for the flagship of the Federation, Geordi and Data would have done better to take lessons from Mr. Scott: wait for appropriate windows to make changes and upgrades, and quit tinkering with the ship so often that it felt like it was being held together by duct tape and hope.

Tom’s Take

Despite being fictional characters, Scotty, Geordi, and Data all represent different aspects of the technology we look at today. Scotty is the tried-and-true operations person. Geordi and Data are leading the charge to keep the technology fresh. Each of them has their strong points, but it’s hard to overlook Scotty as a bastion of the simple operations mentality. Even when they all met in Relics, Scotty was thinking more about making things work and less about making them fast or pretty or efficient. I think the push toward the DevOps mentality would do well to take a seat and listen to the venerable chief engineer of the original Enterprise.


More Accurate IT Acronyms

IT is flooded with acronyms. It takes a third of our working life to figure out what they all mean. Protocols aren’t any easier to figure out if it’s just a string of three or four letters that look vaguely like a word. Which, by the way, you should never pronounce.

But what if the acronyms of our favorite protocols didn’t describe what the designers wanted but instead described what they actually do?

  • Sporadic Network Mangling Protocol

  • Obscurity Sends Packets Flying

  • Expensive Invention Gets Routers Puzzled

  • Vexing Router Firmware

  • Really Intensive Protocol

  • Someone Doesn’t Worry About Networking

  • Somewhat Quixotic Language

  • Blame It oN DNS

  • Cisco’s Universal Call Misdirector

  • Some Mail’s Thrown Places

  • Mangles Packets, Looks Silly

  • Amazingly Convoluted Lists

  • ImProperly SECured

  • May Push Lingering Sanity To Expire

Are there any other ones you can think of? Leave them in the comments.

Back In The Saddle Of A Horse Of A Different Color

I’ve been asked a few times in the past year if I miss being behind a CLI screen or if I ever get a hankering to configure some networking gear. The answer is a guarded “yes”, but not for the reason that you think.

Type Casting

CCIEs are keyboard jockeys. Well, the R&S folks are for sure. Every exam has quirks, but the R&S folks have quirky QWERTY keyboard madness. We spend a lot of time not just learning commands but learning how to input them quickly without typos. So we spend a lot of time with keys and a lot less time with the mouse poking around in a GUI.

However, the trend in networking has been to move away from these kinds of input methods. Take the new Aruba 8400, for instance. The ArubaOS-CX platform that runs it seems to have been built to require the least amount of keyboard input possible. The whole system runs with an API backend and presents a GUI that is a series of API calls. There is a CLI, but anything that you can do there can easily be replicated elsewhere by some other function.

Why would a company do this? To eliminate wasted effort. Think to yourself how many times you’ve typed the same series of commands into a switch. VLAN configuration, vty configs, PortFast settings. The list goes on and on. Most of us even have some kind of notepad that we keep the skeleton configs in so we can paste them into a console port to get a switch up and running quickly. That’s what Puppet was designed to replace!
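As a rough sketch of what replaces that notepad, here’s a skeleton config rendered programmatically in Python. The template text and parameters are illustrative placeholders, not any vendor’s required syntax:

```python
from string import Template

# The skeleton config that used to live in a notepad, now rendered
# per-switch instead of pasted by hand into a console session.
SKELETON = Template("""\
vlan $vlan_id
 name $vlan_name
!
line vty 0 4
 exec-timeout $timeout 0
!
""")

# Render the same skeleton for several switches without retyping it.
switches = [
    {"vlan_id": 10, "vlan_name": "USERS", "timeout": 5},
    {"vlan_id": 20, "vlan_name": "VOICE", "timeout": 5},
]

for params in switches:
    print(SKELETON.substitute(params))
```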

By using APIs and other input methods, Aruba and other companies are hoping that we can build tools that either accept the minimum input necessary to configure switches or that we can eliminate a large portion of the retyping necessary to build them in the first place. It’s not the first command you type into a switch that kills you. It’s the 45th time you paste the command in. It’s the 68th time you get bored typing the same set of arguments from a remote terminal and accidentally mess this one up that requires a physical presence on site to reset your mistake.
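To sketch that idea, here’s the same VLAN pushed through a REST API using Python’s requests library. The endpoint and payload shape are hypothetical stand-ins, not the actual ArubaOS-CX schema:

```python
import requests

# Hypothetical base URL and resource path; real switch APIs
# (ArubaOS-CX, NX-OS, and others) each define their own schemas.
SWITCH = "https://switch.example.com/api/v1"

session = requests.Session()
session.verify = False  # lab shortcut; use real certificates in production

# One structured call replaces the 45th paste of the same CLI block.
resp = session.post(
    f"{SWITCH}/vlans",
    json={"id": 10, "name": "USERS", "admin_state": "up"},
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code)
```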

Typing is boring, error prone, and costs significant time for little gain. Building scripts, programs, and platforms that take care of all that messy input for us makes us more productive. But it also changes the way we look at systems.

Bird’s Eye Views

The other reason why my fondness for keyboard jockeying isn’t as great as it could be is because of the way that my perspective has shifted thanks to the new aspects of networking technology that I focus on. I tell people that I’m less of an engineer now and more of an architect. I see how the technologies fit together. I see why they need to complement each other. I may not be able to configure a virtual link without documentation or turn up a storage LUN like I used to, but I understand why flash SSDs are important and how APIs are going to change things.

This goes all the way back to my conversations at VMunderground years ago about shifting the focus of networking and where people will go. You remember? The “ditch digger” discussion?


This is more true now than ever before. There are always going to be people racking and stacking, or doing basic types of configuration. These folks are usually trained with basic knowledge of their task and no vision outside their job role. Networking apprentices or journeymen, as the case may be. Maybe one out of ten or one out of twenty of them are going to want to move up to something bigger or better.

But for the people that read blogs like this regularly, the shift has happened. We don’t think in single switches or routers. We don’t worry about a single access point in a closet. We think in terms of systems. We configure routing protocols across multiple systems. We don’t worry about a single port VLAN issue. Instead, we’re trying to configure layer 2 DCI extensions or bring racks and pods online at the same time. Our visibility matters more than our typing skills.
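As a small sketch of that systems-level thinking, here’s what applying one change across a whole rack might look like with a library like netmiko. The hostnames and credentials are placeholders:

```python
from netmiko import ConnectHandler

# The same change applied as one operation across a set of devices,
# instead of one console session at a time.
DEVICES = [
    {"device_type": "cisco_ios", "host": f"switch{n}.example.com",
     "username": "admin", "password": "secret"}  # placeholder credentials
    for n in range(1, 5)
]

CHANGE = ["vlan 10", "name USERS"]

for device in DEVICES:
    conn = ConnectHandler(**device)
    print(conn.send_config_set(CHANGE))
    conn.disconnect()
```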

That’s why the next wave of devices, like the Aruba 8400 and the Software Defined Access products coming from Cisco, are more important than simple checkboxes on a feature sheet. They abstract away individual protocols and products and instead give us platforms that need to be configured for maximum effect. The gap between the people that “rack and stack” and those that build the architecture that runs the organization has grown, but only because the middle ground of administration is changing so fast that it’s tough to keep up.

Tom’s Take

If I were to change jobs tomorrow I’m sure that I could get back in the saddle with a couple of weeks of hard study. But the question I keep asking myself is “Why would I want to?” I’ve learned that my value doesn’t come from my typing speed or my encyclopedia of networking command arguments any more. It comes from a greater knowledge of making networking work better and integrate more tightly into the organization. I’m a resource, not a reactionary. And so when I look to what I would end up doing in a new role I see myself learning more and more about Python and automation and less about what new features were added in the latest OSPF release on Cisco IOS. Because knowing how to integrate technology at a high level is more valuable to everyone than just knowing the commands to type to turn the lights on.

How Should We Handle Failure?

I had an interesting conversation this week with Greg Ferro about the network and how we’re constantly proving whether a problem is or is not the fault of the network. I postulated that the network gets blamed when old software has a hiccup. Greg’s response was:

Which led me to think about why we have such a hard time proving the innocence of the network. And I think it’s because we have a problem with applications.

Snappy Apps

Writing applications is hard. I base this on the fact that I am a smart person and I can’t do it. Therefore it must be hard, like quantum mechanics and figuring out how to load the dishwasher. The few people I know that do write applications are very good at turning gibberish into usable radio buttons. But they have a world of issues they have to deal with.

Error handling in applications is a mess at best. When I took C Programming in college, my professor was an actual coder during the day. She told us during the error handling portion of the class that most modern programs are a bunch of statements wrapped in if clauses that jump to some error condition if they fail. All of the power of your phone apps is likely a series of if this, then that, otherwise this failure condition statements.

Some of these errors are pretty specific. If an application calls a database, it will respond with a database error. If it needs an SSL certificate, there’s probably a certificate specific error message. But what about those errors that aren’t quite as cut-and-dried? What if the error could actually be caused by a lot of different things?
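Here’s a minimal sketch of that pattern in Python, with stub functions standing in for a real app’s calls. The first two checks produce specific, actionable errors; the last one is the vague catch-all that starts the finger-pointing:

```python
# Stubs standing in for a real app's calls; each returns None on failure.
def connect_database():
    return None  # simulate a failure for the example

def load_certificate():
    return None

def fetch_remote_data():
    return None

def load_dashboard():
    # Each step is wrapped in its own failure check, as described above.
    db = connect_database()
    if db is None:
        # Specific error: the operator knows exactly where to look.
        raise RuntimeError("Database error: connection to db01 refused")

    cert = load_certificate()
    if cert is None:
        raise RuntimeError("Certificate error: SSL cert missing or expired")

    data = fetch_remote_data()
    if data is None:
        # Generic error: was it DNS? Routing? A firewall? Who knows.
        raise RuntimeError("Network error: could not reach server")
    return data

try:
    load_dashboard()
except RuntimeError as err:
    print(err)
```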

My first real troubleshooting on a computer came courtesy of LucasArts’ X-Wing. I loved the game and played the training missions constantly until I was able to beat them perfectly just a few days after installing it. However, when I started playing the campaign, the program would crash to DOS with an out-of-memory error message when I started the second mission. In the days before the Internet, it took a lot of research to figure out what was going on. I had more than enough RAM. I met all the specs on the side of the box. What I didn’t know, and had to learn, was that X-Wing required Expanded Memory (EMS) to run properly. Once I decoded enough of the message to figure that out, I was able to change things and make the game run properly. But I had to know that the memory X-Wing was complaining about was EMS, not the XMS RAM that I needed for Windows 3.1.

Generic error messages are the bane of the IT professional’s existence. If you don’t believe me, ask yourself how many times you’ve spent troubleshooting a network connectivity issue for a server only to find that DNS was misconfigured or down. In fact, Dr. House has a diagnosis for you:

Unintuitive Networking

If the error messages are so vague when it comes to network resources and applications, how are we supposed to figure out what went wrong and how to fix it? I mean, humans have been doing that for years. We take the little bits of information that we get from various sources. Then we compile, cross-reference, and correlate it all until we have enough to make a wild guess about what might be wrong. And when that doesn’t work, we keep trying even more outlandish things until something sticks. That’s what humans do. We fill in the gaps and come up with crazy ideas that occasionally work.

But computers can’t do that. Even the best machine learning algorithms can’t extrapolate far beyond the data they’re given. They need precise inputs to find a solution. Think of it like a Google search for a resolution. If you don’t have a specific enough error message to search on, you aren’t going to have enough data to find a specific fix. You will need to do some work on your own to find the real answer.

Intent-based networking does little to fix this right now. All intent-based networking products are designed to create a steady state from a starting point. No matter how cool it looks during the demo or how powerful it claims to be, the system will eventually fall over when something fails. And how gracefully it falls over is up to the people that programmed it. If the system is a black box with no error reporting capabilities, it’s going to fail spectacularly with no hope of repair beyond an expensive support contract.

It could be argued that intent-based networking is about making networking easier. It’s about setting things up right the first time and making them work in the future. Yet no system in a steady state works forever. With the possible exception of the pitch drop experiment, everything will fail eventually. And if the control system in place doesn’t have the capability to interpret the errors being seen and propose a solution, then your fancy provisioning system is worth about as much as a car with no seat belts.

Fixing The Issues

So, how do we fix this mess? Well, the first thing that we need to do is scold the application developers. They’re easy targets and usually the cause behind all of these issues. Instead of a vague message about network connectivity, we need errors more like the classic “lp0 on fire”. We need to know what was going on when the function call failed. We need a trace. We need context around the process to be able to figure out why something happened.
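A small sketch of what “context around the process” could look like in Python, using structured log fields and exception chaining so the original trace survives. The field names are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orders")

def place_order(order_id: int) -> None:
    try:
        # Simulated low-level failure from deep in the stack.
        raise ConnectionError("connect to db01:5432 timed out")
    except ConnectionError as err:
        # Say what we were doing, to what, and why it failed
        # (not just "network error").
        log.error("order %s failed: step=db-write host=db01 cause=%s",
                  order_id, err)
        # Chain the exception so the original traceback is preserved.
        raise RuntimeError(f"order {order_id} not saved") from err

try:
    place_order(42)
except RuntimeError:
    log.exception("full context, with the chained trace")
```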

Now, once we get all these messages, we need a way to correlate them. Is that an NMS or some other kind of platform? Is that an incident response system? I don’t know the answer there, but you and your staff need to know what’s going on. They need to be able to see the problems as they occur and deal with the errors. If it’s DNS (see above), you need to know that fast so you can resolve the problem. If it’s a BGP error or a path problem upstream, you need to know that as soon as possible so you can assess the impact.
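For the DNS case in particular, the triage can even be automated. A quick sketch that separates “the name won’t resolve” from “the host won’t answer”, using only the Python standard library:

```python
import socket

def triage(host: str, port: int = 443) -> str:
    try:
        addr = socket.getaddrinfo(host, port)[0][4][0]
    except socket.gaierror:
        return f"DNS failure: {host} does not resolve"  # it really is DNS
    try:
        with socket.create_connection((addr, port), timeout=3):
            return f"OK: {host} ({addr}) answers on port {port}"
    except OSError as err:
        return f"Connectivity failure to {addr}: {err}"  # not DNS this time

print(triage("example.com"))
```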

And lastly, we’ve got to do a better job of documenting these things. We need to take charge of keeping track of what works and what doesn’t, and of what error messages actually mean, while we work on getting app developers to give us better ones. Because no one wants to be in the same boat as DenverCoder9.

Tom’s Take

I made a career out of taking vague error messages and making things work again. And I hated every second of it. Because as soon as I became the Montgomery Scott of the Network, people started coming to me with ever more bizarre messages. Becoming the Rosetta Stone for error messages typed into Google is not how you want to spend your professional career. Yet you need to understand that even the best things fail, and that you need to be prepared for it. Don’t spend money building a very pretty facade for your house if it’s built on quicksand. Demand that your orchestration systems have the capabilities to help you diagnose errors when everything goes wrong.


Legacy IT Sucks

In my last few blog posts, I’ve been looking back at some of the ideas that were presented at Future:Net at VMworld this year. While I’ve discussed resource contention, hardware longevity, and even open source usage, I’ve avoided one topic that I think dictates more of the way our networks are built and operated today. It has very little to do with software, merchant hardware, or even development. It’s about legacy.

They Don’t Make Them Like They Used To

Every system in production today is running some form of legacy equipment. It doesn’t have to be an old switch in a faraway branch office closet. It doesn’t have to be an old Internet router. Often, it’s a critical piece of equipment that can’t be changed or upgraded without massive complications. These legacy pieces of the organization do more to dictate IT policies than any future technology can hope to impact.

In my own career, I’ve seen this numerous times. It could be the inability to upgrade workstation operating systems because users relied on WordPerfect for document creation and legacy document storage. And new workstations wouldn’t run WordPerfect. Or perhaps it cost too much to upgrade. Here, legacy software is the roadblock.

Perhaps it’s legacy hardware causing issues. Most wireless professionals agree that disabling older 802.11b data rates will help with network capacity issues and make things run more smoothly. But those data rates can only be disabled if your wireless clients are modern enough to do without them. What if you’re still running 802.11b wireless inventory scanners? Or what if your old Cisco 7921 phones won’t run correctly without low data rates enabled? Legacy hardware dictates the configuration here.

In other cases, legacy software development is the limiting factor. I’ve run into a number of situations where legacy applications are dictating IT decisions. Not workstations and their productivity software. But enterprise applications like school grade book systems, time tracking applications, or even accounting programs. How about an accounting database that refuses to load if IPX/SPX isn’t enabled on a Novell server? Or an old AS/400 grade book that can’t be migrated to any computer system that runs software built in this century? Application development blocks newer systems from installation and operation.

Brownfields Forever

We’ve reached the point in IT where it’s safe to say that there are no more greenfield opportunities. The myth that there is an untapped area where no computer or network resources exist is ludicrous. Every organization that is going to be computerized is so right now. No opportunities exist to walk into a completely blank slate and do as you will.

Legacy is what makes a brownfield deployment difficult. Maybe it’s an old IP scheme. Or a server running RIP routing. Or maybe it’s a specialized dongle connected to a legacy server for licensing purposes that can’t be virtualized. These legacy systems and configurations have to be taken into account when planning for new deployments and new systems. You can’t just ignore a legacy system because you don’t like the requirements for operation.

This is part of the reason for the rise of modular-based pod deployments like those from Big Switch Networks. Big Switch realized early on that no one was going to scrap an entire networking setup just to deploy a new network based on BSN technology. And by developing a pod system to help rapidly insert BSN systems into existing operational models, Big Switch proved that you can non-disruptively bring new network areas online. This model has proven itself time and time again in the cloud deployment models that are favored by many today, including many of the Future:Net speakers.

Brownfields full of legacy equipment require careful planning and attention to detail. They also require that development and operations teams both understand the impact of the technical debt carried by an organization. By accounting for specialized configurations and needs, you can help bring portions of the network or systems architecture into the modern world while maintaining necessary services for legacy devices. Yes, it does sound a bit patronizing and contrived for most IT professionals. But given the talk of “burn it all to the ground and build fresh”, one must wonder if anyone truly does understand the impact of legacy technology investment.

Tom’s Take

You can’t walk away from debt. Whether it’s a credit card, a home loan, or the finance server that contains those records on an old DB2 database connected by a 10/100 FastEthernet switch. Part of my current frustration with the world of forward-looking proponents is that they often forget that you must support every part of the infrastructure. You can’t leave off systems in the same way you can’t leave off users. You can’t pretend that the AS/400 is gone when you scrap your entire network for new whitebox switches running open source OSes. You can’t hope that the old Oracle applications won’t have screwed up licensing the moment you migrate them all to AWS. You have to embrace legacy and work through it. You can plan for it to be upgraded and replaced at some point. But you can’t ignore it no matter how much it may suck.

Penny Pinching With Open Source

You might have seen this Register article this week which summarized a Future:Net talk from Peyton Koran. In the article and the talk, Peyton talks about how the network vendor and reseller market has trapped organizations into a needless cycle of bad hardware and buggy software. He suggests that organizations should focus on their new “core competency” of software development and run whitebox or merchant hardware on top of open source networking stacks. He says that developers can use code that has a lot of community contributions and shares useful functionality. It’s a high and mighty goal. However, I think the open source part of the equation is going to cause some issues.

A Penny For Your Thoughts

The idea behind open source isn’t that hard to comprehend. Everything is available to see and build upon. Anyone can contribute and give back to the project and make the world a better place. At least, that’s the theory. Reality is sometimes a bit different.

Many times, I’ve had off-the-record conversations with organizations that are consuming open source resources and projects as a starting point for building something that will end up containing many proprietary resources. When I ask them about contributing back to those projects or finding ways to advance things, the response is usually silence. Very rarely, I hear that the organization sees its proprietary developments as a “competitive advantage” that they are going to use either to beat a competitor or to build a product that saves them a significant amount of money.

The analogy I like to use for open source is the “Take A Penny” dish that many businesses have next to their cash register. The idea is that people can contribute something small to help out others. If someone needs a penny or two to help make payment for a good or service, they can take one. If they have a few left over they can give back. It’s a way to give back a little now and then.

However, there are a couple of types of people that skew the trend for the penny dish. The first is the person that gives back much more than the norm. That person might put a quarter in the dish or put in four or five pennies at every dish they find. They contribute above the norm often and give a lot. The second type of person takes quite a few pennies from the dish and never gives back. They may see the dish as “free money” to be used to augment their own. They don’t care if all the pennies are gone when they finish just as long as they got what they needed out of it.

Extend this metaphor into the open source community. There are quite a few contributors that put in significant time and effort in their projects. They may have found a way to do it full time or may be paid by their company to participate in projects. These folks are dedicated to the cause.

On the other side of the metaphor are the people and organizations that take what they want from open source and never give back. They never contribute to the project, even if their enhancements are needed and welcome. They take a free or inexpensive starting point and use it to build a product that is used internally to give the organization an advantage. The key here is that it’s something used internally. The GPL covers distribution of software that is based on GPL code, but its obligations generally aren’t triggered when that software is only consumed internally. An enterprising developer may say to themselves, “As long as I don’t sell it, I don’t have to give my code back.”

Networking For Pocket Change

There are quite a few open source networking projects out there. Quagga, BIRD, and Open vSwitch are great examples of projects that have significant reach and are used by a lot of companies to build great products. However, imagine what would happen if no one gave back to these projects. Imagine what would happen if contributors decided to make their own BGP daemons or OVS-like programs and use them without regard for helping others in the community.

Open source software needs developers willing to contribute back to the project. If networking is going to embrace open source projects as Peyton suggested in his talk, it’s going to take a lot more contribution than quiet consumption. Whether or not you agree with the premise that networking vendors are corrupt and evil, you do have to concede that they’ve given us mostly stable protocols like BGP and OSPF. These same vendors have contributed ideas back to the standardization process to improve protocols like spanning tree and Power over Ethernet. Their contributions helped shape what networking is today.

If the next generation of software-based network developers wants to embrace and extend these contributions with open source, they’re going to need to be transparent and communicate with the project leads. They’re also going to need to push back when someone high up the food chain sees the development process as a way to gain an advantage and tries to keep it all secret. If developers aren’t going to give back to the community, it negates the advantages of open source and instead takes us back to the days of networks being one-off creations with no interoperability beyond a few protocols. Islands in a sea of home-grown lava.

Tom’s Take

As anyone that attended Future:Net within earshot of me can attest, I wasn’t overly thrilled with Peyton’s take on the future of networking. I have some deep-seated reservations about the “screw the vendors, build it all yourself” mentality that is pervading organizations today. Not everyone is a development shop. Law firms and schools don’t regularly employ software engineers. If you want to transform those types of users into open source adherents, you need to lead the pack by giving back and talking about what you’re doing with open source. If you’re not willing to lead the way, stop telling people to take the fork in the road.

Network Longevity – Think Car, Not iPhone

One of the many takeaways I got from Future:Net last week was the desire for networks to do more. The presenters were talking about their hypothesized networks being able to make intelligent decisions based on intent and other factors. I say “hypothesized” because almost everyone admitted that we aren’t quite there. Yet. But the more I thought about it, the more I realized that perhaps the timeline for these mythical networks is a bit skewed in favor of refresh cycles that are shorter than we expect.

Software Eats The World

SDN has changed the way we look at things. Yes, it’s a lot of hype. Yes, it’s an overloaded term. But it’s also the promise of getting devices to do much more than we had ever dreamed. It’s about automation and programmability and, now, deriving intent from plain language. It’s everything we could ever want a simple box of ASICs to do for us and more.

But why are we asking so much? Why do we now believe that the network is capable of so much more than it was just five years ago? Is it because we’ve developed a revolutionary new method for making chips that are ten times smarter and a hundred times faster? No. In fact, the opposite is true. Chips are a little faster and a little cheaper, but they aren’t custom construction. Most companies are embracing merchant silicon from companies like Broadcom and Mellanox. So, where’s all this functionality coming from?

It’s the “S” in SDN. We’re seeing software driving this development. Yes, it’s the same thing we’ve been saying for years now. But it’s something more now. Admins and engineers aren’t just excited that network devices are more software-driven now. They’re expecting it. They’re looking to the software to drive the development. Gone are the days of CLI-only access. Now, the interfaces are practically built on API-driven capabilities. The Aruba 8400 seems like it was built for the API first, with the GUI built on top of API functions. People getting into the networking world are focusing on things like Python first and CLI syntax second. Software is winning again.

It’s like the iPhone revolution. The hardware is completely divorced from the software. I can upgrade to the latest version of Apple’s iOS and get most of the functionality I want in the OS. Aside from some hardware-specific things, the majority of the functions work no matter what. Every year, I get new toys to play with. Every 12 months, I can enable new functions and have new avenues available to me. If you think back to the first iterations of the iPhone ten years ago, you can see how far software development has driven this device.

Hardware For The Long Haul

However, for all the grandeur and amazement that the iPhone (and Android, for that matter) shows, it’s also created a side effect that is causing ever more problems in the mobile device world and bleeding over into other electronics: the rapid replacement cycle. I mentioned above how awesome it is that you can get new functions every year in your mobile device. But I didn’t mention how people are already looking forward to the newest hardware even though they may have purchased a new phone just 10 months ago.

The desire to get faster by leaps and bounds every year is almost comical at this point. When the new device is delivered and it’s just a bit faster than the last, people are disappointed. Even I was guilty of this when I moved from a 2013 MacBook to a 2016 model. It was only 20% faster. Even with everything else I got in the new package, I found myself chasing performance over every other feature.

The desire to get better performance isn’t just limited to phones and tablets and laptops. When network designers look to increase network performance, they want to do so in a way that makes their users happy. Gone are the days when a simple gigabit network connection would keep someone pleased. We now have to develop gigabit wireless connections to the backbone network, where data is passed to servers connected by 25Gig connections to spines and super spines running at 100Gig (for now). In some cases, that data doesn’t even move any more, and instead short-lived containers are spun up next to it to make things run faster.

Networks aren’t designed like iPhones. They aren’t built to be ripped out every year and replaced with something a little faster and a little shinier. They’re designed more like cars in that regard. They are large purchases that should last for years upon years, with their performance profile dictated by the design at the time. We’ve gone through the phases of running 10Mbit hubs. We upgraded to FastEthernet. We’re now living in a cycle where Gigabit and 10/40 Gigabit switches are ruling the roost. Some of the more forward-thinking folks are adopting 25/50Gigabit and even looking to faster technologies like 400Gig connections for spines.

However, the longevity of hardware remains. Capital expenditure isn’t the easiest thing in the world to accomplish. You need to budget for new devices in the networking world. You need to justify cost savings or new applications. You can’t just buy a fast new switch or two every year and claim that it makes everything better without some kind of justification. Even if you move to the cloud, you’re not changing the purchasing model. You’re just trading capital expenditure for operational expenditure. You’re leasing a car instead of buying it. And if you leave the cloud, you’re back to your old switching model with nothing to show for it.

Tom’s Take

I was pretty hard on the Future:Net presenters because I felt that their talk of ditching money-grubbing vendors for the purity of whitebox switching running open source software was a bit heavy-handed. Most organizations are in the middle of a refresh cycle and can’t afford to rip everything out and replace it today. Moreover, the value in whitebox switching isn’t realized when you install new hardware. It’s realized when you turn your switch into an iPhone, where software development rules the day and the hardware fades into the background. We’re not there yet. We’re still buying hardware for the sake of hardware while the software catches up. Networking equipment is still built and bought like a car, and will be for a few years to come. Maybe we can revisit this topic in three years and see how far software has driven us by then.