About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Who Wants To Save Forever?

At the recent SpectraLogic summit in Boulder, much of the discussion centered around the idea of storing data and media in perpetuity. Technology has arrived at the point where it is actually cheaper to keep something tucked away rather than trying to figure out whether or not it should be kept. This is leading to a huge influx of media resources being available everywhere. The question now shifts away from storage and to retrieval. Can you really save something forever?

Another One Bites The Dust

Look around your desk. See if you can put your hands on each of the following:

* A USB Flash drive
* A DVD-RW
* A CD-ROM
* A Floppy Disk (bonus points for 5.25")

Odds are good that you can find at least three of those four items. Each of those items represents a common way of saving files in a removable format. I’m not even trying to cover all of the formats that have been used (I’m looking at you, ZIP drives). Each of these formats has been tucked away in a backpack or given to a colleague at some point to pass files back and forth.

Yet, each of these formats has been superseded sooner or later by something better. Floppies were ultraportable but held very little. CD-ROMs held far more, but couldn’t be re-written without effort. DVD media never really got the chance to take off before bandwidth eclipsed the capacity of a single disc. And USB drives, while the removable media du jour, are mainly used when you can’t connect wirelessly.

Now, with cloud connectivity the idea of having removable media to share files seems antiquated. Instead of copying files to a device and passing it around between machines, you simply copy those files to a central location and have your systems look there. And capacity is very rarely an issue. So long as you can bring new systems online to augment existing storage space, you can effectively store unlimited amounts of data forever.

But how do we extract data from old devices to keep in this new magical cloud? Saving media isn’t that hard. But getting it off the source is proving to be harder than one might think.

Take video for instance. How can you extract data from an old 8mm video camera? It’s not a standard size to convert to VHS (unless you can find an old converter at a junk store). There are a myriad of ways to extract the data once you get it hooked up to an input device. But what happens if the source device doesn’t work any longer? If your 8mm camera is broken you probably can’t extract your media. Maybe there is a service that can do it, but you’re going to pay for that privilege.

I Want To Break Free

Assuming you can even extract the source media files for storage, we start running into another issue. Once I’ve saved those files, how can I be sure that I can read them fifty years from now? Can I even be sure I can read them five years from now?

Data storage formats are a constantly-evolving discussion. All you have to do is look at Microsoft Office. Office is the most popular workgroup suite in the entire world. All of those files have to be stored in a format that allows them to be read. One might be forgiven for assuming that Microsoft Word document formats are all the same or at least similar enough to be backwards compatible across all versions.

Each new version of the format includes a few new pieces that break backwards compatibility. Instead of leveraging new features like smaller file sizes or increased readability, we are forced to keep using old formats like Word 97-2003 to ensure the files can be read by whomever we send them to for review.

Even the most portable of formats suffers from this malady. Portable Document Format (PDF) was designed by Adobe to be an application-independent way to display files using a printing descriptor language. This means that saving a file as a PDF on one system makes it readable on a wide variety of systems. PDF has become the de facto way to share files back and forth.

Yet it can suffer from format issues as well. PDF creation software like Adobe Acrobat isn’t immune from causing formatting problems. Files saved with certain attributes can only be read by updated versions of reader software that can understand them. The idea of a portable format only works when you restrict the descriptors available to the lowest common denominator so that all readers can display the format.
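
As a rough illustration (not tied to any particular archival product, and the file path below is made up), a script doing long-term archiving might at least check the version a PDF declares in its header, so you know up front whether an older reader is likely to struggle with the file:

```python
# A minimal sketch (hypothetical file path): check the version a PDF declares
# in its header before tucking it away. PDF files begin with a "%PDF-1.x" marker.

def pdf_declared_version(path):
    with open(path, "rb") as f:
        header = f.readline().strip()        # e.g. b"%PDF-1.7"
    if not header.startswith(b"%PDF-"):
        raise ValueError(f"{path} does not look like a PDF")
    return header[len(b"%PDF-"):].decode("ascii", errors="replace")

if __name__ == "__main__":
    version = pdf_declared_version("archive/report.pdf")   # hypothetical file
    # The 1.4 cutoff below is illustrative; older readers generally handle
    # earlier versions better than the newest ones.
    if float(version) > 1.4:
        print(f"PDF {version}: confirm your oldest reader can still open it")
    else:
        print(f"PDF {version}: safe for most legacy readers")
```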

Part of this issue comes from the idea that companies feel the need to constantly “improve” things and force users to continue to upgrade software to be able to read the new formats. While Adobe has offered the PDF format to ISO for standardization, adding new features to the process takes time and effort. Adobe would rather have you keep buying Acrobat to make PDFs and downloading new versions of Reader to decode those new files. It’s a win-win situation for them and not as much of one for the consumers of the format.


Tom’s Take

I find it ironic that we have spent years and millions of dollars trying to find ways to convert data away from paper and into electronic formats. The irony is that the papers we converted years ago are more readable than the data we have stored in the cloud. The only limitation of paper is how long the actual paper can last before being obliterated.

Think of the Rosetta Stone or the Code of Hammurabi. We know about these things because they were etched into stone. Literally. Yet, in the case of the Rosetta Stone we ran into file format issues. It wasn’t until we were able to save the Egyptian hieroglyphs as Greek that we were able to read them. If you want your data to stand the test of time, you need to think about more than the cloud. You need to make sure that you can retrieve and read it as well.

Open Networking Needs to Be Interchangeable

We’re coming up quickly on the fall meeting of the Open Networking User Group, which is a time for many of the members of the financial community to debate the needs of modern networking and provide a roadmap and set of use cases for networking vendors to follow in the coming months. ONUG provides what some technology desperately needs – a problem to which it can be applied.

Open Or Something Like It

We’ve already started to see the same kind of non-open solution building that plagued the early network years creeping into some aspects of our new “open” systems. Rather than building on what we consider to be tried-and-true building blocks, we instead come to proprietary solutions that promise “magic” when it comes to configuration and maintenance. Should your network provide the magic? Or is that your job?

Magical is what the network should look like to a user, not to the admins. Think about the networking in cloud providers like AWS and MS Azure. The networking there is a very simple model that hides complexity. The average consumer of AWS services doesn’t need to know the specifics of configuration in the underlay of Amazon’s labyrinth of the cloud. All that matters is traffic goes where it is supposed to go and arrives when it is supposed to be there.
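
For a sense of what that simplicity looks like from the consumer side, here is a minimal sketch using AWS’s Python SDK, boto3. The region and CIDR blocks are made up, but the point stands: the network the consumer sees is a couple of API calls, and none of the underlay shows through.

```python
# A rough consumer-side sketch of AWS networking (region and CIDRs are made up).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Carve out an isolated network and one subnet inside it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
subnet_id = subnet["Subnet"]["SubnetId"]

# No spanning tree, no routing protocol tuning, no underlay troubleshooting
# is exposed to the user. Traffic just goes where it is supposed to go.
print(f"Created {vpc_id} with subnet {subnet_id}")
```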

Let’s apply those same kinds of lessons to open networks in our environments. What we need isn’t a magic bullet that makes everything turn into a checkbox or button to do mysterious things behind a curtain. What we really need is an open system that allows us to build a network that can be reduced to boxes and buttons. That requires a kind of interoperation that isn’t present in the first generation of software-driven networking.

This is also one of the concerns present in policy definitions and models like those found in Cisco ACI. In order for these higher-order systems to work efficiently, the majority of the focus needs to be on the definition of actions and the execution of those policies. What can’t occur is a large amount of time spent fixing the interoperation between pieces in the policy underlay.

Think about your current network. Do you spend most of your time focused on the packets flowing between applications? Or are you spending a higher percentage of your time fixing the pathways between those applications? Optimizing the underlay for those flows? Trying to figure out why something isn’t working over here versus why it is working over there?

Networking Needs Eli Whitney

Networking isn’t open the way that it needs to be. It’s as open as manufacturing was before the invention of interchangeable parts. Our systems are cobbled together contraptions of unique parts and systems that collapse when a single piece falls out of place. Instead of fixing the issue and restoring sanity, we are forced to exert extra effort molding the new pieces to function like the old.

Truly open networking isn’t just about the software riding on top of the underlay. It’s about making the interfaces said software interacts with seamless enough to swap parts and pieces and allow the system to continue to function without major disruption. We can’t spend our time tinkering with why the API isn’t accepting instructions or reconfiguring the markup language because the replacement part is a different model number.

When networks are open enough that they work the way AWS and Azure work, without massive interference on our part, that will be a truly landmark day. That day will mark the moment when our networks become focused on service delivery instead of component integration. The openness in networking will lead us to stop worrying about it. Not because someone built a magic proprietary system that works now with three other devices and will probably be forgotten in another year. But instead because networking vendors finally discovered that solving problems is much more profitable than creating roadblocks.


Tom’s Take

I’ve been very proud to take part in ONUG for the past few years. The meetings have given me an entirely new perspective on how networking is viewed by users and consumers. It’s also a great way to get in touch with people who are doing networking in unique environments with exacting needs. ONUG has also helped forward the cause of opening networking by providing a nucleus for users to bring their requirements to the group that needs to hear them most of all.

ONUG can continue to drive networking forward by insisting that future networking developments are open and interoperable at a level that makes hardware inconsequential. No standards body can exert that influence. It comes from users voting with dollars, and ONUG represents some deep purse strings.

If you are in the New York area and would like to attend ONUG this November 4th and 5th, you can use the code TFD30 to get 30% off the conference registration cost. And if you tell them that Tom sent you, I might be able to arrange for a nice fruit basket as well.

 

My Thoughts on Dell, EMC, and Networking

The IT world is buzzing about the news that Dell is acquiring EMC for $67 billion. Storage analysts are talking about the demise of the 800-lb gorilla of storage. Virtualization people are trying to figure out what will happen to VMware and what exactly a tracking stock is. But very little is going on in the networking space. And I think that’s going to be a place where some interesting things are going to happen.

It’s Not The Network

The appeal of the Dell/EMC deal has very little to do with networking. EMC has never had any form of enterprise networking, even if they were rumored to have been looking at Juniper a few years ago. The real networking pieces come from VMware and NSX. NSX is a pure software networking implementation for overlay networking implemented in virtualized networks.

Dell’s networking team was practically nonexistent until the Force10 acquisition. Since then there has been a lot of work in building a product to support Dell’s data center networking aspirations. Good work has been done on the hardware front. The software on the switches has had some R&D done internally, but the biggest gains have been in partnerships. Dell works closely with Cumulus Networks and Big Switch Networks to provide alternative operating systems for their networking hardware. This gives users the ability to experiment with new software on proven hardware.

Where does the synergy lie here? Based on a conversation I had on Monday there are some that believe that Cumulus is a loser in this acquisition. The idea is that Dell will begin to use NSX as the primary data center networking piece to drive overlay adoption. Companies that have partnered with Dell will be left in the cold as Dell embraces the new light and way of VMware SDN. Interesting idea, but one that is a bit flawed.

Maybe It’s The Network

Dell is going to be spending a lot of time integrating EMC and all their federation companies. Business needs to continue going forward in other areas besides storage. Dell Networking will see no significant changes in the next six months. Life goes on.

Moving forward, Dell Networking is still an integral piece of the data center story. As impressive as software networking can be, servers still need to plug into something. You can’t network a server without a cable. That means hardware is still important even at a base level. That hardware needs some kind of software to control it, especially in the NSX model, where there is no centralized controller deciding how flows will operate on leaf switches. That means that switches will still need operating systems.

The question then shifts to whether Dell will invest heavily in R&D for expanding FTOS and PowerConnect OS or if they will double down on their partnership with Cumulus and Big Switch and let NSX do the heavy lifting above the fray. The structure of things would lead one to believe that Cumulus will get the nod here, as their OS is much more lightweight and enables basic connectivity and control of the switches. Cumulus can help Dell integrate the switch OS into monitoring systems and put more of the control of the underlay network at the fingertips of the admins.
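
Because Cumulus Linux is just Linux, that kind of integration doesn’t need a proprietary toolchain. As a hedged example (the hostname and credentials below are made up), the same SSH tooling you would point at any server can pull interface state from a switch:

```python
# A hedged sketch: ordinary server tooling pulling switch state from a
# Cumulus Linux box over SSH. Hostname and credentials are made up; a real
# deployment would use SSH keys and a proper inventory.
import json
import paramiko

switch = "leaf01.example.com"      # hypothetical Cumulus Linux switch

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(switch, username="admin", password="admin")

# 'ip -j link show' is standard iproute2; the -j flag emits JSON on any
# reasonably recent Linux, switch or server alike.
_, stdout, _ = client.exec_command("ip -j link show")
links = json.loads(stdout.read())

for link in links:
    print(f"{link['ifname']}: {link.get('operstate', 'UNKNOWN')}")

client.close()
```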

I think Dell is going to be so busy integrating EMC into their operations that the non-storage pieces are going to be starved for development dollars. That means more reliance on partnerships in the near term. Which begets a vicious cycle that causes in-house software to fall further and further behind. Which is great for the partner, in this case Cumulus.

By putting Dell Networking into all the new offerings that should be forthcoming from a combined Dell/EMC, Dell is putting Cumulus Linux in a lot of data centers. That means familiarizing those networking folks with Cumulus more and more. Even if Dell decides not to renew the Cumulus partnership after EMC and VMware are fully ingested, the install base of Cumulus will be bigger than it would have been otherwise. When those devices come up for refresh, the investigation into replacing them with Cumulus-branded equipment could generate big wins for Cumulus.


Tom’s Take

Dell and EMC are going to touch every facet of IT when they collide. Between the two of them they compete in almost every aspect of storage, networking, and compute as well as many of the products that support those functions. Everyone is going to face rapid consolidation from other companies banding together to challenge the new 800-lb gorilla in the space.

Networking will see less impact from this merger but it will be important nonetheless. If nothing else, it will drive Cisco to start acquiring at a faster rate to keep up. It will also allow existing startups to make a name for themselves. There’s even the possibility of existing networking folks leaving traditional roles and striking out on their own to found startups to explore new ideas. The possibilities are limitless.

The Dell/EMC domino is going to make IT interesting for the next few months. I can’t wait to see how the chips will fall for everyone.

SDN Myths Revisited

I had a great time at TECHunplugged a couple of weeks ago. I learned a lot about emerging topics in technology, including a great talk about the death of disk from Chris Mellor of the Register. All in all, it was a great event. Even with a presentation from the token (ring) networking guy:

I had a great time talking about SDN myths and truths and doing some investigation behind the scenes. What we see and hear about SDN is only a small part of what people think about it.

SDN Myths

Myths emerge because people can’t understand or won’t understand something. Myths perpetuate because they are larger than life. Lumberjacks and blue oxen clearing forests. Cowboys roping tornadoes. That kind of thing. With technology, those myths exist because people don’t want to believe reality.

SDN is going to take the jobs of people that can’t face the reality that technology changes rapidly. There is a segment of the tech worker populace that just moves from new job to new job doing the same old things. We leave technology behind all the time without a care in the world. But we worry when people can’t work on that technology.

I want you to put your hands on a floppy disk. Go on, I’ll wait. Not so easy, is it? Removable disk technology is on the way out the door. Not just magnetic disk either. I had a hard time finding a CD-ROM drive the other day to read an old disc with some pictures. I’ve taken to downloading digital copies of films because my kids don’t like operating a DVD player any longer. We don’t mourn the passing of disks, we celebrate it.

Look at COBOL. It’s a venerable programming language that still runs a large percentage of insurance agency computer systems. It’s safe to say that the amount of money it would cost to migrate away from COBOL to something relatively modern would be in the millions, if not billions, of dollars. Much easier to take a green programmer and teach them an all-but-dead language and pay them several thousand dollars to maintain this out-of-date system.

It’s like the old story of buggy whip manufacturers. There’s still a market for them out there. Not as big as it was before the introduction of the automobile. But it’s there. You probably can’t break into that market and you had better be very good (or really cheap) at making them if you want to get a job doing it. The job that a new technology replaced is still available for those that need that technology to work. But most of the rest of society has moved on and the old technology fills a niche role.

SDN Truths

I wasn’t kidding when I said that Gartner not having an SDN quadrant was the smartest thing they ever did (aside from the shot at stretched layer 2 DCI). I say this because it will finally force customers to stop asking for a magic bullet SDN solution and it will force traditional networking vendors to stop packaging a bunch of crap and selling it as a magic bullet.

When SDN becomes a part of the entire solution and not some mystical hammer that fixes all the nails in your environment, then the real transformation can happen. Then people that are obstructing real change can be marginalized and removed. And the technology can be the driver for advancement instead of someone coming down the hall complaining about things not working.

We spend so much time reacting to problems that we forget how to solve them for good. We’re not being malicious. We just can’t get past the triage. That’s the heart of the fire fighter problem. Ivan wrote a great response to my fire fighter post and his points were spot on. Especially the ones about people standing in the way, whether it be through outright obstruction or by taking away the power to effect real change. We can’t hold networking people responsible for the architecture and simultaneously keep them from solving the root issues. That’s the ham-handed kind of organizational roadblock that needs to change to move networking forward.


Tom’s Take

Talks like this don’t happen overnight. They take careful planning and thought, followed by panic when you realize your 45-minute talk is actually 20 minutes. So you cut out the boring stuff and get right to the meat of the issue. In this case, that meat is the continued misperception of SDN no matter how much education we throw at the networking community. We’re not going to end up as jobless programmers being lied to by silver-tongued marketing wonks. But we are going to have to face the need for organizational change and process reevaluation on a scale that will take months, if not years, to implement correctly. And then do it all over again as technology evolves to fit the new mold we created when we broke the old one.

I would rather see the easy money flee to a new startup slot machine and all of the fair weather professionals move on to a new career in whatever is the hot new thing. That means those of us left behind in the newly-transformed traditional networking space will be grizzled veterans willing to learn and implement the changes we need to make to stop being blamed for the problems of IT and be a model for how it should be run. That’s a future to look forward to.

 

Premise vs. Premises

If you’ve listened to a technology presentation in the past two years that included discussion of cloud computing, you’ve probably become embroiled in the ongoing war over the usage of the word premises and the shift toward using the word premise in its stead. This battle has raged for many months now, with the premises side of the argument refusing to give ground and watch a word be totally redefined. So where is this all coming from?

The Premise of Your Premises

The etymology of these two words is actually linked, as you might expect. Premise is the first to appear in the late 14th century. It traces from the Old French premisse which is derived from the Medieval Latin premissa, which are both defined as “a previous proposition from which another follows”.

The appearance of premises comes from the use of premise in legal documents in the 15th century. In those documents, a premise was a “matter previously stated”. More often than not, that referred to some kind of property like a house or a building. Over time, that came to be known as a premises.

Where the breakdown starts happening is recently in technology. We live in a world where brevity is important. The more information we can convey in a brief period the better we can be understood by our peers. Just look at the walking briefing scenes from The West Wing to get an idea of how we must compress and rapidly deliver ideas today. In an effort to save precious syllables during a presentation, I’m sure some CTO or Senior Engineer compressed premises into premise. And as we often do in technology, this presentation style and wording was copied ad infinitum by imitators and competitors alike.

Now, we stand on the verge of premise being redefined. This has a precedent in recent linguistics. The word literally has recently been changed from the standard definition of “in a literal sense”, or describing something as it actually happened, into an informal usage of “emphasizing strong feeling while not being literally true”. This change has grammar nerds and linguistics people at odds. Some argue that language evolves over time to include new meanings. Others claim that changing a word to be defined as the exact opposite meaning is a perversion and is wrong.

The Site of Your Ideas

Perhaps the real solution to this problem is to get rid of the $2 words when a $0.50 word will do just fine. Instead of talking about on-premises cloud deployments, how about referring to them as on-site? Instead of talking about the premise behind creating a hybrid cloud, why not refer to the idea behind it (especially when you consider that the strict definition of premise doesn’t really mean idea)?

By excising these words from your vocabulary now, you remove the risk of using them improperly. You even get to save a syllable here and there. If word economy is truly the goal, the aim should be to use the most precise word with the least amount of effort. If you are parroting a presentation from Amazon or Google and keep referring to on-premise computing, you are doing a disservice to the people that are listening to you and will carry your message forward to new groups of listeners.


Tom’s Take

If you’re going to insist on using premises and premise, please make sure you get them right. It takes less than a second to add the missing “s” to the end of that idea and make it a real place. Otherwise you’re going to come off sounding like you don’t know what you’re talking about. Kind of like this (definitely not safe for work):

Instead, let’s move past using these terms and get back to something more simple and straightforward. Sites can never be confused for ideas. It may be more direct and less flashy to say on-site but you never have to worry about using the wrong term or getting the grammarians on your bad side. And that’s a premise worth believing in.

 

Tips For Presenting On Video

Giving a presentation is never an easy thing for a presenter. There’s a lot that you have to keep in mind, like pacing and content. You want to keep your audience in mind and make sure you’re providing value for the time they are giving you.

But there is usually something else you need to keep in mind today. Most presentations are being recorded for later publication. When presenting for an audience that has a video camera or two, there are a few other things you want to keep in mind on top of every other thing you are trying to keep track of.

Tip 1: Introduce Early. And Often

One of the things you really need to keep in mind for recorded presentations is time. If the videos are going to be posted to YouTube after the event, the length of your presentation is going to matter. People who stumble across your talk aren’t going to want to watch an hour or two of slide discussion. A fifteen-minute overview of a topic works much better from a video perspective.

Keeping that in mind, you should start every section of your presentation with an introduction. Tell everyone who you are and what you do. That establishes credibility with the audience. It also helps the viewer figure out who you are right away. Sometimes not knowing who is talking distracts enough that people get lost and miss content. Never rely on a lower third to do something you are capable of taking five seconds to say.

Note that if you decide to go down this road, you need to make sure your audience is aware of what you’re doing. Someone might find it off-putting that you’re introducing yourself twenty minutes after you just did. But you don’t want to turn it into a parody or humor point. Just be clear about why you’re doing it and make it quick and clean.

Tip 2: Take A Deep Breath

When you are transitioning between sections, one of the worst things you can do is try to fill time with idle conversation. People are hard-wired to insert filler into conversations. Conquering that compulsion is a difficult task, but well worth it in the end.

One of the reasons why getting rid of filler conversation is important for a video recording is the editing that has to happen around an introduction. Consider an introduction that starts with, “Um, uh, hello there, um, my name is, uh, Tom Hollingsworth…” That introduction is rife with unnecessary filler that does nothing but distract from a statement of who you are.

The easiest way to do this is to take a deep breath before you start speaking. By clearing your mind before you open your mouth, you are much less likely to insert filler words in an effort to keep the conversation flowing. Another good technique that news reporters use is the countdown. Yes, it does serve a purpose for the editor to know when to start the clip. But it also helps the reporter focus on when to start and collect their faculties before speaking.

Try it yourself next time. When you’re ready to make a clean transition, just insert a single second of silence while you take a breath. Odds are great that you’ll find your transitions much more appealing.

Tip 3: Questions Anyone?

This one is a bit trickier. The best presentation model works from the idea that the audience should ask questions during a presentation instead of after it. By having a question closely tied to an idea, people are much more likely to remember it and find it relevant. This is especially true on video, as the viewer can rewind and listen to the question and answer a couple of times.

But what about those questions that aren’t exactly tied to a specific idea or cover a topic not discussed? That’s where the final Q&A period comes in. You want to make sure to capture any final thoughts from the audience. But since this is all about the video you also want to make sure you don’t cut someone off with a premature close out.

When you ask for final questions, make sure you take a few seconds and visually glance around the room. Silence is very easy to cut out of a video. But it’s much harder to cut out someone saying “Okay, so there are no more questions…” followed by someone asking a question. It’s much better to take the extra time to make sure there are no questions or comments. The extra seconds are more than worth it.


Tom’s Take

I get to see both sides of the presentation game, whether I’m presenting at TECHunplugged this week or editing a video from Tech Field Day. I find that presenting for a live audience while staying aware of the things that make video useful and successful is an important skill to master in today’s speaking circuit.

It doesn’t take a lot of additional effort to make your presentation video-ready. A little practice and you’ll have it down in no time flat.

Network Firefighters or Fire Marshals?

Throughout my career as a network engineer, I’ve heard lots of comparisons to emergency responders thrown around to describe what the networking team does. Sometimes we’re the network police that bust offenders of bandwidth policies. Other times there is the Network SWAT Team that fixes things that get broken when no one else can get the job done. But over and over again I hear network admins and engineers called “fire fighters”. I think it’s time to change how we look at the job of fighting fires on the network.

Fight The Network

The president of my old company used to try to motivate us to think beyond our current job roles by saying, “We need to stop being firefighters.” It was absolutely true. However, the sentiment lacked some of the important details of what exactly a modern network professional actually does.

Think about your job. You spend most of your time implementing change requests and trying to fix things that don’t go according to plan. Or figuring out why a change six months ago suddenly decided today to create a routing loop. And every problem you encounter is a huge one that requires an “all hands on deck” mentality to fix it immediately.

People look at networks as a closed system that should never break or require maintenance of any kind until it breaks. There’s a reason why a popular job description (and Twitter handle) involves describing your networking job as a plumber or janitor. We’re the response personnel that get called out on holidays to fix a problem that creates a mess for other people that don’t know how to fix it.

And so we come to the fire fighter analogy. The idea of a group of highly trained individuals sitting around the NOC (or firehouse) waiting for the alarm to go off. We scramble to get the problem fixed and prevent disaster. Then go back to our NOC and clean up to do it all over again. People think about networking professionals this way because they only really talk to us when there’s a crisis that we need to deal with.

The catch with networking is that we’re very rarely sitting around playing ping pong or cleaning our gear. In the networking world we find ourselves being pushed from one crisis to the next. A change here creates a problem there. A new application needs changes to run. An old device created a catastrophe when a new one was brought online. Someone did something without approval that prevented the CEO from watching Youtube. The list is long and distinguished. But it all requires us to fix things at the end of the day.

The problem isn’t with the fire fighting mentality. Our society wouldn’t exist without firefighters. We still have to have protection against disaster. So how is it that we can have an ordered society with relatively few firefighters versus the kinds of chaos we see in networking?

Marshal, Marshal, Marshal

The key missing piece is the fire marshal. Firefighters put out fires. Fire marshals prevent them from happening in the first place. They do this by enforcing the local fire codes and investigating what caused a fire in the first place.

The most visible reminder of the job of a fire marshal is found in any bar or meeting room. There is almost always a sign by the door stating the maximum occupancy according to the local fire code. Occupancy limits prevent larger disasters should a fire occur. Yes, it’s a bit of a bummer when you are told that no one else can enter the bar until some people leave. But the fire marshal would rather deal with unhappy patrons than with injuries or deaths in case of fire.

Other duties of the fire marshal include investigating the root cause of a fire and trying to find out how to prevent it in the future. Was it deliberate? Did it involve a faulty piece of equipment? These are all important questions to ask in order to find out how to keep things from happening all over again.

Let’s apply these ideas to the image of network professionals as firefighters. Instead of spending the majority of time dealing with the fallout from bad decisions there should be someone designated to enforce network policy and explain why those policies exist. Rather than caving to the whims of application developers that believe their program needs elevated QoS treatment, the network fire marshal can explain why QoS is tiered the way that it is and why including this new app in the wrong tier will create chaos down the road.

How about designating a network fire marshal to figure out why there was a routing table meltdown? Too often we find ourselves fixing the problem and not caring about the root cause so long as we can reassure ourselves that the problem won’t come back. A network fire marshal can do a post mortem investigation and find out which change caused the issue and how to prevent it in the future with proper documentation or policy changes.

It sounds simple and straightforward to many folks. Yet these are the things that tend to be neglected when the fires flare up and time becomes a premium resource. By having someone to fill the role of investigator and educator, we can only hope that network fires can be prevented before they start.


Tom’s Take

Networking can’t evolve if we spend the majority of our time dealing with the disasters we could have prevented with even a few minutes of education or caution. SDN seeks to prevent us from shooting off our own foot. But we can also help out by looking to the fire marshal as an example of how to prevent these network fires before they start. With some education and a bit of foresight we can reduce our workload of disaster response with disaster prevention. Imagine how much more time we would have to sit around the NOC and play ping pong?

 

CCIE at 50k: Software Defined? Or Hardware Driven?

Congratulations to Ryan Booth (@That1Guy_15) on becoming CCIE #50117. It’s a huge accomplishment for him and the networking community. Ryan has put in a lot of study time so this is just the payoff for hard work and a job well done. Ryan has done something many dream of and few can achieve. But where is the CCIE program today? And where will it be in the future?

Who Wants To Be A CCIE?

A lot of virtual ink has been committed to opinions in the past couple of years about how the CCIE is becoming increasingly irrelevant in a world of software-defined, DevOps-focused, non-traditional networking teams. It has been said that the CCIE doesn’t teach modern networking concepts like programming or building networks in a world with no CLI access. While this is all true, I don’t think it diminishes the value of getting a CCIE.

The CCIE has never been about building a modern network. It has never been focused on creating anything other than a medium-sized enterprise network in the case of the routing and switching exam. It is not a test of best practices or of greenfield deployment scenarios. Instead, it has been a test of interoperability with an existing architecture. It tests the ability of the candidate to add devices and protocols to a stable existing network.

Other flavors of the CCIE test different protocols or technologies, but the idea is still the same. The only one that even comes close to requiring programming is the CCIE Collaboration, which tests the ability to customize Cisco Contact Center scripts. Otherwise, each test focuses on technology implementation and not architecture or operation.

Current logic dictates that people don’t want to take the CCIE because it doesn’t teach programming or API interaction. Yet candidates are showing up in droves. It’s almost as if the networks we have today are going to need to be maintained and built out over the coming years. These are the kinds of tasks that are well suited to a support-focused certification like the CCIE. The ideal CCIE candidate isn’t using Vagrant and Chef in a lab somewhere. They’re muddling through OSPF-to-RIP redistribution somewhere in the dark corners of a network that got welded on after an acquisition.
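
For flavor, here is a hedged sketch of that kind of brownfield task: pushing mutual OSPF/RIP redistribution to an IOS router with the netmiko library. The device details are hypothetical and the seed metrics would need tuning for any real network.

```python
# A hedged sketch of a brownfield task a CCIE candidate drills on: mutual
# OSPF/RIP redistribution pushed to a Cisco IOS router. Device details are
# hypothetical; the metric values are illustrative only.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "10.10.10.1",          # hypothetical router
    "username": "admin",
    "password": "admin",
}

redistribution = [
    "router ospf 1",
    "redistribute rip subnets metric 20 metric-type 2",
    "router rip",
    "redistribute ospf 1 metric 4",
]

with ConnectHandler(**device) as conn:
    output = conn.send_config_set(redistribution)
    conn.save_config()
    print(output)
```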

Is Everyone A CCIE?

One thing I have noticed about the CCIE is that the climb in numbers seems to have leveled off. It’s not the rapid explosion of certifications that it has been in the past, nor is it the eventual cliff of increased difficulty. Things seem to be marching more toward steady growth. I don’t know how much of that can be attributed to factors like the Cisco official CCIE training program or the upgrade to version 5 almost two years ago.

Lots of CCIEs doesn’t necessarily mean that the test has lost meaning. Microsoft had several thousand MCSEs by the time the certification became a punchline to countless call center jokes. Novell had a virtual army of Certified NetWare Engineers (CNEs) before software changes locked many of them into CNE 5 or CNE 6. Having a lot of certified individuals doesn’t devalue the certification. It’s what people do with it that creates the reputation. Ask any Novell Certified Directory Engineer (CDE) about the reputation garnered by a test and they can give you a lesson in hard exams that breed bright engineers.

Does that mean that we should brace ourselves for even more CCIEs in the future? It likely won’t be as bad as has been imagined. The written exam for version 5 has pointed out to me that Cisco is going to start closing ranks around technologies in the near future. The written exam serves as a testing ground for potential new topics on the exam. MPLS was a written topic long before it became a potential lab exam topic. The current written exam is full of technologies that make me think Cisco is starting to put more emphasis on the Cisco and less on the Internetworking in CCIE.

Cisco wants to have a legion of certified individuals that think about Cisco technology benefits. That’s why we’re starting to see a shift toward things like DMVPN and GETVPN in testing. In place of industry standard protocols, we get the Cisco improved versions. This locks candidates into the Cisco method of thinking and ensures that their go-to solutions will include some form of proprietary technology.

If this shift in thinking is really the start of the new way of certification testing, I worry for the future of the CCIE. Not because there are 50,000 CCIEs, but because the new inductees into the CCIE group will be focused on creating islands of Cisco in the sea of interoperable data center networks. That’s good for Cisco’s bottom line, but bad for the reputation of the CCIE. Could you imagine what would happen if a CCIE walked in and told you they couldn’t fix your MPLS VPN configuration issues because “I only know how to work on DMVPN”?


Tom’s Take

Every time someone I know passes the CCIE it makes me happy that they’ve completed a rigorous exam testing process. It tells me this person knows how to follow the lab instructions to create an interoperable enterprise network based on constraints. It also tells me that this person knows how to study material and doesn’t give up. Those are the kinds of people I would want in my networking group.

CCIEs are the perfect people to learn more modern network techniques like programmability and SDN. Not because they learned how to do it on their test. But because they are the kinds of people that learn well and will apply everything they have to picking up a new concept. But it needs to be pointed out here that Cisco must foster that kind of interoperable learning experience with CCIEs. Focusing too heavily on proprietary solutions to help create an army of unknowing Cisco SEs in the field will only serve to hurt Cisco in the future, when that group of certified individuals must learn to work in the post-SDN world of networking.

 

The Blame Pipeline

Talk to any modern IT person about shifting the landscape of how teams work and I can guarantee you that you’ll hear a bit about DevOps as well as “siloed” organizational structures. Fingers get pointed in all directions as to the real culprit behind dysfunctional architecture. Perhaps changing the silo term to something more appropriate will help organizations sort out where the real issues lie.

You Dropped A Bomb On Me

Silos, or stovepipes, are an artifact of reporting structures of days gone by. Greg Ferro (@EtherealMind) has a great piece on the evils of ITIL. In it, he talks about how the silo structure creates blame passing issues and lack of responsibility for problem determination and solving.

I think Greg is spot on here. But I also think that the love of blame extends in the other direction too. It is one thing to have the storage team telling everyone that the arrays are working so it’s not their problem. It’s another issue entirely when the CxO-level folks come down from the High Holy Boardroom to hunt for heads when something goes wrong. They aren’t looking to root out the cause of the issue. They want someone to blame.

That’s where the silo comes in. The vertical integration and focus on team disciplines means the CTO/CIO/CFO get to find out which particular technology was responsible for the issue and blame those people. Modern IT organizations revel in blame assignment. What they want more than anything is to find out who is responsible and berate them until they feel vindicated.

Silos aren’t organizational structures. They are pipelines that allow management to find “responsible” parties and get their retribution for any problems that happen. The focus lies solely on punishing the guilty rather than correcting the problem. Which leads to people not wanting to be challenged outside of their team environment for fear that they’ll get double the blame when something goes wrong.

Trans-Organizational Pipeline

How can we fix this issue with silos and stovepipes? Think for a moment about the physical structures that we use to model them. Silos and stovepipes are all vertical. They might as well be using sewer pipes for a visual representation. After all, the crap does flow downhill.

A better model for the modern IT environment should be the horizontal pipeline. These would be more like oil transport pipelines or other precious materials. Rather than focus on storing things vertically, horizontal pipelines are all about getting raw materials to a processing facility. There isn’t time to waste along the way. Product moves from one location to the next rapidly.

That’s how teams should function. Resources should be allocated and processed. Equipment should be installed and configured. Applications should be installed and be operational. No time to waste. But who should do it? Which team gets the vote?

The reality of the world is that this kind of implementation style needs a cross-sectional horizontal team. Think about the old episodes of Mission: Impossible. The IMF didn’t pick the disguise team to infiltrate or the computer team to hack door locks. They found one or two people with the unique skills to get the job done and focused them on their task. And when the impossible mission was completed everyone went on their merry way back to real life.

In the case of IT teams, new technology products like hyperconvergence and programmable networking require the inputs of many different disciplines. Creating ad-hoc teams like the IMF can go a long way to helping stagnant IT departments rapidly embrace new technology. The rapid implementation approach leads to great things.

But what about the blame? After all, CxOs love to point fingers when things go wrong. How does a horizontal team help? The best way to treat this kind of issue is to remember that these new, smaller cross-functional teams have a much larger atomic unit than traditional silos. A CTO can blame an individual on the storage team for a failure. But in a cross-discipline team, it is the team that is responsible for success or failure. Blame, if any, resides with the team itself and not the people that comprise it. That kind of insulation helps individuals rise above the fear of committing egregious errors and instead embrace the kinds of thinking needed to make new IT happen.


Tom’s Take

The best teams don’t happen by accident. It takes effort to make a group of people across disciplines work together to make amazing things happen. The best way to make this happen is to form the team and get out of the way. No blame shifting. No segregation of skills. Just putting awesome people together and seeing what happens.

The sooner we stop putting silos around teams for ease of management the sooner we will find that people are capable of amazing feats of technology. Let’s turn those blame pipelines into something infinitely more useful.

 

This WAN Is Your WAN, This WAN Is My WAN

Ideas coalesce all the time in every vertical. You don’t really notice it until you wake up one day and suddenly everything around you looks identical. Wireless becoming the new access layer. Flash storage taking hold of the high end performance crown. And in networking we have the dominance of all things software defined. One recent development has come along much faster than anyone could have predicted: Software Defined Wide Area Networking (SD-WAN).

Automatic For The People

SD-WAN is a force in modern networking because people want simplicity. While Ivan does a great job of decoupling marketing from reality, people still believe that SD-WAN is the silver bullet that will fix all of their WAN woes. Even during the original discussions of SD-WAN technology at conferences like ONUG, the overriding idea wasn’t around tying sites together or driving down costs to the point of feasibility. It was all about making life easier.

How does SD-WAN manage to accomplish this? It’s all black box networking. Just like the fuel injector in your car. There’s no crying about interoperability or standards-based protocols. You just plug things in and it all works, even if you can’t exactly plug one vendor solution into a competitor. Lock in wins again.

The ideas behind SD-WAN aren’t exactly new. Cisco talked about SD-WAN quite a bit at Networking Field Day 10. Here’s Jeff Reed on it:

The rest of the two hour session details how Cisco is using their Intelligent WAN (IWAN) product to drive SD-WAN. The names of the components all sound very familiar to networkers: DMVPN, NBAR, PfR, and so on. That’s because SD-WAN uses a lot of tried-and-true techniques to tie the concept together. There’s nothing earth-shattering about SD-WAN under the hood. In fact, a fair number of people that work at the “pioneering” SD-WAN startups all seem to have their roots in one or more traditional networking companies.

Fables of Reconstruction

Look at the other presenters at Networking Field Day 10. Two of them announced SD-WAN solutions even though they aren’t really known for expertise in SD-WAN. One of them wasn’t even known as a branch office acceleration solution. So why the SD-WAN land rush all of the sudden? What’s behind the need to have a solution?

You probably wouldn’t be surprised to learn that a lot of investors are backing expansion into SD-WAN technologies. It’s a hot property. But why? As above, customers aren’t interested in the technical wizardry that goes into SD-WAN. They aren’t clamoring for it to supplant their current WAN solution and offer a Rosetta Stone of inter-vendor WAN cooperation. What’s behind the push?

It probably goes something like this:

  1. Technologist needs to implement WAN architecture. Is dismayed that things are so difficult.
  2. Technologist starts searching for solutions about WAN. They probably start asking friends about it.
  3. Analyst firm hears that technologists are asking about WAN solutions. Releases a questionnaire asking which technologies you’d like to learn more about.
  4. Responses to questionnaires are loaded into a graph or report that people buy because they don’t know who to talk to.
  5. Companies realize customers want WAN solutions. They break their necks to offer those solutions to keep up with demand.
  6. Investors see companies beginning to offer WAN solutions and think there’s a huge untapped market. They start funding anyone that mentions WAN in a meeting.

By the way, you can replace “WAN” with any technology above and it still works.

Thanks to customers needing a solution for something they can’t configure easily, they are going to be inundated with SD-WAN options by the time they turn around. And the biggest concern is no longer “Who has the easiest solution?” but instead, “Who is still going to be here in six months?”

Collapse Into Now

The reckoning is coming in the SD-WAN market. If a company doesn’t already have an SD-WAN solution in development or if their solution won’t see daylight for another nine months, they are going to exercise the second “B” of innovation and buy it. And they have a lot of prime targets to choose from.

Investors get cagey without an exit strategy. How are they going to win at this game? They either have to get paid with an IPO, with a later round of funding, or by having someone buy out the investment. If an investor thinks they can get their money back (plus a bit of interest) by having this little startup bought by a traditional networking vendor you can better believe they will be advising the startup to sell.

The customers are the real losers in the case of a buyout, or worse a bankruptcy. Those highly proprietary solutions become dead weight if there isn’t any support for them any longer. Black box networking falls apart when the little magical creatures inside the box go away. Which means customers will be skittish of supporting a solution that is likely to go away any time soon.

Who will you support? An established vendor slow to roll out a solution? Or an up-and-coming company with new ideas but at risk of being snapped up by a big bank account?


Tom’s Take

I loved seeing all the SD-WAN discussion at Networking Field Day 10. SD-WAN is no longer magic sauce that aggregates DSL and MPLS circuits with encryption. Nuage Networks showed off deploying Docker apps to remote sites. Riverbed talked about using their WAN optimization experience to deploy SaaS solutions through SD-WAN.

We’ve heard from SD-WAN companies in the past at Networking Field Day. It’s interesting to hear the comparisons between the upstarts and the old geezers. It’s clear there is a ton of money that is being invested in SD-WAN. The trick is to find out your needs and pick the best solution for you. Otherwise you may find yourself losing your SD-WAN religion.