A Stack Full Of It


During the recent Open Networking User Group (ONUG) Meeting, there was a lot of discussion around the idea of a Full Stack Engineer. The idea of full stack professionals has been around for a few years now. Seeing this label applied to networking and network professionals seems only natural. But it’s a step in the wrong direction.

Short Stack

Full stack means having knowledge of the many different pieces of a given area. Full stack programmers know all about development, project management, databases, and other aspects of their environment. Likewise, full stack engineers are expected to know about the network, the servers attached to it, and the applications running on top of those servers.

Full stack is a great way to illustrate how specialized things are becoming in the industry. For years we’ve talked about how hard networking can be and how we need to make certain aspects of it easier for beginners to understand. QoS, routing protocols, and even configuration management are critical items that need to be decoded for anyone in the networking team to have a chance of success. But networking isn’t the only area where that complexity resides.

Server teams have their own jargon. Their language doesn’t include routing or ASICs. They tend to talk about resource pools and patches and provisioning. They might talk about VLANs or latency, but only insofar as it applies to getting communications going to their servers. Likewise, the applications teams don’t talk about any of the above. They are concerned with databases and application behaviors. The only time the hardware below them becomes a concern is when something isn’t working properly. Then it becomes a race to figure out which team is responsible for the problem.

The concept of being a full stack anything is great in theory. You want someone who can understand how things work together and identify areas that need to be improved. The term “big picture” definitely comes to mind. Think of a general practitioner doctor. This person understands enough about basic medical knowledge to be able to fix a great many issues and help you understand how your body works. There are quite a few general doctors that do well in the medical field. But we all know that they aren’t the only kinds of doctors around.

Silver Dollar Stacks

Generalists are great people. They’ve spent a great deal of time learning many things to know a little bit about everything. I like to say that these people have mud puddle knowledge about a topic. It covers a broad area, but only a few inches deep. It can form quickly and evaporates just as fast. Contrast this with a lake or an ocean, which is far deeper but takes years or decades to form.

Let’s go back to our doctor example. General practitioners are great for a large percentage of simple problems. But when they are faced with a very specific issue they often call out to a specialist doctor. Specialists have made their career out of learning all about a particular part of the body. Podiatrists, cardiologists, and brain surgeons are all specialists. They are the kinds of doctors you want to talk to when you have a problem with that part of your body. They will never see the high traffic of a general doctor, but they more than make up for it in their own area of expertise.

Networking has a lot of people that cover the basics. There are also a lot of people that cover the more specific things, like MPLS or routing. Those specialists are very good at what they do because they have spent the time to hone those skills. They may not be able to create VLANs or provision ports as fast as a generalist, but imagine the amount of time saved when turning up a new MPLS VPN or troubleshooting a routing loop. That time translates into real savings and reduced downtime.


Tom’s Take

The people that claim that networking needs to have full stack knowledge are the kinds of folks further up the stack that get irritated when they have to explain what they want. Server admins don’t like having to learn networking jargon just to ask for VLANs. Application developers want you to know what they mean when they say everything is slow. Full stack is just code for “learn about my job so I don’t have to learn about yours”.

It’s important to know about how other roles in the stack work in order to understand how changes can impact the entire organization. But that knowledge needs to be shared across everyone up and down the stack. People need to have basic knowledge to understand what they are asking and how you can help.

The next time someone tells you that you need to be a full stack person, ask them to come do your job for a day while you learn about theirs. Or offer to do their job for one week to learn about their part of the stack. If they don’t recoil in horror at the thought of you doing it, chances are they really want you to have a greater understanding of things. More likely they just want you to know how hard they work and why you’re so difficult to understand. Stop telling us that we need full stack knowledge and start making the stacks easier to understand.

 

How Much Is Unlimited Anyway?


The big news today came down from the Microsoft MVP Summit that OneDrive is not going to support “unlimited” cloud storage going forward. This came as a blow to folks that were hoping to store as much data as possible for the foreseeable future. The conversations have already started about how Microsoft pulled a bait-and-switch or how storage isn’t really free or unlimited. I see a lot of parallels in the networking space to this problem as well.

All The Bandwidth You Can Buy

I remember sitting in a real estate class in college talking to my professor, who was a commercial real estate agent. He told us, “The happiest day of your real estate career is the day you buy an apartment complex. The second happiest day of your career is when you sell it to the next sucker.” People are in love with the idea of charging for a service, whether it be an apartment or cloud storage and compute. They think they can raise the price every year and continue to reap the profits of ever-increasing rent. What they don’t realize is that those increases are designed to cover increased operating costs, not to put more money in their pockets.

Think about someone like Amazon. They are making money hand over fist in the cloud game. What do you think they are doing with it? Are they piling it up in a storage locker and sitting on it like a throne? Or lighting cigars with $100 bills? The most likely answer is that they are plowing those profits back into increasing capacity and offerings to attract new customers. That’s what customers want. Amazon can take some profit from the business but if they stop expanding customers will leave to find another service that meets their needs.

Bandwidth in networks is no different. I worked for IBM as an intern many years ago. Our site upgraded their Internet connection to a T3 to support the site. We were informed just a few months after the upgrade that all the extra bandwidth we’d installed was being utilized at more than 90%. It took almost no time for the users to find out there was more headroom available and consume it.

The situation with bandwidth today is no different. Application developers assume that storage and bandwidth are effectively unlimited. They create huge application packages that load every conceivable library or function for the sake of execution speed. Networking and storage pay the price to make things faster. Apps take up lots of space and take forever to download even a simple update. The situation keeps getting worse with every release.

Slimming the Bandwidth Pipeline

Some companies are trying to take a look at how to keep this bloat from exploding. Facebook has instituted a policy that restricts bandwidth on Tuesdays to show developers what browsing at low speeds really feels like. They realize that not everyone in the world has access to ultra-fast LTE or wireless.

Likewise, Amazon realizes that on-boarding data to AWS can be painful if there are hundreds of gigabytes or even a few terabytes to migrate. They created Snowball, a ruggedized storage appliance that you load up on-site and ship back to Amazon to import into their cloud. It’s a decidedly low tech solution to a growing problem.

Networking professionals know that bandwidth isn’t unlimited. Upgrades and additional capacity cost money. Service providers have the same limitations as regular networks. If you want more bandwidth than they can provide, you are out of luck. If you’re willing to pay through the nose providers are happy to build out solutions for you. You’re providing the capital investment for their expansion. Everything costs money somehow.


Tom’s Take

“Unlimited” is a marketing lie. Whether it’s unlimited nights and weekends, unlimited mobile data, or unlimited storage, nothing is truly infinite. Companies want you to take advantage of their offerings to sell you something else. Free services are supported by advertising or upsell opportunities. Providers continue to be shocked when they offer something with no reasonable limit and find that a small percentage of the user base is going to take advantage of their mistake.

Rather than offering false promises of unlimited things, providers should be up front. They should have plans that offer large storage amounts with conditions that make it clear that large consumers of those services will face restrictions. People that want to push the limit and download hundreds of gigabytes of mobile data or store hundreds of terabytes of data in the cloud should know up front that they will be singled out for special treatment. Believable terms for services beat the lies of no limits every time.

Who Wants To Save Forever?


At the recent SpectraLogic summit in Boulder, much of the discussion centered around the idea of storing data and media in perpetuity. Technology has arrived at the point where it is actually cheaper to keep something tucked away rather than trying to figure out whether or not it should be kept. This is leading to a huge influx of media resources being available everywhere. The question now shifts away from storage and to retrieval. Can you really save something forever?

Another One Bites The Dust

Look around your desk. See if you can put your hands on each of the following:

* A USB Flash drive
* A DVD-RW
* A CD-ROM
* A Floppy Disk (bonus points for 5.25")

Odds are good that you can find at least three of those four items. Each of those items represents a common way of saving files in a removable format. I’m not even trying to cover all of the formats that have been used (I’m looking at you, ZIP drives). Each of these formats has been tucked away in a backpack or given to a colleague at some point to pass files back and forth.

Yet, each of these formats has been superseded sooner or later by something better. Floppies were ultraportable but held very little. CD-ROMs held much more, but couldn’t be re-written without effort. DVD media never really got the chance to take off before bandwidth eclipsed the capacity of a single disc. And USB drives, while the removable media du jour, are mainly used when you can’t connect wirelessly.

Now, with cloud connectivity the idea of having removable media to share files seems antiquated. Instead of copying files to a device and passing it around between machines, you simply copy those files to a central location and have your systems look there. And capacity is very rarely an issue. So long as you can bring new systems online to augment existing storage space, you can effectively store unlimited amounts of data forever.

But how do we extract data from old devices to keep in this new magical cloud? Saving media isn’t that hard. But getting it off the source is proving to be harder than one might think.

Take video for instance. How can you extract data from an old 8mm video camera? It’s not a standard size to convert to VHS (unless you can find an old converter at a junk store). There are a myriad of ways to extract the data once you get it hooked up to an input device. But what happens if the source device doesn’t work any longer? If your 8mm camera is broken you probably can’t extract your media. Maybe there is a service that can do it, but you’re going to pay for that privilege.

I Want To Break Free

Assuming you can even extract the source media files for storage, we start running into another issue. Once I’ve saved those files, how can I be sure that I can read them fifty years from now? Can I even be sure I can read them five years from now?

Data storage formats are a constantly-evolving discussion. All you have to do is look at Microsoft Office. Office is the most popular workgroup suite in the entire world. All of those files have to be stored in a format that allows them to be read. One might be forgiven for assuming that Microsoft Word document formats are all the same or at least similar enough to be backwards compatible across all versions.

In reality, each new version of the format includes a few new pieces that break backwards compatibility. Instead of leveraging new features like smaller file sizes or increased readability, we are forced to continue using old formats like Word 97-2003 in order to ensure the file can be read by whomever it’s sent to for review.

Even the most portable of formats suffers from this malady. Portable Document Format (PDF) was designed by Adobe to be an application-independent way to display files using a printing descriptor language. This means that saving a file as a PDF on one system makes it readable on a wide variety of systems. PDF has become the de facto way to share files back and forth.

Yet it can suffer from format issues as well. PDF creation software like Adobe Acrobat isn’t immune from causing formatting problems. Files saved with certain attributes can only be read by updated versions of reader software that can understand them. The idea of a portable format only works when you restrict the descriptors available to the lowest common denominator so that all readers can display the format.

Part of this issue comes from the idea that companies feel the need to constantly “improve” things and force users to continue to upgrade software to be able to read the new formats. While Adobe has offered the PDF format to ISO for standardization, adding new features to the process takes time and effort. Adobe would rather have you keep buying Acrobat to make PDFs and downloading new versions of Reader to decode those new files. It’s a win-win situation for them and not as much of one for the consumers of the format.


Tom’s Take

I find it ironic that we have spent years of time and millions of dollars trying to find ways to convert data away from paper and into electronic formats. Yet the papers that we converted years ago are more readable than the data that we stored in the cloud. The only limitation of paper is how long the actual paper can last before being obliterated.

Think of the Rosetta Stone or the Code of Hammurabi. We know about these things because they were etched into stone. Literally. Yet, in the case of the Rosetta Stone we ran into file format issues. It wasn’t until we could translate the Egyptian hieroglyphs through the Greek text that we were able to read them. If you want your data to stand the test of time, you need to think about more than the cloud. You need to make sure that you can retrieve and read it as well.

My Thoughts on Dell, EMC, and Networking


The IT world is buzzing about the news that Dell is acquiring EMC for $67 billion. Storage analysts are talking about the demise of the 800-lb gorilla of storage. Virtualization people are trying to figure out what will happen to VMware and what exactly a tracking stock is. But very little is going on in the networking space. And I think that’s going to be a place where some interesting things are going to happen.

It’s Not The Network

The appeal of the Dell/EMC deal has very little to do with networking. EMC has never had any form of enterprise networking, even if they were rumored to have been looking at Juniper a few years ago. The real networking pieces come from VMware and NSX. NSX is a pure software implementation of overlay networking for virtualized environments.

Dell’s networking team was practically nonexistent until the Force10 acquisition. Since then there has been a lot of work in building a product to support Dell’s data center networking aspirations. Good work has been done on the hardware front. The software on the switches has had some R&D done internally, but the biggest gains have been in partnerships. Dell works closely with Cumulus Networks and Big Switch Networks to provide alternative operating systems for their networking hardware. This gives users the ability to experiment with new software on proven hardware.

Where does the synergy lie here? Based on a conversation I had on Monday there are some that believe that Cumulus is a loser in this acquisition. The idea is that Dell will begin to use NSX as the primary data center networking piece to drive overlay adoption. Companies that have partnered with Dell will be left in the cold as Dell embraces the new light and way of VMware SDN. Interesting idea, but one that is a bit flawed.

Maybe It’s The Network

Dell is going to be spending a lot of time integrating EMC and all of its federation companies. Business needs to continue going forward in other areas besides storage. Dell Networking will see no significant changes in the next six months. Life goes on.

Moving forward, Dell Networking is still an integral piece of the data center story. As impressive as software networking can be, servers still need to plug into something. You can’t network a server without a cable. That means hardware is still important even at a base level. That hardware needs some kind of software to control it, especially in the NSX model, where no centralized controller decides how flows will operate on leaf switches. That means that switches will still need operating systems.

The question then shifts to whether Dell will invest heavily in R&D for expanding FTOS and PowerConnect OS or if they will double down on their partnership with Cumulus and Big Switch and let NSX do the heavy lifting above the fray. The structure of things would lead one to believe that Cumulus will get the nod here, as their OS is much more lightweight and enables basic connectivity and control of the switches. Cumulus can help Dell integrate the switch OS into monitoring systems and put more of the control of the underlay network at the fingertips of the admins.

I think Dell is going to be so busy integrating EMC into their operations that the non-storage pieces are going to be starved for development dollars. That means more reliance on partnerships in the near term. Which begets a vicious cycle that causes in-house software to fall further and further behind. Which is great for the partner, in this case Cumulus.

By putting Dell Networking into all the new offerings that should be forthcoming from a combined Dell/EMC, Dell is putting Cumulus Linux in a lot of data centers. That means familiarizing networking folks with it more and more. Even if Dell decides not to renew the Cumulus partnership after EMC and VMware are fully ingested, it means that the install base of Cumulus will be bigger than it would have been otherwise. When those devices are up for refresh, the investigation into replacing them with Cumulus-branded equipment could generate big wins for Cumulus.


Tom’s Take

Dell and EMC are going to touch every facet of IT when they collide. Between the two of them they compete in almost every aspect of storage, networking, and compute as well as many of the products that support those functions. Everyone is going to face rapid consolidation from other companies banding together to challenge the new 800-lb gorilla in the space.

Networking will see less impact from this merger but it will be important nonetheless. If nothing else, it will drive Cisco to start acquiring at a faster rate to keep up. It will also allow existing startups to make a name for themselves. There’s even the possibility of existing networking folks leaving traditional roles and striking out on their own to found startups to explore new ideas. The possibilities are limitless.

The Dell/EMC domino is going to make IT interesting for the next few months. I can’t wait to see how the chips will fall for everyone.

Premise vs. Premises


If you’ve listened to a technology presentation in the past two years that included discussion of cloud computing, you’ve probably become embroiled in the ongoing war of the usage of the word premises or the shift of people using the word premise in its stead. This battle has raged for many months now, with the premises side of the argument refusing to give ground and watch a word be totally redefined. So where is this all coming from?

The Premise of Your Premises

The etymology of these two words is actually linked, as you might expect. Premise is the first to appear in the late 14th century. It traces from the Old French premisse which is derived from the Medieval Latin premissa, which are both defined as “a previous proposition from which another follows”.

The appearance of premises comes from the use of premise in legal documents in the 15th century. In those documents, a premise was a “matter previously stated”. More often than not, that referred to some kind of property like a house or a building. Over time, that came to be known as a premises.

Where the breakdown starts happening is recently in technology. We live in a world where brevity is important. The more information we can convey in a brief period the better we can be understood by our peers. Just look at the walking briefing scenes from The West Wing to get an idea of how we must compress and rapidly deliver ideas today. In an effort to save precious syllables during a presentation, I’m sure some CTO or Senior Engineer compressed premises into premise. And as we often do in technology, this presentation style and wording was copied ad infinitum by imitators and competitors alike.

Now, we stand on the verge of premise being redefined. This has a precedent in recent linguistics. The word literally has recently been changed from the standard definition of “in a literal sense”, describing something as it actually happened, into an informal usage “emphasizing strong feeling while not being literally true”. This change has grammar nerds and linguistics people at odds. Some argue that language evolves over time to include new meanings. Others claim that changing a word to be defined as the exact opposite meaning is a perversion and is wrong.

The Site of Your Ideas

Perhaps the real solution to this problem is to get rid of the $2 words when a $.50 word will do just fine. Instead of talking about on-premises cloud deployments, how about referring to them as on-site? Instead of talking about the premise behind creating a hybrid cloud, why not refer to the idea behind it (especially when you consider that the strict definition of premise doesn’t really mean idea).

By excising these words from your vocabulary now, you eliminate the risk of using them improperly. You even get to save a syllable here and there. If word economy is truly the goal, the aim should be to use the most precise word with the least amount of effort. If you are parroting a presentation from Amazon or Google and keep referring to on-premise computing, you are doing a disservice to the people listening to you who will carry your message forward to new groups of listeners.


Tom’s Take

If you’re going to insist on using premises and premise, please make sure you get them right. It takes less than a second to add the missing “s” to the end of that idea and make it a real place. Otherwise you’re going to come off sounding like you don’t know what you’re talking about.

Instead, let’s move past using these terms and get back to something simpler and more straightforward. Sites can never be confused for ideas. It may be more direct and less flashy to say on-site, but you never have to worry about using the wrong term or getting the grammarians on your bad side. And that’s a premise worth believing in.

 

Tips For Presenting On Video


Giving a presentation is never an easy thing for a presenter. There’s a lot that you have to keep in mind, like pacing and content. You want to keep your audience in mind and make sure you’re providing value for the time they are giving you.

But there is usually something else you need to keep in mind today. Most presentations are being recorded for later publication. When presenting for an audience that has a video camera or two, there are a few other things you want to keep in mind on top of every other thing you are trying to keep track of.

Tip 1: Introduce Early. And Often

One of the things you really need to keep in mind for recorded presentations is time. If the videos are going to be posted to YouTube after the event, the length of your presentation is going to matter. People that stumble across your talk aren’t going to want to watch an hour or two of slide discussion. A fifteen-minute overview of a topic works much better from a video perspective.

Keeping that in mind, you should start every section of your presentation with an introduction. Tell everyone who you are and what you do. That establishes credibility with the audience. It also helps the viewer figure out who you are right away. Sometimes not knowing who is talking distracts enough that people get lost and miss content. Never rely on a lower third to do something you are capable of taking five seconds to say.

Note that if you decide to go down this road, you need to make sure your audience is aware of what you’re doing. Someone might find it off-putting that you’re introducing yourself twenty minutes after you just did. But you don’t want to turn it into a parody or humor point. Just be clear about why you’re doing it and make it quick and clean.

Tip 2: Take A Deep Breath

When you are transitioning between sections, one of the worst things you can do is try to fill time with idle conversation. People are hard-wired to insert filler into conversations. Conquering that compulsion is a difficult task but very worth it in the end.

One of the reasons why getting rid of filler is important for a video recording is the editing around an introduction. If you start your introduction with, “Um, uh, hello there, um, my name is, uh, Tom Hollingsworth…”, all that unnecessary filler does nothing but distract from a clean statement of who you are.

The easiest way to do this is to take a deep breath before you start speaking. By clearing your mind before you open your mouth, you are much less likely to insert filler words in an effort to keep the conversation flowing. Another good technique that news reporters use is the countdown. Yes, it does serve a purpose for the editor to know when to start the clip. But it also helps the reporter focus on when to start and collect their faculties before speaking.

Try it yourself next time. When you’re ready to make a clean transition, just insert a single second of silence while you take a breath. Odds are great that you’ll find your transitions much more appealing.

Tip 3: Questions Anyone?

This one is a bit trickier. The best presentation model works from the idea that the audience should ask questions during a presentation instead of after it. By having a question closely tied to an idea, people are much more likely to remember it and find it relevant. This is especially true on video, as the viewer can rewind and listen to the question and answer a couple of times.

But what about those questions that aren’t exactly tied to a specific idea or cover a topic not discussed? That’s where the final Q&A period comes in. You want to make sure to capture any final thoughts from the audience. But since this is all about the video you also want to make sure you don’t cut someone off with a premature close out.

When you ask for final questions, make sure you take a few seconds and visually glance around the room. Silence is very easy to cut out of a video. But it’s much harder to cut out someone saying “Okay, so there are no more questions…” followed by someone asking a question. It’s much better to take the extra time to make sure there are no questions or comments. The extra seconds are more than worth it.


Tom’s Take

I get to see both sides of the presentation game, whether I’m presenting at Tech.UNPLUGGED this week or editing a video from Tech Field Day. I find that presenting for a live audience while staying aware of the things that make a video useful and successful is an important skill to master on today’s speaking circuit.

It doesn’t take a lot of additional effort to make your presentation video-ready. A little practice and you’ll have it down in no time flat.

Network Firefighters or Fire Marshals?


Throughout my career as a network engineer, I’ve heard lots of comparisons to emergency responders thrown around to describe what the networking team does. Sometimes we’re the network police that bust offenders of bandwidth policies. Other times there is the Network SWAT Team that fixes things that get broken when no one else can get the job done. But over and over again I hear network admins and engineers called “fire fighters”. I think it’s time to change how we look at fighting fires on the network.

Fight The Network

The president of my old company used to try to motivate us to think beyond our current job roles by saying, “We need to stop being firefighters.” It was absolutely true. However, the sentiment lacked some of the important details of what exactly a modern network professional actually does.

Think about your job. You spend most of your time implementing change requests and trying to fix things that don’t go according to plan. Or figuring out why a change six months ago suddenly decided today to create a routing loop. And every problem you encounter is a huge one that requires an “all hands on deck” mentality to fix it immediately.

People look at networks as a closed system that should never break or require maintenance of any kind, right up until the day it does break. There’s a reason why a popular job description (and Twitter handle) involves describing your networking job as a plumber or janitor. We’re the response personnel that get called out on holidays to fix a problem that creates a mess for other people that don’t know how to fix it.

And so we come to the fire fighter analogy. The idea of a group of highly trained individuals sitting around the NOC (or firehouse) waiting for the alarm to go off. We scramble to get the problem fixed and prevent disaster. Then go back to our NOC and clean up to do it all over again. People think about networking professionals this way because they only really talk to us when there’s a crisis that we need to deal with.

The catch with networking is that we’re very rarely sitting around playing ping pong or cleaning our gear. In the networking world we find ourselves being pushed from one crisis to the next. A change here creates a problem there. A new application needs changes to run. An old device created a catastrophe when a new one was brought online. Someone did something without approval that prevented the CEO from watching YouTube. The list is long and distinguished. But it all requires us to fix things at the end of the day.

The problem isn’t with the firefighting mentality. Our society wouldn’t exist without firefighters. We still have to have protection against disaster. So how is it that society functions with relatively few firefighters while networking sees so much chaos?

Marshal, Marshal, Marshal

The key missing piece is the fire marshal. Firefighters put out fires. Fire marshals prevent them from happening in the first place. They do this by enforcing the local fire code and investigating the cause of any fire that does occur.

The most visible reminder of the job of a fire marshal is found in any bar or meeting room. There is almost always a sign by the door stating the maximum occupancy according to the local fire code. Occupancy limits prevent larger disasters should a fire occur. Yes, it’s a bit of a bummer when you are told that no one else can enter the bar until some people leave. But the fire marshal would rather deal with unhappy patrons than with injuries or deaths in case of fire.

Other duties of the fire marshal include investigating the root cause of a fire and trying to find out how to prevent it in the future. Was it deliberate? Did it involve a faulty piece of equipment? These are all important questions to ask in order to keep the same thing from happening all over again.

Let’s apply these ideas to the image of network professionals as firefighters. Instead of spending the majority of time dealing with the fallout from bad decisions there should be someone designated to enforce network policy and explain why those policies exist. Rather than caving to the whims of application developers that believe their program needs elevated QoS treatment, the network fire marshal can explain why QoS is tiered the way that it is and why including this new app in the wrong tier will create chaos down the road.

How about designating a network fire marshal to figure out why there was a routing table meltdown? Too often we find ourselves fixing the problem and not caring about the root cause so long as we can reassure ourselves that the problem won’t come back. A network fire marshal can do a post-mortem investigation, find out which change caused the issue, and prevent a repeat in the future with proper documentation or policy changes.
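The post-mortem part doesn’t have to be fancy. As a minimal sketch, here’s what correlating an incident with recent changes might look like, assuming change records can be pulled from a change-management system (the records, device names, and timestamps below are purely hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical change records pulled from a change-management system.
# All devices, summaries, and timestamps are illustrative only.
changes = [
    {"when": datetime(2015, 6, 20, 2, 0), "device": "core-rtr-1",
     "summary": "Redistributed statics into OSPF"},
    {"when": datetime(2015, 7, 6, 23, 30), "device": "edge-rtr-2",
     "summary": "Adjusted BGP local preference"},
    {"when": datetime(2015, 7, 7, 9, 15), "device": "core-rtr-1",
     "summary": "Removed route filter on redistribution"},
]

def suspects(changes, incident_time, lookback_days=7):
    """Return changes inside the lookback window, most recent first."""
    window_start = incident_time - timedelta(days=lookback_days)
    hits = [c for c in changes if window_start <= c["when"] <= incident_time]
    return sorted(hits, key=lambda c: c["when"], reverse=True)

# Routing table meltdown observed at 10:00 on July 7th.
incident = datetime(2015, 7, 7, 10, 0)
for c in suspects(changes, incident):
    print(c["when"], c["device"], c["summary"])
```

Even a simple lookback like this turns “the problem went away” into a short list of suspects worth investigating.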

It sounds simple and straightforward to many folks. Yet these are the things that tend to be neglected when the fires flare up and time becomes a premium resource. By having someone fill the role of investigator and educator, we have a real chance of preventing network fires before they start.


Tom’s Take

Networking can’t evolve if we spend the majority of our time dealing with disasters we could have prevented with even a few minutes of education or caution. SDN seeks to keep us from shooting ourselves in the foot. But we can also help by looking to the fire marshal as an example of how to prevent network fires before they start. With some education and a bit of foresight we can replace some of our disaster response workload with disaster prevention. Imagine how much more time we would have to sit around the NOC and play ping pong.


The Blame Pipeline


Talk to any modern IT person about shifting the landscape of how teams work and I can guarantee you that you’ll hear a bit about DevOps as well as “siloed” organizational structures. Fingers get pointed in all directions as to the real culprit behind dysfunctional architecture. Perhaps changing the silo term to something more appropriate will help organizations sort out where the real issues lie.

You Dropped A Bomb On Me

Silos, or stovepipes, are an artifact of reporting structures of days gone by. Greg Ferro (@EtherealMind) has a great piece on the evils of ITIL. In it, he talks about how the silo structure creates blame passing issues and lack of responsibility for problem determination and solving.

I think Greg is spot on here. But I think the love of blame extends in the other direction too. It is one thing to have the storage team telling everyone that the arrays are working so it’s not their problem. It’s another issue entirely when the CxO-level folks come down from the High Holy Boardroom to hunt for heads when something goes wrong. They aren’t looking to root out the cause of the issue. They want someone to blame.

That’s where the silo comes in. The vertical integration and focus on team disciplines means the CTO/CIO/CFO gets to find out which particular technology was responsible for the issue and blame those people. Modern IT organizations revel in blame assignment. What they want more than anything is to find out who is responsible and berate them until they feel vindicated.

Silos aren’t organizational structures. They are pipelines that allow management to find “responsible” parties and get their retribution for any problems that happen. The focus lies solely on punishing the guilty rather than correcting the problem. Which leads to people not wanting to be challenged outside of their team environment for fear that they’ll get double the blame when something goes wrong.

Trans-Organizational Pipeline

How can we fix this issue with silos and stovepipes? Think for a moment about the physical structures that we use to model them. Silos and stovepipes are all vertical. They might as well be using sewer pipes for a visual representation. After all, the crap does flow downhill.

A better model for the modern IT environment is the horizontal pipeline, like the ones used to transport oil and other precious materials. Rather than focusing on storing things vertically, horizontal pipelines are all about getting raw materials to a processing facility. There isn’t time to waste along the way. Product moves from one location to the next rapidly.

That’s how teams should function. Resources should be allocated and processed. Equipment should be installed and configured. Applications should be installed and be operational. No time to waste. But who should do it? Which team gets the vote?

The reality of the world is that this kind of implementation style needs a cross-sectional horizontal team. Think about the old episodes of Mission: Impossible. The IMF didn’t pick the disguise team to infiltrate or the computer team to hack door locks. They found one or two people with the unique skills to get the job done and focused them on their task. And when the impossible mission was completed everyone went on their merry way back to real life.

In the case of IT teams, new technology products like hyperconvergence and programmable networking require the inputs of many different disciplines. Creating ad-hoc teams like the IMF can go a long way to helping stagnant IT departments rapidly embrace new technology. The rapid implementation approach leads to great things.

But what about the blame? After all, CxOs love to point fingers when things go wrong. How does a horizontal team help? The best way to treat this kind of issue is to remember that these new, smaller cross-functional teams have a much larger atomic unit than traditional silos. A CTO can blame an individual on the storage team for a failure. But in a cross-discipline team, it is the team that is responsible for success or failure. Blame, if any, resides with the team itself and not the people that comprise it. That kind of insulation helps individuals rise above the fear of committing egregious errors and instead embrace the kinds of thinking needed to make new IT happen.


Tom’s Take

The best teams don’t happen by accident. It takes effort to make a group of people across disciplines work together to make amazing things happen. The best way to make this happen is to form the team and get out of the way. No blame shifting. No segregation of skills. Just putting awesome people together and seeing what happens.

The sooner we stop putting silos around teams for ease of management the sooner we will find that people are capable of amazing feats of technology. Let’s turn those blame pipelines into something infinitely more useful.


Why Are These Slides Marked Confidential?


Imagine you’re sitting in a presentation. You’re hearing some great information from the presenter and you can’t wait to share it with your colleagues or with the wider community. You are just about to say something when you look in the corner of the slide and you see…

Confidential Information

You pause for a moment and ask the presenter if this slide is a secret or if you should consider it under NDA. They respond that this slide can be shared with no restrictions and the information is publicly available. Which raises the question: Why is a public slide marked “confidential”?

I Fought The Law

The laws that govern confidential information are legion. Confidential information is a bit different from copyrighted information or intellectual property that has been patented. In most cases, confidential information is treated as a trade secret. Divulging a trade secret can be harmful, since unlike a patent, a trade secret offers no protection once the information becomes public.

A great example is the formula for Coca-Cola. If they tried to patent it they would have to write down all the ingredients. While that would protect the very specific formulation of their drink, it would also allow competitors to make a few changes and turn something extremely similar into a viable competing product. Coca-Cola chooses to protect this information by ensuring it isn’t widely known. It ranks right up there with nuclear launch codes and Star Search results.

How does the concept of trade secrets and confidential information apply to slides? Well, one of the provisions of confidential information is that distribution must be controlled somehow. This means that you can’t just hand out information to anyone walking by on the street and hope that it stays confidential. You have to control distribution through confidentiality agreements and non-disclosure agreements (NDAs).

Most of the time you see slides marked “confidential,” you have implicitly agreed to some kind of confidentiality agreement. You are either covered by an NDA from your employer or from the event you are attending. Even if you didn’t sign an agreement, it can still be argued in a court of law that you were invited to the presentation, which means the presenter was selective about who could attend. That should meet the requirements for protecting distribution of the information.

I Shouldn’t Have Said That

The other reason you see slides prominently marked as confidential is that the law says they have to be marked that way to be protected. A company can’t release information without a confidential mark and then suddenly decide after the fact that said information should have been confidential. Could you imagine a world where companies routinely try to remove sensitive information from public knowledge because it isn’t flattering? What if they could use an ex post facto declaration to restrict distribution?

Confidential information has to be treated and marked as such from the very beginning to qualify for protection. In order to make sure that there is no chance for a slip up most companies will mark anything remotely sensitive to ensure it won’t come back to bite them later.

But why put the confidential marking on slides that you’re going to show to the world? What if those slides get uploaded to the Internet and shared all over the world, as often happens? What purpose could it serve?

The reason to mark slides as confidential is to make sure you can restrict their use whenever you want. Rarely are slides uploaded by a company with a confidential marking. In order for something to be uploaded it has to be cleared through a legal department. So if there are slides out there that exist with a confidential marking it’s more likely someone uploaded them without explicit permission. Which isn’t a bad thing in general.

What if a competitor gets a copy of the slides and starts using the information? Or better yet, what if they use it in a marketing campaign against the company?

If the slide is marked “confidential”, the company can use legal means to remove the information or disallow its use. Rather than just complaining or fighting a marketing battle, heavier means can be used to take down anything embarrassing. It’s also a more lasting way to bar anyone from mentioning anything listed on a confidential slide.


Tom’s Take

I agree that the whole legal need to label everything short of your underwear as “confidential” is just plain stupid. This is the same legal system that says trademarks must be defended to be protected. But the rules are the rules. Which means that any company that wants to protect confidential information must mark it that way from the genesis of the concept. And having the ability to protect those assets also means dealing with misleading marks long after the information has entered the wild. Just make sure you ask the right questions before divulging anything that could be considered confidential.


SDN and the Trough Of Understanding

gartner_net_hype_2015

An article published this week referenced a recent Hype Cycle diagram (pictured above) from the oracle of IT – Gartner. While the lede talked a lot about the apparent “death” of Fibre Channel over Ethernet (FCoE), there was also a lot of time devoted to discussing SDN’s arrival at the Trough of Disillusionment. Quoting directly from the oracle:

Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.

As SDN approaches this dip in the Hype Cycle it would seem that the steam is finally being let out of the Software Defined Bubble. The Register article mentions how people are going to leave SDN by the wayside and jump on the next hype-filled networking idea, likely SD-WAN given the amount of discussion it has been getting recently. Do you know what this means for SDN? Nothing but good things.

Software Defined Hammers

Engineers have a chronic case of Software Defined Overload. SD-anything ranks right up there with Fat Free and New And Improved as the Most Overused Marketing Terms. Every solution released in the last two years has been software defined somehow. Why? Because that’s what marketing people think engineers want. Put Software Defined in the product and people will buy it hand over fist. Guess what Little Tommy Callahan has to say about that?

There isn’t any disillusionment in this little bump in the road. Quite the contrary. This is where the rubber meets the road, so to speak. This is where all the pretenders to the SDN crown find out that their solutions aren’t suited for mass production. Or that their much-vaunted hammer doesn’t have any nails to drive. Or that their hammer can’t drive a customer’s screws or rivets. And those pretenders will move on to the next hype bubble, leaving the real work to companies that have working solutions and real products that customers want.

This is no different than every other “hammer and nail” problem from the past few decades of networking. Whether it be ATM, MPLS, or any one of a dozen “game changing” technologies, the reality is that each of these solutions went from being the answer to every problem to being a specific solution for specific problems. Hopefully we’ve gotten SDN to this point before someone develops the software defined equivalent of LANE.

The Software Defined Road Ahead

Where does SD-technology go from here? Well, without marketing whipping everyone into a Software Defined Frenzy, the future is whatever developers want to make of it. Developers that come up with solutions. Developers that integrate SDN ideas into products and quietly sell them for specific needs. People that play the long game rather than hope that they can take over the world in a day.

Look at IPv6. It solves so many problems we have with today’s Internet. Not just IP exhaustion issues either. It solves issues with security, availability, and reachability. Yet we are just now starting to deploy it widely thanks to the panic of the IPocalypse. IPv6 did get a fair amount of hype twenty years ago when it was unveiled as the solution to every IP problem. After years of mediocrity and being derided as unnecessary, IPv6 is poised to finally assume its role.

SDN isn’t going to take nearly as long as IPv6 to come into play. What is going to happen is a transition away from Software Defined as the selling point. Even today we’re starting to see companies move away from SD labeling and instead use more specific terms that help customers understand what’s important about a solution and how it will help them. That’s what is needed to clear up the confusion and reduce the fatigue.