A Voyage of Discover-E



I’m very happy to be attending the first edition of Hewlett-Packard Enterprise (HPE) Discover in London next week. I say the first edition because this is the first major event being held since the reaving of HP Inc from Hewlett-Packard Enterprise. I’m hopeful for some great things to come from this.

It’s The Network (This Time)

One of the most exciting things for me is seeing how HPE is working on their networking department. With the recent news about OpenSwitch, HPE is trying to shift the way of thinking about a switch operating system in a big way. To quote my friend Chris Young:

Vendors today spend a lot of effort re-writing 80% of their code and focus on innovating on the 20% that makes them different. Imagine how much further we’d be if that 80% required no effort at all?

OpenSwitch has some great ideas, like using the Open vSwitch database (OVSDB) as the central system state database that every other process reads from and writes to. I would love to see more companies use this model going forward. It makes a lot of sense and can provide significant benefits. Time will tell if other vendors recognize this and start using portions of OpenSwitch in their projects. But for now it’s interesting to see what is possible when someone takes a leap of open-sourced faith.
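
To make the pattern concrete, here’s a minimal, purely illustrative Python sketch of the central-database idea. This is a toy stand-in under my own assumptions, not OpenSwitch’s actual OVSDB code: every feature daemon publishes its state to one shared database and reacts to changes there, rather than talking to other daemons directly.

```python
# Toy illustration of the "central system database" model (an assumption-laden
# sketch, not actual OpenSwitch/OVSDB code): feature daemons never call each
# other directly, they only read and write shared state in one database.

import threading
from typing import Callable, Dict, List


class CentralStateDB:
    """Stand-in for an OVSDB-style system state database."""

    def __init__(self) -> None:
        self._tables: Dict[str, dict] = {}
        self._subscribers: List[Callable[[str, dict], None]] = []
        self._lock = threading.Lock()

    def subscribe(self, callback: Callable[[str, dict], None]) -> None:
        """Register a daemon to be notified of any state change."""
        self._subscribers.append(callback)

    def write(self, table: str, row: dict) -> None:
        """Publish state; every subscriber sees the change."""
        with self._lock:
            self._tables.setdefault(table, {}).update(row)
        for notify in self._subscribers:
            notify(table, row)


def lldp_daemon(db: CentralStateDB) -> None:
    # A protocol daemon writes what it learns into the database and nothing else.
    db.write("neighbors", {"swp1": "leaf02"})


def management_daemon(table: str, row: dict) -> None:
    # A management daemon reacts to changes without knowing which daemon made them.
    print(f"state change in {table}: {row}")


db = CentralStateDB()
db.subscribe(management_daemon)
lldp_daemon(db)  # prints: state change in neighbors: {'swp1': 'leaf02'}
```

The appeal is that the hard 80% (state sharing, persistence, notifications) lives in one well-tested place, and each vendor’s differentiating 20% is just another daemon bolted onto the database.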

I’m also excited to hear from Aruba, a Hewlett-Packard Enterprise company (Aruba), and see what new additions they’ve made to their portfolio. The interplay between Aruba and the new HPE Networking will be interesting to follow. I have seen more engagement and discussion coming from HPE Networking now that Aruba has begun integrating themselves into the organization. It’s exciting to have conversations with people involved in the vendor space about what they’re working on. I hope this trend continues with HPE in all areas and not just networking.

Expected To Do Your Duty

HPE is sailing into some very interesting waters. Splitting off the consumer side of the brand does allow the smaller organization to focus on the important things that enterprises need. This isn’t a divestiture. It’s cell mitosis. The behemoth that was HP needed to divide to survive.

I said a couple of weeks ago that no company would ever be IBM again.

To which it was quickly pointed out that HPE is doing just that. I agree that their effort is impressive. But this is the first time that HP has tried to cut itself to pieces. IBM has done it over and over again. I would amend my original statement to say that no company will be IBM again, including IBM. What you and I think of today as IBM isn’t what Tom Watson built. It’s the remnants of IBM Global Services with some cloud practice acquisitions. The server and PC businesses that made IBM a household name are gone now.

The lesson for HPE as they try to find their identity in the new post-cleaving world is to remember what people liked about HP in the enterprise space and focus on keeping that goodwill going. Create a nucleus that allows the brand to continue to build and innovate in new and exciting ways without letting people forget what made you great in the first place.

Tom’s Take

I’m excited to see what HPE has in store for this Discover. There are no doubt going to be lots of product launches and other kinds of things to pique my interest about the direction the company is headed. I’m impressed so far with the changes and the focus back to what matters. I hope the momentum continues to grow into 2016 and the folks behind the wheel of the HPE ship know how to steer into the clear water of success. Here’s hoping for clear skies and calm seas ahead for the good ship Hewlett-Packard Enterprise!



A Stack Full Of It


During the recent Open Networking User Group (ONUG) Meeting, there was a lot of discussion around the idea of a Full Stack Engineer. The idea of full stack professionals has been around for a few years now. Seeing this label applied to networking and network professionals seems only natural. But it’s a step in the wrong direction.

Short Stack

Full stack means having knowledge of the many different pieces of a given area. Full stack programmers know all about development, project management, databases, and other aspects of their environment. Likewise, full stack engineers are expected to know about the network, the servers attached to it, and the applications running on top of those servers.

Full stack is a great way to illustrate how specialized things are becoming in the industry. For years we’ve talked about how hard networking can be and how we need to make certain aspects of it easier for beginners to understand. QoS, routing protocols, and even configuration management are critical items that need to be decoded for anyone in the networking team to have a chance of success. But networking isn’t the only area where that complexity resides.

Server teams have their own jargon. Their language doesn’t include routing or ASICs. They tend to talk about resource pools and patches and provisioning. They might talk about VLANs or latency, but only insofar as it applies to getting communications going to their servers. Likewise, the applications teams don’t talk about any of the above. They are concerned with databases and application behaviors. The only time the hardware below them becomes a concern is when something isn’t working properly. Then it becomes a race to figure out which team is responsible for the problem.

The concept of being a full stack anything is great in theory. You want someone that can understand how things work together and identify areas that need to be improved. The term “big picture” definitely comes to mind. Think of a general practitioner doctor. This person understands enough about basic medical knowledge to be able to fix a great many issues and help you understand how your body works. There are quite a few general doctors that do well in the medical field. But we all know that they aren’t the only kinds of doctors around.

Silver Dollar Stacks

Generalists are great people. They’ve spent a great deal of time learning many things to know a little bit about everything. I like to say that these people have mud puddle knowledge about a topic. It covers a broad area, but it’s only a few inches deep. It can form quickly and evaporates just as fast. Contrast this with a lake or an ocean, which is far deeper but takes years or decades to form.

Let’s go back to our doctor example. General practitioners are great for a large percentage of simple problems. But when they are faced with a very specific issue they often call out to a specialist doctor. Specialists have made their career out of learning all about a particular part of the body. Podiatrists, cardiologists, and brain surgeons are all specialists. They are the kinds of doctors you want to talk to when you have a problem with that part of your body. They will never see the high traffic of a general doctor, but they more than make up for it in their own area of expertise.

Networking has a lot of people that cover the basics. There are also a lot of people that cover the more specific things, like MPLS or routing. Those specialists are very good at what they do because they have spent the time to hone those skills. They may not be able to create VLANs or provision ports as fast as a generalist, but imagine the amount of time saved when turning up a new MPLS VPN or troubleshooting a routing loop. That time translates into real savings and reduced downtime.

Tom’s Take

The people that claim that networking needs to have full stack knowledge are the kinds of folks further up the stack that get irritated when they have to explain what they want. Server admins don’t like having to learn networking jargon just to ask for VLANs. Application developers want you to know what they mean when they say everything is slow. Full stack is just code for “learn about my job so I don’t have to learn about yours”.

It’s important to know about how other roles in the stack work in order to understand how changes can impact the entire organization. But that knowledge needs to be shared across everyone up and down the stack. People need to have basic knowledge to understand what they are asking and how you can help.

The next time someone tells you that you need to be a full stack person, ask them to come do your job for a day while you learn about theirs. Or offer to do their job for one week to learn about their part of the stack. If they don’t recoil in horror at the thought of you doing it, chances are they really want you to have a greater understanding of things. More likely they just want you to know how hard they work and why you’re so difficult to understand. Stop telling us that we need full stack knowledge and start making the stacks easier to understand.


How Much Is Unlimited Anyway?


The big news today came down from the Microsoft MVP Summit that OneDrive is not going to support “unlimited” cloud storage going forward. This came as a blow to folks that were hoping to store as much data as possible for the foreseeable future. The conversations have already started about how Microsoft pulled a bait-and-switch or how storage isn’t really free or unlimited. I see a lot of parallels in the networking space to this problem as well.

All The Bandwidth You Can Buy

I remember sitting in a real estate class in college talking to my professor, who was a commercial real estate agent. He told us, “The happiest day of your real estate career is the day you buy an apartment complex. The second happiest day of your career is when you sell it to the next sucker.” People are in love with the idea of charging for a service, whether it be an apartment or cloud storage and compute. They think they can raise the price every year and continue to reap the profits of ever-increasing rent. What they don’t realize is that those increases are designed to cover increased operating costs, not increased money in your pocket.

Think about someone like Amazon. They are making money hand over fist in the cloud game. What do you think they are doing with it? Are they piling it up in a storage locker and sitting on it like a throne? Or lighting cigars with $100 bills? The most likely answer is that they are plowing those profits back into increasing capacity and offerings to attract new customers. That’s what customers want. Amazon can take some profit from the business but if they stop expanding customers will leave to find another service that meets their needs.

Bandwidth in networks is no different. I worked for IBM as an intern many years ago. Our site upgraded its Internet connection to a T3. We were informed that, just a few months after the upgrade, all the extra bandwidth we’d installed was being utilized at more than 90%. It took almost no time for the users to find out there was more headroom available and consume it.

The situation with bandwidth today is no different. Application developers assume that storage and bandwidth are unlimited, or at least abundant. They create huge application packages that load every conceivable library or function for the sake of execution speed. Networking and storage pay the price to make things faster. Apps take up lots of space and take forever to download even a simple update. The situation keeps getting worse with every release.

Slimming the Bandwidth Pipeline

Some companies are trying to take a look at how to keep this bloat from exploding. Facebook has instituted a policy that restricts bandwidth on Tuesdays to show developers what browsing at low speeds really feels like. They realize that not everyone in the world has access to ultra-fast LTE or wireless.

Likewise, Amazon realizes that on-boarding data to AWS can be painful if there are hundreds of gigabytes or even a few terabytes to migrate. They created Snowball, essentially a ruggedized, high-capacity storage appliance that you load up on-site and ship back to Amazon to import into their cloud. It’s a decidedly low tech solution to a growing problem.
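
A quick back-of-the-envelope calculation shows why. The numbers below are my own illustrative assumptions (a dedicated 1 Gbps link running at 80% efficiency), not AWS figures, but the shape of the result holds:

```python
# Back-of-the-envelope: how long does it take to upload bulk data?
# The link speed and efficiency are illustrative assumptions, not AWS specs.

def days_to_upload(terabytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Days needed to push `terabytes` over a link at `link_gbps`,
    assuming only `efficiency` of the raw line rate is usable."""
    bits = terabytes * 8 * 10**12                      # decimal TB -> bits
    seconds = bits / (link_gbps * 10**9 * efficiency)  # usable bits per second
    return seconds / 86400

for tb in (10, 100, 500):
    print(f"{tb:>4} TB over 1 Gbps: {days_to_upload(tb, 1.0):5.1f} days")

# Roughly 1.2 days for 10 TB, 11.6 days for 100 TB, and almost two months
# for 500 TB. Shipping a box of disks starts to look pretty attractive.
```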

Networking professionals know that bandwidth isn’t unlimited. Upgrades and additional capacity cost money. Service providers have the same limitations as regular networks. If you want more bandwidth than they can deliver today, you’re out of luck unless you’re willing to pay through the nose, in which case providers are happy to build out a solution for you. You’re providing the capital investment for their expansion. Everything costs money somehow.

Tom’s Take

“Unlimited” is a marketing lie. Whether it’s unlimited nights and weekends, unlimited mobile data, or unlimited storage, nothing is truly infinite. Companies want you to take advantage of their offerings to sell you something else. Free services are supported by advertising or upsell opportunities. Providers continue to be shocked when they offer something with no reasonable limit and find that a small percentage of the user base is going to take advantage of their mistake.

Rather than offering false promises of unlimited things, providers should be up front. They should have plans that offer large storage amounts with conditions that make it clear that large consumers of those services will face restriction. People that want to push the limit and download hundreds of gigabytes of mobile data or store hundreds of terabytes of data in the cloud should know up front that they will be singled out for special treatment. Believable terms for services beat the lies of no limits every time.

Who Wants To Save Forever?


At the recent SpectraLogic summit in Boulder, much of the discussion centered around the idea of storing data and media in perpetuity. Technology has arrived at the point where it is actually cheaper to keep something tucked away rather than trying to figure out whether or not it should be kept. This is leading to a huge influx of media resources being available everywhere. The question now shifts away from storage and to retrieval. Can you really save something forever?

Another One Bites The Dust

Look around your desk. See if you can put your hands on each of the following:

* A USB Flash drive
* A recordable CD
* A recordable DVD
* A Floppy Disk (bonus points for 5.25")

Odds are good that you can find at least three of those four items. Each of those items represents a common way of saving files in a removable format. I’m not even trying to cover all of the formats that have been used (I’m looking at you, ZIP drives). Each of these formats has been tucked away in a backpack or given to a colleague at some point to pass files back and forth.

Yet, each of these formats has been superseded sooner or later by something better. Floppies were ultraportable but held very little. CD-ROMs were much bigger, but couldn’t be re-written without effort. DVD media never really got the chance to take off before bandwidth eclipsed the capacity of a single disc. And USB drives, while the removable media du jour, are mainly used when you can’t connect wirelessly.

Now, with cloud connectivity the idea of having removable media to share files seems antiquated. Instead of copying files to a device and passing it around between machines, you simply copy those files to a central location and have your systems look there. And capacity is very rarely an issue. So long as you can bring new systems online to augment existing storage space, you can effectively store unlimited amounts of data forever.

But how do we extract data from old devices to keep in this new magical cloud? Saving media isn’t that hard. But getting it off the source is proving to be harder than one might think.

Take video for instance. How can you extract data from an old 8mm video camera? It’s not a standard size to convert to VHS (unless you can find an old converter at a junk store). There are a myriad of ways to extract the data once you get it hooked up to an input device. But what happens if the source device doesn’t work any longer? If your 8mm camera is broken you probably can’t extract your media. Maybe there is a service that can do it, but you’re going to pay for that privilege.

I Want To Break Free

Assuming you can even extract the source media files for storage, we start running into another issue. Once I’ve saved those files, how can I be sure that I can read them fifty years from now? Can I even be sure I can read them five years from now?

Data storage formats are a constantly-evolving discussion. All you have to do is look at Microsoft Office. Office is the most popular workgroup suite in the entire world. All of those files have to be stored in a format that allows them to be read. One might be forgiven for assuming that Microsoft Word document formats are all the same or at least similar enough to be backwards compatible across all versions.

Each new version of the format includes a few new pieces that break backwards compatibility. Instead of leveraging new features like smaller file sizes or increased readability, we are forced to continue using old formats like Word 97-2003 in order to ensure the file can be read by whomever it gets sent to for review.

Even the most portable of formats suffers from this malady. Portable Document Format (PDF) was designed by Adobe to be an application-independent way to display files using a printing descriptor language. This means that saving a file as a PDF on one system makes it readable on a wide variety of systems. PDF has become the de facto way to share files back and forth.

Yet it can suffer from format issues as well. PDF creation software like Adobe Acrobat isn’t immune from causing formatting problems. Files saved with certain attributes can only be read by updated versions of reader software that can understand them. The idea of a portable format only works when you restrict the descriptors available to the lowest common denominator so that all readers can display the format.

Part of this issue comes from the idea that companies feel the need to constantly “improve” things and force users to continue to upgrade software to be able to read the new formats. While Adobe has offered the PDF format to ISO for standardization, adding new features to the process takes time and effort. Adobe would rather have you keep buying Acrobat to make PDFs and downloading new versions of Reader to decode those new files. It’s a win-win situation for them and not as much of one for the consumers of the format.

Tom’s Take

I find it ironic that we have spent years of time and millions of dollars trying to find ways to convert data away from paper and into electronic formats. The irony is that those papers that we converted years ago are more readable than the data that we stored in the cloud. The only limitation of paper is how long the actual paper can last before being obliterated.

Think of the Rosetta Stone or the Code of Hammurabi. We know about these things because they were etched into stone. Literally. Yet, in the case of the Rosetta Stone we still ran into file format issues. It wasn’t until we could match the Egyptian hieroglyphs against the Greek text that we were able to read them. If you want your data to stand the test of time, you need to think about more than the cloud. You need to make sure that you can retrieve and read it as well.

My Thoughts on Dell, EMC, and Networking


The IT world is buzzing about the news that Dell is acquiring EMC for $67 billion. Storage analysts are talking about the demise of the 800-lb gorilla of storage. Virtualization people are trying to figure out what will happen to VMware and what exactly a tracking stock is. But very little is going on in the networking space. And I think that’s going to be a place where some interesting things are going to happen.

It’s Not The Network

The appeal of the Dell/EMC deal has very little to do with networking. EMC has never had any form of enterprise networking, even if they were rumored to have been looking at Juniper a few years ago. The real networking pieces come from VMware and NSX. NSX is a pure software implementation of overlay networking for virtualized environments.

Dell’s networking team was practically nonexistent until the Force10 acquisition. Since then there has been a lot of work in building a product to support Dell’s data center networking aspirations. Good work has been done on the hardware front. The software on the switches has had some R&D done internally, but the biggest gains have been in partnerships. Dell works closely with Cumulus Networks and Big Switch Networks to provide alternative operating systems for their networking hardware. This gives users the ability to experiment with new software on proven hardware.

Where does the synergy lie here? Based on a conversation I had on Monday there are some that believe that Cumulus is a loser in this acquisition. The idea is that Dell will begin to use NSX as the primary data center networking piece to drive overlay adoption. Companies that have partnered with Dell will be left in the cold as Dell embraces the new light and way of VMware SDN. Interesting idea, but one that is a bit flawed.

Maybe It’s The Network

Dell is going to be spending a lot of time integrating EMC and all their federation companies. Business needs to continue going forward in other areas besides storage. Dell Networking will see no significant changes in the next six months. Life goes on.

Moving forward, Dell Networking is still an integral piece of the data center story. As impressive as software networking can be, servers still need to plug into something. You can’t network a server without a cable. That means hardware is still important even at a base level. That hardware needs some kind of software to control it, especially in the NSX model, where there is no centralized controller deciding how flows will operate on the leaf switches. That means that switches will still need operating systems.

The question then shifts to whether Dell will invest heavily in R&D for expanding FTOS and PowerConnect OS or if they will double down on their partnership with Cumulus and Big Switch and let NSX do the heavy lifting above the fray. The structure of things would lead one to believe that Cumulus will get the nod here, as their OS is much more lightweight and enables basic connectivity and control of the switches. Cumulus can help Dell integrate the switch OS into monitoring systems and put more of the control of the underlay network at the fingertips of the admins.
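
As a small example of what “lightweight” buys you: because Cumulus Linux exposes switch ports as ordinary Linux network interfaces, an admin can fold the underlay into existing tooling with nothing fancier than a script like the sketch below. The output format and threshold logic are my own illustration, not anything Dell or Cumulus ships.

```python
# Minimal sketch: polling interface state on a Linux-based switch OS.
# On Cumulus Linux the front-panel ports (swp1, swp2, ...) appear under
# /sys/class/net like any other Linux netdev, so generic tooling works.

import pathlib

NET = pathlib.Path("/sys/class/net")

def interface_report() -> dict:
    """Return {interface_name: operstate} for every interface on the box."""
    report = {}
    for iface in sorted(NET.iterdir()):
        state_file = iface / "operstate"
        if state_file.exists():
            report[iface.name] = state_file.read_text().strip()
    return report

if __name__ == "__main__":
    for name, state in interface_report().items():
        flag = "" if state == "up" else "   <-- check this link"
        print(f"{name:12} {state}{flag}")
```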

I think Dell is going to be so busy integrating EMC into their operations that the non-storage pieces are going to be starved for development dollars. That means more reliance on partnerships in the near term. Which begets a vicious cycle that causes in-house software to fall further and further behind. Which is great for the partner, in this case Cumulus.

By putting Dell Networking into all the new offerings that should be forthcoming from a combined Dell/EMC, Dell is putting Cumulus Linux in a lot of data centers. That means more and more networking folks becoming familiar with Cumulus. Even if Dell decides not to renew the Cumulus partnership after EMC and VMware are fully ingested, the installed base of Cumulus will be bigger than it would have been otherwise. When those devices are up for refresh, the investigation into replacing them with Cumulus-branded equipment could generate big wins for Cumulus.

Tom’s Take

Dell and EMC are going to touch every facet of IT when they collide. Between the two of them they compete in almost every aspect of storage, networking, and compute as well as many of the products that support those functions. Everyone is going to face rapid consolidation from other companies banding together to challenge the new 800-lb gorilla in the space.

Networking will see less impact from this merger but it will be important nonetheless. If nothing else, it will drive Cisco to start acquiring at a faster rate to keep up. It will also allow existing startups to make a name for themselves. There’s even the possibility of existing networking folks leaving traditional roles and striking out on their own to found startups to explore new ideas. The possibilities are limitless.

The Dell/EMC domino is going to make IT interesting for the next few months. I can’t wait to see how the chips will fall for everyone.

Premise vs. Premises


If you’ve listened to a technology presentation in the past two years that included discussion of cloud computing, you’ve probably become embroiled in the ongoing war over the usage of the word premises and the growing habit of using the word premise in its stead. This battle has raged for many months now, with the premises side of the argument refusing to give ground and watch a word be totally redefined. So where is this all coming from?

The Premise of Your Premises

The etymology of these two words is actually linked, as you might expect. Premise is the first to appear in the late 14th century. It traces from the Old French premisse which is derived from the Medieval Latin premissa, which are both defined as “a previous proposition from which another follows”.

The appearance of premises comes from the use of premise in legal documents in the 15th century. In those documents, a premise was a “matter previously stated”. More often than not, that referred to some kind of property like a house or a building. Over time, that came to be known as a premises.

The breakdown is a recent one, and it is happening in technology. We live in a world where brevity is important. The more information we can convey in a brief period, the better we can be understood by our peers. Just look at the walking briefing scenes from The West Wing to get an idea of how we must compress and rapidly deliver ideas today. In an effort to save precious syllables during a presentation, I’m sure some CTO or Senior Engineer compressed premises into premise. And as we often do in technology, this presentation style and wording was copied ad infinitum by imitators and competitors alike.

Now, we stand on the verge of premise being redefined. There is precedent for this in recent linguistics. The word literally has recently been changed from the standard definition of “in a literal sense”, describing something as it actually happened, to include an informal usage of “emphasizing strong feeling while not being literally true”. This change has grammar nerds and linguistics people at odds. Some argue that language evolves over time to include new meanings. Others claim that redefining a word to mean its exact opposite is a perversion and is wrong.

The Site of Your Ideas

Perhaps the real solution to this problem is to get rid of the $2 words when a $.50 word will do just fine. Instead of talking about on-premises cloud deployments, how about referring to them as on-site? Instead of talking about the premise behind creating a hybrid cloud, why not refer to the idea behind it (especially when you consider that the strict definition of premise doesn’t really mean idea)?

By excising these words from your vocabulary now, you eliminate the risk of using them improperly. You even get to save a syllable here and there. If word economy is truly the goal, the aim should be to use the most precise word with the least amount of effort. If you are parroting a presentation from Amazon or Google and keep referring to on-premise computing, you are doing a disservice to the people listening to you who will carry your message forward to new groups of listeners.

Tom’s Take

If you’re going to insist on using premises and premise, please make sure you get them right. It takes less than a second to add the missing “s” to the end of that idea and make it a real place. Otherwise you’re going to come off sounding like you don’t know what you’re talking about. Kind of like this (definitely not safe for work):

Instead, let’s move past using these terms and get back to something more simple and straightforward. Sites can never be confused for ideas. It may be more direct and less flashy to say on-site but you never have to worry about using the wrong term or getting the grammarians on your bad side. And that’s a premise worth believing in.


Tips For Presenting On Video


Giving a presentation is never an easy thing for a presenter. There’s a lot that you have to keep in mind, like pacing and content. You want to keep your audience in mind and make sure you’re providing value for the time they are giving you.

But today there is usually something else to account for: most presentations are being recorded for later publication. When presenting for an audience that has a video camera or two, there are a few extra considerations to juggle on top of everything else you are already trying to keep track of.

Tip 1: Introduce Early. And Often

One of the things you really need to keep in mind for recorded presentations is time. If the videos are going to be posted to YouTube after the event, the length of your presentation is going to matter. People that stumble across your talk aren’t going to want to watch an hour or two of slide discussion. A fifteen-minute overview of a topic works much better from a video perspective.

Keeping that in mind, you should start every section of your presentation with an introduction. Tell everyone who you are and what you do. That establishes credibility with the audience. It also helps the viewer figure out who you are right away. Sometimes not knowing who is talking distracts enough that people get lost and miss content. Never rely on a lower third to do something you are capable of taking five seconds to say.

Note that if you decide to go down this road, you need to make sure your audience is aware of what you’re doing. Someone might find it off-putting that you’re introducing yourself twenty minutes after you just did. But you don’t want to turn it into a parody or humor point. Just be clear about why you’re doing it and make it quick and clean.

Tip 2: Take A Deep Breath

When you are transitioning between sections, one of the worst things you can do is try to fill time with idle conversation. People are hard-wired to insert filler into conversations. Conquering that compulsion is difficult, but well worth it in the end.

One of the reasons why getting rid of filler conversation is important for a video recording is the editing around an introduction. If you start your introduction with “Um, uh, hello there, um, my name is, uh, Tom Hollingsworth…” there is very little clean audio for an editor to work with. That introduction is rife with unnecessary filler that does nothing but distract from a simple statement of who you are.

The easiest way to do this is to take a deep breath before you start speaking. By clearing your mind before you open your mouth, you are much less likely to insert filler words in an effort to keep the conversation flowing. Another good technique that news reporters use is the countdown. Yes, it does serve a purpose for the editor to know when to start the clip. But it also helps the reporter focus on when to start and collect their faculties before speaking.

Try it yourself next time. When you’re ready to make a clean transition, just insert a single second of silence while you take a breath. Odds are great that you’ll find your transitions much more appealing.

Tip 3: Questions Anyone?

This one is a bit trickier. The best presentation model works from the idea that the audience should ask questions during a presentation instead of after it. By having a question closely tied to an idea, people are much more likely to remember it and find it relevant. This is especially true on video, as the viewer can rewind and listen to the question and answer a couple of times.

But what about those questions that aren’t exactly tied to a specific idea or cover a topic not discussed? That’s where the final Q&A period comes in. You want to make sure to capture any final thoughts from the audience. But since this is all about the video you also want to make sure you don’t cut someone off with a premature close out.

When you ask for final questions, make sure you take a few seconds and visually glance around the room. Silence is very easy to cut out of a video. But it’s much harder to cut out someone saying “Okay, so there are no more questions…” followed by someone asking a question. It’s much better to take the extra time to make sure there are no questions or comments. The extra seconds are more than worth it.

Tom’s Take

I get to see both sides of the presentation game, whether I’m presenting at Tech.UNPLUGGED this week or editing a video from Tech Field Day. Presenting for a live audience while also being aware of the things that make a video useful and successful is an important skill to master on today’s speaking circuit.

It doesn’t take a lot of additional effort to make your presentation video-ready. A little practice and you’ll have it down in no time flat.