About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

A Voyage of Discover-E



I’m very happy to be attending the first edition of Hewlett-Packard Enterprise (HPE) Discover in London next week. I say the first edition because this is the first major event being held since the reaving of HP Inc from Hewlett-Packard Enterprise. I’m hopeful for some great things to come from this.

It’s The Network (This Time)

One of the most exciting things for me is seeing what HPE is doing with its networking division. With the recent news about OpenSwitch, HPE is trying to shift the way we think about switch operating systems in a big way. To quote my friend Chris Young:

Vendors today spend a lot of effort re-writing 80% of their code and focus on innovating on the 20% that makes them different. Imagine how much further we’d be if that 80% required no effort at all?

OpenSwitch has some great ideas, like using the Open vSwitch database (OVSDB) as the central system state database. I would love to see more companies use this model going forward. It makes a lot of sense and can provide significant benefits. Time will tell if other vendors recognize this and start using portions of OpenSwitch in their projects. But for now it’s interesting to see what is possible when someone takes a leap of open-sourced faith.
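As I understand the OpenSwitch design, system state flows through an OVSDB instance and daemons read and write the tables they care about. As a rough illustration of what talking to such a database looks like, here is a sketch that builds an OVSDB JSON-RPC `select` request per RFC 7047. The database and table names used here ("OpenSwitch", "System") are assumptions for illustration only — check the schema on a real box:

```python
import json

def make_select(db: str, table: str, msg_id: int = 0) -> str:
    """Build an OVSDB JSON-RPC 'transact' request containing one select op.

    The wire format follows RFC 7047; the db/table names passed in must
    match whatever schema the target system actually exposes.
    """
    request = {
        "method": "transact",
        "params": [db, {"op": "select", "table": table, "where": []}],
        "id": msg_id,
    }
    return json.dumps(request)

# Hypothetical names for illustration only.
print(make_select("OpenSwitch", "System"))
```

The appeal of the model is that this one protocol covers configuration, status, and statistics alike, which is exactly the 80% of undifferentiated plumbing the quote above is talking about.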

I’m also excited to hear from Aruba, a Hewlett-Packard Enterprise company, and see what new additions they’ve made to their portfolio. The interplay between Aruba and the new HPE Networking will be interesting to follow. I have seen more engagement and discussion coming from HPE Networking now that Aruba has begun integrating themselves into the organization. It’s exciting to have conversations with people involved in the vendor space about what they’re working on. I hope this trend continues with HPE in all areas and not just networking.

Expected To Do Your Duty

HPE is sailing into some very interesting waters. Splitting off the consumer side of the brand does allow the smaller organization to focus on the important things that enterprises need. This isn’t a divestiture. It’s cell mitosis. The behemoth that was HP needed to divide to survive.

I said a couple of weeks ago:

To which it was quickly pointed out that HPE is doing just that. I agree that their effort is impressive. But this is the first time that HP has tried to cut itself to pieces. IBM has done it over and over again. I would amend my original statement to say that no company will be IBM again, including IBM. What you and I think of today as IBM isn’t what Tom Watson built. It’s the remnants of IBM Global Services with some cloud practice acquisitions. The server and PC businesses that made IBM a household name are gone now.

The lesson for HPE, as they try to find their identity in the new post-cleaving world, is to remember what people liked about HP in the enterprise space and focus on keeping that goodwill going. Create a nucleus that allows the brand to continue to build and innovate in new and exciting ways without letting people forget what made you great in the first place.

Tom’s Take

I’m excited to see what HPE has in store for this Discover. There are no doubt going to be lots of product launches and other kinds of things to pique my interest about the direction the company is headed. I’m impressed so far with the changes and the focus back to what matters. I hope the momentum continues to grow into 2016 and the folks behind the wheel of the HPE ship know how to steer into the clear water of success. Here’s hoping for clear skies and calm seas ahead for the good ship Hewlett-Packard Enterprise!



A Stack Full Of It


During the recent Open Networking User Group (ONUG) Meeting, there was a lot of discussion around the idea of a Full Stack Engineer. The idea of full stack professionals has been around for a few years now. Seeing this label applied to networking and network professionals seems only natural. But it’s a step in the wrong direction.

Short Stack

Full stack means having knowledge of the many different pieces of a given area. Full stack programmers know all about development, project management, databases, and other aspects of their environment. Likewise, full stack engineers are expected to know about the network, the servers attached to it, and the applications running on top of those servers.

Full stack is a great way to illustrate how specialized things are becoming in the industry. For years we’ve talked about how hard networking can be and how we need to make certain aspects of it easier for beginners to understand. QoS, routing protocols, and even configuration management are critical items that need to be decoded for anyone in the networking team to have a chance of success. But networking isn’t the only area where that complexity resides.

Server teams have their own jargon. Their language doesn’t include routing or ASICs. They tend to talk about resource pools and patches and provisioning. They might talk about VLANs or latency, but only insofar as it applies to getting communications going to their servers. Likewise, the applications teams don’t talk about any of the above. They are concerned with databases and application behaviors. The only time the hardware below them becomes a concern is when something isn’t working properly. Then it becomes a race to figure out which team is responsible for the problem.

The concept of being a full stack anything is great in theory. You want someone who can understand how things work together and identify areas that need to be improved. The term “big picture” definitely comes to mind. Think of a general practitioner doctor. This person understands enough basic medical knowledge to be able to fix a great many issues and help you understand how your body works. There are quite a few general doctors that do well in the medical field. But we all know that they aren’t the only kinds of doctors around.

Silver Dollar Stacks

Generalists are great people. They’ve spent a great deal of time learning many things to know a little bit about everything. I like to say that these people have mud puddle knowledge about a topic. It covers a broad area, but only a few inches deep. It can form quickly and evaporates just as fast. Contrast this with a lake or an ocean, which is far deeper but takes years or decades to form.

Let’s go back to our doctor example. General practitioners are great for a large percentage of simple problems. But when they are faced with a very specific issue they often call out to a specialist doctor. Specialists have made their career out of learning all about a particular part of the body. Podiatrists, cardiologists, and brain surgeons are all specialists. They are the kinds of doctors you want to talk to when you have a problem with that part of your body. They will never see the high traffic of a general doctor, but they more than make up for it in their own area of expertise.

Networking has a lot of people that cover the basics. There are also a lot of people that cover the more specific things, like MPLS or routing. Those specialists are very good at what they do because they have spent the time to hone those skills. They may not be able to create VLANs or provision ports as fast as a generalist, but imagine the amount of time saved when turning up a new MPLS VPN or troubleshooting a routing loop. That time translates into real savings or reduced downtime.

Tom’s Take

The people who claim that networking needs full stack knowledge are the kinds of folks further up the stack who get irritated when they have to explain what they want. Server admins don’t like having to know networking jargon just to ask for VLANs. Application developers want you to know what they mean when they say everything is slow. Full stack is just code for “learn about my job so I don’t have to learn about yours”.

It’s important to know about how other roles in the stack work in order to understand how changes can impact the entire organization. But that knowledge needs to be shared across everyone up and down the stack. People need to have basic knowledge to understand what they are asking and how you can help.

The next time someone tells you that you need to be a full stack person, ask them to come do your job for a day while you learn about theirs. Or offer to do their job for one week to learn about their part of the stack. If they don’t recoil in horror at the thought of you doing it, chances are they really do want you to have a greater understanding of things. More likely they just want you to know how hard they work and why you’re so difficult to understand. Stop telling us that we need full stack knowledge and start making the stacks easier to understand.


Gathering No MOS


If you work in the voice or video world, you’ve undoubtedly heard about Mean Opinion Scores (MOS). MOS is a rough way of ranking the quality of the sound on a call. It’s widely used to determine the overall experience for the user on the other end of the phone. MOS represents something important in the grand scheme of communications. However, MOS is quickly becoming a crutch that needs some explanation.

That’s Just Like Your Opinion

The first thing to keep in mind when you look at MOS data is that the second word in the term is opinion. Originally, MOS was derived by having selected people listen to calls and rank them on a scale of 1 (I can’t hear you) to 5 (We’re sitting next to each other). The idea was to see if listeners could distinguish when certain aspects of the call were changed, such as pathing or exchange equipment. It was an all-or-nothing ranking. Good calls got a 4, or rarely even a 5. Most terrible calls got a 2 or 3. You take the average across all of your subjects, and that gives you the overall MOS for your system.


When digital systems came along, MOS took on an entirely different meaning. Rather than being used to subjectively rank call quality, MOS became a yardstick for tweaking the codec used to digitally transform analog speech to digital packets. Since this has to happen in order for the data to be sent, all digital calls must have a codec somewhere. The first codecs were trying to approximate the quality of a long distance phone call, which was the gold standard for quality. After that target was reached, providers started messing around with the codecs in question to reduce bandwidth usage.

G.711 is considered the baseline level of call quality from which all others are measured. It has a relative MOS of 4.1, which means very good voice quality. It also uses around 64 kbps of bandwidth. As developers started playing with encoding schemes and other factors, they developed codecs which used significantly less bandwidth and had almost equivalent quality. G.729 uses only 8 kbps of bandwidth but has a MOS of 3.9. It’s almost as good as G.711 in most cases but uses an eighth of the resources.
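One nuance worth noting: those 64 kbps and 8 kbps figures are codec payload rates. On the wire, every RTP packet also carries IP/UDP/RTP headers, so the real gap narrows for low-bitrate codecs. A quick sketch, assuming 20 ms packetization and 40 bytes of headers per packet, with no layer-2 overhead counted:

```python
def on_wire_bandwidth_bps(codec_bps: int, ptime_ms: int = 20,
                          header_bytes: int = 40) -> int:
    """On-wire bandwidth for one voice stream, ignoring layer-2 framing.

    Assumes RTP packetization every `ptime_ms` milliseconds with
    `header_bytes` of IP/UDP/RTP headers per packet (typical for
    uncompressed IPv4 is 20 + 8 + 12 = 40 bytes).
    """
    payload_bytes = codec_bps * ptime_ms // 8 // 1000   # codec payload per packet
    packets_per_second = 1000 // ptime_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_second

print(on_wire_bandwidth_bps(64000))  # G.711
print(on_wire_bandwidth_bps(8000))   # G.729
```

With those assumptions, G.711 lands around 80 kbps on the wire and G.729 around 24 kbps — closer to a third of G.711 than an eighth once headers are counted, which is why header compression matters so much on low-speed links.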

MOS has always been subjective. That was until VoIP system providers found that certain network metrics have an impact on the quality of a call. Things like packet loss, delay, and jitter all have negative impacts on call quality. By measuring these values a system could give an approximation of MOS for an admin without needing to go through the hassle of making people actually listen to the calls. That data could then be provided through analytics dashboards as an input into the overall health of the system.
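The approximation such systems use is typically some simplification of the ITU-T G.107 E-model, which converts measured impairments into a quality rating. The sketch below follows the shape of the well-known Cole–Rosenbluth simplification; the constants are illustrative and should be checked against the standard before being trusted in production:

```python
import math

def estimate_r(one_way_delay_ms: float, loss_fraction: float) -> float:
    """Approximate an E-model R-factor from measured network impairments.

    A sketch of the Cole-Rosenbluth simplification of ITU-T G.107, using
    the G.711 loss-impairment curve with random loss. Constants are
    illustrative, not authoritative.
    """
    d = one_way_delay_ms
    # Delay impairment: gentle slope, with an extra penalty past ~177 ms
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Loss impairment (G.711-style curve)
    i_e = 30.0 * math.log(1 + 15 * loss_fraction)
    return 94.2 - i_d - i_e

print(estimate_r(50, 0.0))    # clean LAN call
print(estimate_r(250, 0.03))  # long, lossy path
```

The useful property is that delay and loss each degrade the score independently, so a dashboard can flag which impairment is actually dragging a call down.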

Like A Rolling Stone

The problem with MOS is that it has always been subjective. Two identical calls may have different MOS scores based on the listener. Two radically different codecs could have similar MOS scores because of simple factors like tonality or speech isolation. Using a subjective ranking matrix to display empirical data is unwieldy at best. The only reason to use MOS as a yardstick is because everyone understands what MOS is.

Enter R-values. R-values take inputs from the same monitoring systems that produce MOS and rank those inputs on a scale of 0 to 100. Those scores can then be ranked with more precision to determine call quality and VoIP network health. A call in the 90s is a great call. If things dip into the 70s or the 60s, there are major issues to identify. R-values solve the problem of trying to bolt empirical data onto a subjective system.
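The E-model also defines a standard curve for mapping an R-factor down to an estimated MOS, which is how a dashboard can show both numbers from the same measurements. A minimal sketch of that mapping, per ITU-T G.107:

```python
def r_to_mos(r: float) -> float:
    """Map an E-model R-factor (ITU-T G.107) to an estimated MOS.

    The cubic term flattens the curve at both ends: scores saturate
    at 1.0 (unusable) and 4.5 (as good as narrowband voice gets).
    """
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(r_to_mos(93.2))  # roughly G.711's default R-factor
print(r_to_mos(70.0))  # a call users will complain about
```

Note the asymmetry: a drop from R=90 to R=70 costs far more perceived quality than a drop from 70 to 50, which is part of why the 0-to-100 scale is more useful for operations than the compressed 1-to-5 one.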

Now that communications is becoming more and more focused on things like video, the need for analytics around them is becoming more pronounced. People want to track the same kinds of metrics – codec quality, packet loss, delay, and jitter. But there isn’t a unified score that can be presented in green, yellow, and red to let people know when things are hitting the fan.

It has been suggested that MOS be adapted to reference video in addition to audio. While the idea behind using a traditional yardstick like MOS sounds good on the surface, the reality is that video is a much more complicated thing that can’t be encompassed by a 50-year-old ranking method like MOS.

Video calls can look horrible and sound great. They can have horrible sound and be crystal clear from a picture perspective. There are many, many subjective pieces that can go into ranking a video call. Trying to shoehorn that into a simple scale of 5 values is doing a real disservice to video codec manufacturers, not to mention the network operators that try and keep things running smoothly for video users.

R-value seems to be a much better way to classify analytics for video. It’s much more nuanced and capable of offering insight into different aspects of call and picture quality. It can still provide a ranked score for threshold measuring, but that rank is much more likely to mean something important for each number as opposed to the absolute values present in MOS.

Tom’s Take

MOS is an old fashioned idea that tries valiantly to tie the telecom of old to the digital age. People who understood subjective quality tried to pair it with objective analytics in an effort to keep the old world and the new world matched. But even communications is starting to eclipse these bounds. Phone calls have given way to email, texting, and video chats. Two of those are asynchronous and require no network reliability beyond up or down. Video, and all the other real-time digital communications, needs to have the right metrics and analytics to provide good feedback about how to improve the experience for users. And whatever we end up calling that composite metric or ranked algorithmic score, it shouldn’t be called MOS. Time to let that term grow some moss in the retirement bin.


How Much Is Unlimited Anyway?


The big news today came down from the Microsoft MVP Summit that OneDrive is not going to support “unlimited” cloud storage going forward. This came as a blow to folks that were hoping to store as much data as possible for the foreseeable future. The conversations have already started about how Microsoft pulled a bait-and-switch or how storage isn’t really free or unlimited. I see a lot of parallels in the networking space to this problem as well.

All The Bandwidth You Can Buy

I remember sitting in a real estate class in college talking to my professor, who was a commercial real estate agent. He told us, “The happiest day of your real estate career is the day you buy an apartment complex. The second happiest day of your career is when you sell it to the next sucker.” People are in love with the idea of charging for a service, whether it be an apartment or cloud storage and compute. They think they can raise the price every year and continue to reap the profits of ever-increasing rent. What they don’t realize is that those increases are designed to cover increased operating costs, not increased money in your pocket.

Think about someone like Amazon. They are making money hand over fist in the cloud game. What do you think they are doing with it? Are they piling it up in a storage locker and sitting on it like a throne? Or lighting cigars with $100 bills? The most likely answer is that they are plowing those profits back into increasing capacity and offerings to attract new customers. That’s what customers want. Amazon can take some profit from the business but if they stop expanding customers will leave to find another service that meets their needs.

Bandwidth in networks is no different. I worked for IBM as an intern many years ago. Our site upgraded its Internet connection to a T3 to support the site. Just a few months after the upgrade, we were informed that all the extra bandwidth we’d installed was being utilized at more than 90%. It took almost no time for the users to find out there was more headroom available and consume it.

The situation with bandwidth today is no different. Application developers assume that storage and bandwidth are unlimited, or at least cheap enough not to matter. They create huge application packages that load every conceivable library or function for the sake of execution speed. Networking and storage pay the price to make things faster. Apps take up lots of space and take forever to download even a simple update. The situation keeps getting worse with every release.

Slimming the Bandwidth Pipeline

Some companies are trying to take a look at how to keep this bloat from exploding. Facebook has instituted a policy that restricts bandwidth on Tuesdays to show developers what browsing at low speeds really feels like. They realize that not everyone in the world has access to ultra-fast LTE or wireless.

Likewise, Amazon realizes that on-boarding data to AWS can be painful if there are hundreds of gigabytes or even a few terabytes to migrate. They created Snowball, which is essentially a ruggedized, shippable storage appliance that you load up on-site and return to Amazon for import. It’s a decidedly low tech solution to a growing problem.

Networking professionals know that bandwidth isn’t unlimited. Upgrades and additional capacity cost money. Service providers have the same limitations as regular networks. If you want more bandwidth than they can provide, you are out of luck. If you’re willing to pay through the nose providers are happy to build out solutions for you. You’re providing the capital investment for their expansion. Everything costs money somehow.

Tom’s Take

“Unlimited” is a marketing lie. Whether it’s unlimited nights and weekends, unlimited mobile data, or unlimited storage, nothing is truly infinite. Companies want you to take advantage of their offerings so they can sell you something else. Free services are supported by advertising or upsell opportunities. Providers continue to be shocked when they offer something with no reasonable limit and find that a small percentage of the user base is going to take advantage of their mistake.

Rather than offering false promises of unlimited things, providers should be up front. They should offer plans with large storage allowances and conditions that make it clear that heavy consumers of those services will face restrictions. People who want to push the limit and download hundreds of gigabytes of mobile data or store hundreds of terabytes of data in the cloud should know up front that they will be singled out for special treatment. Believable terms for services beat the lies of no limits every time.

Who Wants To Save Forever?


At the recent SpectraLogic summit in Boulder, much of the discussion centered around the idea of storing data and media in perpetuity. Technology has arrived at the point where it is actually cheaper to keep something tucked away rather than trying to figure out whether or not it should be kept. This is leading to a huge influx of media resources being available everywhere. The question now shifts away from storage and to retrieval. Can you really save something forever?

Another One Bites The Dust

Look around your desk. See if you can put your hands on each of the following:

* A USB Flash drive
* A CD-ROM
* A DVD
* A Floppy Disk (bonus points for 5.25")

Odds are good that you can find at least three of those four items. Each of those items represents a common way of saving files in a removable format. I’m not even trying to cover all of the formats that have been used (I’m looking at you, ZIP drives). Each of these formats has been tucked away in a backpack or given to a colleague at some point to pass files back and forth.

Yet, each of these formats has been superseded sooner or later by something better. Floppies were ultraportable but tiny in capacity. CD-ROMs held far more, but couldn’t be re-written without effort. DVD media never really got the chance to take off before bandwidth eclipsed the capacity of a single disc. And USB drives, while the removable media du jour, are mainly used when you can’t connect wirelessly.

Now, with cloud connectivity the idea of having removable media to share files seems antiquated. Instead of copying files to a device and passing it around between machines, you simply copy those files to a central location and have your systems look there. And capacity is very rarely an issue. So long as you can bring new systems online to augment existing storage space, you can effectively store unlimited amounts of data forever.

But how do we extract data from old devices to keep in this new magical cloud? Saving media isn’t that hard. But getting it off the source is proving to be harder than one might think.

Take video for instance. How can you extract data from an old 8mm video camera? It’s not a standard size to convert to VHS (unless you can find an old converter at a junk store). There are a myriad of ways to extract the data once you get it hooked up to an input device. But what happens if the source device doesn’t work any longer? If your 8mm camera is broken you probably can’t extract your media. Maybe there is a service that can do it, but you’re going to pay for that privilege.

I Want To Break Free

Assuming you can even extract the source media files for storage, we start running into another issue. Once I’ve saved those files, how can I be sure that I can read them fifty years from now? Can I even be sure I can read them five years from now?

Data storage formats are a constantly-evolving discussion. All you have to do is look at Microsoft Office. Office is the most popular workgroup suite in the entire world. All of those files have to be stored in a format that allows them to be read. One might be forgiven for assuming that Microsoft Word document formats are all the same or at least similar enough to be backwards compatible across all versions.

Each new version of the format includes a few new pieces that break backwards compatibility. Instead of leveraging new features like smaller file sizes or increased readability, we are forced to continue using old formats like Word 97-2002 in order to ensure that files can be read by whomever we send them to for review.

Even the most portable of formats suffers from this malady. Portable Document Format (PDF) was designed by Adobe to be an application-independent way to display files using a printing descriptor language. This means that saving a file as a PDF on one system makes it readable on a wide variety of systems. PDF has become the de facto way to share files back and forth.

Yet it can suffer from format issues as well. PDF creation software like Adobe Acrobat isn’t immune from causing formatting problems. Files saved with certain attributes can only be read by updated versions of reader software that can understand them. The idea of a portable format only works when you restrict the descriptors available to the lowest common denominator so that all readers can display the format.
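One small, concrete face of this is the version header itself: every PDF begins with a `%PDF-x.y` marker declaring the spec version it targets, and readers that predate that version may balk at the file. (Later PDFs can also override the header with a catalog entry, so treat it as a hint rather than gospel.) A sketch of a hypothetical checker:

```python
import re

def pdf_version(path: str) -> str:
    """Return the spec version claimed in a file's %PDF-x.y header.

    Hypothetical helper for illustration; real tools should also check
    the document catalog, which can override the header version.
    """
    with open(path, "rb") as f:
        header = f.read(16)
    match = re.match(rb"%PDF-(\d+\.\d+)", header)
    if match is None:
        raise ValueError("no PDF header found")
    return match.group(1).decode("ascii")
```

Running something like this across an archive is a cheap way to find the files most at risk of outliving the software that can faithfully render them.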

Part of this issue comes from the idea that companies feel the need to constantly “improve” things and force users to continue to upgrade software to be able to read the new formats. While Adobe has offered the PDF format to ISO for standardization, adding new features to the process takes time and effort. Adobe would rather have you keep buying Acrobat to make PDFs and keep downloading new versions of Reader to decode those new files. It’s a win-win situation for them and not as much of one for the consumers of the format.

Tom’s Take

I find it ironic that we have spent years of time and millions of dollars trying to find ways to convert data away from paper and into electronic formats. The irony is that those papers that we converted years ago are more readable than the data that we stored in the cloud. The only limitation of paper is how long the actual paper can last before being obliterated.

Think of the Rosetta Stone or the Code of Hammurabi. We know about these things because they were etched into stone. Literally. Yet, in the case of the Rosetta Stone we ran into file format issues. It wasn’t until we were able to save the Egyptian hieroglyphs as Greek that we were able to read them. If you want your data to stand the test of time, you need to think about more than the cloud. You need to make sure that you can retrieve and read it as well.

Open Networking Needs to Be Interchangeable


We’re coming up quickly on the fall meeting of the Open Networking User Group, which is a time for many members of the financial community to debate the needs of modern networking and provide a roadmap and set of use cases for networking vendors to follow in the coming months. ONUG provides what some technology desperately needs – a problem to which it can be applied.

Open Or Something Like It

We’ve already started to see the same kind of non-open solution building that plagued the early network years creeping into some aspects of our new “open” systems. Rather than building on what we consider to be tried-and-true building blocks, we instead come to proprietary solutions that promise “magic” when it comes to configuration and maintenance. Should your network provide the magic? Or is that your job?

Magical is what the network should look like to a user, not to the admins. Think about the networking in cloud providers like AWS and MS Azure. The networking there is a very simple model that hides complexity. The average consumer of AWS services doesn’t need to know the specifics of configuration in the underlay of Amazon’s labyrinth of the cloud. All that matters is traffic goes where it is supposed to go and arrives when it is supposed to be there.

Let’s apply those same kinds of lessons to open networks in our environments. What we need isn’t a magic bullet that makes everything turn into a checkbox or button to do mysterious things behind a curtain. What we really need is an open system that allows us to build a system that can be reduced to boxes and buttons. That requires a kind of interoperation that isn’t present in the first generation of driving networks through software.

This is also one of the concerns present in policy definitions and models like those found in Cisco ACI. In order for these higher-order systems to work efficiently, the majority of the focus needs to be on the definition of actions and the execution of those policies. What can’t occur is a large amount of time spent fixing the interoperation between pieces in the policy underlay.

Think about your current network. Do you spend most of your time focused on the packets flowing between applications? Or are you spending a higher percentage of your time fixing the pathways between those applications? Optimizing the underlay for those flows? Trying to figure out why something isn’t working over here versus why it is working over there?

Networking Needs Eli Whitney

Networking isn’t open the way that it needs to be. It’s as open as manufacturing was before the invention of interchangeable parts. Our systems are cobbled together contraptions of unique parts and systems that collapse when a single piece falls out of place. Instead of fixing the issue and restoring sanity, we are forced to exert extra effort molding the new pieces to function like the old.

Truly open networking isn’t just about the software riding on top of the underlay. It’s about making the interfaces said software interacts with seamless enough to swap parts and pieces and allow the system to continue to function without major disruption. We can’t spend our time tinkering with why the API isn’t accepting instructions or reconfiguring the markup language because the replacement part is a different model number.

When networks are open enough that they work the way that AWS and Azure work without massive interference on our part that will be a truly landmark day. That day will mark the moment when our networks become focused on service delivery instead of component integration. The openness in networking will lead us to stop worrying about it. Not because someone built a magic proprietary system that works now with three other devices and will probably be forgotten in another year. But instead because networking vendors finally discovered that solving problems is much more profitable than creating roadblocks.

Tom’s Take

I’ve been very proud to take part in ONUG for the past few years. The meetings have given me an entirely new perspective on how networking is viewed by users and consumers. It’s also a great way to get in touch with people who are doing networking in unique environments with exacting needs. ONUG has also helped forward the cause of opening networking by providing a nucleus for users to bring their requirements to the group that needs to hear them most of all.

ONUG can continue to drive networking forward by insisting that future networking developments are open and interoperable at a level that makes hardware inconsequential. No standards body can exert that influence. It comes from users voting with dollars, and ONUG represents some deep purse strings.

If you are in the New York area and would like to attend ONUG this November 4th and 5th, you can use the code TFD30 to get 30% off the conference registration cost. And if you tell them that Tom sent you, I might be able to arrange for a nice fruit basket as well.


My Thoughts on Dell, EMC, and Networking


The IT world is buzzing about the news that Dell is acquiring EMC for $67 billion. Storage analysts are talking about the demise of the 800-lb gorilla of storage. Virtualization people are trying to figure out what will happen to VMware and what exactly a tracking stock is. But very little is going on in the networking space. And I think that’s going to be a place where some interesting things are going to happen.

It’s Not The Network

The appeal of the Dell/EMC deal has very little to do with networking. EMC has never had any form of enterprise networking, even if they were rumored to have been looking at Juniper a few years ago. The real networking pieces come from VMware and NSX. NSX is a pure software networking implementation for overlay networking implemented in virtualized networks.

Dell’s networking team was practically nonexistent until the Force10 acquisition. Since then there has been a lot of work in building a product to support Dell’s data center networking aspirations. Good work has been done on the hardware front. The software on the switches has had some R&D done internally, but the biggest gains have been in partnerships. Dell works closely with Cumulus Networks and Big Switch Networks to provide alternative operating systems for their networking hardware. This gives users the ability to experiment with new software on proven hardware.

Where does the synergy lie here? Based on a conversation I had on Monday, some believe that Cumulus is a loser in this acquisition. The idea is that Dell will begin to use NSX as the primary data center networking piece to drive overlay adoption. Companies that have partnered with Dell will be left in the cold as Dell embraces the light and the way of VMware SDN. It’s an interesting idea, but one that is a bit flawed.

Maybe It’s The Network

Dell is going to be spending a lot of time integrating EMC and all their federation companies. Business needs to continue going forward in other areas besides storage. Dell Networking will see no significant changes in the next six months. Life goes on.

Moving forward, Dell Networking is still an integral piece of the data center story. As impressive as software networking can be, servers still need to plug into something. You can’t network a server without a cable. That means hardware is still important, even at a base level. That hardware needs some kind of software to control it, especially in the NSX model, where no centralized controller decides how flows operate on the leaf switches. That means that switches will still need operating systems.

The question then shifts to whether Dell will invest heavily in R&D for expanding FTOS and PowerConnect OS or if they will double down on their partnership with Cumulus and Big Switch and let NSX do the heavy lifting above the fray. The structure of things would lead one to believe that Cumulus will get the nod here, as their OS is much more lightweight and enables basic connectivity and control of the switches. Cumulus can help Dell integrate the switch OS into monitoring systems and put more of the control of the underlay network at the fingertips of the admins.

I think Dell is going to be so busy integrating EMC into their operations that the non-storage pieces are going to be starved for development dollars. That means more reliance on partnerships in the near term. Which begets a vicious cycle that causes in-house software to fall further and further behind. Which is great for the partner, in this case Cumulus.

By putting Dell Networking into all the new offerings that should be forthcoming from a combined Dell/EMC, Dell is putting Cumulus Linux in a lot of data centers. That means more and more networking folks becoming familiar with it. Even if Dell decides not to renew the Cumulus partnership after EMC and VMware are fully ingested, the install base of Cumulus will be bigger than it would have been otherwise. When those devices come up for refresh, the investigation into replacing them with Cumulus-supported equipment is one that could generate big wins for Cumulus.

Tom’s Take

Dell and EMC are going to touch every facet of IT when they collide. Between the two of them they compete in almost every aspect of storage, networking, and compute as well as many of the products that support those functions. Everyone is going to face rapid consolidation from other companies banding together to challenge the new 800-lb gorilla in the space.

Networking will see less impact from this merger, but it will be important nonetheless. If nothing else, it will drive Cisco to start acquiring at a faster rate to keep up. It will also allow existing startups to make a name for themselves. There’s even the possibility of existing networking folks leaving traditional roles and striking out on their own to found startups to explore new ideas. The possibilities are limitless.

The Dell/EMC domino is going to make IT interesting for the next few months. I can’t wait to see how the chips will fall for everyone.