Network acquisitions are in the news once again. This time, the buyer is EMC. According to a blog article from last week, EMC is reportedly mulling the purchase of either Brocade or Arista to add a networking component to its offerings. While Arista would be a good pickup to give EMC a complete data center networking practice, one has to ask: does EMC really need a network?
Hardware? For What?
The “smart money” says that EMC needs a network offering to help complete their vBlock offering now that the EMC/Cisco divorce is in its final stages. EMC has already accelerated those plans on the server side by offering EVO:RAIL as an option for VSPEX. Yes, VSPEX isn’t a vBlock. But it’s a flexible architecture that will eventually supplant vBlock when the latter is finally put out to pasture.
As the majority partner in VCE, EMC has every incentive to keep offering the package to customers and making truckloads of cash. But long term, it makes more sense for EMC to start offering alternatives to a Cisco-only network. There have been many, many assurances that vBlock will not be going away any time soon (almost to the level of “the lady doth protest too much, methinks“). To me, that just means the successor to vBlock will be called something different, like nBlock or eBlock.
Regardless of what the next solution is called, it will still need networking components to facilitate communication between the parts of the system. EMC has looked at networking companies in the past, especially Juniper (again with much protesting to the contrary). It’s obvious they want a hardware solution to offer alongside Cisco for future converged systems. But do they really need one?
How About A BriteBlock?
EMC needs a network component. NSX is a great control system that EMC already owns (and is already considering for vBlocks), but as Joe Onisick (@JOnisick) is fond of pointing out, NSX doesn’t actually forward packets. So we still need something to fling bits back and forth. But why does it have to be something EMC owns?
Whitebox switching is making huge strides toward being a viable data center solution. Cumulus, Pluribus, and Big Switch have created stable platforms that offer several advantages over more traditional offerings, not the least of which is cost. The ability to customize the OS to a degree is also attractive to people who want to integrate with other systems.
Could you imagine running a Cumulus switch in a vBlock and having the network forwarding totally integrated with the management platform? Or how about running Big Switch’s Big Cloud Fabric as the backplane for vBlock? These solutions would work with minimal effort on the part of EMC and very little tuning required by the end user. Add in the lowered acquisition cost of the network hardware and you end up with a slightly healthier profit margin for EMC.
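Such an integration isn’t far-fetched. Cumulus Linux configures switch ports through ordinary Linux networking files (ifupdown2 syntax), so a converged-infrastructure manager could render them from templates instead of driving a proprietary CLI. Here’s a hedged sketch of what a leaf switch in a vBlock-style rack might carry — the port names and VLAN IDs are purely illustrative, not from any real design:

```
# /etc/network/interfaces fragment on Cumulus Linux (ifupdown2 syntax)
# Hypothetical leaf switch in a converged rack; swp1/swp2 and the
# VLAN IDs are placeholders a management platform would template in.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports swp1 swp2
    bridge-vids 100 200
```

Because this is just a text file on a Linux host, the same configuration-management tooling the rest of the stack already uses can own it.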
Is The Answer A FaceBlock?
The other solution is to use Open Compute Project switches in a vBlock offering. OCP is gaining momentum, with Cumulus and Big Switch both making big contributions recently at the 2015 OCP Summit. Add in the buzz around the Wedge switch and the new Six Pack chassis, and you have the potential for significant network performance at a relative pittance.
Wedge and Six Pack are not without their challenges. Even running Cumulus Linux or Open Network Linux from Big Switch, it’s going to take some time to integrate the network OS with the vBlock architecture. NSX can alleviate some of these challenges, but it’s more a matter of time than technology. EMC is actually very good at taking nascent technology from startups and integrating it with their product lines. Doing the same with OCP networking would not be much different from their current R&D style.
Another advantage of using OCP networking comes from the effect that EMC would have on the project. With a major vendor embracing OCP as the spine of its architecture, Facebook gains the advantages of reduced component costs and increased development. Even if EMC doesn’t release their developments back into the community, they will attract more developers to the project and magnify the work being done. This benefits EMC too, as every OCP addition flows back into their offerings.
We’re running out of big companies to buy other companies. Through consolidation and positioning, the mid-tier has grown to the point where its players can’t easily be bought by anyone other than Cisco. Thanks to Aruba, HP is going to be busy with that integration until well after the company split. EMC is the last company out there with the resources to buy someone as big as Arista or Brocade.
The question that the people at EMC need to ask themselves is: Do we really need hardware? Or can we make everything work without pulling out the checkbook? Cisco will always be an option for vBlock, just not necessarily the cheapest one. EMC can find solutions to increase their margins, but it’s going to take some elbow grease and a few thinking caps to integrate whitebox or OCP-style offerings.
EMC does need a network. It just may not need to be one they own.