HPE Networking: Past, Present, and Future


I had the chance to attend HPE Discover last week by invitation from their influencer team. I wanted to see how HPE Networking had been getting along since the acquisition of Aruba Networks last year. There have been some moves and changes, including a new partnership with Arista Networks announced in September. What follows is my analysis of HPE’s Networking portfolio after HPE Discover London and where they are headed in the future.

Campus and Data Center Divisions

Recently, HPE reorganized their networking division along two different lines. The first is the Aruba brand that contains all the wireless assets along with the campus networking portfolio. This is where the campus belongs. The edge of the network is an ever-changing area where connectivity is king. Reallocating the campus assets to the capable Aruba team means that they will do the most good there.

The rest of the data center networking assets were loaded into the Data Center Infrastructure Group (DCIG). This group is headed up by Dominick Wilde and contains things like FlexFabric and Altoline. The partnership with Arista rounds out the rest of the switch portfolio. This helps HPE position their offerings across a wide range of potential clients, from existing data center infrastructure to newer cloud-ready shops focusing on DevOps and rapid application development.

After hearing Dom Wilde speak to us about the networking portfolio goals, I think I can see where HPE is headed going forward.

The Past: HPE FlexFabric

As Dom Wilde said during our session, “I have a market for FlexFabric and can sell it for the next ten years.” FlexFabric represents traditional data center networking. There is a huge market in existing infrastructure for customers that have made significant investments in HPE in the past. Dom is absolutely right when he says the market for FlexFabric isn’t going to shrink in the foreseeable future. Even though the migration to the cloud is underway, there are a significant number of existing applications that will never be cloud ready.

FlexFabric represents the market segment that will persist on existing solutions until a rewrite of critical applications can be undertaken to get them moved to the cloud. Think of FlexFabric as the vaunted buggy whip manufacturer. They may be the last one left, but for the people that need their products they are the only option in town. DCIG may have eyes on the future, but that plan will be financed by FlexFabric.

The Present: HPE Altoline

Altoline is where HPE has been pouring their research for the past year. Altoline is a product line that benefits from the latest in software-defined and webscale technologies. It uses OpenSwitch as its operating system. HPE initially developed OpenSwitch as an open, vendor-neutral platform before turning it over to the Linux Foundation this summer to continue development with a variety of different partners.

Dom brought up a couple of great use cases for Altoline during our discussion that struck me as brilliant. One of them was using it as an out-of-band monitoring solution. These switches don’t need to be big or redundant. They need to have ports and a management interface. They don’t need complexity. They need simplicity. That’s where Altoline comes into play. It’s never going to be as complex as FlexFabric or as programmable as Arista. But it doesn’t have to be. In a workshop full of table saws and drill presses, Altoline is a basic screwdriver. It’s a tool you can count on to get the easy jobs done in a pinch.

The Future: Arista

The Arista partnership, according to Dom Wilde, is all about getting ready for the cloud. For those customers that are looking at moving workloads to the cloud or creating a hybrid environment, Arista is the perfect choice. All of Arista’s recent solution sets have been focused on providing high-speed, programmable networking that can integrate a number of development tools. EOS is the most extensible operating system on the market and is a favorite for developers. Positioning Arista at the top of the food chain is a great play for customers that don’t have a huge investment in cloud-ready networking right now.
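
As a small, hedged illustration of that extensibility point, here is what driving EOS through its eAPI (JSON-RPC over HTTPS) looks like from Python. The hostname and credentials are placeholders, and the switch needs eAPI enabled (management api http-commands) for this to work.

```python
# Sketch: run a show command against Arista EOS via eAPI (JSON-RPC over HTTPS).
# The hostname and credentials are placeholders for illustration only.
import base64
import json
import urllib.request

host = "eos-switch.example.com"
auth = base64.b64encode(b"admin:admin").decode()

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": "1",
}

req = urllib.request.Request(
    f"https://{host}/command-api",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": f"Basic {auth}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.load(resp), indent=2))
```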

The question that I keep coming back to is…when does this Arista partnership become an acquisition? There is already significant integration between the two companies. Arista has essentially displaced the top of the line for HPE. How long will it take for the partnership to become something more permanent? I can easily foresee HPE making a play for the potential revenues produced by Arista and the help they provide moving things to the cloud.


Tom’s Take

I was the only networking person at HPE Discover this year because the HPE networking story has been simplified quite a bit. On the one hand, you have the campus tied up with Aruba. They have their own story to tell in a different area early next year. On the other hand, you have the simplification of the portfolio with DCIG and the inclusion of the Arista partnership. I think that Altoline is going to find a niche for specific use cases but will never really take off as a separate platform. FlexFabric is in maintenance mode as far as development is concerned. It may get faster, but it isn’t likely to get smarter. Not that it really needs to. FlexFabric will support legacy architecture. The real path forward is Arista and all the flexibility it represents. The question is whether HPE will try to make Arista a business unit before Arista takes off and becomes too expensive to buy.

Disclaimer

I was an invited guest of HPE for HPE Discover London. They paid for my travel and lodging costs as well as covering event transportation and meals. They did not ask for nor were they promised any kind of consideration in the coverage provided here. The opinions and analysis contained in this article represent my thoughts alone.

OpenFlow Is Dead. Long Live OpenFlow.


Remember OpenFlow? The hammer that was set to solve all of our vaguely nail-like problems? Remember how everything was going to be based on OpenFlow going forward and the world was going to be a better place? Or how heretics like Ivan Pepelnjak (@IOSHints) who dared to ask questions about scalability or the real-world value of its applications were derided and laughed at? Yeah, good times. Today, I stand here to eulogize OpenFlow, but not to bury it. And perhaps find out that OpenFlow has a much happier life after death.

OpenFlow Is The Viagra Of Networking

OpenFlow is not that much different than Sildenafil, the active ingredient in Viagra. Both were initially developed to solve a problem they didn’t end up actually solving. In the case of Sildenafil, it was high blood pressure. The “side effect” of increasing blood flow to a specific body part wasn’t even realized until the drug trials were underway. That side effect became the primary focus of the medication, which was eventually developed into a billion-dollar industry.

In the same way, OpenFlow failed at its stated mission of replacing the forwarding plane programming method of switches. As pointed out by folks like Ivan, it had huge scalability issues. It was a bit clunky when it came to handling flow programming. The race from the 1.0 to the 1.3 spec finalization left vendors in the dust, but the freeze on 1.3 for the past few years has really hurt innovation. Objectively, the fact that almost no major shipping product uses OpenFlow as a forwarding paradigm should be evidence of its failure.

The side effect of OpenFlow is that it proved that networking could be done in software just as easily as it could be done in hardware. Things that we historically thought needed ASICs and FPGAs could be done by a software construct. OpenFlow proved the viability of Software Defined Networking in a way that no one else could. Yet, as people abandoned it for faster protocols or rewrote their stacks to take advantage of other methods, OpenFlow still had a great number of uses.

OpenFlow Is a Garlic Press, Not A Hammer

OpenFlow isn’t really designed to solve every problem. It’s not a generic tool that can be used in a variety of situations. It does have some very specific use cases at which it excels, though. Think of it more like a garlic press. It’s a purpose-built tool that is very specific about what it does, and it does that thing very well.

This video from Networking Field Day 13 is a great example of OpenFlow being used for a specific task. NEC’s flavor of OpenFlow, ProgrammableFlow, is used in conjunction with higher-layer services like firewalls and security appliances to mitigate the spread of infections. That’s a huge win for networking professionals. Think about how hard it would be to track down these systems in a network of thousands of devices. Even worse, with the level of virulence of modern malware, it doesn’t take long before the infected system has infected others. It’s not enough to shut down the payload. The infection behavior must be removed as well.

What NEC is showing is the ultimate way to stop this from happening. By interrogating the flows against a security policy, the flow entries can be removed from switches across the network or have deny entries written to prevent communications. Imagine being able to block a specific workstation from talking to anything on the network until it can be cleaned. And have that happen automatically without human interaction. What if a security service could get new malware or virus definitions and install those flow entries on the fly? Malware could be stopped before it became a problem.
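
To make the mechanics concrete, here is a minimal sketch of that “write a deny entry” pattern using the open-source Ryu OpenFlow controller rather than NEC’s ProgrammableFlow. The list of quarantined hosts is hypothetical; in a real deployment it would be fed by the security service described above.

```python
# Minimal sketch: push drop rules for quarantined hosts to every switch that
# connects to this OpenFlow 1.3 controller. Uses the open-source Ryu framework;
# the MAC addresses below are placeholders for a real security feed.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

QUARANTINED_MACS = {"aa:bb:cc:dd:ee:01"}  # hypothetical infected hosts


class QuarantineApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        for mac in QUARANTINED_MACS:
            # A high-priority flow with no instructions means "drop" -- the
            # host can't talk to anything until the entry is removed.
            match = parser.OFPMatch(eth_src=mac)
            mod = parser.OFPFlowMod(datapath=datapath, priority=1000,
                                    match=match, instructions=[])
            datapath.send_msg(mod)
```

Run something like this with ryu-manager and any OpenFlow 1.3 switch that connects will install the deny entries as soon as it checks in.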

This is where OpenFlow will be headed in the future. It’s no longer about adapting the problems to fit the protocol. We can’t keep trying to frame the problem around how much it resembles a nail just so we can use the hammer in our toolbox. Instead, OpenFlow will live on as a point protocol in a larger toolbox that can do a few things really well. That means we’ll use it when we need to and use a different tool when needed that better suits the problem we’re actually trying to solve. That will ensure that the best tool is used for the right job in every case.


Tom’s Take

OpenFlow is still useful. Look at what Coho Data is using it for. Or NEC. Or any one of a number of companies that are still developing on it. But the fact that it’s only these companies putting significant investment and time into the development of the protocol should tell you what the larger industry thinks. They believe that OpenFlow is a dead end that can’t magically solve the problems they have with their systems. So they’ve moved to a different hammer to bang away with. I think that OpenFlow is going to live a very happy life now that people are leaving it to solve the problems it’s good at solving. Maybe one day we’ll look back on the first life of OpenFlow not as a failure, but instead as the end of the beginning of it becoming what it was always meant to be.

Nutanix and Plexxi – An Affinity to Converge


Nutanix has been lighting the hyperconverged world on fire as of late. Strong sales led to a big IPO for their stock. They are in a lot of conversations about using their solution in place of large traditional virtualization offerings that include things like blade servers or big boxes. And even coming off the recent Nutanix .NEXT conference there were some big announcements in the networking arena to help them complete their total solution. However, I think Nutanix is missing a big opportunity that’s right in front of them.

I think it’s time for Nutanix to buy Plexxi.

Software Says

If you look at the Nutanix announcements around networking from .NEXT, they look very familiar to anyone in the server space. The highlights include service chaining, microsegmentation, and monitoring all accessible through an API. If this sounds an awful lot like VMware NSX, Cisco ACI, or any one of a number of new networking companies then you are in the right mode of thinking as far as Nutanix is concerned.

SDN in the server space is all about overlay networking. Segmentation of flows and service chaining are the hard parts of doing security in the networking space today. Trying to get traffic to behave in a certain way drives networking professionals nuts. Monitoring all of that to ensure that you’re actually doing what you say you’re doing just adds complexity. And the API is the way to do all of that without having to walk down to the data center, console into a switch, and learn a new non-Linux CLI command set.
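
To make that concrete, here is a rough sketch of what driving microsegmentation through an API looks like. The endpoint, payload fields, and token are hypothetical, not Nutanix’s actual API, but the shape is typical of this class of product.

```python
# Hypothetical sketch: create a microsegmentation policy through a REST API.
# The URL, payload fields, and token are invented for illustration; the real
# resource names live in the vendor's API documentation.
import json
import urllib.request

policy = {
    "name": "web-to-db",
    "source_group": "web-vms",
    "destination_group": "db-vms",
    "ports": [3306],
    "action": "allow",   # traffic between the groups outside this rule is denied
}

req = urllib.request.Request(
    "https://mgmt.example.com/api/v1/microseg/policies",  # hypothetical endpoint
    data=json.dumps(policy).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```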

SDN vendors like VMware and Cisco naturally jumped on these complaints and difficulties in the networking world, and both have offered solutions for them with NSX and ACI. For Nutanix to have bundled solutions like this into their networking offering is no accident. They are looking to battle VMware head-to-head and need to offer the kind of feature parity that it’s going to take to make medium-to-large shops shift their focus away from the VMware ecosystem and take a long look at what Nutanix is offering.

In a way, Nutanix and VMware are starting to reinforce the idea that the network isn’t a magical realm of protocols and tricks that make applications work. Instead, it’s a simple transport layer between locations. For instance, Amazon doesn’t rely on the magic of the interstate system to get your packages from the distribution center to your home. Instead, the interstate system is just a transport layer for their shipping overlays – UPS, FedEx, and so on. The overlay is where the real magic is happening.
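
A concrete way to see that split is a plain VXLAN tunnel: the underlay only has to deliver IP packets between two hosts, and everything interesting rides inside it. Here is a minimal sketch of one end, driving Linux’s iproute2 from Python; the VNI, device names, and addresses are made up.

```python
# Minimal sketch: build one end of a VXLAN overlay on a Linux host. The
# underlay only needs IP reachability to the remote tunnel endpoint; the
# overlay carries its own addressing. VNI, interface names, and addresses
# are illustrative.
import subprocess

def sh(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# VXLAN interface with VNI 42, riding over eth0 toward the remote VTEP.
sh("ip link add vxlan42 type vxlan id 42 dev eth0 remote 192.0.2.2 dstport 4789")
# Overlay addressing, completely independent of whatever the underlay uses.
sh("ip addr add 10.42.0.1/24 dev vxlan42")
sh("ip link set vxlan42 up")
```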

Nutanix doesn’t care what your network looks like. They can do almost everything on top of it with their overlay protocols. That would seem to suggest that the focus going forward should be to marginalize or outright ignore the lower layers of the network in favor of something that Nutanix has visibility into and can offer control and monitoring of. That’s where the Plexxi play comes into focus.


Affinity for Awesome

Plexxi has long been a company in search of a way to sell what they do best. When I first saw them years ago, they were touting their Affinities idea as a way to build fast pathways between endpoints to provide better performance for applications that naturally talked to each other. This was a great idea back then. But it quickly got overshadowed by the other SDN solutions out there. It even caused Plexxi to go down a slightly different path for a while, looking at other options to compete in a market where they didn’t really have a perfect-fit product.

But the Affinities idea is perfect for hyperconverged solutions. Companies like Nutanix are marketing their solutions as the way to create application-focused compute nodes on-site without the need to mess with the cloud. It’s a scalable solution that will eventually lead to having multiple nodes as your needs expand. Hyperconverged was designed to be consumable per compute unit as opposed to massively scaling out in leaps and bounds.

Plexxi Affinities is just the tip of the iceberg. Plexxi’s networking connectivity also gives Nutanix the ability to build out a high-speed interconnect network with one advantage – noninterference. I’m speaking about what happens when a customer needs to add more networking ports to support this architecture. They need to make a call to their Networking Vendor of Choice. In the case of Cisco, HPE, or others, that call will often involve a conversation about what they’re doing with the new network followed by a sales pitch for their hyperconverged solution or a partner solution that benefits both companies. Nutanix has a reputation for being the disruptor in traditional IT. The more they can keep their traditional competitors out of the conversation, the more likely they are to keep the business into the future.


Tom’s Take

Plexxi is very much a company with an interesting solution in need of a friend. They aren’t big enough to really partner with hyperconverged solutions, and most of the hyperconverged market at this point is either cozy with someone else or not looking to make big purchases. Nutanix has the rebel mentality. They move fast and strike quickly to get their deals done. They don’t take prisoners. They look to make a splash and get people talking. The best way to keep that up is to bundle a real non-software networking component alongside a solution that will make the application owners happy and keep the conversation focused on a single source. That’s how Cisco did it back in the day and how VMware has climbed to the top of the virtualization market.

If Nutanix were to spend some of that nice IPO money on a Plexxi Christmas present, I think 2017 would be the year that Nutanix stops being discussed in hushed whispers and becomes a real force to be reckoned with up and down the stack.

Facebook Wedge 100 – The Future of the Data Center?

 


Facebook is back in the news again. This time, it’s because of the release of their new Wedge 100 switch into the Open Compute Project (OCP). Wedge was already making headlines when Facebook announced it two years ago. A fast, open sourced 40Gig Top-of-Rack (ToR) switch was huge. Now, Facebook is letting everyone in on the fun of a faster Wedge that has been deployed into production at Facebook data centers as well as being offered for sale through Edgecore Networks, which is itself a division of Accton. Accton has been leading the way in the whitebox switching market and Wedge 100 may be one of the ways it climbs to the top.

Holy Hardware!

Wedge 100 is pretty impressive from the spec sheet. They paid special attention to making sure the modules were expandable, especially for faster CPUs and special purpose devices down the road. That’s possible because Wedge is a highly specialized micro server already. Rather than rearchitecting the guts of the whole thing, Facebook kept the CPU and the monitoring stack and just put newer, faster modules on it to ramp to 32x100Gig connectivity.


As many suspected, Facebook is using Broadcom Tomahawk as the base connectivity in their switch, which isn’t surprising. Tomahawk is the roadmap for all vendors to get to 100Gig. It also means that the downlink connectivity for these switches could conceivably work in 25/50Gig increments. However, given the enormous amount of east/west traffic that Facebook must generate, they have also created a server platform they call Yosemite that has 100Gig links as well. Given the probable backplane there, you can imagine the data that’s getting thrown around those data centers.
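
Some rough math on those numbers, assuming the common layout of four 25Gbps lanes behind each 100Gbps port:

```python
# Back-of-the-envelope math for a 32x100G Tomahawk-based box like Wedge 100,
# assuming the usual 4 x 25Gbps lane layout behind each QSFP28 port.
ports_100g = 32
lanes_per_port = 4
lane_speed_gbps = 25

total_gbps = ports_100g * lanes_per_port * lane_speed_gbps
print(f"Aggregate: {total_gbps} Gbps ({total_gbps / 1000} Tbps)")      # 3200 Gbps

# Regrouping the same lanes is what makes 25G and 50G breakouts possible.
print(f"Broken out as 25G ports: {ports_100g * lanes_per_port}")       # 128
print(f"Broken out as 50G ports: {ports_100g * lanes_per_port // 2}")  # 64
```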

That’s not all. Omar Baldonado has said that they are looking at going to 400Gig connectivity soon. That’s the kind of mind-blowing speed that you see in places like Google and Facebook. Remember that this hardware is built for a specific purpose. They don’t just have elephant flows. They have flows the size of an elephant herd. That’s why they fret about the operating temperature of optics or the rack design they want to use (standard versus Open Racks). Every little change matters a thousandfold at that scale.

Software For The People

The other exciting announcement from Facebook was on the software front. Of course, FBOSS has been updated to work with Wedge 100. I found it very interesting in the press release that much of the programming in FBOSS went into interoperability with Wedge 40 and with fixing the hardware side of things. This makes some sense when you realize that Facebook didn’t need to spend a lot of time making Wedge 40 interoperate with anything, since it was a wholesale replacement. But Wedge 100 would need to coexist with Wedge 40 as the rollout happens, so making everything play nice is a huge point on the checklist.

The other software announcement that got the community talking was support for third-party operating systems running on Wedge 100. The first one up was Open Network Linux from Big Switch Networks. ONL ran on the original Wedge 40 and now runs on the Wedge 100. This means that if you’re familiar with running BSN OSes on your devices, you can drop in a Wedge 100 in your spine or fabric and be ready to go.

The second exciting announcement about software comes from a new company, Apstra. Apstra announced their entry into OCP and their intent to get their Apstra Operating System (AOS) running on Wedge 100 by next year. That has a big potential impact for Apstra customers that want to deploy these switches down the road. I hope to hear more about this from Apstra during their presentation at Networking Field Day 13 next month.


Tom’s Take

Facebook is blazing a trail for fast ToR switches. They’ve got the technical chops to build what they need and release the designs to the rest of the world to be used for a variety of ideas. Granted, your data center looks nothing like Facebook’s. But the ideas they are pioneering are having an impact down the line. If Open Rack catches on, you may see different ideas in data center standardization. If the Six Pack catches on as a new chassis concept, it’s going to change spines as well.

If you want to get your hands dirty with Wedge, build a new 100Gig pod and buy one from Edgecore. The downlinks can break out into 10Gig and 25Gig links for servers and knowing it can run ONL or Apstra AOS (eventually) gives you some familiar ground to start from. If it runs as fast as they say it does, it may be a better investment right now than waiting for Tomahawk II to come to your favorite vendor.

 

 

Tomahawk II – Performance Over Programmability


Broadcom announced a new addition to their growing family of merchant silicon today. The new Broadcom Tomahawk II is a monster. It doubles the speed of its first-generation predecessor. It has 6.4 Tbps of aggregate throughput, divided up into 256 25Gbps ports that can be combined into 128 50Gbps or even 64 100Gbps ports. That’s fast no matter how you slice it.
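
The quoted figures are internally consistent; the same 6.4 Tbps budget just gets carved into different port counts:

```python
# Quick sanity check on the Tomahawk II numbers quoted above.
aggregate_gbps = 6400  # 6.4 Tbps

for port_speed in (25, 50, 100):
    print(f"{aggregate_gbps // port_speed} ports at {port_speed} Gbps")
# 256 ports at 25 Gbps
# 128 ports at 50 Gbps
# 64 ports at 100 Gbps
```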

Broadcom is aiming to push these switches into niches like High-Performance Computing (HPC) and massive data centers doing big data/analytics or video processing to start. The use cases for 25/50Gbps haven’t really changed. What Broadcom is delivering now is port density. I fully expect to see top-of-rack (ToR) switches running 25Gbps down to the servers, with new add-in cards connected to 50Gbps uplinks that deliver them to the massive new Tomahawk II switches running in a spine or end-of-row (EoR) configuration for east-west traffic distribution.

Another curious fact about the Tomahawk II is the complete lack of 40Gbps support. Granted, that support was only paid lip service in the Tomahawk I. The real focus was on shifting to 25/50Gbps instead of the weird 10/40/100Gbps split we had in Trident II. I talked about this a couple of years ago and wasn’t very high on it back then, but I didn’t appreciate the level of apathy people had for 40Gbps uplinks. The push to 25/50Gbps has only been held up so far by the lack of availability of new NICs for servers to enable faster speeds. Now that those are starting to be produced in volume, expect 40Gbps uplinks to become a relic of the past.

A Foot In The Door

Not everyone is entirely happy about the new Broadcom Tomahawk II. I received an email today with a quote from Martin Izzard of Barefoot Networks, discussing their new Tofino platform. He said in part:

Barefoot led the way in June with the introduction of Tofino, the world’s first fully programmable switches, which also happen to be the fastest switches ever built.

It’s true that Tofino is very fast. It was the first 6.4 Tbps switch on the market. I talked a bit about it a few months ago. But I think that Barefoot is a bit off on its assessment here and has a bit of an axe to grind.

Barefoot is pushing something special with Tofino. They are looking to create a super fast platform with programmability. Tofino is not quite an FPGA and it’s not a traditional ASIC. It’s a switch chip stripped to its core and rebuilt around a language all its own, P4. That’s great if you’re a dev shop or a niche market that has to squeeze every ounce of performance out of a switch. In the world of cars, the best analogy is to look at Tofino as a specialized sports car like a Koenigsegg Agera. It’s very fast and very stylish, but it’s purpose built to do one thing – drive really fast on pavement and carry two passengers.

Broadcom doesn’t really care about development shops. They don’t worry about niche markets. Because those users are not their customers. Their customers are Arista, Cisco, Brocade, Juniper and others. Broadcom really is the Intel of the switching world. Their platforms power vendor offerings. Buying a basic Tomahawk II isn’t something you’re going to be able to do. Broadcom will only sell these in huge lots to companies that are building something with them. To keep the car analogy, Tomahawk II is more like the old GM F-body platform that went on to become Camaros, Firebirds, and Trans Ams. Each of those cars was distinctive and had its fans, but the chassis was the same underneath the skin.

Broadcom wants everyone to buy their silicon and use it to power the next generation of switches. Barefoot wants a specialist kit that is faster than anything else on the market, provided you’re willing to put the time into learning P4 and stripping out all the bits they feel are unnecessary. Your use case determines your hardware. That hasn’t changed, nor is it likely to change any time soon.


Tom’s Take

The data center will be 25/50/100Gbps top to bottom when the next switch refresh happens. It could even be there sooner if you want to move to a pod-based architecture instead of more traditional designs. The odds are very good that you’re going to be running Tomahawk or Tomahawk II depending on which vendor you buy from. You’re probably only going to be running something special like Tofino, or maybe even Cavium, if you’ve got a specific workload or architecture that needs the extra performance or programmability.

Don’t wait for the next round of hardware to come out before you have an upgrade plan. Write it now. Think about where you want to be in 4 years. Now double your requirements. Start investigating. Ask your vendor of choice what their plans are. If their plans stink, ask their competitor. Get quotes. Get ideas. Be ready for the meeting when it’s scheduled. Make sure you’re ready to work with your management to bury the hatchet, not end up with a hatchet job of a network.

Cisco vs. Arista: Shades of Gray


Yesterday was D-Day for Arista in their fight with Cisco over the SysDB patent. I’ve covered this a bit for Network Computing in the past, but I wanted to cover some new things here and put a bit more opinion into my thoughts.

Cisco Designates The Competition

As the great Stephen Foskett (@SFoskett) says, you always have to punch above your weight. When you are a large company, any attempt to pick on the “little guy” looks bad. When you’re at the top of the market it’s even tougher. If you attempt to fight back against anyone, you’re going to legitimize them in the eyes of everyone else wanting to take a shot at you.

Cisco has effectively designated Arista as their number one competitor by way of this lawsuit. Arista represents a larger threat than HPE, Brocade, or Juniper. Yes, I agree that it is easy to argue that the infringement constituted a material problem to their business. But at the same time, Cisco very publicly just said that Arista is causing a problem for Cisco. Enough of a problem that Cisco is going to take them to court rather than make Arista license the patent. That’s telling.

Also, Cisco’s route of going through the ITC looks curious. Why not try to get damages in court instead of asking the ITC to ban Arista from importing devices? I thought about this for a while and realized that even if there was a court case pending, it wouldn’t work to Cisco’s advantage in the short term. Cisco doesn’t just want to prove that Arista is copying patents. They want to hurt Arista. That’s why they don’t want to pursue an injunction through the courts to get the switches banned. That could take years and involve lots of appeals. Instead, the ITC can simply say “You cannot bring these devices into the country,” which effectively bans them.

Cisco has gotten what it wants short term: Arista is going to have to make changes to SysDB to get around the patent. They are going to have to ramp up domestic production of devices to get around the import ban. Their train of development is disrupted. And Cisco’s general counsel gets to write lots of posts about how they won.

Yet, even if Arista did blatantly copy the SysDB stuff and run with it, now Cisco looks like the 800-pound gorilla stomping on the little guy through the courts. Not by making better products. Not by innovating and making something that eclipses the need for software like this. No, Cisco won by playing the playground game of “You stole my idea and I’m going to tell!!!”

Arista’s Three-Edged Sword

Arista isn’t exactly coming out of this on the moral high ground either. Arista has gotten a black eye from a lot of the quotes being presented as evidence in this case. Ken Duda said that Arista “slavishly copied” Cisco’s CLI. There have been other comments about “secret sauce” and the way that SysDB is used. A few have mentioned to me privately that the copying done by Arista was pretty blatant.

Understanding is a three-edged sword: Your side, their side, and the truth.

Arista’s side is that they didn’t copy anything important. Cisco’s side is that EOS has enough things that have been copied that it should be shut down and burned to the ground. The truth lies somewhere in the middle of it all.

Arista didn’t copy everything from IOS. They hired people who worked on IOS and likely saw things they’d like to implement. Those people took ideas and ran with them to come up with a better solution. Those ideas may or may not have come from things that were worked on at Cisco. But if you hire a bunch of employees from a competitor, how do you ensure that their ideas aren’t coming from something they were supposed to have “forgotten”?

Arista most likely did what any other company in that situation would do: they gambled. Maybe SysDB was more copied than created. But so long as Arista made money without really becoming a blip on Cisco’s radar, the gamble looked like it was paying off. Listen to this video, which starts at 4:40 and goes to about 6:40:

Doug Gourlay said something that has stuck with me for the last four years: “Everyone that ever set out to compete against Cisco and said, ‘We’re going to do it and be everything to everyone’ has failed. Utterly.”

Arista knew exactly which market they wanted to attack: 10Gig and 40Gig data center switches. They made the best switch they could with the best software they could and attacked that market with all the force they could muster. But, the gamble would eventually have to either pay off or come due. Arista had to know at some point that a strategy shift would bring them under the crosshairs of Cisco. And Cisco doesn’t forgive if you take what’s theirs. Even if, and I’m quoting from both a Cisco 10-K from 1996 and a 2014 Annual Report:

[It is] not economically practical or even possible to determine in advance whether a product or any of its components infringes or will infringe on the patent rights of others.

So Arista built the best switch they could with the knowledge that some of their software may not have been 100% clean. Maybe they had plans to clean it up later. Or iterate it out of existence. Who knows? Now, Arista has to face up to that choice and make some new ones to keep selling their products. Whether or not they intended to fight the 800-pound gorilla of networking at the start, they certainly stumbled into a fight here.


Tom’s Take

I’m not a lawyer. I don’t even pretend to be one. I do know that the fate of a technology company now rests in the hands of non-technical people who are very good at wringing nuance out of words. Tech people would look at this and shake their heads. Did Arista copy something? Probably. Was it something Cisco wanted copied? Probably not. Should Cisco have unloaded the legal equivalent of a thermonuclear warhead on them? Doubtful.

Cisco is punishing Arista to ensure no one ever copies their ideas again. As I said before, the outcome of this case will doom the Command Line Interface. No one is going to want to tangle with Cisco again. Which also means that no one is going to want to develop things along the Cisco way again. Which means Cisco is going to be less relevant in the minds of networking engineers as REST APIs and other programming architectures become more important than remembering to type conf t every time.
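
For a sense of what that shift looks like in practice, here is a hedged sketch of pulling interface data over RESTCONF (RFC 8040) instead of screen-scraping a CLI. The host and credentials are placeholders, and it assumes the device exposes the standard ietf-interfaces YANG model, which not every platform does.

```python
# Sketch: query interface configuration over RESTCONF instead of the CLI.
# Host and credentials are placeholders; assumes the device implements the
# ietf-interfaces YANG model over RESTCONF (RFC 8040).
import base64
import json
import urllib.request

host = "switch.example.com"
auth = base64.b64encode(b"admin:admin").decode()

req = urllib.request.Request(
    f"https://{host}/restconf/data/ietf-interfaces:interfaces",
    headers={"Accept": "application/yang-data+json",
             "Authorization": f"Basic {auth}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.load(resp), indent=2))
```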

Arista will survive. They will make changes that mean their switches will live on for customers. Cisco will survive. They get to blare their trumpets and tell the whole world they vanquished an unworthy foe. But the battle isn’t over yet. And instead of it being fought over patents, it’s going to be fought as the industry moves away from CLI and toward a model that doesn’t favor those who don’t innovate.

Will Dell Networking Wither Away?


The behemoth merger of Dell and EMC is nearing conclusion. The first week of August is the target date for the final wrap up of all the financial and legal parts of the acquisition. After that is done, the long task of analyzing product lines and finding a way to reduce complexity and product sprawl begins. We’ve already seen the spin out of Quest and Sonicwall into a separate entity to raise cash for the final stretch of the acquisition. No doubt other storage and compute products are going to face a go/no go decision in the future. But one product line which is in real danger of disappearing is networking.

Whither Whitebox?

The first indicator of the problems with Dell and networking comes from whitebox switching. Dell released OS 10 earlier this year as a way to capitalize on the growing market of free operating systems running on commodity hardware. Right now, OS 10 can run on Dell equipment. In the future, they are hoping to spread it out to whitebox devices. That means that soon you could see Dell-branded OSes running on switches purchased from non-Dell sources and booting with ONIE.
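
As a rough sketch of what that ONIE piece means in practice: a bare switch boots ONIE, which walks a list of well-known installer names on a provisioning server (or takes a URL by hand) and lays down whatever NOS it finds. The platform string and server below are made up, and the filename convention only approximates what the ONIE documentation describes.

```python
# Rough sketch of ONIE's NOS-installer discovery. The platform string and
# server URL are made up; the filename convention approximates the ONIE
# documentation, so check it before relying on the exact probe order.
arch = "x86_64"
vendor_machine = "vendor_model123"   # hypothetical ONIE platform string
base_url = "http://deploy.example.com/"

candidates = [
    f"onie-installer-{arch}-{vendor_machine}",
    f"onie-installer-{arch}",
    "onie-installer",
]
for name in candidates:
    print("ONIE would try:", base_url + name)

# From the ONIE rescue shell, an installer can also be pointed at directly:
#   onie-nos-install http://deploy.example.com/os10-installer.bin
```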

Once OS 10 pushes forward, what does that mean for Dell’s hardware business? Dell would naturally want to keep selling devices to customers. Whitebox switches would undercut their ability to offer cheap ports to customers in data center deployments. Rather than give up that opportunity, Dell is positioning themselves to run some form of Dell software on top of that hardware for management purposes, which has always been a strong point for Dell. Losing the hardware business means little to Dell if keeping it would have meant giving up profit margin in the first place.

The second indicator of networking issues comes from comments from Michael Dell at EMCworld this year. Check out this short video featuring him with outgoing EMC CEO Joe Tucci:

Some of the telling comments in here involve Michael Dell’s praise for the NSX business model and how it is being adopted by a large number of other vendors in the industry. Also telling is their reaffirmation that Cisco is an important partnership in VCE and won’t be going away any time soon. While these two things don’t seem to be related on the surface, they both point to a truth Dell is trying hard to accept.

In the future, with overlay network virtualization models gaining traction in the data center, the underlying hardware will matter little. In almost every case, the hardware choice will come down to one of two options:

  1. Which switch is the cheapest?
  2. Which switch is on the Approved List?

That’s it. That’s the whole decision tree. No one will care what sticker is on the box. They will only care that it didn’t cost a fortune and that they won’t get fired for buying it. That’s bad for companies that aren’t making white boxes and aren’t named Cisco. Other network vendors are going to try and add value in some way, but the overlay sitting on top of those bells and whistles will make it next to impossible to differentiate in anything but software. Whether that’s superior management capabilities, an open plug-in model, or something we haven’t thought of yet, it will make no difference in the end. Software will still be king, and the hardware will be either an inexpensive pawn or a costly piece that has been pre-approved.

Whither Wireless?

The other big inflection point that makes me worry about the Dell networking story is the lack of movement in the wireless space. Dell has historically been a company to partner first and acquire second. But with HPE’s acquisition of Aruba Networks last year, the dominos in the wireless space are still waiting to fall. Brocade raced out to buy Ruckus. Meru offered itself on a platter to anyone that would buy them. Now Aerohive stands as the last independent wireless vendor without a dance partner. Yes, they’ve announced that they are partnering with Dell, but have you been to the Dell Wireless Networking page? Can you guess what the Dell W-series is? Here’s a hint: it rhymes with “Peruba”.

Every time Dell leads with a W-series deployment, they are effectively paying their biggest competitor. They are opening the door to allowing HPE/Aruba to come in and start talking not only about wireless but about servers, storage, and other networking as well. Dell would do well at this point to start deemphasizing the W-series and start highlighting the “new generation” of Aerohive APs and how they are going to be the focus moving forward.

The real solution would be for Dell to buy a wireless company and take all the wireless expertise they are selling in-house. That would show they are serious about both the campus network of the future and the data center network needed to support their other server and storage infrastructure. Sadly, with the company still leveraged from Michael Dell taking it private just two years ago and the mounting debt from this mega merger, Dell is looking to raise cash with spin-offs instead of spending it on yet another company to ingest and subsume. Which means a real non-partner wireless solution is still many years away.


Tom’s Take

Dell’s networking strategy is in maintenance mode. Make switches to support faster speeds for now, probably with Tomahawk support soon, and hope that this whole networking thing goes software sooner rather than later. Otherwise, the need to shore up the campus wireless story, along with the coming decision about throwing full support behind NSX and similar partnerships, is going to be a bitter pill to swallow. Perhaps Dell Networking will exist as an option for companies wanting a 100% Dell solution? Or maybe they are waiting for a new offering from Dell/EMC in the data center to drive profits to research and development to keep pace with Cisco and Arista? One can only hope that their networking flower doesn’t wither on the vine.