Broadening Your Horizons, or Why Broadcom Won’t Get VMware

You might have missed the news over the weekend that Broadcom is in talks to buy VMware. As of right now this news is still developing so there’s no way of knowing exactly what’s going to happen. But I’m going to throw my hat into the ring anyway. VMware is what Broadcom really wants and they’re not going to get it.

Let’s break some of this down.

Broad Street

Broadcom isn’t just one of the largest chip manufactures on the planet. Sure, they make networking hardware that goes into many of the products you buy. Yes, they do make components for mobile devices and access points and a whole host of other things, including the former Brocade fibre channel assets. So they make a lot of chips.

However, starting back in November 2018, Broadcom has been focused on software acquisitions. They purchased CA Technologies for $19 billion. They bought Symantec’s enterprise security business the next year for $10 billion. They’re trying to assemble a software arm to work alongside their hardware aspirations. Seems kind of odd, doesn’t it?

Ask IBM how it feels to be the dominant player in mainframes. Or any other dominant player in a very empty market. It’s lonely and boring. And boring is the exact opposite of what investors want today. Mainframes and legacy computing may be the only thing keeping IBM running right now. And given that Broadcom’s proposed purchase of Qualcomm was blocked a few years ago, you can see that Broadcom is likely at the limit of what they’re going to be able to do with chipsets.

Given the explosion of devices out there you’d think that a chip manufacturer would want to double and triple down on development, right? Especially given the ongoing chip shortage. However, you can only iterate on those chips so many times. There’s only so much you can squeeze before you run out of juice. Ask Intel and AMD. Or, better yet, see how they’ve acquired companies to diversify into things like FPGAs and ARM-based DPUs. They realize that CPUs alone aren’t going to drive growth. There have to be product lines that will keep the investor cash flowing in. And that doesn’t come from slow and steady business.

Exciting and New

VMware represents a huge potential business arena for Broadcom. They get to jump into the data center with both feet and drive hybrid cloud deployments. They can be a leader in the on-prem software market overnight with this purchase. Cloud migrations take time. Software needs to be refactored. And that means running it in your data center for a while until you can make it work in the cloud. You could even have software that is never migrated for technical or policy-based reasons.

However, that very issue is going to cause problems for VMware and Broadcom. Is there a growth market in the enterprise data center? Do you see companies adding new applications and resources to their existing enterprise data centers? Or do you see them migrating to the cloud first? Given the choice between building more compute clusters in an existing hybrid data center or developing for the cloud from the start, do you really imagine companies are going to choose the former?

To me, the quandary of VMware isn’t that different from the one faced by IBM. Yes, you can develop new applications to run on mainframes or on-prem data centers. But you can also choose not to. Which means you have to persuade people to use your technology. Maybe it’s ease of management for existing operations teams. Maybe it’s an existing relationship that allows you to execute faster. But no matter what choice you make, you’re incurring some form of technical debt when you choose existing technology to do something that could also be accomplished with new ideas.

Know who else hates the idea of technical debt and slow, steady growth? That’s right: investors. They want flashy year-over-year numbers every quarter to prove they’re right. They want to see the stock price climb so they can eventually sell and invest in some other growth market. Gone are the days when people were proud to own a bit of a company and watch it prosper over time. Instead, investors have a vision that lasts about 90 days and is entirely focused on the bottom line. If you aren’t delivering growth, you don’t have value. It’s not even about making money anymore. You have to make more all the time.

The Broad Elephant

The two biggest shareholders of VMware would love to see it purchased for a ton of money. Given the current valuation north of $40 billion, Dell and Silver Lake would profit handsomely from a huge acquisition. That could be used to pay down even more debt and expand the market for Dell solutions. So they’re going to back it if the numbers look good.

The other side of this equation is the rest of the market, which thinks Broadcom’s acquisition is a terrible idea. Twitter is filled with analysts and industry experts talking about how bad this would be for VMware customers. The fact that Symantec and CA haven’t really been making news since their acquisitions lends credence to that assessment.

The elephant in the room is what happens if customers don’t want VMware to sell. Sure, if VMware is allowed to operate as an independent entity, like it was during the EMC Federation days, things are good. They’ll continue to offer support and keep customers happy. However, there is always the chance something will change down the road and force the status quo to shift. And that’s the thing no one wants to talk about. Does VMware reject a very good offer in favor of autonomy? They just got out from under a relationship with Dell Technologies that was odd, to say the least. Do they really want to get snapped up again right away?


Tom’s Take

My read on this is simple, but likely too simple. Broadcom is going to make a big offer based on its stock price so it can leverage equity. The market has already responded favorably to the rumors. Dell and Silver Lake will back the offer because they’d like the cash to change their leverage situation. Ultimately, regulators will step in and decide the deal. I’m betting the regulators say “no,” like they did with Qualcomm. When the numbers are this big, there are lots of eyeballs on the deal.

In the end, Broadcom may want VMware, and the biggest partners may be on board with it. But I think we’re going to see it fall apart: either the approvals will take so long that the stock price falls enough to make the deal no longer worth it, or the regulators will veto it outright. Either way, we’re going to be analyzing this for months to come.

Extreme-ly Interesting Times In Networking

If you’re a fan of Extreme Networks, the last few months have been pretty exciting for you. Just yesterday, it was announced that Extreme is buying the data center networking business of Brocade for $55 million once the Broadcom acquisition happens. Combined with the $100 million acquisition of Avaya’s campus networking portfolio on March 7th and the purchase of Zebra Wireless (nee Motorola) last September, Extreme is pushing itself into the market as a major player. How is that going to impact the landscape?

Building A Better Business

Extreme has been a player in the wireless space for a while. Their acquisition of Enterasys helped vault them into the mix with other big wireless players. Now, the rounding out of the portfolio helps them compete across the board. They aren’t just limited to playing with stadium wifi and campus technologies anymore. The campus networking story brought in through Avaya was a must to help them compete with Aruba, A Hewlett Packard Enterprise Company. Aruba owns the assets of HPE’s campus networking business and has been leveraging them effectively.

The data center play was an interesting one, to say the least. I’ve mused recently that Brocade’s data center business might end up lying fallow now that Arris has grabbed Ruckus. Brocade had some credibility in very large networks through VCS and the MLX router series, but outside of the education market and specialized SDN deployments it was rare to encounter them. Arista has really dug into Cisco’s market share here, and the rest of the players seem content to wait out that battle. Juniper is back in the carrier business, and the rest seem to be focusing on OCP and the pieces that flow logically from that, such as Six Pack, Backpack, and Whatever Facebook Thinks The Next Fast Switch Should Be Called That Ends In “Pack”.

Seeing Extreme come from nowhere to snap up the data center line from Brocade signals a new entrant into the data center crowd. Imagine, if you will, a mosh pit. Lots of people fighting for their own space to do their thing. Two people in the middle have decided to have an all-out fight over their space. Meanwhile, everyone else is standing around watching them. Finally, a new person enters the void of battle to do their thing on the side away from the fistfight that has captured everyone’s attention. This is where Extreme finds itself now.

Not Too Extreme

The key for Extreme now is to tell the “Full Stack” story to customers. Whereas before they had to hand off the high end to another “frenemy” and hope it didn’t come back to bite them, now Extreme can sell all the way up and down the stack. They have some interesting ideas about SDN that will bear watching as they build them into their portfolio. The integration of VCS will take some time, as the way Brocade does its fabric implementation is a bit different from the rest of the industry’s.

This is also a warning call for the rest of the industry. It’s time to get off the sidelines and choose your position. Arista and Cisco won’t be fighting forever. Cisco is also reportedly looking to create a new OS to bring some functionality to older devices, which means they can keep innovating while fighting their competitors. The winner of the Cisco-Arista battle is inconsequential to the rest of the industry right now. Either Arista will be wiped off the map and a stronger Cisco will pick a new enemy, or Arista will hurt Cisco and pull even with them in the data center market, leaving more market share for others to gobble up.

Extreme stands a very good chance of picking up customers with this approach. Customers that wouldn’t have considered them in the past will be lining up to see how Avaya campus gear integrates with Enterasys wireless and Brocade data center gear. It’s not all that different from the hodge-podge approach that many companies have taken for years to lower costs and avoid a single-vendor solution. Now, those lower-cost options are available in a single line of purple boxes.


Tom’s Take

Who knew we were going to get a new entrant into the Networking Wars for the tidy sum of $155 million? It feels like it should have cost more than that, but given the number of companies holding fire sales to divest things ahead of a pending acquisition or dissolution, it doesn’t come as much of a surprise. Someone had to buy these pieces and put them together. I think Extreme is going to turn some heads and make for some interesting conversations in the next few months. Don’t count them out just yet.

Facebook Wedge 100 – The Future of the Data Center?


Facebook is back in the news again. This time, it’s because of the release of their new Wedge 100 switch into the Open Compute Project (OCP). Wedge was already making headlines when Facebook announced it two years ago. A fast, open-sourced 40Gig Top-of-Rack (ToR) switch was huge. Now, Facebook is letting everyone in on the fun of a faster Wedge that has been deployed into production in Facebook data centers, as well as being offered for sale through Edgecore Networks, which is itself a division of Accton. Accton has been leading the way in the whitebox switching market, and Wedge 100 may be one of the ways it climbs to the top.

Holy Hardware!

Wedge 100 is pretty impressive from the spec sheet. They paid special attention to making sure the modules were expandable, especially for faster CPUs and special purpose devices down the road. That’s possible because Wedge is a highly specialized micro server already. Rather than rearchitecting the guts of the whole thing, Facebook kept the CPU and the monitoring stack and just put newer, faster modules on it to ramp to 32x100Gig connectivity.


Facebook is using the Broadcom Tomahawk as the base connectivity in the switch, which isn’t surprising. Tomahawk is the roadmap for all vendors to get to 100Gig. It also means that the downlink connectivity for these switches could conceivably work in 25/50Gig increments. And given the enormous amount of east/west traffic that Facebook generates, they have created a server platform called Yosemite that has 100Gig links as well. Given the probable backplane there, you can imagine the data that’s getting thrown around those data centers.
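
The lane math behind those increments is worth spelling out: Tomahawk’s 100Gig ports are each built from four 25Gbps SerDes lanes, so the same faceplate can be carved into 25Gig, 50Gig, or 100Gig ports. A quick back-of-the-napkin sketch (the breakout choices here are illustrative, not Facebook’s actual configuration):

```python
# Hypothetical breakout math for a 32x100Gig Tomahawk faceplate.
# Each 100Gig port is four 25Gbps SerDes lanes under the hood.
PORTS_100G = 32
LANES_PER_PORT = 4
LANE_SPEED_GBPS = 25

total_lanes = PORTS_100G * LANES_PER_PORT
aggregate_tbps = total_lanes * LANE_SPEED_GBPS / 1000

print(f"Aggregate bandwidth: {aggregate_tbps} Tbps")    # 3.2 Tbps
print(f"As 25Gig downlinks:  {total_lanes} ports")      # 128
print(f"As 50Gig downlinks:  {total_lanes // 2} ports") # 64
print(f"As 100Gig uplinks:   {PORTS_100G} ports")       # 32
```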

That’s not all. Omar Baldonado has said that they are looking at going to 400Gig connectivity soon. That’s the kind of mind blowing speed that you see in places like Google and Facebook. Remember that this hardware is built for a specific purpose. They don’t just have elephant flows. They have flows the size of an elephant herd. That’s why they fret about the operating temperature of optics or the rack design they want to use (standard versus Open Racks). Because every little change matters a thousand fold at that scale.

Software For The People

The other exciting announcement from Facebook was on the software front. Of course, FBOSS has been updated to work with Wedge 100. I found it very interesting in the press release that much of the programming in FBOSS went into interoperability with Wedge 40 and with fixing the hardware side of things. This makes some sense when you realize that Facebook didn’t need to spend a lot of time making Wedge 40 interoperate with anything, since it was a wholesale replacement. But Wedge 100 would need to coexist with Wedge 40 as the rollout happens, so making everything play nice is a huge point on the checklist.

The other software announcement that got the community talking was support for third-party operating systems running on Wedge 100. The first one up was Open Network Linux from Big Switch Networks. ONL ran on the original Wedge 40 and now runs on the Wedge 100. This means that if you’re familiar with running BSN OSes on your devices, you can drop in a Wedge 100 in your spine or fabric and be ready to go.

The second exciting announcement about software comes from a new company, Apstra. Apstra announced their entry into OCP and their intent to get their Apstra Operating System (AOS) running on Wedge 100 by next year. That has a big potential impact for Apstra customers that want to deploy these switches down the road. I hope to hear more about this from Apstra during their presentation at Networking Field Day 13 next month.


Tom’s Take

Facebook is blazing a trail for fast ToR switches. They’ve got the technical chops to build what they need and release the designs to the rest of the world to be used for a variety of ideas. Granted, your data center looks nothing like Facebook’s. But the ideas they are pioneering are having an impact down the line. If Open Rack catches on, you may see different ideas in data center standardization. If Six Pack catches on as a new chassis concept, it’s going to change spines as well.

If you want to get your hands dirty with Wedge, build a new 100Gig pod and buy one from Edgecore. The downlinks can break out into 10Gig and 25Gig links for servers, and knowing it can run ONL or Apstra AOS (eventually) gives you some familiar ground to start from. If it runs as fast as they say it does, it may be a better investment right now than waiting for Tomahawk II to come to your favorite vendor.


Tomahawk II – Performance Over Programmability


Broadcom announced a new addition to their growing family of merchant silicon today. The new Broadcom Tomahawk II is a monster. It doubles the speed of its first-generation predecessor. It has 6.4 Tbps of aggregate throughput, divided up into 256 25Gbps ports that can be combined into 128 50Gbps or even 64 100Gbps ports. That’s fast no matter how you slice it.
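
The arithmetic checks out: all three configurations are the same pool of 25Gbps SerDes lanes sliced into different port widths. A quick sanity check of the spec sheet:

```python
# Tomahawk II spec-sheet sanity check: 256 lanes of 25Gbps,
# grouped 1-, 2-, or 4-wide, always total 6.4 Tbps.
LANES = 256
LANE_SPEED_GBPS = 25

for lanes_per_port, label in [(1, "25Gbps"), (2, "50Gbps"), (4, "100Gbps")]:
    ports = LANES // lanes_per_port
    tbps = ports * lanes_per_port * LANE_SPEED_GBPS / 1000
    print(f"{ports:3d} x {label:>7} ports = {tbps} Tbps")
# 256 x  25Gbps ports = 6.4 Tbps
# 128 x  50Gbps ports = 6.4 Tbps
#  64 x 100Gbps ports = 6.4 Tbps
```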

Broadcom is aiming to push these switches into niches like High-Performance Computing (HPC) and massive data centers doing big data/analytics or video processing to start. The use cases for 25/50Gbps haven’t really changed. What Broadcom is delivering now is port density. I fully expect to see top-of-rack (ToR) switches running 25Gbps down to the servers with new add-in cards, connected over 50Gbps uplinks to the massive new Tomahawk II switches running in a spine or end-of-row (EoR) configuration for east-west traffic distribution.

Another curious fact about the Tomahawk II is the complete lack of 40Gbps support. Granted, that support was only paid lip service in the original Tomahawk. The real focus was on shifting to 25/50Gbps instead of the weird 10/40/100Gbps split we had in Trident II. I talked about this a couple of years ago and wasn’t very high on it back then, but I underestimated the level of apathy people had for 40Gbps uplinks. The push to 25/50Gbps has only been held up so far by the lack of new server NICs to enable the faster speeds. Now that those are starting to be produced in volume, expect 40Gbps uplinks to become a relic of the past.

A Foot In The Door

Not everyone is entirely happy about the new Broadcom Tomahawk II. I received an email today with a quote from Martin Izzard of Barefoot Networks, discussing their new Tofino platform. He said in part:

Barefoot led the way in June with the introduction of Tofino, the world’s first fully programmable switches, which also happen to be the fastest switches ever built.

It’s true that Tofino is very fast. It was the first 6.4 Tbps switch on the market. I talked a bit about it a few months ago. But I think that Barefoot is a bit off on its assessment here and has a bit of an axe to grind.

Barefoot is pushing something special with Tofino. They are looking to create a super fast platform with programmability. Tofino is not quite an FPGA and not quite an ASIC. It’s a switch stripped to its core and rebuilt around P4, a language all its own. That’s great if you’re a dev shop or a niche market that has to squeeze every ounce of performance out of a switch. In the world of cars, the best analogy would be looking at Tofino like a specialized sports car, say a Koenigsegg Agera. It’s very fast and very stylish, but it’s purpose-built to do one thing: drive really fast on pavement and carry two passengers.

Broadcom doesn’t really care about development shops. They don’t worry about niche markets. Because those users are not their customers. Their customers are Arista, Cisco, Brocade, Juniper and others. Broadcom really is the Intel of the switching world. Their platforms power vendor offerings. Buying a basic Tomahawk II isn’t something you’re going to be able to do. Broadcom will only sell these in huge lots to companies that are building something with them. To keep the car analogy, Tomahawk II is more like the old F-body cars produced by Chevrolet that later went on to become Camaros, Firebirds, and Trans Ams. Each of these cars was distinctive and had their fans, but the chassis was the same underneath the skin.

Broadcom wants everyone to buy their silicon and use it to power the next generation of switches. Barefoot wants to sell a specialist kit that is faster than anything else on the market, provided you’re willing to put in the time to learn P4 and strip out all the bits you feel are unnecessary. Your use case determines your hardware. That hasn’t changed, nor is it likely to change any time soon.


Tom’s Take

The data center will be 25/50/100Gbps top to bottom when the next switch refresh happens. It could even get there sooner if you want to move to a pod-based architecture instead of more traditional designs. The odds are very good that you’re going to be running Tomahawk or Tomahawk II, depending on which vendor you buy from. You’ll probably only be running something special like Tofino, or maybe even Cavium, if you’ve got a specific workload or architecture that needs the performance or programmability.

Don’t wait for the next round of hardware to come out before you have an upgrade plan. Write it now. Think about where you want to be in 4 years. Now double your requirements. Start investigating. Ask your vendor of choice what their plans are. If their plans stink, as their competitor. Get quotes. Get ideas. Be ready for the meeting when it’s scheduled. Make sure you’re ready to work with your management to bury the hatchet, not get a hatchet jobbed network.

Flash Needs a Highway


Last week at Storage Field Day 10, I got a chance to see Pure Storage and their new FlashBlade product. Storage is an interesting creature, especially now that flash memory technology is changing the way we think about high performance. Flash transformed the industry from slow spinning gyroscopes of rust into a flat-out drag race to see who could provide enough input/output operations per second (IOPS) to get to the moon and back.

Take a look at this video about the hardware architecture behind FlashBlade:

It’s pretty impressive. Very fast flash storage on blades that can outrun just about anything on the market. But this post isn’t really about storage. It’s about transport.

Life Is A Network Highway

Look at the backplane of the FlashBlade chassis. It’s not something custom or even typical for a unit like that. The key is when the presenter says that the architecture of the unit is more like a blade server chassis than a traditional SAN. In essence, Pure has taken the concept of a midplane and used it very effectively here. But their choice of midplane is interesting in this case.

Pure is using the Broadcom Trident II switch as the networking midplane for FlashBlade. That’s pretty telling from a hardware perspective. Trident II runs a large majority of the merchant-silicon-based switches on the market today. Broadcom is essentially becoming the Intel of the switch market, supplying arms to everyone that wants to build something quickly at low cost without doing custom research and development of their own silicon.

Using a Trident II in the backplane of the FlashBlade means that Pure evaluated the alternatives and found that putting something merchant-based in the midplane is cost effective and provides the performance profile necessary to meet the needs of flash storage. Saturating a backplane with IOPS can be done. But as we learned from Coho Data, it takes a lot of CPU horsepower to build a flash storage system that can saturate even 10Gig Ethernet links.
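
Some rough numbers show why. The IOPS needed to fill a 10Gig pipe depend almost entirely on block size; this quick sketch ignores protocol overhead, so real-world figures land somewhat lower:

```python
# How many IOPS does it take to saturate one 10Gig Ethernet link?
# Ignores Ethernet/IP/storage protocol overhead for simplicity.
LINK_GBPS = 10
link_bytes_per_sec = LINK_GBPS * 1e9 / 8   # 1.25 GB/s of payload

for block_kib in (4, 8, 64):
    iops = link_bytes_per_sec / (block_kib * 1024)
    print(f"{block_kib:2d} KiB blocks: ~{iops:,.0f} IOPS to fill the pipe")
# 4 KiB: ~305,176 IOPS; 8 KiB: ~152,588; 64 KiB: ~19,073
```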

I Am Speed

Using Trident II as a midplane or backplane for devices like this has huge implications. First and foremost, networking technology has a proven track record. If Trident II wasn’t a stable and reliable platform, no one would have used it in their products. And given that almost everyone in the networking space has a Trident platform for sale, it speaks volumes about reliability.

Second, Trident II is available. Broadcom is throwing these units off the assembly line as fast as they can. That means that there’s no worry about silicon shortages or plant shutdowns or any one of a number of things that can affect custom manufacturing. Even if a company wants to look at a custom fabrication, it could take months or even years to bring things online. By going with a reference design like Trident II, you can have your software engineers doing the hard work of building a system to support your hardware. That speeds time to market.

Third, Trident is a modular platform. That part can’t be overstated, even though I don’t think it was called out very much in the presentation from Pure. By having a midplane built as a removable module, it’s very easy to replace should problems arise. That’s the point of field replaceable units (FRUs). But in today’s market, it’s just as easy to create a system that can run multiple different platforms as well. The blade chassis idea extends equally to blades and mid- or backplanes.

Imagine being able to order a Tomahawk-based controller unit for FlashBlade that only requires you to swap the units at the back of the system. Now, that investment in 10Gig blade connectivity with 40Gig uplinks just became 25Gig blade connectivity with 100Gig uplinks to the network. All for the cost of two network controller blades. There may be some software that needs to be written to make the transition smooth for the consumers in the system, but the hardware is more than capable of supporting a change like that.
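
Here is a rough sketch of what that swap buys you. The blade and uplink counts are my own assumptions for illustration; the 2.5x ratio is the point:

```python
# Bandwidth uplift from swapping a Trident II-era controller (10Gig down,
# 40Gig up) for a Tomahawk-era one (25Gig down, 100Gig up).
# Blade and uplink counts are assumptions for illustration only.
BLADES = 15
UPLINKS = 4

generations = {
    "Trident II controller": (10, 40),
    "Tomahawk controller":   (25, 100),
}
for name, (down_gbps, up_gbps) in generations.items():
    print(f"{name}: {BLADES * down_gbps} Gbps to blades, "
          f"{UPLINKS * up_gbps} Gbps of uplink")
# Both budgets grow 2.5x with the same chassis and cabling plan.
```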


Tom’s Take

I was thrilled to see Pure Storage building a storage platform that tightly integrates with networking the way FlashBlade does. This is how the networking stack is going to be completely integrated with storage and compute. We should still look at things through the lens of APIs and programmability, but making networking a consistent transport layer for all things in the data center is a good start.

The funny thing about making something a consistent transport layer is that by design it has to be extensible. That means more and more folks are going to be driving those pieces into the network. Software can be created on top of this common transport to differentiate, much like we’re seeing with network operating systems right now. Even Pure was able to create a different kind of transport protocol to do the heavy lifting at low latency.

It’s funny that it took a presentation from a storage company to make me see the value of the network as something agnostic. Perhaps I just needed some perspective from the other side of the fence.

Intel and the Network Arms Race


Networking is undergoing a huge transformation. Software is surely a huge driver for enabling technology to grow by leaps and bounds and increase functionality. But the hardware underneath is growing just as much. We don’t seem to notice as much because the port speeds we deal with on a regular basis haven’t gotten much faster than the specs we read about years ago. But the chips behind the ports are where the real action is right now.

Fueling The Engines Of Forwarding

Intel has jumped into networking with both feet and is looking to land on someone. Their work on the Data Plane Development Kit (DPDK) is helping developers write code that is highly portable across CPU architectures. We used to deal with specific microprocessors in unique configurations. A good example is Dynamips.

Most everyone is familiar with this program or the projects it spawned, Dynagen and GNS3. Dynamips worked at first because it emulated the MIPS processor found in Cisco 7200 routers. It just happened that the software used the same code for those routers all the way up to the first releases of the 15.x train. Dynamips allowed for the emulation of Cisco router software, but it was very, very slow. It could barely process packets at all. And most of the advanced switching features didn’t work, because they relied on ASICs that weren’t emulated.

Running networking code on generic x86 processors doesn’t provide the kind of performance you need in a network switching millions of packets per second. That’s why DPDK helps developers accelerate their network packet forwarding to approach the levels of custom ASICs. This means that a company could write software for a switch using Intel CPUs as the base of the system and expect to get good performance out of it.
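
To put numbers on “millions of packets per second”: at line rate with minimum-size frames, a single 10Gig port leaves the forwarding path only tens of nanoseconds per packet, which is exactly the budget DPDK’s poll-mode, kernel-bypass approach is built to hit. The arithmetic:

```python
# Line-rate packet budget for one 10Gig port with minimum-size frames.
# On the wire: 64-byte frame + 8-byte preamble + 12-byte inter-frame gap.
LINK_BPS = 10e9
WIRE_BYTES = 64 + 8 + 12   # 84 bytes per packet on the wire

pps = LINK_BPS / (WIRE_BYTES * 8)
print(f"{pps / 1e6:.2f} Mpps at 64-byte line rate")  # 14.88 Mpps
print(f"{1e9 / pps:.1f} ns budget per packet")       # ~67.2 ns
```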

Not only can you write code that’s almost as good as the custom stuff network vendors are creating, but you can also have a relative assurance that the code will be portable. Look at the pfSense project. It can run on some very basic hardware. But the same code can also run on a Xeon if you happen to have one of those lying around. That performance boost means a lot more packet switching and processing. No modifications to the code needed. That’s a powerful way to make sure that your operating system doesn’t need radical modifications to work across a variety of platforms, from SMB and ROBO all the way to an enterprise core device.

Fighting The Good Fight

The other reason behind Intel’s drive to get DPDK to everyone is to fight off the advances of Broadcom. It used to be that the term merchant silicon meant using off-the-shelf parts instead of rolling your own chips. Now, it means “anything made by Broadcom that we bought instead of making”. Look at your favorite switching vendor and the odds are better than average that the chipset inside their most popular switches is a Broadcom Trident, Trident II, or even a Tomahawk. Yes, even the Cisco Nexus 9000 runs on Broadcom.

Broadcom is working their way to the position of arms dealer to the networking world. It soon won’t matter what switch wins because they will all be the same. That’s part of the reason for the major differentiation in software recently. If you have the same engine powering all the switches, your performance is limited by that engine. You also have to find a way to make yourself stand out when everything on the market has the exact same packet forwarding specs.

Intel knows how powerful it is to be the arms dealer in a market. They own the desktop, laptop, and server market space. Their only real competition is AMD, and one could be forgiven for arguing that the only reason AMD hasn’t gone under yet is a combination of video card sales and Intel making sure they won’t get in trouble for having a monopoly. But Intel also knows what it feels like to miss the boat on a chip transition. Intel missed the mobile device market, which is now ruled by ARM and custom SoC manufacturing. Intel needs to pull off a win in the networking space with DPDK to ensure that the switches running in the data center tomorrow are powered by x86, not Broadcom.


Tom’s Take

Intel’s on the right track to make some gains in networking. Their new Xeon chips with lots and lots of cores can do parallel processing of workloads. Their contributions to CoreOS will help the accelerate the adoption of containers, which are becoming a standard part of development. But the real value for Intel is helping developers create portable networking code that can be deployed on a variety of devices. That enables all kinds of new things to come, from system scaling to cloud deployment and beyond.