Tomahawk II – Performance Over Programmability

Broadcom announced a new addition to their growing family of merchant silicon today. The new Broadcom Tomahawk II is a monster. It doubles the speed of its first-generation predecessor. It has 6.4 Tbps of aggregate throughput, divided up into 256 25Gbps ports that can be combined into 128 50Gbps or even 64 100Gbps ports. That’s fast no matter how you slice it.
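
For the skeptics, the arithmetic works out the same no matter which port configuration you pick. A quick back-of-the-envelope sketch in Python, using only the numbers quoted above:

```python
# Check the Tomahawk II port math: every configuration adds up to 6.4 Tbps.
configs = {
    "256 x 25Gbps": (256, 25),
    "128 x 50Gbps": (128, 50),
    "64 x 100Gbps": (64, 100),
}

for name, (ports, gbps) in configs.items():
    total_tbps = ports * gbps / 1000
    print(f"{name:>14} -> {total_tbps:.1f} Tbps aggregate")
```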

Broadcom is aiming to push these switches into niches like High-Performance Computing (HPC) and massive data centers doing big data/analytics or video processing to start. The use cases for 25/50Gbps haven’t really changed. What Broadcom is delivering now is port density. I fully expect to see top-of-rack (ToR) switches running 25Gbps down to servers fitted with new add-in cards, connected by 50Gbps uplinks to the massive new Tomahawk II switches running in the spine or end-of-row (EoR) position for east-west traffic distribution.
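
To put some rough numbers on that design, here is a quick oversubscription sketch. The port counts are hypothetical, chosen only to show the shape of the math, not pulled from any vendor’s data sheet:

```python
# Rough oversubscription math for a 25Gbps ToR feeding 50Gbps uplinks.
# The port counts (48 access, 6 uplinks) are assumptions for illustration.
access_ports, access_gbps = 48, 25   # 25Gbps down to the servers
uplink_ports, uplink_gbps = 6, 50    # 50Gbps up toward the Tomahawk II spine/EoR

downstream = access_ports * access_gbps  # 1200 Gbps of server-facing capacity
upstream = uplink_ports * uplink_gbps    # 300 Gbps toward the spine

ratio = downstream / upstream
print(f"{downstream} Gbps down, {upstream} Gbps up: {ratio:.0f}:1 oversubscription")
```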

Another curious fact about the Tomahawk II is the complete lack of 40Gbps support. Granted, that support was only paid lip service in the original Tomahawk. The real focus was on shifting to 25/50Gbps instead of the weird 10/40/100Gbps split we had in Trident II. I talked about this a couple of years ago and wasn’t very high on it back then, but I didn’t appreciate the level of apathy people had for 40Gbps uplinks. The push to 25/50Gbps has only been held up so far by the lack of new server NICs to enable the faster speeds. Now that those are starting to ship in volume, expect 40Gbps uplinks to become a relic of the past.

A Foot In The Door

Not everyone is entirely happy about the new Broadcom Tomahawk II. I received an email today with a quote from Martin Izzard of Barefoot Networks, discussing their new Tofino platform. He said in part:

Barefoot led the way in June with the introduction of Tofino, the world’s first fully programmable switches, which also happen to be the fastest switches ever built.

It’s true that Tofino is very fast. It was the first 6.4 Tbps switch on the market. I talked a bit about it a few months ago. But I think Barefoot is a bit off in its assessment here and has an axe to grind.

Barefoot is pushing something special with Tofino. They are looking to create a super fast platform with programmability. Tofino is not quite an FPGA and it’s not a traditional ASIC. It’s a switch stripped to its core and rebuilt around a language all its own: P4. That’s great if you’re a dev shop or a niche market that has to squeeze every ounce of performance out of a switch. In the world of cars, the best analogy would be looking at Tofino like a specialized sports car such as a Koenigsegg Agera. It’s very fast and very stylish, but it’s purpose built to do one thing: drive really fast on pavement and carry two passengers.

Broadcom doesn’t really care about development shops. They don’t worry about niche markets, because those users are not their customers. Their customers are Arista, Cisco, Brocade, Juniper, and others. Broadcom really is the Intel of the switching world: their platforms power vendor offerings. Buying a bare Tomahawk II isn’t something you’re going to be able to do. Broadcom will only sell these in huge lots to companies that are building something with them. To keep the car analogy going, Tomahawk II is more like GM’s old F-body platform that underpinned the Camaro, Firebird, and Trans Am. Each of those cars was distinctive and had its fans, but the chassis was the same underneath the skin.

Broadcom wants everyone to buy their silicon and use it to power the next generation of switches. Barefoot wants a specialist kit that is faster than anything else on the market, provided you’re willing to put the time into learning P4 and stripping out all the bits they feel are unnecessary. Your use case determines your hardware. That hasn’t changed, nor is it likely to change any time soon.


Tom’s Take

The data center will be 25/50/100Gbps top to bottom when the next switch refresh happens. It could even get there sooner if you want to move to a pod-based architecture instead of more traditional designs. The odds are very good that you’re going to be running Tomahawk or Tomahawk II depending on which vendor you buy from. You’re probably only going to be running something special like Tofino, or maybe even Cavium, if you’ve got a specific workload or architecture that needs the extra performance or programmability.

Don’t wait for the next round of hardware to come out before you have an upgrade plan. Write it now. Think about where you want to be in four years. Now double your requirements. Start investigating. Ask your vendor of choice what their plans are. If their plans stink, ask their competitors. Get quotes. Get ideas. Be ready for the meeting when it’s scheduled. Make sure you’re ready to work with your management to bury the hatchet, not end up with a hatchet job of a network.

The 25GbE Datacenter Pipeline

SDN may have made networking more exciting by making hardware less important than it has been in the past, but that’s not to say that hardware isn’t important at all. The certainty with which new hardware will come out and make things a little bit faster than before is right up there with death and taxes. One of the big announcements yesterday from Hewlett Packard Enterprise (HPE) during HPE Discover was support for a new 25GbE / 100GbE switch architecture built around the FlexFabric 5950 and 12900 products. This may be the tipping point for 25GbE.

The Speeds of the Many

I haven’t always been high on 25GbE. Almost two years ago I couldn’t see the point. Things haven’t gotten much different in the last 24 months from a speed perspective. So why the change now? What makes this 25GbE offering any different from the nascent ideas presented by Arista?

First and foremost, the 25GbE released by HPE this week is based on the Broadcom Tomahawk chipset. When 25GbE was first presented, it was a collection of vendors trying to convince you to upgrade to their slightly faster Ethernet. But in the past two years, most of the merchant offerings on the market have coalesced around using Broadcom as the primary chipset. That means that odds are good your favorite switching platform is running Trident 2 or Trident 2+ under the hood.

With Broadcom backing the silicon, that means wider adoption of the specification. Why would anyone buy 25GbE from Brocade or Dell or HPE if the only vendor supporting it was that vendor of choice? If you can’t ever be certain that you’ll have support for the hardware in three or five years’ time, making an investment today seems silly. Broadcom’s backing means that eventually everyone will be adopting 25GbE.

Likewise, one of my other impediments to adoption was the lack of server NICs to ramp hosts to 25GbE. Having fast access ports means nothing if the servers can’t take advantage of them. HPE addressed this with the release of FlexFabric networking adapters that can run 25GbE. More importantly, those adapters (and switches) can run at 10GbE as well. This means that adoption of higher bandwidth is no longer an all-or-nothing proposition. You don’t have to abandon your existing investment to get to 25GbE right away. You don’t have to build a lab pod to test things and then sneak it into production. You can just buy a 5950 today and clock the ports down to 10GbE while you await the availability and purchasing cycle to buy 25GbE NICs. Then you can flip some switches in the next maintenance window and be running at 25GbE speeds. And you can leave some ports enabled at 10GbE to ensure that there is maximum backwards compatibility.
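
As a rough illustration of that staged approach, here is a minimal sketch of the clock-down-now, flip-later logic. The host names and NIC speeds are made up for the example:

```python
# Staged 25GbE migration sketch: run each access port at whatever the attached
# NIC supports today, then flip to 25GbE once the new adapters are installed.
# The inventory below is hypothetical.
switch_port_max_gbe = 25                                  # e.g. a 5950 access port
nic_inventory = {"esx01": 10, "esx02": 10, "esx03": 25}   # GbE per host NIC

def port_speed(nic_gbe, port_max=switch_port_max_gbe):
    """Clock the port to the lower of what the NIC and the switch support."""
    return min(nic_gbe, port_max)

for host, nic_gbe in nic_inventory.items():
    print(f"{host}: NIC {nic_gbe}GbE, port clocked to {port_speed(nic_gbe)}GbE")

# When the 25GbE NICs arrive, update nic_inventory and rerun; the same logic
# brings those ports up to 25GbE at the next maintenance window.
```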

The Needs of the Few

Odds are good that 25GbE isn’t going to be right for you today. HPE is even telling people that 25GbE only really makes sense in a few deployment scenarios, among which are large container-based hosts running thousands of virtual apps, flash storage arrays that use Ethernet as a backplane, or specialized high-performance computing (HPC) tricks with RDMA and such. That means the odds are good that you won’t need 25GbE first thing tomorrow morning.

However, the need for 25GbE is going to be there. As applications grow more bandwidth hungry and data centers keep shrinking in footprint, the network hardware you do have left needs to work harder and faster to accomplish more with less. If the network really is destined to become a faceless underlay that serves as a utility for applications, it needs to run flat-out fast to ensure that developers can’t start blaming their utility company for problems. Multi-core server architectures and flash storage have solved two of the three legs of this problem. 25GbE host connectivity, and the 100GbE backbone connectivity tied to it, solves the third leg so everything balances properly.

Don’t look at 25GbE as an immediate panacea for your problems. Instead, put it on a timeline with your other server needs and see what the adoption rate looks like going forward. If server NICs are bought in large quantities, that will drive manufacturers to push the technology onto the server boards. If there is enough need for connectivity at these speeds, switch vendors will adopt Tomahawk chipsets more widely. That cycle will push things forward much faster than the 10GbE / 40GbE marathon that’s been going on for the past six years.


Tom’s Take

I think HPE is taking a big leap with 25GbE. Until the Dell/EMC merger is completed, Dell won’t be in a position to adopt Tomahawk quickly in the Force10 line. That means the need to grab 25GbE server NICs won’t materialize if there’s nothing to connect them to. Cisco won’t care either way so long as switches are purchased, and the other networking vendors don’t sell servers. So that leaves HPE to either push this forward or fall off the edge of the cliff. Time will tell how this will all work out, but it would be nice to see HPE get a win here and make the network the least of application developers’ problems.

Disclaimer

I was a guest of Hewlett Packard Enterprise for HPE Discover 2016. They paid for my travel, hotel, and meals during the event. While I was briefed on the solution discussed here and many others, there was no expectation of coverage of the topics discussed. HPE did not ask for, nor were they guaranteed any consideration in the writing of this article. The conclusions and analysis contained herein are mine and mine alone.

Flash Needs a Highway

Last week at Storage Field Day 10, I got a chance to see Pure Storage and their new FlashBlade product. Storage is an interesting creature, especially now that we’ve got flash memory technology changing the way we think about high performance. Flash transformed the industry from slow spinning gyroscopes of rust into a flat out drag race to see who could provide enough input/output operations per second (IOPS) to get to the moon and back.

Take a look at this video about the hardware architecture behind FlashBlade:

It’s pretty impressive. Very fast flash storage on blades that can outrun just about anything on the market. But this post isn’t really about storage. It’s about transport.

Life Is A Network Highway

Look at the backplane of the FlashBlade chassis. It’s not something custom or even typical for a unit like that. The key is when the presenter says that the architecture of the unit is more like a blade server chassis than a traditional SAN. In essence, Pure has taken the concept of a midplane and used it very effectively here. But their choice of midplane is interesting in this case.

Pure is using the Broadcom Trident II switch as the networking midplane for FlashBlade. That’s pretty telling from a hardware perspective. Trident II powers the large majority of merchant silicon based switches on the market today. Broadcom is essentially becoming the Intel of the switch market. They are supplying arms to everyone that wants to build something quickly at low cost without doing custom research and development of their own silicon.

Using a Trident II in the backplane of the FlashBlade means that Pure evaluated all the alternatives and found that putting something merchant-based in the midplane is cost effective and provides the performance profile necessary to meet the needs of flash storage. Saturating backplanes with IOPS can be accomplished. But as we learned from Coho Data, it takes a lot of CPU horsepower to create a flash storage system that can saturate 10Gig Ethernet links.
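
Some rough math shows why. Assuming a 4KB I/O size and ignoring protocol overhead (both simplifications), filling a single 10Gig link takes on the order of 300,000 IOPS:

```python
# Approximate IOPS needed to saturate a 10Gig Ethernet link with 4KB I/O.
# Block size is an assumption; framing and protocol overhead are ignored.
link_gbps = 10
block_bytes = 4096

link_bytes_per_sec = link_gbps * 1e9 / 8        # 1.25 GB/s of raw line rate
iops_needed = link_bytes_per_sec / block_bytes  # ~305,000 IOPS

print(f"~{iops_needed:,.0f} IOPS of 4KB I/O to fill a {link_gbps}GbE link")
```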

I Am Speed

Using Trident II as a midplane or backplane for devices like this has huge implications. First and foremost, networking technology has a proven track record. If Trident II wasn’t a stable and reliable platform, no one would have used it in their products. And given that almost everyone in the networking space has a Trident platform for sale, it speaks volumes about reliability.

Second, Trident II is available. Broadcom is throwing these units off the assembly line as fast as they can. That means that there’s no worry about silicon shortages or plant shutdowns or any one of a number of things that can affect custom manufacturing. Even if a company wants to look at a custom fabrication, it could take months or even years to bring things online. By going with a reference design like Trident II, you can have your software engineers doing the hard work of building a system to support your hardware. That speeds time to market.

Third, Trident is a modular platform. That part can’t be overstated, even though I think it wasn’t called out very much in the presentation from Pure. By having a midplane that is built as a removable module, it’s very easy to replace it should problems arise. That’s the point of field replaceable units (FRUs). But in today’s market, it’s just as easy to create a system that can run multiple different platforms as well. The blade chassis idea extends equally to both blades and mid or backplanes.

Imagine being able to order a Tomahawk-based controller unit for FlashBlade that only requires you to swap the units at the back of the system. Now, that investment in 10Gig blade connectivity with 40Gig uplinks just became 25Gig blade connectivity with 100Gig uplinks to the network. All for the cost of two network controller blades. There may be some software that needs to be written to make the transition smooth for the consumers in the system, but the hardware is more than capable of supporting a change like that.


Tom’s Take

I was thrilled to see Pure Storage building a storage platform that tightly integrates with networking the way that FlashBlade does. This is how the networking stack is going to be completely integrated with storage and compute. We should still look at things through the lens of APIs and programmability, but making networking a consistent transport layer for all things in the datacenter is a good start.

The funny thing about making something a consistent transport layer is that by design it has to be extensible. That means more and more folks are going to be driving those pieces into the network. Software can be created on top of this common transport to differentiate, much like we’re seeing with network operating systems right now. Even Pure was able to create a different kind of transport protocol to do the heavy lifting at low latency.

It’s funny that it took a presentation from a storage company to make me see the value of the network as something agnostic. Perhaps I just needed some perspective from the other side of the fence.

Intel and the Network Arms Race

Networking is undergoing a huge transformation. Software is surely a huge driver for enabling technology to grow by leaps and bounds and increase functionality. But the hardware underneath is growing just as much. We don’t seem to notice as much because the port speeds we deal with on a regular basis haven’t gotten much faster than the specs we read about years ago. But the chips behind the ports are where the real action is right now.

Fueling The Engines Of Forwarding

Intel has jumped into networking with both feet and is looking to land on someone. Their work on the Data Plane Development Kit (DPDK) is helping developers write code that is highly portable across CPU architectures. We used to deal with specific microprocessors in unique configurations. A good example is Dynamips.

Most everyone is familiar with this program or the projects it spawned, Dynagen and GNS3. Dynamips worked at first because it emulated the MIPS processor found in Cisco 7200 routers. It just so happened that the software used the same code for those routers all the way up to the first releases of the 15.x train. Dynamips allowed for the emulation of Cisco router software, but it was very, very slow. It could barely process packets. And most of the advanced switching features didn’t work at all thanks to ASICs.

Running networking code on generic x86 processors doesn’t provide the kind of performance that you need in a network switching millions of packets per second. That’s why DPDK is helping developers accelerate their network packet forwarding to approach the levels of custom ASICs. This means that a company could write software for a switch using Intel CPUs as the base of the system and expect to get good performance out of it.
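
To put a number on “millions of packets per second,” here is the standard line-rate calculation for minimum-size frames. The 64-byte frame plus 20 bytes of preamble and inter-frame gap is straight out of the Ethernet spec:

```python
# Line rate in packets per second for minimum-size (64-byte) Ethernet frames.
# Each frame also occupies 20 bytes of preamble and inter-frame gap on the wire.
def line_rate_mpps(link_gbps, frame_bytes=64, overhead_bytes=20):
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6

for speed_gbps in (10, 25, 100):
    print(f"{speed_gbps:>3}GbE: ~{line_rate_mpps(speed_gbps):.2f} Mpps")

# 10GbE works out to ~14.88 Mpps, roughly 67 nanoseconds per packet, which is
# why DPDK leans on techniques like poll-mode drivers and batching instead of
# the kernel's interrupt-driven path.
```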

Not only can you write code that’s almost as good as the custom stuff network vendors are creating, but you can also have a relative assurance that the code will be portable. Look at the pfSense project. It can run on some very basic hardware. But the same code can also run on a Xeon if you happen to have one of those lying around. That performance boost means a lot more packet switching and processing. No modifications to the code needed. That’s a powerful way to make sure that your operating system doesn’t need radical modifications to work across a variety of platforms, from SMB and ROBO all the way to an enterprise core device.

Fighting The Good Fight

The other reason behind Intel’s drive to get DPDK to everyone is to fight off the advances of Broadcom. It used to be that the term merchant silicon meant using off-the-shelf parts instead of rolling your own chips. Now, it means “anything made by Broadcom that we bought instead of making”. Look at your favorite switching vendor and the odds are better than average that the chipset inside their most popular switches is a Broadcom Trident, Trident 2, or even a Tomahawk. Yes, even the Cisco Nexus 9000 runs on Broadcom.

Broadcom is working their way to the position of arms dealer to the networking world. It soon won’t matter what switch wins because they will all be the same. That’s part of the reason for the major differentiation in software recently. If you have the same engine powering all the switches, your performance is limited by that engine. You also have to find a way to make yourself stand out when everything on the market has the exact same packet forwarding specs.

Intel knows how powerful it is to become the arms dealer in a market. They own the desktop, laptop, and server market space. Their only real competition is AMD, and one could be forgiven for arguing that the only reason AMD hasn’t gone under yet is through a combination of video card sales and Intel making sure they won’t get in trouble for having a monopoly. But Intel also knows what it feels like to miss the boat on a chip transition. Intel missed the mobile device market, which is now ruled by ARM and custom SoC manufacturing. Intel needs to pull off a win in the networking space with DPDK to ensure that the switches running in the data center tomorrow are powered by x86, not Broadcom.


Tom’s Take

Intel’s on the right track to make some gains in networking. Their new Xeon chips with lots and lots of cores can do parallel processing of workloads. Their contributions to CoreOS will help accelerate the adoption of containers, which are becoming a standard part of development. But the real value for Intel is helping developers create portable networking code that can be deployed on a variety of devices. That enables all kinds of new things to come, from system scaling to cloud deployment and beyond.