Cisco vs. Arista: Shades of Gray


Yesterday was D-Day for Arista in their fight with Cisco over the SysDB patent. I’ve covered this a bit for Network Computing in the past, but I wanted to cover some new things here and put a bit more opinion into my thoughts.

Cisco Designates The Competition

As the great Stephen Foskett (@SFoskett) says, you always have to punch above your weight. When you are a large company, any attempt to pick on the “little guy” looks bad. When you’re at the top of the market it’s even tougher. If you attempt to fight back against anyone you’re going to legitimize them in the eye of everyone else wanting to take a shot at you.

Cisco has effectively designated Arista as their number one competitor by way of this lawsuit. Arista represents a larger threat than HPE, Brocade, or Juniper. Yes, I agree that it is easy to argue that the infringement constituted a material problem for their business. But at the same time, Cisco very publicly just said that Arista is causing a problem for Cisco. Enough of a problem that Cisco is going to take them to court. Not make Arista license the patent. That’s telling.

Also, Cisco’s route of going through the ITC looks curious. Why not try to get damages in court instead of making the ITC ban them from importing devices? I thought about this for a while and realized that even if there was a court case pending it wouldn’t work to Cisco’s advantage in the short term. Cisco doesn’t just want to prove that Arista is copying patents. They want to hurt Arista. That’s why they don’t want to file an injunction to get the switches banned. That could take years and involve lots of appeals. Instead, the ITC can simply say “You cannot bring these devices into the country”, which effectively bans them.

Cisco has gotten what it wants short term: Arista is going to have to make changes to SysDB to get around the patent. They are going to have to ramp up domestic production of devices to get around the import ban. Their train of development is disrupted. And Cisco’s general counsel gets to write lots of posts about how they won.

Yet, even if Arista did blatantly copy the SysDB stuff and run with it, now Cisco looks like the 800-pound gorilla stomping on the little guy through the courts. Not by making better products. Not by innovating and making something that eclipses the need for software like this. No, Cisco won by playing the playground game of “You stole my idea and I’m going to tell!!!”

Arista’s Three-Edged Sword

Arista isn’t exactly coming out of this on the moral high ground either. Arista has gotten a black eye from a lot of the quotes being presented as evidence in this case. Ken Duda said that Arista “slavishly copied” Cisco’s CLI. There have been other comments about “secret sauce” and the way that SysDB is used. A few have mentioned to me privately that the copying done by Arista was pretty blatant.

Understanding is a three-edged sword: Your side, their side, and the truth.

Arista’s side is that they didn’t copy anything important. Cisco’s side is that EOS has enough things that have been copied that it should be shut down and burned to the ground. The truth lies somewhere in the middle of it all.

Arista didn’t copy everything from IOS. They hired people who worked on IOS and likely saw things they’d like to implement. Those people took ideas and ran with them to come up with a better solution. Those ideas may or may not have come from things that were worked on at Cisco. But if you hire a bunch of employees from a competitor, how do you ensure that their ideas aren’t coming from something they were supposed to have “forgotten”?

Arista most likely did what any other company in that situation would do: they gambled. Maybe SysDB was more copied than created. But so long as Arista made money without becoming more than a blip on Cisco’s radar, the gamble paid off. That’s telling. Watch this video from 4:40 to about 6:40:

Doug Gourlay said something that has stuck with me for the last four years: “Everyone that ever set out to compete against Cisco and said, ‘We’re going to do it and be everything to everyone’ has failed. Utterly.”

Arista knew exactly which market they wanted to attack: 10Gig and 40Gig data center switches. They made the best switch they could with the best software they could and attacked that market with all the force they could muster. But the gamble would eventually have to either pay off or come due. Arista had to know at some point that a strategy shift would bring them into the crosshairs of Cisco. And Cisco doesn’t forgive if you take what’s theirs. Even if, and I’m quoting from both a Cisco 10-K from 1996 and a 2014 Annual Report:

[It is] not economically practical or even possible to determine in advance whether a product or any of its components infringes or will infringe on the patent rights of others.

So Arista built the best switch they could with the knowledge that some of their software may not have been 100% clean. Maybe they had plans to clean it up later. Or iterate it out of existence. Who knows? Now, Arista has to face up to that choice and make some new ones to keep selling their products. Whether or not they intended to fight the 800-pound gorilla of networking at the start, they certainly stumbled into a fight here.


Tom’s Take

I’m not a lawyer. I don’t even pretend to be one. I do know that the fate of a technology company now rests in the hands of non-technical people who are very good at wringing nuance out of words. Tech people would look at this and shake their heads. Did Arista copy something? Probably. Was it something Cisco wanted copied? Probably not. Should Cisco have unloaded the legal equivalent of a thermonuclear warhead on them? Doubtful.

Cisco is punishing Arista to ensure no one ever copies their ideas again. As I said before, the outcome of this case will doom the Command Line Interface. No one is going to want to tangle with Cisco again. Which also means that no one is going to want to develop things along the Cisco way again. Which means Cisco is going to be less relevant in the minds of networking engineers as REST APIs and other programming architectures become more important than remembering to type conf t every time.

Arista will survive. They will make changes that mean their switches will live on for customers. Cisco will survive. They get to blare their trumpets and tell the whole world they vanquished an unworthy foe. But the battle isn’t over yet. And instead of it being fought over patents, it’s going to be fought as the industry moves away from CLI and toward a model that doesn’t favor those who don’t innovate.

Repeat After Me


Thanks to Tech Field Day, I fly a lot. As Southwest is my airline of choice and I have status, I tend to find myself sitting in the slightly more comfortable exit row seating. One of the things that any air passenger sitting in the exit row knows by heart is the exit row briefing. You must listen to the flight attendant brief you on the exit door operation and the evacuation plan. You are also required to answer with a verbal acknowledgment.

I know that the verbal acknowledgment is required by federal law. I’ve also seen some people blatantly disregard the need to verbally accept responsibility for their seating choice, leading to some hilarious stories. But it also made me think about why making people talk to you is the best way to make them understand what you’re saying.

Sotto Voce

Today’s society is full of distractions, from auto-play videos on Facebook to Pokemon Go parks at midnight, each designed to capture a human’s attention for a few fleeting seconds. Even playing a mini-trailer before a movie trailer is designed to capture someone’s attention for a moment. That’s fine in a world where distraction is assumed and people try to multitask several different streams of information at once.

People are also competing for our ears. Pleasant voices are everywhere trying to sell us things. High-volume voices are trying to cut through the din to sell us even more things. People walk around with headphones in most of the day. Those that don’t pollute what’s left with cute videos that sound like they were recorded in an echo chamber.

This has also led to a larger amount of non-verbal behavior being misinterpreted. I’ve experienced this myself on many occasions. People distracted by a song on their phone or thinking about lyrics in their mind may nod or shake their head in rhythm. If you ask them a question just before the “good part” and they don’t hear you clearly, they may appear to agree or disagree even though they don’t know what they just heard.

Even worse is when you ask someone to do something for you and they agree, only to turn around and ask, “What was it you wanted again?” or “Sorry, I didn’t catch that.” It’s become acceptable in society to agree to things without understanding their meaning. This leads to breakdowns in communication and pieces of the job left undone: you assume someone is going to do something because they agreed, yet they never understood what they were supposed to do.

Fortississimo!

I’ve found that the most effective way to get someone to understand what you’ve told them is to ask them to repeat it back in their own words. It may sound a bit silly to hear back what you just told them, but think about the steps they must go through:

  • They have to stop for a moment and think about what you said.
  • They then have to internalize the concepts so they understand them.
  • They then must repeat back to you those concepts in their own phrasing.

Those three steps mean that the ideas behind what you are stating or asking must be considered for a period of time. It means that the ideas will register and be remembered because they were considered when repeating them back to the speaker.

Think about this in a troubleshooting example. A junior admin is supposed to go down the hall and tell you when a link light comes for port 38. If the admin just nods and doesn’t pay attention, ask them to repeat those instructions back. The admin will need to remember that port 38 is the right port and that they need to wait until the link light is on before saying something. It’s only two pieces of information, but it does require thought and timing. By making the admin repeat the instructions, you make sure they have them down right.


Tom’s Take

Think about all the times recently when someone has repeated something back to you. A food order or an amount of money given to you to pay for something. Perhaps it was a long list of items to accomplish for an event or a task. Repetition is important to internalize things. It builds neural pathways that force the information into longer-term memory. That’s why a couple of seconds of repetition are time well invested.

Networking Needs Information, Not Data


Networking Field Day 12 starts today. There are a lot of great presenters lined up. As I talk to more and more networking companies, it’s becoming obvious that simply moving packets is not the way to go now. Instead, the real sizzle is in telling you all about those packets instead. Not packet inspection but analytics.

Tell Me More, Tell Me More

Ask any networking professional and they’ll tell you that the systems they manage have a wealth of information. SNMP can give you monitoring data for a set of points defined in database files. Other protocols like NetFlow or sFlow can give you more granular data about a particular packet group or data flow in your network. Even more advanced projects like Intel’s Snap are building on the idea of using telemetry to collect disparate data sources and build collection methodologies to do something with them.

The concern that becomes quickly apparent is the overwhelming amount of data being received from all these sources. It reminds me a bit of this scene:

How can you drink from this firehose? Maybe the better question is whether you should be drinking from it at all.

Order From Chaos

Data is useless. We need to perform analysis on it to get information. That’s where a new wave of companies is coming into the networking market. They are building on the frameworks and systems that are aggregating data and presenting it in a way that makes it useful information. Instead of random data points about NetFlow, these solutions tell you that you’ve got a huge problem with outbound traffic of a specific type that is sent at a specific time with a specific payload. The difference is that instead of sorting through data to make sense of it, you’ve got a tool delivering the analysis instead of the raw data.
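The jump from raw data to ranked information can be sketched in a few lines of Python. Everything here is invented for illustration (the flow records, the field layout, and the `top_talkers` helper are not any vendor’s product), but the shape of the analysis is the same: collapse many raw NetFlow-style records into a short, sorted answer.

```python
from collections import defaultdict

# Hypothetical flow records as (src, dst, dst_port, bytes) tuples,
# standing in for raw NetFlow/sFlow export data.
flows = [
    ("10.0.0.5", "203.0.113.9", 443, 1200),
    ("10.0.0.5", "203.0.113.9", 443, 90000),
    ("10.0.0.7", "198.51.100.2", 22, 400),
    ("10.0.0.5", "203.0.113.9", 443, 150000),
]

def top_talkers(flows, n=3):
    """Collapse raw flow records into per-conversation byte totals,
    sorted so the biggest consumers surface first."""
    totals = defaultdict(int)
    for src, dst, port, nbytes in flows:
        totals[(src, dst, port)] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

for convo, nbytes in top_talkers(flows):
    print(convo, nbytes)
```

The raw list tells you nothing at a glance; the sorted summary immediately says which conversation is eating the bandwidth. That reduction is the whole difference between data and information.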

Sometimes it’s as simple as color-coding lines of Wireshark captures. Resets are bad, so they show up red. Properly torn down connections are good, so they are green. You can instantly tell how well things are going by looking for the colors. That’s analysis from raw data. The real trick in modern network monitoring is to find a way to analyze and provide context for massive amounts of data that may not have an immediate correlation.
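That color-coding trick is simple enough to sketch. This toy classifier is a hypothetical stand-in, not Wireshark’s actual coloring-rule engine, but it shows how one trivial rule turns raw flag data into at-a-glance analysis:

```python
# Map the TCP flags seen on a packet to a display color, in the spirit of
# Wireshark coloring rules. Flag sets and color names are illustrative.
def color_for(flags):
    if "RST" in flags:
        return "red"      # resets usually signal trouble
    if "FIN" in flags:
        return "green"    # orderly connection teardown
    return "default"      # nothing noteworthy

for pkt in [{"SYN"}, {"FIN", "ACK"}, {"RST", "ACK"}]:
    print(sorted(pkt), "->", color_for(pkt))
```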

Networking professionals are smart people. They can intuit a lot of potential issues from a given data set. They can make the logical leap to a specific issue given time. What reduces that ability is the sheer amount of things that can go wrong with a particular system and the speed at which those problems must be fixed, especially at scale. A hiccup on one end of the network can be catastrophic on the others if allowed to persist.

Analytics can give us the context we need. It can provide confidence levels for common problems. It can ensure that symptoms are indeed happening above a given baseline or threshold. It can help us narrow the symptoms and potential issues before we even look at the data. Analytics can exclude the impossible while highlighting the more probable causes and outcomes. Analytics can give us peace of mind.
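One of those checks, flagging a symptom only when it rises above an established baseline, takes little more than a mean and a standard deviation. The sample history and the three-sigma threshold below are illustrative assumptions, not a recommendation for any particular metric:

```python
import statistics

def exceeds_baseline(history, sample, sigmas=3.0):
    """Return True only when a new sample sits well above the baseline:
    the mean of past readings plus `sigmas` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return sample > mean + sigmas * stdev

# Hypothetical interface throughput samples in Mbps.
history = [100, 102, 98, 101, 99, 100, 103, 97]

print(exceeds_baseline(history, 104))  # False: within normal variation
print(exceeds_baseline(history, 160))  # True: well above baseline, worth an alert
```

A real analytics system would track baselines per metric and per time of day, but the principle is the same: alert on deviation from normal, not on raw values.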


Tom’s Take

Analytics isn’t doing our job for us. Instead, it’s giving us the ability to concentrate. Anyone that spends their time sifting through data to try and find patterns is losing the signal in the noise. Patterns are things that software can find easily. We need to leverage the work being put into network analytics systems to help us track down issues before they blow up into full problems. We need to apply what makes network professionals best suited to this work: judgment, applied to the best information we can gather about a situation. Our concentration on what matters is where our job will be in five years. Let’s take the knowledge we have and apply it.

Ten Years of Cisco Live – Community Matters Most of All


Hey! I made the sign pic this year!

I’ve had a week to get over my Cisco Live hangover this year. I’ve been going to Cisco Live for ten years and been involved in the social community for five of them. And I couldn’t be prouder of what I’ve seen. As the picture above shows, the community is growing by leaps and bounds.

People Are What Matter


I was asked many, many times about Tom’s Corner. What was it? Why was it important? Did you really start it? The real answer is that I’m a bit curious. I want to meet people. I want to talk to them and learn their stories. I want to understand what drives people to learn about networking or wireless or fax machines. Talking to a person is one of the best parts of my job, whether it be my Bruce Wayne day job or my Batman night job.

Social media helps us all stay in touch when we aren’t face-to-face, but meeting people in real life is just as important. You know who likes to hug. You find out who tells good stories. Little things matter, like finding out how tall someone is in real life. You don’t get that unless you find a way to meet them in person.


Hugging Denise Fishburne

Technology changes every day. We change from hardware to software and back again. Routers give way to switches. Fabrics rise. Analytics tell all. But all this technology still has people behind it. Those people make the difference. People learn and grow and change. They figure out how to make SDN work today after learning ISDN and Frame Relay yesterday. They have the power to expand beyond their station and be truly amazing.

Conferences Are Still King

Cisco Live is huge. Almost 30,000 attendees this year. The Mandalay Bay Convention Center was packed to the gills. The World of Solutions took up two entire halls this year. The number of folks coming to the event keeps going up every year. The networking world has turned this show into the biggest thing going on. Just like VMworld, it’s become synonymous with the industry.

People have a desire to learn. They want to know things. They want high quality introductions to content and deep dives into things they want to know inside and out. So long as those sessions are offered at conferences like Cisco Live and Interop people will continue to flock to them. For the shows that assemble content from the community this is an easy proposition. People are going to want to talk where others are willing to listen. For single sourced talks like Cisco Live, it’s very important to identify great speakers like Denise Fishburne (@DeniseFishburne) and Peter Jones (@PeterGJones) and find ways to get them involved. It’s also crucial to listen to feedback from attendees about what did work and what they want to see more of in the coming years.

Keeping The Community Growing


One thing that I’m most proud of is seeing the community grow and grow. I love seeing new faces come in and join the group. This year had people from many different social circles taking part in the Cisco Live community. Reddit’s /r/networking group was there. Kilted Monday happened. Engineering Deathmatches happened. Everywhere you looked, communities were doing great things.

As great as it was to see so many people coming together, it’s just as important to understand that we have to keep the momentum going. Networking doesn’t keep rolling along without new ideas and new people expressing them. Four years ago I could never have guessed the impact that Matt Oswalt (@Mierdin) and Jason Edelman (@JEdelman8) could have had on the networking community. They didn’t start out on top of the world. They fought their way up with new ideas and perspectives. The community adopted what they had to say and ran with it.

We need to keep that going. Not just at Cisco Live either. We need to identify the people doing great things and shine a spotlight on them. Thankfully, my day job affords me an opportunity to do just that. But the whole community needs to be doing it as well. If you can find just one person to tell the world about, it’s a win for all of us. Convince a friend to write a blog post. Make a co-worker join Twitter. In the end every new voice is a chance for us all to learn something.


Tom’s Take

As Denis Leary said in Demolition Man,

I’m no leader. I do what I have to do. Sometimes people come with me.

That’s what Cisco Live is to me. It’s not about a corner or a table or a suite at an event. It’s about people coming together to do things. People talking about work and having a good time. The last five years of Cisco Live have been some of the happiest of my life. More than any other event, I look forward to seeing the community and catching up with old friends. I am thankful to have a job that allows me to go to the event. I’m grateful for a community full of wonderful people that are some of the best and brightest at what they do. For me, Cisco Live is about each of you. The learning and access to Cisco is a huge benefit. But I would go for the people time and time and time again. Thanks for making the fifth year of this community something special to me.

The Complexity Conundrum


Complexity is the enemy of understanding. Think about how much time you spend in your day trying to simplify things. Complexity is the reason why things like Reddit’s Explain Like I’m Five exist. We strive in our daily lives to find ways to simplify the way things are done. Well, except in networking.

Building On Shifting Sands

Networking hasn’t always been a super complex thing. Back when bridges tied together two sections of Ethernet, networking was fairly simple. We’ve spent years trying to make the network do bigger and better things faster with less input. Routing protocols have become more complicated. Network topologies grow and become harder to understand. Protocols do magical things with very little documentation beyond “Pure Freaking Magic”.

Part of this comes from applications. I’ve made my feelings on application development clear. Ivan Pepelnjak had some great comments on this post as well from Steve Chalmers and Derick Winkworth (@CloudToad). I especially like this one:

Derick is right. The application developers have forced us to make networking do more and more faster with less requirement for humans to do the work to meet crazy continuous improvement and continuous development goalposts. Networking, when built properly, is a static object like the electrical grid or a plumbing system. Application developers want it to move and change and breathe with their needs when they need to spin up 10,000 containers for three minutes to run a test or increase bandwidth 100x to support a rollout of a video streaming app or a sticker-based IM program designed to run during a sports championship.

We’ve risen to meet this challenge with what we’ve had to work with. In part, it’s because we don’t like being the scapegoat for every problem in the data center. We tire of sitting next to the storage admins and complaining about the breakneck pace of IT changes. We have embraced software enhancements and tried to find ways to automate, orchestrate, and accelerate. Which is great in theory. But in reality, we’re just covering over the problem.

Abstract Complexity

The solution to our software networking issues seems simple on the surface. Want to automate? Add a layer to abstract away the complexity. Want to build an orchestration system on top of that? Easy to do with another layer of abstraction to tie automation systems together. Want to make it all go faster? Abstract away!

“All problems in computer science can be solved with another layer of indirection.”

This quote comes from David Wheeler, though it is often attributed to Butler Lampson, who popularized it. It’s absolutely true. Developers, engineers, and systems builders keep adding layers of abstraction and indirection on top of complex systems and proclaiming that everything is now easier because it looks simple. But what happens when the abstraction breaks down?

Automobiles are a perfect example of this. Not too many years ago, automobiles were relatively simple things. Sure, internal combustion engines aren’t toys. But most mechanics could disassemble the engine and fix most issues with a wrench and some knowledge. Today’s cars have computers and diagnostic systems, and they require lots and lots of dedicated tools to even diagnose a problem, let alone fix it. We’ve traded simplicity and ease of repairability for the appearance of “simple”, which conceals a huge amount of complexity under the surface.

To refer back to the Wheeler quote, the completion of it is, “Except, of course, for the problem of too many indirections.” Even forty years ago it was understood that too many layers of abstraction would eventually lead to problems. We are quickly reaching that point in networking today. With all the reliance on complex tools providing an overwhelming amount of data about every point of the network, we find ourselves forced to use dashboards and data lakes just to keep up. The rapid pace of change is dictated to the network by systems integrations driven by developer desires, not by sound network systems thinking.

Networking professionals can’t keep up. Just as other systems now must be maintained by algorithms to keep pace, so too does the network find itself being run by software instead of augmented by it. Even if people wanted to make a change they would be unable to do so because validating those changes manually would cause issues or interactions that could create havoc later on.

Simple Solutions

So how do we fix the issues? Can we just scrap it all and start over? Sadly, the answer here is a resounding “no”. We have to keep moving the network forward to match pace with the rest of IT. But we can do our part to cut down on the amount of complexity and abstraction being created in the process. Documentation is as critical as ever. Engineers and architects need to make sure to write down all the changes they make as well as their proposed designs for adding services and creating new features. Developers writing for the network need to document their APIs and their programs liberally so that troubleshooting and extension are easily accomplished instead of just guessing about what something is or isn’t supposed to be doing.

When the time comes to build something new, instead of trying to plaster over it with an abstraction, we need to break things down into their basic components and understand what we’re trying to accomplish. We need to augment existing systems instead of building new ones on top of the old to make things look easy. When we can extend existing ideas or augment them in such a way as to coexist, we can worry less about hiding problems and more about solving them.


Tom’s Take

Abstraction has a place, just like NAT. It’s when things spiral out of control and hide the very problems we’re trying to fix that it becomes an abomination. Rather than piling things on the top of the issue and trying to hide it away until the inevitable day when everything comes crashing down, we should instead do the opposite. Don’t hide it, expose it instead. Understand the complexity and solve the problem with simplicity. Yes, the solution itself may require some hard thinking and some pretty elegant programming. But in the end that means that you will really understand things and solve the complexity conundrum.

The 25GbE Datacenter Pipeline


SDN may have made networking more exciting thanks to making hardware less important than it has been in the past, but that’s not to say that hardware isn’t important at all. The certainty with which new hardware will come out and make things a little bit faster than before is right there with death and taxes. One of the big announcements yesterday from Hewlett Packard Enterprise (HPE) during HPE Discover was support for a new 25GbE / 100GbE switch architecture built around the FlexFabric 5950 and 12900 products. This may be the tipping point for 25GbE.

The Speeds of the Many

I haven’t always been high on 25GbE. Almost two years ago I couldn’t see the point. Things haven’t gotten much different in the last 24 months from a speed perspective. So why the change now? What makes this 25GbE offering any different from the nascent ideas presented by Arista?

First and foremost, the 25GbE released by HPE this week is based on the Broadcom Tomahawk chipset. When 25GbE was first presented, it was a collection of vendors trying to convince you to upgrade to their slightly faster Ethernet. But in the past two years, most of the merchant offerings on the market have coalesced around using Broadcom as the primary chipset. That means that odds are good your favorite switching platform is running Trident 2 or Trident 2+ under the hood.

With Broadcom backing the silicon, that means wider adoption of the specification. Why would anyone buy 25GbE from Brocade or Dell or HPE if the only vendor supporting it was that vendor of choice? If you can’t ever be certain that you’ll have support for the hardware in three or five years time, making an investment today seems silly. Broadcom’s backing means that eventually everyone will be adopting 25GbE.

Likewise, one of my other impediments to adoption was the lack of server NICs to ramp hosts to 25GbE. Having fast access ports means nothing if the servers can’t take advantage of them. HPE addressed this with the release of FlexFabric networking adapters that can run 25GbE Ethernet. More importantly, those adapters (and switches) can run at 10GbE as well. This means that adoption of higher bandwidth is no longer an all-or-nothing proposition. You don’t have to abandon your existing investment to get to 25GbE right away. You don’t have to build a lab pod to test things and then sneak it into production. You can just buy a 5950 today and clock the ports down to 10GbE while you await the availability and purchasing cycle to buy 25GbE NICs. Then you can flip some switches in the next maintenance window and be running at 25GbE speeds. And you can leave some ports enabled at 10GbE to ensure that there is maximum backwards compatibility.

The Needs of the Few

Odds are good that 25GbE isn’t going to be right for you today. HPE is even telling people that 25GbE only really makes sense in a few deployment scenarios, among which are large container-based hosts running thousands of virtual apps, flash storage arrays that use Ethernet as a backplane, or specialized high-performance computing (HPC) tricks with RDMA and such. That means the odds are good that you won’t need 25GbE first thing tomorrow morning.

However, the need for 25GbE is going to be there. As applications grow more bandwidth hungry and data centers keep shrinking in footprint, the network hardware you do have left needs to work harder and faster to accomplish more with less. If the network really is destined to become a faceless underlay that serves as a utility for applications, it needs to run flat out fast to ensure that developers can’t start blaming their utility company for problems. Multi-core server architectures and flash storage have solved two of the three legs of this problem. 25GbE host connectivity and the 100GbE backbone connectivity tied to it, solve the other side of the equation so everything balances properly.

Don’t look at 25GbE as an immediate panacea for your problems. Instead, put it on a timeline with your other server needs and see what the adoption rate looks like going forward. If server NICs are bought in large quantities, that will drive manufacturers to push the technology onto the server boards. If there is enough need for connectivity at these speeds the switch vendors will start larger adoption of Tomahawk chipsets. That cycle will push things forward much faster than the 10GbE / 40GbE marathon that’s been going on for the past six years.


Tom’s Take

I think HPE is taking a big leap with 25GbE. Until the Dell/EMC merger is completed they won’t find themselves in a position to adopt Tomahawk quickly in the Force10 line. That means the need to grab 25GbE server NICs won’t materialize if there’s nothing to connect them. Cisco won’t care either way so long as switches are purchased, and all other networking vendors don’t sell servers. So that leaves HPE to either push this forward or fall off the edge of the cliff. Time will tell how this will all work out, but it would be nice to see HPE get a win here and make the network the least of application developer problems.

Disclaimer

I was a guest of Hewlett Packard Enterprise for HPE Discover 2016. They paid for my travel, hotel, and meals during the event. While I was briefed on the solution discussed here and many others, there was no expectation of coverage of the topics discussed. HPE did not ask for, nor were they guaranteed any consideration in the writing of this article. The conclusions and analysis contained herein are mine and mine alone.

Flash Needs a Highway


Last week at Storage Field Day 10, I got a chance to see Pure Storage and their new FlashBlade product. Storage is an interesting creature, especially now that we’ve got flash memory technology changing the way we think about high performance. Flash transformed the industry from slow spinning gyroscopes of rust into a flat out drag race to see who could provide enough input/output operations per second (IOPS) to get to the moon and back.

Take a look at this video about the hardware architecture behind FlashBlade:

It’s pretty impressive. Very fast flash storage on blades that can outrun just about anything on the market. But this post isn’t really about storage. It’s about transport.

Life Is A Network Highway

Look at the backplane of the FlashBlade chassis. It’s not something custom or even typical for a unit like that. The key is when the presenter says that the architecture of the unit is more like a blade server chassis than a traditional SAN. In essence, Pure has taken the concept of a midplane and used it very effectively here. But their choice of midplane is interesting in this case.

Pure is using the Broadcom Trident II switch as their networking midplane for FlashBlade. That’s pretty telling from a hardware perspective. Trident II powers a large majority of the merchant silicon based switches in the market today. Broadcom is essentially becoming the Intel of the switch market, supplying arms to everyone that wants to build something quickly at low cost without doing custom silicon research and development of their own.

Using a Trident II in the backplane of the FlashBlade means that Pure evaluated all the alternatives and found that putting something merchant-based in the midplane is cost effective and provides the performance profile necessary to meet the needs of flash storage. Saturating backplanes with IOPS can be accomplished. But as we learned from Coho Data, it takes a lot of CPU horsepower to create a flash storage system that can saturate 10Gig Ethernet links.
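To put that saturation point in perspective, here’s a quick back-of-the-envelope sketch of how many IOPS it takes to fill a 10Gig Ethernet link. The 4 KB I/O size is a common benchmark assumption, not a FlashBlade specification, and the math ignores Ethernet/IP/TCP framing overhead, which would shave off a bit of usable throughput:

```python
# Rough estimate: IOPS needed to saturate a single 10GbE link.
# Assumes 4 KB I/Os and ignores protocol overhead.

LINK_GBPS = 10
IO_SIZE_BYTES = 4 * 1024  # 4 KB, a common flash benchmark block size

link_bytes_per_sec = LINK_GBPS * 1e9 / 8   # 1.25 GB/s of raw line rate
iops_to_saturate = link_bytes_per_sec / IO_SIZE_BYTES

print(f"{iops_to_saturate:,.0f} IOPS")     # roughly 305,000 IOPS at 4 KB
```

Roughly 300,000 4 KB operations per second per 10Gig link is a lot of work for a CPU to generate, which is why an all-flash system that can keep multiple links busy is notable.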

I Am Speed

Using Trident II as a midplane or backplane for devices like this has huge implications. First and foremost, networking technology has a proven track record. If Trident II wasn’t a stable and reliable platform, no one would have used it in their products. And given that almost everyone in the networking space has a Trident platform for sale, it speaks volumes about reliability.

Second, Trident II is available. Broadcom is throwing these units off the assembly line as fast as they can. That means that there’s no worry about silicon shortages or plant shutdowns or any one of a number of things that can affect custom manufacturing. Even if a company wants to look at a custom fabrication, it could take months or even years to bring things online. By going with a reference design like Trident II, you can have your software engineers doing the hard work of building a system to support your hardware. That speeds time to market.

Third, Trident is a modular platform. That part can’t be overstated, even though I think it wasn’t called out very much in the presentation from Pure. By having a midplane that is built as a removable module, it’s very easy to replace it should problems arise. That’s the point of field replaceable units (FRUs). But in today’s market, it’s just as easy to create a system that can run multiple different platforms as well. The blade chassis idea extends equally to both blades and mid or backplanes.

Imagine being able to order a Tomahawk-based controller unit for FlashBlade that only requires you to swap the units at the back of the system. Now, that investment in 10Gig blade connectivity with 40Gig uplinks just became 25Gig blade connectivity with 100Gig uplinks to the network. All for the cost of two network controller blades. There may be some software that needs to be written to make the transition smooth for the consumers in the system, but the hardware is more than capable of supporting a change like that.
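The arithmetic of that hypothetical swap is worth sketching out. The blade and uplink counts below are illustrative assumptions for the sake of the comparison, not published FlashBlade specifications:

```python
# Hypothetical before/after bandwidth for swapping a Trident II-based
# controller (10G blade ports / 40G uplinks) for a Tomahawk-based one
# (25G blade ports / 100G uplinks). Port counts are made-up examples.

def aggregate_gbps(blade_speed, blade_count, uplink_speed, uplink_count):
    """Total blade-facing and network-facing bandwidth in Gbps."""
    return {
        "blade_total": blade_speed * blade_count,
        "uplink_total": uplink_speed * uplink_count,
    }

trident2 = aggregate_gbps(10, 15, 40, 4)    # {'blade_total': 150, 'uplink_total': 160}
tomahawk = aggregate_gbps(25, 15, 100, 4)   # {'blade_total': 375, 'uplink_total': 400}

speedup = tomahawk["blade_total"] / trident2["blade_total"]
print(f"{speedup:.1f}x blade bandwidth from a controller swap")  # 2.5x
```

A 2.5x bandwidth jump without touching the blades themselves is exactly the kind of upgrade path that a modular, merchant-silicon midplane makes plausible.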


Tom’s Take

I was thrilled to see Pure Storage building a storage platform that tightly integrates with networking the way that FlashBlade does. This is how the networking stack is going to be completely integrated with storage and compute. We should still look at things through the lens of APIs and programmability, but making networking a consistent transport layer for all things in the datacenter is a good start.

The funny thing about making something a consistent transport layer is that by design it has to be extensible. That means more and more folks are going to be driving those pieces into the network. Software can be created on top of this common transport to differentiate, much like we’re seeing with network operating systems right now. Even Pure was able to create a different kind of transport protocol to do the heavy lifting at low latency.

It’s funny that it took a presentation from a storage company to make me see the value of the network as something agnostic. Perhaps I just needed some perspective from the other side of the fence.