If you work in data center networking today, you are probably being bombarded from all sides by vendors pitching their new fabric solutions. Every major player from Cisco to Juniper to Brocade has some offering that promises to flatten your data center network and push a collapsed control plane all the way to the edges of your network. However, the more I look at it, the more it appears to me that we're getting a new spin on an old issue.
Chassis switches are a common sight in high-density network deployments. They contain multiple interfaces bundled into line cards that are all interconnected via a hardware backplane (or fabric). There are usually one or more intelligent pieces running a control plane and making higher-level decisions (usually called a director or a supervisor). This is the basic switch architecture that has been driving networking for a long time now.
A while back, Denton Gentry wrote a very interesting post about why vendors support chassis-based networking the way they do. By putting an enclosure in your networking racks whose slots can only be populated with hardware purchased from the vendor that sold you the enclosure, they can continue to count on you as a customer until you grow tired enough to rip the whole thing out and start all over again with Vendor B. Innovation does come, and it allows you to upgrade your existing infrastructure over and over again with new line cards and director hardware. However, you can't just hop over to Vendor C's website, buy a new module, and plug it into your Vendor A chassis. That's what we call "lock-in." Not surprisingly, this idea soon found its way into the halls of IBM, HP, and Sun to live on as the blade server enclosure: same principle, only revolving around the hardware that plugs into your network rather than the network itself. Chassis-based networking and server hardware make a fortune for vendors every year thanks to repeat business. Hold that thought; we'll be back to it in just a minute.
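To make that anatomy concrete, here's a minimal Python sketch of the chassis model described above. The class names, slot counts, and vendor check are purely illustrative assumptions; the check just stands in for the physical reality that only same-vendor cards fit the enclosure.

```python
# Minimal sketch of a chassis switch; all names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class LineCard:
    slot: int
    ports: int        # front-panel interfaces on this card
    vendor: str       # must match the chassis vendor -- the lock-in

@dataclass
class ChassisSwitch:
    vendor: str
    supervisor: str                       # control plane / "director"
    backplane_gbps: int                   # shared fabric capacity
    line_cards: list = field(default_factory=list)

    def insert(self, card: LineCard) -> None:
        # The enclosure only accepts cards from its own vendor.
        if card.vendor != self.vendor:
            raise ValueError(f"A {card.vendor} card won't seat in a {self.vendor} chassis")
        self.line_cards.append(card)

chassis = ChassisSwitch(vendor="VendorA", supervisor="director-1", backplane_gbps=2000)
chassis.insert(LineCard(slot=1, ports=48, vendor="VendorA"))    # fine
# chassis.insert(LineCard(slot=2, ports=48, vendor="VendorC"))  # would raise ValueError
```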
Now, every vendor is telling you that data center networking is growing bigger and faster every day. Your old-fashioned equipment is dragging you down, and if you want to support new protocols like TRILL and 40Gig/100Gig Ethernet, you're going to have to upgrade. Rest assured though, because we will interoperate with the other vendors out there to keep you from spending tons of money ripping out your old network and replacing it with ours. We aren't proprietary. Once you get our solution up and running, everything will be wine and roses. Promise. I may be overselling the rosy side here, but the general message is that interoperability is king in the new fabric solutions. No matter what you've got in your network right now, we'll work with it.
Now, if you're a customer looking at this, I've got a couple of questions for you to ask. First, which port do I plug my Catalyst 4507 into on the QFabric Interconnect? What is the command to bring up an IRF instance on my QFX3500? Where should I put my HP 12500 in my FabricPath deployment? Odds are good you're going to be met with looks of shock and incredulity. It turns out interoperability in a fabric deployment doesn't work quite like that.
I'm going to single out Juniper and their QFabric solution here, not because I dislike them, but because their solution most resembles something we're already familiar with: the chassis switch. The QFX3500 QFabric end node switch is most like a line card, where your devices plug in. These are connected to QFX3008 QFabric Interconnect switches that provide a backplane (or fabric) to ensure packets are forwarded at high speed to their destinations. There is also a supervisor in the deployment providing the control plane and higher-level functions, in this case referred to as the QF/Director. Sound familiar? It should. QFabric (and FabricPath and the others) looks just like an exploded chassis switch. Rather than being constrained to a single enclosure, the wizards at these vendors have pulled all the pieces out and spread them across the data center into multiple racks.
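Stated as a crude mapping, the "exploded chassis" looks something like this. The pairings are my reading of the analogy, not a vendor equivalence chart.

```python
# The "exploded chassis" analogy as a simple lookup -- my interpretation of
# the roles described above, not any official documentation.
exploded_chassis = {
    "line card":             "QFX3500 end node at the top of the rack",
    "backplane / fabric":    "QFX3008 QFabric Interconnect",
    "supervisor":            "QF/Director (control plane, higher-level functions)",
    "sheet-metal enclosure": "the data center itself",
}

for chassis_role, qfabric_piece in exploded_chassis.items():
    print(f"{chassis_role:<22} -> {qfabric_piece}")
```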
Juniper must get asked about QFabric and whether or not it’s proprietary a lot, because Abner Germanow wrote an article entitled “Is QFabric Proprietary?” where he says this:
Fact: A QFabric switch is no more or less proprietary than any Ethernet chassis switch on the market today.
He's right, of course. QFabric looks just like a really big chassis switch and behaves like one. And, just as Denton's blog post above describes, it's going to be sold like one.
Now, instead of having a chassis welded to one rack in your data center, I can conceivably have one welded to every rack in your data center. By putting a QFX3500 or Nexus 5000 switch at the top of every rack and connecting it to QFabric or FabricPath, I provide high-speed connectivity over a stretched-out backplane that can run to every rack you have. Think of it like the interstate highway system in the US: high-speed roads that allow you to travel between major destinations quickly. So long as you are going somewhere that is connected via interstate, it's a quick and easy trip.
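As a rough sketch of that stretched-out backplane, every rack's fabric-attached top-of-rack switch gets wired to every interconnect. The rack and interconnect counts here are made up for illustration, not a real design.

```python
# Hypothetical "stretched backplane": one fabric end node per rack, each
# wired to every interconnect. Counts are assumptions only.
RACKS = 20
INTERCONNECTS = 2

links = [
    (f"rack{r:02d}-tor", f"interconnect-{i}")
    for r in range(1, RACKS + 1)
    for i in range(1, INTERCONNECTS + 1)
]

print(f"{RACKS} racks x {INTERCONNECTS} interconnects = {len(links)} fabric links")
# Any rack reaches any other rack in a single hop across the "backplane",
# the same way two line cards talk across a chassis fabric.
```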
What about interoperability? It's still there. You just have to make a concession or two. QFabric end nodes connect to the QF/Interconnects via 40Gbps connections. They aren't Ethernet, but they push packets all the same. Since they aren't standard Ethernet, you can only plug in devices that speak QFabric (right now, that's the QFX3500). If you want to interconnect with a Nexus FabricPath deployment or a Brocade VCS cluster, you're going to have to step down to slower, standardized connectivity such as 10Gbps Ethernet. Even if you bundle those links into port channels, you're going to take a performance hit for switching traffic off of your fabric. That's like exiting the interstate system and taking a two-lane highway. You're still going to get to your destination; it's just going to take a little longer. And if there's a lot of traffic on that two-lane road, be prepared to wait.
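To put rough numbers on that exit-ramp penalty, here's a back-of-the-envelope comparison. The link counts are assumptions for illustration, not datasheet figures for any of these products.

```python
# Back-of-the-envelope look at stepping off the fabric. All counts assumed.
FABRIC_UPLINKS = 4          # 40 Gbps fabric links from an end node (assumed)
FABRIC_LINK_GBPS = 40
HANDOFF_LINKS = 4           # 10 GbE members in the port channel (assumed)
HANDOFF_LINK_GBPS = 10

on_fabric = FABRIC_UPLINKS * FABRIC_LINK_GBPS      # 160 Gbps toward the interconnects
off_fabric = HANDOFF_LINKS * HANDOFF_LINK_GBPS     # 40 Gbps toward the neighboring island

print(f"Staying on the fabric:  {on_fabric} Gbps")
print(f"Crossing to the island: {off_fabric} Gbps ({on_fabric // off_fabric}:1 step-down)")
# A port channel also hashes each flow onto a single member link, so any one
# conversation is further capped at 10 Gbps no matter how many members exist.
```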
Interoperability only exists insofar as it provides a bridge to your existing equipment. In effect, you are creating islands of vendor solutions in the Ocean of Interoperability. Once you install VCS, FabricPath, or QFabric and see how effectively you can move traffic between two points, you're going to start wanting to put more of it in. When you go to turn up that new rack or deployment, you'll buy the fabric solution before looking at other alternatives, since you already have all the pieces in place. Pretty soon, you're going to start removing the old vendor's equipment and putting in the new fabric hotness. Working well with others only comes up when you mention that you've already got something in place. If this were a greenfield data center deployment, vendors would be falling all over you to put their solution in place tomorrow.
Tom’s Take
Again, I'm not specifically picking on Juniper in this post. Every vendor is guilty of playing the "interoperability" game (yes, the quotes are important). Abner's post just got my wheels spinning about the whole thing. He's right, though. QFabric is no more proprietary than a Catalyst 6500 or any other chassis switch. It all depends greatly on your point of view. Being proprietary isn't a bad thing. Using your own technology allows you to make things work the way you want without worrying about other extraneous pieces or parts. The key is making sure everyone knows which pieces only work with your stuff and which pieces work with other people's stuff.
Until a technology like OpenFlow comes fully into its own and provides a standardized method for creating these large fabrics that can interconnect everything from a single rack to a whole building, we're going to be using this generation of QFabric, FabricPath, and VCS. The key is making sure to do the research and keep an eye out for the trees so you know when you've wandered into the forest.
I’d like to thank Denton Gentry and Abner Germanow for giving me the ideas for this post, as well as Ivan Pepelnjak for his great QFabric dissection that helped me sort out some technical details.
One thing that stood out to me in Abner's post was his comment, "QFabrics are switches, not networks. The distinction is important." And he is absolutely right. You can hook up the QFabric "switch" to another device, but that's not the end game. The end game, and the difference between a network and a switch, is that Juniper (and other vendors) would ideally have you replace your entire disparate network with their single "switch." Yeah, it's still only a switch, but when the "switch" replaces my entire network, I am locked in to a vendor.
I like the idea of the "fabric when you need it" mentality. Unfortunately, Juniper has really targeted a niche market with QFabric. There are not a lot of companies that need this much infrastructure.
I'm from a medium-sized business (<$750M revenue) and would never need this kind of performance and flexibility. Even though Juniper does not need to sell many to make a profit (the margins are probably through the roof), it places the QFabric technology a little out of reach for the majority of enterprises out there. For many, a fixed chassis is usually enough.
Thoughts?
Great points from Robert and Aaron.
Robert has the key point. Replacing your existing "network" with a family or model of switch is the lock-in that each of these DC fabric vendors wants.
While interoperability sounds like a nice feature to advertise with regard to their products, IMO vendors want nothing to do with it. Interop just means that you can extend your existing infrastructure in the future with their competitors' products. Why would they want that?
Aaron, I work in the EDU space. While we're nowhere near a $750M business, we have two DCs kitted out with millions of dollars' worth of Nexus family gear. The point is, modern computing means it's horses for courses. Edus with massive compute power, investment banks and financials that do real-time trading, and many others need key features (other than massive bandwidth) that are offered in these DC fabric product lines.