New Wrinkles in the Fabric – Cisco Nexus Updates


There’s no denying that The Cloud is an omnipresent fixture in our modern technological lives.  If we aren’t already talking about moving things there, we’re wondering why it’s crashed.  I don’t have any answers about these kinds of things, but thankfully the people at Cisco have been trying to find them.  They let me join in on a briefing about the announcements made today regarding some new additions to their data center switching portfolio, more commonly known by the Nexus moniker.

Nexus 6000

The first of the announcements is around a new switch family, the Nexus 6000.  The 6000 is more akin to the 5000 series than the 7000, containing a set of fixed-configuration switches with some modularity.  The Nexus 6001 is the true fixed-config member of the lot.  It’s a 1U 48-port 10GbE switch with 4 40GbE uplinks.  If that’s not enough to get your engines revving, you can look at the bigger brother, the Nexus 6004.  This bad boy is a 4U switch with a fixed config of 48 40GbE ports and 4 expansion modules that can double the total count to 96 40GbE ports.  That’s a lot of packets flying across the wire.  According to Cisco, those packets can fly with 1 microsecond of port-to-port latency.  The Nexus 6000 is also a Fibre Channel over Ethernet (FCoE) switch, as all Nexus switches are.  This one is a 40GbE-capable FCoE switch.  However, as there are no 40GbE FCoE targets available right now, it’s going to be on an island until those get developed.  A bit of future proofing, if you will.  The Nexus 6000 also supports FabricPath, Cisco’s TRILL-based fabric technology, along with a large number of multicast entries in the forwarding table.  This is no doubt to support VXLAN and OTV in the immediate future for layer 2 data center interconnect.

The Nexus line also gets a few little added extras.  There is going to be a new FEX, the 2248PQ, which features 10GbE downlink ports and 40GbE uplink ports.  There’s also going to be a 40GbE expansion module for the 5500 soon, so your DC backbone should be able to run at 40GbE with a little investment.  Also of interest is the new service module for the Nexus 7000.  That’s right, a real service module.  The NAM-NX1 is a Network Analysis Module (NAM) for the Nexus line of switches.  This will allow spanned traffic to be pumped through for analysis of traffic composition and characteristics without taking a huge hit to performance.  We’ve all known that the 7000 was going to be getting service modules for a while.  This is the first of many to roll off the line.  In keeping with Cisco’s new software strategy, the NAM also has a virtual cousin, not surprisingly named the vNAM.  This version lives entirely in software and is designed to serve the same function as its hardware cousin, only in the land of virtual network switches.  Now that the Nexus line has service modules, it kind of makes you wonder what the Catalyst 6500 has all to itself.  We know that the Cat6k is going to be supported in the near term, but is it going to be used as a campus aggregation or core switch?  Maybe as a service module platform until the SMs can be ported to the Nexus?  Or maybe, with the announcement of FabricPath support for the Cat6k, this venerable switch will serve as a campus/DC demarcation point?  At this point the future of Cisco’s franchise switch is really anyone’s guess.

Nexus 1000v InterCloud

The next major announcement from Cisco is the Nexus 1000v InterCloud.  This is very similar to what VMware is doing with their stretched data center concept in vSphere 5.1.  The 1000v InterCloud (1kvIC) builds a secure layer 2 GRE tunnel between your private cloud and a provider’s public cloud.  You can then use this tunnel to migrate workloads back and forth between public and private server space.  This opens up a whole new area of interesting possibilities, not the least of which is the Cloud Services Router (CSR).  When I first heard about the CSR last year at Cisco Live, I thought it was a neat idea but had some shortcomings.  The need for it to sit somewhere it could see all your traffic was the most worrisome.  Now, with the 1kvIC, you can build a tunnel between yourself and a provider and use the CSR to route traffic to the most efficient or cost effective location.  It’s also a very compelling argument for disaster recovery and business continuity applications.  If you’ve got a category 4 hurricane bearing down on your data center, the ability to flip a switch and cold migrate all your workloads to a safe, secure vault across the country is a big sigh of relief.
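If you’re curious what layer 2-over-GRE actually looks like on the wire, here’s a rough sketch using scapy.  To be clear, this is not Cisco’s InterCloud implementation (which layers its own security on top), and every address below is made up; it just shows an entire Ethernet frame riding as the payload of a GRE tunnel between two endpoints.

```python
# Rough illustration of plain layer 2-over-GRE encapsulation with scapy
# (pip install scapy).  NOT the 1kvIC implementation -- just the general
# idea of stretching L2 between a private and a public cloud endpoint.
from scapy.all import Ether, IP, GRE, Raw

# Inner frame: the VM-to-VM traffic being stretched across the tunnel.
inner = (Ether(src="00:50:56:aa:bb:01", dst="00:50:56:aa:bb:02")
         / IP(src="10.10.10.11", dst="10.10.10.12")
         / Raw(b"hello from the private cloud"))

# Outer headers: the tunnel endpoints in each cloud.  GRE protocol type
# 0x6558 is Transparent Ethernet Bridging, i.e. a full L2 frame payload.
outer = IP(src="192.0.2.1", dst="198.51.100.1") / GRE(proto=0x6558)

tunneled = Ether() / outer / inner
tunneled.show()       # dump the full header stack: Ether / IP / GRE / Ether / IP
print(len(tunneled))  # note the encapsulation overhead that gets added
```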

The 1kvIC also has its own management console, the vNMC.  Yes, I know there’s already a vNMC available from Cisco.  The 1kvIC version is a bit special though.  It not only gives you control over your side of the interconnect, it also integrates with the provider’s management console.  This gives you much more visibility into what’s going on inside the provider instances, beyond what we already have from simple dashboards or status screens on public web pages.  This is a great help when you think about the kinds of things you would be doing with intercloud mobility.  You don’t want to send your workloads to the provider if an engineer has started an upgrade on their core switches on a Friday night.  When it comes to the cloud, visibility is viability.

CiscoONE

In case you haven’t heard, Cisco wants to become a software company.  Not a bad idea when hardware is becoming a commodity and software is the home of the high margins.  Most of the development that Cisco has been doing on the software front comes from the Open Network Environment (ONE) initiative.  In today’s announcement, CiscoONE will now be the home for an OpenFlow controller.  In this first release, Cisco will be supporting OpenFlow and their own OnePK API extensions on the southbound side.  On the northbound side of things, the CiscoONE Controller will expose REST and Java hooks to allow interaction with flows passing through the controller.  While that’s all well and good for most of the enterprise devs out there, I know a lot of homegrown network admins who hack together their own scripts in Perl and Python.  For those of you who want support for your particular flavor of language built into CiscoONE, I highly recommend heading to their website and telling them what you want.  They are looking at adding additional hooks as time goes on, so you can get in on the ground floor now.
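To give you a feel for what scripting against a northbound REST interface looks like, here’s a minimal Python sketch.  Cisco hasn’t published the controller’s API details as part of this announcement, so the hostname, resource paths, credentials, and field names below are purely hypothetical; the point is the pattern of pulling flow state and pushing new flow entries over REST.

```python
# Hypothetical sketch of driving a northbound REST API like the one the
# CiscoONE Controller is said to expose.  Endpoint paths, credentials,
# and JSON fields are invented for illustration -- check the actual
# controller documentation for the real resource names and auth scheme.
import requests

CONTROLLER = "https://onecontroller.example.com:8443"  # hypothetical host
AUTH = ("admin", "admin")                              # placeholder credentials


def list_flows(switch_id):
    """Fetch the flow entries the controller has programmed on one switch."""
    url = f"{CONTROLLER}/api/flows/{switch_id}"        # hypothetical path
    # verify=False only because a lab controller likely has a self-signed cert
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()


def push_flow(switch_id, match, actions, priority=100):
    """Install a simple flow: match on header fields, apply a list of actions."""
    url = f"{CONTROLLER}/api/flows/{switch_id}"        # hypothetical path
    flow = {"priority": priority, "match": match, "actions": actions}
    resp = requests.post(url, json=flow, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.status_code


if __name__ == "__main__":
    # Example: steer traffic destined for 10.1.1.0/24 out port 48, then
    # read back everything the controller has programmed on that switch.
    print(push_flow("switch-01",
                    match={"ipv4_dst": "10.1.1.0/24"},
                    actions=[{"output": 48}]))
    for entry in list_flows("switch-01"):
        print(entry)
```

The same pattern would work from Perl or anything else that can speak HTTP, which is exactly why a REST hook is the friendliest option for the script-it-yourself crowd.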

Cisco is also announcing OnePK support for the ISR G2 router platform and the ASR 1000 platform.  There will be OpenFlow support on the Nexus 3000 sometime in the near future, along with support in the Nexus 1000v for Microsoft Hyper-V and KVM.  And somewhere down the line, Cisco will have a VXLAN gateway for all the magical unicorn packet goodness across data centers stretched via non-1kvIC links.


Tom’s Take

The data center is where the dollars are right now.  I’ve heard people complain that Cisco is leaving the enterprise campus behind as they charge forward into the raised floor garden of the data center.  But the data center folks are the people driving the data that produces the profits that buy more equipment.  Whether it be massive Hadoop clusters or massive private cloud projects, the accounting department has given the DC team a blank checkbook today.  Cisco is doing its best to drive some of those dollars their way by providing new and improved offerings like the Nexus 6000.  For those that don’t have a huge investment in the Nexus 7000, the 6000 makes a lot of sense as either a high speed core aggregation switch or an end-of-row solution for a herd of FEXen.  The Nexus 1000v InterCloud is competing against VMware’s stretched data center concept in much the same way that the 1000v itself competes against the standard VMware vSwitch.  With Nicira in the driver’s seat of VMware’s networking from here on out, I wouldn’t be shocked to see more Cisco solutions that mirror or augment VMware offerings as a way to show VMware that Cisco can come up with alternatives just as well as anyone else.

3 thoughts on “New Wrinkles in the Fabric – Cisco Nexus Updates”

  1. One of the features of the 6004 that really made me sit up and notice was that each of those 40Gb ports can be split into 4x10Gb using a QSFP adapter and breakout cable. So you can use the 6004 as a beefy aggregation switch or as a 384-port 10Gb switch…in a 4U chassis, or a combination of both.

    As a side note, do the 3Ks support FCoE? I thought they were minimal-service, low-latency switches only. More datasheet reading for me…

    These are certainly interesting times we live in.

  2. Thanks a lot for the info. It’s so hard to keep up with Cisco’s product line. I happened to notice from various outlets the 3850 and the 6000 but saw nothing on the 40GbE FEX. I’m always glad bloggers like yourself are willing to share.

    You also make a good point on the campus. If I were to build a large greenfield campus now, I’m curious to know what core switches would eventually come out of the design.
