Cisco Data Center – Network Field Day 3


Day two of Network Field Day 3 brought us to Tasman Drive in San Jose – the home of a little networking company named Cisco.  I don’t know if you’ve heard of them or not, but they make a couple of things I use regularly.  We had a double session of four hours at the Cisco Cloud Innovation Center (CCIC) covering a lot of different topics.  For the sake of clarity, I’m going to split my coverage into two posts along product lines.  The first will deal with the Cisco Data Center team and their work on emerging standards.

Han Yang, Nexus 1000v Product Manager, started us off with a discussion centered around VXLAN.  VXLAN is an emerging solution to “the problem” (drawing by Tony Bourke):

The Problem - courtesy of Tony Bourke

The specific issue we’re addressing with VXLAN is “lots of VLANs”.  As it turns out, when you try to create multitenant clouds for large customers, you tend to run out of VLANs pretty quickly.  It seems 4,096 VLANs ranks right up there with 640K of conventional memory on the “Seemed Like A Good Idea At The Time” scale of computer miscalculations.  VXLAN seeks to remedy this issue by wrapping the original frame in a VXLAN header that carries a 24-bit segment identifier (the VXLAN Network Identifier, or VNI), with the whole thing riding inside an outer UDP/IP packet that can carry its own 802.1q tag.
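To make the encapsulation concrete, here’s a minimal sketch – mine, not anything from the session – of packing the 8-byte VXLAN header described in the draft around an original Ethernet frame:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap an original Ethernet frame in the 8-byte VXLAN header:
    flags (8 bits), reserved (24 bits), VNI (24 bits), reserved (8 bits).
    The vSwitch then carries this inside an outer UDP/IP/Ethernet packet."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!II", VXLAN_FLAGS << 24, vni << 8)
    return header + inner_frame

# 2^24 segment IDs versus 4,096 VLANs:
print(2 ** 24)  # 16777216
```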

VXLAN allows the frame to be encapsulated by the vSwitch (in this case a Nexus 1000v) and tunneled over the network to the proper destination, where the VXLAN header is stripped off to reveal the original frame underneath.  The guest machines aren’t aware of VXLAN at all; it merely serves as an overlay.  VXLAN does require multicast to be enabled in your network, but for your PIM troubles you get roughly 16 million (2^24) subdivisions of your network structure.  That means you shouldn’t run out of VLANs any time soon.
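The multicast requirement exists because broadcast, unknown-unicast, and multicast (BUM) traffic for a segment is flooded to an IP multicast group associated with that segment’s VNI.  A toy sketch of the mapping, with group addresses invented purely for illustration:

```python
# Hypothetical VNI-to-multicast-group table; the addresses are
# invented for illustration, not taken from Cisco's presentation.
vni_to_group = {
    5000: "239.1.1.1",
    5001: "239.1.1.2",
}

def flood_destination(vni: int) -> str:
    """BUM traffic for a segment goes to its multicast group, so only
    endpoints that joined the group for that VNI ever see the flood."""
    return vni_to_group[vni]

print(flood_destination(5000))  # 239.1.1.1
```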

Han gave us a great overview of VXLAN and how it’s going to be used more extensively in the data center in the coming months as we scale out and break through the VLAN limit in large clouds.  Here’s hoping that VXLAN really takes off and becomes the de facto standard over NVGRE.  Because I still haven’t forgiven Microsoft for Teredo.  I’m not about to give them a chance to screw up the cloud too.

Up next was Victor Moreno, a technical lead in the Data Center Business Unit.  Victor has been a guest on Packet Pushers before, on show 54, talking about the Locator/ID Separation Protocol (LISP).  Victor talked to us about LISP as well as the difficulties in creating large-scale data centers.  One key point of Victor’s talk was about moving servers (or workloads, as he put it).  Victor pointed out that extending layer 2 machinery like STP and VTP across sites is totally unnecessary; the most important part of the move is preservation of IP reachability.  In the video above, this elicited some applause from the delegates, because it’s nice to see that people are starting to realize that extending the layer 2 domain everywhere might not be the best way to do things.
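The locator/ID split is what makes that preservation possible: a workload keeps its endpoint identifier (EID – its IP address) while only the routing locator (RLOC) changes when it moves.  A rough sketch of the idea, with the addresses invented for illustration:

```python
# Hypothetical EID-to-RLOC mapping table illustrating the LISP idea:
# the workload's IP address (EID) never changes; only its locator does.
mapping_table = {
    "10.1.1.50": "192.0.2.1",  # EID -> RLOC of the DC it currently lives in
}

def move_workload(eid: str, new_rloc: str) -> None:
    """Moving a workload between data centers just updates its RLOC.
    IP reachability to the EID is preserved without stretching layer 2."""
    mapping_table[eid] = new_rloc

move_workload("10.1.1.50", "198.51.100.7")  # migrated to another DC
print(mapping_table["10.1.1.50"])  # 198.51.100.7 -- same EID, new locator
```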

Another key point that I took from Victor was about VXLAN headers, LISP headers, and even Overlay Transport Virtualization (OTV) headers.  It seems they all carry the same 24-bit ID field in the wrapper.  Considering that Cisco is championing OTV and LISP and was an author on the VXLAN draft, this isn’t all that astonishing.  What really caught me was Victor’s proposal to use LISP to implement many of the features in VXLAN so that the two protocols could be highly interoperable.  This also eliminates the need to continually reinvent the wheel every time a new protocol is needed for VM mobility or long-distance workload migration.  Pay close attention to the slide at about 22:50 in the video above.  Victor’s Inter-DC and Intra-DC slide, detailing which protocol works best in a given scenario at a specific layer, is something that needs to be committed to memory by anyone who wants to be involved in data center networking any time in the next few years.
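As a rough illustration of that shared field: in both the VXLAN header and the LISP data header, the 24-bit identifier sits in the high-order 24 bits of the second 32-bit word (LISP carries an Instance ID there only when its I-bit is set), so extracting it looks almost identical.  A minimal sketch based on the public drafts, not on anything shown in the session:

```python
import struct

def extract_24bit_id(header: bytes) -> int:
    """Pull the 24-bit ID from the second 32-bit word of the header.
    In VXLAN that word holds the VNI; in the LISP data header it holds
    the Instance ID (when the I-bit in the first word is set)."""
    (second_word,) = struct.unpack_from("!I", header, 4)
    return second_word >> 8  # the ID occupies the high-order 24 bits

vxlan_header = struct.pack("!II", 0x08000000, 5000 << 8)
print(extract_24bit_id(vxlan_header))  # 5000
```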

If you’d like to learn more about Cisco’s data center offerings, you can head over to the data center page on Cisco’s website at http://www.cisco.com/en/US/netsol/ns340/ns394/ns224/index.html.  You can also get data center specific information on Twitter by following the Cisco Data Center account, @CiscoDC.

Tom’s Take

I’m happy that Cisco was able to present on a lot of the software and protocols that are going into building the new generation of data center networking.  I keep hearing things like VXLAN, OTV, and LISP being thrown around when discussing how we’re going to address many of the challenges presented to us by the hypervisor crowd.  Cisco seems to be making strides in not only solving these issues but putting the technology at the forefront so that everyone can benefit from it.  That’s not to say that their solutions are going to end up being the de facto standard.  Instead, we can use the collective wisdom behind things like VXLAN to help us drive toward acceptable methods of powering data center networks for tomorrow.  I may not have spent a lot of my time in the data center during my formal networking days, but I have a funny feeling I’m going to be there a lot more in the coming months.

Tech Field Day Disclaimer

Cisco Data Center was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3.  In addition, they provided me a USB drive containing marketing collateral and copies of the presentation, as well as a pirate eyepatch and fake pirate pistol (long story).  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.
