Is ACI Coming For The CLI?

I’m soon to depart from Cisco Live Barcelona. It’s been a long week of fun presentations. While I’m going to avoid using the words intent and context in this post, there is one thing I saw repeatedly that grabbed my attention. ACI is eating Cisco’s world. And it’s coming for something else very soon.

Devourer Of Interfaces

Application-Centric Infrastructure has been out for a while and it’s meeting with relative success in the data center. It’s going up against VMware NSX and winning in a fair number of deals. For every person I talk to that can’t stand it, I hear from someone else gushing about it. ACI is making headway as the tip of the spear when it comes to Cisco’s software-based networking architecture.

Don’t believe me? Check out some of the sessions from Cisco Live this year. Especially the Software-Defined Access and DNA Assurance ones. You’re going to hear context and intent a lot, as those are the key words for this new strategy. You know what else you’re going to hear a lot?

Contract. Endpoint Group (EPG). Policy.

If you’re familiar with ACI, you know what those words mean. You see the parallels between the data center and the push in the campus to embrace SD-Access. If you know how to create a contract for an EPG in ACI, then doing it in DNA Center is just as easy.

If you’ve never learned ACI before, you can dive right in with the new DNA Center training and get started. And once you finally figure out what you’re doing, you can not only use those skills to program your campus LAN, you can also extend them into the data center network thanks to the consistent terminology.
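To make the parallel concrete, here’s roughly what creating those objects looks like against the APIC REST API from Python. This is a trimmed-down sketch with made-up tenant, EPG, and contract names and placeholder credentials, not a production script, but it shows that the vocabulary you learn is the same vocabulary the controller speaks.

```python
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()

# Authenticate to the APIC (credentials are placeholders)
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# One tenant containing a contract plus an app profile with an EPG that provides it
payload = {
    "fvTenant": {
        "attributes": {"name": "Campus-Demo"},
        "children": [
            {"vzBrCP": {                      # the contract
                "attributes": {"name": "web-to-app"},
                "children": [{"vzSubj": {"attributes": {"name": "http"}}}]
            }},
            {"fvAp": {                        # application profile
                "attributes": {"name": "Payroll"},
                "children": [{
                    "fvAEPg": {               # the endpoint group
                        "attributes": {"name": "Web"},
                        "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]
                    }
                }]
            }}
        ]
    }
}

resp = session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False)
print(resp.status_code)
```

The JSON itself isn’t the point. The point is that tenant, EPG, and contract are the nouns, and DNA Center asks you to think in exactly the same terms.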

It’s almost like Cisco is trying to introduce a standard set of terms that can be used to describe consistent behaviors across groups of devices for the purpose of cross training engineers. Now, where have we seen that before?

Bye Bye, CLI

Oh yeah. And, while you’re at it, don’t forget that Arista “lost” a copyright case against Cisco for the CLI and didn’t get fined. Even without the legal ramifications, the Cisco-based CLI has been living on borrowed time for quite a while.

APIs and Python make programming networks easy. Provided you know Python, that is. That’s great for DevOps folks looking to pick up another couple of libraries and get those VLANs tamed. But it doesn’t help people who are looking to expand their skillset without learning an entirely new language. People scared by semicolons and strict syntax structure.
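To be fair to the DevOps folks, “a couple of libraries” really is all it takes if Python doesn’t scare you. A minimal sketch using Netmiko (my pick for illustration, with placeholder device details):

```python
from netmiko import ConnectHandler

# Device details are placeholders for illustration
switch = {
    "device_type": "cisco_ios",
    "host": "10.0.0.10",
    "username": "admin",
    "password": "password",
}

conn = ConnectHandler(**switch)
# Push a small config set -- the same lines you'd type at the CLI
output = conn.send_config_set(["vlan 200", "name GUEST-WIFI"])
print(output)
conn.disconnect()
```

Easy enough if you already live in Python. A whole new world of colons and indentation if you don’t.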

That’s the real reason Cisco is pushing the ACI terminology down into DNA Center and beyond. This is their strategy for finally getting rid of the CLI across their devices. Now, instead of dealing with question marks and telnet/SSH sessions, you’re going to orchestrate policies and EPGs from your central database. Everything falls into place after that.

Maybe DNA Center does some fancy Python stuff on the back end to handle older devices. Maybe there are even some crazy command interpreters literally force-feeding syntax to an ancient router. But the end goal is to get people into the tools used to orchestrate. And on that day, Cisco will have a central location from which to build. No more archaic terminal windows. No more console cables. Just the clean purity of the user interface built by Insieme and heavily influenced by Cisco UCS Director.
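I have no idea how DNA Center is actually built, but the force-feeding idea is easy to picture: a translation layer that renders an abstract policy into whatever dialect the box underneath speaks. A purely illustrative toy, not a claim about Cisco’s implementation:

```python
# Purely illustrative: render one abstract "policy" into IOS-style CLI.
# This is a sketch of the idea, not how DNA Center actually works.

policy = {"group": "GUEST", "vlan": 200, "allow_tcp_ports": [80, 443]}

def render_ios(p):
    lines = [f"vlan {p['vlan']}", f" name {p['group']}"]
    lines.append(f"ip access-list extended {p['group']}-IN")
    lines += [f" permit tcp any any eq {port}" for port in p["allow_tcp_ports"]]
    return "\n".join(lines)

print(render_ios(policy))
```

The person writing the policy never has to see the rendered syntax. That’s the whole pitch.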


Tom’s Take

Nothing goes away because it’s too old. I still have a VCR in my house. I don’t even use it any longer. It sits in a closet for the day that my wife decides she wants to watch our wedding video. And then I spend an hour hooking it up. But, one of these days I’m going to take that tape and transfer it to our Plex server. The intent is still the same – my wife gets to watch videos. But I didn’t tell her not to use the VCR. Instead, I will give her a better way to accomplish her task. And on that day, I can retire that old VCR to the same pile as the CLI. Because I think the ACI-based terminology that Cisco is focusing on is the beginning of the end of the CLI as we know it.

Extreme-ly Interesting Times In Networking

If you’re a fan of Extreme Networks, the last few months have been pretty exciting for you. Just yesterday, it was announced that Extreme is buying the data center networking business of Brocade for $55 million once the Broadcom acquisition happens. Combined with the $100 million acquisition of Avaya’s campus networking portfolio on March 7th and the purchase of Zebra Wireless (nee Motorola) last September, Extreme is pushing itself into the market as a major player. How is that going to impact the landscape?

Building A Better Business

Extreme has been a player in the wireless space for a while. Their acquisition of Enterasys helped vault them into the mix with other big wireless players. Now, the rounding out of the portfolio helps them compete across the board. They aren’t just limited to playing with stadium wifi and campus technologies now. The campus networking story that was brought in through Avaya was a must to help them compete with Aruba, A Hewlett Packard Enterprise Company. Aruba owns the assets of HPE’s campus networking business and has been leveraging them effectively.

The data center play was an interesting one to say the least. I’ve mused recently that Brocade’s data center business may end up lying fallow once Arris grabbed Ruckus. Brocade had some credibility in very large networks through VCS and the MLX router series, but outside of the education market and specialized SDN deployments it was rare to encounter them. Arista has really dug into Cisco’s market share here and the rest of the players seem to be content to wait out that battle. Juniper is back in the carrier business, and the rest seem to be focusing now on OCP and the pieces that flow logically from that, such as Six Pack, Backpack, and Whatever Facebook Thinks The Next Fast Switch Should Be Called That Ends In “Pack”.

Seeing Extreme come from nowhere to snap up the data center line from Brocade signals a new entrant into the data center crowd. Imagine, if you will, a mosh pit. Lots of people fighting for their own space to do their thing. Two people in the middle have decided to have an all-out fight over their space. Meanwhile, everyone else is standing around watching them. Finally, a new person enters the void of battle to do their thing on the side away from the fistfight that has captured everyone’s attention. This is where Extreme finds itself now.

Not Too Extreme

The key for Extreme now is to tell the “Full Stack” story to customers. Whereas before they had to hand off the high end to another “frenemy” and hope that it didn’t come back to bite them, now Extreme can sell all the way up and down the stack. They have some interesting ideas about SDN that will bear some watching as they begin to build them into their stack. The integration of VCS into their portfolio will take some time, as the way that Brocade does their fabric implementation is a bit different than the rest of the world.

This is also a warning call for the rest of the industry. It’s time to get off the sidelines and choose your position. Arista and Cisco won’t be fighting forever. Cisco is also reportedly looking to create a new OS to bring some functionality to older devices. That means they can continue to innovate while fighting against their competitors. The winner of the Cisco and Arista battle is inconsequential to the rest of the industry right now. Either Arista will be wiped off the map and a stronger Cisco will pick a new enemy, or Arista will hurt Cisco and pull even with them in the data center market, leaving more market share for others to gobble up.

Extreme stands a very good chance of picking up customers with their approach. Customers that wouldn’t have considered them in the past will be lining up to see how Avaya campus gear will integrate with Enterasys wireless and Brocade data center gear. It’s not all that different from the hodgepodge approach that many companies have taken for years to lower costs and avoid having a single-vendor solution. Now, those lower-cost options are available in a single line of purple boxes.


Tom’s Take

Who knew we were going to get a new entrant into the Networking Wars for the tidy sum of $155 million? Feels like it should have cost more than that, but given the number of people holding fire sales to get rid of things they have to divest before pending acquisition or pending dissolution, it really doesn’t come as much of a surprise. Someone had to buy these pieces and put them together. I think Extreme is going to turn some heads and make for some interesting conversations in the next few months. Don’t count them out just yet.

HPE Networking: Past, Present, and Future


I had the chance to attend HPE Discover last week by invitation from their influencer team. I wanted to see how HPE Networking had been getting along since the acquisition of Aruba Networks last year. There have been some moves and changes, including a new partnership with Arista Networks announced in September. What follows is my analysis of HPE’s Networking portfolio after HPE Discover London and where they are headed in the future.

Campus and Data Center Divisions

Recently, HPE reorganized their networking division along two different lines. The first is the Aruba brand that contains all the wireless assets along with the campus networking portfolio. This is where the campus belongs. The edge of the network is an ever-changing area where connectivity is king. Reallocating the campus assets to the capable Aruba team means that they will do the most good there.

The rest of the data center networking assets were loaded into the Data Center Infrastructure Group (DCIG). This group is headed up by Dominick Wilde and contains things like FlexFabric and Altoline. The partnership with Arista rounds out the rest of the switch portfolio. This helps HPE position their offerings across a wide range of potential clients, from existing data center infrastructure to newer cloud-ready shops focusing on DevOps and rapid application development.

After hearing Dom Wilde speak to us about the networking portfolio goals, I think I can see where HPE is headed going forward.

The Past: HPE FlexFabric

As Dom Wilde said during our session, “I have a market for FlexFabric and can sell it for the next ten years.” FlexFabric represents traditional data center networking. There is a huge market of existing infrastructure among customers that have made a major investment in HPE in the past. Dom is absolutely right when he says the market for FlexFabric isn’t going to shrink in the foreseeable future. Even though the migration to the cloud is underway, there are a significant number of existing applications that will never be cloud ready.

FlexFabric represents the market segment that will persist on existing solutions until a rewrite of critical applications can be undertaken to get them moved to the cloud. Think of FlexFabric as the vaunted buggy whip manufacturer. They may be the last one left, but for the people that need their products they are the only option in town. DCIG may have eyes on the future, but that plan will be financed by FlexFabric.

The Present: HPE Altoline

Altoline is where HPE has been pouring their research for the past year. It is a product line that benefits from the latest in software-defined and webscale technologies, using OpenSwitch as the operating system. HPE initially developed OpenSwitch as an open, vendor-neutral platform before turning it over to the Linux Foundation this summer, where development continues with a variety of different partners.

Dom brought up a couple of use cases for Altoline during our discussion that struck me as brilliant. One of them was using it as an out-of-band monitoring solution. These switches don’t need to be big or redundant. They need to have ports and a management interface. They don’t need complexity. They need simplicity. That’s where Altoline comes into play. It’s never going to be as complex as FlexFabric or as programmable as Arista. But it doesn’t have to be. In a workshop full of table saws and drill presses, Altoline is a basic screwdriver. It’s a tool you can count on to get the easy jobs done in a pinch.

The Future: Arista

The Arista partnership, according to Dom Wilde, is all about getting ready for the cloud. For those customers that are looking at moving workloads to the cloud or creating a hybrid environment, Arista is the perfect choice. All of Arista’s recent solution sets have been focused on providing high-speed, programmable networking that can integrate a number of development tools. EOS is the most extensible operating system on the market and is a favorite for developers. Positioning Arista at the top of the food chain is a great play for customers that don’t have a huge investment in cloud-ready networking right now.

The question that I keep coming back to is…when does this Arista partnership become an acquisition? There is significant integration between the two companies. Arista has essentially displaced the top of the line for HPE. How long will it take before the partnership becomes something more permanent? I can easily foresee HPE making a play for the potential revenues produced by Arista and the help they provide moving things to the cloud.


Tom’s Take

I was the only networking person at HPE Discover this year because the HPE networking story has been simplified quite a bit. On the one hand, you have the campus tied up with Aruba. They have their own story to tell in a different area early next year. On the other hand, you have the simplification of the portfolio with DCIG and the inclusion of the Arista partnership. I think that Altoline is going to find a niche for specific use cases but will never really take off as a separate platform. FlexFabric is in maintenance mode as far as development is concerned. It may get faster, but it isn’t likely to get smarter. Not that it really needs to. FlexFabric will support legacy architecture. The real path forward is Arista and all the flexibility it represents. The question is whether HPE will try to make Arista a business unit before Arista takes off and becomes too expensive to buy.

Disclaimer

I was an invited guest of HPE for HPE Discover London. They paid for my travel and lodging costs as well as covering event transportation and meals. They did not ask for nor were they promised any kind of consideration in the coverage provided here. The opinions and analysis contained in this article represent my thoughts alone.

Building A Lego Data Center Juniper Style


I think I’ve been intrigued by building with Lego sets as far back as I can remember.  I had a plastic case full of them that I would use to build spaceships and castles day in and day out.  I think much of that building experience paid off when I walked into the real world and started building data centers.  Racks and rails are network engineering versions of the venerable Lego brick.  Little did I know what would happen later.

Ashton Bothman (@ABothman) is a social media rock star for Juniper Networks.  She emailed me and asked me if I would like to participate in a contest to build a data center from Lego bricks.  You could imagine my response:

YES!!!!!!!!!!!!!

I like the fact that Ashton sent me a bunch of good old fashioned Lego bricks.  One of the things that has bugged me a bit since the new licensed sets came out has been the reliance on specialized pieces.  Real Lego means using the same bricks for everything, not custom-molded pieces.  Ashton did it right by me.

Here’s a few of my favorite shots of my Juniper Lego data center:

My rack setup. I even labeled some of the devices!

Ladder racks for my Lego cables. I like things clean.

Can’t have a data center without a generator. Complete with flashing lights.

The Big Red Button. EPO is a siren call for troublemakers.

The Token Unix Guy. Complete with beard and old workstation.

Storage lockers and a fire extinguisher. I didn’t have enough bricks for a halon system.

The Obligatory Logo Shot. Just for Ashton.


Tom’s Take

This was fun.  It’s also for a great cause in the end.  My son has already been eyeing this set and he helped a bit in the placement of the pirate DC admin and the lights on the server racks.  He wanted to put some ninjas in the data center when I asked him what else was needed.  Maybe he’s got a future in IT after all.


Here are some more Lego data centers from other contest participants:

Ivan Pepelnjak’s Lego Data Center

Stephen Foskett’s Datacenter History: Through The Ages in Lego

Amy Arnold’s You Built a Data Center?  Out Of A DeLorean?

Cisco Borderless Idol


Day one of Network Field Day 5 (NFD5) included presentations from the Cisco Borderless team. You probably remember their “speed dating” approach at NFD4, which gave us a wealth of information in 15-minute snippets. The only drawback to that lineup is that when you find a product or technology that interests you, there really isn’t any time to quiz the presenter before they are ushered off stage. Someone must have listened when I said that before, because this time they brought us 20-minute segments – 10 minutes of presentation, 10 minutes of demo. With the switching team, we even got to vote on our favorite to bring back for the next round (hence the title of the post). More on that in a bit.

6500 Quad Supervisor Redundancy

First up on the block was the Catalyst 6500 team. I swear this switch is the Clint Howard of networking, because I see it everywhere. The team wanted to tell us about a new feature available in the latest code release for the Supervisor 2T (Sup2T). Previously, the supervisor was capable of performing a couple of unique functions. The first of these was Stateful Switch Over (SSO). During SSO, the redundant supervisor in the chassis can pick up where the primary left off in the event of a failure. All of the traffic sessions can keep on trucking even if the active sup module is rebooting. This gives the switch tremendous uptime, as well as allowing for things like hitless upgrades in production. The other existing feature of the Sup2T is Virtual Switching System (VSS). VSS allows two Sup2Ts to appear as one giant switch. This is helpful for applications where you don’t want to trust your traffic to just one chassis. VSS allows two different chassis to terminate Multi-Chassis EtherChannel (MLAG) connections so that distribution layer switches don’t have a single point of failure. Traffic looks like it’s flowing to one switch when in actuality it may be flowing to one or the other. In the event that a supervisor goes down, the other one can keep forwarding traffic.

Enter the Quad Sup SSO ability. Now, instead of having an RPR-only failover on the members of a VSS cluster, you can set up the redundant Sup2T modules to be ready and waiting in the event of a failure. This is great because you can lose up to three Sup2Ts at once and still keep forwarding while they reboot or get replaced. Granted, anything that can take out three Sup2Ts at once is probably going to take down the fourth (like a power failure or power surge), but it’s still nice to know that you have a fair amount of redundancy now. This only works on the Sup2T, so you can’t get this if you are still running the older Sup720. You also need to make sure that your linecards support the newer Distributed Forwarding Card 3 (DFC3), which means you aren’t going to want to do this with anything less than a 6700-series line card. In fact, you really want to be using the 6800 series or better just to be on the safe side. As Josh O’Brien (@joshobrien77) commented, this is a great feature to have, but it should have been there already. I know that there are a lot of technical reasons why this wasn’t available earlier, and I’m sure the increased fabric speed of the Sup2T, not to mention the increased capability of the DFC3, are necessary components of the solution. Still, I think this is something that probably should have shipped in the Sup2T on the first day. I suppose that given the long road the Sup2T took to get to us, “better late than never” is applicable here.
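If you want to see where your own chassis stands, the supervisor redundancy state is easy to pull programmatically. A quick sketch, again using Netmiko purely as an illustration with placeholder device details:

```python
from netmiko import ConnectHandler

# Placeholder device details for illustration
sup = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",
    "username": "admin",
    "password": "password",
}

conn = ConnectHandler(**sup)
# Check the current supervisor redundancy mode and peer state
print(conn.send_command("show redundancy states"))
conn.disconnect()
```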

UCS-E

Next up was the Cisco UCS-E series server for the ISR G2 platform. This was something that we saw at NFD4 as well. The demo was a bit different this time, but for the most part this is similar info to what we saw previously.


Catalyst 3850 Unified Access Switch

The Catalyst 3850 is Cisco’s new entry into the fixed-configuration switch arena. They are touting this as a “Unified Access” solution for clients. That’s because the 3850 is capable of terminating up to 50 access points (APs) per stack of four. This thing can basically function as a wiring closet wireless controller, thanks to the new IOS wireless controller functionality that’s also featured in the new 5760 controller. This gets away from the old Airespace-like CLI that was so prominent on the 2100, 2500, 4400, and 5500 series controllers. The 3850, which is based on the 3750X, also sports a new 480Gbps Stackwise connector, appropriately called Stackwise480. This means that a stack of 3850s can move some serious bits. All that power does come at a cost – Stackwise480 isn’t backwards compatible with the older Stackwise v1 and v2 from the 3750 line. This is only an issue if you are trying to deploy 3850s into existing 3750X stacks, because Cisco has announced the End of Sale (EOS) and End of Life (EOL) information for those older 3750s. I’m sure the idea is that when you go to rip them out, you’ll be more than happy to replace them with 3850s.

The 3850 wireless setup is a bit different from the old 3750 Access Controller that had a 4400 controller bolted on to it. The 3850 uses Cisco’s IOS-XE model of virtualizing IOS into a sort of VM state that can run on one core of a dual-core processor, leaving the second core available to do other things. Previously at NFD4, we’d seen the Catalyst 4500 team using that other processor core for doing inline Wireshark captures. Here, the 3850 team is using it to run the wireless controller. That’s a pretty awesome idea when you think about it. Since I no longer have to worry about IOS taking up all my processor and I know that I have another one to use, I can start thinking about some interesting ideas.

The 3850 does have a couple of drawbacks. Aside from the above Stackwise limitations, you have to terminate the APs on the 3850 stack itself. Unlike the CAPWAP connections that tunnel all the way back to the Airespace-style controllers, the 3850 needs to have the APs directly connected in order to decapsulate the tunnel. That does provide for some interesting QoS implications and applications, but it doesn’t provide much flexibility from a wiring standpoint. I think the primary use case is to have one 3850 switch (or stack) per wiring closet, which would be supported by the current 50 AP limitation. The other drawback is that the 3850 is currently limited to a stack of four switches, as opposed to the larger stack limit on the 3750X. Aside from that, it’s a switch that you probably want to take a look at in your wiring closets now. You can buy it with an IP Base license today and then add on the AP licenses down the road as you want to bring them online. You can even use the 3850s to terminate CAPWAP connections and manage the APs from a central controller without adding the AP license.

Here is the deep dive video that covers a lot of what Cisco is trying to do from a unified wired and wireless access policy standpoint. Also, keep an eye out for the cute Unified Access video in the middle.

Private Data Center Mobility

I found it interesting that this demo was in the Borderless section and not the Data Center presentation. This presentation dives into the world of Overlay Transport Virtualization (OTV). Think of OTV like an extra layer of 802.1Q-in-Q tunneling with some IS-IS routing mixed in. OTV is Cisco’s answer to extending the layer 2 boundary between data centers to allow VMs to be moved to other sites without breaking their networking. Layer 2 everywhere isn’t the most optimal solution, but it’s the best thing we’ve got to work with given the current state of VM networking (until Nicira figures out what they’re going to do).
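If the “Q-in-Q plus IS-IS” description feels abstract, here’s a toy Python model of the core idea: MAC reachability is advertised between sites by a control plane instead of being learned by flooding, and frames destined for a remote MAC get encapsulated toward the edge device that advertised it. This is a mental model only, not the actual protocol machinery.

```python
# Toy illustration of the OTV idea: MAC reachability is advertised by a
# control plane (IS-IS in real OTV) instead of flooded, and frames are
# encapsulated toward the remote edge device that advertised the MAC.

mac_table = {}  # MAC address -> IP of the OTV edge device that advertised it

def receive_advertisement(mac, edge_device_ip):
    """Control plane: a remote site advertises a MAC it has learned locally."""
    mac_table[mac] = edge_device_ip

def forward(frame_dst_mac):
    """Data plane: encapsulate toward the advertising edge, or drop if unknown."""
    edge = mac_table.get(frame_dst_mac)
    if edge is None:
        return "no advertisement for this MAC -> no blind flooding across the overlay"
    return f"encapsulate frame in IP and send to edge device {edge}"

receive_advertisement("0000.0c07.ac01", "192.0.2.10")   # learned at the remote DC
print(forward("0000.0c07.ac01"))
print(forward("0000.0c07.acff"))
```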

We loved this session so much that we asked Mostafa to come back and talk about it more in depth.

The most exciting part of this deep dive to me was the introduction of LISP. To be honest, I haven’t really been able to wrap my head around LISP the first couple of times that I saw it. Now, thanks to the Borderless team and Omar Sultan (@omarsultan), I’m going to dig into it a lot more in the coming months. I think there are some very interesting issues that LISP can solve, including my IPv6 Gordian Knot.


Tom’s Take

I have to say that I liked Cisco’s approach to the presentations this time.  Giving us discussion time along with a demo allowed us to understand things before we saw them in action.  The extra five minutes did help quite a bit, as it felt like the presenters weren’t as rushed this time.  The “Borderless Idol” style of voting for a presentation to get more info out of was brilliant.  We got to hear about something we wanted to go into depth about, and I even learned something that I plan on blogging about later down the line.  Sure, there was a bit of repetition in a couple of areas, most notably UCS-E, but I can understand how those product managers have invested time and effort into their wares and want to give them as much exposure as possible.  Borderless hits all over the spectrum, so keeping the discussion focused in a specific area can be difficult.  Overall, I would say that Cisco did a good job, even without Ryan Seacrest hosting.

Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 5.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 5.  In addition, Cisco provided me with a breakfast and lunch at their offices.  They also provided a Moleskine notebook, a t-shirt, and a flashlight toy.  At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

CCIE Data Center – The Waiting Is The Hardest Part

By now, you’ve probably read the posts from Jeff Fry and Tony Bourke letting the cat out of the CCIE bag for the oft-rumored CCIE Data Center (DC) certification.  As was the case last year, a PDF posted to the Cisco Live Virtual website spoiled all the speculation.  Contained within the slide deck for BRKCRT-1612 Evolution of Data Centre Certification and Training is a wealth of confirmation starting around slide 18.  It spells out in bold letters the CCIE DC 1.0 program.  It seems to be focused around three major technology pillars: Unified Computing, Unified Fabric, and Unified Network Services.  As people who have read my blog since last year have probably surmised, this wasn’t really a surprise to me after Cisco Live 2011.

As I surmised eight months ago, it encompasses the Nexus product line top to bottom, with the 7009, 5548, 2232, and 1000v switches all being represented.  Also included just for you storage folks is a 9222i MDS SAN switch.  There’s even a Catalyst 3750 thrown in for good measure.  Maybe they’re using it to fill an air gap in the rack or something.  From the UCS server side of the house, you’ll likely get to see a UCS 6248 fabric interconnect and a 5108 blade chassis.  And because no CCIE lab would exist without a head scratcher on the blueprint, there is also an ACE 4710 appliance.  I’m sure that this has to do with the requirement that almost every data center needs some kind of load balancer or application delivery controller.  As I mentioned before and Tony mentioned in his blog post, don’t be surprised to see an ACE GSS appliance in there as well.  Might be worth a two-point question.

Is the CCIE SAN Dead?

If you’re currently studying for your SAN CCIE, don’t give up just yet.  While there hasn’t been any official announcement just yet, that also doesn’t mean the SAN program is being retired any time soon.  There will be more than enough time for you SAN jockeys to finish up this CCIE just in time to start studying for a new one.  If you figure that the announcement will be made by Cisco Live Melbourne near the end of March, it will likely be three months for the written beta.  That puts the wide release of the written exam at Cisco Live San Diego in June.  The lab will be in beta from that point forward, so it will be the tail end of the year before the first non-guinea pigs are sitting the CCIE DC lab.  Since you SAN folks are buried in your own track right now, keep heading down that path.  I’m sure that all the SAN-OS configs and FCoE experience will serve you well on the new exam, as UCS relies heavily on storage networking.  In fact, I wouldn’t be surprised to see some sort of bridge program run concurrently with the CCIE SAN / CCIE DC candidates for the first 6-8 months where SAN CCIEs can sit the DC lab as an opportunity and incentive to upgrade.  After all, the first DC CCIEs are likely to be SAN folks anyway.  Why not try to certify all you can?

Expect the formal announcement of the program to happen sometime between March 6th and March 20th.  It will likely come with a few new additions to the UCS line and be promoted as a way to prove to the world that Cisco is very serious about servers now.  Shortly after that, expect an announcement for signups for the beta written exam.  I’d bank on 150-200 questions of all kinds, from FCoE to UCS Manager.  It’ll take some time to get all those graded, so while you’re waiting to see if you’ve hit the cut score, head over to the Data Center Supplemental Learning page and start refreshing things.  Maybe you’ll have a chance to head to San Jose and sit in my favorite building on Tasman Drive to try and break a brand new lab.  Then, you’ll just be waiting for your score report.  That’s the hardest part.

2012, Year of the CCIE Data Center?

About six months ago, I wrote out my predictions about the rumored CCIE Data Center certification.  I figured it would be a while before we saw anything about it.  In the interim, there are a lot of people out there that are talking about the desire to have a CCIE focused on things like Cisco UCS and Nexus.  People like Tony Bourke are excited and ready to dive head first into the mountain of material that is likely needed to learn all about being an internetworking expert for DC equipment.  Sadly though, I think Tony’s going to have to wait just a bit longer.

I don’t think we’ll see the CCIE Data Center before December of 2012.

DISCLAIMER: These suppositions are all based on my own research and information.  They do not reflect the opinion of any Cisco employee, or the employees of training partners.  This work is mine and mine alone.

Why do I think that?  Several reasons actually.  The first is that there are new tests due for the professional level specialization for Cisco Data Center learning.  The DC Networking Infrastructure Support and Design Specialist certifications are getting new tests in February.  This is probably a refresh of the existing learning core around Nexus switches, as the new tests reference Unified Fabric in the title.  With these new tests imminent, I think Cisco is going to want a little more stability in their mid-tier coursework before they introduce their expert level certification.  By having a stable platform to reference and teach from, it becomes infinitely easier to build a lab.  The CCIE Voice lab has done this for a while now, only supporting versions 4.2 and 7.x, skipping over 5.x and 6.x.  It makes sense that Cisco isn’t going to want to change the lab every time a new Nexus line card comes out, so having a stable reference platform is critical.  And that can only come if you have a stable learning path from beginning to end.  It will take at least 6 months to work out the kinks in the new material.

Speaking of 6 months, that’s a bit of the magic number when it comes to CCIE programs.  All current programs require a 6 month window for notification of major changes, such as blueprints or technology refreshes.  Since we haven’t heard any rumblings of an imminent blueprint change for the CCIE SAN, I doubt we’ll see the CCIE DC any sooner than the end of the year.  From what I’ve been able to gather, the CCIE DC will be an add-on augmentation to the existing CCIE SAN program rather than being a brand new track.  The amount of overlap between DC and SAN would be very large, and the DC core network would likely include SAN switching in the form of MDS, so keeping both tracks alive doesn’t make a lot of sense.  If you start seeing rumors about a blueprint change coming for the CCIE SAN, that’s when you can bet that you are 6-9 months out from the CCIE DC.

One other reason for the delay is that the CCIE Security lab changes still have not gone live yet (as of this writing).  There are a lot of people in limbo right now waiting to see what is changing in the security internetworking expert realm, many more than those currently taking the CCIE SAN track.  CCIE Security is easily the third most popular track behind R&S and SP.  Keeping all those candidates focused and on task is critical to the overall health of the CCIE program.  Cisco tends to focus on one major track at a time when it comes to CCIE revamps, so with all their efforts focused on the security track presently, I doubt they will begin to look at the DC track until the security lab changes are live and working as intended.  Once the final changes to the security lab are implemented, expect a 6-9 month window before the DC lab goes live.

The final reason that I think the DC will wait until the last part of the year is timing.  If you figure that Cisco is aiming for the latter part of the calendar year to implement something, it won’t happen until after August.  Cisco’s fiscal year begins on August 1, so they tend to freeze things for the month of August while they work out things like reassigning personnel and forecasting projections.  September is the first realistic timeframe to look at changes being implemented, but that’s still a bit of a rush given all the other factors that go into creating a new CCIE track.  Especially one with all the moving parts that would be involved in a full data center network implementation.

Tom’s Take

Creating a program that is as sought after as the CCIE Data Center involves a lot of planning.  Implementing this plan is an involved process that will require lots of trial and error to ensure that it lives up to the standards of the CCIE program.  This isn’t something that should be taken lightly.  I expect that we will hear about the changes to the program around the time frame of Cisco Live 2012.  I think that will be the announcement of the beta program and the recruitment of people to try the written test beta.  With a short window between the release of the cut scores and beta testing the lab, I think that it will be a stretch to get the CCIE DC finalized by the end of the year.  Also, given that the labs tend to shut down around Christmas and not open back up until the new year, I doubt that 2012 will be the year of the CCIE DC.  I’ve been known to be wrong before, though.  So long as we don’t suffer from the Mayan Y2K bug, we might be able to get our butts kicked by a DC lab sometime in 2013.  Here’s hoping.