Infineta – Network Field Day 3

The first day of Network Field Day 3 wrapped up at the offices of Infineta Systems.  Frequent listeners of the Packet Pushers podcast will remember them from Show 60 and Show 62.  I was somewhat familiar with their data center optimization technology before the event, but I was interested to see how they did their magic.  That desire to see behind the curtains would come back to haunt me.

Infineta was primed to talk to us.  They even had a special NFD3 page set up with the streaming video and more information about their solutions.  We arrived on site and were ushered into a conference room where we got set up for the ensuing fun.

Haseeb Budhani (@haseebbudhani), Vice President of Products, kicked off the show with a quick overview of Infineta’s WAN optimization product line.  Unlike Riverbed or Cisco’s WAAS, Infineta doesn’t really care about optimizing branch office traffic.  Infineta focuses completely on the data center at 10Gbps speeds.  Those aren’t office documents, kids.  That’s heavy duty data for things like SAN replication, backup and archive jobs, and even scaling out application traffic.  Say you’re a customer wanting to do VMDK snapshots across a gigabit WAN link between sites on two different coasts.  Infineta reduces the time it takes to transfer that data while letting you better utilize the links.  If you’re only seeing 25-30% link utilization in a scenario like this, Infineta can increase that to something on the order of 90%.  However, the proof for something like this doesn’t come from a case study in PowerPoint.  That means demo time!

Here is one place where I think Infineta hit a home run.  Their demo was going to take several minutes to compress and transfer data.  Rather than making the delegates wait for the demo to complete at the end of the presentation, staring at slowly crawling progress bars, Infineta kicked off the demo and let it run in the background.  That’s great thinking: it kept our attention focused on the merits of the solution while the proof was quietly building in the background.  And while the demo was chugging along, Infineta brought in someone who did something I never thought possible.  They found someone who out-nerded Victor Shtrom.

That fine gentleman is Dr. K. V. S. Ramarao (@kvsramarao), or “Ram” as he is affectionately known.  He was a professor of Computer Science at Pitt, and he’s ridiculously smart.  I joked that I was going to need to go back to college to write this blog post because of all the math he pulled out while discussing how Infineta does their WAN optimization.  Even watching the video again didn’t help me much.  There’s a LOT of algorithmic math going on in this explanation.  The good Dr. Ramarao definitely earned his Ph.D. if he worked on this.  If you are at all interested in the theoretical math behind large-scale data deduplication, you should watch the above video at least three times.  Then do me a favor and explain it to me like I’m a kindergartner.
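For the curious, the general flavor of that math is content-defined deduplication: slide a window across the byte stream, cut chunks where the content says to cut, and fingerprint each chunk so repeated data only crosses the WAN once.  Here’s a toy sketch of that textbook technique in Python.  To be clear, this is my own illustration of the general idea, not Infineta’s actual (and undoubtedly far smarter) algorithm:

```python
# Toy content-defined chunking plus fingerprint dedup. This sketches the
# general technique behind large-scale deduplication, NOT Infineta's
# actual algorithm.

import hashlib

def chunks(data, window=16, boundary=0):
    """Cut a chunk wherever the hash of the trailing window lands on the
    boundary value. Because boundaries depend on content, not position,
    an insertion early in the stream doesn't shift every chunk after it."""
    start = 0
    for i in range(window, len(data)):
        if hashlib.sha1(data[i - window:i]).digest()[-1] == boundary:
            yield data[start:i]
            start = i
    yield data[start:]

def dedup_transfer(data, seen):
    """Return the bytes we'd actually put on the wire: a full chunk the
    first time its fingerprint appears, an 8-byte reference afterwards."""
    wire = 0
    for chunk in chunks(data):
        fp = hashlib.sha256(chunk).digest()
        wire += 8 if fp in seen else len(chunk)
        seen.add(fp)
    return wire

seen = set()
payload = b"all work and no play makes jack a dull boy. " * 500
print(dedup_transfer(payload, seen))  # first transfer: repeated chunks collapse
print(dedup_transfer(payload, seen))  # retransfer: nothing but tiny references
```

The second call is the interesting one: once the far side has seen a chunk’s fingerprint, retransmitting that data costs a handful of bytes instead of the whole chunk.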

The wrap-up for Infineta was a bit of reinforcement of the key points that differentiate them from the rest of the market.  All in all, a very good presentation and a great way to peg the nerd meter off the charts.

If you’d like to learn more about Infineta Systems, you can find them at http://www.infineta.com/.  You can also follow them on Twitter as @Infineta.


Tom’s Take

Modern data centers are producing exponentially more traffic.  They are no longer bound to a single set of hard disks or a physical memory limit.  We also ask a lot more of our servers when we task them with sub-second failover across three or more time zones.  Since WAN links aren’t keeping pace with the explosion of moving data and the shrinking window it has to arrive in the proper place, we need to look at how to reduce the data being put on the wire.  I think Infineta has done a very good job of fitting into this niche of the market.  By basing their product on some pretty solid math, they’ve shown how to scale their solution to provide much better utilization of WAN links while still allowing for magical things like vMotion to happen.  I’m going to be keeping a closer eye on Infineta, especially when I find myself in need of migrating a server from Los Angeles to New York in no time flat.

Tech Field Day Disclaimer

Infineta was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a t-shirt, coffee mug, pen, and USB drive containing product information and marketing collateral.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

CCIE Data Center – The Waiting Is The Hardest Part

By now, you’ve probably read the posts from Jeff Fry and Tony Bourke letting the cat out of the CCIE bag for the oft-rumored CCIE Data Center (DC) certification.  As was the case last year, a PDF posted to the Cisco Live Virtual website spoiled all the speculation.  Contained within the slide deck for BRKCRT-1612 Evolution of Data Centre Certification and Training is a wealth of confirmation starting around slide 18.  It spells out in bold letters the CCIE DC 1.0 program.  It seems to be focused on three major technology pillars: Unified Computing, Unified Fabric, and Unified Network Services.  As people who have read my blog since last year have probably surmised, this wasn’t really a surprise to me after Cisco Live 2011.

As I surmised eight months ago, it encompasses the Nexus product line top to bottom, with the 7009, 5548, 2232, and 1000v switches all being represented.  Also included just for you storage folks is a 9222i MDS SAN switch.  There’s even a Catalyst 3750 thrown in for good measure.  Maybe they’re using it to fill an air gap in the rack or something.  From the UCS server side of the house, you’ll likely get to see a UCS 6248 fabric interconnect and a 5108 blade chassis.  And because no CCIE lab would exist without a head-scratcher on the blueprint, there is also an ACE 4710 appliance.  I’m sure this has to do with the requirement that almost every data center needs some kind of load balancer or application delivery controller.  As I mentioned before and Tony mentioned in his blog post, don’t be surprised to see a GSS appliance in there as well.  Might be worth a two-point question.

Is the CCIE SAN Dead?

If you’re currently studying for your SAN CCIE, don’t give up just yet.  While there hasn’t been any official announcement, that doesn’t mean the SAN program is being retired any time soon.  There will be more than enough time for you SAN jockeys to finish up this CCIE just in time to start studying for a new one.  If you figure that the announcement will be made by Cisco Live Melbourne near the end of March, it will likely be three months until the written beta.  That puts the wide release of the written exam at Cisco Live San Diego in June.  The lab will be in beta from that point forward, so it will be the tail end of the year before the first non-guinea pigs are sitting the CCIE DC lab.  Since you SAN folks are buried in your own track right now, keep heading down that path.  I’m sure all the SAN-OS configs and FCoE experience will serve you well on the new exam, as UCS relies heavily on storage networking.  In fact, I wouldn’t be surprised to see some sort of bridge program run concurrently with the CCIE SAN and CCIE DC tracks for the first 6-8 months, where SAN CCIEs can sit the DC lab as an opportunity and incentive to upgrade.  After all, the first DC CCIEs are likely to be SAN folks anyway.  Why not try to certify all you can?

Expect the formal announcement of the program to happen sometime between March 6th and March 20th.  It will likely come with a few new additions to the UCS line and be promoted as a way to prove to the world that Cisco is very serious about servers now.  Shortly after that, expect an announcement for signups for the beta written exam.  I’d bank on 150-200 questions of all kinds, from FCoE to UCS Manager.  It’ll take some time to get all those graded, so while you’re waiting to see if you’ve hit the cut score, head over to the Data Center Supplemental Learning page and start refreshing things.  Maybe you’ll have a chance to head to San Jose and sit in my favorite building on Tasman Drive to try and break a brand new lab.  Then, you’ll just be waiting for your score report.  That’s the hardest part.

2012, Year of the CCIE Data Center?

About six months ago, I wrote out my predictions about the rumored CCIE Data Center certification.  I figured it would be a while before we saw anything about it.  In the interim, a lot of people out there have been talking about their desire for a CCIE focused on things like Cisco UCS and Nexus.  People like Tony Bourke are excited and ready to dive head first into the mountain of material that is likely needed to learn all about being an internetworking expert for DC equipment.  Sadly though, I think Tony’s going to have to wait just a bit longer.

I don’t think we’ll see the CCIE Data Center before December of 2012.

DISCLAIMER: These suppositions are all based on my own research and information.  They do not reflect the opinion of any Cisco employee, or the employees of training partners.  This work is mine and mine alone.

Why do I think that?  Several reasons actually.  The first is that there are new tests due for the professional level specialization for Cisco Data Center learning.  The DC Networking Infrastructure Support and Design Specialist certifications are getting new tests in February.  This is probably a refresh of the existing learning core around Nexus switches, as the new tests reference Unified Fabric in the title.  With these new tests imminent, I think Cisco is going to want a little more stability in their mid-tier coursework before they introduce their expert level certification.  By having a stable platform to reference and teach from, it becomes infinitely easier to build a lab.  The CCIE Voice lab has done this for a while now, only supporting versions 4.2 and 7.x, skipping over 5.x and 6.x.  It makes sense that Cisco isn’t going to want to change the lab every time a new Nexus line card comes out, so having a stable reference platform is critical.  And that can only come if you have a stable learning path from beginning to end.  It will take at least 6 months to work out the kinks in the new material.

Speaking of 6 months, that’s a bit of a magic number when it comes to CCIE programs.  All current programs require a 6 month notification window for major changes, such as blueprints or technology refreshes.  Since we haven’t heard any rumblings of an imminent blueprint change for the CCIE SAN, I doubt we’ll see the CCIE DC any sooner than the end of the year.  From what I’ve been able to gather, the CCIE DC will be an augmentation of the existing CCIE SAN program rather than a brand new track.  The amount of overlap between DC and SAN would be very large, and the DC core network would likely include SAN switching in the form of MDS, so keeping both tracks alive doesn’t make a lot of sense.  If you start seeing rumors about a blueprint change coming for the CCIE SAN, that’s when you can bet that you are 6-9 months out from the CCIE DC.

One other reason for the delay is that the CCIE Security lab changes still have not gone live (as of this writing).  There are a lot of people in limbo right now waiting to see what is changing in the security internetworking expert realm, many more than those currently taking the CCIE SAN track.  CCIE Security is easily the third most popular track behind R&S and SP.  Keeping all those candidates focused and on task is critical to the overall health of the CCIE program.  Cisco tends to focus on one major track at a time when it comes to CCIE revamps, so with all their efforts focused on the security track presently, I doubt they will begin to look at the DC track until the security lab changes are live and working as intended.  Once the final changes to the security lab are implemented, expect a 6-9 month window before the DC lab goes live.

The final reason that I think the DC will wait until the last part of the year is timing.  If you figure that Cisco is aiming for the latter part of the calendar year to implement something, it won’t happen until after August.  Cisco’s fiscal year begins on August 1, so they tend to freeze things for the month of August while they work out things like reassigning personnel and forecasting projections.  September is the first realistic timeframe to look at changes being implemented, but that’s still a bit of a rush given all the other factors that go into creating a new CCIE track.  Especially one with all the moving parts that would be involved in a full data center network implementation.

Tom’s Take

Creating a program that is as sought after as the CCIE Data Center involves a lot of planning.  Implementing this plan is an involved process that will require lots of trial and error to ensure that it lives up to the standards of the CCIE program.  This isn’t something that should be taken lightly.  I expect that we will hear about the changes to the program around the time frame of Cisco Live 2012.  I think that will be the announcement of the beta program and the recruitment of people to try the written test beta.  With a short window between the release of the cut scores and beta testing the lab, I think that it will be a stretch to get the CCIE DC finalized by the end of the year.  Also, given that the labs tend to shut down around Christmas and not open back up until the new year, I doubt that 2012 will be the year of the CCIE DC.  I’ve been known to be wrong before, though.  So long as we don’t suffer from the Mayan Y2K bug, we might be able to get our butts kicked by a DC lab sometime in 2013.  Here’s hoping.

2011 in Review, 2012 in Preview

2011 was a busy year for me.  I set myself some rather modest goals exactly one year ago as a way to keep my priorities focused for the coming 365 days.  How’d I do?

1. CCIE R&S: Been There. Done That. Got the Polo Shirt.

2. Upgrade to VCP4: Funny thing.  VMware went and released vSphere 5 before I could get my VCP upgraded.  So I skipped straight over 4 and went right to 5.  I even got to go to class.

3. Go for CCIE: Voice: Ha! Yeah, I was starting to have my doubts when I put that one down on the list.  Thankfully, I cleared my R&S lab.  However, the thought of a second track is starting to sound compelling…

4. Wikify my documentation: Missed the mark on this one.  Spent way too much time doing things and not enough time writing them all down.  I’ll carry this one over for 2012.

5. Spend More Time Teaching: Never got around to this one.  Seems my time was otherwise occupied for the majority of the year.

Forty percent isn’t bad, right?  Instead, I found myself spending time becoming a regular guest on the Packet Pushers podcast and attending three Tech Field Day Events: Tech Field Day 5, Wireless Field Day 1, and Network Field Day 2.  I’ve gotten to meet a lot of great people from social media and made a lot of new friends.  I even managed to keep making blog posts the whole year.  That, in and of itself, is an accomplishment.

What now?  I try to put a couple of things out there as a way to hold my feet to the fire and be accountable for my aspirations.  That way, I can look back in 2013 and hopefully hit at least 50% next time.  Looking forward to the next 366 days (356 if the Mayans were right):

1. Juniper – I think it’s time to broaden my horizons.  I’ve talked to the Juniper folks quite a bit in 2011.  They’ve given me a great overview of how their technology works, and I see real potential in it.  Juniper isn’t something I run into every day, but I think it would be in my best interest to start learning how to get around in the curly CLI.  After all, if they can convert Ivan, they must really have some good stuff.

2. Data Center – Another growth area where I feel I have a lot of catching up to do is the data center.  I feel somewhat comfortable working on NX-OS, but the lack of time I get to configure it every day makes the rust a little thick sometimes.  If it wasn’t for guys like Tony Mattke and Jeff Fry, I’d have a lot more catching up to do.  When you look at how UCS is being positioned by Cisco and where Juniper wants to take QFabric, I think I need to spend some time picking up more data center technology.  Just in case I find myself stranded in there for an extended period of time.  Can’t have this turning into the Lord of the CLIs.

3. Advanced Virtualization – Since I finally upgraded my VCP to version 5, I can start looking at some of the more advanced certifications that didn’t exist back when I was a VCP3.  Namely the VCAP.  I’m a design junkie, so the DCD track would be a great way for me to add some of the above data center skills while picking up some best practices.  The DCA troubleshooting training would be ideal for my current role, since a simple check of vCenter is about all I can muster in the troubleshooting arena right now.  I’d rather spend some time learning how the ESXi CLI works than fighting with a mouse to admin my virtual infrastructure.

4. Head to The Cloud – No, not quite what you’re thinking.  I suffered an SSD failure this year and if it hadn’t been for me having two hard drives in my laptop, I’d probably have lost a good portion of my files as well.  I keep a lot of notes on my laptop and not all of them are saved elsewhere.  Last year I tried to wikify everything and failed miserably.   This year I think I’m going to take some baby steps and get my important documents and notes saved elsewhere and off my local drives.  I’m looking to replace my OneNote archive with Evernote and keep my important documents in Google Docs as opposed to local Microsoft Word.  By keeping my important documents in the cloud, I don’t have to sweat the next drive death quite as much.

The free time I seem to have acquired now that I’ve conquered the lab has been filled with a whole lot of nothing.  In this industry, you can’t sit still for very long or you’ll find yourself getting passed by almost everyone and everything.  I need to sharpen my focus back to these things to keep moving forward and spend less time resting on my laurels.  I hope to spend even more time debating technology with the Packet Pushers and engaging with vendors at Tech Field Day.  Given how amazing and humbling 2011 was, I can’t wait to see what 2012 has in store for me.

Dell Force 10 – Network Field Day 2

The final presentation of Network Field Day 2 came from Force 10.  Force 10 is now part of the larger Dell Networking umbrella, which includes the campus PowerConnect switching line as well as the data center-focused equipment from Force 10.  We arrived at the old Force 10 headquarters and noticed a couple of changes since last year.  Mainly lots of new round, four-letter logos everywhere.  The office also seemed slightly rearranged.  We walked back to a large common area where tables and chairs were assembled for the presentation.

To say Dell Force 10 brought the big guns is an understatement.  I counted no fewer than 15 people in the room that were a part of Force 10, from engineers to marketing to executives.  They all turned out to see the traveling circus sideshow we put on.  Although we didn’t get the same zero-slide, whiteboard-only affair as last year, Dell Force 10 took the time to ask each one of us what we would like to hear from them during the presentation.  That little touch helped put us at ease and allowed us to tell them up front what we were hoping to see.  I specifically asked about the branding of Dell Force 10 alongside the PowerConnect line.

Dell kept the slides somewhat short and did manage to address many topics that we put on the whiteboard before we started.  I was happy to learn that while Force 10 equipment would stay primarily in the data center realm, the Force10 OS (FTOS) that is so beloved by many would be finding its way into the PowerConnect line at some point in the coming months.  One of my many gripes about the PowerConnect line is the horr^H^H^H^Hdifficult OS.  In fact, I was the only person in the office who knew CTRL+H was Backspace.  Whether or not the underlying packet-flinging mechanism is superior, having a CLI coded by monkeys doesn’t really help me get my job accomplished.  Now that I can look forward to getting FTOS on all of Dell’s equipment, my ire may go down a little.

After the positioning talk, Dell Force 10 jumped into some of its hardware, specifically the Z9000.  The specs are pretty impressive.  It can run all 128 ports at line rate 10GbE or use 40GbE modules in 32 ports at line rate.  The power draw for a fully loaded box is a svelte 800 watts (6.25 watts per port), which did generate some healthy discussion about the power consumption of a 10GbE SR fiber module.  I tend to err on the side that the real per-port draw is a little more than 7 watts, but if Dell can produce numbers to support their claim, I’ll be a believer.  Dell Force 10 also says there is support for TRILL in the Z9000, which will help it create a spine-leaf fabric, their term for the core and aggregation layers of switching in a data center.  I think the Z9000 has some interesting applications and am curious to see how it fares against the offerings put forth by Cisco and Juniper.


Tom’s Take

No one was more surprised than me that Dell bought Force 10 instead of Brocade.  But after reflection, it did make sense.  Now we get to see the fruit of that acquisition.  Dell has positioned Force 10 directly into the data center, and that has allowed them to build a top-to-bottom data center strategy, which they lacked before.  Their hardware is fairly impressive from the information we were given, and the familiarity of FTOS means that we aren’t going to spend days relearning things.  I wonder if Dell is going to use Force10 in a niche market alongside large server deployments or if they hope that Force10 will catch fire in existing data centers and start replacing legacy gear.  One can only hope for the latter, as the former won’t leave a lot of room for Dell to recoup their investment.


Tech Field Day Disclaimer

Dell Force10 was a sponsor of Network Field Day 2, and as such was responsible for paying a portion of my travel and lodging fees. They also provided us with a pint glass with the Dell Networking logo, a Dell sticker, and a USB drive containing presentations and marketing collateral. At no time did Dell Force10 ask for, nor were they promised, any kind of consideration in the drafting of this review. The analysis and opinions herein are mine and mine alone.

Brocade – Network Field Day 2

Brocade was the second vendor up on day 2 of Network Field Day 2. We arrived at the resplendent Brocade offices and were immediately ushered in for lunch. A side note here: Brocade had the best lunch ever. No sandwiches and chips. Chicken Parmesan, salad, and pasta were the meal du jour. After feeding us, Brocade then proceeded to whet our appetites by hinting that there would be some time later for a competition.

Once we got underway, we got a quick intro from <<MARKETING PERSON>> followed by a great short presentation from Jon Hudson, also known as @the_socalist. He gave a quick overview of the Brocade line of products, touching on everything from wiring closet switches to massive Fibre Channel encryption boxes. He kept it short and light by playing to our strengths. As has been noted before, presenting to Tech Field Day delegates is a unique experience. Jon peppered his presentation with pictures of unicorns (the mascot of Networking Field Day) and talk of long-distance vMotion issues that could one day lead to roving packs of VM clusters vMotioning across the world and even into space. Good laughs were had all around. Overall, Brocade has some good products that they aim into the middle of the switching pack, but their real strength is their extensive Fibre Channel knowledge and knowing how to integrate it into your environment. Their VCS take on fabric is equally interesting.

Afterwards, we were informed that we were going to have a lab. Not a demo, not a video. A hands-on configuration lab. We were broken up into teams and given the task of configuring a pair of Brocade 6720s into a fabric configuration. We had the assembled Brocade engineers at our disposal for assistance with configuration, as well as an escort to the data center down the hall, where we had to (GASP!) plug in our own fiber jumpers. Just before the lab kicked off, the moderator, Marcus Thordal, informed us that they usually saw some sabotage occurring after a team completed its configuration tasks. Once we started, Jeff Fry and I teamed up to start building fabric. As Jeff prepped the fiber cables, I quickly assigned a management address to the switch. My previous familiarity with the Brocade CLI came in handy, and I finally got to show off my Brocade Certified Network Engineer (BCNE) skills. Jeff commented that the CLI seemed very IOS-like, which I’m almost certain is no coincidence.

When it came time to go back to the data center and start cabling, the competition really started to heat up. Tony Mattke and Greg Ferro sat next to us in a team, and as we plugged our fiber jumpers in to cross-connect switches and fire up servers, Tony slipped in behind us to do the same. When I got back to the terminal to verify the fabric connections, the VMware host wasn’t pingable. I did some quick troubleshooting and found that it had simply disappeared. When I looked over, Tony was giggling like a schoolgirl, which told me he had decided to play dirty. I walked back to the data center and checked our cables. I quickly discovered that the server fiber jumper was slightly unplugged, just enough to break the connection but not enough to be dangling there and give away the treachery. When I got back, I glared at Tony and Greg, sure that I would find a way to repay them in kind. As soon as I sat down, Tony and Greg both jumped up to repatch cables in the data center, sure I had sabotaged them. I decided to play a different game, so I used the basic configuration given to the delegates and figured out the switch IP for Tony and Greg. I then telnetted in and used the default password to log on, at which point I rebooted the switch. The 6720s took a few minutes to come back up, at which point I could configure in peace. Greg and Tony came back as Jeff and I were vMotioning our host across the fabric to test resiliency. Greg took a minute to figure out that his switch wasn’t at the CLI prompt, but was instead running ASIC checks. He looked over at me, but my smile was just too hard to contain. As he plotted more revenge, Jeff turned it up a notch by suggesting I change the login info for Team Five’s switch and reboot once again. While they were distracted, I changed the ADMIN user password to “gregisatosser” and rebooted after saving the config. As the switch was coming back up, the Brocade engineers in the back were having a great time with our efforts to sabotage each other. I took special delight in telling Greg the new password to his switch.

Once we finished our configuration lab, the Brocade people used the remaining time for Q&A about their products and direction in areas like TRILL and FCoE. I was especially impressed by Jon Hudson, as he was able to spar with Ivan Pepelnjak about many different TRILL ideas while at the same time weathering an assault from Ivan and Tony Bourke about Fibre Channel. He recalled many things off the top of his head, but he was also not afraid to say “I don’t know” when faced with a unique take on a problem. It always impresses me when a presenter is willing to go under the gun in Q&A, and even more so when they admit that they don’t know something. As I overheard Jon remark afterwards, “There’s no sense in lying. If I don’t know, I don’t know. Lying about it never leads to anything good.”

Here’s a video of Jon’s introduction to Brocade:


Tom’s Take

I liked Brocade’s presentation. The slide deck was short and funny, but the real gem was the hands-on lab. While many a Tech Field Day presentation has been saved by a great demo, there’s just something about getting your hands dirty on real hardware. We learned how Brocade implements things that we do in our everyday jobs, as well as a couple of things that are unique to them. It really helps us decide how worthwhile their equipment might be in our environment. In fact, I’d wager that they moved into some serious consideration among one or two delegates for ease of use and features simply because we had a chance to take it for a test drive. Future Tech Field Day presenters take note: getting the delegates involved is never a bad idea.


Tech Field Day Disclaimer

Brocade was a sponsor of Network Field Day 2, and as such was responsible for paying a portion of my travel and lodging fees. They also provided us with lunch and a takeaway bag containing a USB drive with the presentation, chocolate covered espresso beans, and a VCS T-shirt in 2XL. At no time did Brocade ask for, nor were they promised, any kind of consideration in the drafting of this review. The analysis and opinions herein are mine and mine alone.

Juniper – Network Field Day 2

Day 2 of Network Field Day 2 started out with a super-sized session at the Juniper headquarters.  We arrived a bit hungover from the night before at Murphy’s Law and sat down to a wonderful breakfast with Abner Germanow.  He brought coffee and oatmeal and all manner of delicious items, as well as Red Bull and Vitamin Water to help flush the evil of Guinness and Bailey’s Irish Cream from our systems.  Once we were settled, Abner gave us a brief overview of Juniper as a company.  He also talked about Juniper’s support of Network Field Day last year and this year, and how much they enjoy having the delegates because we ask our questions in public and seek out the knowledge that makes the world a better place for networkers, despite any ridicule we might suffer at each other’s hands.

Dan Backman was next up to start things off with an overview of Junos.  Rather than digging into the gory details of the underlying operating system like Mike Bushong did last year, Dan instead wanted to focus on the extensibility of Junos via things like XML and API calls.  Because Junos was designed from the ground up as a transactional operating system, it can do some very interesting things in the area of scripting and automation.  Changes made to a device running Junos aren’t actually applied until they are committed to the running config, so you can have error-checking scripts running in the background watching over things like OSPF processes and BGP neighbor relationships.  If I stupidly try to turn off BGP for some reason, the script can stop me from committing my changes.  This would be a great way to keep the junior admins from dropping your BGP sessions or OSPF neighbors without thinking.  As we kept moving through the Junos CLI, the delegates became more and more impressed with the capabilities inherent therein.  Many times, someone would exclaim that Junos did something that would be very handy for them, such as taking down a branch router link if a keepalive script determined that the remote side had been brought down.

By the end of his presentation, Dan revealed that he was in fact not running the demo on a live router; he had configured everything in a virtual instance running in Junosphere.  I’ve written a little about Junosphere before, and I think the concept of a virtual instantiation of Junos that is easily configurable for many different types of network design is a powerful one.  Juniper is using Junosphere not just for education, but for customer proof-of-concept as well.  Large customers that need to ensure network changes won’t cause major issues can copy the configuration from their existing devices and recreate everything in the cloud to break as they see fit.  Only when confirmed configs are generated from the topology will the customer decide to put that config on their live devices.  All this lists for about $5/router per day from any Juniper partner.  Then Dan hit us with our Network Field Day “Oprah Moment”: he would give us access to Junosphere!  All we have to do is email him and he’ll get everything set up.  Needless to say, I’m going to be giving him a shout in the near future.
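That commit-time safety net is the part that stuck with me, so here’s a minimal sketch of the idea in Python.  This is my own illustration of a candidate/commit model with a validation hook, not actual Junos code or any Juniper API:

```python
# A toy candidate/commit configuration model in the spirit of the Junos
# behavior described above. My own sketch, not Junos code or a Juniper API.

import copy

class CandidateConfig:
    def __init__(self, running):
        self.running = running                    # the active config (a dict)
        self.candidate = copy.deepcopy(running)   # edits are staged here
        self.validators = []                      # commit-time checks

    def set(self, path, value):
        self.candidate[path] = value              # touches only the candidate

    def commit(self):
        # Every validator gets a chance to veto before anything goes live.
        for check in self.validators:
            ok, reason = check(self.candidate)
            if not ok:
                raise ValueError("commit failed: " + reason)
        self.running = copy.deepcopy(self.candidate)

# A check in the spirit of "don't let me drop my BGP sessions."
def bgp_must_stay_enabled(candidate):
    if not candidate.get("protocols/bgp/enabled", False):
        return False, "BGP is disabled in the candidate config"
    return True, ""

cfg = CandidateConfig({"protocols/bgp/enabled": True})
cfg.validators.append(bgp_must_stay_enabled)
cfg.set("protocols/bgp/enabled", False)           # the fat-fingered change...

try:
    cfg.commit()                                  # ...is vetoed at commit time
except ValueError as err:
    print(err)

print(cfg.running["protocols/bgp/enabled"])       # still True; nothing went live
```

The point is that nothing touches the running state until every check passes, which is exactly what saves the junior admin in the scenario above.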

Next up was Dave Ward, Juniper’s CTO of the Platform Division.  A curious fact about Dave: he likes to present sans shoes.  This might be disconcerting to some, but having been through a business communications class in college, I can honestly say it’s not the weirdest quirk I’ve ever seen.  Dave’s presentation focused on programmable networking, which is Juniper’s approach to OpenFlow.  Dave has the credentials to really delve into the weeds of programmable networking, and to be honest, some of what he had to say went right by me.  It’s like listening to Ivan turn the nerd meter up to 9 or 10.  I recommend you watch part one of the Juniper video and start about halfway through to see what Dave has to say.  His ideas about using our newfound knowledge of programmable networking to better engineer things like link utilization and steering traffic to specific locations are rather interesting.

Next up was Kevin with a discussion about vGW, which came from Altor Networks, and using Juniper devices to secure virtual flows between switches.  This is quickly becoming a point of contention with customers, especially in the compliance area.  If I can’t see the flows going between VMs, how can I certify my network for things like Payment Card Industry (PCI) compliance?  Worse yet, if someone nefarious compromises my virtual infrastructure and begins attacking VMs in the same vSwitch, I’ll never know what’s happening if I can’t see the traffic.  Juniper is using vGW to address all of these issues in an easy-to-use manner.  vGW allows you to do things like attach different security policies to each virtual NIC on a VM and then let the policy follow the VM around the network as it vMotions from here to eternity.  vGW can also reroute traffic to a number of different IDS devices to snoop on traffic flows and determine whether or not you’ve got someone in your network that isn’t supposed to be there.  There’s even a new antivirus module in the new 5.0 release that can provide AV services to VMs without the need to install a heavy AV client on the host OS and worry about things like updates and scanning.  I hope this becomes the new model for AV security for VMs going forward; I realize the need to run AV on systems, but I detest how many software licenses are required when there is a quicker, easier, and more lightweight solution out there.
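The policy-follows-the-VM trick is easier to picture in code than in slides.  Here’s a rough sketch of the concept (mine, with made-up names, not vGW’s actual data model): bind the policy to the virtual NIC instead of the host or physical port, and a vMotion only ever updates the location:

```python
# Toy model of per-vNIC security policy that survives vMotion. My own
# illustration of the concept, not vGW's implementation.

policies = {
    # vNIC identifier -> firewall policy (names and rules are made up)
    "web01-eth0": {"allow": [("tcp", 443)], "default": "deny"},
    "db01-eth0":  {"allow": [("tcp", 5432)], "default": "deny"},
}

locations = {
    # vNIC identifier -> host it currently lives on
    "web01-eth0": "esx-host-a",
    "db01-eth0":  "esx-host-a",
}

def vmotion(vnic, new_host):
    # Only the location changes; the policy binding is untouched, so
    # enforcement follows the VM wherever it lands.
    locations[vnic] = new_host

vmotion("web01-eth0", "esx-host-b")
print(locations["web01-eth0"])   # esx-host-b
print(policies["web01-eth0"])    # same policy, no reconfiguration needed
```

Because nothing in the policy references a physical location, there is nothing to re-provision when the VM moves, which is the whole appeal.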

The last presentation covered QFabric.  This new technology represents Juniper’s foray into the fabric switching technology sweeping across the data center like wildfire right now.  I’ve discussed at length my thoughts on QFabric before.  I still see it as a proprietary solution that works really well for switching packets quickly among end nodes.  Of course, to me the real irony is that HP/ProCurve spent many years focusing on their edge-centric network view of the world and eventually bought 3Com/Huawei to compete in the data center core.  Juniper instead went to the edge-centric model and seems to be ready to bring it to the masses.  Irony indeed.  I do have to call out Juniper here for their expected slide about “The Problem”:

The Problem - courtesy of Tony Bourke

To Juniper’s credit, once I pointed out that we may or may not have seen this slide before, the presenter acknowledged it and quickly moved on to the good stuff about QFabric.  I didn’t necessarily learn any more about QFabric than I already knew from my own research, but it was a good talk overall.  If you want to delve more into QFabric, head over to Ivan’s site and read through his QFabric posts.

Our last treat from the super session was a tour of the Proof-of-Concept labs at the Juniper EBC.  They’ve got a lot of equipment in there and boy is it loud!  I did get to see how Juniper equipment plays well with others, though, as they had a traded-in CRS-1 floating around with a big “I Wish This Ran Junos” sticker.  Tony Mattke was even kind enough to take a picture of it.

Here are the videos: Part 1 – Introduction to Junos

Part 2 – Open Flow Deep Dive

Part 3 – A Dive Into Security

Part 4 – Network Design with QFabric


Tom’s Take

I’m coming around to Juniper.  The transaction-based model allows me to fat-finger things and catch them before I screw up royally.  Their equipment runs really well from what I’ve been told, and their market share seems to be growing in the enterprise from all accounts.  I’ve pretty much resigned myself at this point to learning Junos as my second CLI language, and the access that Dan Backman is going to provide to Junosphere will help in that regard.  I can’t say how long it will take me to become a convert to the cause of Juniper, but if they ever introduce a phone system into the lineup, watch out!  I also consider the fine presentations that were put on in this four hour session to be the benchmark for all future Tech Field Day presenters.  Very little fluff and lots of good info and demonstrations is the way to go when you present to delegates at Tech Field Day.  Otherwise, the water bottles will start flying.


Tech Field Day Disclaimer

Juniper was a sponsor of Network Field Day 2, and as such was responsible for paying a portion of my travel and lodging fees. They also provided us with breakfast and a USB drive containing the Day One Juniper guides and marketing collateral. At no time did Juniper ask for, nor were they promised, any kind of consideration in the drafting of this review. The analysis and opinions herein are mine and mine alone.

Gigamon – Network Field Day 2

The third presenter for the first day of Network Field Day 2 was Gigamon. I’d seen them before at a couple of trade shows, so I was somewhat aware of what their product was capable of. When we arrived at their offices, we grabbed a quick lunch as the Gigamon people set up for their presentation. They wheeled a rack of equipment over to the side of the room, all of it painted in bright orange. We started off with Jim Berkman, Director of Worldwide Channel Marketing, giving us a little overview of who Gigamon was and what they could do.

Gigamon is a company that specializes in creating devices to assist in monitoring your network. They don’t make Network Management Systems (NMS) like SolarWinds or HP, though. What they do make is a box capable of being inserted into a packet stream and redirecting traffic flows to the appropriate tools. If you have ever configured a SPAN port on a switch, you know what kind of a pain that can be, especially if you need to extend that SPAN port across multiple devices. Gigamon gives you the ability to drop their equipment in line with your existing infrastructure and move the packets to the appropriate tools at wire speed. Yes, even at 10GbE. This allows you to relocate devices in your data center to more convenient locations and worry less about having your NMS or Intrusion Prevention System (IPS) located right next to your SPANned devices.
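Conceptually, the box is a big match-and-forward table: classify a flow, copy it to the right tool port. Here’s a rough sketch of that idea in Python (my own illustration with made-up rule fields, not Gigamon’s software):

```python
# Toy flow-to-tool steering table, illustrating the redirection concept
# described above. My own sketch with made-up fields, not Gigamon software.

import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    src_net: Optional[str] = None   # e.g. "10.9.0.0/16"; None = wildcard
    proto: Optional[str] = None     # "tcp", "udp", ...; None = wildcard
    dst_port: Optional[int] = None  # None = wildcard
    tool_port: int = 0              # where matching packets get copied

def matches(rule, pkt):
    if rule.src_net and ipaddress.ip_address(pkt["src"]) not in ipaddress.ip_network(rule.src_net):
        return False
    if rule.proto and pkt["proto"] != rule.proto:
        return False
    if rule.dst_port and pkt["dport"] != rule.dst_port:
        return False
    return True

rules = [
    Rule(proto="tcp", dst_port=80, tool_port=1),   # web traffic -> analyzer
    Rule(src_net="10.9.0.0/16", tool_port=2),      # suspect subnet -> IPS
]

pkt = {"src": "10.9.1.5", "proto": "udp", "dport": 53}
print([r.tool_port for r in rules if matches(r, pkt)])   # [2] -> copied to the IPS
```

The hard part, of course, is doing that classification in hardware at wire speed on 10GbE links, which is exactly what you’re paying Gigamon for.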

Gigamon delved into the capabilities of their lineup, from a lowly unit designed for small deployments all the way to a chassis-based solution that will take anything you throw at it. We also got to hear from a field engineer about his latest deployment with a European mobile provider that was using the product to redirect all of its 3G data traffic to packet analyzers and filters, which set off the Big Brother vibe with a couple of delegates. As Gigamon later said, “We just provide the capability to send packets somewhere. Where you send them is your business.” Still, the possibilities behind being able to shunt packets to tools at wire speed for very large flows are interesting, to say the least. Gigamon also told us about their ability to strip packet headers for things like MPLS and VN-Tag. This got the attention of the delegates, as now we can monitor and manage MPLS flows without worrying about how to strip off the variable-length label stacks that can be attached to them. Ivan Pepelnjak asked about support for VXLAN header stripping, but the answer wasn’t really clear. That’s most likely because the implementation ideas around VXLAN are still up in the air for most people.
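Header stripping sounds exotic, but for MPLS it mostly means walking the label stack. Each MPLS shim header is 4 bytes, and the S bit marks the bottom of the stack. A quick sketch of the parsing involved (my own illustration of the packet format, not Gigamon code):

```python
# Sketch of MPLS label-stack stripping, the kind of header manipulation
# described above. My own illustration of the packet format, not Gigamon code.

def strip_mpls(payload: bytes) -> bytes:
    """Remove MPLS shim headers (4 bytes each: 20-bit label, 3 TC bits,
    1 bottom-of-stack bit, 8 TTL bits) and return what follows the stack."""
    offset = 0
    while True:
        if offset + 4 > len(payload):
            raise ValueError("truncated MPLS label stack")
        entry = int.from_bytes(payload[offset:offset + 4], "big")
        offset += 4
        if (entry >> 8) & 0x1:       # S bit set: bottom of the stack
            return payload[offset:]

# Two-label stack: S=0 on the first entry, S=1 on the second.
stack = (0x0001F0FF).to_bytes(4, "big") + (0x000021FF).to_bytes(4, "big")
print(strip_mpls(stack + b"payload"))   # b'payload'
```

The variable-length part is simply that you never know how many of those 4-byte entries are stacked up until you find the one with the S bit set.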

We didn’t get a demo from Gigamon (as there really wouldn’t be much to see), but we did get a good Q&A session as well as a tour of their facilities. All the assembly and testing of their units happens on-site, so it was very interesting to see the development areas as well as the burn-in lab, where these boxes are tested for a week straight before shipping. A quick anecdote: when someone previously asked what happens when a Gigamon unit is Dead on Arrival (DOA), Gigamon replied that they weren’t sure, as they’ve never had a DOA box in their 6-year existence.

Tom’s Take

As Kurt Bales put it, “Gigamon is the greatest thing I never knew I needed!” The use case for their equipment is very compelling, as monitoring high-speed traffic flows is becoming harder and harder as the number of packets flying through the data center increases. Gigamon gives you the ability to direct that traffic wherever it is needed, whether that be an NMS or a filter. They can also do it without slowing down your carefully designed infrastructure. I would highly recommend taking a look at their products if you find yourself needing to create a lot of SPAN ports to service packet flows to various tools.

Tech Field Day Disclaimer

Gigamon was a sponsor of Network Field Day 2, and as such was responsible for paying a portion of my travel and lodging fees. They also provided us with lunch and a Gigamon folio pad, as well as a USB drive containing presentations and marketing collateral. At no time did Gigamon ask for, nor were they promised, any kind of consideration in the drafting of this review. The analysis and opinions herein are mine and mine alone.

NEC – Network Field Day 2

The second presenter on day one of Network Field Day 2 was NEC.  I didn’t know a whole lot about their networking initiatives going into the presentation, even though I knew they were a manufacturer of VoIP systems among a variety of other things.  What I saw really impressed me.

After a quick company overview by John Wise, we dove right into what NEC is bringing to market in the OpenFlow arena.  I’ve posted some links to OpenFlow overviews already, so it was nice to see how NEC had built the technology into their ProgrammableFlow product line.  NEC has a shipping ProgrammableFlow Controller (PFC) as well as ProgrammableFlow switches.  This was a very interesting change of pace, as most vendors I’ve heard from recently have announced support for OpenFlow, but no plans for shipping any equipment that runs it right now.

The PFC is a Linux-based system that supports OpenFlow v1.0.  It allows you to deploy multi-tenant networks on the same physical infrastructure while providing location independence; the PFC can be located anywhere and isn’t restricted to being deployed next to the switches it supports.  The PFC does topology discovery via LLDP to find devices and offers more advanced features like fault detection and even self-repair.  This is a great boon to network rock stars who can use the controller to fix problems as they occur without the need to leave their chairs and start recabling their data center on the fly.  The PFC also supports graphical network creation, with an interface as simple as creating a Visio drawing.  Except this Visio is a real network.
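The LLDP trick is a standard controller pattern: packet-out an LLDP frame on every switch port, watch which switch and port it shows back up on, and record the link. A rough sketch of the bookkeeping (my own illustration, not the ProgrammableFlow implementation):

```python
# Toy LLDP-based topology discovery, the controller pattern described above.
# My own sketch of the bookkeeping, not the ProgrammableFlow implementation.

links = {}   # (switch, port) -> (switch, port)

def send_lldp(switch, port):
    # A real controller packet-outs an LLDP frame carrying the origin
    # switch and port as its chassis/port IDs; here that's just a dict.
    return {"origin": (switch, port)}

def on_packet_in(rx_switch, rx_port, frame):
    # The probe surfaced at the far end of a link; record both directions.
    origin = frame["origin"]
    links[origin] = (rx_switch, rx_port)
    links[(rx_switch, rx_port)] = origin

# Simulate the probe that discovers a link between s1 port 1 and s2 port 3.
on_packet_in("s2", 3, send_lldp("s1", 1))
print(links)   # {('s1', 1): ('s2', 3), ('s2', 3): ('s1', 1)}
```

Repeat that probe across every port in the network and the controller ends up with the full graph, which is what makes features like fault detection and self-repair possible from a single seat.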

The ProgrammableFlow switch is a 48-port gigabit switch with 4 SFP+ uplinks capable of 10GbE.  It functions as a hybrid switch, allowing OpenFlow networks to be connected to a traditional L2/L3 environment.  This is wonderful for those who want to try out OpenFlow without wrecking their existing infrastructure, and a great idea going forward, as OpenFlow is designed to be overlaid without disturbing your current setup.  By providing a real shipping product to customers, NEC can begin to leverage the power of the coming OpenFlow storm to capitalize on a growing market.

Next we got a quick overview of the Open Networking Foundation and NEC’s participation in it.  What is of note here is that the board members are all consumers of technology, not the producers.  The producers are members, but not the steering committee.  In my mind, this ensures that OpenFlow will always reflect what the users want from it and not what the vendors want it to become.  NEC has provided a physical switch and controller to leverage OpenFlow and has even committed to providing support for a virtualized Hyper-V vSwitch in Windows 8.  This means that NEC will hit the ground running when Microsoft starts using the tools built into Windows 8 to virtualize large numbers of servers.  Whether or not this will be enough to unseat the VMware monster is anyone’s guess, but it never hurts to get in on the ground floor.

I missed out on most of the demo of the ProgrammableFlow system in the second half of the presentation due to reality intruding on my serene Network Field Day world, but the video was interesting.  I’m going to spend a little time in the coming weeks doing some more investigation into what ProgrammableFlow has to offer.

<<EDIT>>

I want video! Part 1: NEC Introduction

Part 2: ProgrammableFlow Architecture and Use Cases

Part 3: ProgrammableFlow Demonstration


Tom’s Take

Okay, show of hands: who knew NEC made switches?  They rarely get mentioned in the same breath as Cisco, Juniper, or HP.  When it seems that the market has left you behind, the best way to catch up is to move markets.  NEC has really embraced the concept of OpenFlow, and I think it’s going to pay off handsomely for them.  By having one of the first shipping devices for OpenFlow integration and making it widely known to the networking consumer, NEC can reap the benefits of investing in OpenFlow while other vendors ramp up to enter the market.  There’s something to be said for getting there first, and NEC has surely done that.  Now the trick will be taking that advantage and reaping what they have sown.


Tech Field Day Disclaimer

NEC was a sponsor of Network Field Day 2, and as such was responsible for paying a portion of my travel and lodging fees.  They also provided us with a USB drive containing marketing information and the presentation we were given.  We also received an NEC coffee mug and a set of children’s building blocks with NEC logos and slogans screenprinted on them.  At no time did NEC ask for, nor were they promised, any kind of consideration in the drafting of this review. The analysis and opinions herein are mine and mine alone.

Cisco Data Center – Network Field Day 2

Our first presenters up on the block for day 1 of Network Field Day 2 were from the Cisco Data Center team. We arrived at Building Seven at the mothership and walked into the Cisco Cloud Innovation Center (CCIC). We were greeted by Omar Sultan and found our seats in the briefing center. Omar gave us a brief introduction, followed by Ron Fuller, Cisco’s TME for Nexus switching. In fact, he may have written a book about it.  Ron gave us an overview of the Nexus line, as well as some recent additions in the form of line cards and OS updates. His slides seemed somewhat familiar by now, but having Ron explain them was great for us as we could ask questions about reference designs and capabilities. There were even hints about things like PONG, a layer 2 traceroute across the fabric. I am very interested to hear a little more about this particular little enhancement.

Next up were representatives from two of Cisco’s recent acquisitions related to cloud-based services, Linesider and NewScale (Cisco Cloud Portal). They started out their presentations similarly, reminding us of “The Problem”:

The Problem - courtesy of Tony Bourke

You might want to bookmark this picture; I’m going to be referring to it a lot. The cloud guys were the first example of a familiar theme from day 1 – Defining the Problem. It seems that all cloud service providers feel the need to spend the beginning of their presentation telling us what’s wrong. As I tweeted the next day, I’m pretty sick of seeing that same old drawing over and over again. By now, I think we have managed to identify the problem down to the DNA level. There is very little we don’t know about the desire to automate provisioning of services in the cloud so customers in a multi-tenancy environment can seamlessly click their way into building application infrastructure. That being said, with the brain trust represented by the Network Field Day 2 delegates, it should be assumed that we already know the problem. What needs to happen is that the presenters show us how they plan to address it. In fact, over the course of Network Field Day, we found that the vendors have identified the problem to death, but their methods of approaching the solution are all quite different. I don’t mean to pick on Cisco here, but they were first up and did get “The Problem” ball rolling, so I wanted to set the stage for later discussion.

Once we got to the Cisco IT Elastic Infrastructure Services (CITEIS) demo, however, the ability to quickly create a backend application infrastructure was rather impressive. I’m sure that for providers in a multitenancy environment, that will be a huge help going forward in reducing the need to have staff sitting around just to spin up new resource pools. If this whole process can truly be automated as outlined by Cisco, the factor by which we can scale infrastructure will greatly increase.

After the cloud discussion, Cisco brought in Prashant Gandhi from the Nexus 1000V virtual switching line to talk to us. Ivan Pepelnjak and Greg Ferro perked up and started asking some really good questions during the discussion of the capabilities of the 1000V and what kinds of things were going to be considered in the future. We ended up running a bit long on the presentation, which set the whole day back a bit, but the ability to ask questions of key people involved in virtual switching infrastructure is a rare treat that should be taken advantage of whenever possible.

<<EDIT>>

Now with video goodness! Part 1: Nexus

Part 2: Cloud Orchestration and Automation

Part 3: Nexus 1000V


Tom’s Take

I must say that Cisco didn’t really bring us much more than what we’ve already seen. Maybe that’s a big credit to Cisco for putting so much information out there for everyone to digest when it comes to data center networking. Much like their presentation at Wireless Field Day, Cisco spends the beginning and end of the presentation reviewing things we’re already familiar with, allowing for Q&A time with the key engineers. The middle is reserved for new technology discussion that may not be immediately relevant but represents product direction for what Cisco sees as important in the coming months. It’s a good formula, but when it comes to Tech Field Day, I would rather Cisco take a chance and let us poke around on new gear or ask the really good questions about what kind of things Cisco sees coming down the road.


Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 2, and as such was responsible for paying a portion of my travel and lodging fees. At no time did Cisco ask for, nor were they promised, any kind of consideration in the drafting of this review. The analysis and opinions herein are mine and mine alone.