My Cisco Live 2012 Schedule

It’s that time of year again.  Cisco Live 2012 in San Diego is coming up in June, and I will once again be attending, this time for my seventh event.  After last year’s event, I realized for the first time that networking with my peers is just as important as attending breakout sessions.  With that in mind, I chose carefully this year when I built my Cisco Live conference schedule:

Monday
10:00 AM – 12:00 PM: CUG-1002 Unified Communications Product Direction – Large Enterprise
1:00 PM – 3:00 PM: BRKARC-3452 Cisco Nexus 5000/5500 and 2000 Switch Architecture

Tuesday
10:00 AM – 11:30 AM: Conference Event GENKEY-4346 Keynote and Welcome Address
4:00 PM – 6:00 PM: BRKCRT-9344 IPv6 for Cert Nuts

Wednesday
10:00 AM – 11:30 AM: Conference Event GENKEY-4347 Cisco Technology Keynote
12:30 PM – 2:30 PM: CUG-1008 Cisco Collaboration User Group Open Forum
4:00 PM – 6:00 PM: BRKSEC-2006 It is 2012, Why Do You Keep Getting Hacked?

Thursday
8:00 AM – 9:30 AM: BRKCRT-8862 Cisco Certified Architect: How to complete the journey from CCIE to CCDE to CCAr
12:00 PM – 1:30 PM: CUG-1010 Cisco Collaboration User Group Business Meeting
2:00 PM – 3:00 PM: Conference Event GENKEY-4358 Closing Keynote: An Afternoon with Adam Savage and Jamie Hyneman

Most of my unified communications sessions this year are going to take place in the Collaboration Users Group.  I like the small focus and the immediate response to feedback I get from being a part of this users group.  I’m also going to be checking out some IPv6 and data center sessions, as I feel that much of what I’m going to be doing in the next couple of years will focus on these technologies.  Of course, having a security session is almost a requirement, so I found an interesting one in the list.  I’m also going to check out the Cisco Certified Architect briefing.  I’m nowhere near qualified to sit for the exam, having neither my CCDE nor the requisite experience in architecture projects.  However, I think it will be interesting to see what’s going on with this certification, since I was around for the initial formation discussion groups.

The keynotes are usually fairly interesting affairs.  John Chambers will likely have something to say about the new, slimmer Cisco and how they are doing in the market.  Padmasree Warrior will also likely be talking about the data center and the advantages that UCS offers Cisco in this space.  The closing keynote appears to be the one that most people are talking about, as Discovery’s Mythbusters will be delivering a talk to the assembled crowd.  The closing keynotes are always entertaining, since you can never be quite sure what the guests will have to say to Carlos Dominguez.  I’m really looking forward to it.

If you’re headed to Cisco Live, feel free to leave a comment.  The Twitter and blogger contingent is usually fairly large and always great to hang out with.  The more people we know about at Cisco Live, the better the party will be.  See you in San Diego!

Welcome To The vExpert Class of 2012

It appears that I’ve been placed in some rarified company. In keeping with my goals for this year, I wanted to start writing more about virtualization. I do a lot of work with it in my day job and figured I should devote some time to talking about it here. I decided at the last minute to sign up for the VMware vExpert program as a way to motivate myself to spend more time on the topic of virtualization. Given that I work for a VMware partner, I almost signed up through the partner track. However, it was more important to me to be an independent vExpert and be considered based on the content of my writing. I’d seen many others talking about their inclusion in the program already via pictures and welcome emails, so I figured I’d simply been passed over due to the lack of VMware content on my blog.

On Sunday, April 15th, VMware announced the list of vExperts for 2012. I browsed through the list after I woke up, curious to see if friends like Stephen Foskett (@SFoskett) and Maish Saidel-Keesing (@MaishSK) were still on it. Imagine my surprise when I found my name on the first page of the list (they alphabetize by first name, and I’d signed up under “Alfred”). I was shocked, to say the least. This means that I can now count myself among a group of distinguished individuals in virtualization. I’ve been a huge advocate of using VMware solutions for servers for a while now; the only thing that’s changed is that my evangelism is now official. This designation just means that I’m going to be spending even more time working with VMware, as well as coming up with good topics to write about. And with my plans to chase the VCAP-DCA and VCAP-DCD to further my virtualization education, there should be plenty of blogging opportunities along the way.

Being a vExpert doesn’t make me the final word in virtualization. I recognize that I’ve got quite a bit to learn when it comes to the ins and outs of large scale virtualization. What the vExpert designation means to me is that I’ve shown my desire to learn more about these technologies and share them with everyone. There are a lot of great bloggers out there doing this very thing already. I’m excited and humbled to be included in their ranks for the coming year. I just hope I can keep up with the expectations that come with being a vExpert and reward the faith that John Troyer (@jtroyer) and Alex Maier (@lxmaier) have shown in me.

Spirent – Network Field Day 3

The final presentation for Network Field Day 3 came from Spirent Communications.  This was the one company at NFD3 that I was completely in the dark about.  Beyond knowing that they “test stuff”, I was unsure how that would translate into something that a networker would be interested in using.  By the time I walked out of their building, I had a new-found respect for the companies that build the devices we take for granted when reading reports.

We almost didn’t get the chance to show Spirent to the viewing audience.  Spirent was unsure how some of their software would come across on a live stream.  I can attest to the fact that software demos are sometimes not the best thing to showcase to the home audience.  However, after watching the coverage of NFD3 from the previous day, Spirent was impressed by the amount of feedback and discussion going on between the delegates and the home audience.  When we arrived at the Spirent offices, we grabbed a quick lunch while the video crew set up for the session.  We got a quick introduction from Sailaja Tennati and Patrick Johnson about who Spirent is and what they do.  Turns out that Spirent makes many of the tools that other networking vendors use to test their equipment.  I liken it to the people that make the equipment used to test high-performance cars.  As impressive as the automobile might be, it’s equally (if not more) impressive to build a machine that can test that performance and even exceed it as needed.  A famous quote says “Fred Astaire was a great dancer.  But don’t forget Ginger Rogers did everything he did backwards in high heels.”  To me, Spirent is like Ginger Rogers.  They not only have to keep up with the equipment that Cisco puts out, they have to exceed it and provide that additional capacity to the vendor.

Ankur Chadda was the next presenter.  He started off by telling us about the difficulties of testing equipment.  As soon as there is a problem, the first thing to get blamed is the test gear.  It seems that certain people are so sure their equipment is right that there is no way anything could be wrong; instead, it’s the tester that’s at fault.  Much of this comes down to how carefully the data used to test the equipment has been considered.  Ask yourself how many times you’ve looked at “speed and feed” numbers on a data sheet or in a publication and said to yourself, “Yeah, but are those real numbers?”  Odds are good those numbers are somewhat synthetic and generated with carefully crafted packets.  Throughput is measured with very small packet sizes.  VPN capacity is measured with clients that merely connect and never transfer data.  And so on.  Spirent uses their PASS methodology to test equipment – Performance, Availability, Security, and Scalability.  This ensures that the numbers generated are grounded in reality and useful to the customers who want to run the equipment in a production environment.
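To make that concrete, here’s a toy illustration of how easy it is to produce one of those flattering numbers.  This is emphatically not Spirent’s tooling (their appliances generate stateful traffic in hardware at 10Gbps and beyond); it’s just a Python/Scapy sketch of a minimum-size packet blast, the kind of test that yields an impressive throughput figure that says very little about real traffic.  The MAC, IP, and interface are placeholders.

```python
# A toy illustration of the "synthetic numbers" problem, using Scapy.
# This is NOT Spirent's tooling -- it just shows how easy it is to get a
# flattering throughput figure out of minimum-size, identical packets.
from scapy.all import Ether, IP, UDP, Raw, sendp

def minimum_size_frame(dst_mac, dst_ip):
    """Build a minimum-size Ethernet frame (60 bytes + 4-byte FCS on the wire)."""
    pkt = Ether(dst=dst_mac) / IP(dst=dst_ip) / UDP(sport=1024, dport=1024)
    pad = 60 - len(pkt)            # pad the payload up to the Ethernet minimum
    return pkt / Raw(load=b"\x00" * pad)

# Blasting the same tiny frame in a loop says nothing about how a device
# handles realistic traffic mixes, sessions, or state tables.
frame = minimum_size_frame("00:11:22:33:44:55", "192.0.2.1")
sendp(frame, iface="eth0", count=10000)
```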

Jurrie van den Breekel introduced us to the data center testing arm of Spirent.  I find it very interesting that many vendors like Alcatel, Avaya, and Huawei come to Spirent for objective interoperability testing.  That says a lot about Spirent’s capability, as well as the trust invested in a company to provide unbiased results.  This is something I’ve said we’ve needed in networking for a very long time.  Another key piece of testing methodology is ensuring that you’re comparing similar capabilities.  The example Jurrie gave in the above video is comparing switching performance when the devices use cut-through forwarding versus store-and-forward.  Based on an understanding of the way those methods work, cut-through should beat store-and-forward.  However, Jurrie mentioned that there have been testing scenarios where the converse is true.  The key is making sure that the tests match the specifications being tested.  Otherwise, you end up with wacky results like those above.  The other fun anecdote from Jurrie involved testing a Juniper QFabric implementation.  One thing that most people tend to overlook when testing or installing equipment is simple cabling.  While many might take it for granted, it becomes a non-trivial issue at a big enough scale.  In the case of the QFabric test, it took two full days to cable the 1,500 ports.  That’s something to keep in mind the next time someone wants you to quote hours for an installation.

Our last presenter for the streamed portion of NFD3 was Ameya Barve.  He led his talk with a nice prediction: testing as we know it will shift away from individual scenarios like application or network testing and converge on infrastructure testing.  This is critical because most of these tests today occur completely independently of each other, which means the people doing the testing need to know exactly what to test for.  That’s one of the directions Spirent is moving in.  I think this kind of holistic testing is going to be critical as well.  Too many times we find out after the fact that an application had some unforeseen interaction with a portion of the network in what is normally called a “corner case scenario”.  Corner cases are extremely hard to test for in siloed testing because the interaction never happens.  It’s only when you toss everything together and shake it all up that you start finding these interesting problems.

After we shut off the cameras, we got a chance to look at a tool that Spirent uses for more focused testing.  It’s an Integrated Development Environment (IDE) tool called iTest.  iTest allows you to use all kinds of interesting things to test all aspects of your network.  You can have iTest SSH to a router to observe what happens when you pump a lot of HTTP traffic through it.  You can also write regular expressions (regex) to pull in all kinds of information that is present in log files and console output.  There’s a ton of things that you can do with iTest, and I’m just scratching the surface with it.  I’m hoping to have a totally separate post up at some point covering some of the more interesting parts of iTest.
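To give a flavor of that workflow, here’s a rough sketch in plain Python of the SSH-and-regex pattern described above.  To be clear, iTest wraps this kind of session capture and pattern matching in its own IDE and scripting environment; this is just the general idea, not Spirent’s API, and the hostname and credentials are placeholders.

```python
# A sketch of the SSH-and-regex workflow described above, in plain
# Python (paramiko + re). iTest automates and extends this kind of
# session capture; this is just the general idea, not Spirent's API.
import re
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("router.example.com", username="admin", password="secret")

# Pull the interface counters while test traffic is being pumped through.
stdin, stdout, stderr = client.exec_command("show interfaces")
output = stdout.read().decode()

# Regex out input errors per interface from the console output.
for match in re.finditer(r"^(\S+) is up.*?(\d+) input errors",
                         output, re.DOTALL | re.MULTILINE):
    print("{}: {} input errors".format(match.group(1), match.group(2)))

client.close()
```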

If you’d like to learn more about Spirent and their testing tools and methodology, you can head over to their website at http://www.spirent.com.  You can also follow them on Twitter as @Spirent.

Tom’s Take

It’s always fun when I realize there is a whole world out there that I have no idea about.  My trip to Spirent showed me that the industry built around testing is a world unto itself.  I had no idea that so much went into the methodology and setup behind generating the numbers we see in marketing slides.  I’m really interested to see what Spirent will be bringing to market to help converge the siloed testing that we see today.

Tech Field Day Disclaimer

Spirent was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me with a gift bag containing a coffee mug, polo shirt, pen, scratchpad, USB drive containing marketing collateral, and a 1-foot long Toblerone chocolate bar. They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Solarwinds – Network Field Day 3

The first presenter up for Network Field Day 3 was a familiar face to many Tech Field Day viewers.  Solarwinds presented at the first Network Field Day and has been a sponsor of more events than any other vendor.  It’s always nice to see vendors coming back time and again to show the delegates what they’ve been cooking up since their last appearance.

We started our day in the Doubletree San Jose boardroom.  We were joined by Joel Dolisy, the Chief Software Architect for Solarwinds, and Mav Turner (@mavturner), the Senior Product Manager for the network software division.  After introductions, we jumped right into some of the great software that Solarwinds makes for network engineers.  First up was the Solarwinds IP SLA Monitor.  IP Service Level Agreement (SLA) is a very important tool used by engineers to track key network metrics like reachability and latency.  What makes IP SLA so great, as opposed to a bigger monitoring tool, is that the engineer can take the information from IP SLA and use it to create actionable items, such as bringing down an overloaded link or sending trap information to a third-party monitoring system to alert key personnel when something is amiss.  One of the sore spots about IP SLA from my perspective is the difficulty that I have in setting it up.  Thankfully, Solarwinds thought of that for me already.  Not only can the IP SLA Monitor show me all the pertinent details about a given IP SLA configuration, I can even create a new one on the fly if needed.  IP SLA Monitor allows me to push the configurations down to a single router, or to multiple routers, as quickly as I can select interfaces and metrics to track.  It’s a very interesting product, especially when you know that it grew out of a simple way to manage Voice over IP (VoIP) call metrics.  When Solarwinds realized the potential of the program, they immediately added more features and enabled it across a whole host of protocols.  If you’d like to try it out on a single router, you can download the free version here.
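For reference, the underlying IOS commands the tool generates look something like the sketch below.  This isn’t Solarwinds’ code; it’s just an illustration of what IP SLA Monitor saves you from typing across a pile of routers.  The exact IOS syntax varies by version, and the router names and targets are placeholders.

```python
# Not Solarwinds' code -- just a sketch of what a tool like IP SLA
# Monitor saves you from typing. These are the underlying IOS commands
# (syntax varies a bit by IOS version) templated across several routers.
ROUTERS = ["core-rtr-1", "core-rtr-2", "branch-rtr-1"]   # placeholder names

SLA_TEMPLATE = """\
ip sla {entry}
 icmp-echo {target} source-interface {source}
 frequency 60
ip sla schedule {entry} life forever start-time now
"""

for entry, router in enumerate(ROUTERS, start=10):
    config = SLA_TEMPLATE.format(entry=entry,
                                 target="192.0.2.1",
                                 source="GigabitEthernet0/0")
    # In practice you'd push this over SSH or SNMP; here we just print it.
    print("! --- config for {} ---".format(router))
    print(config)
```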

During the presentation, I asked Solarwinds about adding some additional wireless troubleshooting capabilities to the product lines, courtesy of a request from Blake Krone (@BlakeKrone).  One thing that Joel and Mav said was that Solarwinds adds the vast majority of their new features based on customer response and requests.  I admire that a company so highly regarded by most engineers I know is willing to sit down and make sure that customer needs are addressed in this manner.  That way, the features that get added to the program really do come from the desires of the userbase.  The only thing that might give me pause about this arrangement is that Solarwinds may be missing an opportunity to drive some development around new features by waiting for people to ask for them.  Many times I’ve looked at a piece of software and seen a curious feature in a list only to realize that I never knew I needed it.  I hope that Solarwinds is keeping up with the rapid pace of software development and ensuring that the hottest new technologies are supported as quickly as possible in their flagship Orion platform.

One thing that Solarwinds took some additional time to show off to us was their Virtualization Manager.  Acquired from Hyper9 last year, Virtualization Manager allows Solarwinds to hook into the VMware vCenter APIs to find all kinds of interesting things like orphaned VMs or performance issues.  You can create custom alerts on these data points to let you know if a VM goes missing after a difficult vMotion or if your hypervisors have become CPU or memory bound.  You can also archive configs, perform capacity planning, and use a whole host of other useful features.  One of the nicest things, though, was the fact that the UI was completely devoid of Flash!  Everything was written in HTML5, so there is no need to worry about whether you’re using the correct device to manage your VM infrastructure’s web portal.  This was a big win for the assembled delegates, as management systems that require proprietary scripting languages or horrendously laggy and memory hungry plugins tend to make us cranky at best.
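Since Virtualization Manager is built on the vCenter APIs, it’s worth noting that those APIs are reachable from just about anything, including open source Python bindings like pysphere.  Virtualization Manager’s internals are Solarwinds’ own; this hedged sketch just shows the kind of inventory query such a tool builds on, with placeholder host and credentials.

```python
# Virtualization Manager's internals are Solarwinds' own; this is just a
# sketch of the kind of vCenter API query such a tool builds on, using
# the open source pysphere bindings. Host and credentials are placeholders.
from pysphere import VIServer

server = VIServer()
server.connect("vcenter.example.com", "administrator", "secret")

# Walk the registered VMs and flag powered-off ones as candidates for
# "orphaned or forgotten" review -- a crude version of one VM alert.
for path in server.get_registered_vms():
    vm = server.get_vm_by_path(path)
    if vm.get_status() == "POWERED OFF":
        print("Check on:", vm.get_property("name"))

server.disconnect()
```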

We also had some good discussions toward the end around building Linux-based polling devices and how extensible the querying capabilities can be inside of Orion.  I think this kind of flexibility is huge in allowing me to craft the tool to my needs instead of the other way around.  When you think about it, there aren’t that many companies that are willing to provide you the framework to rebuild the tool to fit your environment.  That’s one thing that Solarwinds has in their favor.

If you’d like to learn more about the various offerings that Solarwinds has available, you can check them out at http://www.solarwinds.com/.  You can also follow them on Twitter at their new handle, @solarwinds.

Tom’s Take

Solarwinds has been making tools that make my life easier for quite some time.  They’ve also been offering them for free for a while as well.  This is a great way for people to figure out if the larger collection of tools in the Orion suite will be a good fit for what they want to do with their network.  I think the large number of tools can be daunting for an engineer just starting out, or for one that’s in over their head.  While the overview we received was a wonderful peek at things, Solarwinds needs to take the time to make sure they educate users about the capabilities of the tools, both free and paid.  I also feel that Solarwinds needs to develop some software functionality independently of user requests.  I know that the majority of the features they build into their tools are requested by users.  But as I said above, sometimes the feature I need is the one I didn’t know could be done until I read the release notes.

Tech Field Day Disclaimer

Solarwinds was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me with a coffee cup.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Cisco Borderless – Network Field Day 3

The second half of our visit to Cisco during day 2 of Network Field Day 3 was filled with members of the Cisco Borderless Networks team.  Borderless Networks is really an umbrella term for the devices in the campus LAN, such as wireless, campus switching, and the ASA firewall.  It was a nice break from much of the data center focus that we had been experiencing for the past couple of presentations.

Brian Conklin kicked things off with an overview of the ASA CX next generation firewall.  This was a very good overview of the product and reinforced many of the things I wrote about in my previous ASA CX blog post.  Some high points from the talk with Brian include Active Directory and LDAP integration and the inner workings of how packets are switched up to the CX module from the ASA itself.  As I had suspected, the CX is really a plugin module along the lines of the IDS module or the CSC module.  We also learned that much of the rule base for application identification came from Ironport.  This isn’t really all that surprising when you think about the work that Ironport has put into fingerprinting applications.  I just hope that all of the non-web based traffic will eventually be able to be identified without the need to have the AnyConnect client installed on every client machine.  I think Brian did a very good job of showing off all the new bells and whistles of the new box while enduring questions from myself, Mrs. Y, and Brandon Carroll.  I know that the CX is still a very new product, so I’m going to hold any formal judgement until I see the technology moved away from the niche of the 5585-X platform and down into the newer 55x5-X boxes.

Next up on our tour of the borderless network was Mark Emmerson and Tomer Hagay Nevel with Cisco Prime.  Prime is a new network management and monitoring solution that Cisco is rallying behind to unify all their disparate products.  Many of you out there might remember CiscoWorks.  And if any of you actually used it regularly, you probably just shuddered when I mentioned that name.  To say that CiscoWorks has a bit of a sullied reputation might be putting it mildly.  In fact, the first time I was ever introduced to the product, the person I was talking to referred to it as Cisco(Sometimes)Works.  Now, with Cisco Prime, Cisco is getting back to a solution that is useful and easy to configure.  Cisco Prime LAN Management Solution is focused on the Borderless Networks platforms specifically, with the ability to do things like archive configurations of devices and push out firmware updates when bugs are fixed or new features need to be implemented.  As well, Cisco is standardizing on the Prime user interface for all of the GUIs in their products, so you can expect a consistent experience whether you’re using Prime LMS or the Identity Services Engine (which will be folded into Prime at a later date).  The only downside to the UI right now is that there is still a reliance on Adobe Flash.  While this is still a great leap forward from Java and other nasty things like ActiveX controls, I think we need to start leveraging all the capabilities in HTML5 to create scalable UIs for customers.  Sure, much of the development of HTML5 UIs is driven by people that want to use them on devices that don’t or won’t support Flash (like the iPad).  But don’t you think it’s a bit easier to share your UI between all the devices when it’s not dependent on a third party scripting language?  After all, Aruba’s managed to do it.  We wrapped up the Prime demo with a peek at the new Collaboration Manager product.  I’ve never been one to use a product like this to manage my communications infrastructure.  However, with some of the very cool features like hop-by-hop Telepresence call monitoring and troubleshooting, I may have to take another look at it in the future.

Our last presentation at Cisco came courtesy of Nikhil Sharma, a Technical Marketing Engineer (TME) working on the Catalyst 4500 switch as well as some other fixed configuration devices.  Nikhil showed us something very interesting that’s capable now on the Supervisor 7E running IOS XE.  Namely…Wireshark.  As someone that spends a large amount of time running Wireshark on networks as well as someone that installs it on every device I own, having a copy of Wireshark available on the switch I’m troubleshooting is icing on the cake.  The 4500 Wireshark can capture packets in either the control plane or the data plane to extend your troubleshooting options when faced with a particularly vexing issue.  Once you’ve assembled your packet captures in the now-familiar PCAP format, you can TFTP or SFTP the file to another server to break it down in your viewer of choice. Another nice feature of the 4500 Wireshark is that the packet captures are automatically rate limited to protect the switch CPU from melting into a pile of slag if you end up overwhelming it with a packet tsunami.  If only we could get protection like that for nastier commands like debug ip packet detail.
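As an aside, once you’ve moved a capture off the switch, you don’t even need a GUI to triage it.  Here’s a quick sketch of slicing an exported capture with Scapy; the filename is a placeholder for whatever you SFTP’d off the 4500.

```python
# Once the capture file is off the switch, you don't have to open a GUI
# at all. A quick sketch of slicing an exported capture with Scapy --
# the filename is a placeholder for whatever you pulled off the 4500.
from collections import Counter
from scapy.all import rdpcap, IP

packets = rdpcap("4500-capture.pcap")

# Tally the top talkers by source IP for a fast read on the capture.
talkers = Counter(pkt[IP].src for pkt in packets if IP in pkt)
for src, count in talkers.most_common(5):
    print("{:<15} {} packets".format(src, count))
```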

The ability to run Wireshark on the switch is due in large part to IOS XE.  This is a reimplementation of IOS as a system daemon running on top of a Linux kernel with a hardware abstraction layer.  Running IOS as a daemon allows it to utilize one core of the dual-core CPU in the Sup7E, while the other core can be dedicated to running other third-party software like Wireshark.  I think I’m going to have to do some more investigation of IOS XE to find out what kind of capabilities and limitations are in this new system.  I know it’s not Junos.  It’s also not Arista’s EOS.  But it’s a step forward for Cisco.

If you’d like to learn more about Cisco’s Borderless networks offerings, you can check out the Borderless Networks website at http://www.cisco.com/en/US/netsol/ns1015/index.html.  You can also follow their Twitter account as @CiscoGeeks.


Tom’s Take

Borderless is a little closer to my comfort level than most of the Data Center stuff.  While I do enjoy learning about FabricPath and NX-OS and VXLAN, I realize that when my journey to the fantasy land that is Tech Field Day is over, I’m going to go right back to spending my days configuring ASAs and Catalyst 4500s.  With Cisco spotlighting some of the newer technologies in the portfolio for us at NFD3, I got an opportunity to really dig in deeper with the TMEs supporting the product.  It also helps me avoid peppering my local Cisco account team with endless questions about the ASA CX or asking them for a demo 4500 with a Sup7E so I can Wireshark to my heart’s content.  That huge sigh of relief you just heard was from a very happy group of people.  Now, if I can just figure out what “Borderless” really means…

Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation as well as a pirate eyepatch and fake pirate pistol (long story).  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Cisco Data Center – Network Field Day 3

Day two of Network Field Day 3 brought us to Tasman Drive in San Jose – the home of a little networking company named Cisco.  I don’t know if you’ve heard of them or not, but they make a couple of things I use regularly.  We had a double session of four hours at the Cisco Cloud Innovation Center (CCIC) covering a lot of different topics.  For the sake of clarity, I’m going to split the visit into two posts along product lines.  The first will deal with the Cisco Data Center team and their work on emerging standards.

Han Yang, Nexus 1000v Product Manager, started us off with a discussion centered around VXLAN.  VXLAN is an emerging solution to “the problem” (drawing by Tony Bourke):

[Image: “The Problem” – drawing by Tony Bourke]

The specific issue we’re addressing with VXLAN is “lots of VLANs”.  As it turns out, when you try to create multitenant clouds for large customers, you tend to run out of VLANs pretty quickly.  Seems 4096 VLANs ranks right up there with 640k of conventional memory on the “Seemed Like A Good Idea At The Time” scale of computer miscalculations.  VXLAN seeks to remedy this issue by wrapping the original frame in a VXLAN header that carries a 24-bit VXLAN Network Identifier (VNI), along with an additional 802.1q tag.

VXLAN allows the packet to be encapsulated by the vSwitch (in this case a Nexus 1000v) and tunneled over the network before arriving at the proper destination, where the VXLAN header is stripped off to reveal the original frame underneath.  The hypervisor isn’t aware of VXLAN at all; it merely serves as an overlay.  VXLAN does require multicast to be enabled in your network, but for your PIM troubles you get an additional 16 million subdivisions of your network structure.  That means you shouldn’t run out of VLANs any time soon.
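To make the encapsulation concrete, here’s a small sketch of the VXLAN header as described in the draft: eight bytes, with the 24-bit VNI that is where the 16 million segments come from.  This illustrates the framing only; the outer UDP/IP headers and multicast delivery are handled by the VTEP (the Nexus 1000v in this case).

```python
# A quick sketch of the VXLAN header per the draft: 8 bytes carrying a
# 24-bit VXLAN Network Identifier (VNI), which is where the ~16 million
# segments come from. Framing only -- the outer UDP/IP headers and the
# multicast delivery are the VTEP's job.
import struct

VXLAN_FLAGS_VALID_VNI = 0x08  # the I flag: "a valid VNI is present"

def vxlan_header(vni):
    """Pack the 8-byte header: flags, 24 reserved bits, VNI, 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!II", VXLAN_FLAGS_VALID_VNI << 24, vni << 8)

print(2**24)   # 16777216 possible segments, versus 4096 VLAN IDs
frame = vxlan_header(5000) + b"<original Ethernet frame goes here>"
```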

Han gave us a great overview of VXLAN and how it’s going to be used a bit more extensively in the data center in the coming months as we begin to scale out and break through our VLAN limitations in large clouds.  Here’s hoping that VXLAN really begins to take off and becomes the de facto standard instead of NVGRE.  Because I still haven’t forgiven Microsoft for Teredo.  I’m not about to give them a chance to screw up the cloud too.

Up next was Victor Moreno, a technical lead in the Data Center Business Unit.  Victor has been a guest on Packet Pushers before, on show 54, talking about the Locator/ID Separation Protocol (LISP).  Victor talked to us about LISP as well as the difficulties in creating large-scale data centers.  One key point of Victor’s talk was about moving servers (or workloads, as he put it).  Victor pointed out that dragging all of the LAN extensions like STP and VTP across sites is totally unnecessary.  The most important part of the move is preservation of IP reachability.  In the video above, this elicited some applause from the delegates because it’s nice to see that people are starting to realize that extending the layer 2 domain everywhere might not be the best way to do things.

Another key point that I took from Victor was about VXLAN headers, LISP headers, and even Overlay Transport Virtualization (OTV) headers.  It seems they all have the same 24-bit ID field in the wrapper.  Considering that Cisco is championing OTV and LISP and was an author on the VXLAN draft, this isn’t all that astonishing.  What really caught my attention was Victor’s proposal to use LISP to implement many of the features in VXLAN so that the two protocols could be highly interoperable.  This also eliminates the need to continually reinvent the wheel every time a new protocol is needed for VM mobility or long-distance workload migration.  Pay close attention to the slide about 22:50 into the video above.  Victor’s Inter-DC and Intra-DC slide, detailing which protocol works best in a given scenario at a specific layer, is something that needs to be committed to memory by anyone that wants to be involved in data center networking any time in the next few years.

If you’d like to learn more about Cisco’s data center offerings, you can head over to the data center page on Cisco’s website at http://www.cisco.com/en/US/netsol/ns340/ns394/ns224/index.html.  You can also get data center specific information on Twitter by following the Cisco Data Center account, @CiscoDC.

Tom’s Take

I’m happy that Cisco was able to present on a lot of the software and protocols that are going into building the new generation of data center networking.  I keep hearing things like VXLAN, OTV, and LISP being thrown around when discussing how we’re going to address many of the challenges presented to us by the hypervisor crowd.  Cisco seems to be making strides in not only solving these issues but putting the technology at the forefront so that everyone can benefit from it.  That’s not to say that their solutions are going to end up being the de facto standard.  Instead, we can use the collective wisdom behind things like VXLAN to help us drive toward acceptable methods of powering data center networks for tomorrow.  I may not have spent a lot of my time in the data center during my formal networking days, but I have a funny feeling I’m going to be there a lot more in the coming months.

Tech Field Day Disclaimer

Cisco Data Center was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation as well as a pirate eyepatch and fake pirate pistol (long story).  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Infineta – Network Field Day 3

The first day of Network Field Day 3 wrapped up at the offices of Infineta Systems.  Frequent listeners of the Packet Pushers podcast will remember them from Show 60 and Show 62.  I was somewhat familiar with their data center optimization technology before the event, but I was interested to see how they did their magic.  That desire to see behind the curtains would come back to haunt me.

Infineta was primed to talk to us.  They even had a special NFD3 page set up with the streaming video and more information about their solutions.  We arrived on site and were ushered into a conference room, where we got set up for the ensuing fun.

Haseeb Budhani (@haseebbudhani), Vice President of Products, kicked off the show with a quick overview of Infineta’s WAN optimization product line.  Unlike companies like Riverbed, or products like Cisco WAAS, Infineta doesn’t really care about optimizing branch office traffic.  Infineta focuses completely on the data center at 10Gbps speeds.  Those aren’t office documents, kids.  That’s heavy duty data for things like SAN replication, backup and archive jobs, and even scaling out application traffic.  Say you’re a customer wanting to do VMDK snapshots across a gigabit WAN link between sites on two different coasts.  Infineta reduces the amount of time the transfer takes while at the same time allowing you to better utilize the links.  If you’re only seeing 25-30% link utilization in a scenario like this, Infineta can increase that to something on the order of 90%.  However, the proof for something like this doesn’t come in a case study on PowerPoint.  That means demo time!  Here is one place where I think Infineta hit a home run.  Their demo was going to take several minutes to compress and transfer data.  Rather than waiting for the demo to complete at the end of the presentation and boring the delegates with ever-increasing scrollbars, Infineta kicked off the demo and let it run in the background.  That’s great thinking: it kept our attention focused on the goods of the solution even while the proof was churning away in the background.  While the demo was chugging along, Infineta brought in someone that did something I never thought possible.  They found someone that out-nerded Victor Shtrom.

That fine gentleman is Dr. K. V. S. Ramarao (@kvsramarao) or “Ram” as he is affectionately known.  He was a professor of Computer Science at Pitt.  And he’s ridiculously smart.  I jokingly said that I was going to need to go back to college to write this blog post because of all the math that he pulled out in discussion of how Infineta does their WAN optimization.  Even watching the video again didn’t help me much.  There’s a LOT of algorithmic math going on in this explanation.  The good Dr. Ramarao definitely earned his Ph.D if he worked on this.  If you are at all interested in the theoretical math behind large-scale data deduplication, you should watch the above video at least three times.  Then do me a favor and explain it to me like I was a kindergartner.
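I won’t pretend to reproduce Dr. Ramarao’s math, but here is the kindergarten version of one idea at the heart of large-scale deduplication: content-defined chunking.  A rolling hash picks chunk boundaries from the data itself, so inserting a few bytes early in a stream doesn’t shift every chunk boundary after it, and repeated chunks can be replaced by short fingerprints.  This toy sketch bears no resemblance to Infineta’s actual algorithms; the hash, window, and sizes are all arbitrary choices for illustration.

```python
# Nowhere near Dr. Ramarao's math, but here's the kindergarten version
# of one idea behind large-scale deduplication: content-defined chunking.
# None of this resembles Infineta's actual algorithms.
import hashlib

BOUNDARY_MASK = 0xFFF   # cut a chunk when the low 12 bits hit zero (~4 KB avg)
MIN_CHUNK = 64          # don't cut absurdly small chunks

def chunks(data):
    """Split bytes into chunks whose boundaries depend on content, not offset."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        # Keeping only 24 bits means old bytes "age out" as we shift left,
        # giving a crude rolling hash over a sliding window of the data.
        h = ((h << 1) ^ byte) & 0xFFFFFF
        if (h & BOUNDARY_MASK) == 0 and i + 1 - start >= MIN_CHUNK:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def dedupe(data, store):
    """Record each chunk by fingerprint; only never-seen chunks need to move."""
    stream = []
    for chunk in chunks(data):
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)   # a repeated chunk never crosses the WAN
        stream.append(fp)
    return stream

store = {}
first = dedupe(b"replicate this block of data" * 1000, store)
second = dedupe(b"replicate this block of data" * 1000 + b"tail", store)
print(len(store), "unique chunks for", len(first) + len(second), "references")
```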

The wrap-up for Infineta was a bit of reinforcement of the key points that differentiate them from the rest of the market.  All in all, it was a very good presentation and a great way to keep the nerd meter way off the charts.

If you’d like to learn more about Infineta Systems, you can find them at http://www.infineta.com/.  You can also follow them on Twitter as @Infineta.


Tom’s Take

Data centers in the modern world are producing exponentially more traffic than ever before.  They are no longer bound to a single set of hard disks or a physical memory limit.  We also ask a lot more of our servers when we task them with sub-second failover across three or more time zones.  Since WAN links aren’t keeping up with the explosion of data being moved, or with the shrinking windows in which that data has to arrive in the proper place, we need to look at how to reduce the data being put on the wire.  I think Infineta has done a very good job of fitting into this niche of the market.  By basing their product on some pretty solid math, they’ve shown how to scale their solution to provide much better utilization of WAN links while still allowing for magical things like vMotion to happen.  I’m going to be keeping a closer eye on Infineta, especially when I find myself in need of migrating a server from Los Angeles to New York in no time flat.

Tech Field Day Disclaimer

Infineta was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a t-shirt, coffee mug, pen, and USB drive containing product information and marketing collateral.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Arista – Network Field Day 3

The third stop for the Network Field Day 3 express took us to the offices of Arista Networks.  I’m marginally familiar with Arista from some ancillary conversations I’ve had with their customers.  I’m more familiar with one of their employees, Doug Gourlay (@dgourlay), the Vice President of Marketing.  Doug was a presenter at Network Field Day 1 and I’ve seen him at some other events as well.  He’s also written an article somewhat critical of OpenFlow for Network World, so I was very interested to see what he had to say at an event that has been so involved with OpenFlow recently.

After we got settled at Arista, Doug wasted no time getting down to business.  If nothing else can be said about Arista, they’re going to win points by having Doug out front.  He’s easily one of the most natural presenters I’ve seen at a Tech Field Day event.  He clearly understands his audience and plays to what we want to see from our presentations.  Doug offered to can his entire slide deck and have a two hour conversation about things with just a whiteboard for backup.  I think this kind of flexibility makes for a very interesting presentation.  This time, however, there were enough questions about some of the new things that Arista was doing that slides were warranted.

The presentation opened with a bit about Arista and what they do.  Doug was surprisingly frank in admitting that Arista focuses on one thing – making 10 gigabit switches for the data center.  His candor in one area was a bit refreshing: “Every company that ever set out to compete against Cisco…and try to be everything to everybody has failed.  Utterly.”  I don’t think he said this out of deference to his old employer.  On the contrary, I think it comes from the idea that too many companies have tried to emulate the multiple product strategy that Cisco has pursued with their drive into market adjacencies, only to subsequently scale back.  One might argue some are still actively competing in some areas.  However, I think Arista’s decision to focus on a specific product segment gives them a great competitive advantage.  It allows the Arista developers to focus on different things, like making switches easier to manage or giving you more information about your network so you can “play offense” when figuring out problems like the network being slow.  Doug said that the idea is to make the guys in the swivel chairs happy.  Having a swiveling chair in my office, I can identify with that.

After a bit more background on Arista, we dove head first into the new darling of their product line – the FX series.  The FX series is a departure from the existing Arista switch lines in that it uses Intel silicon instead of the Broadcom Trident chipset found in the SX series.  It also sports some interesting features like a dual core processor, a 50GB SSD, and an onboard rubidium atomic clock.  That last feature plays well into one of Arista’s favorite verticals – financial markets.  If you can stamp packets with an IEEE 1588 precision timestamp, you don’t have to worry about when they arrived at or exited the switch.  The timestamp tells you when to replay them and how to process them.  Plus, 300 picoseconds of drift a year sure beats the hell out of relying on NTP.  The biggest feature of the FX series, though, is the onboard Field Programmable Gate Array (FPGA).  Arista has included these little gems in the FX series to allow customers even more flexibility to program their switches after the fact.  For those customers that can program in VHDL, or are willing to outsource the programming to one of Arista’s partners, you can make the hardware on this switch do some very interesting things like hardware accelerated video transcoding or inline risk analysis for financial markets.  You’re only limited by your imagination and ability to write code.  While programming FPGAs won’t be for everyone out there, it fits in rather well with the niche play that Arista is shooting for.

At this point, Arista “brought in a ringer” as Stephen Foskett (@SFoskett) put it.  Doug introduced us to Andy Bechtolsheim.  Andy is currently the Chief Development Officer at Arista.  However, he’s probably better known for another company he founded – Sun Microsystems.  He was also the first person to write a check to invest in a little Internet company then known as “Google, Inc.”  Needless to say, Andy has seen a lot of Internet history.  We only got to talk to him for about half an hour, but that time was very well spent.  It was interesting to see him analyze things going on in the current market (like OpenFlow) and kind of poke holes all over the place.  From any other person it might sound like clever marketing or sour grapes.  But from someone like Bechtolsheim it sounded more like the voice of someone that has seen much of this before.  I especially liked his critique of those in academics creating a “perfect network” and seeing it fail in implementation because people don’t really build networks like that in real life.  There’s a lot of wisdom in the above video and I highly recommend a viewing or two.

The remainder of our time at Arista was a demo of Arista’s EOS platform that runs the switches.  Doug and his engineer/developer Andre wanted to showcase some of the things that make EOS so special.  EOS currently runs a Fedora-based 2.6.32 Linux kernel at the heart of the operating system.  It also allows you to launch a bash shell to interact with the system.  One of the keys here is that you can use Linux programs to aid in troubleshooting.  Like, say, running tcpdump on a switchport to analyze traffic going in and out.  Beats trying to load up Wireshark, huh?  The other neat thing was the multi-switch CLI enabled via XMPP.  By connecting to a group of switches, you can issue commands to each of them simultaneously to query things like connected ports or even push upgrades to the switches.  This answered a lingering question I had from NFD1.  I thought to myself, “Sure, having your switches join an XMPP chat room is cool.  But besides novelty, what’s the point?”  This shows me the power of using standard protocols to drive innovation.  Why reinvent the wheel when you can simply leverage something like XMPP to do something I haven’t seen from any other switch vendor?  You can even lock down the multi-switch CLI to prevent people from issuing a reload command to a switch group.  That prevents someone from being malicious and crashing your network at the height of business.  It also protects you from your own stupidity so that you don’t do the same thing inadvertently.  There are even more fun things in EOS, such as being able to display the routes a switch held at a given point in the past.  Thankfully for the NFD3 delegates, we’re going to get our chance to play around with all the cool things that EOS is capable of, as Arista provided us with a USB stick containing a VM of EOS.  I hope I get the chance to try it out and put it through some interesting paces.
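To show why the standard protocol matters, here’s a hedged sketch using the SleekXMPP library.  Arista’s multi-switch CLI is their own implementation, so this isn’t their code; the point is that any off-the-shelf XMPP client can join the room and drop a command in.  The JID, room, and credentials below are placeholders.

```python
# Arista's multi-switch CLI is their own implementation; this SleekXMPP
# sketch just shows why building on a standard protocol is so powerful:
# any off-the-shelf XMPP client can drop a command into a chat room that
# a group of switches has joined.
import sleekxmpp

class CommandBot(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, room, command):
        super(CommandBot, self).__init__(jid, password)
        self.room, self.command = room, command
        self.register_plugin("xep_0045")            # multi-user chat (MUC)
        self.add_event_handler("session_start", self.start)

    def start(self, event):
        self.send_presence()
        self.plugin["xep_0045"].joinMUC(self.room, "operator")
        # Every switch sitting in the room sees this one groupchat message.
        self.send_message(mto=self.room, mbody=self.command, mtype="groupchat")

bot = CommandBot("admin@xmpp.example.com", "secret",
                 "switches@conference.example.com", "show version")
if bot.connect():
    bot.process(block=True)
```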

If you’d like to learn more about Arista Networks, you can check out their website at http://www.aristanetworks.com.  You can also follow them on Twitter as @aristanetnews.


Tom’s Take

Odds are good that without Network Field Day, I would never have come into contact with Arista.  Their primary market segment isn’t one that I play into very much in my day job.  I am glad to have the opportunity to finally see what they have to offer.  The work that they are doing not only with software but with hardware like FPGAs and onboard atomic clocks shows attention to detail that is often missed by other vendors.  The ability to learn their OS in a VM on my machine is just icing on the cake.  I’m looking forward to seeing what EOS is capable of in my own time.  And while I’m not sure whether or not I’ll ever find an opportunity to use their equipment in the near future, chance does favor the prepared mind.

Tech Field Day Disclaimer

Arista was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation as well as a copy of EOS in a virtual machine.  The USB drive also functioned as a bottle opener.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

NEC – Network Field Day 3

OpenFlow was on the menu for our second presentation at Network Field Day 3.  We returned to the NEC offices to hear about all the new things that have come about since our first visit just six months ago.  Don Clark started us off with an overview of NEC as a company.  They have a pretty impressive top line ($37.5 billion in annual revenue) for a company that gets very little press in the US.  They make a large variety of products across all electronics lines, from monitors to storage switches and even things like projectors and digital television transmitters.  But the key message to us at NFD3 revolved around the data center and OpenFlow.

According to NEC, the major problem today with data center design and operation is the silo effect.  As Ethan Banks discussed recently, the various members of the modern datacenter (networking, storage, and servers) don’t really talk to each other any longer.  We exist in our own little umwelts and the world outside doesn’t exist.  With the drive to converge data center operations for the sake of reduced costs, both capital expenditures and operational expenditures, we can no longer afford to exist in solitude.  NEC sees OpenFlow and programmable networking as a way to remove these silo walls and drive down costs by pushing networking intelligence into the application layer while also allowing for more centralized command and control of devices and packet flows.  That’s a very laudable goal indeed.

A few things stuck out to me during the presentation.  First, in the video above, Ivan asks what kind of merchant silicon is powering the NEC solution.  He specifically mentions the Broadcom Trident chipset that many vendors are beginning to use as their entry into merchant silicon, as in the Juniper QFX3500, Cisco Nexus 3000, HP 5900AF, and Arista 7050.  Ivan says that the specs he’s seeing on the PF5820 are very similar.  Don’s response of “it’s merchant silicon” seems to lend credence to the use of a Trident chipset in this switch.  I think this means that we’re going to start seeing switches with very similar “speeds and feeds” coming from every vendor that decides to outsource their chipsets.  The real power is going to come from the software and management layers that drive these switches to do things.  That’s what OpenFlow is really getting into.  If all the switches have the same performance, it’s a relatively trivial matter to drive them with a centralized controller.  When you consider that most of them will end up running similar chipsets anyway, it’s not a big leap to suggest that the first generation of OpenFlow/SDN-enabled switches are going to look identical to a controller at a hardware level.

The other takeaway from the first part of the session is the “recommended” limit of 25 switches per controller in the ProgrammableFlow architecture.  This, in my mind, is the part that really cements this solution firmly in the data center and not in the campus as we know it.  Campus closets can be very interesting environments, with multiple switches across disparate locations.  I’m not sure if the PF-series switches need direct connections to a controller or if they can be daisy chained.  But by setting a realistic limitation of 25 switches in this revision, you’re creating a scaling limitation of 25 racks of equipment, since NEC considers the PF5820 to be a Top-of-Rack (ToR) switch for data center users.  A 25-rack data center could be an acreage of servers for some or a drop in the bucket for others.  The key will be seeing whether NEC supports a larger install base per controller in future releases.

We got a great overview of using OpenFlow in network design from Samrat Ganguly.  He mentioned a lot of interesting scenarios where OpenFlow and ProgrammableFlow could be used to provide functionality similar to what we do today with things like MPLS.  We could force a traffic flow to transit from a firewall to an IDS and then on to its final destination, all by policy rather than clever cabling tricks.  The case for using OpenFlow as opposed to MPLS rests mostly on using a (relatively) simple central controller versus the more traditional method of setting up VRFs and BGP to connect paths across your core.  This is another place where software defined networking (SDN) will help in the data center.  I don’t know what kind of inroads it will make against those organizations that are extensively using MPLS today, but it gives many starting out a good option for easy traffic steering.  We rounded out our time at NEC with a live demo of ProgrammableFlow.
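ProgrammableFlow’s policy layer is NEC’s own, but the primitive underneath is plain OpenFlow.  Here’s a hedged sketch with the open source POX controller: install a flow entry that pushes matching web traffic out the port facing the firewall, the first hop of the service chain described above.  The port number is an assumption for illustration.

```python
# ProgrammableFlow's policy layer is NEC's own, but underneath it is the
# same OpenFlow primitive shown here with the open source POX controller:
# install a flow entry that pushes matching traffic out the port facing
# the next service in the chain (firewall, then IDS), no cabling tricks
# required. The port number is a placeholder.
from pox.core import core
import pox.openflow.libopenflow_01 as of

FIREWALL_PORT = 5   # assumed port facing the firewall on this switch

def steer_web_traffic(event):
    """On switch connect, send all inbound HTTP through the firewall port."""
    msg = of.ofp_flow_mod()
    msg.match.dl_type = 0x0800       # IPv4
    msg.match.nw_proto = 6           # TCP
    msg.match.tp_dst = 80            # HTTP
    msg.actions.append(of.ofp_action_output(port=FIREWALL_PORT))
    event.connection.send(msg)

def launch():
    core.openflow.addListenerByName("ConnectionUp", steer_web_traffic)
```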

If you’d like to learn more about NEC and ProgrammableFlow, check them out at http://www.necam.com/pflow/.  You can also follow them on Twitter as @NECAm1.

Tom’s Take

It appears to me that NEC has doubled down on OpenFlow.  That’s not a bad thing in the least.  However, I do believe that OpenFlow has a very well defined set of characteristics today that make it a good fit for data center networking and not for the campus LAN.  The campus LAN is still the wild, wild west and won’t benefit in the near-term from the ability to push flows down into the access layer in a flash.  The data center, on the other hand, is much less tolerant of delay and network reconfiguration.  By allowing a ProgrammableFlow controller to direct traffic around your network, you can put the resources where they are needed much quicker than with some DC implementations on the market.  The key to take away from NEC this time around is that OpenFlow is still very much a 1.0 product release.  There are a lot of things planned for the future of OpenFlow, even in the 1.1 and 1.2 specs.  I think NEC has the right ideas with where they want to take things in OpenFlow 2.0.  The key is going to be whether or not the industry changes fast enough to keep up.

Tech Field Day Disclaimer

NEC was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation and a very interesting erasable pen.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Start Menus and NAT – An Experiment

Fresh off my recent fame from my NAT66 articles (older and newer), I decided first thing Monday morning that a little experiment was in order.  I wanted to express my displeasure at sullying something like IPv6 with a feature I consider to be, at best, a bad idea.  The only thing I could come up with was a tweet suggesting that what OS X really needs is a Start Menu.

The response was interesting to say the least.  Questions were raised.  Some asked if I was playing a late April Fools joke.  Others rounded up the pitchforks and torches and threatened to burn down my house if I didn’t recant on the spot.  Mostly though, people made sure to express their displeasure by educating me to the fact that I should use something else to do what I wanted rather than rely on the tried-and-true metaphor of a Start Menu.

Now do you see what I’m talking about with NAT66?  Some people want NAT not because it’s a technological necessity.  Not because it solves fifteen problems that IPv6 has right now.  They want NAT because they really don’t understand how things work in IPv6.  It’s the same as bolting a Start Menu onto OS X.  When I started using my new MacBook a few months ago, I took the time to figure out how to use things like Spotlight and Alfred.  They weren’t my Start Menu, but they worked almost the same way (in many cases better).  I didn’t protest the lack of a metaphor I clearly didn’t need.  I adapted and overcame.  And in the end I found myself happier because I found something that worked better than I had hoped.

In much the same way, people that crave NAT on IPv6 are just looking for familiar metaphors for addressing.  I’m going to cast aside the multihoming argument right now because we’ve done that one to death.  Yes, it exists.  Yes, it needs to be addressed.  Yes, NPT is the best solution we’ve got right now.  However, when I started going through all the comments on my NAT66 blog post after the link from the Register article, I noticed that some of the commenters weren’t entirely sure how IPv6 worked.  They did understand that the addresses being assigned to the adapters were globally routable.  But some seemed to believe that a globally routable address meant that every device was going to need a firewall along with DDoS protection and ruleset monitoring.  Besides the fact that every OS has shipped a firewall since 2002, let me ask one question: are you tearing out your WAN firewall when you move to IPv6?  Because as far as I know, you still only have one (maybe two) WAN connections, and they’re terminated on some device.  That could be a router or a firewall.  In the IPv4 world, that device is doing NAT in addition to controlling which devices on the outside can talk to the inside.  Configuring a service to traverse the firewall is generally a two-stage process today: you must configure a static NAT entry for the device in question and then allow one or more ports to pass through the firewall.  It’s not too difficult, but it is time consuming.  In IPv6, with the same firewall and no NAT, there isn’t a need to create a static NAT entry.  You just permit the ports to access the devices on the inside.  No NAT required.  If you don’t want anyone to talk to the devices on the inside, you don’t configure any inbound rules.  Simple as that.  When you need to poke holes in the firewall for things like web servers, email servers, and so on, all you need to do is poke the hole and be done.

Perhaps what we really need to end this NAT issue is wildcard masking for IPv6 addresses in firewalls.  I have no doubt that any simple home network device that supports DHCPv4 today will eventually support DHCPv6 or SLAAC in the near future.  As fast as new chipsets are created to increase the processing power we install into small office/home office devices, it’s inevitable that support will come.  But to address the “easy” argument, what we likely need to do is create a field in the firewall that says “Network Address”.  That would be the higher-order 48 bits of the IPv6 address.  Once it’s plugged in, the hosts will use DHCPv6 or SLAAC to address themselves.  Then, we select the devices from a list based on DNS name and click a couple of checkboxes to allow ports to open for inbound and outbound traffic.  If a customer is forced to change their address allocation, all they need to do is change the “Network Address” field.  Then, software on the backend would script changes to DHCPv6/SLAAC and all the firewall rules, as sketched below.  DNS would update automatically and all would work again.  Perhaps this idea is too far-fetched right now and the necessary scripting would be difficult to write at the present time.  But if it answers the “easy” outcry about IPv6 addressing without the need to add NAT to the protocol, I’m all for it.  Who knows?  Maybe Apple will come up with something just this easy.
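That backend scripting may be less far-fetched than it sounds.  Here’s a hedged sketch using Python’s ipaddress module: when the “Network Address” field changes, keep each host’s interface identifier, graft on the new prefix, and regenerate the rules.  The prefixes, hosts, and rule syntax are all illustrative, and I’ve simplified to a single /64 out of the customer’s /48.

```python
# A sketch of the backend scripting described above, using Python's
# ipaddress module. When the provider hands the customer a new prefix,
# keep each host's low 64 bits (its interface identifier), swap in the
# new network, and regenerate the firewall rules. Everything here --
# prefixes, host names, ports, rule syntax -- is illustrative only.
import ipaddress

def renumber(address, new_subnet):
    """Graft a host's low 64 bits onto a new /64 subnet."""
    host_bits = int(address) & ((1 << 64) - 1)
    return ipaddress.IPv6Address(int(new_subnet.network_address) | host_bits)

old_hosts = {
    "web-server":  ipaddress.IPv6Address("2001:db8:aaaa:1::80"),
    "mail-server": ipaddress.IPv6Address("2001:db8:aaaa:1::25"),
}
allowed_ports = {"web-server": [80, 443], "mail-server": [25]}

new_net = ipaddress.IPv6Network("2001:db8:bbbb:1::/64")  # new delegated prefix
for name, addr in old_hosts.items():
    new_addr = renumber(addr, new_net)
    for port in allowed_ports[name]:
        print("permit tcp any host {} eq {}".format(new_addr, port))
```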


Tom’s Take

For the record, I really don’t think there needs to be a Start Menu in OS X.  I think Spotlight is a perfectly fine way to launch programs not located on the dock and to find files on your computer.  Even alternatives like Alfred and Quicksilver are fine for me.  The point of my tweet and subsequent replies wasn’t to advocate for screwing up the UI of OS X.  It was to show that while some people think my distaste for NAT is silly, all it takes is the right combination of silliness to get people up in arms.  To all of you that were quick to jump in and offer alternatives and education for my apparent lack of vision, I say that we need to focus effort like that into educating people about how IPv6 works, or spend our time figuring out how to remove the roadblocks standing in the way of adoption.  If that means time spent writing scripts for low-end devices or figuring out easy UI options, so be it.  After all, someone else has already figured out how to create a Start Menu on a Mac.