Network Field Day 4

I am once again humbled and honored to accept an invitation to my favorite industry event – Network Field Day (now in its fourth iteration).  Network Field Day 4 (NFD4) will be coming to you from San Jose October 10-12th.  The delegate lineup has a bunch of new faces that I’m excited to catch up with and/or meet for the first time:

Anthony Burke @Pandom_
Bob Plankers @Plankers
Brad Casemore @BradCasemore
Brent Salisbury @NetworkStatic
Colin McNamara @ColinMcNamara
Greg Ferro @EtherealMind
Michael McNamara @mfMcNamara
Paul Stewart @PacketU

During Wireless Field Day 3, Gregor Vučajnk (@GregorVucajnk) wrote a great blog post about attending that had something I’m going to borrow for this NFD outing.  He called out each of the participating sponsors and gave them a short overview of what he wanted to see from each of them.  I loved the idea, as it gives a bit more direction to the people making the decisions about presentation content.

This is a great crew with a lot to say and I’m anxious to see them unleashed on our assembled sponsors:

Brocade – I’m betting that VCS is going to be up on the block this time around.  We got a chance to play with it a while back and we had a blast.  With the announcements that you’ve made around Brocade Tech Day, I’d like to hear more about the VCS strategy and how it will dovetail into your other product lines.  I’d also like to hear more about the ADX and how you plan on terminating VXLAN tunnels in hardware.  Please be sure that you can talk about these in decent depth.  Being told over and over again that something is NDA when it shouldn’t be a huge mystery is a bit disconcerting.  Also, if Jon Hudson isn’t presenting, at least have him show up for a few minutes to say hello.  We love that guy.

Cisco Borderless – Please, please, oh please tell me what Borderless really means.  Even if it’s just “everything but data center and collaboration”.  I really want to know how you’re pulling all these product lines together to create synergy.  Otherwise, it’s still just going to be the routing BU, switching BU, and so on.  We had a great time listening to the last presentation about ASA CX and Wireshark on the Cat 4500.  More of that good stuff, even if it means you have to shave your presentation down a bit to accommodate.  Remember, we ask lots of questions.

Juniper – First, I’d like to hear some discussion of Ivan’s post exploring all the gooey details of QFabric.  I understand that in this case it may be a bit like the magician revealing how the trick is done, but this is the kind of thing that fascinates me.  I’m also sure there’s going to be discussion around SDN and the Juniper approach to it.  The presentation at NFD2 was so great that I want to see you keep up the good work.

OpenGear – Hello there.  I know nothing about you beyond the cursory Google search.  It looks like you’ve got some interesting technology that could be of great use to network professionals.  Case studies and anecdotes about using a 3G console failover to prevent global chaos would be awesome.  Also, allowing us the opportunity to poke around on a box for a few minutes would rock.  I want to think about how I can use your product to make my life less miserable when it comes to offline console access.

Spirent – Hello again to you.  I didn’t know anything about Spirent last time, but now I see them everywhere I look.  Spirent is like the Good Housekeeping seal for network gear.  Let’s dive deeper into things.  I know you’re squeamish about showing off GUIs and things like that, but we nerd out on those things.  Also, I want to talk about how you plan on building testing rigs to handle all the coming 100GigE traffic.  Show me how Spirent is going to keep up the Ginger Rogers mystique that I’ve associated with it.

Statseeker – Network Performance Management and monitoring can be a bit of a dry subject, but doing it with an accent from the Land Down Under could be a bit of a treat.  After your recent Packet Pushers episode, I want to drill down more into how you go about keeping all the monitoring data.  I’ve seen what overwhelming an NMS with data can do, and while it was a pretty light show, I want to prevent it from happening again.  I don’t expect you to bring one of your famous Minis to give away to the delegates, but don’t underestimate the power of bribery via Tim Tam.

Tech Field Day – Audience Participation

For those of you that like to follow along with the Tech Field Day delegates from the comfort of your office chair or recliner, you are more than welcome.  I’ve even seen people talking about taking the day off from work or making sure they aren’t on a remote site.  We will be streaming each of the presentations live on the Tech Field Day site.  Note that this stream does use uStream, so we aren’t optimized for mobile devices just yet.  We’re working on it, though.  We will also be spending a lot of time on Twitter discussing the presentations and questions about them.  Just make sure to use the hashtag #NFD4 and you can be a part of the discussion.  I love seeing discussion and commentary from all the people watching online.  I always make sure to keep my Twitter client at the forefront so I can ask questions from the home audience when they arise.  That way, I’m truly a delegate representing people and giving them a say in what shapes the events.

If you’d like to learn a little more about Tech Field Day, you can head over to the Tech Field Day site and read up on things.  You can also apply to be a delegate at this link.  I look forward to seeing you online and hearing from you at this Tech Field Day event.

Standard Tech Field Day Sponsor Disclaimer

Tech Field Day is a massive undertaking that involves the coordination of many moving parts.  It’s not unlike trying to herd cats with a helicopter.  One of the most important pieces is the sponsors.  Each of the presenting companies is responsible for paying a portion of the travel and lodging costs for the delegates.  This means they have some skin in the game.  What this does NOT mean is that they get to have a say in what we do.  No Tech Field Day delegate is ever forced to write about the event due to sponsor demands.  If a delegate chooses to write about anything they see at Tech Field Day, there are no restrictions about what can be said.  Sometimes this does lead to negative discussion.  That is entirely up to the delegate.  Independence means no restrictions.  At times, some Tech Field Day sponsors have provided no-cost evaluation equipment to the delegates.  This is provided solely at the discretion of the sponsor and is never a requirement.  This evaluation equipment is also not contingent on writing a review, be it positive or negative.

Reality As A Service

If you are a fan of Tech Field Day or a frequent viewer of my blog posts, you know that I’m somewhat skeptical of the majority of analyst firms out there.  At best, many of them function solely as a mouthpiece regurgitating old information to remind CxOs that the decisions they made 2-3 years ago were the right ones.  At worst, they are the paid shills for companies looking for market share and attention.  Thanks to a convenient vendor event, I got to spend some time picking the brains of many of my colleagues about topics like this, and I find I’m not alone.  Independence and objectivity are always important, and as I’ve said in the past when talking about an independent testing company idea, it can be hard to maintain in an environment where you are so reliant on the vendors to provide support and funding for the things you want to do.  After all, not everyone can be as rich as Richard Branson.  I think, however, that I might have finally hit on an idea that could work for me.

The movie Patton holds a clue to my devious intentions.  Within, the general describes a scene from ancient Rome.  Conquering generals were awarded a triumph, a giant parade through the heart of Rome where the population would shower the hero with adulation and praise.  For those very successful generals, this could soon become a source of feelings of superiority.  After all, here are all these people telling you how great you are.  Sooner or later, you’re going to start believing your own press.  According to Patton, however, it was common practice for a slave to stand behind the general and whisper in his ear every so often, “Remember, fame is fleeting…”  This is the “what have you done for me lately” mentality so prevalent today.  People are quick to forget your successes but take a very long time to forgive your failures.

Nowhere is this more apparent to me than in the audition process for the TV show American Idol.  For the five of you that might not be familiar with this particular program, it’s essentially a serialized talent competition/reality show.  The real interesting part for most people isn’t the competition itself.  It’s the auditions for the first two to three weeks of each season.  This is where you get to see the people that turn up and try out.  Many of these people have absolutely no business singing.  At all.  For whatever reason, whether it be believing their own press or the false praise of others, these people truly think they have amazing talent where none actually exists.  These “trainwrecks” drive a lot of the views for the first few episodes because people take some kind of perverse delight in watching failure.  Once the trainwrecks are finished, the real competition can start.

I’ve always said to myself that what these trainwrecks need is a harsh dose of reality.  I’ve been fortunate in my life to have people tell me that maybe I wasn’t best suited to be a singer or a baseball player.  They encouraged me to work toward realistic goals, like being a snarky network rock star.  However, some of these American Idol contestants don’t have that.  They go right on believing they can sing like a real rock star until they get in front of the cameras and Simon Cowell hammers them with reality in front of the whole nation.  What I had originally proposed was a service that did much the same thing, only not so public.  For those people that care enough to tell these contestants that maybe singing isn’t what they were cut out for but can’t bring themselves to do it for whatever reason, I would gladly offer my services in their stead.  I can call people up and let them know that the prevailing opinion is that while they might sound good in the shower, they really shouldn’t try to make a living singing old show tunes in front of a harsh judging panel.

My conversations as of late have finally made the lightbulb go on and joined these two disparate ideas together.  That’s what bothers me to a degree about the analyst firms.  They never really have anything bad to say.  The praise is heaped on by the ladleful in many cases.  Everyone has a positive place in the mystical polygon.  There is no “suck” quadrant.  Yet, when we expose these technologies to real deployments and real workloads, they start breaking and causing all manner of problems.  What we really need is a Reality As A Service offering.  Myself, along with a group of talented individuals, will pore over your product offering and tear it to shreds.  These reports are going to be decidedly negative.  We’re going to tell you all the things that are wrong with your widget.  Just like the slave in the chariot in Rome, we’re going to remind the vendors that all the praise being offered by the crowd is fleeting.  Instead, in three months’ time the only thing people will care about is how broken your product is.  By contracting with Reality As A Service, we will tell you up front all the things you don’t want to hear and the regular analysts don’t want to tell you.  You may not want to hear it.  You may not like us very much after we’re finished.  But, you won’t be able to tell us we’re absolutely wrong.  And you will then have a list of things to work on to make your product better.

It’s not unlike submitting an article or a book to an editor for proofreading.  I think I have a fairly decent grasp of the English language.  However, watching a professional editor slice-and-dice my work reminds me how far I still have to go.  I don’t hate the editor for pointing out my mistakes.  I make myself better by recognizing those problems and correcting them.  That’s what Reality As A Service can help fix.  Bad GUIs, horrible design decisions, and academic delusions about the way things operate outside of an incubation vacuum.  Does your interface still rely on Java or Flash?  We’ll tell you.  How about requiring a $50,000 license for a feature that should really be free at this point?  We’re going to bring that up too.  And why on earth doesn’t this use the same standard protocol that the rest of the world has used for the last five years?!?  That’ll be in the report as well.  In the end, rather than hearing how great you are, you’ll be reminded of all the things you should be concentrating on.  Reality As A Service won’t let you lose sight of all the important things because others are too busy telling you how great the unimportant things are.

Does this idea have a future?  Not likely.  People that create things don’t take kindly to being told their widgets aren’t up to snuff.  Just like the American Idol contestants that come out of the audition after being smacked in the face with reality and say, “I don’t know what the professional talent judge was thinking.  My mom tells me that I’m the best singer she’s heard in the general store in the last fifty years!”  They can’t accept criticism when they are absolutely convinced they are right.  But for a small portion of the people, the ones that can accept constructive feedback and use it as a tool to better themselves and the products they make, there might just be some hope.

Call To Independence

Paul Revere’s ride – Courtesy of Wikipedia

The life of an independent blogger is never boring.  With all the news coming out about acquisitions and speculations about lines of business converging and moving, we have a lot to write about.  When you factor in the realization that practically no one is secure anymore and the next major data breach is just around the corner, you can see how one might stay busy with all the things coming out that need to be written about.  However, I wanted to take a moment to talk about something that I’ve been hearing recently with regards to the independent blogging community that has me a bit distressed.

In the last couple of months, we’ve seen several of the voices in the blogging community moving on to working with vendors.  It started with Andrew von Nagy (@revolutionwifi) heading to Aerohive.  Since then, we’ve seen Marcus Burton (@marcusburton) jumping to Ruckus Wireless, Hans de Leenheer (@hansdeleenheer) moving to Veeam, and most recently, Derick Winkworth (@cloudtoad) landing at Juniper.  I’ve met each and every one of these people and I greatly admire their work and their voice in the community.  I’m very happy for them that they’ve found gainful employment with a vendor and the fact that they will be bringing their talents and opinions to those that want to hear them is a boon to everyone.  However, I had a chance to talk with Stephen Foskett (@SFoskett) the other day on the phone.  We were talking about some Tech Field Day related material when the subject of independent bloggers came up.  Stephen told me that he’d heard from some people out there that we’d lost people like Andrew and Marcus to vendors.  We both agreed that kind of terminology wasn’t the best phrasing for what had occurred.

Yes, it’s true that the bloggers above are no longer independent in the strictest sense of the word.  They now have a vendor patron that will shape their views and give them information and insights that they might not otherwise get elsewhere.  They also still possess the sense of independence and critical thinking that have always made them such great resources for us all.  They are going to keep creating amazing content and helping out the community in every way they can.  They just wear a different shirt to work every day.  They aren’t dead to us.  We don’t have to recoil in horror every time we speak to them.  Some of the best and brightest people I know work for vendors.  Especially as of late, vendors have shown that they are willing to go out and get the best and brightest of the industry.  Independent bloggers are no different.  Every word that is written or every tweet that is tweeted gives a better picture of the talent of the independent blogging community.  We all listen, and so do the vendors.

Don’t look at a vendor hiring an independent and think to yourself, “Oh boy.  What are we going to do now?”  Instead, look at this as an opportunity.  There are hundreds of people out there that have stories to tell and information to share.  The independent community is overflowing with opportunity to step up and tell the world what you want them to hear.  When you listen to the opening comment videos that I’ve done recently for Tech Field Day events, I always close with the same line – Make sure that your voice is heard.  I chose that line very carefully.  A lot of people will say that an independent blogger needs to “find their voice.”  That statement makes no sense to me.  Those of you out there with more than 30 seconds of experience with something already have a voice.  You have a thinking strategy and an opinion and a way to form words out of those, whether they be out loud or on a printed page.  You don’t need to find your voice.  You need to project it.

Blogging is all about writing down your thoughts.  I initially started this place to codify those thoughts in my head that were 141+ characters and wouldn’t fit on my Twitter stream.  Instead, it’s evolved into a place where I can prognosticate about industry news or give my opinions about things.  The key is that I put all those thoughts down here and get them out there.  People read them.  People comment on them.  People discuss them.  Sometimes people even yell at me about them.  What’s important is that people are talking.  That’s the key to becoming an independent blogger.  Every time I get a new follower on Twitter or a new LinkedIn request, I always go out to see if that person has a blog.  I like to read the things they have to talk about.  I like to see what kind of discussions they are having with people.  I like to know more about what makes them tick.  That’s the kind of information that can’t be conveyed in a profile or a 140-character stream.

Those of you out there in the community that are on the fence about making your voice heard need to stop what you are doing right now and go do it.  It doesn’t matter if you think it will amount to anything in the long run.  I sure didn’t think I’d be making 250 posts when I started.  When I was talking to Greg Ferro (@etherealmind) and Ethan Banks (@ecbanks) about their plans for opening Packet Pushers up to independent bloggers, I told them that I thought it was a great idea because “Everyone has a blog post in them somewhere.”  If I had it to do over again today, I’d probably be a Packet Pushers blogger.  I don’t like the hassle of dealing with site administration stuff.  I don’t like picking themes or deciding what widgets to put in the sidebar.  I care more about the message and the information.  Packet Pushers is great for the blogger that wants to get their feet wet and put out a few posts to gauge interest.  People like Derick and Mrs. Y (@mrsyiswhy) blog almost exclusively on Packet Pushers.  It’s a great platform for the community.

For those of you that want to make a go of it yourself, there are also great options available.  WordPress and Blogger offer great free platforms.  Just pick a theme and start writing.  My blog is still hosted by WordPress and likely will be for the foreseeable future.  I’m not in this to make money or rule the world.  I want to share my thoughts and opinions with the world.  I want to generate good technical posts to help people out of tight spots.  I want to make bad NAT videos.  WordPress helps me do that, and they can help you too.  Even if you start out writing a post a month, the key is to start.  Once you’ve gotten a post or two under your belt, you may find you like it and you want to keep doing it.  I constantly push myself to keep writing because I know that if I stop, I’m not going to keep up with it like I should.  I’m not saying you have to make a post a day, but you need to start before it can become a habit.

In the end, the independent blogging community exists because people write.  People share ideas and start conversations.  The more people that are out there doing those things, the bigger and better the blogger community becomes.  That’s the reason why Google Plus has had such a hard time competing with Facebook.  Facebook is where the people are.  In the blogging community, we already have a large number of people out there reading posts.  In order for us to truly prosper, we need to grow.  When independent bloggers get the chance to go to a vendor, that means there is that much more opportunity for someone new to step up.  Participation guarantees citizenship in the independent blogger community.

If you have ever wanted to share with the rest of the world, now is the time to do it.  Sit down and think about that one blog post that everyone has in them.  Write it down tonight.  Don’t worry about grammar or spelling.  Just put the thoughts on paper.  Editing can happen later.  Once you have that good blog post down and committed to paper (or text file), then decide how you want to tell the world about it.  Whether it be Packet Pushers or your own blog, just get everything together and out there so people can start reading it.  Tell the community where to find your blog.  Twitter, Facebook, Google Plus, LinkedIn, Pinterest, and many others are good sounding boards.  Heck, you could rent an airplane to tow a banner around downtown New York City if you wanted.  The important thing is to make sure you are heard so we know where to go to read what you have to say.

If even one person reading this decides to start a blog or share their thoughts about the industry, then I will have succeeded in my call to arms.  I don’t want to hear people telling me that the independent blogging community is being diminished because vendors are hiring the best and brightest.  Instead, I want the vendors to be telling me that there are so many great independent bloggers out there that they couldn’t possibly hire them all even though they want to.  That’s the way to keep a community strong.  And I challenge each and every one of you to make us all great.

More Technical Presentation Tips

As an engineer for a Value-Added Reseller (VAR) as well as a frequent Tech Field Day delegate and technical presenter, I spend a lot of my time listening to presentations.  I often find myself critiquing them for things like speaker delivery and content.  I feel that it’s my duty to share some of my thoughts on presenting and presentation structure, especially when you choose to talk to a group of technical people.  I’ve already talked about some presentation tips before, so what follows are a couple of new things that I’ve been thinking about for the last year or so.

Time Is Not On Your Side

One of the biggest concerns that I’ve seen with technical presentations as of late is the time issue.  People are typically given a one or two hour presentation slot depending on the event I am attending or presenting at.  The presenter then proceeds to fill the entire time with slide decks and lecture.  Every minute of the presentation is accounted for by a bullet point or a fancy animated slide.  Should someone disrupt the flow of the presenter’s zen with a question or a request for clarification, they are met either with a curt answer or a request to hold all questions until the end of the session.  After the end of the presentation, there is usually very little time for Q&A.

Nowhere was this more apparent to me than at the recent Network Field Day 3.  We managed to gather a great group of individuals once again to listen to industry experts talk to us about great new technologies.  However, for the first time that I can remember, we had a group that was willing to start peppering away with questions not even five minutes into the presentation.  Between Ivan Pepelnjak (@ioshints) and Marko Milivojevic (@icemarkom), there were some very good back-and-forth discussions going on.  I love these kinds of discussions.  They really show how people can take a point and launch from it into a rabbit hole of technical brilliance.  The problem with these discussions comes when you have the aforementioned presenters that have filled every minute with a slide.  There’s no room to freestyle and talk about things.

Occasionally, you have companies like Metageek come along and do something totally off the wall.  They want to listen as much as they want to present.  At Wireless Field Day 2, Ryan and Trent spent quite a bit of time talking to the delegates and getting feedback.  I’d say the last twenty minutes of their presentation was spent posing questions rather than answering them.  I found this refreshing.  So refreshing, in fact, that my presentation on cloud computing not a month later got slashed from its allotted hour of time down to around 45-50 minutes.  Why?  I wanted to get good feedback from my audience.  I wanted to field questions as they came in and not worry about running out of time to get to my last slide.  I wanted to be sure that my presentation involved the audience as much as possible.  I think that’s a key that presenters need to take forward.  Don’t look at your time slot as a container to fill to the brim with your own ideas.  Instead, take a cue from the coffee bars of the world and pour your slot almost full.  Leave some room for questions and discussion, which are just like the sweetener and cream I pour in my coffee.  Aim for 75-80% of your time slot for presentation.  The rest should be for your audience.  Even if you don’t get a lot of questions about your presentation, at least the people will be happy that they got out fifteen minutes early and they don’t have to rush to their next session.  Either way, your audience will love you.

Live By The Demo, Die By The Demo

Oh, the demo.  How I love thee.  No boring slide deck.  No relentless bullet points.  All the joy of seeing something work in real life.  But, at the same time I hate the demo.  Too much chance for failure.  Too easy for things to go off the rails and result in a wandering audience.  How then do we reconcile the good things about a demo with all the possible downsides?

The key to giving a good demo is to make it flow.  Come up with a script for your tour that moves the viewers seamlessly from one area to the next.  It should feel connected and coherent.  You should leave some time for improvisation in case your audience finds an area where they would like to spend some more time focusing.  However, these rabbit holes are the first sign that the demo pitfalls are coming soon.  It’s all too easy to waste time talking about a specific feature and lose sight of the big picture.  When that happens, you get lots of sidebar conversations between your audience.  When the people you are talking to spend more time talking to each other, you’ve lost control.  You need to find a way to bring things back to you.  It’s also important to note that technical people hate watching progress bars and incrementing counters.  If your demo is going to require time to load a program or push out a firmware image, consider kicking it off early in your presentation and then talking more about a specific feature or fielding questions while it runs in the background.  Infineta did this at Network Field Day 3.  Rather than let us watch the couple of hundred gigabytes of traffic flooding across a boring screen, they instead kicked off the demo and let it run in the background while they melted our brains with algorithm math.  When we had been beaten into submission by formulae, we flipped back over to see the results of the live demo.  All the benefits of a real walkthrough without any wasted time.

Tom’s Take

There’s no such thing as a perfect presentation.  It’s a goal that we all strive for but can never really accomplish.  That’s not to say we as presenters can’t give it our best shot.  I’m not saying these tips will apply to you.  In fact, a large portion of the presentations that I do either don’t involve a demo or don’t have a place for one.  The key is to recognize that a live (or simulcast) audience isn’t a group of mindless drones that will absorb your every word without question.  You should do your best to involve and include them at every step of the way.  When the audience feels they have a choice in the content and direction, they’ll be more involved and happier in the end.

Spirent – Network Field Day 3

The final presentation for Network Field Day 3 came from Spirent Communications.  This was the one company at NFD3 that I was completely in the dark about.  Beyond knowing that they “test stuff”, I was unsure how that would translate into something that a networker would be interested in using.  After I walked out of their building, I now have a new-found respect for companies that build the devices that we take for granted when reading reports.

We almost didn’t get the chance to show Spirent to the viewing audience.  Spirent was unsure how some of their software would come across on a live stream.  I can attest to the fact that software demos are sometimes not the best thing to showcase to the home audience.  However, after watching the coverage of NFD3 from the previous day, Spirent was impressed by the amount of feedback and discussion going on between the delegates and the home audience.  When we arrived at the Spirent offices, we grabbed a quick lunch while the video crew set up for the session.  We got a quick introduction from Sailaja Tennati and Patrick Johnson about who Spirent is and what they do.  Turns out that Spirent makes many of the tools that other networking vendors use to test their equipment.  I liken it to the people that make the equipment that is used to test high performance cars. As impressive as the automobile might be, it’s equally (if not more) impressive to build a machine that can test that performance and even exceed it as needed.  A famous quote says “Fred Astaire was a great dancer.  But don’t forget Ginger Rogers did everything he did backwards in high heels.”  To me, Spirent is like Ginger Rogers.  They not only have to keep up with the equipment that Cisco puts out, they have to exceed it and provide that additional capacity to the vendor.

Ankur Chadda was the next presenter.  He started off by telling us about the difficulties in testing equipment.  As soon as there is a problem, the first thing blamed is the testing equipment.  It seems that certain people are so sure their equipment is right that there is no way anything could be wrong.  Instead, it’s the tester that’s at fault.  Much of this skepticism comes from how the test data itself is constructed.  Ask yourself how many times you’ve looked at “speed and feed” numbers on a data sheet or in a publication and said to yourself, “Yeah, but are those real numbers?”  Odds are good that’s because those numbers are somewhat synthetic and generated with carefully crafted packets.  Throughput is measured with very small packet sizes.  VPN connections are tested with clients that connect but never transfer data.  And so on.  Spirent uses their PASS methodology to test equipment – Performance, Availability, Security, and Scalability.  This ensures that the numbers that are generated are grounded in reality and useful to customers wanting to run the equipment in a production environment.
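To see just how synthetic those small-packet throughput numbers can be, the math is worth a look.  The theoretical packets-per-second ceiling on Ethernet falls straight out of frame size plus fixed per-frame overhead, which is why tiny frames produce eye-popping figures on data sheets.  A quick sketch (the function is my own illustration; the 20 bytes of preamble and inter-frame gap are standard Ethernet):

```python
# Why small-packet throughput numbers look impressive: each Ethernet frame
# carries fixed on-wire overhead (8-byte preamble + 12-byte inter-frame gap),
# so shrinking the frame maximizes the frames-per-second figure.

def max_pps(link_bps: float, frame_bytes: int) -> float:
    """Theoretical maximum frames per second for a frame size on Ethernet."""
    overhead = 8 + 12  # preamble + inter-frame gap, in bytes
    bits_on_wire = (frame_bytes + overhead) * 8
    return link_bps / bits_on_wire

ten_gig = 10e9
print(f"64B frames:   {max_pps(ten_gig, 64):,.0f} pps")    # ~14.88 million
print(f"1500B frames: {max_pps(ten_gig, 1500):,.0f} pps")  # ~822 thousand
```

The 64-byte number is roughly eighteen times the 1500-byte number on the same link, which is exactly why the fine print on a spec sheet matters.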

Jurrie van den Breekel introduced us to the data center testing arm of Spirent.  I find it very interesting that many vendors like Alcatel, Avaya, and Huawei come to Spirent for objective interoperability testing.  That says a lot about their capability, as well as the trust invested in a company to provide unbiased results.  This is something I’ve said we’ve needed in networking for a very long time.  Another key piece of testing methodology is ensuring that you’re comparing similar capabilities.  The example Jurrie gave in the above video is comparing switching performance when one device uses cut-through forwarding and the other uses store-and-forward.  Based on an understanding of how those methods work, cut-through should beat store-and-forward.  However, Jurrie mentioned that there have been testing scenarios where the converse is true.  The key is making sure that the tests match the specifications being tested.  Otherwise, you end up with wacky results like those above.  The other fun anecdote from Jurrie involved testing a Juniper QFabric implementation.  One thing that most people tend to overlook when testing or installing equipment is simple cabling.  While many might take it for granted, it becomes a non-trivial issue at a big enough scale.  In the case of the QFabric test, it took two full days to cable the 1,500 ports.  That’s something to keep in mind the next time someone wants you to quote hours for an installation.
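The cut-through versus store-and-forward gap Jurrie mentioned is easy to reason about from serialization delay alone. This is a deliberately simplified model that ignores the switch’s fixed internal latency, which is exactly why real-world tests can flip the expected result:

```python
def serialization_us(num_bytes: int, rate_bps: float = 10e9) -> float:
    """Time to clock num_bytes onto a 10GbE link, in microseconds."""
    return num_bytes * 8 / rate_bps * 1e6

def store_and_forward_us(frame_bytes: int) -> float:
    # Must buffer the entire frame before forwarding it.
    return serialization_us(frame_bytes)

def cut_through_us(lookup_bytes: int = 64) -> float:
    # Starts forwarding once enough of the header has arrived.
    return serialization_us(lookup_bytes)

print(store_and_forward_us(1518))  # ~1.21 us added per hop
print(cut_through_us())            # ~0.05 us added per hop
```

On serialization delay alone, cut-through wins by more than an order of magnitude per hop for full-size frames; any test showing otherwise is measuring something else, which is Jurrie’s point about matching the test to the specification.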

Our last presenter for the streamed portion of NFD3 was Ameya Barve.  He led off with a nice prediction: testing as we know it will shift away from individual scenarios like application or network testing and converge on infrastructure testing.  This matters because most of these tests today happen completely independently of one another, which means the people doing the testing have to know exactly what to test for.  That’s one of the directions Spirent is moving.  I think this kind of holistic testing is going to be critical as well.  Too many times we find out after the fact that an application had some unforeseen interaction with a portion of the network in what is normally called a “corner case scenario”.  Corner cases are extremely hard to catch in siloed testing because the interaction never happens.  It’s only when you toss everything together and shake it all up that you start finding these interesting problems.

After we shut off the cameras, we got a chance to look at a tool that Spirent uses for more focused testing: an Integrated Development Environment (IDE) called iTest.  iTest lets you drive devices over all kinds of interfaces to exercise every aspect of your network.  You can have iTest SSH to a router to observe what happens when you pump a lot of HTTP traffic through it.  You can also write regular expressions (regex) to pull structured information out of log files and console output.  There’s a ton you can do with iTest, and I’m just scratching the surface.  I’m hoping to have a totally separate post up at some point covering some of the more interesting parts of iTest.
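The regex-driven screen scraping is the part that clicked for me. This isn’t iTest’s actual syntax, just a plain-Python sketch of the same pattern against made-up console output: capture a device’s CLI session, then pull pass/fail metrics out of it with regular expressions:

```python
import re

# Hypothetical console output captured from a router during a test run.
console_output = """
GigabitEthernet0/1 is up, line protocol is up
  5 minute input rate 962000 bits/sec, 480 packets/sec
  5 minute output rate 128000 bits/sec, 210 packets/sec
  3 input errors, 0 CRC, 0 frame
"""

# Pull structured numbers out of free-form CLI text.
rate = re.search(r"input rate (\d+) bits/sec, (\d+) packets/sec", console_output)
errors = re.search(r"(\d+) input errors", console_output)

bits_per_sec, pkts_per_sec = (int(g) for g in rate.groups())
input_errors = int(errors.group(1))

# A test step can now pass or fail on real numbers instead of eyeballs.
print(bits_per_sec, pkts_per_sec, input_errors)
```

Once the output is numbers instead of text, a test case can assert on it automatically, which is the whole appeal of driving a router from an IDE.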

If you’d like to learn more about Spirent and their testing tools and methodology, you can head over to their website at  You can also follow them on Twitter as @Spirent.

Tom’s Take

It’s always fun when I realize there is a whole world out there that I have no idea about.  My trip to Spirent showed me that the industry built around testing is a world unto itself.  I had no idea that so much went into the methodology and setup for generating the numbers we see in marketing slides.  I’m really interested to see what Spirent will be bringing to market to help converge the siloed testing that we see today.

Tech Field Day Disclaimer

Spirent was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me with a gift bag containing a coffee mug, polo shirt, pen, scratchpad, USB drive containing marketing collateral, and a 1-foot long Toblerone chocolate bar. They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Solarwinds – Network Field Day 3

The first presenter up for Network Field Day 3 was a familiar face to many Tech Field Day viewers.  Solarwinds presented at the first Network Field Day and has been a sponsor of more events than any other.  It’s always nice to see vendors coming back time and again to show the delegates what they’ve been cooking since their last appearance.

We started our day in the Doubletree San Jose boardroom.  We were joined by Joel Dolisy, the Chief Software Architect for Solarwinds, and Mav Turner (@mavturner), the Senior Product Manager for the network software division.  After introductions, we jumped right into some of the great software that Solarwinds makes for network engineers.  First up was the Solarwinds IP SLA Monitor.  IP Service Level Agreement (SLA) is a very important tool used by engineers to track key network metrics like reachability and latency.  What makes IP SLA so great as opposed to a bigger monitoring tool is that the engineer can take the information from IP SLA and use it to create actionable items, such as bringing down an overloaded link or sending trap information to a third-party monitoring system to alert key personnel when something is amiss.  One of the sore spots about IP SLA from my perspective is the difficulty that I have in setting it up.  Thankfully, Solarwinds thought of that for me already.  Not only can the IP SLA Monitor show me all the pertinent details about a given IP SLA configuration, I can even create a new one on the fly if needed.  IP SLA Monitor allows me to push the configurations down to a single router, or to multiple routers as quickly as I can select interfaces and metrics to track.  It’s a very interesting product, especially when you know that it grew out of a simple way to manage Voice over IP (VoIP) call metrics.  When Solarwinds realized the potential of the program, they immediately added more features and enabled it across a whole host of protocols.  If you’d like to try it out on a single router, you can download the free version here.
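For context, the configuration that IP SLA Monitor pushes for you looks something like this hand-built ICMP probe in IOS. The commands are from memory and vary by release (and the target address is a placeholder), so treat it as a sketch:

```
ip sla 10
 icmp-echo 192.0.2.1 source-interface GigabitEthernet0/0
 frequency 60
ip sla schedule 10 life forever start-time now
!
! Tie a tracking object to the probe so other features
! (static routes, HSRP, EEM) can react when reachability is lost.
track 1 ip sla 10 reachability
```

Multiply that by dozens of routers and metrics and you can see why having a tool generate and push it all is worth something.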

During the presentation, I asked Solarwinds about adding some additional wireless troubleshooting capabilities to the product lines, courtesy of a request from Blake Krone (@BlakeKrone).  One thing that Joel and Mav said was that Solarwinds adds the large majority of their new features based on customer response and request.  I do admire that a company so highly regarded by most engineers I know is willing to sit down and make sure that customer needs are addressed in such a manner.  That way, the features that get added into the program really do come from the desires of the userbase.  The only thing that might give me pause about this arrangement is that Solarwinds may be missing an opportunity to drive development of new features by waiting for people to ask for them.  Many times I’ve looked at a piece of software and seen a curious feature in a list only to realize that I never knew I needed it.  I hope that Solarwinds is keeping up with the rapid pace of software development and ensuring that the hottest new technologies are supported as quickly as possible in their flagship Orion platform.

One thing that Solarwinds took some additional time to show off to us was their Virtualization Manager.  An acquisition from Hyper9 last year, Virtualization Manager allows Solarwinds to hook into the VMware vCenter APIs to find all kinds of interesting things like orphaned VMs or performance issues.  You can create custom alerts on these data points to let you know if a VM goes missing after a difficult vMotion or if your hypervisors have become CPU or memory bound.  You can also archive configs and perform capacity planning and a whole host of other useful features.  One of the nicest things, though, was the fact that the UI was completely devoid of Flash!  Everything was written with HTML5 so that there is no need to worry about whether you’re using the correct device to manage your VM infrastructure’s web portal.  This was a big win for the assembled delegates, as management systems that require proprietary scripting languages or horrendously laggy and memory hungry plugins tend to make us cranky at best.

We also had some good discussions toward the end around building Linux-based polling devices and how extensible the querying capabilities can be inside of Orion.  I think this kind of flexibility is huge in allowing me to craft the tool to my needs instead of the other way around.  When you think about it, there aren’t that many companies that are willing to provide you the framework to rebuild the tool to your environment.  That’s one thing that Solarwinds has in their favor.

If you’d like to learn more about the various offerings that Solarwinds has available, you can check them out at  You can also follow them on Twitter at their new handle, @solarwinds.

Tom’s Take

Solarwinds has been making tools that make my life easier for quite some time.  They’ve also been offering them for free for a while as well.  This is a great way for people to figure out if the larger collection of tools in the Orion suite will be a good fit for what they want to do with their network.  I think the large number of tools can be daunting for an engineer just starting out or one that’s in over their head.  While the overview we received was a wonderful peek at things, Solarwinds needs to take the time to be sure to educate users about the capabilities of the tools, both free and paid.  I also feel that Solarwinds needs to take the time to develop some software functionality independently of user requests.  I know that the majority of the features they build into their tools are requested by users.  But as I said above, sometimes the feature I need is the one I didn’t know could be done until I read the release notes.

Tech Field Day Disclaimer

Solarwinds was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me with a coffee cup.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Cisco Borderless – Network Field Day 3

The second half of our visit to Cisco during day 2 of Network Field Day 3 was filled with members of the Cisco Borderless Networks team.  Borderless Networks is really an umbrella term for the devices in the campus LAN such as wireless, campus switching, and the ASA firewall.  It was a nice break from much of the data center focus that we had been experiencing for the past couple of presentations.

Brian Conklin kicked things off with an overview of the ASA CX next generation firewall.  This was a very good overview of the product and reinforced many of the things I wrote about in my previous ASA CX blog post.  Some high points from the talk with Brian include Active Directory and LDAP integration and the inner workings of how packets are switched up to the CX module from the ASA itself.  As I had suspected, the CX is really a plugin module along the lines of the IDS module or the CSC module.  We also learned that much of the rule base for application identification came from Ironport.  This isn’t really all that surprising when you think about the work that Ironport has put into fingerprinting applications.  I just hope that all of the non-web-based traffic will eventually be identifiable without the need to have the AnyConnect client installed on every client machine.  I think Brian did a very good job of showing off all the new bells and whistles of the new box while enduring questions from me, Mrs. Y, and Brandon Carroll.  I know that the CX is still a very new product, so I’m going to hold any formal judgement until I see the technology moved away from the niche of the 5585-X platform and down into the newer 55×5-X boxes.

Next up on our tour of the borderless network were Mark Emmerson and Tomer Hagay Nevel with Cisco Prime.  Prime is a new network management and monitoring solution that Cisco is rallying behind to unify all their disparate products.  Many of you out there might remember CiscoWorks.  And if any of you actually used it regularly, you probably just shuddered when I mentioned that name.  To say that CiscoWorks has a bit of a sullied reputation might be putting it mildly.  In fact, the first time I was ever introduced to the product, the person I was talking to referred to it as Cisco(Sometimes)Works.  Now, with Cisco Prime, Cisco is getting back to a solution that is useful and easy to configure.  Cisco Prime LAN Management Solution is focused on the Borderless Networks platforms specifically, with the ability to do things like archive configurations of devices and push out firmware updates when bugs are fixed or new features need to be implemented.  As well, Cisco is standardizing on the Prime user interface for all of the GUIs in their products, so you can expect a consistent experience whether you’re using Prime LMS or the Identity Services Engine (which will be folded into Prime at a later date).  The only downside to the UI right now is that there is still a reliance on Adobe Flash.  While this is still a great leap forward from Java and other nasty things like ActiveX controls, I think we need to start leveraging all the capabilities in HTML5 to create scalable UIs for customers.  Sure, much of the development of HTML5 UIs is driven by people that want to use them on devices that don’t or won’t support Flash (like the iPad).  But don’t you think it’s a bit easier to share your UI between all the devices when it’s not dependent on a third party scripting language?  After all, Aruba’s managed to do it.  We wrapped up the Prime demo with a peek at the new Collaboration Manager product.  I’ve never been one to use a product like this to manage my communications infrastructure.  However, with some of the very cool features like hop-by-hop Telepresence call monitoring and troubleshooting, I may have to take another look at it in the future.

Our last presentation at Cisco came courtesy of Nikhil Sharma, a Technical Marketing Engineer (TME) working on the Catalyst 4500 switch as well as some other fixed configuration devices.  Nikhil showed us something very interesting that’s now possible on the Supervisor 7E running IOS XE.  Namely…Wireshark.  As someone that spends a large amount of time running Wireshark on networks, as well as someone that installs it on every device I own, having a copy of Wireshark available on the switch I’m troubleshooting is icing on the cake.  The 4500 Wireshark can capture packets in either the control plane or the data plane to extend your troubleshooting options when faced with a particularly vexing issue.  Once you’ve assembled your packet captures in the now-familiar PCAP format, you can TFTP or SFTP the file to another server to break it down in your viewer of choice. Another nice feature of the 4500 Wireshark is that the packet captures are automatically rate limited to protect the switch CPU from melting into a pile of slag if you end up overwhelming it with a packet tsunami.  If only we could get protection like that from a nastier command like debug ip packet detail.
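For reference, the embedded capture workflow looks roughly like this from the IOS XE CLI. Commands are from memory and the exact syntax varies by release, so double-check against your platform’s documentation:

```
monitor capture CAP interface GigabitEthernet1/1 both
monitor capture CAP match ipv4 any any
monitor capture CAP limit packets 1000
monitor capture CAP start
! reproduce the problem, then:
monitor capture CAP stop
monitor capture CAP export tftp://192.0.2.50/CAP.pcap
```

The limit statement is where that built-in CPU protection shows up: the capture stops itself instead of letting you drown the supervisor.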

The ability to run Wireshark on the switch is due in large part to IOS XE.  This is a repackaging of IOS, running on top of a Linux kernel with a hardware abstraction layer.  It also allows the IOS software, running in the form of a system daemon, to utilize one core of the dual-core CPU in the Sup7E.  The other core can be dedicated to running other third-party software like Wireshark.  I think I’m going to have to do some more investigation of IOS XE to find out what kind of capabilities and limitations are in this new system.  I know it’s not Junos.  It’s also not Arista’s EOS.  But it’s a step forward for Cisco.

If you’d like to learn more about Cisco’s Borderless networks offerings, you can check out the Borderless Networks website at  You can also follow their Twitter account as @CiscoGeeks.

Tom’s Take

Borderless is a little closer to my comfort level than most of the Data Center stuff.  While I do enjoy learning about FabricPath and NX-OS and VXLAN, I realize that when my journey to the fantasy land that is Tech Field Day is over, I’m going to go right back to spending my days configuring ASAs and Catalyst 4500s.  With Cisco spotlighting some of the newer technologies in the portfolio for us at NFD3, I got an opportunity to really dig in deeper with the TMEs supporting the product.  It also helps me avoid peppering my local Cisco account team with endless questions about the ASA CX or asking them for a demo 4500 with a Sup7E so I can Wireshark to my heart’s content.  That huge sigh of relief you just heard was from a very happy group of people.  Now, if I can just figure out what “Borderless” really means…

Tech Field Day Disclaimer

Cisco Data Center was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation as well as a pirate eyepatch and fake pirate pistol (long story).  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Cisco Data Center – Network Field Day 3

Day two of Network Field Day 3 brought us to Tasman Drive in San Jose – the home of a little networking company named Cisco.  I don’t know if you’ve heard of them or not, but they make a couple of things I use regularly.  We had a double session of four hours at the Cisco Cloud Innovation Center (CCIC) covering a lot of different topics.  For the sake of clarity, I’m going to split the coverage into two posts along product lines.  The first will deal with the Cisco Data Center team and their work on emerging standards.

Han Yang, Nexus 1000v Product Manager, started us off with a discussion centered around VXLAN.  VXLAN is an emerging solution to “the problem” (drawing by Tony Bourke):

The Problem - courtesy of Tony Bourke

The specific issue we’re addressing with VXLAN is “lots of VLANs”.  As it turns out, when you try to create multitenant clouds for large customers, you tend to run out of VLANs pretty quickly.  Seems 4096 VLANs ranks right up there with 640k of conventional memory on the “Seemed Like A Good Idea At The Time” scale of computer miscalculations.  VXLAN seeks to remedy this issue by wrapping the original frame in a VXLAN header that carries a 24-bit segment identifier (the VNI), leaving the original frame, 802.1q tag and all, intact inside:

VXLAN allows the packet to be encapsulated by the vSwitch (in this case a Nexus 1000v) and tunneled over the network before arriving at the proper destination, where the VXLAN header is stripped off to reveal the original frame underneath.  The hypervisor isn’t aware of VXLAN at all.  It merely serves as an overlay.  VXLAN does require multicast to be enabled in your network, but for your PIM troubles you get 16 million network segments instead of 4096.  That means you shouldn’t run out of them any time soon.
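To make that 24-bit ID concrete, here’s a minimal sketch of the 8-byte VXLAN header as laid out in the VXLAN draft: a flags byte, reserved bits, then the VNI itself.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given segment ID.

    Byte 0 is the flags field (0x08 = "VNI present"), bytes 1-3 are
    reserved, bytes 4-6 carry the 24-bit VNI, byte 7 is reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5000)
print(hdr.hex())          # 0800000000138800
print(f"{2**24:,} segments vs 4,096 VLANs")
```

Those 24 bits are the whole trick: 2^24 gives you 16,777,216 segments against the 4,096 VLAN IDs that 802.1q’s 12-bit field allows.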

Han gave us a great overview of VXLAN and how it’s going to be used a bit more extensively in the data center in the coming months as we begin to scale out and break through our VLAN limitations in large clouds.  Here’s hoping that VXLAN really takes off and becomes the de facto standard instead of NVGRE.  Because I still haven’t forgiven Microsoft for Teredo.  I’m not about to give them a chance to screw up the cloud too.

Up next was Victor Moreno, a technical lead in the Data Center Business Unit.  Victor has been a guest on Packet Pushers before, on show 54, talking about the Locator/ID Separation Protocol (LISP).  Victor talked to us about LISP as well as the difficulties in creating large-scale data centers.  One key point of Victor’s talk was about moving servers (or workloads, as he put it).  Victor pointed out that moving all of the LAN extensions like STP and VTP across the site was totally unnecessary.  The most important part of the move is preservation of IP reachability.  In the video above, this elicited some applause from the delegates because it’s nice to see that people are starting to realize that extending the layer 2 domain everywhere might not be the best way to do things.

Another key point that I took from Victor was about VXLAN headers and LISP headers and even Overlay Transport Virtualization (OTV) headers.  It seems they all have the same 24-bit ID field in the wrapper.  Considering that Cisco is championing OTV and LISP and was an author on the VXLAN draft, this isn’t all that astonishing.  What really caught me was the idea that Victor proposed wherein LISP was used to implement many of the features in VXLAN so that the two protocols could be very interoperable.  This also eliminates the need to continually reinvent the wheel every time a new protocol is needed for VM mobility or long-distance workload migration.  Pay close attention to a slide about 22:50 into the video above.  Victor’s Inter-DC and Intra-DC slide detailing which protocol works best in a given scenario at a specific layer is something that needs to be committed to memory for anyone that wants to be involved in data center networking any time in the next few years.

If you’d like to learn more about Cisco’s data center offerings, you can head over to the data center page on Cisco’s website at  You can also get data center specific information on Twitter by following the Cisco Data Center account, @CiscoDC.

Tom’s Take

I’m happy that Cisco was able to present on a lot of the software and protocols that are going into building the new generation of data center networking.  I keep hearing things like VXLAN, OTV, and LISP being thrown around when discussing how we’re going to address many of the challenges presented to us by the hypervisor crowd.  Cisco seems to be making strides in not only solving these issues but putting the technology at the forefront so that everyone can benefit from it.  That’s not to say that their solutions are going to end up being the de facto standard.  Instead, we can use the collective wisdom behind things like VXLAN to help us drive toward acceptable methods of powering data center networks for tomorrow.  I may not have spent a lot of my time in the data center during my formal networking days, but I have a funny feeling I’m going to be there a lot more in the coming months.

Tech Field Day Disclaimer

Cisco Data Center was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation as well as a pirate eyepatch and fake pirate pistol (long story).  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Infineta – Network Field Day 3

The first day of Network Field Day 3 wrapped up at the offices of Infineta Systems.  Frequent listeners of the Packet Pushers podcast will remember them from Show 60 and Show 62.  I was somewhat familiar with their data center optimization technology before the event, but I was interested to see how they did their magic.  That desire to see behind the curtain would come back to haunt me.

Infineta was primed to talk to us.  They even had a special NFD3 page set up with the streaming video and more information about their solutions.  We arrived on site and were ushered into a conference room where we got set up for the ensuing fun.

Haseeb Budhani (@haseebbudhani), Vice President of Products, kicked off the show with a quick overview of Infineta’s WAN optimization product line.  Unlike Riverbed or Cisco’s WAAS, Infineta doesn’t really care about optimizing branch office traffic.  Infineta focuses completely on the data center at 10Gbps speeds.  Those aren’t office documents, kids.  That’s heavy duty data for things like SAN replication, backup and archive jobs, and even scaling out application traffic.  Say you’re a customer wanting to do VMDK snapshots across a gigabit WAN link between sites on two different coasts.  Infineta reduces the amount of time the transfer takes while simultaneously letting you better utilize the links.  If you’re only seeing 25-30% link utilization in a scenario like this, Infineta can increase that to something on the order of 90%.  However, the proof for something like this doesn’t come from a PowerPoint case study.  That means demo time!  Here is one place where I think Infineta hit a home run.  Their demo was going to take several minutes to compress and transfer data.  Rather than saving the demo for the end of the presentation and boring the delegates with slowly filling progress bars, Infineta kicked it off early and let it run in the background.  That’s great thinking: it kept our attention focused on the merits of the solution even while the proof was building in the background.  While the demo was chugging along, Infineta brought in someone that did something I never thought possible.  They found someone that out-nerded Victor Shtrom.
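The utilization numbers alone tell the story. Using a made-up but plausible workload (10 TB of snapshot data over that coast-to-coast gigabit link), the arithmetic works out like this:

```python
def transfer_hours(data_tb: float, link_gbps: float, utilization: float) -> float:
    """Hours to move data_tb terabytes at a given average link utilization."""
    bits = data_tb * 8e12
    return bits / (link_gbps * 1e9 * utilization) / 3600

# 25% utilization (untuned WAN) vs 90% (optimized):
print(round(transfer_hours(10, 1.0, 0.25), 1))  # 88.9 hours
print(round(transfer_hours(10, 1.0, 0.90), 1))  # 24.7 hours
```

And that’s before any deduplication or compression shrinks the byte count itself; better utilization and fewer bits on the wire compound on each other.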

That fine gentleman is Dr. K. V. S. Ramarao (@kvsramarao), or “Ram” as he is affectionately known.  He was a professor of Computer Science at Pitt.  And he’s ridiculously smart.  I jokingly said that I was going to need to go back to college to write this blog post because of all the math that he pulled out in his discussion of how Infineta does their WAN optimization.  Even watching the video again didn’t help me much.  There’s a LOT of algorithmic math going on in this explanation.  The good Dr. Ramarao definitely earned his Ph.D. if he worked on this.  If you are at all interested in the theoretical math behind large-scale data deduplication, you should watch the above video at least three times.  Then do me a favor and explain it to me like I was a kindergartner.

The wrap up for Infineta was a bit of reinforcement of the key points that differentiate them from the rest of the market.  All in all, a very good presentation and a great way to keep the nerd meter way off the charts.

If you’d like to learn more about Infineta Systems, you can find them at  You can also follow them on Twitter as @Infineta.

Tom’s Take

Data centers in the modern world are producing exponentially more traffic.  They are no longer bound to a single set of hard disks or a physical memory limit.  We also ask a lot more of our servers when we task them with sub-second failover across three or more timezones.  Since WAN links aren’t keeping up with the explosion of data to move and the shrinking time it has to arrive in the proper place, we need to look at how to reduce the data being put on the wire.  I think Infineta has done a very good job of fitting into this niche of the market.  By basing their product on some pretty solid math, they’ve shown how to scale their solution to provide much better utilization of WAN links while still allowing for magical things like vMotion to happen.  I’m going to be keeping a closer eye on Infineta, especially when I find myself needing to migrate a server from Los Angeles to New York in no time flat.

Tech Field Day Disclaimer

Infineta was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a t-shirt, coffee mug, pen, and USB drive containing product information and marketing collateral.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Arista – Network Field Day 3

The third stop for the Network Field Day 3 express took us to the offices of Arista Networks.  I’m marginally familiar with Arista from some ancillary conversations I’ve had with their customers.  I’m more familiar with one of their employees, Doug Gourlay (@dgourlay) the Vice President of Marketing.  Doug was a presenter at Network Field Day 1 and I’ve seen him at some other events as well.  He’s also written an article somewhat critical of OpenFlow for Network World, so I was very interested to see what he had to say at an event that has been so involved in OpenFlow recently.

After we got settled at Arista, Doug wasted no time getting down to business.  If nothing else can be said about Arista, they’re going to win points by having Doug out front.  He’s easily one of the most natural presenters I’ve seen at a Tech Field Day event.  He clearly understands his audience and plays to what we want to see from our presentations.  Doug offered to can his entire slide deck and have a two hour conversation about things with just a whiteboard for backup.  I think this kind of flexibility makes for a very interesting presentation.  This time, however, there were enough questions about some of the new things that Arista was doing that slides were warranted.

The presentation opened with a bit about Arista and what they do.  Doug was surprisingly frank in admitting that Arista focuses on one thing – making 10 gigabit switches for the data center.  His candor in one area was a bit refreshing, “Every company that ever set out to compete against Cisco…and try to be everything to everybody has failed.  Utterly.”  I don’t think he said this out of deference for his old employer.  On the contrary, I think it comes from the idea that too many companies have tried to emulate the multiple product strategy that Cisco has done with their drive into market adjacencies and subsequently scaled back.  One might argue some are still actively competing in some areas.  However, I think Arista’s decision to focus on a specific product segment gives them a great competitive advantage.  This allows the Arista developers to focus on different things, like making switches easier to manage or giving you more information about your network to allow you to “play offense” when figuring out problems like the network being slow.  Doug said that the idea is to make the guys in the swivel chairs happy.  Having a swiveling chair in my office, I can identify with that.

After a bit more background on Arista, we dove head first into the new darling of their product line – the FX series.  The FX series is a departure from the existing Arista switch lines in that it uses Intel silicon instead of the Broadcom Trident chipset in the SX series.  It also sports some interesting features like a dual core processor, a 50GB SSD, and an onboard rubidium atomic clock.  That last bullet point plays well into one of Arista’s favorite verticals – financial markets.  If you can timestamp packets with a precision time stamp per IEEE 1588, you don’t have to worry about when they arrived at or exited the switch.  The timestamp tells you when to replay them and how to process them.  Plus, 300 picoseconds of drift a year sure beats the hell out of relying on NTP.  The biggest feature of the FX series though is the Field Programmable Gate Array (FPGA) onboard.  Arista has included these little gems in the FX series to allow customers even more flexibility to program their switches after the fact.  For those customers that can program in VHDL or are willing to outsource the programming to one of Arista’s partners, you can make the hardware on this switch do some very interesting things like hardware accelerated video transcoding or inline risk analysis for financial markets.  You’re only limited by your imagination and ability to write code.  While programming FPGAs won’t be for everyone out there, it fits in rather well with the niche play that Arista is shooting for.

At this point, Arista “brought in a ringer,” as Stephen Foskett (@SFoskett) put it.  Doug introduced us to Andy Bechtolsheim.  Andy is currently the Chief Development Officer at Arista, but he’s probably better known for another company he founded – Sun Microsystems.  He was also the first person to write a check to invest in a little Internet company then known as “Google, Inc.”  Needless to say, Andy has seen a lot of Internet history.  We only got to talk to him for about half an hour, but that time was very well spent.  It was interesting to watch him analyze things going on in the current market (like OpenFlow) and poke holes all over the place.  From anyone else it might sound like clever marketing or sour grapes.  But from someone like Bechtolsheim it sounded more like the voice of someone who has seen much of this before.  I especially liked his critique of academics creating a “perfect network” and seeing it fail in implementation because people don’t really build networks like that in real life.  There’s a lot of wisdom in the above video and I highly recommend a viewing or two.

The remainder of our time at Arista was a demo of EOS, the platform that runs their switches.  Doug and his engineer/developer Andre wanted to showcase some of the things that make EOS so special.  EOS currently runs on Fedora with a 2.6.32 Linux kernel at the heart of the operating system, and it lets you launch a bash shell to interact with the system directly.  One of the keys here is that you can use standard Linux programs to aid in troubleshooting – like, say, running tcpdump on a switchport to analyze the traffic going in and out.  Beats trying to load up Wireshark, huh?  The other neat thing was the multi-switch CLI enabled via XMPP.  By connecting to a group of switches, you can issue commands to each of them simultaneously to query things like connected ports, or even push upgrades to the switches.  This answered a lingering question I had from NFD1.  I had thought to myself, “Sure, having your switches join an XMPP chat room is cool.  But besides novelty, what’s the point?”  This shows me the power of using standard protocols to drive innovation.  Why reinvent the wheel when you can simply leverage something like XMPP to do something I haven’t seen from any other switch vendor?  You can even lock down the multi-switch CLI to prevent people from issuing a reload command to a switch group.  That prevents someone malicious from crashing your network at the height of business.  It also protects you from your own stupidity so that you don’t do the same thing inadvertently.  There are even more fun things in EOS, such as being able to display the routes on a switch as they existed at a given point in time.  Thankfully for the NFD3 delegates, we’re going to get our chance to play around with all the cool things that EOS is capable of, as Arista provided us with a USB stick containing a VM of EOS.  I hope I get the chance to try it out and put it through some interesting paces.
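The group-command-with-guardrails behavior can be sketched in a few lines of Python.  Everything here is invented for illustration – stand-in switch objects instead of a real XMPP session, and a made-up blocked-command list rather than Arista’s actual EOS configuration – but it shows the shape of the feature: one command fans out to the whole group, and destructive commands like reload get refused in group mode.

```python
# Toy model of the XMPP multi-switch CLI idea. FakeSwitch and the
# blocked-command set are stand-ins invented for this sketch; the real
# feature lives in Arista EOS and rides on an actual XMPP chat room.

BLOCKED_IN_GROUP = {"reload", "write erase"}  # destructive commands to refuse

class FakeSwitch:
    """Stand-in for one switch sitting in the group chat room."""
    def __init__(self, name: str):
        self.name = name

    def run(self, command: str) -> str:
        return f"{self.name}: ok ({command})"

def group_cli(switches, command: str):
    """Fan one command out to every switch in the group and collect replies.
    Commands that could take the whole group down are rejected up front."""
    if command.strip().lower() in BLOCKED_IN_GROUP:
        raise PermissionError(f"{command!r} is not allowed in group mode")
    return [sw.run(command) for sw in switches]

group = [FakeSwitch("leaf1"), FakeSwitch("leaf2"), FakeSwitch("spine1")]
replies = group_cli(group, "show version")  # one command, three answers
```

The design point is that the fan-out and the guardrail both live in one place, so a late-night fat-finger hits the filter before it ever reaches a switch.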

If you’d like to learn more about Arista Networks, you can check out their website.  You can also follow them on Twitter as @aristanetnews.

Tom’s Take

Odds are good that without Network Field Day, I would never have come into contact with Arista.  Their primary market segment isn’t one that I play in very much in my day job.  I am glad to have the opportunity to finally see what they have to offer.  The work that they are doing, not only with software but with hardware like FPGAs and onboard atomic clocks, shows attention to detail that is often missed by other vendors.  The ability to learn their OS in a VM on my machine is just icing on the cake.  I’m looking forward to seeing what EOS is capable of in my own time.  And while I’m not sure whether I’ll ever find an opportunity to use their equipment in the near future, chance does favor the prepared mind.

Tech Field Day Disclaimer

Arista was a sponsor of Network Field Day 3.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3.  In addition, they provided me a USB drive containing marketing collateral and copies of the presentation, as well as a copy of EOS in a virtual machine.  The USB drive also functioned as a bottle opener.  They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis.  The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.