Tech Field Day Recap: Day 2

Group pictures always take longer when you use cameras with film

The mythical HP Dirty Chai machine brings pilgrims from far and wide

iPerf is a great way to cause AP meltdown

Roundtables are great, even if they take place at square tables

AirMagnet needs a laptop with a minimum of 8 USB ports to really rock it

Do not underestimate the power of Diet Snapple Peach Iced Tea

Hands-on demos rock the party

Fountain pens hold the key to my future lottery success

The Underhill account is alive and well at Antonella’s

Picking up the Tech Field Day tab is an expensive proposition at best

And so ends another fine day of tech-y fieldness in partly cloudy California.  Good times were had by all.  New friends were made.  Old friends were rekindled.  Alcohol was consumed on occasion.  Last but not least, knowledge was disseminated and consumed by all the delegates to be digested slowly over the course of the next few days, like a fine meal of gnocchi and cannolis.

I have a lot to write about and a lot to catch up on.  Thanks to the graciousness of the crew from Wireless Tech Field Day, I have the opportunity to learn more about something that interests me and can be useful to many.  I will spend the next few weeks talking about all the things I’ve learned in the past 48 hours and hopefully giving you some insights and discussion topics.

Tom’s Take

Tech Field Day isn’t about technology, or vendors, or fine Italian dining.  It’s about people.  Meeting great people and talking about topics ranging from wireless spectrum analysis to animated GIF manufacturing is what really makes this event so special.  If you are at all interested in being involved, get over to the Gestalt IT website and let us know.  It’s the first step into a much more connected community and the kind of camaraderie that makes our little industry so much fun to be involved in.

Wireless Tech Field Day Recap: Day 1

Greg is unfamiliar with a substance known as “gravy”

Orville Redenbacher makes a tasty Wi-Fi interference detector

Metageek has the lunchbox all the kids at school want

Eating your own dogfood can be rough in beta

802.11u is something I need to research more

Devin Akin should be the new spokesman for Red Bull

Andrew looks stunning in gold and glitter

AP hide-and-seek works even better when the AP is turned on

One day, I will get to see the computers in the History Museum

Claire and Matthew love taking the scenic route

Tech Field Day Wireless Day 1 is in the books.  Lots of good info, amped presenters, and engaging demos all around.  I once again learned that I have a lot to learn, even about something I thought I was comfortable with.  The amount of knowledge that I am osmosing from the excellent delegates is going to give me a lot to think about and chew on for a while to come.  It’s a very different feel here versus TFD #5, what with all the wireless knowledge concentrated into one room.  Vertical Field Days are a hoot.

If you would like to follow along with the rest of the gang, there are several ways to get engaged.  You can head over to http://www.techfieldday.com and watch the live video stream to see if I’ve lost any more hair this time around.  You can also follow the official Tech Field Day Twitter account @TechFieldDay for updates about what’s going on.  If you search for the hashtag #TechFieldDay on Twitter, you can see the delegates discussing the presentations in real time as well as see the feedback from the presenting companies.  If you have any questions or comments about what you see, don’t hesitate to use the #TechFieldDay hashtag to get our attention.  Don’t forget that Tech Field Day is as much about you as it is anything else.  The more knowledge that you can contribute to the gestalt, the better it gets.

802.11Nerd – Wireless Field Day

I guess I made an impression on someone in San Jose.  Either that, or I’ve got some unpaid parking tickets I need to take care of.  At any rate, I have been invited to come to San Jose March 16th-18th for the first ever Wireless Field Day!  This event grew out of Tech Field Day thanks to the influence of Jennifer Huber and Stephen Foskett.  Jennifer and Stephen realized that a Field Day focused on wireless technologies would be a great way to gather the leading wireless bloggers in the industry in one place and see what happens.  That very distinguished list includes:

Marcus Burton CWNP @MarcusBurton
Samuel Clements Sam’s wireless @Samuel_Clements
Rocky Gregory Intensified @BionicRocky
Jennifer Huber Wireless CCIE, Here I Come! @JenniferLucille
Chris Lyttle WiFi Kiwi’s Blog @WiFiKiwi
Keith Parsons Wireless LAN Professionals @KeithRParsons
George Stefanick my80211 @WirelesssGuru
Andrew vonNagy Revolution Wi-Fi @RevolutionWiFi
Steve Williams WiFi Edge @SteveWilliams_

You can find the full list HERE.  This list is also a handy one in case you need wireless gurus to follow on Twitter.  I’m hoping that I can pick their brains during our three days together to help refine my wireless skills, as I am becoming more and more involved in wireless designs and deployments.

After our last Tech Field Day, a couple of people wondered why we bothered flying everyone out to California to listen to these presentations when this was something that could easily be done over streaming video and chat room questions, or perhaps WebEx.  I agree that many of the presentations could have been handled over some kind of remote medium.  However, many of the best reasons to have a Tech Field Day never made it on camera.  By gathering all of these minds together in one place to discuss technologies, you drive critical thinking and innovation.  For instance, I had taken for granted that most people in the IT industry knew we needed to move to IPv6.  However, Curtis Preston opened my eyes to the server admin side of things during a non-televised lunch discussion at TFD 5.  Some of our roundtable discussions were equally enlightening.  The point is that Tech Field Day is more than just the presentations.  Ask yourself this:  Given the chance to have a WebEx with the President of the US or to fly to Washington, D.C. and meet him in person, which would you rather do?  You can have the same discussion with him over the Internet, but there’s just something about meeting him in person that can’t be replicated over a broadband link.

How Do I Get Involved With Tech Field Day?

I’m going to spill some secret sauce here.  The secret to getting into a Tech Field Day doesn’t involve secret payoffs or a good-old-boy network.  What’s involved is much easier than all that.

1.  Read the TFD FAQ and the Becoming a Field Day Delegate pages first and foremost.  Indicate your desire to become a delegate.  You can’t go if you don’t tell someone you want to be there.  Filling out the delegate form submits a lot of pertinent information to Gestalt IT that helps in the selection process.

2.  Realize that the selection process is voted on by past delegates against a set of criteria.  In order to be the best possible delegate for a Tech Field Day, you have to be an open-minded blogger willing to listen to the presentations and think about them critically.  There’s no sense in bringing in delegates who will refuse to listen to a presentation from Arista because all they’ve ever used is Force10 and they won’t accept that Arista might have good technology.  If you want to learn more about all the products and vendors out in the IT ecosystem, TFD is the place for you.

3.  Write about what you’ve learned.  One of the hardest things for me after Tech Field Day 5 was consolidating what I had learned into a series of blog posts.  TFD is a fire hose of information, and there is little time to process it as it happens.  Copious notes are a must.  As is having the video feeds to look at later to remember what your notes meant.  But it is important to get those notes down and put them up for everyone else to see.  Because while your audience may have been watching the same video stream you were watching live, they may not have the same opinion of things.  The hardest part of TFD 5 for me wasn’t writing about Druva and Drobo.  It was writing about Infoblox and HP.  These reviews had some parts where I was critical of presentation methods or information.  These were my feelings on the subjects and I wanted to make sure that I shared them with everyone.  Tech Field Day isn’t just about fun and good times.  Occasionally, the delegates must look at things with a critical eye and make sure they let everyone know where they stand.

Be sure to follow @TechFieldDay on Twitter for more information about Wireless Field Day as the date approaches in mid-March.  You can also follow the #TechFieldDay hashtag for live updates as the delegates tweet about them.  For those of you that might not want to see all the TFD-related posts, you can also use the #TechFieldDay tag to filter them out in most major Twitter clients.  I’m also going to talk to the delegates and see if having an IRC chatroom is a good idea again.  We had a lot of good sidebar discussion going on during the presentations, but I only want to keep this aspect of things if it provides value for both the delegates and those following along online.  If you have an opinion about how the Internet audience can get involved, don’t hesitate to let me know.

Tech Field Day Disclaimer

Tech Field Day is made possible by the sponsors.  Each of the sponsors of the event is responsible for a portion of the travel and lodging costs.  In addition, some sponsors are responsible for providing funding for the gatherings that occur after the events are finished for the day.  However, the sponsors understand that their financing of Tech Field Day in no way guarantees them any consideration during the analysis and writing of reviews.  That independence allows the delegates to give honest and direct opinions of the technology and the companies that present it.

Tech Field Day – HP

The final presenters for Tech Field Day 5 were from HP.  HP presented on two different architectures that at first seemed somewhat unrelated.  The first was their HP StoreOnce data deduplication appliances.  The second was an overview of the technologies that comprise the HP Networking converged networking solutions.  These two technologies are integral to the future of the datacenter solutions offered by HP.

After a short marketing overview about HP and their direction in the market, as well as reinforcement of their commitment to open standards (more on this later), we got our first tech presentation from Jeff DiCorpo.  He talked to us about the HP StoreOnce deduplication appliances.  These units are designed to sit inline with your storage and servers and deduplicate the data as it flies past.  The idea of inline dedupe is quite appealing to those customers that have many remote branch offices and would prefer to reduce the amount of data being sent across the wire to a central backup location.  By deduping the data in the branch before sending it along, the backup windows can be shorter and the cost of starving other applications with heavy data usage can be avoided.  I haven’t really been delving into the backup solutions focused on the datacenter, but as I heard about what HP is doing with their line of appliances, it started to make a little more sense to me.  The trend appears to be one where data is being centralized again in one location, much like the old days of mainframe computing.  For those locations that don’t have the ability or the need to centralize data in a large SAN environment, the HP StoreOnce appliances can shorten backup times for that critical remote site data.  The appliances can even be used inside your datacenter to dedupe the data before it is presented to the backup servers.  The possibilities for deduplication seem endless.  My networking background tends to have me thinking about data in relatively small streams.  But the more backup data I encounter that needs priority treatment, the more I think that some kind of deduplication software or hardware is needed to reduce those large data streams.  There was a lot of talk at Tech Field Day about dedupe, and the HP solution appears to be an interesting one for the datacenter.
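
To make the inline dedupe idea a bit more concrete, here’s a toy sketch of block-level deduplication.  Fair warning: this is my own illustration of the general technique, not HP’s StoreOnce implementation, which uses far smarter chunking than fixed-size blocks.

```python
import hashlib
import os

BLOCK_SIZE = 4096   # fixed-size blocks; real appliances use variable-size chunking

def backup(data: bytes, seen: set) -> int:
    """Hash each block; only blocks we've never seen have to cross the WAN.
    Returns the number of bytes actually 'sent'."""
    bytes_sent = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:          # new block: record the hash, send the data
            seen.add(digest)
            bytes_sent += len(block)
        # known block: only a tiny reference needs to travel
    return bytes_sent

seen = set()                            # persists across backup runs
monday = os.urandom(200_000)            # stand-in for a branch office's data
print(f"Monday's backup sent  {backup(monday, seen):>7} bytes")
tuesday = monday + b"a few new records"
print(f"Tuesday's backup sent {backup(tuesday, seen):>7} bytes")  # only the changed tail
```

Monday’s run sends everything; Tuesday’s run sends only the final block that changed, which is the whole point of doing this in the branch before the WAN.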

Afterwards, Jay Mellman of HP Networking talked to us about the value proposition of HP Converged Networking.  While not a pure marketing overview, there were the typical case studies and even a “G” word printed in the bottom corner of one slide.  Once Jay was finished, I did ask a few questions about the position of HP Networking in regards to their number one competitor, Cisco.  Jay admitted that HP is doing its best to force Cisco to change the way they do business.  The Cisco quarterly results had been released while I was at TFD, and the fact that revenue was down was not lost on HP.  I asked Jay about the historical position of HP Networking (formerly ProCurve) and his stance that an edge-centric design is a better model than Cisco’s core-focused guidelines.  Having worked with both sets of hardware and seen reference documentation from each vendor, I can say that there is most definitely disagreement.  Cisco tends to focus its designs around strong cores of Catalyst 6500 or Nexus 7000 switches.  The access layer tends to be simple port aggregation where few decisions are made.  This is due to the historical advantage Cisco has enjoyed with its core products.  HP has always maintained that keeping the intelligence of the network out at the edge, what Cisco would term the “access layer”, is what allows them to be very agile and keep the processing of network traffic closer to the intended target.  I think part of this edge-centric focus has been because the historic core switching offerings from HP have been somewhat spartan compared to the Cisco offering.  I think this situation was remedied with the acquisition of 3Com/H3C and their A-series chassis switches.  This gives HP a great platform to launch back into the core.  As such, I’ve seen a lot more designs from HP that are beginning to talk about core networking.  Who’s right in all this?  I can’t say.  This is one of those OSPF vs. IS-IS kinds of arguments.  Each has its appeal and its deficiencies.

After Jay, we heard from Jeff about the tech specs of the A-series switches.  He talked about the support HP has for open standards in the datacenter.  Casually mentioned was the support for standards such as TRILL and QCN, but not for Cisco FabricPath.  As expected, Jeff made sure to point out that FabricPath is Cisco proprietary and isn’t supported by the A-series.  He did speak about Intelligent Resilient Framework (IRF), which is a technology HP uses to unify the control plane of a set of switches so they appear as one unified fabric.  To me, this sounds a lot like the VSS solution that Cisco uses on their core switches.  HP is positioning this as an option to flatten the network by creating lots of trunked (EtherChanneled) connections between the devices in the datacenter.  I specifically asked if they were using this as a placeholder until TRILL is ratified as a standard.  The answer was ‘yes’.  As IRF is a technology acquired with the H3C purchase, it only runs on the A-series switches.  In addition, there are enhancements above and beyond those offered by TRILL that will ensure IRF is still used even after TRILL is finalized and put into production.  So, with all that in mind, allow me to take my turn at Johnny Carson’s magnificent Carnac routine:

The answer is: Cisco FabricPath OR HP IRF

The question? What is a proprietary technology used by a vendor in lieu of an open standard that allows a customer to flatten their datacenter today while still retaining several key features that will allow it to be useful even after ratification of the standard?

The presentation continued with the trends and technology in the datacenter for enabling multi-hop Fibre Channel over Ethernet (FCoE) and the ability of the HP FlexFabric modules to support many different types of connectivity in the C7000 blade chassis.  I think this is where the Cisco/HP battle is going to be won or lost.  By racing toward a fast and cost-effective multi-hop FCoE solution, HP and Cisco are each hoping to have a large install base ready when the standards become finalized.  When that day comes, they will be able to work alongside the standard and enjoy the fruits of a hard-fought war.  Time will tell whether this approach will work or who will come out on top, if anyone.

I think HP has some interesting arguments for their datacenter products.  They’ve also been making servers for a long time, and they have a very compelling solution set for customers that incorporates storage, which is something Cisco currently lacks without a partner like EMC.  What I would like to see HP focus more on in their solution presentation is telling me what they can do and what they are about.  They should spend a little less time comparing themselves to Cisco and taking every opportunity to mention how Cisco doesn’t support standards and has no previous experience in the server market.  To be honest, I don’t hear that from Cisco or IBM when I talk to them about servers or storage or networking.  I hear what they have to offer.  HP, if you can give me all the information I need to make my decision and your product is the one that fits my needs best, you shouldn’t have to worry about what my opinion of your competitors is.

Tech Field Day Disclosure

HP was a sponsor of Tech Field Day 5, and as such was responsible for a portion of my airfare and hotel accommodations.  In addition, HP provided their Executive Briefing Center in Cupertino, CA for the Friday presentations.  They also served a great hot breakfast and allowed us unlimited use of their self-serve Starbucks coffee, espresso and chai machine.  We returned the favor by running it out of steamed milk for use in the yummy Dirty Chai.  HP also provided the delegates with a notepad and pen.  At no time did HP ask for nor were they promised any kind of consideration in this article.  Any and all analysis and opinions are mine and mine alone.

Tech Field Day – InfoBlox

Infoblox was our second presenter on Day 2 of Tech Field Day 5.  They came into the HP Executive Briefing Center and instead of firing up the overhead projector, they started pulling the whiteboard over to the center of the room.  Once they got started, the founder and CTO, Stu Bailey, informed us that they would have zero slides.  No slides? Yay!  Here’s someone that was paying attention to Force 10 from Net Field Day.  No slides, just a whiteboard and some really brilliant guys.

As I am sitting here typing this article, I’m listening to the audio of the presentation in the background.  I think Stu is probably a very brilliant guy, and starting a company is one of the most challenging things a person can do.  With that being said, I think Stu suffers from a problem I have from time to time: Resolution.  I often tell stories to people and I misjudge the resolution of the information I’m imparting.  My stories are utterly fascinating and I love giving out the little details and settings.  However, my audience is less impressed with my story.  They get distracted and lost waiting for me to wrap things up.  I get caught up in the minutia and forget to tell the story.  I freely admit that I have this problem, and I do my best to avoid it when I’m giving presentations.  As I listen to the audio of the session, I’m reminded of this.  I love history lessons more than anyone else in the world.  In fact, I have the History Channel on my favorites list.  However, in this kind of technical session with no slides to keep my focus, the firehose of the history of Infoblox is kind of overwhelming.  Whiteboarding works really well when you are putting topics out there that your audience is going to ask questions about so you can demonstrate and expand topics on the fly.  During a history lesson, many of the things that you are discussing are pretty much agreed upon by people, so you don’t have any real explanation to display.  I think some of the people started tuning out since the what of Infoblox was getting lost in the why of Infoblox.  Stu, if you want to help yourself for the next presentation, you need to hook your audience.  Give us the problem up front in a couple of minutes.  Let me try based on what I heard and saw:

In today’s world, network infrastructure is siloed and hard to manage.  The number of people required to be involved in new system deployments and change management makes it difficult to coordinate these activities.  In addition, the possibility exists that a misconfiguration or forgotten step could create downtime beyond expectations.  What Infoblox is trying to bring to the table is the ability to automate these processes so that the deployment and management of the network and its associated services can be streamlined.  Changes can be delegated to less skilled personnel so that the network is no longer entirely dependent on one person’s knowledge of a particular service or configuration.  Infoblox allows you to concentrate on making your network run optimally through standard repeatable processes.  Infoblox also allows you to see your network and service configurations at a glance.

Folks, that is Infoblox in a nutshell, at least as I see it.  Infoblox draws all of your DHCP and DNS servers together into an automated database that allows you to make changes across your network and its services instantly without the need to make the changes individually.  This would have been a great lead-in to the second part of the presentation, where we got to see how Infoblox works.  Based on discussions I had with my networking and systems brethren, it appears that Infoblox is attacking the aspect of a network that doesn’t have standardized procedures for implementation and change management.  In a mid-to-large size company, bringing a new DNS server online or implementing a branch office server are step-by-step processes that follow a detailed checklist.  Once all the checks are made, the change or implementation is complete.  Infoblox automates the checklist so that a few clicks can make those changes without the chance of missing a step.  Whether or not your environment needs that kind of oversight is a question you have to answer for yourself.  I can see applications where some or all of the features of Infoblox would be a godsend.  To be honest, I’d really like to see it in action before I pass total judgment on the software itself.  I just wish this message had been put out there for us to digest as we investigated the whys of Infoblox.  A history lesson explaining the need for each piece of Infoblox should have been tied back to an overview similar to the one above, where each piece was introduced.  As the history of the individual pieces is revealed, they can be tied back to the relevant section of the overview.  Think about it like a Chekhov’s Gun for Presentations:  The DNS IPAM seen in section two, minute one should first be seen no later than section one, minute two.
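
Back to the automation story for a second.  To put the checklist idea in concrete terms, here’s a tiny sketch of “checklist as code,” where every step verifies itself and a failure stops the change cold.  This is purely my own illustration of the concept, not Infoblox’s actual API; the stub DNS class just makes the sketch self-contained.

```python
class StubDNS:
    """Stand-in for a real DNS service so the sketch runs on its own."""
    def __init__(self):
        self.records = {}
    def add_a_record(self, name, ip):
        self.records[name] = ip
        return True
    def add_ptr_record(self, ip, name):
        self.records[ip] = name
        return True
    def resolve(self, name):
        return self.records.get(name)

def provision_host(name, ip, dns):
    """Run every step of the change checklist, in order, with verification.
    A human can forget step three on a Friday afternoon; this can't."""
    steps = [
        ("create A record",       lambda: dns.add_a_record(name, ip)),
        ("create PTR record",     lambda: dns.add_ptr_record(ip, name)),
        ("verify forward lookup", lambda: dns.resolve(name) == ip),
    ]
    for label, step in steps:
        if not step():
            raise RuntimeError(f"change halted: '{label}' failed")
        print(f"done: {label}")

provision_host("branch-fs01.example.com", "10.20.30.40", StubDNS())
```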

After the Infoblox presentation, the next product on the block was NetMRI.  Now, I’ve heard of this product before.  However, the last time I heard about it, the association was with Netcordia and Terry Slattery, CCIE #1026.  As soon as I heard that Infoblox had purchased Netcordia and the NetMRI software, the sudden move of Terry to Chesapeake Netcraftsmen made a little more sense to me.  NetMRI is a great tool and appears to be the heart of the Infoblox offerings, the engine that things like IPAM for DNS/DHCP and the Infoblox Grid use to make network changes.  Those familiar with NetMRI know that it allows you to collect statistics on your devices and monitor changes to the configurations of those devices.  By leveraging the NetMRI tools in the Grid product, Infoblox allows you to monitor and make changes to a wide variety of devices as needed.  This helps add more to their existing IPAM offerings.

If they really want to kill the market with this, they need to drive home the need for IPAM and network configuration management to their customers.  Most people are going to look at this and say, “Why do I need it?  I can do everything with Windows tools or Excel spreadsheets.”  That is the historical kind of thinking that has allowed networks to spiral out of control to the point where they need complex management tools to keep them running at peak efficiency.  I’m sure Terry saw this when he created NetMRI and made it his mission to get this kind of thing put into the network devices.  By adding this product to their portfolio, Infoblox needs to drive home the need for ALL devices to be managed and documented.  If they can do that, I think they’re going to find their message much more succinct and the value a lot easier to present.  I think you guys have a great product that is needed.  You just have to let me know why I need it, not just why you made it.

If you’d like to learn more about the offerings from Infoblox, head over to their website at http://www.infoblox.com.  You can also follow them on Twitter as @infoblox.

Tech Field Day Disclaimer

Infoblox was a sponsor of Tech Field Day 5, and as such they were responsible for a portion of my airfare and hotel accommodations.  They did not ask for nor were they promised any kind of consideration in the writing of this article.  Any and all of the opinions and analysis expressed herein are mine and mine alone.

Tech Field Day – Netex

Day 2 of Tech Field Day was powered by Starbucks.  Starting off at the hotel with a visit to the Starbucks counter was a no-brainer, but upon arrival at the wonderful HP Executive Briefing Center in Cupertino, CA, the Holy Grail of caffeine addicts was discovered – a self-serve Starbucks espresso machine.  As such, many fabulous Dirty Chai drinks were consumed during the day, which may have led to my perking up and asking more questions.  Maybe it was that, or maybe we finally got to the networking part that I knew a little more about.  I’m still blaming the Dirty Chai, though.

Netex was first on deck.  And they nailed it.  Not necessarily their message, but their presentation.  They kept their message short.  They had a hands-on example that kept us awake first thing in the morning.  They tempted us with beer.  They didn’t talk longer than they needed to and left plenty of time for questions.  And they still got done early.  Spot on, guys.  I’ll go on record as saying that it’s not necessary to fill your entire presentation with talking.  You need to leave some time for questions that might come up during the presentation as well as questions at the end after you’ve delivered your message.  The reason there are time constraints on presentations is to keep people from rambling on forever.  I don’t mind staying five or ten minutes extra so long as the overtime was due to a lot of good questions.  At the same time, only leaving two or three minutes at the end of a two-hour slide deck due to constant chattering isn’t going to make any friends.

Netex revolves around a product called HyperIP.  HyperIP is a virtual machine that does something rather interesting.  It attempts to fix the TCP message window / global synchronization issue by avoiding it.  For those not familiar, TCP likes to increase the window size as data begins transmitting so as to use the link in the most efficient manner.  However, eventually TCP will saturate the receiver with data, and the receiver will ask the sender to back off.  TCP does this by backing down halfway, then ramping up again as the sender catches up with the data stream.  Imagine reading me a list of numbers over the phone.  You may start out by reading groups of 3 numerals, then as I get comfortable you may move to groups of 5 then 6, constantly increasing the number of numerals per group.  Eventually, I’m not going to be able to remember all the numerals in each group, so I’m going to ask you to stop and go back to smaller groups.  In TCP we try to fix issues like this with things such as Weighted Random Early Detection (WRED).  WRED tries to avoid forcing the sender to back off totally by instead dropping less critical packets in the stream and having them retransmitted later.  As such, the TCP window size can be kept as large as possible for as long as possible, allowing the maximum amount of data to be transmitted in the most efficient way possible for a given link.  It should be noted that WRED only works on TCP, because TCP acknowledgements allow lost packets to be detected and retransmitted.  UDP can’t use WRED since those packets would be lost and never retransmitted (more on this in a minute).
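
To show what global synchronization looks like versus a WRED-style random early drop, here’s a toy simulation I threw together.  To be clear, this is my own back-of-the-napkin model and has nothing to do with Netex’s code; real TCP and WRED behavior is far more nuanced.

```python
import random

def simulate(capacity=100, flows=4, ticks=200, wred=False):
    """Toy AIMD model: each flow adds 1 to its window per tick and halves
    it when it sees a drop. Not a real TCP stack, just the sawtooth."""
    cwnd = [10.0] * flows
    utilization = []
    for _ in range(ticks):
        load = sum(cwnd)
        utilization.append(min(load, capacity) / capacity)
        if wred:
            # WRED-ish: drop probability ramps up as the link fills, so
            # flows back off one at a time instead of all at once
            p = max(0.0, (load - 0.7 * capacity) / (0.3 * capacity))
            cwnd = [w / 2 if random.random() < p else w + 1 for w in cwnd]
        elif load > capacity:
            cwnd = [w / 2 for w in cwnd]   # tail drop: everyone halves together
        else:
            cwnd = [w + 1 for w in cwnd]
    return sum(utilization) / ticks

random.seed(1)
print(f"tail-drop average link utilization: {simulate():.0%}")
print(f"WRED-ish average link utilization:  {simulate(wred=True):.0%}")
```

With tail drop, every flow hits the wall and halves at the same instant, so the link sits half-empty while they all ramp back up together.  Random early drop staggers the backoffs and keeps the sawtooth teeth out of phase.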

HyperIP acts as a gateway for your server devices.  Instead of your backup server pointing to the WAN router, it points to the HyperIP VM.  The HyperIP VM then terminates the TCP stream and “caches” the data.  It contacts another HyperIP VM at the destination site and negotiates the most efficient window size.  It then transmits the data between sites using large UDP packets.  The analogy they used was that instead of transporting individual bottles of beer, the bottles were packaged into “kegs” and transported more efficiently.  When I asked how the HyperIP VMs dealt with packet loss, since UDP has no loss recovery of its own, I was informed that the HyperIP system kept track of the UDP packets on both sides in a kind of lookup table, so if one was missed it could be retransmitted.  Once the UDP packets arrive on the other side of the WAN link, they are transmitted via TCP to the destination server.
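
That mechanism sounds like reliability rebuilt on top of UDP.  Here’s a rough sketch of what a retransmission lookup table might look like.  This is strictly my own guess at the general shape of the idea, not Netex’s actual protocol.

```python
import random

class Sender:
    """Keeps every unacknowledged datagram in a lookup table so anything
    the network eats can be sent again -- reliability on top of UDP."""
    def __init__(self):
        self.unacked = {}            # seq -> payload: the "lookup table"
        self.next_seq = 0
    def queue(self, payload):
        self.unacked[self.next_seq] = payload
        self.next_seq += 1
    def on_ack(self, seq):
        self.unacked.pop(seq, None)  # delivered; forget it
    def outstanding(self):
        return list(self.unacked.items())

class Receiver:
    def __init__(self):
        self.got = {}
    def on_datagram(self, seq, payload):
        self.got[seq] = payload
        return seq                   # ack goes back to the sender

def lossy_link(datagrams, loss=0.3):
    """A WAN that drops 30% of everything, to give the table a workout."""
    return [d for d in datagrams if random.random() > loss]

random.seed(7)
tx, rx = Sender(), Receiver()
for i in range(10):
    tx.queue(f"keg {i}".encode())
rounds = 0
while tx.unacked:                    # resend whatever hasn't been acked yet
    for seq, payload in lossy_link(tx.outstanding()):
        tx.on_ack(rx.on_datagram(seq, payload))
    rounds += 1
print(f"delivered all {len(rx.got)} kegs in {rounds} rounds despite 30% loss")
```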

The current use case to me seems to be for backup traffic or other large, bursty types of communication.  Netex admitted that this technology won’t do much for smaller conversations, such as HTTP traffic.  It also only affects TCP, UDP, and ICMP, so more esoteric protocols are out (sorry AppleTalk users).  I’m having an issue with the way HyperIP actually does its job.  It seems to me like they are re-inventing the wheel and trying to accomplish something that Network Cool Guys can accomplish with proper QoS design.  In fact, the traffic patterns shown in the presentation after the application of HyperIP look an awful lot like the traffic patterns after you apply WRED to a WAN link.  HyperIP does have the ability to add some compression to the data stream, so there is the opportunity to reduce the amount of data being sent.  For those that might be using some basic QoS on slow links already and might be thinking about implementing a HyperIP setup, be sure you are classifying your VoIP traffic as finely as possible by using DSCP marking at the source or marking it by protocol/port.  I’d hate to see your priority queue fill up with HyperIP kegs and starve out the CEO’s conference call.
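
On the marking-at-the-source point: if the application team controls the sender, the mark can go right on the socket.  A quick example in Python (this works on Linux, at least), setting DSCP EF (46), the usual VoIP marking; the ToS byte is just the DSCP value shifted left two bits.

```python
import socket

DSCP_EF = 46                    # Expedited Forwarding, the usual VoIP marking
TOS_BYTE = DSCP_EF << 2         # DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)   # mark at the source
sock.sendto(b"rtp-ish payload", ("192.0.2.10", 16384))        # documentation address
```

With the source marked, your priority queue can match on DSCP instead of guessing by port, and the HyperIP kegs stay out of the voice queue.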

I can see a use case for HyperIP in situations where your company doesn’t have a QoS-focused technical person but has a lot of depth in the server admin area.  Matthew Norwood even called it “QoS for the Less Fortunate”.  I’m not saying that it’s not a fine product or that it doesn’t have its uses.  I’m just saying that it can’t do anything for me that I can’t do already from the CLI.  But try it out yourself if you’re curious.  There should be a 30-day free trial available by the end of February.  Just remember that you’re going to need to buy your VMs in pairs to make them work properly.

If you are interested in learning more about Netex and their HyperIP offering, head over to their website at http://www.netex.com.  You can also follow them on Twitter as @HyperIP.

Tech Field Day Disclaimer

Netex was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  In addition, they provided a 1 GB USB drive containing information about their product, as well as a bottle opener.  They also may or may not have allowed us the use of their practical example, which consisted of an ice chest filled with cold Corona beer.  I can neither confirm or deny that these beers were consumed by the pool at the hotel after the end of Tech Field Day, day 2.  Netex neither asked for nor were they granted any consideration for this article.  The opinions and analysis expressed herein are mine and mine alone.

Tech Field Day – Xangati

Monitoring of key devices in your network is a very big business.  Knowing what’s going on with your devices can keep you in the loop when troubles start to happen.  Almost as important, though, is event correlation.  Taking data from multiple sources and presenting it in such a way that you can see how the minor events leading up to a problem contributed to it is critical in larger infrastructures.  Many companies have software designed for this purpose, and one of them was kind enough to present to us at Tech Field Day 5.

Xangati is focused on virtualization, and their software acts as a dashboard that collects information from various sources in your network, from ESX boxes to network interfaces.  It presents this information to you in an easy-to-read format, the oft-used “single pane of glass” metaphor.  One neat thing that their software allows you to do is go back in time to see the events taking place right up to the point where your VMs went belly up, for instance.  This DVR-like functionality is very helpful when you find yourself in a situation where no one problem was the root cause of your issue, but instead you find yourself succumbing to the weight of multiple minor issues, the “Death by 1,000 Cuts” syndrome.  With Xangati, you can replay a mountain of data to find the root cause of your issue without needing to sift through endless router logs or VMware alerts.  One pane of glass means one source of easy-to-digest information.
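
The DVR metaphor maps pretty naturally onto a ring buffer of timestamped samples.  Here’s a minimal sketch of the idea, which is mine and not Xangati’s code; the sources and metric names are made up for illustration.

```python
import time
from collections import deque

class MetricsDVR:
    """Keep the last N samples in a ring buffer so the minutes leading up
    to a failure can be replayed after the fact, DVR-style."""
    def __init__(self, max_samples=100_000):
        self.buffer = deque(maxlen=max_samples)   # old samples fall off the back
    def record(self, source, metric, value):
        self.buffer.append((time.time(), source, metric, value))
    def replay(self, seconds_back):
        """Return every sample from the last N seconds, oldest first."""
        cutoff = time.time() - seconds_back
        return [s for s in self.buffer if s[0] >= cutoff]

dvr = MetricsDVR()
dvr.record("esx-01", "cpu_ready_ms", 1450)        # hypothetical sources/metrics
dvr.record("core-sw", "interface_errors", 312)
for ts, source, metric, value in dvr.replay(seconds_back=300):
    print(f"{ts:.0f}  {source:8} {metric}={value}")
```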

For the moment, Xangati appears to be focused on providing their services in a report-only mode.  At a roundtable afterwards, though, Sean Clark brought up the point that this could be viewed as a great framework for allowing some kind of automated DRS-type solution that draws on the firehose of information gathered by the current Xangati tool.  Is this something that might lead them to being a target ripe for acquisition?  Or is this a capability that might be developed in house at some point?  I can’t say for sure, but I know that getting good information about what’s going on in your network is the first step in being proactive about troubleshooting.  And based on what I’ve seen of Xangati’s tools, I think they’ve got the right idea to get the information to you when you need it the most.  I’m sure I’m going to take a second look at this product as time allows.

If you’d like to hear more about what Xangati has to offer, you can check them out at http://www.xangati.com or follow them on Twitter as @xangatipress.

Tech Field Day Disclaimer

Xangati was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  In addition, they were the sponsor of our Thursday night meal and trip to the Computer History Museum.  At no time did they ask for or receive any consideration in the writing of this article.  The opinions and analysis expressed herein are mine and mine alone.

Tech Field Day – Druva

I grew up on a farm.  My mother’s family includes many farmers.  This has afforded me some interesting opportunities.  One of these was watching a calf being born.  Just hours after birth, the calf can stand on its own.  It’s a magical experience that shows you how something so small can grow and change in such a short time.  And I got to experience something similar on Thursday afternoon at Tech Field Day 5.

Druva is a new company that officially launched at TFD5.  Now, they weren’t a “brand new” company with a lot of dreams and talk.  Much like the calf above, they’ve found their legs and are standing on their own now.  They have taken an interesting approach to backup technology and used it to address a segment of the market which I honestly hadn’t thought of before.  And, I really like their name.  “Druva” is the name for Polaris, the North Star in Hindu mythology.  For centuries, people have depended on the North Star to guide them.  Yet it is a simple resource that is always there and available when needed.  These were the guidelines that helped Druva develop their offering.

Druva is attacking the endpoint backup market.  They believe that the hardest devices in your environment to keep safe are not the servers and SANs, but the user laptops and desktops and mobile devices.  There is a large amount of data contained on these devices that is rarely backed up and can lead to severe downtime in the event of a theft or a technical problem of some kind.  As well, more and more of this data is being squirreled away on iPads and iPhones, devices that are difficult to back up reliably from an enterprise admin perspective.

Druva is ready to prove what they say.  Their server/client software download is a mere 40MB.  In a world today where I can barely make 10-slide presentations smaller than that, Druva can protect my laptop from data loss in the event it gets thrown into the office lobster tank.  After installing the program on a server, you can configure lots of different options to create and import user accounts that represent the target devices that need to be backed up.  Once created, you send an email to the user to validate them, and they download a client to their system to allow backups to begin.  Druva deduplicates the data before it’s ever sent from the client system, based on the fact that 80-90% of the data contained on corporate workstations is MS Office and MS Outlook.  By knowing how to efficiently hash that data and deduplicate it, they can streamline the backup process, allowing much less data to be sent over slow links and shortening the time the user is impacted by the backup window.
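
Here’s a rough sketch of how source-side dedupe can work in general: hash the chunks locally, ask the server which hashes it already has, and ship only the unknown ones.  This is my generic illustration of the technique, not Druva’s actual hashing scheme, and the chunk size is arbitrary.

```python
import hashlib
import os

CHUNK = 64 * 1024   # arbitrary chunk size for the sketch

class BackupServer:
    def __init__(self):
        self.store = {}                 # digest -> chunk, shared across all clients
    def which_are_missing(self, digests):
        return {d for d in digests if d not in self.store}
    def put(self, digest, chunk):
        self.store[digest] = chunk

def backup(data, server):
    """Hash locally, ask the server what it already has, ship only the rest."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    digests = [hashlib.sha256(c).hexdigest() for c in chunks]
    missing = server.which_are_missing(digests)   # only hashes cross the wire here
    sent = 0
    for digest, chunk in zip(digests, chunks):
        if digest in missing:
            server.put(digest, chunk)
            missing.discard(digest)     # never send the same chunk twice in one run
            sent += len(chunk)
    return sent

server = BackupServer()
report = os.urandom(512 * 1024)         # stand-in for a user's Office document
print(f"first laptop sent  {backup(report, server):>7} bytes")
print(f"second laptop sent {backup(report, server):>7} bytes")  # same doc elsewhere: 0
```

The second laptop with the same document sends nothing but hashes, which is how a “backup” of hundreds of megabytes can cross a slow link as a trickle.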

Druva’s live backup and restore demo was hampered not by any Druva technical challenges but by connectivity issues.  Their laptop was connected to a Cradlepoint personal hotspot device brought along by Stephen Foskett, and with all the other devices using the guest wireless and the Cradlepoint, the connection was saturated.  It almost felt like being on dial-up again.  I was impressed to see the amount of data being sent over the link, a scant 63MB.  I don’t know how big the folder was originally, but if it was a standard document folder containing hundreds of MB of data, there was definitely proof that the dedupe works.

We were able to perform a single file restore to one of the iPads Druva had brought along.  So, once again Apple saves the day.  All in all, I like this product and think it has some capabilities that are sorely missing from the backup solutions offered by some of their competitors.  And just like good tech people, the whiz kids at Druva aren’t resting on their laurels.  They were talking to us about branching out and finding new uses for this technology and new ways to think about backups for more than just endpoints.  I can’t wait to see how they grow and change in the coming months, and I wish them the best of luck in their endeavors.

A funny note about Druva.  We were having issues before the presentation figuring out which Twitter account was their main one, @druva or @druvainc.  We talked with Jaspreet and he told us that once upon a time, the name of the company was actually “Druvaa”.  One of their customers remarked that if they were really in the business of data deduplication, why did they have two A’s in their name? So Druva deduped their own name.  That’s dedication, folks.

If you are interested in checking out Druva, head over to their website at http://www.druva.com/ and download their product to try out.  You can also follow them on Twitter as @druvainc.

Tech Field Day Disclaimer

Druva was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  Druva did not ask for nor were they promised any consideration in this review.  My opinions and analysis are my own.

Tech Field Day – Drobo

Drobo, the company formerly known as Data Robotics, is one that has a long history with Tech Field Day. They were presenters at TFD 1 and have been associated with several since then. I hadn’t heard much about them prior to my TFD 5 trip, so I was quite eager to hear about some of their offerings.

After lunch at Drobo, we launched right into discussions of their products. Leading the charge was Mario Blandini, and to describe him as animated is a disservice. Mario is excited and ready to talk. He showed us a picture of him talking about Drobo in character as pitchman Billy Mays, blue shirt and all. That character fits him totally. Drobo also won my first annual “Fewest Slides with a Point” award, as they had a very quick deck of 4-5 slides that included a short video for intro purposes. After that, we killed the video feed for a whiteboard session that delved into some of the “secret sauce” that Drobo uses in their Beyond RAID technology. While I can’t talk about it, and in some cases didn’t quite get the really technical details, it did make me rethink how RAID works in legacy applications.  Drobo has put a lot of thought into their methods of drive utilization, and their whole concept of “beyond RAID” makes some sense to me.  I really think I’d need some more one-on-one time to totally get it down, as storage is not my first language.  As a side note, the whiteboard at Drobo was a pane of glass anchored to a beige wall. This scored cool points for form, but the markers were a little hard to read against the beige and sometimes didn’t make nice, clear marks. Should you be of the bent to go for the glass whiteboard for your home or office, be sure the background is bright to help those of us with terrible eyesight.

Once the cameras came back up, Drobo unveiled a new 12-bay storage appliance designed for business, the B1200i. This coincided with a new focus on this market driven by the tagline “Drobo Means Business”. It showed in the product development as well. The home-use products I saw previously appeared to be geared toward the consumer market, with unified housing and smaller form factors. The new Drobo 12-bay was designed to work with SMB/small enterprise setups, with rack mounting capability and removable FRU parts that don’t require the whole unit to be replaced when something breaks. It even has iSCSI support to allow it to be attached to Windows servers and VMware boxes easily. We were able to demo the unit, showing the capabilities of removing drives from the array and reinserting them out-of-order while the unit chugged right along playing a QuickTime movie trailer. Normally, reinserting a RAID drive in the wrong spot could be considered a Resume Generating Event (RGE), but Drobo has no problems with it at all.  Once the software rebuilt the array, all the pretty lights on the front went back to green and you’d never know anything happened.  Coupled with the fact that all the drives in the unit were of mismatched sizes, I was even more impressed.
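
For the curious, the commonly cited rule of thumb for single-drive redundancy across mismatched drives is that usable space is roughly the raw total minus the largest drive, since the pool has to be able to absorb the loss of any one disk.  A quick back-of-the-napkin version, which is my approximation and not Drobo’s actual BeyondRAID math:

```python
def usable_capacity_tb(drive_sizes_tb):
    """Rough rule of thumb for single-drive redundancy over mismatched
    drives: the pool must survive the loss of any one drive, so the
    largest drive is effectively held in reserve."""
    return sum(drive_sizes_tb) - max(drive_sizes_tb)

print(usable_capacity_tb([2, 2, 1, 1]))   # 6 TB raw -> roughly 4 TB usable
```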

We were also treated to a demo of the redesigned Drobo dashboard software. This slick-looking piece of software allows one to administer multiple Drobo units as well as view status such as capacity, health, and firmware levels. Everything pops up in a nice dashboard view, allowing you to drill down to the individual unit quickly. You can also launch a discovery process to go out and find all the units connected to your local subnet. This would be helpful in a case where you aren’t familiar with the network topology, or where someone might have plugged in a unit and forgotten how to contact it.  From a security standpoint, I was a little worried that it was so easy to discover the units.  Sure, you have to have a username and password to access them, but even knowing they are out there can give you a few avenues of attack.  If there were a way to turn off discovery, or simply get more information about which ports are being used by discovery so they can be disabled by us paranoid security types, it would help out.
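
For reference, subnet discovery like this is usually some variant of shouting on the broadcast address and collecting replies.  Here’s a generic sketch of the pattern; the port number and probe string are hypothetical, as I don’t know what Drobo actually uses.

```python
import socket

DISCOVERY_PORT = 50000   # hypothetical -- not Drobo's actual discovery port

def discover_units(timeout=2.0):
    """The generic subnet-discovery pattern: shout on the broadcast
    address and collect whoever answers before the timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b"ANYONE_OUT_THERE?", ("255.255.255.255", DISCOVERY_PORT))
    found = []
    try:
        while True:
            reply, addr = sock.recvfrom(1024)
            found.append((addr[0], reply))
    except socket.timeout:
        pass
    return found

for ip, banner in discover_units():
    print(f"found unit at {ip}: {banner!r}")
```

Blocking or firewalling that one port is exactly the knob I’d want as one of those paranoid security types.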

I was highly impressed with the ease of use of the unit, from both setup and maintenance aspects. This appears to be a unit that I can have at my home, or perhaps even in a small branch office, that can just be provisioned without the traditional RAID headaches. In fact, that type of low-tech maintenance is perfect for the person that needs to send a unit to a branch office in New Mexico that may not have a dedicated tech resource. Managing the unit with the dashboard software is simple, and should a problem develop with a drive, you can just ship a new one there and tell them to replace the funny-colored light instead of needing to walk them through the ritual of rebuilding RAID arrays. I’m considering pulling the trigger on ordering one of these puppies to store some of my important files at home, like the mountain of pictures my wife seems to have accumulated over the last few years. If you’re considering ordering one too, be sure to use the “DRIHOLLING” coupon code on their website for, well, the best Drobo deal ever (it’s case sensitive BTW).

If you are interested in learning more, head over to their website at http://www.drobo.com or check them out on Twitter as @drobo.

EDIT

If you are interested in getting a Drobo unit all for yourself, the good folks at Drobo have given me a discount code that’s good for the following discounts:

$50 off on Drobo 4-bay
$100 off on Drobo 4-bay with drives
$100 off on Drobo S & Drobo FS
$150 off on Drobo S and Drobo FS with drives
$150 off on DroboPro & DroboPro FS
$200 off DroboPro & DroboPro FS with drives

Just use the code DRIHOLLING (case matters).  And enjoy your new Drobo!

Tech Field Day Disclaimer

Drobo was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations. In addition, they provided lunch and the use of their facility for our sessions. We were also provided refreshment in the form of cupcakes with enough frosting to spackle my bathroom, which were quite delicious. Drobo did not ask for, nor did they receive any consideration for this article. The opinions expressed here are my own and were not influenced in any way by Drobo.

Tech Field Day – Symantec

Our first session at Tech Field Day 5 was a trip to the Symantec campus to hear about some interesting backup solutions from both NetBackup and BackupExec.  I’ve been an on-and-off user of BackupExec for many years now, dating back to version 8, when it ran on NetWare boxes and was still a Veritas product.  However, things have changed significantly when it comes to backing up devices.  Thanks to Symantec, I now have a much clearer picture of what that entails.

We started out the day by hearing from one of Symantec’s NetBackup product specialists, George Winter.  He described how their product allowed them to do some amazing things, especially in the VMware arena.  You can imagine that my ears perked up at this point, as VMware is something that I’m becoming increasingly attached to from both the network and the server end.  I’ve never had the pleasure of using VMware Consolidated Backup, but judging from the cheers in the room when we were told that NetBackup instead uses the new VMware storage API calls, I haven’t missed much.  Those API calls allow a NetBackup appliance to get the information it needs to back up VMware guests without the “agent” software that has typically been needed in the past.  This is nice for me, since I don’t have to go through the trouble of installing agents on each guest as I bring them online.  I can just tell NetBackup to go out and back up the whole server, or a selected subset of guests chosen by groups.  NetBackup is even smart enough to know that if I add a guest to a folder that is currently being backed up, I probably want the new guest backed up as well, so it adds it automatically.   There was a great live demo of the ease of use in setting up the system and selecting backup options.  Demos are always great for engin…I mean Network Rock Stars, because we can see things in action and generate questions based on options we can see in the live client, and not some canned Flash demo that glosses over the knobs and switches.

After the first session, we were graced by the presence of Enrique Salem, the CEO of Symantec.  He took some time out of his busy schedule to talk to us about the vision of Symantec and some of the emerging opportunities he sees for his company in the next year.  He appears to be a driven guy and dedicated to his principles.  So dedicated in fact that he not only gave us his e-mail address, but his cell number as well.  In front of a live video audience, no less!  Men with this kind of dedication earn big points with me because they aren’t afraid to talk to their customers and partners about their products.

A quick break paved the way for the BackupExec team to step up and start talking about the word that would quickly become the underlying theme of Tech Field Day 5 – dedupe.  For those of you network folks that may not completely understand dedupe, it is the process of removing duplicate data from a backup stream by using hash values, in order to reduce the amount of data being transmitted, especially over slow WAN links.  Every time I think about it, it reminds me of the basic method that programs like WinRAR and WinZip use to compress files.  If you really want to know more about data deduplication, you should head over to Curtis Preston’s Backup Central website.  Curtis is now my go-to person when I have a backup question, and he should be yours as well.

We learned more about how Symantec can use dedupe to reduce bandwidth consumption and tame processor utilization, which are ideas that appeal to me greatly.  BackupExec appears to be positioned more toward the SMB/small enterprise end of the market when it comes to backup software.  This is the realm I play in more than anything else, so this product speaks to me.  Many of the options for backing up VMware hosts found in the NetBackup product line are here, scaled down to allow SMB admins to easily use them to quickly back up and restore data.  They even have the capability of performing single file restores to guest VMs even though the only things backed up were the VMDK disk files.  Quite interesting if you ask me, as most of the restore requests I receive are for a single Word document, not a whole server.

Overall, I was very happy with what I heard from Symantec.  Their products appear to fully embrace the new landscape of server virtualization and the challenges that it presents to legacy backup solutions.  My own experience with Symantec in the past has varied from use to use, but this presentation went a long way to repairing some of my hard feelings about their solutions, especially in the BackupExec arena.  I’ll definitely be taking another look at them soon.

You can get more information from http://www.symantec.com or follow them on Twitter as @backupexec and @netbackup.

Tech Field Day Disclaimer

Symantec was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  In addition, they provided me with a very delicious hot breakfast and some “swag” that included a steel water bottle, t-shirt, notepad and pen set, and a 2GB Symantec-branded USB drive that contained copies of the presentation we were given.  At no time did they ask for or receive any kind of consideration in the writing of this review.