Blu-Ray Blues

I don’t know if it made the news or not, but apparently Apple refreshed the MacBook Pro line this week.  Not a groundbreaking update, mind you, but more along the lines of a processor refresh and a move back to ATI/AMD discrete graphics from the existing NVIDIA chips.  There was also the unveiling of the new Thunderbolt port, based on Intel’s Light Peak technology.  This new port is designed to be a high-speed data pathway for multiple devices.  For now, the Mac will use it for storage and DisplayPort.  Remember this, you’ll see it again later.

There was a long list of rumored hardware that might make it into the new units, from SSD boot drives to liquid metal cases to reduce weight.  As with many far-out rumors, there was little fire behind the smoke and mirrors.  One thing that I didn’t see in the rumor mill, but which has been generating some discussion the past few days, was the inclusion of a Blu-Ray drive in the MacBook.  People have asked for the high-capacity drive to be an option on the MacBook for a couple of years now.  Some people want the option to pop in an HD movie and watch away on their laptop.  Others would love the opportunity to have a Blu-Ray burner and create their own content in Final Cut Pro to later burn to disc.  Still others want to use that burner to archive large amounts of data and keep their drives nice and clean.  The argument says that it’s time for Apple to step into the now and include an HD optical option.  Proponents cite the fact that Apple was key in the formation of the Blu-Ray spec.  While I can empathize with those looking for an internal Blu-Ray option for their shiny new MacBook, I seriously doubt that it’s ever going to happen.  Why?

1.  Blu-Ray competes with iTunes. For those of you that want to use your MacBook to watch movies in all their HD glory, your current option is to use iTunes to purchase or rent them.  And that’s just the way Apple likes it.  If Apple were to include a Blu-Ray option on the MacBook, it would cut into the sales of HD content on iTunes.  Given the option of paying for wireless access at the airport and spending my time downloading a movie through iTunes, hoping it finishes before my flight takes off, or simply throwing a couple of Blu-Ray discs in my bag before I leave on my trip, I’ll gladly take the second option.  It’s just easier for me to keep my entertainment content on removable media that can easily be swapped and doesn’t need an external battery pack to operate.  Plus, I’m the kind of person that tends to keep lots of data on my drive, so the space needed for those large HD movie downloads might not be available.  However, Apple doesn’t make any money from my Blu-Ray purchases from Amazon.  I think for that reason they’ll stick to the lowly DVD drive for the foreseeable future.

2.  The future of the MacBook isn’t optical. When the MacBook Air was released in October, Tim Cook heralded it as “the Mac of the future”.  While many focused on the solid state drive (SSD) providing the on-board storage or the small form factor, others looked at the removal of the SuperDrive and remarked that Apple was making a bold statement.  Think about the last time you used a CD or DVD to load a program.  I have lots of old programs on CD/DVD, but most of the new software I load is installed from a downloaded program file.  Even the large ISO files I download are mounted as virtual CD drives and installed that way to expedite the setup process.  Now, with the Mac App Store, Apple is trying to introduce a sole-source repository for software like the one they have on the iPhone/iPad/iPod.  By providing an online software warehouse and then removing the SuperDrive from their “newest” laptop, Apple wants to see if people are really going to miss the drive.  Much like the gradual death of the floppy drive, the less people think about the hardware, the less likely they are to miss it when a computer company “forgets” to include it on cutting edge models.  Then, it’s a simple matter to remove it across all their lines and move on to bigger and better things.  At this point, I think Apple sees optical drives as a legacy option on their laptop lines, so going to the length of adding a new technology like Blu-Ray would be taking a technological step back for them.  Better to put that R&D effort into newer things.

3.  Thunderbolt creates different options for storage.  Notice the first peripheral showcased alongside Thunderbolt was a storage array.  I don’t think this was coincidental when considering our current argument.  For those Blu-Ray fans that talk about using the drive to burn Final Cut-created movies or data backups, Apple seems to be steering you in the direction of direct storage attached through their cool new port.  Having an expandable drive array attached to a high-speed port negates the need for a Blu-Ray unit for backups.  Add in the fact that a RAID array would be more reliable than a plastic disc and you can see the appeal of the new Thunderbolt technology.  For you aspiring directors, copying your new motion picture masterpiece to a LaCie Thunderbolt-enabled external drive would allow you to distribute it as simply as you could on a Blu-Ray disc, without worrying about the file size limitations of the optical media.  For what it’s worth, if you go out and price a Blu-Ray burner online you’ll find that you can get an external RAID array for almost the same price.  I’d recommend the fine products from Drobo (don’t forget to use the coupon code DRIHOLLING to save a little more off the top).

As you can see, I think Apple has some very compelling reasons for not including a Blu-Ray drive on their MacBooks.  Whether it be the idea that optical discs are “old” technology or the desire not to include competition for their cash cow, Apple doesn’t seem compelled to change out their SuperDrive technology any time soon.  But if I were you, I wouldn’t worry about getting the Blu-Ray blues any time soon.  With the way things are going with app stores and Thunderbolt storage arrays, in a few years you’ll look back on the SuperDrive in your old MacBook with the same fondness you had for the 5 1/4″ drive on your old Apple II.

A Chrome-Plated Workout

I’ve had my CR-48 for about two weeks now, and I’ve put it through its paces.  I used it to take notes at Tech Field Day 5.  I set up an IRC channel for people to ask questions during the event.  I’ve written numerous blog posts on the little laptop.  I’ve used it to chat with people halfway around the world.  All in all, I’m impressed with the unit.  That’s not to say that everything about it has me thrilled.

The Good

I like the fact that the CR-48 is instantly on when I lift the lid.  The SSD and the lightweight OS team up to make it quite easy to just grab and fire up to start using for notes or web surfing.  It’s not quite as fast as an iPad, but much faster than hauling out my Lenovo w701 behemoth.  I like having the CR-48 handy for things I would rather do with a keyboard.

More than a few people have remarked to me that it looks “just like a MacBook”.  And I’ve come to see it much like a MacBook Air.  Obviously it’s not as sleek as Apple’s little wonder, but I like the form factor and the screen resolution much better than some of the other netbooks I’ve used.  It doesn’t feel cramped and toy-like.  In fact, it feels more Mac-like than any other laptop I’ve used.  I’m sure that is intentional on the part of Google.

Having the 3G Verizon radio is pretty handy in situations where there is no Wi-Fi available.  More than once I found myself unable to connect to a certificate-based wireless system (a known issue) or stuck in a place with terrible reception.  With the CR-48, I just switch over to the 3G radio and keep plugging away.  The 100MB allowed with the trial is a little anemic for heavy-duty use, but the bigger plans seem fairly priced should I find the need to upgrade to one.  When I tried activating the radio over the phone, the Verizon rep made sure to point out that they had plans available in all sizes for me to purchase, but somehow skipped over the part about me having 100MB for free each month.  Luckily I read the instructions.

The Bad

The CR-48 isn’t without its annoyances.  The touchpad is probably the most persistent issue I had.  The tap-to-click functionality found on most trackpads bordered on maddening for me.  I’m a touch typist with hands the size of a gorilla’s.  I tend to rest my thumbs at the bottom of the keyboard as I type, and on this laptop that means brushing the trackpad more often than not.  With the default settings, I often found myself sending e-mail or canceling tweets without realizing what happened, or watching my cursor shoot over to a random section of my blog post as my words spilled into other thoughts.  I finally gave up and disabled the tap-to-click setup, ironically making it more like a MacBook.

I also made the mistake of letting the battery run down all the way.  It was already low from use and I let it go to sleep without plugging it in.  Sure enough, it drained down and wouldn’t power back up.  Once I plugged it in I was able to use it, but it wouldn’t charge no matter how long I left it plugged in.  It took some searching on the Internet to find an acceptable solution (of which there appear to be many) before settling on a combination of things.  I pulled the battery for about 2 minutes, then reattached it and CAREFULLY plugged the adapter back in.  As soon as I saw the orange charging light come on, I finished pushing the charger all the way in and it worked for me after that.  There are rumors that the port and/or the charger are a little substandard, so this is something that is going to bear a little more inspection.  Speaking of the charger, the fact that it uses a three-pronged plug is a little annoying when I’m trying to find a place to plug in.  I’ve taken to carrying a little 2-prong grounding adapter in my bag just so I can plug in anywhere.  Not an expensive solution, but something I wish I didn’t have to do.

One final annoyance was a minor issue that turned into a humorous solution.  When I unboxed the unit and fired it up the first time, playing two audio streams on top of each other would cause the speaker to short out and sound like I was choking a robot.  There was evidently a fix for it, but the netbook had trouble pulling the new update, as it was only a very minor point release.  Every time I checked the system updater, it told me the system was up to date.  The fix I found on the Internet suggested clicking the Update button repeatedly until the system finally recognized the new release.  Literally, I clicked 50 times in order to get the update.  It did fix my audio issues, but you would think the update system would recognize a new release without me needing to be spastic with the update button.

Tom’s Take

Overall, I’m thrilled with the CR-48 after a couple of weeks of exposure.  I keep it in my bag at all times, ready to go when necessary.  When I head back to Wireless Field Day in March, I’m planning on leaving the behemoth behind and only taking my CR-48 and my iPad for connectivity.  I figure cutting down on the extra 12 pounds of weight will be good for my posture, and not having to haul an extra laptop out at the TSA Security and Prostate Screening Checkpoint is always welcome, not only to myself but to the other passengers as well.  I’m also debating whether or not to flip over into developer mode to see if that has any additional tricks I can try out.  I don’t know if it’ll increase my productivity any more, but having a few extra knobs and switches to play with is never a bad thing.

802.11Nerd – Wireless Field Day

I guess I made an impression on someone in San Jose.  Either that, or I’ve got some unpaid parking tickets I need to take care of.  At any rate, I have been invited to come to San Jose March 16th-18th for the first ever Wireless Field Day!  This event grew out of Tech Field Day thanks to the influence of Jennifer Huber and Stephen Foskett.  Jennifer and Stephen realized that having a Field Day focused on wireless technologies would be great to gather the leading wireless bloggers in the industry together in one place and see what happens.  That very distinguished list includes:

Marcus Burton CWNP @MarcusBurton
Samuel Clements Sam’s wireless @Samuel_Clements
Rocky Gregory Intensified @BionicRocky
Jennifer Huber Wireless CCIE, Here I Come! @JenniferLucille
Chris Lyttle WiFi Kiwi’s Blog @WiFiKiwi
Keith Parsons Wireless LAN Professionals @KeithRParsons
George Stefanick my80211 @WirelesssGuru
Andrew vonNagy Revolution Wi-Fi @RevolutionWiFi
Steve Williams WiFi Edge @SteveWilliams_

List HERE.  This list is also a handy one in case you need some wireless gurus to follow on Twitter.  I’m hoping that I can pick their brains during our three days together to help refine my wireless skills, as I am becoming more and more involved in wireless designs and deployments.

After our last Tech Field Day, a couple of people wondered why we bothered flying everyone out to California to listen to these presentations when this was something that could easily be done over streaming video and chat room questions, or perhaps WebEx.  I agree that many of the presentations could have been delivered over a remote medium.  However, many of the best reasons to have a Tech Field Day never made it on camera.  By gathering all of these minds together in one place to discuss technologies, you drive critical thinking and innovation.  For instance, I had taken for granted that most people in the IT industry knew we needed to move to IPv6.  However, Curtis Preston opened my eyes to the server admin side of things during a non-televised lunch discussion at TFD 5.  Some of our roundtable discussions were equally enlightening.  The point is that Tech Field Day is more than just the presentations.  Ask yourself this:  given the choice between a WebEx with the President of the US and flying to Washington D.C. to meet him in person, which would you rather do?  You can have the same discussion with him over the Internet, but there’s just something about meeting him in person that can’t be replicated over a broadband link.

How Do I Get Involved With Tech Field Day?

I’m going to spill some secret sauce here.  The secret to getting into a Tech Field Day doesn’t involve secret payoffs or a good-old-boy network.  What’s involved is much easier than all that.

1.  Read the TFD FAQ and the Becoming a Field Day Delegate pages first and foremost.  Indicate your desire to become a delegate.  You can’t go if you don’t tell someone you want to be there.  Filling out the delegate form submits a lot of pertinent information to Gestalt IT that helps in the selection process.

2.  Realize that the selection process is voted upon by past delegates and has selection criteria.  In order to be the best possible delegate for a Tech Field Day, you have to be an open-minded blogger willing to listen to the presentations and think about them critically.  There’s no sense in bringing in delegates that will refuse to listen to a presentation from Arista because all they’ve ever used is Force10 and they won’t accept Arista having good technology.  If you want to learn more about all the products and vendors out in the IT ecosystem, TFD is the place for you.

3.  Write about what you’ve learned.  One of the hardest things for me after Tech Field Day 5 was consolidating what I had learned into a series of blog posts.  TFD is a fire hose of information, and there is little time to process it as it happens.  Copious notes are a must.  As is having the video feeds to look at later to remember what your notes meant.  But it is important to get those notes down and put them up for everyone else to see.  Because while your audience may have been watching the same video stream you were watching live, they may not have the same opinion of things.  The hardest part of TFD 5 for me wasn’t writing about Druva and Drobo.  It was writing about Infoblox and HP.  These reviews had some parts where I was critical of presentation methods or information.  These were my feelings on the subjects and I wanted to make sure that I shared them with everyone.  Tech Field Day isn’t just about fun and good times.  Occasionally, the delegates must look at things with a critical eye and make sure they let everyone know where they stand.

Be sure to follow @TechFieldDay on Twitter for more information about Wireless Field Day as the date approaches in mid-March.  You can also follow the #TechFieldDay hash tag for live updates as the delegates tweet about them.  For those of you that might not want to see all the TFD-related posts, you can also use the #TechFieldDay tag to filter them out in most major Twitter clients.  I’m also going to talk to the delegates and see if having an IRC chatroom is a good idea again.  We had a lot of good sidebar discussion going on during the presentations, but I only want to keep this aspect of things if it provides value for both the delegates and those following along online.  If you have an opinion about ways the Internet audience can get involved, don’t hesitate to let me know.

Tech Field Day Disclaimer

Tech Field Day is made possible by the sponsors.  Each of the sponsors of the event is responsible for a portion of the travel and lodging costs.  In addition, some sponsors are responsible for providing funding for the gatherings that occur after the events are finished for the day.  However, the sponsors understand that their financing of Tech Field Day in no way guarantees them any consideration during the analysis and writing of reviews.  That independence allows the delegates to give honest and direct opinions of the technology and the companies that present it.

Why Virtualize Communications Manager (CallManager)?

With version 8.x of Cisco’s Communications Manager (CallManager or CUCM) software, the capability to virtualize the OS in VMware is the most touted feature.  Many people that I talk to are happy for this option, as VMware is quickly becoming an indispensable tool in the modern datacenter.  The ability to put CUCM on a VM gives the server admins a lot more flexibility in supporting the software.  However, some people I talk to about virtual CUCM say “So what?”.  Their arguments are that it’s only supported on Cisco hardware at the moment, or that it only supports ESXi, or even that they don’t see the utility of putting an appliance server on a VM.  I’ve been thinking about the tangible reasons for virtualizing CUCM beyond the marketing stuff I keep seeing floating around that involves words like flexibility, application agility, and so on.

1.  Platform Independence – A key feature of putting CUCM in a VM is the ability to divorce the OS/application from a specific hardware platform.  Anyone who has tried to install CUCM on a non-MCS server knows the pain of figuring out the supported HP/IBM hardware.  Cisco certified only certain server models to run CUCM, so if the processor in your IBM-purchased server is 200MHz faster than the one quoted on the specs, your CUCM installation will fail.  It also means that Cisco has a hard time buying servers when they OEM them from IBM or HP: Cisco has to buy a LOT of servers of exactly the same specifications.  Same processor, same RAM, same hard disk configurations.  Moving to new technology when it’s available becomes difficult, as the hardware must be certified for use with the software, then moved into the supply chain.  Look at how long it has taken to get an upgraded version of the 7835 and 7845 servers.  Those are the workhorses of large CUCM deployments, and they have only been revised three times since their introduction years ago.

Now, think about virtualization.  Since you’ll be using the same OVA/OVF templates every time to create your virtual machines, you don’t need to worry about ensuring the same processor and RAM in each batch of hardware purchases.  You get that from the VM itself.  All you need to do is define what virtual hardware you are going to need and worry about certifying the underlying VM host.  Luckily, VMware has taken care of that for you.  They certify hardware to run their ESX/ESXi software, so all a vendor like Cisco needs to do is tell the users what their minimum supported specs are supposed to be.  For those of you that claim this is garbage since vCUCM is only supported on Cisco hardware right now, think about the support scenario from Cisco’s perspective.  Would you rather have your TAC people troubleshooting software issues on a small set of mostly-similar hardware while they work out the virtualization bugs?  Or do you want to slam your TAC people with every conceivable MacGyver-esque config slapped together for a lab setup?  Amusingly, one of those sounds a whole lot like Apple’s hardware approach, and the other sounds a lot like Microsoft’s.  Which support system do you like better?  I have no doubts that the ability to virtualize CUCM on non-Cisco hardware will be coming sooner rather than later.  And when it does, it will give Cisco a great opportunity to position CUCM to quickly adapt to changing infrastructures and eliminate some of the supply chain and ordering issues that have plagued the platform for the last year or so.  It also makes it much easier to redeploy your assets quickly in case of strategic alliance dissolution.
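To see why templates kill the hardware variance problem, here’s a minimal Python sketch.  Every name and sizing number here is invented for illustration; this isn’t drawn from any Cisco OVA or VMware spec:

```python
# Hypothetical sketch: a template pins the virtual hardware, so every VM
# stamped from it is identical -- which is why per-batch hardware
# certification stops mattering. All names/numbers below are made up.

TEMPLATE = {"vcpus": 2, "ram_mb": 6144, "disk_gb": 80}  # invented profile

def deploy_from_template(name, template=TEMPLATE):
    """Every deployment copies the same spec, so there is no room for
    the '200MHz faster processor' drift that broke bare-metal installs."""
    return {"name": name, **template}

pub = deploy_from_template("cucm-pub")
sub = deploy_from_template("cucm-sub")
print(all(pub[k] == sub[k] for k in TEMPLATE))  # True: identical virtual hardware
```

The point of the sketch is the one-line deploy function: the spec lives in the template, not in a purchasing contract, so "same processor, same RAM" is guaranteed by construction.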

2.  Failover / Fault Tolerance – Firstly, vMotion is NOT supported on vCUCM installations today.  Part of the reason is that call quality can’t be confirmed to be 100% reliable when a CUCM server has 100 calls going out of an MGCP gateway and suddenly vMotions to a host on the other side of a datacenter WAN link.  My own informal lab testing says that you CAN vMotion a CUCM VM.  It’s just not supported or recommended.  Now, once the bugs have been worked out of that particular piece of technology, think about the ramifications.  I’ve heard some people tell me they would really like to use CUCM in their environments, but because the Publisher / Subscriber model doesn’t support 100% uptime in a failover scenario, they just can’t do it.  With vMotion and HA handling the VMs, hardware failures are no longer an issue.  If an ESXi server is about to go down for maintenance or has a faulty hard disk, the publisher can be moved without triggering a subscriber failover.  Likewise, if the ESXi system housing the publisher gets hosed, the publisher can be failed over to another system with no impact.  I don’t see a change to the Pub/Sub model coming any time soon, but the impact of having an offline publisher is greatly reduced when you can rely on other mechanisms to ensure that the system is up.  Another thing to think about is the fault tolerance of the hardware itself.  Normally, we have an MCS server with two power supplies and a RAID 1 setup, along with one or two NICs.  Now, think about the typical server used for virtualization in a datacenter.  Multiple power supplies, multiple NICs, and if there is onboard storage, it’s usually RAID 5 or better.  In many cases, the VMs are stored on a very fault-tolerant SAN.  Those hardware specs are worlds better than any you’re ever going to be able to achieve with MCS hardware.  I’d feel more comfortable having my CUCM servers virtualized on that kind of hardware even without vMotion and HA.
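As a thought experiment, here’s a toy Python model of the HA behavior described above: a dead host’s VMs get restarted on a surviving host, so the publisher comes back without the subscriber ever taking over.  The class, methods, and host names are all made up for illustration; this is not the VMware API:

```python
# Toy model of HA-style restart (invented names, not a real API).

class Cluster:
    def __init__(self, hosts):
        self.hosts = set(hosts)
        self.placement = {}          # vm name -> host it runs on

    def power_on(self, vm, host):
        self.placement[vm] = host

    def host_failed(self, failed):
        """HA-style recovery: VMs on a dead host restart on a survivor
        instead of staying down until the hardware is repaired."""
        self.hosts.discard(failed)
        survivor = next(iter(self.hosts))
        for vm, host in self.placement.items():
            if host == failed:
                self.placement[vm] = survivor

cluster = Cluster({"esxi-1", "esxi-2"})
cluster.power_on("cucm-pub", "esxi-1")
cluster.power_on("cucm-sub", "esxi-2")
cluster.host_failed("esxi-1")
print(cluster.placement["cucm-pub"])  # publisher restarts on esxi-2;
                                      # the subscriber never had to take over
```

That last comment is the whole argument: hardware failure becomes a placement change rather than a Pub/Sub failover event.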

3.  True appliance behavior – A long time ago, CallManager used to be a set of software services running on top of an operating system.  Of course, that OS was Windows 2000, and it was CallManager version 3.x and 4.x.  Eventually, Cisco moved away from the services-on-OS model and went to an appliance solution.  Around the 6.x release time frame, I heard some strong rumors that Cisco was going to look at abstracting the services portion of CUCM from the OS and allowing that package to run on just about anything.  Alas, that plan never really came to fruition.  The appliance model works well for things like CUCM and Unity Connection, so the hassle of porting all those services to run on Windows and Solaris and MacOS was not really worth it.  Now, flash forward to the present day.  By allowing CUCM to run in a VM, we’ve essentially created a service platform divorced from a customer’s OS preference.  In CUCM, the OS really acts as a hardware controller and a way to access the database.  In the eyes of server admins and voice people, the OS might as well not exist.  All we’re concerned about is the web interface to configure our phones and gateways.  There has been grousing in the past from the server people when the VoIP guys want to walk in and drop a new server that consumes power and generates heat in their perfectly designed datacenter.  Now that CUCM can be entirely virtualized, the only cost is creating a new VM from an OVF template and letting the VoIP people load their software.  After that, it simply serves as an application running in the VMware cloud.  This is what Cisco was really going after when they said they wanted to make CUCM run as a service.  Little to no impact, and able to be deployed quickly.

Those are my thoughts about CUCM virtualization.  I think this is a bold step forward for Cisco, and once they get up to speed by allowing us to do the things we take for granted with virtualization, like running on any supported hardware and vMotion/HA, the power of a virtualized CUCM model will allow us to do some interesting things going forward.  No longer will we be bound by old hardware or application loading limitations.  Instead, we can concentrate on the applications themselves and all the things they can do for us.

Tech Field Day – HP

The final presenters for Tech Field Day 5 were from HP.  HP presented on two different architectures that at first seemed somewhat unrelated.  The first was their HP StoreOnce data deduplication appliances.  The second was an overview of the technologies that comprise the HP Networking converged networking solutions.  These two technologies are integral to the future of the datacenter solutions offered by HP.

After a short marketing overview about HP and their direction in the market, as well as reinforcement of their commitment to open standards (more on this later), we got our first tech presentation from Jeff DiCorpo.  He talked to us about the HP StoreOnce deduplication appliances.  These units are designed to sit inline with your storage and servers and deduplicate the data as it flies past.  The idea of inline dedupe is quite appealing to those customers that have many remote branch offices and would prefer to reduce the amount of data being sent across the wire to a central backup location.  By deduping the data in the branch before sending it along, the backup windows can be shorter and the cost of starving other applications with high data usage can be avoided.  I haven’t really been delving into the backup solutions focused on the datacenter, but as I heard about what HP is doing with their line of appliances, it started to make a little more sense to me.  The trend appears to be one where data is being centralized again in one location, much like the old days of mainframe computing.  For those locations that don’t have the ability or the need to centralize data in a large SAN environment, the HP StoreOnce appliances can shorten backup times for that critical remote site data.  The appliances can even be used inside your datacenter to dedupe the data before it is presented to the backup servers.  The possibilities for deduplication seem endless.  My networking background tends to have me thinking about data in relatively small streams.  But the more I encounter backup data that needs priority treatment, the more I think that some kind of deduplication software or hardware is needed to reduce those large data streams.  There was a lot of talk at Tech Field Day about dedupe, and the HP solution appears to be an interesting one for the datacenter.
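To make the dedupe concept concrete, here’s a toy fixed-block deduplication sketch in Python.  This illustrates only the general idea; StoreOnce’s actual chunking and hashing internals are HP’s own, so don’t read this as their algorithm:

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split a byte stream into fixed-size blocks and keep only one
    copy of each unique block -- the basic idea behind inline dedupe."""
    store = {}    # block hash -> block contents (unique blocks only)
    recipe = []   # ordered list of hashes needed to rebuild the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original stream from the unique blocks."""
    return b"".join(store[digest] for digest in recipe)

# A highly repetitive stream dedupes well: 100 copies of one 4 KB block.
data = b"A" * 4096 * 100
store, recipe = dedupe(data)
print(len(store))                       # 1 unique block stored
print(rebuild(store, recipe) == data)   # True
```

On repetitive backup data like this, only the unique blocks need to cross the WAN; the branch side can send just the short recipe of hashes once the central site already holds a block, which is exactly why the backup window shrinks.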

Afterwards, Jay Mellman of HP Networking talked to us about the value proposition of HP Converged Networking.  While not a pure marketing overview, there were the typical case studies and even a “G” word printed in the bottom corner of one slide.  Once Jay was finished, I asked a few questions about the position of HP Networking in regards to their number one competitor, Cisco.  Jay admitted that HP is doing its best to force Cisco to change the way they do business.  The Cisco quarterly results had been released while I was at TFD, and the drop in revenue was not lost on HP.  I asked Jay about the historical position of HP Networking (formerly ProCurve) and his stance that an edge-centric design was a better model than Cisco’s core-focused guidelines.  Having worked with both sets of hardware and seen reference documentation for each vendor, I can say that there is most definitely disagreement.  Cisco tends to focus its designs around strong cores of Catalyst 6500 or Nexus 7000 switches.  The access layer tends to be simple port aggregation where few decisions are made.  This is due to the historical advantage Cisco has enjoyed with its core products.  HP has always maintained that keeping the intelligence of the network out at the edge, what Cisco would term the “access layer”, is what allows them to be very agile and keep the processing of network traffic closer to the intended target.  I think part of this edge-centric focus has been because the historic core switching offerings from HP have been somewhat spartan compared to the Cisco offerings.  I think this situation was remedied with the acquisition of 3Com/H3C and their A-series chassis switches.  This gives HP a great platform to launch back into the core.  As such, I’ve seen a lot more designs from HP that are beginning to talk about core networking.  Who’s right in all this?  I can’t say.  This is one of those OSPF vs. IS-IS kinds of arguments.  Each has its appeal and its deficiencies.

After Jay, we heard from Jeff about the tech specs of the A-series switches.  He talked about the support HP has for the open standards in the datacenter.  Casually mentioned was the support for standards such as TRILL and QCN, but not for Cisco FabricPath.  As expected, Jeff made sure to point out that FabricPath was Cisco proprietary and wasn’t supported by the A-series.  He did speak about Intelligent Resilient Framework (IRF), which is a technology used by HP to unify the control plane of a set of switches to make them appear as one unified fabric.  To me, this sounds a lot like the VSS solution that Cisco uses on their core switches.  HP is positioning this as an option to flatten the network by creating lots of trunked (EtherChanneled) connections between the devices in the datacenter.  I specifically asked if they were using this as a placeholder until TRILL is ratified as a standard.  The answer was ‘yes’.  As IRF is a technology acquired from the H3C purchase, it only runs on the A-series switches.  In addition, there are enhancements above and beyond those offered by TRILL that will ensure IRF will still be used even after TRILL is finalized and put into production.  So, with all that in mind, allow me to take my turn at Johnny Carson’s magnificent Carnac routine:

The answer is: Cisco FabricPath OR HP IRF

The question? What is a proprietary technology used by a vendor in lieu of an open standard that allows a customer to flatten their datacenter today while still retaining several key features that will allow it to be useful even after ratification of the standard?

The presentation continued with the trends and technology in the datacenter for enabling multi-hop Fibre Channel over Ethernet (FCoE) and the ability of the HP FlexFabric modules to support many different types of connectivity in the C7000 blade chassis.  I think this is where the Cisco/HP battle is going to be won or lost.  By racing toward a fast and cost-effective multi-hop FCoE solution, HP and Cisco are each hoping to have a large install base in place by the time the standards are totally finalized.  When that day comes, they will be able to fall in line with the standard and enjoy the fruits of a hard-fought war.  Time will tell whether this approach will work, or who will come out on top, if anyone.

I think HP has some interesting arguments for their datacenter products.  They’ve also been making servers for a long time, and they have a very compelling solution set for customers that incorporates storage, which is something Cisco currently lacks without a partner like EMC.  What I would like to see HP focus more on in their solution presentations is telling me what they can do and what they are about.  Conversely, they should spend a little less time comparing themselves to Cisco and taking each opportunity to mention how Cisco doesn’t support standards and has no previous experience in the server market.  To be honest, I don’t hear that from Cisco or IBM when I talk to them about servers or storage or networking.  I hear what they have to offer.  HP, if you can give me all the information I need to make my decision and your product is the one that fits my needs best, you shouldn’t have to worry about what my opinion of your competitors is.

Tech Field Day Disclosure

HP was a sponsor of Tech Field Day 5, and as such was responsible for a portion of my airfare and hotel accommodations.  In addition, HP provided their Executive Briefing Center in Cupertino, CA for the Friday presentations.  They also served a great hot breakfast and allowed us unlimited use of their self-serve Starbucks coffee, espresso and chai machine.  We returned the favor by running it out of steamed milk for use in the yummy Dirty Chai.  HP also provided the delegates with a notepad and pen.  At no time did HP ask for nor were they promised any kind of consideration in this article.  Any and all analysis and opinions are mine and mine alone.

So? So, so-so.

By now, many of you have read my guidelines to presentations HERE and HERE.  I sit through enough presentations that I have my own opinions of how they should be done.  However, I also give presentations from time to time.  With the advent of my new Flip MinoPRO, I can now record my presentations and upload to whatever video service I choose to annoy this week.  As such, allow me to present you with the first Networking Nerd presentation:

47 minutes of me talking.  I think that’s outlawed by the Geneva Convention in some places.  So you can follow along, here’s a link to my presentation in PowerPoint format.

I don’t like looking at pictures of myself, and I don’t like hearing myself talk.  You can imagine how much fun it was for me to watch this.  I tried to give an IPv6 presentation to a group of K-12 technology directors who don’t spend a lot of time dealing with routing and IP issues.  I wanted to give them some ideas about IPv6 and what they needed to watch out for in their networks in the coming months.  I had about a month to prepare for this, and I spent a good deal of that time practicing so my delivery would be smooth.

What’s my opinion of my performance?  Well, as you can tell by the title of this post, I immediately picked up on my unconscious habit of saying “so”.  Seems I use that word to join sections of conversation.  I think if I put a little more conscious thought into things, I might be able to cut that part down a bit.  No sense putting words like “so”, “um”, and “uh” in places where they don’t belong.  They are crutches that need to be removed whenever possible.  That’s one of the reasons I like writing blog posts much more than spoken presentations: I can edit my writing if I think I’ve overused a word.  Plus, I don’t have to worry about not saying “um” while I type.

You’ll notice that I try to inject some humor into my presentations.  I feel that humor helps lighten the mood in presentations where the audience may not grasp everything all at once.  Humor has its place, of course, so it’s best left out of things like eulogies and announcing the starting lineup at a Yankees game.  But if you watch a lot of “serious” presentations, a little levity goes a long way toward making things feel a lot less formal and way more fun.

I also try to avoid standing behind a lectern or a podium when I speak.  I tend to use my hands quite a bit to illustrate points, and having something in front of me that blocks my range of motion tends to mess with my flow a little.  I also tend to pace and wander around a bit as I talk, so being held to a physical object like a lectern would drive me nuts.  I would have preferred a remote in my pocket that I could use to advance the slides, with a laser pointer to illustrate things on them, but I lost mine some time ago and it has yet to turn up.  Luckily, I had someone in the room who was willing to advance my slide deck.  Otherwise, there would have been a lot of walking back and forth and out of frame.  Note to presenters: invest in a remote or two so you can keep the attention focused on you and your presentation without the distraction of walking back and forth or being forced to stay close to your laptop.

Let me know what you think, good or bad.  If you think I spaced out on my explanation of the content, corrections are always welcome.  If you don’t like my gesticulations, I want to know.  Even tell me if you thought my Spinal Tap joke was a little too corny.  The only way I can get better as a presenter is to get feedback.  And since there were 8 people in the room, 7 of whom I knew quite well, I don’t think I’m going to get any feedback forms.

Tech Field Day – Infoblox

Infoblox was our second presenter on Day 2 of Tech Field Day 5.  They came into the HP Executive Briefing Center and instead of firing up the overhead projector, they started pulling the whiteboard over to the center of the room.  Once they got started, the founder and CTO, Stu Bailey, informed us that they would have zero slides.  No slides? Yay!  Here’s someone that was paying attention to Force 10 from Net Field Day.  No slides, just a whiteboard and some really brilliant guys.

As I sit here typing this article, I’m listening to the audio of the presentation in the background.  I think Stu is probably a very brilliant guy, and starting a company is one of the most challenging things a person can do.  With that being said, I think Stu suffers from a problem I have from time to time: resolution.  I often tell stories to people and misjudge the resolution of the information I’m imparting.  My stories are utterly fascinating to me, and I love giving out all the little details and settings.  My audience, however, is less impressed.  They get distracted and lost waiting for me to wrap things up.  I get caught up in the minutiae and forget to tell the story.  I freely admit that I have this problem, and I do my best to avoid it when I’m giving presentations.

As I listen to the audio of the session, I’m reminded of this.  I love history lessons as much as anyone in the world.  In fact, I have the History Channel on my favorites list.  However, in this kind of technical session with no slides to keep my focus, the firehose of Infoblox history is kind of overwhelming.  Whiteboarding works really well when you are putting topics out there that your audience is going to ask questions about, so you can demonstrate and expand topics on the fly.  During a history lesson, many of the things you are discussing are pretty much settled, so you don’t have any real explanation to display.  I think some of the people started tuning out, since the what of Infoblox was getting lost in the why of Infoblox.  Stu, if you want to help yourself for the next presentation, you need to hook your audience.  Give us the problem up front in a couple of minutes.  Let me try, based on what I heard and saw:

In today’s world, network infrastructure is siloed and hard to manage.  The number of people required to be involved in new system deployments and change management makes it difficult to coordinate these activities.  In addition, the possibility exists that a misconfiguration or forgotten step could create downtime beyond expectations.  What Infoblox is trying to bring to the table is the ability to automate these processes so that the deployment and management of the network and its associated services can be streamlined.  Changes can be delegated to less skilled personnel so that the network is no longer entirely dependent on one person’s knowledge of a particular service or configuration.  Infoblox allows you to concentrate on making your network run optimally through standard repeatable processes.  Infoblox also allows you to see your network and service configurations at a glance.

Folks, that is Infoblox in a nutshell, at least as I see it.  Infoblox draws all of your DHCP and DNS servers together into an automated database that allows you to make changes across your network and its services instantly, without the need to make the changes individually.  This would have been a great lead-in to the second part of the presentation, where we got to see how Infoblox works.  Based on discussions I had with my networking and systems brethren, it appears that Infoblox is attacking the aspects of a network that don’t have standardized procedures for implementation and change management.  In a mid-to-large size company, bringing a new DNS server online or implementing a branch office server is a step-by-step process that follows a detailed checklist.  Once all the checks are made, the change or implementation is complete.  Infoblox automates the checklist so that a few clicks can make those changes without the chance of missing a step.  Whether or not your environment needs that kind of oversight is a question you have to answer for yourself.  I can see applications where some or all of the features of Infoblox would be a godsend.  To be honest, I’d really like to see it in action before I pass total judgment on the software itself.  I just wish this message had been put out there for us to digest as we investigated the whys of Infoblox.  A history lesson explaining the need for each piece of Infoblox should have been tied back to an overview similar to the one above, where each piece was introduced.  As the history of the individual pieces is revealed, they can be tied back to the relevant section of the overview.  Think of it as a Chekhov’s Gun for presentations: the DNS IPAM seen in section two, minute one should first be seen no later than section one, minute two.

After the Infoblox presentation, the next product on the block was NetMRI.  Now, I’ve heard of this product before.  However, the last time I heard about it, the association was with Netcordia and Terry Slattery, CCIE #1026.  As soon as I heard that Infoblox had purchased Netcordia and the NetMRI software, the sudden move of Terry to Chesapeake Netcraftsmen made a little more sense to me.  NetMRI is a great tool and appears to be the heart of the Infoblox offerings, the engine that things like IPAM for DNS/DHCP and the Infoblox Grid use to make network changes.  Those familiar with NetMRI know that it allows you to collect statistics on your devices and monitor changes to the configurations of those devices.  By leveraging the NetMRI tools in the Grid product, Infoblox allows you to monitor and make changes to a wide variety of devices as needed.  This adds a lot to their existing IPAM offerings.

If they really want to kill the market with this, they need to drive home the need for IPAM and network configuration management to their customers.  Most people are going to look at this and say, “Why do I need it?  I can do everything with Windows tools or Excel spreadsheets.”  That is the kind of historical thinking that has allowed networks to spiral out of control to the point where they need complex management tools to keep running at peak efficiency.  I’m sure Terry saw this when he created NetMRI and made it his mission to get this kind of capability put into network devices.  By adding this product to their portfolio, Infoblox needs to drive home the need for ALL devices to be managed and documented.  If they can do that, I think they’re going to find their message much more succinct and the value a lot easier to present.  I think you guys have a great product that is needed.  You just have to let me know why I need it, not just why you made it.

If you’d like to learn more about the offerings from Infoblox, head over to their website.  You can also follow them on Twitter as @infoblox.

Tech Field Day Disclaimer

Infoblox was a sponsor of Tech Field Day 5, and as such they were responsible for a portion of my airfare and hotel accommodations.  They did not ask for nor were they promised any kind of consideration in the writing of this article.  Any and all of the opinions and analysis expressed herein are mine and mine alone.

Tech Field Day – Netex

Day 2 of Tech Field Day was powered by Starbucks.  Starting off at the hotel with a visit to the Starbucks counter was a no-brainer, but upon arrival at the wonderful HP Executive Briefing Center in Cupertino, CA, the Holy Grail of caffeine addicts was discovered: a self-serve Starbucks espresso machine.  As such, many fabulous Dirty Chai drinks were consumed during the day, which may have led to my perking up and asking more questions on day 2.  Maybe it was that, or maybe we finally got to the networking part that I knew a little more about.  I’m still blaming the Dirty Chai, though.

Netex was first on deck.  And they nailed it.  Not necessarily their message, but their presentation.  They kept their message short.  They had a hands-on example that kept us awake first thing in the morning.  They tempted us with beer.  They didn’t talk longer than they needed to and left plenty of time for questions.  And they still got done early.  Spot on, guys.  I’ll go on record as saying that it’s not necessary to fill your entire presentation with talking.  You need to leave some time for questions that might come up during the presentation, as well as questions at the end after you’ve delivered your message.  The reason there are time constraints on presentations is to keep people from rambling on forever.  I don’t mind staying five or ten minutes extra so long as the overtime is due to a lot of good questions.  At the same time, leaving only two or three minutes at the end of a two-hour slide deck due to constant chattering isn’t going to make any friends.

Netex revolves around a product called HyperIP.  HyperIP is a virtual machine that does something rather interesting: it attempts to fix the TCP window / global synchronization issue by avoiding it.  For those not familiar, TCP likes to increase the window size as data begins transmitting so as to use the link in the most efficient manner.  Eventually, however, TCP will saturate the receiver with data, and the receiver will ask the sender to back off.  TCP does this by backing the window down halfway, then ramping up again as the receiver catches up with the data stream.  Imagine reading me a list of numbers over the phone.  You may start out by reading groups of 3 numerals, then as I get comfortable move to groups of 5, then 6, constantly increasing the number of numerals per group.  Eventually, I’m not going to be able to remember all the numerals in each group, so I’m going to ask you to stop and go back to smaller groups.  In TCP we try to mitigate issues like this with things such as Weighted Random Early Detection (WRED).  WRED tries to avoid forcing the sender to back off completely by instead dropping less critical packets in the stream and letting them be retransmitted later.  As such, the TCP window size can be kept as large as possible for as long as possible, allowing the maximum amount of data to be transmitted in the most efficient way possible for a given link.  It should be noted that WRED only works on TCP, because TCP acknowledgements allow lost packets to be detected and retransmitted.  UDP can’t use WRED, since those packets would be lost and never retransmitted (more on this in a minute).
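If you’d like to see that ramp-up-and-back-off behavior for yourself, here’s a quick Python sketch I put together of the basic additive-increase, multiplicative-decrease cycle.  The numbers are purely illustrative and not tuned to any real TCP stack:

```python
def aimd_window(capacity, rounds, start=1):
    """Simulate TCP-style congestion window behavior.

    The window grows by one segment per round (additive increase)
    until it exceeds the receiver's capacity, at which point it is
    cut in half (multiplicative decrease) and the climb starts over.
    """
    window = start
    history = []
    for _ in range(rounds):
        history.append(window)
        if window > capacity:       # receiver overwhelmed: back off halfway
            window = max(1, window // 2)
        else:                       # all is well: probe for more bandwidth
            window += 1
    return history

print(aimd_window(capacity=8, rounds=12))
# → [1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6]
```

Plot the returned list and you get the classic TCP sawtooth: the window climbs until the receiver pushes back, gets chopped in half, and climbs again.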

HyperIP acts as a gateway for your server devices.  Instead of your backup server pointing to the WAN router, it points to the HyperIP VM.  The HyperIP VM then terminates the TCP stream and “caches” the data.  It contacts another HyperIP VM at the destination site and negotiates the most efficient window size.  It then transmits the data between sites using large UDP packets.  The analogy they used was that instead of transporting individual bottles of beer, the bottles were packaged into “kegs” and transported more efficiently.  When I asked how the HyperIP VMs dealt with packet loss, since UDP is not tolerant of it, I was informed that the HyperIP system keeps track of the UDP packets on both sides in a kind of lookup table, so if one is missed it can be retransmitted.  Once the UDP packets arrive on the other side of the WAN link, the data is transmitted via TCP to the destination server.
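To make the keg analogy concrete, here’s a little Python sketch of how I imagine the batching works.  This is entirely my own reconstruction for illustration; Netex hasn’t published their wire format, and the real product certainly does more than this:

```python
def pack_kegs(bottles, keg_size):
    """Batch many small payloads ("bottles") into large datagrams ("kegs").

    Each keg gets a sequence number so the far-side peer can notice a
    missing keg and ask for just that one again -- reliability layered
    on top of UDP instead of TCP's per-segment windowing.
    """
    kegs = {}
    seq = 0
    current, size = [], 0
    for bottle in bottles:
        if size + len(bottle) > keg_size and current:
            kegs[seq] = b"".join(current)   # keg is full: seal it
            seq += 1
            current, size = [], 0
        current.append(bottle)
        size += len(bottle)
    if current:
        kegs[seq] = b"".join(current)       # seal the last partial keg
    return kegs  # sender keeps this table until each seq is acknowledged

kegs = pack_kegs([b"x" * 400] * 10, keg_size=1400)
# ten 400-byte bottles pack three to a 1400-byte keg, so four kegs total
```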

The current use case to me seems to be backup traffic or other large, bursty types of communication.  Netex admitted that this technology won’t do much for smaller conversations, such as HTTP traffic.  It also only affects TCP, UDP, and ICMP, so more esoteric protocols are out (sorry, AppleTalk users).  I’m having an issue with the way HyperIP actually does its job.  It seems to me like they are re-inventing the wheel and trying to accomplish something that Network Cool Guys can accomplish with proper QoS design.  In fact, the traffic patterns shown in the presentation after the application of HyperIP look an awful lot like the traffic patterns after you apply WRED to a WAN link.  HyperIP does have the ability to add some compression to the data stream, so there is the opportunity to reduce the amount of data being sent.  For those that might be using some basic QoS on slow links already and might be thinking about implementing a HyperIP setup, be sure you are classifying your VoIP traffic as finely as possible by using DSCP marking at the source or marking it by protocol / port.  I’d hate to see your priority queue fill up with HyperIP kegs and starve out the CEO’s conference call.
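For reference, marking traffic at the source is usually a one-liner in any socket API.  Here’s a minimal Python sketch that tags a UDP socket as Expedited Forwarding (DSCP 46), the usual class for voice, so your queuing policy can match it:

```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the top six bits of the
# IP TOS byte, so the value handed to the kernel is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent from this socket now carries the EF marking,
# letting a router's priority queue pick it out ahead of bulk traffic.
```

Of course, your routers still have to trust and act on the marking; setting the bits at the source is only half the job.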

I can see a use case for HyperIP in situations where your company doesn’t have a QoS-focused technical person but has a lot of depth in the server admin area.  Matthew Norwood even called it “QoS for the Less Fortunate”.  I’m not saying that it’s not a fine product or that it doesn’t have its uses.  I’m just saying that it can’t do anything for me that I can’t do already from the CLI.  But try it out yourself if you’re curious.  There should be a 30-day free trial available by the end of February.  Just remember that you’re going to need to buy your VMs in pairs to make the whole thing work properly.

If you are interested in learning more about Netex and their HyperIP offering, head over to their website.  You can also follow them on Twitter as @HyperIP.

Tech Field Day Disclaimer

Netex was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  In addition, they provided a 1 GB USB drive containing information about their product, as well as a bottle opener.  They also may or may not have allowed us the use of their practical example, which consisted of an ice chest filled with cold Corona beer.  I can neither confirm nor deny that these beers were consumed by the pool at the hotel after the end of Tech Field Day, day 2.  Netex neither asked for nor was granted any consideration for this article.  The opinions and analysis expressed herein are mine and mine alone.

Tech Field Day – Xangati

Monitoring of key devices in your network is a very big business.  Knowing what’s going on with your devices can keep you in the loop when troubles start to happen.  Almost as important, though, is event correlation.  Taking data from multiple sources and presenting it in such a way that you can see how the minor events leading up to a problem contributed to it is critical in larger infrastructures.  Many companies have software designed for this purpose, and one of them was kind enough to present to us at Tech Field Day 5.

Xangati is focused on virtualization, and their software acts as a dashboard that collects information from various sources in your network, from ESX boxes to network interfaces.  It presents this information to you in an easy-to-read format, the oft-used “single pane of glass” metaphor.  One neat thing their software allows you to do is go back in time to see the events taking place right up to the point where your VMs went belly up, for instance.  This DVR-like functionality is very helpful when you find yourself in a situation where no one problem was the root cause of your issue, but instead you succumbed to the weight of multiple minor issues, the “Death by 1,000 Cuts” syndrome.  With Xangati, you can replay a mountain of data to find the root cause of your issue without needing to sift through endless router logs or VMware alerts.  One pane of glass means one source of easy-to-digest information.
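Just to illustrate the DVR idea (this is my own toy sketch, not Xangati’s implementation), the core of it is a bounded ring of timestamped samples that you can replay for any window after the fact:

```python
from collections import deque

class MetricDVR:
    """Record timestamped metric samples in a bounded ring buffer
    so a past window can be 'replayed' after an incident."""

    def __init__(self, capacity=10000):
        self.samples = deque(maxlen=capacity)   # oldest samples fall off

    def record(self, ts, source, metric, value):
        self.samples.append((ts, source, metric, value))

    def replay(self, start_ts, end_ts):
        """Return every sample that landed in the incident window."""
        return [s for s in self.samples if start_ts <= s[0] <= end_ts]

dvr = MetricDVR(capacity=5)
for t in range(8):
    dvr.record(t, "esx01", "cpu", t * 10)
# capacity 5 means only t = 3..7 survive; replay the window around t = 5
window = dvr.replay(4, 6)
```

The real product obviously correlates across many sources rather than one flat list, but the “rewind to just before the crash” workflow maps onto exactly this kind of structure.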

For the moment, Xangati appears to be focused on providing their services in a report-only mode.  At a roundtable afterwards, though, Sean Clark brought up the point that this could be viewed as a great framework for allowing some kind of automated DRS-type solution that draws on the firehose of information gathered by the current Xangati tool.  Is this something that might lead them to being a target ripe for acquisition?  Or is this a capability that might be developed in house at some point?  I can’t say for sure, but I know that getting good information about what’s going on in your network is the first step in being proactive about troubleshooting.  And based on what I’ve seen of Xangati’s tools, I think they’ve got the right idea to get the information to you when you need it the most.  I’m sure I’m going to take a second look at this product as time allows.

If you’d like to hear more about what Xangati has to offer, you can check them out on their website or follow them on Twitter as @xangatipress.

Tech Field Day Disclaimer

Xangati was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  In addition, they were the sponsor of our Thursday night meal and trip to the Computer History Museum.  At no time did they ask for or receive any consideration in the writing of this article.  The opinions and analysis expressed herein are mine and mine alone.

Tech Field Day – Druva

I grew up on a farm, and my mother’s family includes many farmers.  This has afforded me some interesting opportunities, one of which was watching a calf being born.  Just hours after birth, the calf can stand on its own.  It’s a magical experience that shows you how something so small can grow and change in such a short time.  And I got to experience something similar on Thursday afternoon at Tech Field Day 5.

Druva is a new company that officially launched at TFD5.  Now, they weren’t a “brand new” company with a lot of dreams and talk.  Much like the calf above, they’ve found their legs and are standing on their own now.  They have taken an interesting approach to backup technology and used it to address a segment of the market which I honestly hadn’t thought of before.  And, I really like their name.  “Druva” is the name for Polaris, the North Star in Hindu mythology.  For centuries, people have depended on the North Star to guide them.  Yet it is a simple resource that is always there and available when needed.  These were the guidelines that helped Druva develop their offering.

Druva is attacking the endpoint backup market.  They believe that the hardest devices in your environment to keep safe are not the servers and SANs, but the user laptops, desktops, and mobile devices.  There is a large amount of data contained on these devices that is rarely backed up, and its loss can lead to severe downtime in the event of a theft or a technical problem of some kind.  As well, more and more of this data is being squirreled away on iPads and iPhones, devices that are difficult to reliably back up from an enterprise admin perspective.

Druva is ready to prove what they say.  Their server/client software download is a mere 40MB.  In a world where I can barely make a 10-slide presentation smaller than that, Druva can protect my laptop from data loss in the event it gets thrown into the office lobster tank.  After installing the program on a server, you can configure lots of different options to create and import user accounts that represent the target devices that need to be backed up.  Once an account is created, you send an email to the user to validate them, and they then download a client to their system to allow backups to begin.  Druva deduplicates the data before it’s ever sent from the client system, based on the fact that 80-90% of the data contained on corporate workstations is MS Office and MS Outlook content.  By knowing how to efficiently hash and deduplicate that data, they can streamline the backup process, allowing much less data to be sent over slow links and shortening the time the user is impacted by the backup window.
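Source-side dedupe of the kind Druva described can be sketched in a few lines of Python.  This is my own toy version with fixed-size chunks and SHA-256 hashes; the real product is surely much smarter about chunk boundaries and Office file formats:

```python
import hashlib

def dedupe_chunks(data, chunk_size, seen):
    """Split data into fixed-size chunks, hash each one, and only
    'send' chunks whose hash has never been seen before.  `seen` is
    the shared hash index that persists across backup runs."""
    to_send = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:      # new content: ship it and remember it
            seen.add(digest)
            to_send.append(chunk)
    return to_send

seen = set()
first = dedupe_chunks(b"A" * 4096 + b"B" * 4096, 4096, seen)   # both chunks are new
second = dedupe_chunks(b"A" * 4096 + b"C" * 4096, 4096, seen)  # only the C chunk is new
```

The second backup run only transmits the one chunk the server hasn’t seen, which is exactly why a folder full of near-identical Office documents shrinks so dramatically on the wire.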

Druva’s live backup and restore demo was hampered not by Druva technical challenges but by connectivity issues.  Their laptop was connected to a Cradlepoint personal hotspot device brought along by Stephen Foskett, and with all the other devices sharing the guest wireless and the Cradlepoint, the connection was saturated.  It almost felt like being on dial-up again.  Still, I was impressed by the small amount of data being sent over the link, a scant 63MB.  I don’t know how big the folder was originally, but if it was a standard document folder containing hundreds of MB of data, that’s definite proof the dedupe works.

We were able to perform a single file restore to one of the iPads Druva had brought along.  So, once again Apple saves the day.  All in all, I like this product and think it has some capabilities that are sorely missing from the backup solutions offered by some of their competitors.  And just like good tech people, the whiz kids at Druva aren’t resting on their laurels.  They were talking to us about branching out and finding new uses for this technology and new ways to think about backups for more than just endpoints.  I can’t wait to see how they grow and change in the coming months, and I wish them the best of luck in their endeavors.

A funny note about Druva.  We were having issues before the presentation figuring out which Twitter account was their main one, @druva or @druvainc.  We talked with Jaspreet and he told us that once upon a time, the name of the company was actually “Druvaa”.  One of their customers remarked that if they were really in the business of data deduplication, why did they have two A’s in their name? So Druva deduped their own name.  That’s dedication, folks.

If you are interested in checking out Druva, head over to their website and download their product to try out.  You can also follow them on Twitter as @druvainc.

Tech Field Day Disclaimer

Druva was a sponsor of Tech Field Day 5, and as such was partly responsible for my airfare and hotel accommodations.  Druva did not ask for nor were they promised any consideration in this review.  My opinions and analysis are my own.