Automating vSphere with VMware vCenter Orchestrator – Review

I’ll be honest.  Orchestration, to me, is something a conductor does with the Philharmonic.  I keep hearing the word thrown around in virtualization and cloud discussions but I’m never quite sure what it means.  I know it has something to do with automating processes and such but beyond that I can’t give a detailed description of what is involved from a technical perspective.  Luckily, thanks to VMware Press and Cody Bunch (@cody_bunch) I don’t have to be uneducated any longer:

One of the first books from VMware Press, Automating vSphere with VMware vCenter Orchestrator (I’m going to abbreviate to Automating vSphere) is a fine example of the type of reference material that is needed to help sort through some of the more esoteric concepts surrounding virtualization and cloud computing today.  As I started reading through the introduction, I knew immediately that I was going to enjoy this book immensely due to the humor and light tone.  It’s very easy to write a virtualization encyclopedia.  It’s another thing to make it readable.  Thankfully, Cody Bunch has turned what could have otherwise been a very dry read into a great reference book filled with Star Trek references and Monty Python humor.

Coming in at just over 200 pages with some additional appendices, this book once again qualifies as “pound cake reading”, in that you need to take your time and understand that length isn’t the important part, as the content is very filling.  The author starts off by assuming I know nothing about orchestration and filling me in on the basics behind why vCenter Orchestrator (vCO) is so useful to overworked server/virtualization admins.  The opening chapter makes a very good case for the use of orchestration even in smaller environments due to the consistency of application and scalability potential should the virtualization needs of a company begin to increase rapidly.  I’ve seen this myself many times in smaller customers.  Once the restriction of one server to one operating system is removed, virtualized servers soon begin to multiply very quickly.  With vCO, managing and automating the creation and curation of these servers is effortless.  Provided you aren’t afraid to get your hands dirty.  The rest of Part I of the book covers the installation and configuration of vCO, including scenarios where you want to split the components apart to increase performance and scalability.

Part II delves into the nuts and bolts of how vCO works.  Lots of discussion about workflows that have containers that perform operations.  When presented like this, vCO doesn’t look quite as daunting to an orchestration rookie.  It’s important to help the new people understand that there really isn’t a lot of magic in the individual parts of vCO.  The key, just like a real orchestra, is bringing them together to create something greater than the sum of its parts.  The real jewel of the book to me was Part III, a case study with a fictional retail company.  Case studies are always a good way to ground readers in the reality and application of nebulous concepts.  Thankfully, the Amazing Smoothie company is doing many of the things I would find myself doing for my customers on a regular basis.  I enjoyed watching the workflows and JavaScript come together to automate menial tasks like consolidating snapshots or retiring virtual machines.  I’m pretty sure that I’m going to find myself dog-earing many of the pages in this section in the future as I learn to apply all the nuggets contained within to real life scenarios for my own environment as well as those of my customers.
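To give you a flavor of what that scripting looks like, here’s a rough sketch of my own (not an excerpt from the book).  A vCO scriptable task is plain JavaScript run against objects the vCenter plugin exposes.  I’m assuming a workflow input named vm of type VC:VirtualMachine, that the plugin mirrors the vSphere API’s ConsolidateVMDisks_Task as consolidateVMDisks_Task(), and that the vim3WaitTaskEnd library action is available.  Verify all of those in the API Explorer of your vCO client before trusting my memory:

// Consolidate a VM's snapshot delta files if vSphere says it needs it.
if (vm.runtime.consolidationNeeded) {
    System.log("Consolidating disks for " + vm.name);
    var task = vm.consolidateVMDisks_Task();
    // Hold the workflow here until vCenter finishes (or fails) the task.
    System.getModule("com.vmware.library.vc.basic").vim3WaitTaskEnd(task, true, 2);
} else {
    System.log(vm.name + " doesn't need consolidation today");
}

Wrap something like that in a scheduled workflow and you never have to chase orphaned snapshot deltas by hand again.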

If you’d like to grab this book, you can pick it up at the VMware Press site or on Amazon.


Tom’s Take

I’m very impressed with the caliber of writing I’m seeing out of VMware Press in this initial offering.  I’m not one for reading dry documentation or recitations of facts and figures.  By engaging writers like Cody Bunch, VMware Press has made it enjoyable to learn about new concepts while at the same time giving me insight into products I never knew I needed.  If you are a virtualization admin that manages more than two or three servers, I highly recommend you take a peek at this book.  The software it discusses doesn’t cost you anything to try, but the sheer complexity of trying to configure it yourself could cause you to give up on vCO without a real appraisal of its capabilities.  Thanks to VMware Press and Cody Bunch, the small investment you make in this book will easily be offset by the gains in productivity down the road.

Book Review Disclaimer

A review copy of Automating vSphere with VMware vCenter Orchestrator was provided to me by VMware Press.  VMware Press did not ask me to review this book as a condition of providing the copy.  VMware Press did not ask for nor were they promised any consideration in the writing of this review.  The thoughts and opinions expressed herein are mine and mine alone.

Cisco ASA CX – Next Generation Firewall? Or Star Trek: Enterprise Firewall?

There’s been a lot of talk recently about the coming of the “next generation” firewall.  A simple firewall is nothing more than a high-speed packet filter.  You match on criteria such as an access list or protocol type and then decide what to do with the packet from there.  It’s so simple, in fact, that you can set up a firewall on a Cisco router like Jeremy Stretch has done.  However, the days of the packet filtering firewall are quickly coming to an end.  Newer firewalls must have the intelligence to identify traffic by more than just IP address or port number.  In today’s network world, almost all applications tunnel themselves over HTTP, either due to their nature as web-based apps or the fact that they take advantage of port 80 being open through almost every firewall.  The key to being able to identify malicious or non-desired traffic attempting to use HTTP as a “common carrier” is to inspect the packet at a deeper level than just the port number.  Of course, many of the firewalls that I’ve looked at in the past that claim to do deep packet inspection either did a very bad job of it or did such a great job inspecting that the aggregate throughput of the firewall dropped to the point of being useless.  How do we balance the need to look more closely at the packet with the desire to not have it slow our network to the point of tears?
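To make that concrete, here’s a toy sketch in JavaScript of the difference between the two approaches.  This is nothing like a real firewall’s data plane, and the hostnames are made up for illustration, but it shows the idea: both requests below arrive on TCP/80, so a packet filter with “permit tcp any any eq 80” treats them identically, and only a peek into the HTTP payload tells them apart.

function classify(payload) {
    // A port-based filter stops thinking here: "it's port 80, must be web."
    if (!/^(GET|POST|PUT|HEAD) /.test(payload)) {
        return "not-http";
    }
    // Deeper inspection: read the Host and User-Agent headers instead.
    var host = (payload.match(/\r\nHost: *([^\r\n]+)/i) || [])[1] || "";
    var agent = (payload.match(/\r\nUser-Agent: *([^\r\n]+)/i) || [])[1] || "";
    if (/farmville/i.test(host)) return "social-gaming";   // block this
    if (/windowsupdate/i.test(host)) return "patching";    // permit this
    if (/itunes/i.test(agent)) return "media-streaming";
    return "generic-web";
}

// Identical at layer 4, wildly different at layer 7:
classify("GET /play HTTP/1.1\r\nHost: apps.farmville.example\r\n\r\n");   // "social-gaming"
classify("GET /dl HTTP/1.1\r\nHost: dl.windowsupdate.example\r\n\r\n");   // "patching"

Now imagine running that kind of pattern matching on every flow at multiple gigabits per second and you can see why so many deep inspection boxes fall over.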

Cisco has spent a lot of time and money on the ASA line of firewalls.  I’ve installed quite a few of them myself and they are pretty decent when it comes to high speed packet filtering.  However, my customers are now asking for the deeper packet inspection that Cisco hasn’t yet been able to provide.  Next-Gen vendors like Palo Alto and Sonicwall (now a part of Dell) have been playing up their additional capabilities to beat the ASA head-on in competitions where blocking outbound NetBIOS-over-TCP is less important than keeping people off of Farmville.  To answer the challenge, Cisco recently announced the CX addition to the ASA family.  While I haven’t yet had a chance to fire one of these things up, I thought I’d take a moment to talk about it and aggregate some questions and answers about the specs and capabilities.

The ASA CX is a Security Services Processor (SSP) module that today runs on the ASA 5585-X model.  It’s a beastly server-type device that has 12GB or 24GB of RAM, 600GB of RAID-1 disk space, and 8GB of flash storage.  The lower-end model can handle up to 2Gbps of throughput and the bigger brother can handle 5Gbps.  It can identify over 1,000 applications and more than 75,000 “micro” applications to determine whether the user is listening to iTunes in the cloud or watching HD video on YouTube.  The ASA CX also utilizes other products in the Cisco Secure-X portfolio to feed it information.  The Cisco AnyConnect Secure VPN client allows the CX to identify traffic that isn’t HTTP-based, as right now the CX can only identify traffic via the HTTP User-Agent in the absence of AnyConnect.  In addition, the Cisco Security Intelligence Operations (SIO) Manager can aggregate information from different points on the network to give the admins a much bigger picture of what is going on to prevent things such as zero-day attack outbreaks and malware infections.

One of the nice new features of the ASA CX that’s been pointed out by Greg Ferro is the user interface for the CX module.  Rather than relying on the Java-based ASDM client or forcing users to learn yet another CLI convention, Cisco decided to include a copy of the Cisco Prime Security Manager on-box to manage the CX module.  This is arguably the easiest way Cisco could have given customers access to the features of the new CX module.  I’ve recently had a chance to play around with the Identity Services Engine (ISE), and while the UI is very slick and useful, I cried a little when I started using the ADE-OS interface on the CLI.  It’s not the same as the IOS or CUCM CLI that I’m used to, so I spent much of my time figuring out how to do things I’ve already learned to do twice before.  Instead, with the CX Prime Security Manager interface, Cisco has allowed me to take a UI that I’m already comfortable with and apply it to the new features in the firewall module.  In addition, I can forego the use of the on-box Prime instance and instead register the CX to an existing Prime installation for a single point of management for all my security needs.  I’m sure that the firewall itself still needs to use ASDM for configuration and that the Prime instance is only for the CX module, but this is still a step in the right direction.

There are some downsides to the CX right now.  That’s to be expected in any 1.0-type launch.  Firstly, you need an ASA 5585-X to run the thing.  That’s a pretty hefty firewall.  It’s an expensive one too.  It makes sense that Cisco will want to ensure that the product works well on the best box it has before trying to pare down the module to run effectively on the lower ASA-X series firewalls.  Still, I highly doubt Cisco will ever port this module to run on the plain ASA series.  So if you want to do Next-Gen firewalling, you’re going to need to break out the forklift no matter what.  In the 1.0 CX release, there’s also no support for IPS, non-web-based application identification without AnyConnect, or SSH decryption (although it can do SSL/TLS decryption on the fly).  It also doesn’t currently integrate with ISE for posture assessment and identity enforcement.  That’s going to be critical in the future to allow full integration with the rest of Secure-X.

If you’d like to learn more about the new ASA CX, check out the pages on Cisco’s website.  There’s also an excellent YouTube walkthrough:


Tom’s Take

Cisco has needed a Next-Gen firewall for quite a while.  When the flagship of your fleet looks like the Stargazer instead of the Enterprise-D, it’s time for a serious upgrade.  I know that there have been some challenges in Cisco’s security division as of late, but I hope that they’ve been sorted out and they can start moving down the road.  At the same time, I’ve got horrible memories of the last time Cisco tried to extend the Unified Threat Management (UTM) profile of the ASA with the Content Security and Control (CSC) module.  That outsourced piece of lovely was a source of constant headache for the one or two customers that had it.  On top of it all, everything inside was licensed from Trend Micro.  That meant that you had to pay them a fee every year on top of the maintenance you were paying to Cisco!  Hopefully, by building the CX module with Cisco technologies such as Network-Based Application Recognition (NBAR) version 2, Cisco can avoid having the new shiny part of its family panned by the real firewall people out there and left to languish year-to-year before finally being put out of its misery, much like the CSC module or Star Trek: Enterprise.  I’m sure that’s why they decided to call the new module the CX and not the NX.  No sense cursing it out of the gate.

Minimizing MacGyver

I’m sure at this point everyone is familiar with (Angus) MacGyver.  Lee David Zlotoff created a character, expertly played by Richard Dean Anderson, that has become beloved by geeks and nerds the world over.  This mulleted genius was able to solve any problem with a simple application of science and whatever materials he had on hand.  Mac used his brains before his brawn and always before resorting to violence of any kind.  He’s a hero to anyone that has ever had to fix an impossible problem with nothing.  My cell phone ringtone is the Season One theme song to the TV show.  It’s been that way ever since I fixed a fiber-to-Ethernet media converter with a paper clip.  So it is with great reluctance that I must insist that it’s time network rock stars move on from my dear friend MacGyver.

Don’t get me wrong.  There’s something satisfying about fixing a routing loop with a deceptively simple access list.  The elegance of connecting two switches back-to-back with a fiber patch cable that has been rigged between three different SC-to-ST connectors is equally impressive.  However, these are simply parlor tricks.  Last-ditch efforts of our stubborn engineer-ish brains to refuse to accept failure at any cost.  I can honestly admit that I’ve been known to say out loud, “I will not allow this project to fail because of a missing patch cable!”  My reflexes kick in, and before I know it I’m working on a switch connected to the rest of the network by a strange combination of baling wire and dental floss.  But what has this gained me in the end?

Anyone that has worked in IT knows the pain of doing a project with inadequate resources or insufficient time.  It seems to be a trademark of our profession.  We seem like miracle workers because we can do the impossible with less than nothing.  Honestly though, how many times have we put ourselves into these positions because of hubris or short-sightedness?  How many times have we convinced ourselves that a layer 2 switch will work in this design?  Or that a firewall will be more than capable of handling the load we place on it, even if we find out later that the traffic is more than triple the original design?  Why do we subject ourselves to these kinds of tribulations knowing that we’ll be unhappy unless we can use chewing gum and duct tape to save the day?

Many times, all it takes is a little planning up front to save the day.  Even MacGyver does it.  I always wondered why he carried a roll of duct tape wherever he went.  The MacGyver Super Bowl commercial from 2009 even lampooned his need for proper preparation.  I can’t tell you the number of times I’ve added an extra SFP module or fiber patch cable knowing that I would need it when I arrived on site.  These extra steps have saved me headaches and embarrassment.  And it is this prior proper planning that network engine…rock stars must rely on in order to do our jobs to the fullest possible extent.  We must move away from the baling wire and embrace the bill of materials.  No longer should we carry extra patch cables.  Instead we should remember to place them in the packages before they ship.  Taking things for granted will end in heartache and despair.  And force us to rely less on our brains and more on our reflexes.

Being a Network MacGyver makes me beam with pride because I’ve done the impossible.  Never putting myself in the position to be MacGyver makes me even more pleased because I don’t have to break out the duct tape.  It means that I’ve done all my thinking up front.  I’m content because my project should just fall into place without hiccups.  The best projects don’t need MacGyver.  They just need a good plan.  I hope that all of you out there will join me in leaving dear Angus behind and instead following a good plan from the start.  We only make ourselves look like miracle workers when we’ve put ourselves in the position to need a miracle.  Instead, we should dedicate ourselves to doing the job right before we even get started.

Janetter – The Twitter Client That Tweetdeck Needs To Be

Once I became a regular Twitter user, I abandoned the web interface and instead started using a client.  For a long while, the de facto client for Windows was Tweetdeck.  The ability to manage lists and segregate users into classifications was very useful for those that follow a very eclectic group of Twitterers.  Also very useful to me was the multiple column layout, which allowed me to keep track of my timeline, mentions, and hashtag searches.  This last feature was the most attractive to me when attending Tech Field Day events, as I tend to monitor the event hashtag closely for questions and comments.  So it was that I became a regular user of Tweetdeck.  It was the only reason I installed Adobe Air on my desktop and laptop.  It was the first application I launched in the morning and the last I closed at night.

That was before the dark times.  Before…Twitter.

Last May, Twitter purchased Tweetdeck for about $40 million.  I was quite excited at first.  The last time this happened, Twitter turned the Tweetie client for iPhone and Mac into the official client for those platforms.  I liked the interface on the iPhone and hoped that Twitter would pour some development into Tweetdeck and turn it into the official cross-platform client for power users.  Twitter took their time consolidating the development team and updating Tweetdeck as they saw fit.  About six months later, Twitter released Tweetdeck 1.0, an increase from Tweetdeck’s last version of 0.38.2.  Gone was the dependency on Adobe Air, replaced by HTML5.  That was probably the only good thing about it.  The interface was rearranged.  Pieces of critical information, like the date and time of tweets, were gone.  The interface defaulted to using “real” names instead of Twitter handles.  The multiple column layout was broken.  All in all, it took me about a day to delete the 1.0 app from my computer and go back to the version 0.38 Air app.  I’d rather have an old client that works than a newer broken client.

As the weeks passed, I realized that Tweetdeck Air was having a few issues.  People would randomly be unfollowed.  Tweets would have issues being posted.  Plus, I knew that I would eventually be forced to upgrade if Twitter released a killer new feature.  I wanted a new client but I wanted it to be like the old Tweetdeck.  I was about to give up hope when Matthew Norwood (@MatthewNorwood) mentioned that he’d been using a new client.  Behold – Janetter:

It even looks like the old Tweetdeck!  It uses the Chromium rendering engine (Webkit on Mac) to display tweets.  This also means that the interface is fully skinnable with HTML5 and CSS.  Support for multiple accounts, multiple columns, lists, and filtering/muting make it just as useful as the old Tweetdeck.  Throw in in-line image previews and the ability to translate highlighted phrases and you see that there’s a lot here to utilize.  I started using it as my Windows client full time and I use the Mac version when I need to monitor hashtags.  I find it very easy to navigate and use.

That’s not to say there aren’t a couple of caveats.  Keeping up with a large volume of tweets can be cumbersome if you step away from the keyboard.  The auto scrolling is a bit buggy sometimes.  Also, tweets I’ve already read sometimes get randomly marked as unread again.  The default user interface is a bit of a trainwreck (I recommend the Deep Sea theme).  Despite these little issues, I find Janetter to be a great replacement overall for the Client Formerly Known As Tweetdeck for those of you that miss the old version but can’t bring yourself to install the very “1.0” product that Twitter released.  Perhaps with a little time and some proper feedback, Twitter will remake their version of Tweetdeck into what it used to be, with some polish and new features.

Head over to http://janetter.net to see more features or download a copy for your particular flavor of operating system.  You can also download Janetter through the Mac App Store.

Is Dell The New HP? Or The Old IBM?

Dell announced its intention today to acquire Sonicwall, a well-respected firewall vendor.  This is just the latest in a long line of fairly recent buys for Dell, including AppAssure, Force10, and Compellent.  There’s been a lot of speculation about the reasons behind the recent flurry of purchases coming out of Austin, TX.  I agree with the majority of what I’m hearing, but I thought I’d point out a few things that I think make a lot of sense and might give us a glimpse into where Dell might be headed next.

Dell is a wonderful supply chain company.  I’ve heard them compared to Walmart and the US military in the same breath when discussing efficiency of logistics management.  Dell has the capability of putting a box of something on your doorstep within days of ordering.  It just so happens that they make computer stuff.  For years, Dell seemed to be content to partner with companies and utilize their supply chain to deliver other people’s stuff.  After a while, Dell decided to start making that stuff for themselves and cut out the middle man.  This is why you see things like Dell printers and switches.  It didn’t take long for Dell to change its mind, though.  It made little sense to devote so much R&D to copying other products.  Why not just spend the money on buying those companies outright?  I mean, that’s how HP does it, right?  And so we start the acquisition phase for Dell.  Since acquiring EqualLogic in 2008, they’ve bought five other companies that make everything from enterprise storage to desktop management.  The only one they missed out on was 3PAR, which happened because HP threw a pile of cash at 3PAR to keep it from going to Dell.  I’m sure that was more about denying Dell an enterprise storage vendor than it was about using 3PAR to its fullest capabilities.

Dell still has a lot of OEM relationships, though.  Their wireless solution is OEMed from Aruba.  They resell Juniper and Brocade equipment as their J-series and B-series respectively.  However, Dell is trying to move into the data center to fight with HP, Cisco, and IBM.  HP already owns a data center solution top to bottom.  Cisco is currently OEMing their solution with EMC (vBlock).  I think Dell realizes that it’s not only more profitable to own the entire solution in the DC, it’s also safer in the long term.  You either support all your own equipment, or you have to support everyone’s equipment.  And if you try to support someone else’s stuff, you have to be very careful you don’t upset the apple cart.  Case in point: last year many assumed Cisco was on the outs with EMC because they started supporting NetApp and Hyper-V.  If you can’t keep your OEM DC solution partners happy, you don’t have a solution.  From Dell’s perspective, it’s much easier to appease everyone if they’re getting their paychecks from the same HR department.  Dell’s acquisitions of Force10 and, now, Sonicwall seem to support the idea that they want the “one throat to choke” model of solution delivery.  Very strategic.

The only problem that I have with this kind of Innovation by Acquisition strategy is that it only works when upper management is competent and focused.  So long as Michael Dell is running the show in Austin, I’m confident that Dell will make solid choices and bring on companies that complement their strategies.  Where the “buy it” model breaks down is when you bring in someone that runs counter to your core beliefs.  Yes, I’m looking at HP now.  Ask them how they feel about Mark Hurd basically shutting down R&D and spending their war chest on Palm/WebOS.  Ask them if they’re still okay with Leo Apotheker reversing that decision only months later and putting PSG on the chopping block because he needed some cash to buy a software company (Autonomy), because software is all he knows.  If the ship has a good captain, you get where you’re going.  If the cook’s assistant is in charge, you’re just going to steam in circles until you run out of gas.  HP is having real issues right now trying to figure out who they want to be.  A year of second guessing and trajectory reversals (and re-reversals) has left many shell-shocked and gun-shy, afraid to make any more bold moves until the dust settles.  The same can be said of many other vendors.  In this industry, you’re only as successful as your last failed acquisition.

On the other hand, you also have to keep moving ahead and innovating.  Otherwise the mighty giants get left behind.  Ask IBM how it feels to now be considered an also-ran in the industry.  I can remember not too long ago when IBM was a three-letter combination that commanded power and respect.  After all, as the old saying goes, “No one ever got fired for buying IBM.”  Today, the same can’t be said.  IBM has divested much of its old power to Lenovo, spinning off the personal systems business to concentrate more on the data center and services divisions.  It’s made them a much leaner, meaner competitor.  However, it’s also stripped away much of what made them so unstoppable in the past.  People now look to companies like Dell and HP to provide top-to-bottom support for every part of their infrastructure.  I can speak from experience here.  I work for a company founded by an ex-IBMer.  For years we wouldn’t sell anything that didn’t have a Big Blue logo on it.  Today, I can’t tell you the last time I sold something from IBM.  It feels like the industry that IBM built passed them by because they sold off much of who they were to become what they wanted to be.  Now that they are where they want to be, no one recognizes who they were.  They will need to start fighting again to regain their relevance.  Dell would do well to avoid acquiring too much too fast, lest they suffer a similar fate.  Once you grow too large, you have to start shedding things to stay agile.  That’s when you start losing your identity.


Tom’s Take

So far, reaction to the Sonicwall purchase has been overwhelmingly positive.  It sets the stage for Dell to begin to compete with the Big Boys of Networking across their product lines.  It also more or less completes Dell’s product line by bringing everything they need in-house.  The only major piece they are still missing is wireless.  They OEM from Aruba today, but if they want to seriously compete they’ll need to acquire a wireless company sooner rather than later.  Aruba is the logical target, but are they too big to swallow so soon after Sonicwall?  And what of their new switching line?  No sense trampling on PowerConnect or Force10.  That leaves other smaller vendors like Aerohive or Meraki.  Either one might be a good fit for Dell.  But that’s a blog post for another day.  For right now, Dell needs to spend time making the transition with Sonicwall as smooth as possible.  That way, they can just be Dell.  Not the New HP.  And not the Old IBM.

DST Configuration – Just In the Nick of Time

Today is the dreaded day in the US (and other places) when we must sacrifice an hour of our precious time to the sun deity so that he might rise again in the morning.  While this is great for being outdoors and enjoying the sunshine all the way into the late evening hours, it does wreak havoc on our networking equipment that relies on precise timing to let us know when a core dump happened or when that last PRI call came in while running debug isdn q931.  However, getting the right time running on our devices can be a challenge.  In this post, I will cover configuring Daylight Saving Time on Cisco, HP, and Juniper network equipment for the most pervasive OS deployments.  Note that some configurations are more complicated than others.  Also, I will be using Central Time (CST/CDT) for my examples, which is GMT -6 (-5 in DST).  Adjust as necessary for your neck of the woods.  I’m also going to assume that you’ve configured NTP/SNTP on your devices.  If not, read my blog post about it and go do that first.  Don’t worry, I’ll still be here when you get back.  I have free time.
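For reference, the IOS side of that prerequisite is only a couple of lines.  The addresses below are documentation placeholders, so point them at your real time sources:

R1(config)# ntp server 192.0.2.1 prefer
R1(config)# ntp server 192.0.2.2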

Cisco

I’ve covered the basics of setting DST config on Cisco IOS before, but I’ll put it here for the sake of completeness.  In IOS (and IOS XR), you must first set the time zone for your device:

R1(config)# clock timezone <name> <GMT offset>
R1(config)# clock timezone CST -6

Easy, right?  Now for the fun part.  Cisco has always required manual configuration of DST on their IOS devices.  This is likely due to them being shipped all around the world, with various countries observing DST (or not) and even different regions observing it differently.  At any rate, you must use the clock summer-time command to configure your IOS clock to jump when needed.  Note that in the US, DST begins at 2:00 a.m. local time on the second Sunday in March and ends at 2:00 a.m. local time on the first Sunday in November.  That will help you decode this command string:

R1(config)# clock summer-time <name> recurring <week number start> <day> <month> <time to start> <week number end> <day> <month> <time to end>
R1(config)# clock summer-time CDT recurring 2 Sun Mar 2:00 1 Sun Nov 2:00

Now your clock will jump when necessary on the correct day.  Note that this manual configuration turned out to be really handy in 2007, when the US government changed DST from its previous schedule of beginning on the first Sunday in April and ending on the last Sunday in October.  With Cisco, manual reconfiguration was required, but no OS updates were needed.
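You can sanity-check the result with show clock detail, which shows the time source along with the computed summer time boundaries.  The output below is approximate, as the exact formatting varies between IOS releases:

R1# show clock detail
10:15:32.279 CDT Sun Mar 11 2012
Time source is NTP
Summer time starts 02:00:00 CST Sun Mar 11 2012
Summer time ends 02:00:00 CDT Sun Nov 4 2012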

HP (Procurve/E-Series and H3C/A-Series)

As near as I can tell, all HP Networking devices derive their DST settings from the OS.  That’s great…unless you’re working on an old device or one that hasn’t been updated since the last presidential administration.  It turns out that many old HP Procurve network devices still have the pre-2007 US DST rules hard-coded in the OS.  In order to fix them, you’re going to need to plug in a config change:

ProCurve(config)# time daylight-time-rule user-defined begin-date 3/8 end-date 11/1

I know what you’re thinking.  Isn’t that going to be a pain to change every year if the dates are hard-coded?  Turns out the HP guys were ahead of us on that one too.  The system is smart enough to know that DST always happens on a Sunday.  By configuring the rule to occur on March 8th (the earliest possible second Sunday in March) and November 1st (the earliest possible first Sunday in November), the system will wait until the Sunday that matches or follows that date to enact the DST for the device.  Hooray for logic!  Note that if you upgrade the OS of your device to a release that supports the correct post-2007 DST configuration, you won’t need to remove the above configuration.  It will work correctly.
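One more note: if your firmware is recent enough to know about the post-2007 rules, you shouldn’t even need the user-defined dates.  Newer ProCurve releases include canned daylight time rules, so a single line should do it (check your release notes to confirm your version ships the updated rule):

ProCurve(config)# time daylight-time-rule continental-us-and-canada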

Juniper

Juniper configures DST based on the information found in the IANA Time Zone Database, often just called tz.  First, you want to get your device configured for NTP.  I’m going to refer you to Rich Lowton’s excellent blog post about that.  After you’ve configured your timezone in Junos, the system will automatically correct your local clock to reflect DST when appropriate.  Very handy, and it makes sense when you consider that Junos is based heavily on BSD for basic OS operation.  One thing that did give me pause about this has nothing to do with Junos itself, but with the fact that there have been issues with the tz database, even as late as last October.  Thankfully, that little petty lawsuit was sidestepped thanks to IANA taking control of the tz database.  Should you find yourself needing to make major changes to the Junos tz database without doing a complete system update, check out these handy instructions for setting a custom timezone over at Juniper’s website.  Just don’t be afraid to get your hands dirty with some BSD commands.
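For completeness, the whole exercise on the Junos side boils down to a few lines.  I’m using my Central Time example here, so swap in your own zone name from the tz database and your real NTP server for the documentation address below:

[edit]
user@junos# set system time-zone America/Chicago
user@junos# set system ntp server 192.0.2.1
user@junos# commit

Once that’s committed, the DST jump happens on its own.  No summer-time incantations required.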


Tom’s Take

Daylight Saving Time is one of my least favorite things.  I can’t see the advantage of having that extra hour of daylight to push the sunlight well past bedtime for my kids.  Likewise, I think waking up to sunrise is overrated.  As a networking professional, DST changes give me heartburn even when everything runs correctly.  And I’m not even going to bring up the issues with phone systems like CallManager 4.x and the “never going to be patched” DST issues with Windows 2000.  Or the Java issues with 79xx phones that still creep up to this day and make DST a confusing couple of weeks for those that won’t upgrade technology.  Or even the bugs in the iPhone with DST that cause clocks to spring the wrong way or alarms to fail to fire at the proper time.  In the end though, network enginee…rock stars are required to pull out our magical bags and make everything “just work”.  Thanks to some foresight by major networking vendors, it’s fairly easy to figure out DST changes and have them applied automagically.  It’s also easy to change things when someone decides they want their kids to have an extra hour of daylight to go trick-or-treating at Halloween (I really wish I was kidding).  If you make sure you’ve taken care of everything ahead of time, you won’t have to worry about losing more than just one hour of sleep on the second Sunday in March.

Cisco CoLaboratory – Any Questions? Any Answers?

Cisco has recently announced the details of their CoLaboratory program for the CCNP certification.  This program is focused on those out there certified as CCNPs with a couple of years of job experience that want to help shape the future of the CCNP certification.  You get to spend eight weeks helping develop a subset of exam questions that may find their way into the question pool for the various CCNP or CCDx tests.  And you’re rewarded for all your hard work with a one-year extension to your current CCNP/CCDx certification.

I got a chance to participate in the CCNA CoLab program a couple of years ago.  I thought it would be pretty easy, right?  I mean, I’ve taken the test.  I know the content forwards and backwards.  How hard could it be to write questions for the test?  Really Hard.  Turns out that there are a lot of things that go into writing a good test question.  Things I never even thought of.  Like ensuring that the candidate doesn’t have a good chance of guessing the answer.  Or getting rid of “all of the above” as an answer choice.  Turns out that most of the time “all of the above” is a choice, it’s the most frequently picked answer.  Same for “none of the above”.  I spent my eight weeks not only writing good, challenging questions for aspiring network rock stars, but I also got a crash course in why the Cisco tests look and read the way they do.  I found a new respect for those people that spend all their time trying to capture the essence of very dry reading material in just a few words and maybe a diagram.

I also found that I’ve become more critical of shoddy test writing.  Not just the all/none of the above type stuff, either.  How about questions that ask for three correct answers when there are only four choices?  There’s a good chance I’ll get that one right even just guessing.  Or one of my favorite questions to make fun of: “Each answer represents a part of the solution.  Choose all correct steps that apply.”  Those questions are not only easy to boil down to quick binary choices, but I hate that often there is one answer that sticks out so plainly that you know it must be the right answer.  Then there’s the old multiple choice standby: when all else fails, pick the longest answer.  I can’t tell you how much time I spent on my question submissions writing “good” bad answers.  There’s a whole methodology that I never knew anything about.  And making sure the longest answer isn’t the right one every time is a lot harder than you might think.

Tom’s Take

In the end, I loved my participation in the Cisco CoLaboratory program.  It gave me a chance to see tests from the other side of the curtain and learn how to better word questions and answers to extract the maximum amount of knowledge from candidates.  If you are at all interested in certifications, or if you’ve ever sat in a certification test and said to yourself, “This question is stupid!  I could write a better question than this.”, you should head over to the Cisco CoLaboratory page and sign up to participate.  That way you get to come up with good questions.  And hopefully better answers.