Death to OEQs!

Just when you think things can’t get any more interesting, a little nugget of news slips out and makes your day fun.  An announcement about changes to the CCIE Security exam leaked out this morning and was quickly retracted to be polished before being reissued tomorrow or the next day.  However, Natalie Timms, the CCIE Security Program Manager, confirmed in this thread that the changes were the removal of the Open Ended Questions (OEQs) and the addition of more hands-on configuration.  As soon as I saw this, my wheels started spinning.

Note: What follows is mostly conjecture based on opinions and conversations I’ve had with people in the industry.  Very little of it is confirmed as solid fact, but I’ll cite sources where appropriate.  Please don’t go telling people that my words are the gospel truth.  I don’t know any more than anyone else.

I think this movement is the beginning of the end of the OEQs.  They’ve been gone from the R&S lab for over a year now. The Voice lab has done away with them as well.  In the case of the R&S lab, they kept the new troubleshooting section in place as it served the same purpose as the OEQs: a section that can be changed rapidly to vary the difficulty of the lab.  The Voice lab introduced troubleshooting into the lab itself, either by making you diagnose broken things in your equipment or by forcing you to debug errors and copy them to text files as you would if you were going to forward them to TAC.  Integration of troubleshooting gives Cisco a good gauge of the candidate’s abilities and more closely ties the exam to the real-world skills of a network enginee…rock star.

The remaining CCIE tracks (Wireless, Service Provider/Operations, Storage, and Security) still have OEQs attached to them.  Makes for an interesting briefing in the morning when the proctor has to give 3 different sets of instructions based on what the initial setup of your lab might look like.  Candidates hate the OEQs.  They are a trivia section at best.  People say that they are easy, CCNA-to-CCNP level questions that any CCIE candidate should be able to answer.  I found the lack of specificity in some of the old OEQs I took maddening, and the lack of proctor assistance was irritating.  In fact, the continued inclusion of OEQs on the other CCIE tracks has made them a little less appealing to me, should I find myself crazy enough to even think about attempting it all over again.

With the announcement, retraction, and eventual re-announcement of the removal of the OEQs from the Security track, I’ve got high hopes now.  I think Cisco has enough data based on their year of R&S and Voice troubleshooting to see it as a viable alternative to Trivial Pursuit: CCIE Edition.  I’ll bet that there is going to be a section similar to the Voice lab where faults are injected (or user-created) in the lab and you’ll be required to diagnose and perhaps log them in files on the desktop.  This makes the most sense: some of the hardware can be emulated, like the IOU images that run in the R&S troubleshooting section, but emulating the specific ASICs and software on something like an ASA would be problematic at best.  By adding troubleshooting, the Security lab will start feeling more like a real-world scenario.

The Wireless track is due for a revamp in November.  Don’t be shocked to see the OEQs get stripped from it as well.  Wireless is a hard track with all the specific hardware required and would also lend itself well to Voice-style troubleshooting inside the lab exam.  The CCIE Storage exam is on its last legs and is most likely about to be replaced by a new CCIE track more focused around Cisco’s Unified Computing System (UCS), along with Nexus switching, Wide-Area Application Services (WAAS), and Fibre Channel over Ethernet (FCoE) storage that will require the MDS switches from the old Storage lab.  This CCIE Data Center track (if that’s what it ends up being called) is probably one of the worst-kept ‘secrets’ in the CCIE world, as I’ve had several people mention it to me, and a couple of candidates have even asked the proctors when the lab would be retooled to include it.  In the interest of complete fairness, the proctor’s comment was “No comment.”

That leaves Service Provider and Service Provider Operations as the only remaining OEQ-enabled labs.  I doubt that Cisco will leave the OEQs there if it removes them from the other tracks.  The SP lab recently received a refresh and the SPO lab is very new.  I think that there will be an announcement very similar to the Security lab that removes the OEQs, but rather than injecting faults in this lab, they may try for a troubleshooting section down the road similar to the R&S lab.  This could be accomplished with the IOU images that are in use now for the R&S TS section.  Addition of the IOS-XR content would require something different, perhaps the mythical “Titanium” emulator for XR that I keep hearing about yet have never seen (much like IOU only a few months ago).  The addition of a real TS section would change the content drastically though, so it would require six months’ notice before being implemented.  In that time, however, they could use an in-lab troubleshooting method just like the other tracks.

——————————————————————————————————————–

*EDIT*

Thanks to Youssef El Fathi for pointing out that the SP lab has not had OEQs since the 3.0 revision early in 2011.  The thread confirming this from June 8th is HERE.  If that truly is the case, then I don’t see any reason why there should continue to be OEQs in any other tracks.

——————————————————————————————————————–

Tom’s Take

There you have it.  A road map for eliminating the OEQs and banishing them to the same circle of hell as ARCNet and MicroChannel buses.  While I can’t confirm any of my suspicions outside the semi-firm announcement of the removal of OEQs from the Security exam, it makes the most sense that Cisco is ready to implement this change track-wide in the lab.  OEQs take a lot of time to grade and are slightly subjective.  Troubleshooting is pretty easy to grade in comparison – it either works or it doesn’t.  By standardizing on troubleshooting instead of OEQs as the preferred rapid-change method of candidate testing, things become a little more fair all around.  I plan on finding the CCIE program managers when I go to Cisco Live this year and asking them about upcoming changes to the tracks so that I can nail down what might be happening.  If they tell me that the OEQs really are going away, please don’t mistake my tears for sorrow.  They’ll be tears of unadulterated joy.

I’m going to say it again to avoid an international incident: This is all conjecture at this point.  If I turn out to be wrong, so be it.  However, I feel the time of the OEQs is over.  Don’t tell everyone on Groupstudy or OSL that this is the absolute truth until you get a confirmed press release from someone whose name ends in “@cisco.com”.

BYOD: High School Never Ends

There is a lot of buzz about the porting of applications to every conceivable platform.  Most of it can be traced back to a movement in the IT/user world known as Bring Your Own Device (BYOD), the idea that a user can bring in their own personal access device and still manage to perform their job functions.  I’m going to look at BYOD and why I think that it’s more of the same stuff we’ve been dealing with since lunch period in high school.

BYOD isn’t a new concept.  Contractors and engineers have been doing it for years.  Greg Ferro and Chris Jones would much prefer bringing their own MacBooks to a customer’s site to get the job done.  Matthew Norwood would prefer to have just about anything other than the corporate dinosaur that he babies through boot up and shut down.  Even I have my tastes when it comes to laptops.  Recently, though, the explosion of smartphones and tablets has caused a shift toward more ubiquitous computing.  It now seems to be a bullet-point requirement that your software or hardware has a support app in a cloud app repository or the capability to be managed from a 3.5″ capacitive touch screen.  Battle lines are drawn around whether or not your software is visible on a Fruit Company Mobile Device or a Robot Branded Smarty Phone.  Users want to drag in any old tablet and expect to do their entire job function from a 7″ screen.

However, while BYOD is all about running software from any endpoint, the driving forces behind it aren’t quite as noble.  I think once I start describing how I see things, you’ll start noticing a few parallels, especially if you have teenagers.

- BYOD is about prestige.  Who usually starts asking about running an app on an iPad?  Well, besides the office Gadget Nerd that ran out and stood in line for 4 hours before leaving the store screeching like a kid in a candy store?  Odds are, it’s the CxO that comes to you and informs you that they’ve just purchased a Galaxy Tablet and they would like it set up.  The device is gingerly handed to you to perform your IT voodoo on, all while the executive waits patiently.  Usually, there is some kind of interjection from them about how they got a good deal and how the drone at the store told them it had a lot of amazing features.  The CxO usually can’t wait to show it around after you’ve finished syncing their mail and calendar and pictures of their expensive dogs.  Wanna know why?  Because it’s a status symbol.  They want to show off all the things it can do to those that can’t get one.  Whether that’s because the device is overpriced or simply unavailable through any supply chain, there are some people that revel in rubbing other people’s noses in opulence.  By showing off how their tablet or smartphone gets email and surfs the web, they are attempting to widen the IT class gap.  Sound like high school to you?  Air Jordans? Expensive blue jeans? Ringing any bells?  The same kind of people that liked to crow that their parents bought them a BMW in high school are the same ones that will gladly show off their iPad or Galaxy Tab solely for the purpose of snubbing you.  They couldn’t care less about doing their job from it.

- BYOD is about entitlement.  I could go on and on about this one, but I’ll try to keep it on topic.  There seems to be a growing belief in the younger generation that you as a company owe them something for coming to work for you.  They want things like nap time or gold stars next to their names for doing something.  No, really.  This naturally extends to their choice of work device.  I’m going to pick on Mac users here because that particular device comes up more often than not, but it extends to Linux users and Windows users as well.  The “entitled” user thinks that you should change your entire network architecture to suit their particular situation.  Something like this:

User: I can’t get my mail.

Admin: You’re using the Fail Mail client.  We’re on Exchange.  You’ll need to use Outlook.

User: I’m not installing Office on my system!  Microsoft is a cold-hearted company that murders orphans in Antarctica.  Fail Mail donates $.25 of every shareware license to the West Pacific Tree Slug Preservation Society.  I want to use my mail client.

Admin: I guess you could use the webmail…

User: How about you use the Fail Mail Server instead?  They donate $2 of every purchase to fungus research.  I think it’s a much more capable server than dumb old Exchange anyway.

Admin: <facepalm>

I hope this doesn’t sound familiar.  One of the great joys of IT is telling users you aren’t going to reinvent the wheel just to mollify them.  However, in many cases the user demanding you change everything happens to sign your paycheck.  That tends to end with you ripping out a mail server or reprogramming a whole tool because it used (or didn’t use) Flash or HTML5.

- BYOD is about never changing your perspective.  I have an iPad.  And an iPhone.  And a behemoth Lenovo w701 laptop.   And I use them all.  Often, I use them at the same time.  I see each as a very capable tool for what it’s designed to do.  I don’t read ebooks on my iPhone.  I don’t run virtual machines on my iPad.  And I don’t use my laptop for texting or phone calls.  Just like I don’t use screwdrivers like chisels or use a pipe wrench like a hammer.  However, there are some people that like picking up one device and never putting it down.  These people seem to believe that the world would be a more perfect place if they could sit in their chair and do their whole job from a touch screen.  They feel that moving to a laptop to type a blog post is a travesty.  Being forced to use a high-powered graphical desktop for CAD work is unthinkable.  I have to admit that I’ve tried to see things from their perspective.  I’ve tried to use my iPad to take notes and remotely administer servers.  Guess what?  I just couldn’t do it.  I’m a firm believer that tools should be used according to their design, rather than having a 56-in-one tool that does a lot of things poorly.

Tom’s Take

I think keeping your tools capable and portable is a very good thing.  I hate software that can only be run from a Windows 2000 server or needs a special hardware dongle to even start.  I love that tools are becoming web-enabled and can be used from any PC/Mac/toaster.  However, I also think that things need to be kept in perspective.  BYOD is a Charlie Foxtrot just waiting to happen if the motivations behind it aren’t honest and sincere.  Simply porting your management app to the App Store so the CxO can show off his new iPad while complaining that we need to scrap the company website because it uses Flash and no one will bother using their dumb old laptop ever again is really, really bad.  Give me a compelling reason to use your app, like a new intuitive interface or a remote capability I wouldn’t normally have.  Just putting your tablet app out so you can sound cool or fit in with the popular crowd won’t work any better than wearing parachute pants did in high school.  Except, this time you won’t get stuffed into a locker.  You’ll just lose my business.

Adrift In A Sea of Lulz

As I write this, it’s been about 24 hours since the hacking collective known as Lulzsec scuttled their ship and scattered to the four winds.  There’s been a lot of speculation as to what motivated the 50 days of hacking that has stirred up quite a bit of talk about exploiting security holes as well as what would cause the poster children of anti-sec hacking to disappear as quickly as they emerged.

Lulzsec emerged almost two months ago from the fires of the now-infamous Sony PSN hack.  It appears to have been formed by some of the Gn0sis people that hacked into the Gawker Media database and some other disaffected members of Anonymous.  After they popped up on the radar, they started posting a lot of supposedly secured information about all manner of things, from X-Factor contestant databases to FBI security contractors.  They also participated in other hacks, like taking cia.gov offline for a few hours.  Most recently, they posted a dump of the Arizona Department of Public Safety servers and some 750,000 AT&T subscriber accounts.  Their activities have caused a lot of questions about perimeter security and probably cost a few security professionals their jobs.

To Lulzsec, this was all a game.  A giant F-you to the whole security community at large.  Their manifesto reads a lot like some teenagers I know.  They do what they want, how they want, when they want.  At first, there was no rhyme or reason to their attacks.  Later, they started talking about their “anti-sec” agenda, the idea that information shouldn’t be buried and needs to be disseminated by whatever means necessary.  Indeed, their anti-sec agenda also extended to the idea that people with inadequate security needed to be exposed and publicly embarrassed to resolve these issues.

Just as quickly as they burst into the limelight, Lulzsec announced they were disbanding.  Theories abound as to the reason for their dissolution.  Did the feds get too close?  Was the lifting of the anonymous veil through leaking of personal information the last straw?  Did they simply get bored?  Answers won’t be forthcoming from the members themselves.  They seem to have faded right back into the anonymity they spawned from.  I think the answer to what is going on probably lies somewhere in the middle of all these things.

The Lulzsec hacks appear to have mostly centered around SQL injections.  The time-honored tradition of exploiting databases with carefully crafted query strings continues to be quite popular even today.  I think Lulzsec used this attack vector to great success against Sony and a couple of other choice targets up front.  After their initial success, their patterns seemed to be haphazard.  I think this is due to the nature of using their one attack against a variety of sites rather than targeting specific ones.  It was a brute force method of anarchy, kind of like using a screwdriver to do all your tool-related tasks.  It works really well for screwing, but not so well for hammering or sawing.  Once they managed to expose the FBI partner databases and take down the CIA’s small public-facing webserver, that brought significant attention from all angles, not typically something you want if you are trying to stay anonymous.  Then, other groups inside the scene started either getting jealous of the attention or decided to fight fire with fire.  That led to d0xing, the term used to describe the leaking of personal information that can be used to identify someone.  Between the public exposure and the looming investigation from some upset 3-letter agencies, I think that’s when the first members jumped off the Lulz Boat.  Rather than face what might be coming, they ducked out and headed back to the darkness that had protected them so well.  This has been somewhat confirmed in interviews with the publicly-known members.

The remaining Lulzsec members then seemed to have gone on a recruitment drive.  They tried to bring more talent into the fold.  I don’t think this newer group was quite as determined or successful as the first, though.  That led to a slowdown in target penetration.  You might argue that they’ve been releasing stuff right up to the end.  True…but all we know is that those sites were hacked; we don’t know when.  For all we know, AT&T could have been the second site they hacked.  AT&T was their ace in the hole.  If they were for real and ready to keep going for a long time to come, they would have released AT&T right away.  By putting it away and saving it for last, it appears they wanted one big splash before they were forgotten.  A vigorous, active Lulzsec would have been able to keep hitting bigger targets than AT&T.  After their success rate started dropping off, I think the remaining “old” members of Lulzsec probably did get bored.  Without new conquests to fuel their fame, the rush wasn’t there any more.  They decided to go out with a bang and quit while they were still ahead.  The remaining new recruits will probably go on to be folded into newer organizations that spring up in place of Lulzsec, the new breed of SQL injectors (or whatever is next), just like the Lulz Boat appears to have sailed with many Gn0sis crew members on board.

Tom’s Take

The black hat in me cheered Lulzsec for what they’ve accomplished.  The white hat is appalled.  Again, the truth lies somewhere in the middle.  I look at Lulzsec like the Joker in The Dark Knight.  A group of anarchist hackers that just want to have fun and burn everything down around them.  No agenda, no statements, just exploitation for fun.  A group of chaotic neutral script kiddies.  However, the very limelight they sought burned them enough to force them back into the shadows.  The way I look at it, the key to being a successful hacker is to not get caught.  Don’t get famous, and definitely don’t draw attention to yourself.  Kevin Mitnick had to learn that the hard way.  Something tells me that more than one of the passengers of the good ship Lulz will learn it the same way sooner rather than later.

My Phone Number is AAA-BCDA

Anyone that’s used a phone knows that there are letters on the keypad that make it handy to spell out words for those not gifted with the ability to remember long strings of numbers.  It’s also handy for marketing, for instance 1-800-FLOWERS.  Those that still use T9 predictive texting from a digit keypad probably have the letter positions memorized by now.  But what you may not know is that there are actually four extra lettered keys defined for a telephone dialpad.

Dual-Tone Multi-Frequency (DTMF) dialing is the modern way telephones signal the voice network over analog telephone lines.  Each keypress is a combination of two specific tones that correspond to the position of a key.  For instance, the ‘1’ key on a keypad is a combination of 697Hz played in conjunction with 1209Hz.  The ‘2’ key uses the same 697Hz signal, but plays it with a 1336Hz tone.  The ‘4’ key under the ‘1’ key uses a 770Hz tone in conjunction with the 1209Hz tone.  Each DTMF tone is a combination of one low-frequency tone and one high-frequency tone.  Normal telephone keypads are laid out like this:

         1209 Hz   1336 Hz   1477 Hz
697 Hz      1         2         3
770 Hz      4         5         6
852 Hz      7         8         9
941 Hz      *         0         #

You can click on each of those links to listen to the tone they make (Thanks Wikipedia!!!).

The military once used a special kind of phone system known as AutoVon (Thanks to Matthew Norwood for the correction and Jason Schmidt for pointing it out as well).  This was a phone system designed to survive a nuclear attack.  One of the key differentiators of AutoVon besides being hardened against the Russians was the addition of another column of DTMF keys.  These allowed the person dialing the phone to find an open line quickly, or in the event of a full network, to boot users off that were on lower-priority calls.  The keys were denoted with the letters A-D and had functions with suspiciously familiar sounding names: Flash Override (A), Flash (B), Immediate (C), and Priority (D). I’m sure most of you networking people out there know where those names are used in our little world.  Users that dialed a C before their number could boot those on regular calls or on Priority calls off in the event of line congestion.  Flash Override was reserved for use by the President of the United States, as it could boot off anyone on a call.  This same kind of preemption capability lives on in CUCM as Multilevel Precedence and Preemption (MLPP).  AutoVon was eventually replaced in the 1990s with a newer telephone network for use by the Defense Department.  However, the legacy of the additional keys that most of us have never seen lives on.

Here is the same table again, this time including the A-D DTMF tones:

         1209 Hz   1336 Hz   1477 Hz   1633 Hz
697 Hz      1         2         3         A
770 Hz      4         5         6         B
852 Hz      7         8         9         C
941 Hz      *         0         #         D

If you are a user of Cisco Unified Communications Manager Express (CUCME), you have access to the AutoVon A-D DTMF tones (from here on out, I’m going to call this “Army Dialing”).  The system can replicate the tones from these four keys.  You might say, “Cool.  What in the hell would I ever use this for?  No one can dial these numbers.”  Yep.  No one can dial these numbers from a regular phone keypad.  Think about it like this: you have access to a whole group of numbers that can only be dialed by the people you allow access.  The most popular use of this setup is for phone-to-phone intercoms.  By restricting the intercom number to an “Army Dial” number, no one can dial that intercom number by accident unless they have a button on their phone that speed dials the number.  Here’s an example:

CUCME(config)# ephone-dn 13
CUCME(config-ephone-dn)# number A100
CUCME(config-ephone-dn)# intercom A101 label "Networking Nerd"
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone-dn 14
CUCME(config-ephone-dn)# number A101
CUCME(config-ephone-dn)# intercom A100 label "Junior Admin"
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone 2
CUCME(config-ephone)# button 2:13
CUCME(config-ephone)# exit
CUCME(config)# ephone 3
CUCME(config-ephone)# button 2:14

This way, my intercom line can only be dialed from a phone with a speed dial button associated with the number.  I control who can call me (mwa ha ha…).  This could also be used for multicast paging directory numbers.  That way, only the designated phones have the ability to page and you can prevent unnecessary chatter on the speakers.
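If you wanted to wire up paging the same way, something along these lines should work in CME.  Fair warning: this is just a rough sketch, and the DN tag, paging number, and multicast address are ones I made up for illustration, so double-check the paging documentation for your CME version before trying it on anything that matters:

CUCME(config)# ephone-dn 20
CUCME(config-ephone-dn)# number B500
CUCME(config-ephone-dn)# name All Page
CUCME(config-ephone-dn)# paging ip 239.1.1.100 port 2000
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone 2
CUCME(config-ephone)# paging-dn 20
CUCME(config-ephone)# exit
CUCME(config)# ephone 3
CUCME(config-ephone)# paging-dn 20

The phones with paging-dn 20 configured join the multicast group and play pages over their speakers.  Since B500 starts with an “Army Dial” digit, the only phones that can actually send a page are the ones you’ve given a button or speed dial pointing at that number.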

I’m sure if you put your mind to it, you could find all sorts of interesting applications for this kind of feature.

Happy Twitterversary To Me!

Today marks the one year anniversary of my first tweet on Twitter.  I’d sing the “Happy Birthday” song, but the royalties on that little gem would cost me a fortune.  Instead, I’m going to spend some time talking about why I think Twitter is so very useful for IT people.

I have always spent a lot of time reading blogs.  Great content in concise, easy-to-digest format.  Especially when I started studying for my CCIE lab.  However, last year I noticed that some of my CCIE blogs weren’t being updated anymore, specifically CCIE Candidate and CCIE Pursuit.  I figured that CCIE Candidate wasn’t being updated quite as regularly anymore due to Ethan getting his number, so I decided to do a little digging.  Turns out Ethan had a new, non-CCIE focused blog at PacketAttack, but also had an account on Twitter (@ecbanks).  Now, I had my misgivings about Twitter.  It was a microblogging site dedicated to people telling me what they had for lunch or when they were taking a constitutional.  All the previous experiences I had seen on Twitter led me to believe that it wasn’t exactly a fun place to be.

However, after reading through Ethan’s tweets, I realized that there was a lot of good information and discussion being posted there.  I searched around and found a couple of other good tweet streams, including one from a real-life friend that I didn’t get to see much, Brad Stone (@bestone).  After mulling the decision back and forth for a day or two, I decided to take the plunge.  I tried several names before I finally came up with one that I thought personified both my desire for technical discussion and my outlook on things, @networkingnerd.  Once I signed up for Twitter, I started following a few people that I had found, like Ethan, Brad, and Narbik Kocharians (@narbikk).  I knew that the only way I could get more involved with what was going on was to start talking and see if anyone was listening.  At first, it felt like being one of those guys in the park standing on a soapbox with a bullhorn, shouting for all the world to hear while no one really listens.  Once I figured out how to address someone with a tweet to get their attention, the followers started taking off a little more.  Part of the key for me was staying focused on networking and tech and injecting a little snarkiness and humor along the way (something that would pay off later when I started blogging).

Another part of the reason I got involved with Twitter was to feel like a larger part of the IT community.  Last year, my annual sojourn to Cisco Live was coming up fast, and Cisco had been releasing a lot of good information and tips for Cisco Live attendees on Twitter.  Now, when I go to Cisco Live, I have a group of 5-6 people that I usually hang out with and do things like take the Cisco Challenge in the World of Solutions or heckle the bands at the Customer Appreciation Event.  However, thanks to Twitter this year I’ve got 50-60 people that I’m going to be hanging out with and meeting for the first time in real life.  Twitter also helped me get more information about events like Tech Field Day, which I had no idea about.  Later, Twitter helped me get my first invite to Tech Field Day, both through my involvement in the community and Twitter’s gateway effect that drove me to start blogging out my longer thoughts (like this one).

Twitter isn’t for everyone.  Some people have a hard time keeping up with the firehose of information that you get blasted with.  Others have a really hard time condensing thoughts down to less than 140 characters.  Still others never really find the right group to get involved in and write Twitter off as stupid or childish.  My counter to that kind of thinking is “You get what you put into it.”  I search out new and fun people to follow all the time.  I’m not afraid to unfollow someone if their tweets become pointless and overly distracting.  Twitter, for me, is about discussion.  Helping answer questions, learning about industry news before my bosses, even railing against hated protocols.  All of these things have increased the payoff I have received from Twitter in the last year.

At the same time, I make sure to respect the wishes of those that follow me.  I tend to relegate my non-IT related posts to something like Facebook.  I may post personal things on Twitter from time to time, but I tend to think of them more as little details about my life that help fill in the dark spots about me.  I don’t post about sports, even though my Facebook wall in the fall is a virtual commentary on college football every week. I don’t let applications tweet things for me if I can help it.  I don’t link my 4square account or let an unfollower app shout things no one else is interested in.  I have total control over my Twitter account to be sure that those that take time out of their schedules to listen to what I have to say will hear my words and not those of some robot.  Those that let their Twitter streams become a wasteland of contest entries and “I just unfollowed X people that didn’t follow me back” updates from applications usually fall by the wayside sooner or later.

Tom’s Take

People I know in real life make fun of me when I tell them I’m on Twitter.  They crack jokes about updates from the water closet or useless junk spamming my Twitter feed.  However, when the joking stops and they ask me what’s so compelling about it, I tell them “On Twitter, I learn things I actually WANT to know.”  My Facebook feed is a bunch of game updates and garbage about stuff I really don’t care to know most of the time.  Until my Twitter followers started friending me on Facebook, no one on Facebook knew about the depths of my nerdiness.  On Twitter, I feel free to talk about things like BGP or NAT without fear that I’m going to be deluged with comments from people who are hopelessly lost or would rather talk about the Farmville animals.  On Twitter, I’m free to indulge myself.  And the community that I’ve become a part of helps me develop and become a better person.  Without Twitter, I would never have been able to find so many people across the world that share my interests.  I never would have been pushed to increase the depth of my knowledge.  Dare I say it, I probably wouldn’t have been driven to get my CCIE nearly as much as I was thanks to the help of my Twitter friends.  In short, I’m glad I’ve had my first year on Twitter be as successful as it has been.  Here’s to many more.

Friday (+1) Links – 6/18/2011

So…I might have missed a Friday link post or two.  To be honest, I was so bogged down in last-minute cramming for the CCIE lab exam I didn’t look up to figure out what day it was.  Thankfully, some interesting things have happened in that time, so I’ve got a few of them to share:

Cisco Expands UC Virtualization Support To Add HP and IBM 

Until recently, Cisco customers were required to use the Unified Computing System (UCS) platform when running Unified Communications (UC) applications in a virtual environment. On June 7th Cisco introduced a new support model called “Specification-Based Hardware Support“. With this announcement Cisco widens virtualized platform support to include IBM and HP.

For those that constantly complained that virtualizing CUCM/CUC was only possible on Cisco UCS, here you go.  There are a few supported platforms from IBM and HP, but take care that your whitebox server probably isn’t going to ever be supported.

Screw 140 Characters: 32,000 Characters on How to Fix RIM and Blackberry 

Please note that since we wrote this for a class we had some specific items we needed to include, such as specific financial profitability targets for our recommendations, which would otherwise seem pretty odd in a blog post like this.

Good paper outlining the background of RIM and the troubles they’re going through right now.  While I don’t know if RIM can right this sinking ship any time soon, it appears that some people believe that RIM still has a chance to stay relevant.

Stuxnet Deconstructed Shows One Scary Virus 

Ready to shake in your shoes? This video breaking down how Stuxnet works and where it could go next is flat out frightening. (And if this wasn’t a government program, I’ll eat a centrifuge.)

Not surprised in the least.  This is the 50,000-foot overview of Stuxnet with some fancy infographic stuff thrown in.  Great if you’ve been wondering about Stuxnet.  Then head over here and read my ruminations about it.

An Outsider’s View of Junosphere

It’s no secret that learning a vendor’s equipment takes lots and lots of time at the command line interface (CLI).  You can spend all the time you want poring over training manuals and reference documentation, but until you get some “stick time” with the phosphors of a console screen, it’s probably not going to stick.  When I was studying for my CCIE R&S, I spent a lot of time using GNS3, a popular GUI for configuring Dynamips, the Cisco IOS simulator developed by the community.  There was no way I would be able to afford the equipment to replicate the lab topologies, as my training budget wasn’t very forgiving outside the test costs, and any equipment I did manage to scrounge up usually went into production soon after that.  GNS3 afforded me the opportunity to create my own lab environments to play with protocols and configurations.  I’d say 75-80% of my lab work for the CCIE was done on GNS3.  The only things I couldn’t test were hardware-specific configurations, like the QoS found on Catalyst switches, or things that caused massive processor usage, like configuring NTP on more than two routers.  I would have killed to have had access to something a little more stable.

Cisco recently released a virtual router offering based around IOS-on-Unix (IOU), a formerly-internal testing tool that was leaked and cracked for use by non-Cisco people.  The official IOU simulation from Cisco revolves around their training material, so using it to set up your own configurations is very difficult.  Juniper Networks, on the other hand, has decided to release an emulated OS environment built around its hardware operating system, Junos.  This product is called Junosphere.  I was recently lucky enough to take part in a Packet Pushers episode where we talked with some of the minds behind Junosphere.  What follows here are my thoughts about the product based on that podcast and conversations I’ve had with people in the industry.

Junosphere is a cloud-based emulation platform being offered by Juniper for the purpose of building a lab environment for testing or education purposes.  The actual hardware being emulated inside Junosphere is courtesy of VJX, a virtual Junos instance that allows you to see the routing and security features of the product.  According to this very thorough Q&A from Chris Jones, VJX is not simply a hacked version of Junos running in a VM.  Instead, it is fully supported release-track code that simply runs on virtual hardware instead of something with blinking lights.  This opens up all sorts of interesting possibilities down the road, very similar to Arista Networks’ vEOS virtualized router.  VJX evolved out of code that Juniper developers originally used to test the OS itself, so it has strong roots in the ability to emulate the Junos environment.  Riding on top of VJX is a web interface that allows you to drag-and-drop network topologies to create testing environments, as well as the ability to load preset configurations, such as those that you might get from Juniper to coincide with their training materials.  To reference this to something people might be more familiar with, VJX is like Dynamips, and the Junosphere lab configuration program is more like GNS3.

Junosphere can be purchased from a Juniper partner or directly from Juniper just like you would with any other Juniper product.  The reservation system is currently set up in such a way as to allot 24-hour blocks of time for Junosphere use.  Note that those aren’t rack tokens or split into 8-hour sessions.  You get 24 continuous hours of access per SKU purchase.  Right now, the target audience for Junosphere seems to be the university/academic environment.  However, I expect that Juniper will start looking at other markets once they’ve moved out of the early launch phase of their product.  I’m very much aware that this is all very early in the life cycle of Junosphere and emulated environments, so I’m making sure to temper my feelings with a bit of reservation.

As it exists right now, Junosphere would be a great option for the student wanting to learn Junos for the first time in a university or trade school type of setting.  By having continuous access to the router environments, these schools can add the cost of Junosphere rentals onto the student’s tuition costs and allow them 24-hour access to the router pods for flexible study times.  For self-study oriented people like me, this first iteration is less compelling.  I tend to study at odd hours of the night and whenever I have a free moment, so 24-hour access isn’t nearly as important to me as having blocks of 4 or 8 hours might be.  I understand the reasons behind Juniper’s decision to offer the time the way they have.  By offering 24-hour blocks, they can work out the kinks of VJX being offered to end users that might not be familiar with the quirks of emulated environments, unlike the developers that were the previous user base for the product.

Tom’s Take

I know that I probably need to learn Junos at some point in the near future.  It makes all the sense in the world to try and pick it up in case I find myself staring at an SRX in the future.  With emulated OS environments quickly becoming the norm, I think that Junosphere has a great start on providing a very important service.  As I said on Packet Pushers, to make it more valuable to me, it’s going to need to be something I can use on my local machine, a la GNS3 or IOU.  That way, I can fire it up as needed to test things or to make sure I remember the commands to configure IS-IS.  Giving me the power to use it without the necessity of being connected to the Internet or needing to reserve timeslots on a virtual rack is the entire reason behind emulating the software in the first place.  I know that Junosphere is still in its infancy when it comes to features and target audiences.  I’m holding my final judgement of the product until we get to the “run” phase of the traditional “crawl, walk, run” mentality of service introduction.  It helps to think about Junosphere as a 1.0 product.  Once we get the version numbers up a little higher, I hope that Juniper will have delivered a product that will enable me to learn more about their offerings.

For more information on Junosphere, check out the Junosphere information page at http://www.juniper.net/us/en/products-services/software/junos-platform/junosphere/.

A Lot Of People Take The Lab Seven Times…

A couple of people have asked about some highlights in my lab experiences while going for my CCIE. Here are a few of the more humorous points.

My first lab attempt was in December of 2008. This was back before Open-Ended Questions (OEQs) or the Troubleshooting section. I got my teeth kicked in by this first lab. By lunchtime, I was pretty much shell shocked. I didn’t talk to anyone and spent a lot of time staring at my lab binder. At 2:00 p.m., I was wrestling with a BGP problem that I refused to let go of until I solved it, even if it cost me the rest of my lab. I got up and decided to get a drink from the break room. In RTP, the breakroom and bathroom are down the hall from the lab. As I worked out the possible solutions to my issues in my head, I woke up from my mental fog and found myself in the bathroom wondering where the Coke machine was. That’s when I knew my goose was cooked on that attempt.

Number two was the first with the OEQs.  I switched lab locations from RTP to San Jose.  Firstly, because the lab started an hour later and I love my sleep.  Secondly, because I had the time change going from Central to Pacific working for me instead of going from Central to Eastern and always being behind.  I nailed the trivia section at the beginning but got hammered on the configuration section.  I realized that I was good at the theory, but I needed to concentrate on the application.  Number three was my last shot at the version 3 lab. I got enough points to pass the configuration section, but I missed too many trivia questions. I was livid. It’s like meeting someone for a first date and calling them by the wrong name 5 minutes after you meet. The rest of the night is a wash no matter what, so why bother putting yourself through it?

Number four was my first version 4 lab attempt. I refused to take the new lab so long as the OEQs were still there. Two things kicked me out of my self-imposed funk. First was the announcement of the elimination of the OEQs. Second was some words of encouragement from my friend Narbik Kocharians. At Cisco Live 2010, he told me that I just needed to keep it up until I got my number. So I took him up on his advice. Attempt four hurt a lot. The TS section wasn’t kidding around, and I got stomped by the lab. I felt almost the same as I did after attempt number one. The sole bright spot was my ever-increasing subscore. While I didn’t get enough points to pass either section, I was getting close to the top.  I just needed to find the drive to put myself over the summit.

Attempts five and six were my “near misses”. On five, I passed the TS but failed the configuration.  I was upset after that one.  I thought I had done a damn good job, only to get my score report back less than an hour after I left the lab building.  I retraced all my steps in my mind to find where I could have screwed up.  All the anger in the world wouldn’t get me past my failing mark, though.  IPExpert instructor Marko Milivojevic put it a little differently to me.  He told me there was no sense complaining about it. Get ready for the next one and get it done. On six, I failed TS and passed config. So, if you averaged those two, I passed. ;) Attempt six really bolstered my confidence. I knew I had failed the TS section after the first two hours. But rather than leave and enjoy the California sunshine, I stuck around and finished. In return, I was able to pass the config section for the first time.  It was a bright spot that led me to have a little hope that passing this thing was possible.

I didn’t say much about attempt number seven because I was anxious. I felt a little embarrassed that I was up that high.  I was worried that I’d disappoint everyone that had been keeping up with my battle with the dreaded lab. I decided to take the Navy SEAL approach. Get in, do the job, then talk about it after success. I was relaxed as I strolled into the lab Thursday morning.  There were a couple of first timers there, and I could tell they were nervous.  It reminded me of my first attempt.  I’m pretty sure Tom Eggers and Tong Ma recognized me from my last attempt.  I could have given the pre-lab briefing for the proctors.  For every previous attempt, I’d been seated at the same workstation. This time, I was beside my usual spot. Maybe a change of location would be a good thing.

I started the TS section and told myself not to get caught up on any questions. If I couldn’t get it in 10 minutes, move on to the next one. I started to panic a little by the third question, but after I made a change that fixed a bunch of things all at once, I was elated and plowed right through, finishing about 20 minutes early. Once I was sure all the conditions were satisfied and my configs were correct and saved, I jumped into the lab. Normally, I get up after reading over the lab and go to the bathroom and get a drink. This time, however, I was in the zone and I didn’t stop to think. I just kept going. Before I realized it, Tong was handing out the lunch vouchers.

After a nice lunch, I came back and dove right back in. A couple of silly mistakes right off the bat made me refocus on doing things right.  I cursed myself for such simple errors, knowing that the difference between typing the right command and the wrong one was razor thin inside Building C.  I shut out distractions and kept going. In fact, I didn’t even notice the guy beside me on his first attempt get up and leave just after lunch. Guess my old pod got him too. By the time I finished my first run through, it was 2:00. I looked up and thought to myself that this was very doable from this point on. I had 3 hours to make sure I was right this time. After rearming with a Mountain Dew, my first that day, I went back over my configs with a fine-tooth comb. Not a cursory inspection, but a real Navy SEAL dressing down. I forced myself to reverify every command, no stone left unturned. It’s a good thing I did, too. I found mistakes that would have cost me 7 points had I not corrected them. Those little lapses of attention very likely would have had me coming back for attempt number eight.

Once I double checked everything, literally in this case as each task had two check marks next to it on my paper, I reverified a few things I wasn’t sure about. After I satisfied myself with the answers, I turned in my scratch paper. It was 3:30, an hour and a half before time was up. When I walked out of Building C, I knew I had done my absolute best. I was confident that this would finally be the one. After catching up on Twitter and email, I celebrated with my usual trip to In-N-Out Burger. Back at the hotel, the minutes started ticking away. After reading about the last few passing attempts from my friends online, I knew the longer it took for your score report to come back, the better it would be for my chances. No report after an hour was good. After 3 hours, I was giddy. I’m sure the hotel bar had nothing at all to do with that. By 11:00, I was equally concerned and hopeful. Had the grading script messed up? Were the proctors going over my configs to shake out any flaws? I had given up the idea that I’d get my results before bed. After hopping in the shower, I walked over to shut off my computer before bed. It was then that Outlook delivered the dreaded email.

CCIE Score Report

I clicked on the link and logged into the site. I scanned to the bottom of the list and saw FAIL. My heart sank. How could I have failed?!? Then I realized I was reading the results of my first attempt in RTP. The newest score was at the top of the list. My eyes flitted over the four most wonderful letters I’d ever seen, P-A-S-S, quickly followed by a glance at the five most amazing numbers ever, 2-9-2-1-3. The whoop I let out surely had to wake the whole hotel. I called my lovely wife at 2:00 her local time and told her the good news. She informed me she was very happy. And also going back to bed. I, however, was wired. It was like the feeling of winning the big game a thousand times over. No more doubt, no more anticipation. No more second guessing myself and wondering if it would all ever be worth it. I emailed my boss and my Cisco account team. I chatted on Twitter with the poor souls awake at that hour. I made lists of who I needed to talk to. I tried to calm down. I finally fell asleep an hour later, but I didn’t really rest. The elation at my accomplishment kept me on high for a while. But at least I wasn’t dreaming about Indiana Jones and the Lost Prefix (that has happened before).  The next morning was filled with phone calls before my flight.  My boss answered on his speaker phone but quickly switched to his handset when I told him I was going to give him my lab results.  After informing him I passed, he laughed and said, “I could have left you on speaker for that good news.”  I boarded the plane home with a spring in my step for the first time in a long while.  Nothing could chisel the smile from my face.

Tom’s Take

What is best in life?  To crush the lab, see it driven before you, and hear the lamentations of the proctors.  Okay, maybe a little cheesy, but it kind of sums things up.  My father asked me how many questions I got wrong and still passed.  I told him “Since they don’t give you a breakdown, I’ll always think I got them all right.”  I can’t give the best advice about lab strategy.  As you can see, there were lots of dumb mistakes and missed chances.  I underestimated some sections and they paid me back in full.  But if nothing else, know that perseverance is the key to the lab.  Not giving up, not backing down, not letting yourself think for an instant that it isn’t possible.  Doubt is one of the biggest enemies of the CCIE hopeful.  Don’t let it cost you your chance at a number.

I’m Not A Name, I’m A Number!

You have no idea how long I’ve waited to see this little snippet in my email:

Three years.  Seven lab attempts.  Several close calls.  Lots of studying.  Lots of second guessing.  Lots of anger.  But in the end, the elation of those four letters makes it all worthwhile.

There are a lot of people to thank.  I’m going to call out three people in particular who are head and shoulders above the rest though.

First, my family and my beautiful wife Kristin especially.  Thank you for putting up with my odd schedule and my even odder behavior for the last three years.  Thank you for not letting me give up and making me keep my nose to the grindstone for this achievement.  I promise from this point forward that Daddy won’t have his nose buried in a configuration guide.  At least not too often.

Secondly, to my amazing boss, Mr. Mike Hibbs.  The man that never gave up on me.  Every time I flew back home with my head hung low and shame burning on my forehead, he told me to pick myself up off the ground and try again.  He footed the bill for me even when I didn’t think I could do it any more.  I owe you more than I can ever hope to repay in a lifetime.

And lastly (but certainly not least), to my mentor and friend Wes Williams.  Four years ago, Wes and I were having lunch and I told him that I was going to take the CCIE exam.  He paused and told me in his slow drawl, “Well, if you keep your head on straight and study hard, I think you’ll do it with no problem.”  Wes, you left us far too soon.  You saw the potential in me from the very start and never let me settle for second best.  I always said that if I passed the lab that I would owe my success to you.  I just wish you could have been here to see it.  This one’s for you, Wes.  I hope I made you proud.

There’s a lot more story for me to tell than I can hope to stuff into this blog post right now.  I plan on laying out some more of it after I’ve had time to come down off this emotional high.  I want to thank everyone for their encouragement and support for the last three years.  I knew that the only time I would have truly failed the CCIE was the day I decided to quit trying.  And now I never have to worry about that day.

From now on, CCIE #29213 belongs to me.

Configuring Cisco Unified Communications Manager and Unity Connection – Review

Voice engineering is a world apart from the run-of-the-mill routing and switching work most network rock stars do regularly.  Lots of browser screens, few opportunities for CLI work, and an ever-evolving interface make for interesting work even in the best of times.  Technology changes so quickly that people who have been out of the loop for more than a couple of years may find themselves adrift in a sea of confusion.

When the first edition of Configuring Cisco CallManager and Unity came out, it quickly became a go-to reference for voice engineers that wanted to learn all about Cisco’s preeminent call processing platform. Today, however, that volume is severely out of date, referencing CallManager 4.x and Unity 4.x, both long retired. With the changes that have been introduced since the move away from Windows-based platforms and Exchange, it was time to update the Cisco Press tome of voice knowledge. Not coincidentally, I give you Configuring Cisco Unified Communications Manager and Unity Connection, Volume Two.

Configuring Cisco Unified Communications Manager and Unity Connection

Title just rolls right off the tongue, doesn’t it?  Along with the name change from CallManager to Cisco Unified Communications Manager (CUCM), we get updates to the platform in the book. This volume focuses on CUCM version 8.x and Unity Connection version 8.0. There is also some coverage of Unity 8.0 as well, since those of you with strange curses may find yourselves running into it like a patch of poison ivy.

For those of you that are new to CUCM v8, or new to CUCM in general, this book is a wonderful resource that guides you step-by-step through the menu options and settings in CUCM.  There is very little discussion about voice theory or SIP proxy setup or Nyquist’s Theorem. Instead, the meat of the book tells you how to make CUCM sing, from esoteric Enterprise Service parameters to the confusing Calling Search Space (CSS) setup. It guides and teaches so that you can spend more time setting things up the right way and less time scratching your head. The style is simple and easy to follow and, unlike online documentation, doesn’t read like stereo instructions.

The second half of the book deals with Unity and Unity Connection. Setup, PBX integration, and even digital networking get their share of coverage. The instructions and features are presented generically so that they may apply to both platforms as necessary. Only where a feature is tied to a single platform does the book call it out specifically, such as the need to sprinkle holy water on Unity to make it boot up. Call Handler configuration gets a chapter as well, and I found the information there to be very good reference material for a feature that can become complicated quite fast.

Tom’s Take

If you are a new voice rock star that has a CUCM server to set up and no experience with the knobs and switches on the platform, go buy this book now. It will guide you through your first deployment much more gently than searching for hours through acres of documentation. For the grizzled veterans of CallManager 4.x who are just getting back into the game after years of therapy deprogramming all those Windows admin skills, this is also a must read. It will get you up to speed on new features like SUBSCRIBE CSS and new interface features.

For the voice rock stars that have been configuring CUCM through versions 5 and 6, the purchase of this book is a little less compelling. Many of these things are things we do every day or each time we set up CUCM, so it may feel like a bit of a rehashing. Still, I found some of the more trivia-oriented content, like explanations of Service parameters and less-used feature configuration, to be of great value. I’m going to toss this book into my voice bag and keep it handy for those times when I need to configure a Unity Interview Handler and I don’t have Internet access on site. Think of it more as a Physician’s Desk Reference than an Encyclopedia Britannica.

Disclaimer

Cisco Press provided an evaluation copy of this book.  At no time did they ask for, nor did they receive, any consideration in this review. The analysis and opinions presented here represent my views and mine alone.