Death to OEQs!

Just when you think things can’t get any more interesting, a little nugget of news slips out and makes your day fun.  An announcement about changes to the CCIE Security exam leaked out this morning and was quickly retracted, to be polished before being reissued tomorrow or the next day.  However, Natalie Timms, the CCIE Security Program Manager, confirmed in this thread that the changes were the removal of the Open Ended Questions (OEQs) and the addition of more hands-on configuration.  As soon as I saw this, my wheels started spinning.

Note: What follows is mostly conjecture based on opinions and conversations I’ve had with people in the industry.  Many of these facts are not confirmed as solid, but will be cited where appropriate.  Please don’t go telling people that my words are the gospel truth.  I don’t know any more than anyone else.

I think this movement is the beginning of the end of the OEQs.  They’ve been gone from the R&S lab for over a year now, and the Voice lab has done away with them as well.  In the case of the R&S lab, the new troubleshooting section stayed in place because it served the same purpose as the OEQs: a section that could be changed rapidly to vary the difficulty of the lab.  The Voice lab introduced troubleshooting into the lab itself, either by making you diagnose broken things in your equipment or by forcing you to debug errors and do things like copy them to text files, just as you would if you were going to forward the files to TAC.  Integrating troubleshooting gives Cisco a good gauge of the candidate’s abilities and more closely ties the exam to the real-world skills of a network enginee…rock star.

The remaining CCIE tracks (Wireless, Service Provider/Operations, Storage, and Security) still have OEQs attached to them.  That makes for an interesting briefing in the morning, when the proctor has to give three different sets of instructions depending on the initial setup of your lab.  Candidates hate the OEQs.  They are a trivia section at best.  People say they are easy, CCNA-to-CCNP-level questions that any CCIE candidate should be able to answer.  I found the lack of specificity in the old OEQs I took maddening in some cases, and the lack of proctor assistance was irritating.  In fact, the continued inclusion of OEQs on the other CCIE tracks has made them a little less appealing to me, should I find myself crazy enough to even think about attempting it all over again.

With the announcement, retraction, and eventual re-announcement of the removal of the OEQs from the Security track, I’ve got high hopes now.  I think Cisco has enough data based on their year of R&S and Voice troubleshooting to see it as a viable alternative to Trivial Pursuit: CCIE Edition.  I’ll bet that there is going to be a section similar to the Voice lab where faults are injected (or user-created) in the lab and you’ll be required to diagnose and perhaps log them in files on the desktop.  This makes the most sense, as some of the hardware can be emulated like the IOU images that run in the troubleshooting section but emulation of the specific ASICs and software on something like an ASA would be problematic at best.  By adding troubleshooting, the Security lab will start feeling more like a real-world scenario.

The Wireless track is due for a revamp in November.  Don’t be shocked to see the OEQs get stripped from it as well.  Wireless is a hard track with all the specific hardware required and would also lend itself well to Voice-style troubleshooting inside the lab exam.  The CCIE Storage exam is on its last legs and is most likely about to be replaced by a new CCIE track more focused around Cisco’s Unified Computing System (UCS), along with Nexus switching, Wide Area Application Services (WAAS), and Fibre Channel over Ethernet (FCoE) storage that will require the MDS switches from the old Storage lab.  This CCIE Data Center track (if that’s what it ends up being called) is probably one of the worst-kept ‘secrets’ in the CCIE world; I’ve had several people mention it to me, and a couple of candidates have even asked the proctors when the lab would be retooled to include it.  In the interest of complete fairness, the proctor’s comment was “No comment.”

That leaves Service Provider and Service Provider Operations as the only OEQ-enabled labs.  I doubt that Cisco will leave the OEQs there if it removes them from other tracks.  The SP lab recently received a refresh and the SPO lab is very new.  I think there will be an announcement very similar to the Security one that removes the OEQs, but rather than injecting faults into the lab, they may aim for a standalone troubleshooting section down the road, similar to the R&S lab.  This could be accomplished with the IOU images that are in use now for the R&S TS section.  Adding IOS-XR content would require something different, perhaps the mythical “Titanium” emulator for XR that I keep hearing about yet have never seen (much like IOU only a few months ago).  The addition of a real TS section would change the content drastically, though, so it would require six months’ notice before being implemented.  In the meantime, however, they could use an in-lab troubleshooting method just like the other tracks.



Thanks to Youssef El Fathi for pointing out that the SP lab has not had OEQs since the 3.0 revision early in 2011.  The thread confirming this from June 8th is HERE.  If that truly is the case, then I don’t see any reason why there should continue to be OEQs in any other tracks.


Tom’s Take

There you have it.  A road map for eliminating the OEQs and banishing them to the same circle of hell as ARCNet and MicroChannel buses.  While I can’t confirm any of my suspicions outside the semi-firm announcement of the removal of OEQs from the Security exam, it makes the most sense that Cisco is ready to implement this change track-wide in the lab.  OEQs take a lot of time to grade and are slightly subjective.  Troubleshooting is pretty easy to grade in comparison – it either works or it doesn’t.  By standardizing on troubleshooting instead of OEQs as the preferred rapid-change method of candidate testing, Cisco makes things a little more fair all around.  I plan on finding the CCIE program managers when I go to Cisco Live this year and asking them about upcoming changes to the tracks so that I can nail down what might be happening.  If they tell me that the OEQs really are going away, please don’t mistake my tears for sorrow.  They’ll be tears of unadulterated joy.

I’m going to say it again to avoid an international incident: This is all conjecture at this point.  If I turn out to be wrong, so be it.  However, I feel the time of the OEQs is over.  Don’t tell everyone on Groupstudy or OSL that this is the absolute truth until you get a confirmed press release from someone whose name ends in “”.


BYOD: High School Never Ends

There is a lot of buzz about the porting of applications to every conceivable platform.  Most of it can be traced back to a movement in the IT/user world known as Bring Your Own Device (BYOD): the idea that a user can bring in their own personal access device and still manage to perform their job functions.  I’m going to look at BYOD and why I think it’s more of the same stuff we’ve been dealing with since lunch period in high school.

BYOD isn’t a new concept.  Contractors and engineers have been doing it for years.  Greg Ferro and Chris Jones would much prefer bringing their own MacBooks to a customer’s site to get the job done.  Matthew Norwood would prefer to have just about anything other than the corporate dinosaur that he babies through boot up and shut down.  Even I have my tastes when it comes to laptops.  Recently, though, the explosion of smartphones and tablets has caused a shift toward more ubiquitous computing.  It now seems to be a bullet-point requirement that your software or hardware has a support app in a cloud app repository or the capability to be managed from a 3.5″ capacitive touch screen.  Battle lines are drawn around whether or not your software is visible on a Fruit Company Mobile Device or a Robot Branded Smarty Phone.  Users want to drag in any old tablet and expect to do their entire job function from a 7″ screen.

However, while BYOD is all about running software from any endpoint, the driving forces behind it aren’t quite as noble.  I think once I start describing how I see things, you’ll start noticing a few parallels, especially if you have teenagers.

– BYOD is about prestige.  Who usually starts asking about running an app on an iPad?  Well, besides the office Gadget Nerd that stood in line for 4 hours and ran out of the store screeching like a kid in a candy store?  Odds are, it’s the CxO that comes to you and informs you that they’ve just purchased a Galaxy Tablet and they would like it set up.  The device is gingerly handed to you to perform your IT voodoo on, all while the executive waits patiently.  Usually, there is some kind of interjection from them about how they got a good deal and how the drone at the store told them it had a lot of amazing features.  The CxO usually can’t wait to show it around after you’ve finished syncing their mail and calendar and pictures of their expensive dogs.  Wanna know why?  Because it’s a status symbol.  They want to show off all the things it can do to those that can’t get one.  Whether because the device is overpriced or unavailable from any supply chain, there are some people that revel in rubbing other people’s noses in opulence.  By showing off how their tablet or smartphone gets email and surfs the web, they are attempting to widen the IT class gap.  Sound like high school to you?  Air Jordans? Expensive blue jeans? Ringing any bells?  The same kind of people that liked to crow that their parents bought them a BMW in high school are the same ones that will gladly show off their iPad or Galaxy Tab solely for the purpose of snubbing you.  They couldn’t really care less about doing their job from it.

– BYOD is about entitlement.  I could go on and on about this one, but I’ll try to keep it on topic.  There seems to be a growing belief in the younger generation that you as a company owe them something for coming to work for you.  They want things like nap time or gold stars next to their names for doing something.  No, really.  This naturally extends to their choice of work device.  I’m going to pick on Mac users here because that particular device comes up more often than not, but it extends to Linux users and Windows users as well.  The “entitled” user thinks that you should change your entire network architecture to suit their particular situation.  Something like this:

User: I can’t get my mail.

Admin: You’re using the Fail Mail client.  We’re on Exchange.  You’ll need to use Outlook.

User: I’m not installing Office on my system!  Microsoft is a cold-hearted company that murders orphans in Antarctica.  Fail Mail donates $.25 of every shareware license to the West Pacific Tree Slug Preservation Society.  I want to use my mail client.

Admin: I guess you could use the webmail…

User: How about you use the Fail Mail Server instead?  They donate $2 of every purchase to fungus research.  I think it’s a much more capable server than dumb old Exchange anyway.

Admin: <facepalm>

I hope this doesn’t sound familiar.  One of the great joys of IT is telling users you aren’t going to reinvent the wheel just to mollify them.  However, in many cases the user demanding you change everything happens to sign your paycheck.  That tends to have the effect of ripping out a mail server or reprogramming a whole tool because it used (or didn’t use) Flash or HTML 5.

– BYOD is about never changing your perspective.  I have an iPad.  And an iPhone.  And a behemoth Lenovo w701 laptop.   And I use them all.  Often, I use them at the same time.  I see each as a very capable tool for what it’s designed to do.  I don’t read ebooks on my iPhone.  I don’t run virtual machines on my iPad.  And I don’t use my laptop for texting or phone calls.  Just like I don’t use screwdrivers like chisels or use a pipe wrench like a hammer.  However, there are some people that like picking up one device and never putting it down.  These people seem to believe that the world would be a more perfect place if they could sit in their chair and do their whole job from a touch screen.  They feel that moving to a laptop to type a blog post is a travesty.  Being forced to use a high-powered graphical desktop for CAD work is unthinkable.  I have to admit that I’ve tried to see things from their perspective.  I’ve tried to use my iPad to take notes and remotely administer servers.  Guess what?  I just couldn’t do it.  I’m a firm believer that tools should be used according to their design, rather than having a 56-in-one tool that does a lot of things poorly.

Tom’s Take

I think keeping your tools capable and portable is a very good thing.  I hate software that can only be run from a Windows 2000 server or needs a special hardware dongle to even start.  I love that tools are becoming web-enabled and can be used from any PC/Mac/toaster.  However, I also think that things need to be kept in perspective.  BYOD is a Charlie Foxtrot just waiting to happen if the motivations behind it aren’t honest and sincere.  Simply porting your management app to the App Store so the CxO can show off his new iPad while complaining that we need to scrap the company website because it uses Flash and no one will bother using their dumb old laptop ever again is really, really bad.  Give me a compelling reason to use your app, like a new intuitive interface or a remote capability I wouldn’t normally have.  Just putting your tablet app out so you can sound cool or fit in with the popular crowd won’t work any better than wearing parachute pants did in high school.  Except, this time you won’t get stuffed into a locker.  You’ll just lose my business.

Adrift In A Sea of Lulz

As I write this, it’s been about 24 hours since the hacking collective known as Lulzsec scuttled their ship and scattered to the four winds.  There’s been a lot of speculation as to what motivated the 50 days of hacking that has stirred up quite a bit of talk about exploiting security holes as well as what would cause the poster children of anti-sec hacking to disappear as quickly as they emerged.

Lulzsec emerged almost two months ago from the fires of the now-infamous Sony PSN hack.  It appears to have been formed by some of the Gn0sis people that hacked into the Gawker Media database and some other disaffected members of Anonymous.  After they popped up on the radar, they started posting a lot of supposedly secured information about all manner of things, from X-Factor contestant databases to FBI security contractors.  They also participated in other hacks, like taking the CIA’s public website offline for a few hours.  Most recently, they posted a dump of the Arizona Department of Public Safety servers and some 750,000 AT&T subscriber accounts.  Their activities have raised a lot of questions about perimeter security and probably cost a few security professionals their jobs.

To Lulzsec, this was all a game.  A giant F-you to the whole security community at large.  Their manifesto reads a lot like some teenagers I know.  They do what they want, how they want, when they want.  At first, there was no rhyme or reason to their attacks.  Later, they started talking about their “anti-sec” agenda, the idea that information shouldn’t be buried and needs to be disseminated by whatever means necessary.  Indeed, their anti-sec agenda also extended to the idea that people with inadequate security needed to be exposed and publicly embarrassed to resolve these issues.

Just as soon as they burst into the limelight, Lulzsec announced they were disbanding.  Theories abound as to the reason for their dissolution.  Did the feds get too close?  Was the lifting of the anonymous veil through the leaking of personal information the last straw?  Did they simply get bored?  Answers won’t be forthcoming from the members themselves.  They seem to have faded right back into the anonymity they spawned from.  I think the answer to what is going on probably lies somewhere in the middle of all these things.

The Lulzsec hacks appear to have mostly centered around SQL injection.  The time-honored tradition of exploiting databases with carefully crafted query strings continues to be quite popular even today.  I think Lulzsec used this attack vector to great success against Sony and a couple of other choice targets up front.  After their initial success, their patterns seemed to be haphazard.  I think this is due to the nature of using their one attack against a variety of sites rather than targeting specific ones.  It was a brute-force method of anarchy, kind of like using a screwdriver to do all your tool-related tasks.  It works really well for screwing, but not so well for hammering or sawing.  Once they managed to expose the FBI partner databases and take down the CIA’s small public-facing webserver, that brought significant attention from all angles, not typically something you want if you are trying to stay anonymous.  Then, other groups inside the scene started either getting jealous of the attention or decided to fight fire with fire.  That led to d0xing, the term used to describe the leaking of personal information that can be used to identify someone.  Between the public exposure and the looming investigation from some upset 3-letter agencies, I think that’s when the first members jumped off the Lulz Boat.  Rather than face what might be coming, they ducked out and headed back to the darkness that had protected them so well.  This has been somewhat confirmed in interviews with the publicly-known members.
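For illustration, the class of flaw they leaned on looks something like this (a generic Python/SQLite sketch of injection via string concatenation, not any of their actual exploits; the table, data, and payload here are all made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input is pasted straight into the SQL.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # the OR clause matches every row: [('hunter2',)]
print(lookup_safe(payload))    # no user literally named "x' OR '1'='1": []
```

The whole trick is that the unsafe version lets the input rewrite the query itself, which is why parameterized queries (or an ORM that uses them) shut this vector down.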

The remaining Lulzsec members then seemed to go on a recruitment drive.  They tried to bring more talent into the fold.  I don’t think this newer group was quite as determined or successful as the first, though.  That led to a slowdown in target penetration.  You might argue that they were releasing stuff right up to the end.  True…but all we know is that those sites were hacked; we don’t know when.  For all we know, AT&T could have been the second site they hacked.  AT&T was their ace in the hole.  If they were for real and ready to keep going for a long time to come, they would have released AT&T right away.  By putting it away and saving it for last, it appears they wanted one big splash before they were forgotten.  A vigorous, active Lulzsec would have been able to keep hitting bigger targets than AT&T.  After their success rate started dropping off, I think the remaining “old” members of Lulzsec probably did get bored.  Without new conquests to fuel their fame, the rush wasn’t there any more.  They decided to go out with a bang and quit while they were still ahead.  The remaining new recruits will probably go on to be folded into newer organizations that spring up in place of Lulzsec, the next breed of SQL injectors (or whatever comes next), just like the Lulz Boat appears to have sailed with many Gn0sis crew members on board.

Tom’s Take

The black hat in me cheered Lulzsec for what they’ve accomplished.  The white hat is appalled.  Again, the truth lies somewhere in the middle.  I look at Lulzsec like the Joker in The Dark Knight.  A group of anarchist hackers that just want to have fun and burn everything down around them.  No agenda, no statements, just exploitation for fun.  A group of chaotic neutral script kiddies.  However, the very limelight they sought burned them enough to force them back into the shadows.  The way I look at it, the key to being a successful hacker is to not get caught.  Don’t get famous, and definitely don’t draw attention to yourself.  Kevin Mitnick had to learn that the hard way.  Something tells me that more than one of the passengers of the good ship Lulz will learn it the same way sooner rather than later.

My Phone Number is AAA-BCDA

Anyone that’s used a phone knows that there are letters on the keypad that make it handy to spell out words for those not gifted with the ability to remember long strings of numbers.  It’s also handy for marketing, for instance 1-800-FLOWERS.  Those that still use T9 predictive texting from a digit keypad probably have the letter positions memorized by now.  But what you may not know is that there are actually four extra letters on a telephone dialpad.

Dual-Tone Multi-Frequency (DTMF) dialing is the modern way telephones signal the voice network over analog telephone lines.  Each keypress is a combination of two specific tones that correspond to the position of a key.  For instance, the ‘1’ key on a keypad is a combination of 697 Hz played in conjunction with 1209 Hz.  The ‘2’ key uses the same 697 Hz signal, but plays it with a 1336 Hz tone.  The ‘4’ key under the ‘1’ key uses a 770 Hz tone in conjunction with the 1209 Hz tone.  Each DTMF tone is a combination of one low-pitched (row) tone and one high-pitched (column) tone.  Normal telephone keypads are laid out like this:

         1209 Hz   1336 Hz   1477 Hz
697 Hz      1         2         3
770 Hz      4         5         6
852 Hz      7         8         9
941 Hz      *         0         #

You can click on each of those links to listen to the tone they make (Thanks Wikipedia!!!).

The military once used a special kind of phone system known as AutoVon (thanks to Matthew Norwood for the correction and Jason Schmidt for pointing it out as well).  This was a phone system designed to survive a nuclear attack.  One of the key differentiators of AutoVon, besides being hardened against the Russians, was the addition of another column of DTMF keys.  These allowed the person dialing the phone to find an open line quickly or, in the event of a full network, to boot off users who were on lower-priority calls.  The keys were denoted with the letters A-D and had functions with suspiciously familiar-sounding names: Flash Override (A), Flash (B), Immediate (C), and Priority (D). I’m sure most of you networking people out there know where those names are used in our little world.  Users that dialed a C before their number could boot off those on regular calls or on Priority calls in the event of line congestion.  Flash Override was reserved for use by the President of the United States, as it could boot off anyone on a call.  This same kind of preemption capability lives on in CUCM as Multilevel Precedence and Preemption (MLPP).  AutoVon was eventually replaced in the 1990s with a newer telephone network for use by the Defense Department.  However, the legacy of the additional keys that most of us have never seen lives on.

This is the above table, including the new A-D DTMF tones:

         1209 Hz   1336 Hz   1477 Hz   1633 Hz
697 Hz      1         2         3         A
770 Hz      4         5         6         B
852 Hz      7         8         9         C
941 Hz      *         0         #         D
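Just to make the row/column mapping concrete, the full four-by-four grid can be expressed as a tiny lookup (a Python sketch; the function name and structure are mine, the frequencies come straight from the table above):

```python
# DTMF key grid: each key plays one low-group (row) and one high-group
# (column) tone simultaneously.  The fourth, 1633 Hz column holds the
# AutoVon A-D keys that never made it onto consumer keypads.
LOW = (697, 770, 852, 941)        # row tones, in Hz
HIGH = (1209, 1336, 1477, 1633)   # column tones, in Hz
ROWS = ("123A", "456B", "789C", "*0#D")

def dtmf_pair(key):
    """Return the (low, high) tone pair in Hz for a keypad key."""
    for row_tone, row in zip(LOW, ROWS):
        col = row.find(key)
        if col != -1:
            return (row_tone, HIGH[col])
    raise ValueError(f"not a DTMF key: {key!r}")

print(dtmf_pair("1"))   # (697, 1209)
print(dtmf_pair("2"))   # (697, 1336)
print(dtmf_pair("A"))   # (697, 1633)
```

Note that ‘1’ and ‘A’ share the same 697 Hz row tone; only the high-group tone tells them apart, which is all the extra column really is.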

If you are a user of Cisco Unified Communications Manager Express (CUCME), you have access to the AutoVon A-D DTMF tones (from here on out, I’m going to call this “Army Dialing”).  The system can replicate the tones from these four keys.  You might say, “Cool.  What in the hell would I ever use this for?  No one can dial these numbers.”  Yep.  No one can dial these numbers from a regular phone keypad.  Think about it like this: you have access to a whole group of numbers that can only be dialed by the people you allow access.  The most popular use of this setup is for phone-to-phone intercoms.  By restricting the intercom number to an “Army Dial” number, no one can dial that intercom number by accident unless they have a button on their phone that speed dials the number.  Here’s an example:

CUCME(config)# ephone-dn 13
CUCME(config-ephone-dn)# number A100
CUCME(config-ephone-dn)# intercom A101 label "Networking Nerd"
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone-dn 14
CUCME(config-ephone-dn)# number A101
CUCME(config-ephone-dn)# intercom A100 label "Junior Admin"
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone 2
CUCME(config-ephone)# button 2:13
CUCME(config-ephone)# exit
CUCME(config)# ephone 3
CUCME(config-ephone)# button 2:14

This way, my intercom line can only be dialed from a phone with a speed dial button associated with the number.  I control who can call me (mwa ha ha…).  This could also be used for multicast paging directory numbers.  That way, only the designated phones have the ability to page and you can prevent unnecessary chatter on the speakers.
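The paging variation might look something like this (a sketch only; the DN tags, the B100 number, and the multicast address and port are placeholders I made up, so adjust them to fit your own dial plan):

CUCME(config)# ephone-dn 20
CUCME(config-ephone-dn)# number B100
CUCME(config-ephone-dn)# paging ip 239.1.1.1 port 2000
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone 2
CUCME(config-ephone)# paging-dn 20 multicast
CUCME(config-ephone)# exit
CUCME(config)# ephone 3
CUCME(config-ephone)# speed-dial 1 B100 label "All Page"

Here ephone 2 receives the multicast page, while ephone 3 is the only phone that can trigger it, since the Army Dial number B100 can’t be keyed in from a standard dialpad and only ephone 3 has a speed dial pointed at it.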

I’m sure if you put your mind to it, you could find all sorts of interesting applications for this kind of feature.

Happy Twitterversary To Me!

Today marks the one year anniversary of my first tweet on Twitter.  I’d sing the “Happy Birthday” song, but the royalties on that little gem would cost me a fortune.  Instead, I’m going to spend some time talking about why I think Twitter is so very useful for IT people.

I have always spent a lot of time reading blogs.  Great content in a concise, easy-to-digest format, especially when I was studying for my CCIE lab.  However, last year I noticed that some of my CCIE blogs weren’t being updated anymore, specifically CCIE Candidate and CCIE Pursuit.  I figured that CCIE Candidate wasn’t being updated quite as regularly anymore due to Ethan getting his number, so I decided to do a little digging.  Turns out Ethan had a new, non-CCIE-focused blog at PacketAttack, but also had an account on Twitter (@ecbanks).  Now, I had my misgivings about Twitter.  It was a microblogging site dedicated to people telling me what they had for lunch or when they were taking a constitutional.  Everything I had previously seen of Twitter led me to believe that it wasn’t exactly a fun place to be.

However, after reading through Ethan’s tweets, I realized that there was a lot of good information and discussion being posted there.  I searched around and found a couple of other good tweet streams, including one from a real-life friend that I didn’t get to see much, Brad Stone (@bestone).  After mulling the decision back and forth for a day or two, I decided to take the plunge.  I tried several names before I finally came up with one that I thought personified both my desire for technical discussion and my outlook on things: @networkingnerd.  Once I signed up for Twitter, I started following a few people that I had found, like Ethan, Brad, and Narbik Kocharians (@narbikk).  I knew that the only way I could get more involved with what was going on was to start talking and see if anyone was listening.  At first, it felt like being one of those guys in the park standing on a soapbox with a bullhorn, shouting for all the world to hear while no one really listens.  Once I figured out how to address someone with a tweet to get their attention, the followers started taking off a little more.  Part of the key for me was staying focused on networking and tech and injecting a little snarkiness and humor along the way (something that would pay off later when I started blogging).

Another part of the reason I got involved with Twitter was to feel like a larger part of the IT community.  Last year, my annual sojourn to Cisco Live was coming up fast, and Cisco had been releasing a lot of good information and tips for Cisco Live attendees on Twitter.  Now, when I go to Cisco Live, I have a group of 5-6 people that I usually hang out with and do things like take the Cisco Challenge in the World of Solutions or heckle the bands at the Customer Appreciation Event.  However, thanks to Twitter, this year I’ve got 50-60 people that I’m going to be hanging out with and meeting for the first time in real life.  Twitter also helped me get more information about events like Tech Field Day, which I had no idea about.  Later, Twitter helped me get my first invite to Tech Field Day, both through my involvement in the community and Twitter’s gateway effect that drove me to start blogging out my longer thoughts (like this one).

Twitter isn’t for everyone.  Some people have a hard time keeping up with the firehose of information that you get blasted with.  Others have a really hard time condensing thoughts down to less than 140 characters.  Still others never really find the right group to get involved in and write Twitter off as stupid or childish.  My counter to that kind of thinking is “You get out what you put into it.”  I search out new and fun people to follow all the time.  I’m not afraid to unfollow someone if their tweets become pointless and overly distracting.  Twitter, for me, is about discussion.  Helping answer questions, learning about industry news before my bosses do, even railing against hated protocols.  All of these things have increased the payoff I have received from Twitter in the last year.

At the same time, I make sure to respect the wishes of those that follow me.  I tend to relegate my non-IT related posts to something like Facebook.  I may post personal things on Twitter from time to time, but I tend to think of them more as little details about my life that help fill in the dark spots about me.  I don’t post about sports, even though my Facebook wall in the fall is a virtual commentary on college football every week. I don’t let applications tweet things for me if I can help it.  I don’t link my 4square account or let an unfollower app shout things no one else is interested in.  I have total control over my Twitter account to be sure that those that take time out of their schedules to listen to what I have to say will hear my words and not those of some robot.  Those that let their Twitter streams become a wasteland of contest entries and “I just unfollowed X people that didn’t follow me back” updates from applications usually fall by the wayside sooner or later.

Tom’s Take

People I know in real life make fun of me when I tell them I’m on Twitter.  They crack jokes about updates from the water closet or useless junk spamming my Twitter feed.  However, when the joking stops and they ask me what’s so compelling about it, I tell them “On Twitter, I learn things I actually WANT to know.”  My Facebook feed is a bunch of game updates and garbage about stuff I really don’t care to know most of the time.  Until my Twitter followers started friending me on Facebook, no one on Facebook knew about the depths of my nerdiness.  On Twitter, I feel free to talk about things like BGP or NAT without fear that I’m going to be deluged with comments from people who are hopelessly lost or would rather talk about the Farmville animals.  On Twitter, I’m free to indulge myself.  And the community that I’ve become a part of helps me develop and become a better person.  Without Twitter, I would never have been able to find so many people across the world that share my interests.  I never would have been pushed to increase the depth of my knowledge.  Dare I say it, I probably wouldn’t have been driven to get my CCIE nearly as much as I was thanks to the help of my Twitter friends.  In short, I’m glad I’ve had my first year on Twitter be as successful as it has been.  Here’s to many more.

Friday (+1) Links – 6/18/2011

So…I might have missed a Friday link post or two.  To be honest, I was so bogged down in last-minute cramming for the CCIE lab exam that I didn’t look up to figure out what day it was.  Thankfully, some interesting things have happened in that time, so I’ve got a few of them to share:

Cisco Expands UC Virtualization Support To Add HP and IBM 

Until recently, Cisco customers were required to use the Unified Computing System (UCS) platform when running Unified Communications (UC) applications in a virtual environment. On June 7th Cisco introduced a new support model called “Specification-Based Hardware Support“. With this announcement Cisco widens virtualized platform support to include IBM and HP.

For those that constantly complained that virtualizing CUCM/CUC was only possible on Cisco UCS, here you go.  There are a few supported platforms from IBM and HP, but take care that your whitebox server probably isn’t going to ever be supported.

Screw 140 Characters: 32,000 Characters on How to Fix RIM and Blackberry 

Please note that since we wrote this for a class we had some specific items we needed to include, such as specific financial profitability targets for our recommendations, which would otherwise seem pretty odd in a blog post like this.

Good paper outlining the background of RIM and the troubles they’re going through right now.  While I don’t know if RIM can right this sinking ship any time soon, it appears that some people believe that RIM still has a chance to stay relevant.

Stuxnet Deconstructed Shows One Scary Virus 

Ready to shake in your shoes? This video breaking down how Stuxnet works and where it could go next is flat out frightening. (And if this wasn’t a government program, I’ll eat a centrifuge.)

Not surprised in the least.  This is the 50,000-foot overview of Stuxnet with some fancy infographic stuff thrown in.  Great if you’ve been wondering about Stuxnet.  Then head over here and read my ruminations about it.

An Outsider’s View of Junosphere

It’s no secret that learning a vendor’s equipment takes lots and lots of time at the command line interface (CLI).  You can spend all the time you want poring over training manuals and reference documentation, but until you get some “stick time” with the phosphors of a console screen, it’s probably not going to stick.  When I was studying for my CCIE R&S, I spent a lot of time using GNS3, a popular GUI for configuring Dynamips, the Cisco IOS simulator developed by the community.  There was no way I would be able to afford the equipment to replicate the lab topologies, as my training budget wasn’t very forgiving outside the test costs, and any equipment I did manage to scrounge up usually went into production soon after that.  GNS3 afforded me the opportunity to create my own lab environments to play with protocols and configurations.  I’d say 75-80% of my lab work for the CCIE was done on GNS3.  The only things I couldn’t test were hardware-specific configurations, like the QoS found on Catalyst switches, or things that caused massive processor usage, like configuring NTP on more than two routers.  I would have killed to have had access to something a little more stable.

Cisco recently released a virtual router offering based around IOS-on-Unix (IOU), a formerly-internal testing tool that was leaked and cracked for use by non-Cisco people.  The official IOU simulation from Cisco revolves around their training material, so using it to set up your own configurations is very difficult.  Juniper Networks, on the other hand, has decided to release their own emulated OS environment built around their own hardware operating system, Junos.  This product is called Junosphere.  I was recently lucky enough to take part in a Packet Pushers episode where we talked with some of the minds behind Junosphere.  What follows here are my thoughts about the product, based on that podcast and conversations I’ve had with people in the industry.

Junosphere is a cloud-based emulation platform being offered by Juniper for the purpose of building a lab environment for testing or education.  The actual hardware being emulated inside Junosphere is courtesy of VJX, a virtual Junos instance that allows you to see the routing and security features of the product.  According to this very thorough Q&A from Chris Jones, VJX is not simply a hacked version of Junos running in a VM.  Instead, it is fully supported release-track code that simply runs on virtual hardware instead of something with blinking lights.  This opens up all sorts of interesting possibilities down the road, much like Arista Networks’ vEOS virtualized router.  VJX evolved out of code that Juniper developers originally used to test the OS itself, so it has strong roots in the ability to emulate the Junos environment.  Riding on top of VJX is a web interface that allows you to drag-and-drop network topologies to create testing environments, as well as the ability to load preset configurations, such as those that you might get from Juniper to coincide with their training materials.  To reference this to something people might be more familiar with, VJX is like Dynamips, and the Junosphere lab configuration program is more like GNS3.
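To give a feel for what you’d actually be typing into one of these VJX instances, here’s a minimal sketch of a Junos configuration in the set-command style: bring up an interface and run OSPF on it.  The interface names and addresses are made up for the example, not anything specific to Junosphere.

```
# Hypothetical addressing -- bring up an interface and enable OSPF on it
set interfaces ge-0/0/0 unit 0 family inet address 10.0.0.1/30
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
# Junos changes don't take effect until you commit the candidate config
commit
```

It’s exactly this kind of low-stakes, tear-it-down-and-rebuild-it practice that an emulated environment is good for.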

Junosphere can be purchased from a Juniper partner or directly from Juniper, just like any other Juniper product.  The reservation system is currently set up in such a way as to allot 24-hour blocks of time for Junosphere use.  Note that those aren’t rack tokens or split into 8-hour sessions.  You get 24 continuous hours of access per SKU purchase.  Right now, the target audience for Junosphere seems to be the university/academic environment.  However, I expect that Juniper will start looking at other markets once they’ve moved out of the early launch phase of their product.  I’m very much aware that this is all very early in the life cycle of Junosphere and emulated environments, so I’m making sure to temper my feelings with a bit of reservation.

As it exists right now, Junosphere would be a great option for the student wanting to learn Junos for the first time in a university or trade school type of setting.  By having continuous access to the router environments, these schools can add the cost of Junosphere rentals onto the student’s tuition costs and allow them 24-hour access to the router pods for flexible study times.  For self-study oriented people like me, this first iteration is less compelling.  I tend to study at odd hours of the night and whenever I have a free moment, so 24-hour access isn’t nearly as important to me as having blocks of 4 or 8 hours might be.  I understand the reasons behind Juniper’s decision to offer the time the way they have.  By offering 24-hour blocks, they can work out the kinks of VJX being offered to end users that might not be familiar with the quirks of emulated environments, unlike the developers that were the previous user base for the product.

Tom’s Take

I know that I probably need to learn Junos at some point in the near future.  It makes all the sense in the world to try and pick it up in case I find myself staring at an SRX in the future.  With emulated OS environments quickly becoming the norm, I think that Junosphere has a great start on providing a very important service.  As I said on Packet Pushers, to make it more valuable to me, it’s going to need to be something I can use on my local machine, à la GNS3 or IOU.  That way, I can fire it up as needed to test things or to make sure I remember the commands to configure IS-IS.  Giving me the power to use it without the necessity of being connected to the Internet or needing to reserve timeslots on a virtual rack is the entire reason behind emulating the software in the first place.  I know that Junosphere is still in its infancy when it comes to features and target audiences.  I’m holding my final judgment of the product until we get to the “run” phase of the traditional “crawl, walk, run” mentality of service introduction.  It helps to think about Junosphere as a 1.0 product.  Once we get the version numbers up a little higher, I hope that Juniper will have delivered a product that will enable me to learn more about their offerings.
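Speaking of remembering the commands to configure IS-IS: that’s a perfect example of the quick sanity check I’d want an on-demand lab for.  For the record, a minimal Junos IS-IS config looks roughly like this (the NET address and interface names here are invented for illustration):

```
# Illustrative NET address on loopback; adjust for your own topology
set interfaces lo0 unit 0 family iso address 49.0001.0100.0000.0001.00
# Enable the ISO family on the transit interface so IS-IS PDUs can flow
set interfaces ge-0/0/0 unit 0 family iso
# Run IS-IS on both the loopback and the transit interface
set protocols isis interface lo0.0
set protocols isis interface ge-0/0/0.0
```

Four or five lines, but exactly the kind of thing that evaporates from memory if you can’t spin up a router to type it into now and then.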

For more information on Junosphere, check out the Junosphere information page at