Adrift In A Sea of Lulz

As I write this, it’s been about 24 hours since the hacking collective known as Lulzsec scuttled their ship and scattered to the four winds.  There’s been a lot of speculation about what motivated their 50-day hacking spree, which stirred up quite a bit of talk about exploiting security holes, as well as about what would cause the poster children of anti-sec hacking to disappear as quickly as they emerged.

Lulzsec emerged almost two months ago from the fires of the now-infamous Sony PSN hack.  It appears to have been formed by some of the Gn0sis people that hacked into the Gawker Media database and some other disaffected members of Anonymous.  After they popped up on the radar, they started posting a lot of supposedly secured information about all manner of things, from X-Factor contestant databases to FBI security contractors.  They also participated in other hacks, like taking cia.gov offline for a few hours.  Most recently, they posted a dump of the Arizona Department of Public Safety servers and some 750,000 AT&T subscriber accounts.  Their activities have raised a lot of questions about perimeter security and probably cost a few security professionals their jobs.

To Lulzsec, this was all a game.  A giant F-you to the security community at large.  Their manifesto reads a lot like some teenagers I know.  They do what they want, how they want, when they want.  At first, there was no rhyme or reason to their attacks.  Later, they started talking about their “anti-sec” agenda, the idea that information shouldn’t be buried and needs to be disseminated by whatever means necessary.  Indeed, their anti-sec agenda also extended to the idea that people with inadequate security needed to be exposed and publicly embarrassed to force them to fix these issues.

Almost as soon as they burst into the limelight, Lulzsec announced they were disbanding.  Theories abound as to the reason for their dissolution.  Did the feds get too close?  Was the lifting of the anonymous veil through leaked personal information the last straw?  Did they simply get bored?  Answers won’t be forthcoming from the members themselves.  They seem to have faded right back into the anonymity they spawned from.  I think the answer probably lies somewhere in the middle of all these things.

The Lulzsec hacks appear to have mostly centered on SQL injection.  The time-honored tradition of exploiting databases with carefully crafted query strings continues to be quite popular even today.  I think Lulzsec used this attack vector to great success against Sony and a couple of other choice targets up front.  After their initial success, their patterns seemed to be haphazard.  I think this is due to the nature of using their one attack against a variety of sites rather than targeting specific ones.  It was a brute force method of anarchy, kind of like using a screwdriver for all your tool-related tasks.  It works really well for driving screws, but not so well for hammering or sawing.  Once they managed to expose the FBI partner databases and take down the CIA’s small public-facing webserver, that brought significant attention from all angles, not typically something you want if you are trying to stay anonymous.  Then, other groups inside the scene either got jealous of the attention or decided to fight fire with fire.  That led to d0xing, the term used to describe the leaking of personal information that can be used to identify someone.  Between the public exposure and the looming investigation from some upset three-letter agencies, I think this is where the first members jumped off the Lulz Boat.  Rather than face what might be coming, they ducked out and headed back to the darkness that had protected them so well.  This has been somewhat confirmed in interviews with the publicly-known members.
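As a concrete illustration of the technique itself (not of any actual Lulzsec exploit), here's a minimal sketch using Python's built-in sqlite3 module.  The table, the credentials, and the payload are all made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Building the query by string interpolation lets attacker input
    # rewrite the query itself -- the classic injection mistake.
    q = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(q).fetchone() is not None

def login_safe(name, password):
    # Parameterized queries keep user input as data, never as SQL.
    q = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(q, (name, password)).fetchone() is not None

# The time-honored payload: a quote that closes the string, plus a
# tautology that makes the WHERE clause match every row.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True -- authentication bypassed
print(login_safe("alice", payload))        # False -- payload treated as data
```

The fix has been known for years, which is exactly why getting caught without it is so embarrassing.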

The remaining Lulzsec members then seemed to have gone on a recruitment drive.  They tried to bring more talent into the fold.  I don’t think this newer group was quite as determined or successful as the first, though.  That led to a slowdown in target penetration.  You might argue that they were releasing stuff right up to the end.  True…but all we know is that those sites were hacked; we don’t know when.  For all we know, AT&T could have been the second site they hacked.  AT&T was their ace in the hole.  If they were for real and ready to keep going for a long time to come, they would have released AT&T right away.  By putting it away and saving it for last, it appears they wanted one big splash before they were forgotten.  A vigorous, active Lulzsec would have been able to keep hitting bigger targets than AT&T.  After their success rate started dropping off, I think the remaining “old” members of Lulzsec probably did get bored.  Without new conquests to fuel their fame, the rush wasn’t there any more.  They decided to go out with a bang and quit while they were still ahead.  The remaining new recruits will probably be folded into newer organizations that spring up in place of Lulzsec, the new breed of SQL injectors (or whatever is next), just like the Lulz Boat appears to have sailed with many Gn0sis crew members on board.

Tom’s Take

The black hat in me cheered Lulzsec for what they’ve accomplished.  The white hat is appalled.  Again, the truth lies somewhere in the middle.  I look at Lulzsec like the Joker in The Dark Knight.  A group of anarchist hackers that just want to have fun and burn everything down around them.  No agenda, no statements, just exploitation for fun.  A group of chaotic neutral script kiddies.  However, the very limelight they sought burned them enough to force them back into the shadows.  The way I look at it, the key to being a successful hacker is to not get caught.  Don’t get famous, and definitely don’t draw attention to yourself.  Kevin Mitnick had to learn that the hard way.  Something tells me that more than one of the passengers of the good ship Lulz will learn it the same way sooner rather than later.

My Phone Number is AAA-BCDA

Anyone that’s used a phone knows that there are letters on the keypad that make it handy to spell out words for those not gifted with the ability to remember long strings of numbers.  It’s also handy for marketing, for instance 1-800-FLOWERS.  Those that still use T9 predictive texting from a digit keypad probably have the letter positions memorized by now.  But what you may not know is that there are actually four more letters defined for the telephone dialpad.

Dual-Tone Multi-Frequency (DTMF) dialing is the modern way telephones signal the voice network over analog telephone lines.  Each keypress is a combination of two specific tones that correspond to the position of a key: one tone for its row and one for its column.  For instance, the ‘1’ key on a keypad is a 697Hz tone played in conjunction with a 1209Hz tone.  The ‘2’ key uses the same 697Hz signal, but plays it with a 1336Hz tone.  The ‘4’ key under the ‘1’ key uses a 770Hz tone in conjunction with the same 1209Hz tone.  Every DTMF tone is thus a combination of one low-pitched (row) tone and one high-pitched (column) tone.  Normal telephone keypads are laid out like this:

         1209 Hz   1336 Hz   1477 Hz
697 Hz      1         2         3
770 Hz      4         5         6
852 Hz      7         8         9
941 Hz      *         0         #

Wikipedia has audio samples of each of those tones if you want to hear what they sound like (Thanks Wikipedia!!!).

The military once used a special kind of phone system known as AutoVon (thanks to Matthew Norwood for the correction and Jason Schmidt for pointing it out as well).  This was a phone system designed to survive a nuclear attack.  One of the key differentiators of AutoVon, besides being hardened against the Russians, was the addition of another column of DTMF keys.  These allowed the person dialing the phone to find an open line quickly or, in the event of a full network, to boot off users that were on lower-priority calls.  The keys were denoted with the letters A-D and had functions with suspiciously familiar-sounding names: Flash Override (A), Flash (B), Immediate (C), and Priority (D).  I’m sure most of you networking people out there know where those names are used in our little world.  Users that dialed a C before their number could boot off those on regular calls or on Priority calls in the event of line congestion.  Flash Override was reserved for use by the President of the United States, as it could boot off anyone on a call.  This same kind of preemption capability lives on in CUCM as Multilevel Precedence and Preemption (MLPP).  AutoVon was eventually replaced in the 1990s with a newer telephone network for use by the Defense Department.  However, the legacy of the additional keys that most of us have never seen lives on.
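The preemption rule itself is simple enough to sketch in a few lines.  This is my own illustration, not actual AutoVon or CUCM logic; the "ROUTINE" level for an ordinary, unmarked call is my label, while the other four come straight from the keypad:

```python
# AutoVon/MLPP precedence levels, highest first.
PRECEDENCE = ["FLASH_OVERRIDE", "FLASH", "IMMEDIATE", "PRIORITY", "ROUTINE"]

def can_preempt(new_call, existing_call):
    """True if new_call outranks existing_call and may boot it off a busy line."""
    return PRECEDENCE.index(new_call) < PRECEDENCE.index(existing_call)

print(can_preempt("IMMEDIATE", "ROUTINE"))   # dialing 'C' boots a regular call
print(can_preempt("IMMEDIATE", "PRIORITY"))  # ...or a Priority call
print(can_preempt("PRIORITY", "FLASH"))      # a Priority call can't touch Flash
```

Equal precedence never preempts; only a strictly higher level clears the line, which is why Flash Override at the very top could boot absolutely anyone.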

Here is the table above again, this time including the A-D DTMF tones:

         1209 Hz   1336 Hz   1477 Hz   1633 Hz
697 Hz      1         2         3         A
770 Hz      4         5         6         B
852 Hz      7         8         9         C
941 Hz      *         0         #         D
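If you want to hear these tones yourself, they're easy to synthesize: each key is just the sum of its row sine wave and its column sine wave.  Here's a quick sketch in Python (the frequency table comes straight from above; the 8kHz sample rate and 100ms duration are arbitrary choices of mine):

```python
import math

# DTMF layout, including the four AutoVon "Army Dial" keys (A-D).
LOW = [697, 770, 852, 941]        # row (low-group) frequencies in Hz
HIGH = [1209, 1336, 1477, 1633]   # column (high-group) frequencies in Hz
KEYS = ["123A", "456B", "789C", "*0#D"]

def dtmf_freqs(key):
    """Return the (low, high) frequency pair for a dialpad key."""
    for low, labels in zip(LOW, KEYS):
        col = labels.find(key)
        if col != -1:
            return low, HIGH[col]
    raise ValueError(f"not a DTMF key: {key!r}")

def dtmf_samples(key, rate=8000, ms=100):
    """Synthesize one key's tone: the sum of its two sine waves."""
    low, high = dtmf_freqs(key)
    n = rate * ms // 1000
    return [0.5 * math.sin(2 * math.pi * low * t / rate)
            + 0.5 * math.sin(2 * math.pi * high * t / rate)
            for t in range(n)]

print(dtmf_freqs("1"))   # (697, 1209)
print(dtmf_freqs("A"))   # (697, 1633)
```

Feed those samples to any audio library and you'll hear exactly what your phone sends down the line.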

If you are a user of Cisco Unified Communications Manager Express (CUCME), you have access to the AutoVon A-D DTMF tones (from here on out, I’m going to call this “Army Dialing”).  The system can replicate the tones from these four keys.  You might say, “Cool.  What in the hell would I ever use this for?  No one can dial these numbers.”  Yep.  No one can dial these numbers from a regular phone keypad.  Think about it like this: you have access to a whole group of numbers that can only be dialed by the people you allow access.  The most popular use of this setup is for phone-to-phone intercoms.  By restricting the intercom number to an “Army Dial” number, no one can dial that intercom number by accident unless they have a button on their phone that speed-dials the number.  Here’s an example:

CUCME(config)# ephone-dn 13
CUCME(config-ephone-dn)# number A100
CUCME(config-ephone-dn)# intercom A101 label "Networking Nerd"
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone-dn 14
CUCME(config-ephone-dn)# number A101
CUCME(config-ephone-dn)# intercom A100 label "Junior Admin"
CUCME(config-ephone-dn)# exit
CUCME(config)# ephone 2
CUCME(config-ephone)# button 2:13
CUCME(config-ephone)# exit
CUCME(config)# ephone 3
CUCME(config-ephone)# button 2:14

This way, my intercom line can only be dialed from a phone with a speed dial button associated with the number.  I control who can call me (mwa ha ha…).  This could also be used for multicast paging directory numbers.  That way, only the designated phones have the ability to page and you can prevent unnecessary chatter on the speakers.

I’m sure if you put your mind to it, you could find all sorts of interesting applications for this kind of feature.

Happy Twitterversary To Me!

Today marks the one year anniversary of my first tweet on Twitter.  I’d sing the “Happy Birthday” song, but the royalties on that little gem would cost me a fortune.  Instead, I’m going to spend some time talking about why I think Twitter is so very useful for IT people.

I have always spent a lot of time reading blogs.  Great content in a concise, easy-to-digest format.  That was especially true when I started studying for my CCIE lab.  However, last year I noticed that some of my CCIE blogs weren’t being updated anymore, specifically CCIE Candidate and CCIE Pursuit.  I figured that CCIE Candidate wasn’t being updated quite as regularly anymore due to Ethan getting his number, so I decided to do a little digging.  Turns out Ethan had a new, non-CCIE-focused blog at PacketAttack, but also had an account on Twitter (@ecbanks).  Now, I had my misgivings about Twitter.  It was a microblogging site dedicated to people telling me what they had for lunch or when they were taking a constitutional.  Everything I had previously seen of Twitter led me to believe that it wasn’t exactly a fun place to be.

However, after reading through Ethan’s tweets, I realized that there was a lot of good information and discussion being posted there.  I searched around and found a couple of other good tweet streams, including one from a real-life friend that I didn’t get to see much, Brad Stone (@bestone).  After mulling the decision back and forth for a day or two, I decided to take the plunge.  I tried several names before I finally came up with one that I thought personified both my desire for technical discussion and my outlook on things, @networkingnerd.  Once I signed up for Twitter, I started following a few people that I had found, like Ethan, Brad, and Narbik Kocharians (@narbikk).  I knew that the only way I could get more involved with what was going on was to start talking and see if anyone was listening.  At first, it felt like being one of those guys in the park standing on a soapbox with a bullhorn, shouting for all the world to hear but with no one really listening.  Once I figured out how to address someone with a tweet to get their attention, the followers started taking off a little more.  Part of the key for me was staying focused on networking and tech and injecting a little snarkiness and humor along the way (something that would pay off later when I started blogging).

Another part of the reason I got involved with Twitter was to feel like a larger part of the IT community.  Last year, my annual sojourn to Cisco Live was coming up fast, and Cisco had been releasing a lot of good information and tips for Cisco Live attendees on Twitter.  Now, when I go to Cisco Live, I have a group of 5-6 people that I usually hang out with and do things like take the Cisco Challenge in the World of Solutions or heckle the bands at the Customer Appreciation Event.  However, thanks to Twitter this year I’ve got 50-60 people that I’m going to be hanging out with and meeting for the first time in real life.  Twitter also helped me get more information about events like Tech Field Day, which I had no idea about.  Later, Twitter helped me get my first invite to Tech Field Day, both through my involvement in the community and Twitter’s gateway effect that drove me to start blogging out my longer thoughts (like this one).

Twitter isn’t for everyone.  Some people have a hard time keeping up with the firehose of information that you get blasted with.  Others have a really hard time condensing thoughts down to less than 140 characters.  Still others never really find the right group to get involved in and write Twitter off as stupid or childish.  My counter to that kind of thinking is “You get out what you put into it.”  I search out new and fun people to follow all the time.  I’m not afraid to unfollow someone if their tweets become pointless and overly distracting.  Twitter, for me, is about discussion.  Helping answer questions, learning about industry news before my bosses do, even railing against hated protocols.  All of these things have increased the payoff I have received from Twitter in the last year.

At the same time, I make sure to respect the wishes of those that follow me.  I tend to relegate my non-IT related posts to something like Facebook.  I may post personal things on Twitter from time to time, but I tend to think of them more as little details about my life that help fill in the dark spots about me.  I don’t post about sports, even though my Facebook wall in the fall is a virtual commentary on college football every week. I don’t let applications tweet things for me if I can help it.  I don’t link my 4square account or let an unfollower app shout things no one else is interested in.  I have total control over my Twitter account to be sure that those that take time out of their schedules to listen to what I have to say will hear my words and not those of some robot.  Those that let their Twitter streams become a wasteland of contest entries and “I just unfollowed X people that didn’t follow me back” updates from applications usually fall by the wayside sooner or later.

Tom’s Take

People I know in real life make fun of me when I tell them I’m on Twitter.  They crack jokes about updates from the water closet or useless junk spamming my Twitter feed.  However, when the joking stops and they ask me what’s so compelling about it, I tell them “On Twitter, I learn things I actually WANT to know.”  My Facebook feed is a bunch of game updates and garbage about stuff I really don’t care to know most of the time.  Until my Twitter followers started friending me on Facebook, no one on Facebook knew about the depths of my nerdiness.  On Twitter, I feel free to talk about things like BGP or NAT without fear that I’m going to be deluged with comments from people who are hopelessly lost or would rather talk about the Farmville animals.  On Twitter, I’m free to indulge myself.  And the community that I’ve become a part of helps me develop and become a better person.  Without Twitter, I would never have been able to find so many people across the world that share my interests.  I never would have been pushed to increase the depth of my knowledge.  Dare I say it, I probably wouldn’t have been driven to get my CCIE nearly as much as I was thanks to the help of my Twitter friends.  In short, I’m glad I’ve had my first year on Twitter be as successful as it has been.  Here’s to many more.

Friday (+1) Links – 6/18/2011

So…I might have missed a Friday link post or two.  To be honest, I was so bogged down in last-minute cramming for the CCIE lab exam that I didn’t look up to figure out what day it was.  Thankfully, some interesting things have happened in that time, so I’ve got a few things to share:

Cisco Expands UC Virtualization Support To Add HP and IBM 

Until recently, Cisco customers were required to use the Unified Computing System (UCS) platform when running Unified Communications (UC) applications in a virtual environment. On June 7th Cisco introduced a new support model called “Specification-Based Hardware Support“. With this announcement Cisco widens virtualized platform support to include IBM and HP.

For those that constantly complained that virtualizing CUCM/CUC was only possible on Cisco UCS, here you go.  There are a few supported platforms from IBM and HP, but be aware that your whitebox server probably isn’t ever going to be supported.

Screw 140 Characters: 32,000 Characters on How to Fix RIM and Blackberry 

Please note that since we wrote this for a class we had some specific items we needed to include, such as specific financial profitability targets for our recommendations, which would otherwise seem pretty odd in a blog post like this.

Good paper outlining the background of RIM and the troubles they’re going through right now.  While I don’t know if RIM can right this sinking ship any time soon, it appears that some people believe that RIM still has a chance to stay relevant.

Stuxnet Deconstructed Shows One Scary Virus 

Ready to shake in your shoes? This video breaking down how Stuxnet works and where it could go next is flat out frightening. (And if this wasn’t a government program, I’ll eat a centrifuge.)

Not surprised in the least.  This is the 50,000 foot overview of Stuxnet with some fancy infographic stuff thrown in.  Great if you’ve been wondering about Stuxnet.  Then head over here and read my ruminations about it.

An Outsider’s View of Junosphere

It’s no secret that learning a vendor’s equipment takes lots and lots of time at the command line interface (CLI).  You can spend all the time you want poring over training manuals and reference documentation, but until you get some “stick time” in front of the phosphors of a console screen, it’s probably not going to stick.  When I was studying for my CCIE R&S, I spent a lot of time using GNS3, a popular GUI for configuring Dynamips, the Cisco IOS simulator developed by the community.  There was no way I would be able to afford the equipment to replicate the lab topologies, as my training budget wasn’t very forgiving outside the test costs, and any equipment I did manage to scrounge up usually went into production soon after that.  GNS3 afforded me the opportunity to create my own lab environments to play with protocols and configurations.  I’d say 75-80% of my lab work for the CCIE was done on GNS3.  The only things I couldn’t test were hardware-specific configurations, like the QoS found on Catalyst switches, or things that caused massive processor usage, like configuring NTP on more than two routers.  I would have killed to have had access to something a little more stable.

Cisco recently released a virtual router offering based around IOS-on-Unix (IOU), a formerly internal testing tool that was leaked and cracked for use by non-Cisco people.  The official IOU simulation from Cisco revolves around their training material, so using it to set up your own configurations is very difficult.  Juniper Networks, on the other hand, has decided to release an emulated OS environment built around their own hardware operating system, Junos.  This product is called Junosphere.  I was recently lucky enough to take part in a Packet Pushers episode where we talked with some of the minds behind Junosphere.  What follows here are my thoughts about the product, based on this podcast and some people in the industry that I’ve talked to.

Junosphere is a cloud-based emulation platform being offered by Juniper for the purpose of building a lab environment for testing or education purposes.  The actual hardware being emulated inside Junosphere is courtesy of VJX, a virtual Junos instance that allows you to see the routing and security features of the product.  According to this very thorough Q&A from Chris Jones, VJX is not simply a hacked version of Junos running in a VM.  Instead, it is fully supported release-train code that simply runs on virtual hardware instead of something with blinking lights.  This opens up all sorts of interesting possibilities down the road, much like Arista Networks’ vEOS virtualized router.  VJX evolved out of code that Juniper developers originally used to test the OS itself, so it has strong roots in the ability to emulate the Junos environment.  Riding on top of VJX is a web interface that allows you to drag and drop network topologies to create testing environments, as well as the ability to load preset configurations, such as those that you might get from Juniper to coincide with their training materials.  To reference this to something people might be more familiar with, VJX is like Dynamips, and the Junosphere lab configuration program is more like GNS3.

Junosphere can be purchased from a Juniper partner or directly from Juniper just like you would with any other Juniper product.  The reservation system is currently set up in such a way as to allot 24-hour blocks of time for Junosphere use.  Note that those aren’t rack tokens or split into 8-hour sessions.  You get 24 continuous hours of access per SKU purchase.  Right now, the target audience for Junosphere seems to be the university/academic environment.  However, I expect that Juniper will start looking at other markets once they’ve moved out of the early launch phase of their product.  I’m very much aware that this is all very early in the life cycle of Junosphere and emulated environments, so I’m making sure to temper my feelings with a bit of reservation.

As it exists right now, Junosphere would be a great option for the student wanting to learn Junos for the first time in a university or trade school type of setting.  By having continuous access to the router environments, these schools can add the cost of Junosphere rentals onto the student’s tuition costs and allow them 24-hour access to the router pods for flexible study times.  For self-study oriented people like me, this first iteration is less compelling.  I tend to study at odd hours of the night and whenever I have a free moment, so 24-hour access isn’t nearly as important to me as having blocks of 4 or 8 hours might be.  I understand the reasons behind Juniper’s decision to offer the time the way they have.  By offering 24-hour blocks, they can work out the kinks of VJX being offered to end users that might not be familiar with the quirks of emulated environments, unlike the developers that were the previous user base for the product.

Tom’s Take

I know that I probably need to learn Junos at some point in the near future.  It makes all the sense in the world to try and pick it up in case I find myself staring at an SRX in the future.  With emulated OS environments quickly becoming the norm, I think that Junosphere has a great start on providing a very important service.  As I said on Packet Pushers, to make it more valuable to me, it’s going to need to be something I can use on my local machine, a la GNS3 or IOU.  That way, I can fire it up as needed to test things or to make sure I remember the commands to configure IS-IS.  Giving me the power to use it without the necessity of being connected to the Internet or needing to reserve timeslots on a virtual rack is the entire reason behind emulating the software in the first place.  I know that Junosphere is still in its infancy when it comes to features and target audiences.  I’m holding my final judgement of the product until we get to the “run” phase of the traditional “crawl, walk, run” mentality of service introduction.  It helps to think about Junosphere as a 1.0 product.  Once we get the version numbers up a little higher, I hope that Juniper will have delivered a product that will enable me to learn more about their offerings.

For more information on Junosphere, check out the Junosphere information page at http://www.juniper.net/us/en/products-services/software/junos-platform/junosphere/.

A Lot Of People Take The Lab Seven Times…

A couple people have asked about some highlights in my lab experiences while going for my CCIE. Here are a few of the more humorous points.

My first lab attempt was in December of 2008. This was back before Open-Ended Questions (OEQs) or the Troubleshooting section. I got my teeth kicked in by this first lab. By lunchtime, I was pretty much shell shocked. I didn’t talk to anyone and spent a lot of time staring at my lab binder. At 2:00 p.m., I was wrestling with a BGP problem that I refused to let go of until I solved it, even if it cost me the rest of my lab. I got up and decided to get a drink from the break room. In RTP, the breakroom and bathroom are down the hall from the lab. As I worked out the possible solutions to my issues in my head, I woke up from my mental fog and found myself in the bathroom wondering where the Coke machine was. That’s when I knew my goose was cooked on that attempt.

Number two was the first with the OEQs.  I switched lab locations from RTP to San Jose.  Firstly, because the lab started an hour later and I love my sleep.  Secondly, because I had the time change going from Central to Pacific working for me instead of going from Central to Eastern and always being behind.  I nailed the trivia section at the beginning but got hammered on the configuration section.  I realized that I was good at the theory, but I needed to concentrate on the application.  Number three was my last shot at the version 3 lab.  I got enough points to pass the configuration section, but I missed too many trivia questions.  I was livid.  It’s like meeting someone for a first date and calling them by the wrong name 5 minutes after you meet.  The rest of the night is a wash no matter what, so why bother putting yourself through it?

Number four was my first version 4 lab attempt.  I had refused to take the new lab so long as the OEQs were still there.  Two things kicked me out of my self-imposed funk.  The first was the announcement of the elimination of the OEQs.  The second was some words of encouragement from my friend Narbik Kocharians.  At Cisco Live 2010, he told me that I just needed to keep at it until I got my number.  So I took him up on his advice.  Attempt four hurt a lot.  The TS section wasn’t kidding around, and I got stomped by the lab.  I felt almost the same as I did after attempt number one.  The sole bright spot was my ever-increasing subscore.  While I didn’t get enough points to pass either section, I was getting close to the top.  I just needed to find the drive to put myself over the summit.

Attempts five and six were my “near misses”. On five, I passed the TS but failed the configuration.  I was upset after that one.  I thought I had done a damn good job, only to get my score report back less than an hour after I left the lab building.  I retraced all my steps in my mind to find where I could have screwed up.  All the anger in the world wouldn’t get me past my failing mark, though.  IPExpert instructor Marko Milivojevic put it a little differently to me.  He told me there was no sense complaining about it. Get ready for the next one and get it done. On six, I failed TS and passed config. So, if you averaged those two, I passed. 😉 Attempt six really bolstered my confidence. I knew I had failed the TS section after the first two hours. But rather than leave and enjoy the California sunshine, I stuck around and finished. In return, I was able to pass the config section for the first time.  It was a bright spot that led me to have a little hope that passing this thing was possible.

I didn’t say much about attempt number seven because I was anxious. I felt a little embarrassed that I was up that high.  I was worried that I’d disappoint everyone that had been keeping up with my battle with the dreaded lab. I decided to take the Navy SEAL approach. Get in, do the job, then talk about it after success. I was relaxed as I strolled into the lab Thursday morning.  There were a couple of first timers there, and I could tell they were nervous.  It reminded me of my first attempt.  I’m pretty sure Tom Eggers and Tong Ma recognized me from my last attempt.  I could have given the pre-lab briefing for the proctors.  For every previous attempt, I’d been seated at the same workstation. This time, I was beside my usual spot. Maybe a change of location would be a good thing.

I started the TS section and told myself not to get caught up on any questions. If I couldn’t get it in 10 minutes, move on to the next one. I started to panic a little by the third question, but after I made a change that fixed a bunch of things all at once, I was elated and plowed right through, finishing about 20 minutes early. Once I was sure all the conditions were satisfied and my configs were correct and saved, I jumped into the lab. Normally, I get up after reading over the lab and go to the bathroom and get a drink. This time, however, I was in the zone and I didn’t stop to think. I just kept going. Before I realized it, Tong was handing out the lunch vouchers.

After a nice lunch, I came back and dove right back in. A couple of silly mistakes right off the bat made me refocus on doing things right.  I cursed myself for such simple errors, knowing that the difference between typing the right command and the wrong one was razor thin inside Building C.  I shut out distractions and kept going. In fact, I didn’t even notice the guy beside me on his first attempt get up and leave just after lunch. Guess my old pod got him too. By the time I finished my first run through, it was 2:00. I looked up and thought to myself that this was very doable from this point on. I had 3 hours to make sure I was right this time. After rearming with a Mountain Dew, my first that day, I went back over my configs with a fine tooth comb. Not a cursory inspection, but a real Navy SEAL dressing down. I forced myself to reverify every command, no stone left unturned. It’s a good thing I did, too. I found mistakes that would have cost me 7 points had I not corrected them. Those little lapses of attention very likely would have had me coming back for attempt number eight.

Once I double checked everything, literally in this case as each task had two check marks next to it on my paper, I reverified a few things I wasn’t sure about.  After I satisfied myself with the answers, I turned in my scratch paper.  It was 3:30, an hour and a half before time expired.  When I walked out of Building C, I knew I had done my absolute best.  I was confident that this would finally be the one.  After catching up on Twitter and email, I celebrated with my usual trip to In-N-Out Burger.  Back at the hotel, the minutes started ticking away.  After reading about the last few passing attempts from my friends online, I knew the longer it took for your score report to come back, the better it would be for my chances.  No report after an hour was good.  After 3 hours, I was giddy.  I’m sure the hotel bar had nothing at all to do with that.  By 11:00, I was equally concerned and hopeful.  Had the grading script messed up?  Were the proctors going over my configs to shake out any flaws?  I had given up the idea that I’d get my results before bed.  After hopping in the shower, I walked over to shut off my computer before bed.  It was then that Outlook delivered the dreaded email.

CCIE Score Report

I clicked on the link and logged into the site. I scanned to the bottom of the list and saw FAIL. My heart sank. How could I have failed?!? Then I realized I was reading the results of my first attempt in RTP. The newest score was at the top of the list. My eyes flitted over the four most wonderful letters I’d ever seen, P-A-S-S, quickly followed by a glance at the five most amazing numbers ever, 2-9-2-1-3. The whoop I let out surely had to wake the whole hotel. I called my lovely wife at 2:00 her local time and told her the good news. She informed me she was very happy. And also going back to bed. I, however, was wired. It was like the feeling of winning the big game a thousand times over. No more doubt, no more anticipation. No more second guessing myself and wondering if it would all ever be worth it. I emailed my boss and my Cisco account team. I chatted on Twitter with the poor souls awake at that hour. I made lists of who I needed to talk to. I tried to calm down. I finally fell asleep an hour later, but I didn’t really rest. The elation at my accomplishment kept me on high for a while. But at least I wasn’t dreaming about Indiana Jones and the Lost Prefix (that has happened before).  The next morning was filled with phone calls before my flight.  My boss answered on his speaker phone but quickly switched to his handset when I told him I was going to give him my lab results.  After informing him I passed, he laughed and said, “I could have left you on speaker for that good news.”  I boarded the plane home with a spring in my step for the first time in a long while.  Nothing could chisel the smile from my face.

Tom’s Take

What is best in life?  To crush the lab, see it driven before you, and hear the lamentations of the proctors.  Okay, maybe a little cheesy, but it kind of sums things up.  My father asked me how many questions I got wrong and still passed.  I told him “Since they don’t give you a breakdown, I’ll always think I got them all right.”  I can’t give the best advice about lab strategy.  As you can see, there were lots of dumb mistakes and missed chances.  I underestimated some sections and they paid me back in full.  But if nothing else, know that perseverance is the key to the lab.  Not giving up, not backing down, not letting yourself think for an instant that it isn’t possible.  Doubt is one of the biggest enemies of the CCIE hopeful.  Don’t let it cost you your chance at a number.

I’m Not A Name, I’m A Number!

You have no idea how long I’ve waited to see this little snippet in my email:

Three years.  Seven lab attempts.  Several close calls.  Lots of studying.  Lots of second guessing.  Lots of anger.  But in the end, the elation of those four letters makes it all worthwhile.

There are a lot of people to thank.  I’m going to call out three people in particular who are head and shoulders above the rest though.

First, my family and my beautiful wife Kristin especially.  Thank you for putting up with my odd schedule and my even odder behavior for the last three years.  Thank you for not letting me give up and making me keep my nose to the grindstone for this achievement.  I promise from this point forward that Daddy won’t have his nose buried in a configuration guide.  At least not too often.

Secondly, to my amazing boss, Mr. Mike Hibbs.  The man that never gave up on me.  Every time I flew back home with my head hung low with shame burning on my forehead, he told me to pick myself up off the ground and try again.  He footed the bill for me even when I didn’t think I could do it any more.  I owe you more than I can ever hope to repay in a lifetime.

And lastly (but certainly not least), to my mentor and friend Wes Williams.  Four years ago, Wes and I were having lunch and I told him that I was going to take the CCIE exam.  He paused and told me in his slow drawl, “Well, if you keep your head on straight and study hard, I think you’ll do it with no problem.”  Wes, you left us far too soon.  You saw the potential in me from the very start and never let me settle for second best.  I always said that if I passed the lab that I would owe my success to you.  I just wish you could have been here to see it.  This one’s for you, Wes.  I hope I made you proud.

There’s a lot more story for me to tell than I can hope to stuff into this blog post right now.  I plan on laying out some more of it after I’ve had time to come down off this emotional high.  I want to thank everyone for their encouragement and support for the last three years.  I knew that the only time I would have truly failed the CCIE is the day I decided to quit trying.  And now I never have to worry about that day.

From now on, CCIE #29213 belongs to me.

Configuring Cisco Unified Communications Manager and Unity Connection – Review

Voice engineering is a world apart from the run-of-the-mill routing and switching work most network rock stars do regularly.  Lots of browser screens, few opportunities for CLI work, and an ever-evolving interface make for interesting work even in the best of times.  Technology changes so quickly that people who have been out of the loop for more than a couple of years may find themselves adrift in a sea of confusion.

When the first edition of Configuring Cisco CallManager and Unity came out, it quickly became a go-to reference for voice engineers that wanted to learn all about Cisco’s preeminent call processing platform. Today, however, that volume is severely out of date, referencing CallManager 4.x and Unity 4.x, both long retired. With the changes that have been introduced since the move away from Windows-based platforms and Exchange, it was time to update the Cisco Press tome of voice knowledge. Not coincidentally, I give you Configuring Cisco Unified Communications Manager and Unity Connection, Volume Two.

Configuring Cisco Unified Communications Manager and Unity Connection

Title just rolls right off the tongue, doesn’t it?  Along with the change to CallManager, now abbreviated CUCM, we get updates to the platform in the book. This volume focuses on CUCM version 8.x and Unity Connection version 8.0. There is some coverage of Unity 8.0 as well, since those of you with strange curses may find yourself running into it like a patch of poison ivy.

For those of you that are new to CUCM v8, or new to CUCM in general, this book is a wonderful resource that guides you step-by-step through the menu options and settings in CUCM.  There is very little discussion about voice theory or SIP proxy setup or Nyquist’s Theorem. Instead, the meat of the book tells you how to make CUCM sing, from esoteric Enterprise Service parameters to the confusing Calling Search Space (CSS) setup. It guides and teaches so that you can spend more time setting things up the right way and less time scratching your head. The style is simple and easy to follow and, unlike online documentation, doesn’t read like stereo instructions.

The second half of the book deals with Unity and Unity Connection. Setup, PBX Integration, and even digital networking get their share of coverage. The instructions and features are presented generically so that they may apply to both platforms as necessary. Only where a feature applies to a single platform does the text get specific, such as the need to sprinkle holy water on Unity to make it boot up. Call Handler configuration gets a chapter as well, and I found the information there very good reference material for a feature that can become complicated quite fast.

Tom’s Take

If you are a new voice rock star that has a CUCM server to set up and no experience with the knobs and switches on the platform, go buy this book now. It will guide you through your first deployment much more gently than searching for hours through acres of documentation. For the grizzled veterans of CallManager 4.x who are just getting back into the game after years of therapy deprogramming all those Windows admin skills, this is also a must read. It will get you up to speed on new features like SUBSCRIBE CSS and new interface features.

For the voice rock stars that have been configuring CUCM through versions 5 & 6, the purchase of this book is a little less compelling. Much of the material covers tasks we do every day or each time we set up CUCM, so it may feel like a bit of a rehashing. Still, I found some of the more trivia-oriented content, like explanations of Service parameters and less-used feature configuration, to be of great value. I’m going to toss this book into my voice bag and keep it handy for those times when I need to configure a Unity Interview Handler and I don’t have Internet access on site. Think of it more as a Physician’s Desk Reference than an Encyclopedia Britannica.

Disclaimer

Cisco Press provided an evaluation copy of this book.  At no time did they ask for, nor did they receive, any consideration in this review. The analysis and opinions presented here represent my views and mine alone.

The Seedless Garden

After weeks of speculation on the matter, it appears that RSA has finally decided to admit the obvious: the SecurID token system has been compromised.  Honestly, I’m not shocked.  In fact, I said as much almost 2 months ago when debating the subject with the other Packet Pushers.  I remember hearing the original disclosure and thinking to myself “How could these hackers NOT have the keys to the kingdom?”  RSA categorized this hack as an Advanced Persistent Threat (APT), which is a great new umbrella term to describe hacks that persist for weeks or months without detection.  Of course, I don’t think clicking on an Excel spreadsheet pulled out of your junk mail folder qualifies as a particularly advanced penetration method, but as we’ve seen in the past few months (if not years), social engineering is a much more reliable infection vector.  That’s because you can always count on people to do things they aren’t supposed to.

RSA covered up the worst of the attack.  They put up a good smoke screen about needing to figure out what was stolen in the breach.  They even went so far as to talk about having the budget to implement new security that they wouldn’t have been able to before, which to me smacks of fixing the gate after the horses have gotten out.  RSA didn’t admit up front that the seeds of the SecurID tokens could have been compromised, although they admitted that some information relating to the SecurID system might have been involved.  They really didn’t admit much more than that.  In return, we got months of second guessing, supposition, and ultimately delays that caused Lockheed Martin, Northrop Grumman, and L-3 Communications to suffer from penetration attempts.  RSA never publicly told their customers to ditch their tokens, even though security professionals said that the worst case scenario of the seed exposure was probably the case.  In fact, Steve Gibson eerily said as much back on March 19th.

RSA should have come clean the day after the attack.  Even if it didn’t admit that it (likely) stored the token serial number in a database along with the seed used to generate the token’s algorithm, it should have at least advised customers to begin the process of replacing the older tokens with newer ones to ensure that the old tokens couldn’t be used as an attack vector.  Why?  Well, if you have access to the customer database, it doesn’t take much guesswork to figure out user IDs (first initial, last name).  Once you have the serial number, you can figure out which algorithm was used on the token, since it appears RSA stored this data somewhere or made it easily accessible.  Given that information, brute force becomes the tool used to try and penetrate a vulnerable network.  There has been some speculation that there is some foreign governmental interference in this whole mess due to the fact that the three targets were all defense contractors.  While I won’t discount this possibility, it’s more likely that these targets were chosen due to their heightened aura of security, almost guaranteeing they would use RSA tokens in their remote access strategy.  Since US defense contractors probably buy these things by the truckload, their information was probably all over the hacked database.  This lit them up like a Christmas tree in the eyes of potential hackers.
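To see why a leaked seed database is so dangerous, consider a generic time-based token scheme.  RSA’s actual SecurID algorithm is proprietary (AES-based in current tokens), so the sketch below is only an HOTP/TOTP-style stand-in with a made-up seed, but the principle holds: whoever holds the seed can compute the exact code the token is displaying, leaving only the short PIN or user ID to brute force.

```python
import hashlib
import hmac
import struct
import time

def token_code(seed: bytes, t: int = None, interval: int = 60, digits: int = 6) -> str:
    """Derive a time-based one-time code from a secret seed.

    Generic TOTP-style illustration, NOT RSA's proprietary SecurID
    algorithm. The point: anyone holding the seed computes the same
    codes the physical token does.
    """
    if t is None:
        t = int(time.time())
    # Hash the current time step with the seed as the key
    counter = struct.pack(">Q", t // interval)
    digest = hmac.new(seed, counter, hashlib.sha1).digest()
    # Dynamic truncation down to a short numeric code
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# A stolen seed record (hypothetical) collapses two-factor auth back
# toward one factor: the attacker computes the current code directly.
stolen_seed = b"seed-from-leaked-database"
print(token_code(stolen_seed))
```

This also shows why reseeding (replacing the tokens) is the only real fix once the seed database is suspect: the code stream is a pure function of seed and time, so no password change on the user’s side helps.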

If you’re using an RSA token right now, put it down.  Drop it in a thirty-three foot hole in the ground.  Bury it completely (rocks and boulders should be fine).  Then demand that RSA replace it with a new one.  Yes, you aren’t going to be able to destroy your whole remote access strategy and rip out all the RSA equipment.  That would cost you a small fortune.  Better to make RSA replace the tokens for you (at their cost) and investigate alternatives down the road.  While I believe that RSA may be able to recover from this with enough time and some management changes, the fact that they let it happen in the first place will sting them for a long time to come.

Tom’s Take

Security breaches are always a wonderful game of ‘worst case scenario’.  It tends to make most security professionals a little cynical, but it also keeps us from shooting ourselves in the foot.  If you are a respected company like RSA (was), there should be no excuse for this cover up.  You should always assume the worst case scenario in a situation like this.  The new replacement tokens should have started shipping to your most important customers weeks ago.  The newly-keyed devices should have been in the hands of your critical customers before they had the chance to ask why their keychain ornaments needed to be replaced.  Even if the algorithm wasn’t compromised (which we now know that it was), a little proactive goodwill may cost money up front, but it won’t come anywhere near the cost that a black eye like this will end up totaling in the long run.  Sony may have a big black eye from its security fiasco, but RSA is actually a security company.  People like Sony trust them to secure data.  Finding out that they were hacked and their code stolen to leverage attacks on their customers is like shooting a cop with his own gun.  RSA should have known better and done the right thing up front.  No grandiose PR moves backed with vague statements that “something happened, we think”.  Come clean, fix the issue, and be ready to meet the fallout head on rather than being blindsided in the press after the fact when your customers are getting the Sony Treatment.  Better to have a garden of crops that will eventually grow back than the barren salted earth you’ve got now.

A Moment of Silence for Sony

It goes without saying that Sony currently has a target the size of Iowa painted on its back.  Between the breaches in the Playstation Network, Sony Online Entertainment, and now Sony Pictures, you would be hard pressed to find a company that has been more thoroughly embarrassed when it comes to user data security.  Every day brings word of another incursion.  I’m thinking that something is going to have to give sooner or later.

Sony started out this whole mess by going after George Hotz, a famous hacker that goes by the online name Geohot.  Geohot has done all manner of things, including a simple jailbreak for the iPhone known as Limera1n.   Geohot also had his eyes on rooting the Playstation 3, Sony’s premier gaming console.  While Sony had given you the option to install a Linux-based OS onto the console from the start, Geohot wanted to take it a step further and unlock the ability to run other kinds of code, as well as gaining access to the memory contents and hypervisor level of the console.  This would allow users to do things like emulate Playstation 2 games, which was an original feature of the console that was later dropped due to complexity and memory constraints.  Geohot also started work on creating a custom firmware for the console that would allow users to do as they wished, while still keeping certain features of the OS intact.  In April of 2010, Geohot announced that he was not pursuing the development any further, but in January 2011, he posted the root signing keys of the PS3 online.  This is probably the straw that broke Sony’s back.  The root key would give anyone the ability to sign code and execute it on the console without raising any suspicion.  Sony sued Geohot, and after some legal maneuvering and lots of publicity, eventually settled the lawsuit in April 2011.  This was the catalyst for the difficulties that Sony has faced over the past two months.

In late April, Sony shut down large portions of the Playstation Network (PSN) for an extended period of time due to what was later termed an “external intrusion”.  After Sony rushed to restore the network that controlled the majority of Playstation online multiplayer capabilities, Sony Online Entertainment was breached as well in early May.  Rather than rushing things back online this time, more care was taken to excise any possible problems and no ETA was set to bring the services back to the public.  In the interim, Sony profusely apologized for the problems and even testified before the US Congress about the breaches.  Sony recently enabled PSN once more, only to fall victim to another hacking group exposing portions of the Sony Pictures online customer database.  In all, close to 40 million Sony customers have had their personal information exposed in one form or another in the past two months.  Email addresses, birthdates, and credit card numbers with Card Verification Value (CVV) codes have all been stolen.

What started out as a showdown in the desert between Sony and a group of hackers angered by the treatment of Geohot has now taken on the appearance of a rotting carcass slowly being picked over by anyone that wants to come along and poke it.  The question now isn’t whether Sony will be hacked again, but what might get stolen this time, and where it will be stolen from.  As a former customer of Sony Online Entertainment, I can be certain that some of my information is probably out in the wild.  I’ve since changed passwords and credit card numbers to avert any possible wrongdoing, but other customers haven’t been so lucky.  I’ve lost all confidence in Sony and their ability to keep my information secure.  While many point to the infamous rootkit incident as the point where Sony started to sour in the eyes of their customer base, I think the PSN outage points to a bigger issue.  If Sony wants to install software on my computer to monitor whether or not I’m ripping CDs, that’s their business.  I can dislike them for doing something they shouldn’t and be done with it.  The only harm done was their ham-handed attempt to sneak something onto my PC.  But with this series of hacks, Sony has taken their corporate image and dragged it through the filthiest mud imaginable.  I now no longer dislike Sony because they do things they shouldn’t, but instead I’ve lost confidence in their ability to keep me safe.  Just like a bank failure, when a company can no longer assure me it can do business the way that it should be conducted, it’s time to move my business elsewhere.

Sony faces some pretty rough territory in the coming future.  First, they really need to find out what raised the ire of their intruders and apologize for it.  Profusely.  It may be a little late now, but if they show a little remorse for whatever wrong they may have done it might call off the dogs for a bit.  Sony needs time to recover and reassess their security posture.  Secondly, Sony needs to can their security team and bring a set of fresh eyes into the picture.  It’s quite apparent that the current team wouldn’t know security if it bit them in the ass.  Passwords stored in clear text, arbitrary account recovery mechanisms, and general incompetence seem to abound.  It’s time to get a new CISO and make some drastic and public changes.  Announce what you are going to do and make sure your now-burned customers are aware of your new commitment to security.  You aren’t going to win anyone back by implementing new security features and burying them on page 20 of a 21 page press release.  Face it Sony, your reputation is shot either way.  Why not make the most of it and try to win back some fans by admitting you screwed up and then fixing it?
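On the cleartext password point: the standard remedy is to never store the password at all, only a salted, deliberately slow hash of it.  A minimal sketch using Python’s standard library (the function names and iteration count here are illustrative choices, not a hardened recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 100_000):
    """Return (salt, digest) for storage instead of the raw password.

    A random per-user salt defeats precomputed rainbow tables, and the
    PBKDF2 iteration count makes each brute-force guess expensive.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 100_000) -> bool:
    """Recompute the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)
```

With a scheme like this, a database thief gets salts and digests, not credentials; contrast that with cleartext storage, where the breach hands over every account ready to use.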

Tom’s Take

There’s no doubt Sony makes good technology.  Even when it fails.  However, a series of organizational policies that have left their customer base more violated than the speed limit is their worst failure to date.  There isn’t going to be a cool new feature to save them from this disaster.  No hope for a new version of software to work out these bugs.  It’s time to rewrite the security posture from the ground up.  Find an executive or two to fall on their swords for this whole mess and move on.  Make sure to keep your former customers in the loop about how you’re going to ensure that this never happens again.  I, for one, am done with Sony until I see some major changes in their handling of customer data.  No more TVs, cameras, Walkmen, or games until they prove to me that filling out an online profile isn’t going to expose me to all manner of dastardly things on the Internet and beyond.  Sony’s had their moment of silence in all this by refusing to come clean about the hack in the first place.  Again and again, they’ve kept their mouths shut about timetables and countermeasures.  And until I hear something from them about all this, they won’t hear anything from me at all.