Call To Independence

Paul Revere’s ride – Courtesy of Wikipedia

The life of an independent blogger is never boring.  With all the news coming out about acquisitions and speculation about lines of business converging and moving, we have a lot to write about.  When you factor in the realization that practically no one is secure anymore and the next major data breach is just around the corner, it’s easy to see how one might stay busy.  However, I wanted to take a moment to talk about something I’ve been hearing recently with regard to the independent blogging community that has me a bit distressed.

In the last couple of months, we’ve seen several of the voices in the blogging community moving on to work with vendors.  It started with Andrew von Nagy (@revolutionwifi) heading to Aerohive.  Since then, we’ve seen Marcus Burton (@marcusburton) jumping to Ruckus Wireless, Hans de Leenheer (@hansdeleenheer) moving to Veeam, and most recently, Derick Winkworth (@cloudtoad) landing at Juniper.  I’ve met each and every one of these people and I greatly admire their work and their voice in the community.  I’m very happy that they’ve found gainful employment with a vendor, and the fact that they will be bringing their talents and opinions to those that want to hear them is a boon to everyone.  However, I had a chance to talk with Stephen Foskett (@SFoskett) on the phone the other day.  We were talking about some Tech Field Day related material when the subject of independent bloggers came up.  Stephen told me that he’d heard from some people out there that we’d lost people like Andrew and Marcus to vendors.  We both agreed that kind of terminology wasn’t the best phrasing for what had occurred.

Yes, it’s true that the bloggers above are no longer independent in the strictest sense of the word.  They now have a vendor patron that will shape their views and give them information and insights that they might not otherwise get elsewhere.  They also still possess the sense of independence and critical thinking that have always made them such great resources for us all.  They are going to keep creating amazing content and helping out the community in every way they can.  They just wear a different shirt to work every day.  They aren’t dead to us.  We don’t have to recoil in horror every time we speak to them.  Some of the best and brightest people I know work for vendors.  Especially as of late, vendors have shown that they are willing to go out and get the best and brightest of the industry.  Independent bloggers are no different.  Every word that is written or every tweet that is tweeted gives a better picture of the talent of the independent blogging community.  We all listen, and so do the vendors.

Don’t look at a vendor hiring an independent and think to yourself, “Oh boy.  What are we going to do now?”  Instead, look at this as an opportunity.  There are hundreds of people out there that have stories to tell and information to share.  The independent community is overflowing with opportunity to step up and tell the world what you want them to hear.  When you listen to the opening comment videos that I’ve done recently for Tech Field Day events, I always close with the same line – Make sure that your voice is heard.  I chose that line very carefully.  A lot of people will say that an independent blogger needs to “find their voice.”  That statement makes no sense to me.  Those of you out there with more than 30 seconds of experience with something already have a voice.  You have a thinking strategy and an opinion and a way to form words out of those, whether they be out loud or on a printed page.  You don’t need to find your voice.  You need to project it.  Blogging is all about writing down your thoughts.  I initially started this place to codify those thoughts in my head that were 141+ characters and wouldn’t fit on my Twitter stream.  Instead, it’s evolved into a place where I can prognosticate about industry news or give my opinions about things.  The key is that I put all those thoughts down here and get them out there.  People read them.  People comment on them.  People discuss them.  Sometimes people even yell at me about them.  What’s important is that people are talking.  That’s the key to becoming an independent blogger.  Every time I get a new follower on Twitter or a new LinkedIn request, I always go out to see if that person has a blog.  I like to read the things they have to talk about.  I like to see what kind of discussions they are having with people. I like to know more about what makes them tick.  That’s the kind of information that can’t be conveyed in a profile or a 140-character stream.

Those of you out there in the community that are on the fence about making your voice heard need to stop what you are doing right now and go do it.  It doesn’t matter if you think it will amount to anything in the long run.  I sure didn’t think I’d be making 250 posts when I started.  When I was talking to Greg Ferro (@etherealmind) and Ethan Banks (@ecbanks) about their plans for opening Packet Pushers up to independent bloggers, I told them that I thought it was a great idea because “Everyone has a blog post in them somewhere.”  If I had it to do over again today, I’d probably be a Packet Pushers blogger.  I don’t like the hassle of dealing with site administration stuff.  I don’t like picking themes or deciding what widgets to put in the sidebar.  I care more about the message and the information.  Packet Pushers is great for the blogger that wants to get their feet wet and put out a few posts to gauge interest.  People like Derick and Mrs. Y (@mrsyiswhy) blog almost exclusively on Packet Pushers.  It’s a great platform for the community.  For those of you that want to make a go of it yourself, there are also great options available.  WordPress and Blogger offer great free platforms.  Just pick a theme and start writing.  My blog is still hosted by WordPress and likely will be for the foreseeable future.  I’m not in this to make money or rule the world.  I want to share my thoughts and opinions with the world.  I want to generate good technical posts to help people out of tight spots.  I want to make bad NAT videos.  WordPress helps me do that, and it can help you too.  Even if you start out writing a post a month, the key is to start.  Once you’ve gotten a post or two under your belt, you may find you like it and you want to keep doing it.  I constantly push myself to keep writing because I know that if I stop, I’m not going to keep up with it like I should.  I’m not saying you have to make a post a day, but you need to start before it can become a habit.

In the end, the independent blogging community exists because people write.  People share ideas and start conversations.  The more people that are out there doing those things, the bigger and better the blogger community becomes.  That’s the reason why Google Plus has had such a hard time competing with Facebook.  Facebook is where the people are.  In the blogging community, we already have a large number of people out there reading posts.  In order for us to truly prosper, we need to grow.  When independent bloggers get the chance to go to a vendor, that means that there is all that much more opportunity for someone new to step up.  Participation guarantees citizenship in the independent blogger community.  If you have ever wanted to share with the rest of the world, now is the time to do it.  Sit down and think about that one blog post that everyone has in them.  Write it down tonight.  Don’t worry about grammar or spelling.  Just put the thoughts on paper.  Editing can happen later.  Once you have that good blog post down and committed to paper (or text file), then decide how you want to tell the world about it.  Whether it be Packet Pushers or your own blog, just get everything together and out there so people can start reading it.  Tell the community where to find your blog.  Twitter, Facebook, Google Plus, LinkedIn, Pinterest, and many others are good sounding boards.  Heck, you could rent an airplane to tow a banner around downtown New York City if you wanted.  The important thing is to make sure you are heard so we know where to go to read what you have to say.

If even one person reading this decides to start a blog or share their thoughts about the industry, then I will have succeeded in my call to arms.  I don’t want to hear people telling me that the independent blogging community is being diminished because vendors are hiring the best and brightest.  Instead, I want the vendors to be telling me that there are so many great independent bloggers out there that they couldn’t possibly hire them all even though they want to.  That’s the way to keep a community strong.  And I challenge each and every one of you to make us all great.

OS X 10.8 Mountain Lion – Review

Today appears to be the day that the world at large gets its hands on OS X 10.8, otherwise known as Mountain Lion. The latest major update in the OS X cat family, Mountain Lion isn’t so much a revolutionary upgrade (like moving from Snow Leopard to Lion) as an evolutionary one (like moving from Leopard to Snow Leopard). I’ve had a chance to use Mountain Lion since early July when the golden master (GM) build was released to the developer community. What follows are my impressions of the OS from the perspective of a relatively new Mac user.

When you start your Mountain Lion machine for the first time, you won’t notice a lot that’s different from Lion. That’s one of the nicer things about OS X. I don’t have to worry that Apple is going to come out with some strange AOL-esque GUI update just around the corner. Instead, the same principles that I learned in Lion continue here as well. In lieu of a total window manager overhaul, a heavy coat of polish has been applied everywhere. Most of the features listed on the Mountain Lion website are included, though I’m not likely to use many of them all that much. Instead, there are a few little quality of life (QoL) things that I’ve noticed. Firstly, Lion originally came with the dock indicator for open programs disabled. Instead of a little light telling you that Safari and Mail were open, you saw nothing. This spoke more to the capability, introduced in Lion, that reopened whatever windows were open when you last closed a program. Apple would rather you think less about whether a program is open or closed and more about what programs you want to use to accomplish things. In Mountain Lion, the little light that indicates an open program has shrunk to a small lighted notch on the very bottom of the dock below an open program. It’s now rather difficult to determine which programs are open with a quick glance. As one of those people who is meticulous about which programs I have open at any one time, I find this a bit of a step in the wrong direction. I don’t mind that Apple has changed the default indicator. Just give me an option to put the old one back.

My Mountain Lion Dock with the new open program indicators

Safari

Safari also got an overhaul. One of the things I like the most about Chrome is the Omnibox. The ability to type my searches directly into the address bar saves me a step, and since my job sometimes makes me feel like the Chief Google Search Engineer, saving an extra step can be a big help. Another feature is the iCloud button. iCloud can now sync open tabs on your iPhone/iPad/iPod/Mountain Lion system. This could be handy for someone that opens a website on their mobile device but would like to look at it on a full-sized screen when they get to the office. Not a groundbreaking feature, but a very nice one to have. The Reading List feature is still there as well from the last update, but being a huge fan of Instapaper, I haven’t really tested it yet.

Dictation

Another new feature is dictation. Mountain lion has included a Siri like dictation feature in the operating system that allows you to say what you want rather than typing it out. Make no mistake though. This isn’t Siri. This is more like the dictation feature from the new iPad. Right now, it won’t do much more than regurgitate what you say. I’m not sure how much I’ll use this feature going forward, as I prefer to write with the keyboard as opposed to thinking out loud. Using the dictation feature does make it much more accurate, as the system learns your accent and idiosyncrasies to become much more adapt over time. If you’d like to get a feel for how well the dictation feature works, (the paragraph)

You’ve been reading was done completely by the dictation feature. I’ve left any spelling and grammar mistakes intact to give you a realistic picture. Seriously though, the word paragraph seems to make the dictation feature make a new paragraph.

Gatekeeper

I did have my first run-in with Gatekeeper about a week after I upgraded, but not for the reasons that I thought I would.  Apple’s new program security mechanism is designed to prevent drive-by downloads and program installations like the ones that have embarrassed Apple of late.  Gatekeeper can be set to allow only signed applications from the App Store to be installed or run on the system.  This not only gives Apple the ability to protect the non-IT savvy populace at large from malicious programs, but also the ability to program a remote kill switch in the event that something nasty slips past the reviewers and starts wreaking havoc.  Yes, there have been more nefarious and sinister prognostications that Apple will begin to limit apps to only being able to be installed through the App Store or that Apple might flip the kill switch on software they deem “unworthy”, but I’m not going to talk about that here.  Instead, I wanted to point out the issue that I had with Gatekeeper.  I use a network monitoring system called N-Able at work that gives me the ability to remote into systems on my customers’ networks.  N-Able uses a Java client to establish this remote connection, whether it be telnet, SSH, or RDP.  However, after my upgrade to Mountain Lion, my first attempt to log into a remote machine was met with a Java failure.  I couldn’t bypass the security warning and launch the app from a web browser to bring up my RDP client.  I checked all the Java security settings that got mucked with after the Flashback fiasco, but they all looked clean.  After a quick Google search, I found the culprit was Gatekeeper.  The default permission model allows Mac App Store apps to run as well as those from registered developers.  However, the server that I have running N-Able uses a self-signed certificate.  That evidently violates the Gatekeeper rules for program execution.  I changed Gatekeeper’s permission model to allow all apps to run, regardless of where the app was downloaded from.  This was probably something that would have needed to be done anyway at some point, but the lack of specific error messages pointing me toward Gatekeeper worried me.  I can foresee a lot of support calls in the future from unsuspecting users not understanding that their real problem isn’t with the program they are trying to open, but with the underlying security subsystem of their Mac instead.

Twitter Integration

Mountain Lion has also followed the same path as its mobile counterpart and built Twitter integration into the OS itself. This, to me, is a mixed bag. I’m a huge fan of Twitter clients on the desktop. Since Tapbots released the Tweetbot Alpha the same day that I upgraded to Mountain Lion, I’ve been using it as my primary communication method with Twitter. The OS still pops up an update when I have a new Twitter notification or DM, so I see that window before I check my client. The sharing ability in the OS to tweet links and pictures is a nice time saver, but it merely saves me the step of copying and pasting. I doubt I’m any more likely to share things with the new shortcuts than I was before. The forthcoming Facebook integration may be more to my liking. Not because I use Facebook more than I use Twitter. Instead, by having access to Facebook without having to open their website in a browser, I might be more motivated to update every once in a while.

AirPlay

I had a limited opportunity to play with AirPlay in Mountain Lion.  AirPlay, for those not familiar, is the ability to wirelessly stream video or audio from one device to a receiver.  As of right now, the only out-of-the-box receiver is the Apple TV.  The iPad 2 and 3 as well as the iPhone 4S have the capability to stream audio and video to this device.  Older Macs and mobile devices can only stream audio files, a la iTunes.  In Mountain Lion, however, any newer Mac with an Intel Core i-series processor can mirror its screen to an Apple TV (or other AirPlay receiver, provided you have the right software installed).  I tested it, and everything worked flawlessly.  Mountain Lion uses Bonjour to detect that a suitable AirPlay receiver is on the network, and the AirPlay icon appears in the notification area to let you know you can mirror your desktop over there.  The software takes care of sizing your desktop to an HD-friendly resolution and away you go.  There was a bit of video lag on the receiver, but not on the Mountain Lion system itself, so you could probably play games if you wanted, provided you weren’t relying on the AirPlay receiver as your primary screen.  For regular things, like presentations, everything went smoothly.  The only part of this system that I didn’t care much for is the mirroring setup.  While I understand the idea behind AirPlay is to allow things like movies to be streamed over to an Apple TV, I would have liked the ability to attach an Apple TV as a second monitor input.  That would let me do all kinds of interesting things.  First and foremost, I could use the multi-screen features in PowerPoint and Keynote as they were intended to be used.  Or I could use AirPlay with a second HDMI-capable monitor to finally have a dual monitor setup for my MacBook Air.  But, as a first generation desktop product, AirPlay on Mountain Lion does some good things.  While I had to borrow the Apple TV that I used to test this feature, I’m likely to go pick one up just to throw in my bag for things like presentations.


Tom’s Take

Is Mountain Lion worth the $20 upgrade price? I would say “yes” with some reservations. Having a newer kernel and device drivers is never a bad thing. Software will soon require Mountain Lion to function, as in the case of the OS X version of Tweetbot when it’s finally released. The feature set is tempting for those that spend time sharing on Twitter or want to use iCloud to sync things back and forth. Notification Center is a plus for those that don’t want popup windows cluttering everything. If you are a heavy user of presentation software and own an Apple TV, the AirPlay mirroring may be the tipping point for you. Overall, compared to those that paid much more for more minor upgrades, or paid for upgrades that broke their system beyond belief (I’m looking at you, Windows ME), upgrading to Mountain Lion is painless and offers some distinct advantages. For the price of a nice steak, you can keep the same performance you’ve had with your system running Lion and get some new features to boot. Maybe this old cougar can keep running a little while longer.

Study Advice – Listen To That Little Voice

During Show 109 of the Packet Pushers podcast, I had the unique honor of being involved in an episode that included the uber geek Scott Morris, distinguished Cisco Press author Wendell Odom, and the very first CCDE, Russ White.  Along with Natalie Timms, the CCIE Security program manager, and Amy Arnold, we discussed a variety of topics around the subject of certification.  One of the topics that came up about 37 minutes in was about being persistent in your studies.  Amy brought up a good point that you need to find a study habit that works for you.  I followed up with a comment that I still have a voice in the back of my head that tells me I need to study.  I promised a blog post about that, so here it is, only a month late.

I took three years to get my CCIE.  Only the last year really involved intense study on a regular basis.  For the previous 24 months, I had spent a great deal of time and effort on my regular job.  I picked up a book from time to time to refresh my memory, but I wasn’t doing the kind of heavy-duty labbing necessary to hone my CCIE skills.  After I had some conversations with my mentors about what the CCIE really meant to me, I jumped in and started doing as much studying as I could every night.  Almost all of my study time came after my kids went to bed.  Basically, from 8 p.m. until about 1 a.m., I fired up my GNS3 lab and tested various scenarios and brain teasers.  It took me a bit of time before I really settled into a routine, though.  There were lots of things that kept tugging at my attention.  The devilish Internet, the seductive allure of my television, and the siren call of video games all competed to see which one could lure me away from the warm glow of my console screen.  I had to spend a great deal of time focusing on making a conscious decision to drop what I was doing and start working on my lab.  It’s a lot like running, in a way.  Most runners will tell you that if you can get outside and start running, the rest is easy.  It’s overcoming all the obstacles in your way that are trying to keep you from running.  You have to push past the distractions and keep moving no matter what.  Don’t let an email or a text message keep you from starting R1.  Don’t let a late-night snack run distract you from loading a troubleshooting configuration.  The real key is to get started.  Crack open those lab manuals and fire up your routers, whether they be real or virtual.  After that, the rest just falls into place.

There is a downside to all that training, though.  It’s now been 13 months since I passed my CCIE lab.  To this day, I still have a little voice in the back of my head telling me that I need to be studying.  Every time I flip on the TV or sit down on the couch, I feel like I should have a book in my lap or have a lab diagram staring me in the face.  I’ve taken some certification tests since the lab, but I haven’t really taken a great deal of time to study something that isn’t familiar to me.  I talked about what I wanted to do at the beginning of the year, and now that I’m halfway through it, I firmly believe that I’ve missed some opportunities to get back on the horse, as it were.  I know that the only way to satisfy that voice that keeps telling me that I should be doing something is to feed it with chapters of study guides and time in front of the lab console again.  I don’t think it will take the same kind of time investment that the CCIE did, but who knows what it might build into in the end?  I certainly never thought I’d be taking the granddaddy of all certification tests when I first started learning about networking all those many years ago.

For those out there just starting to study for your certifications, I would echo Ethan’s advice during the podcast.  You need to make a habit out of studying.  Many people that I talk to want to study for tests, but they want to do it on someone else’s time.  They want their employer to mark off time for study or provide resources for learning.  While I’m all for this kind of idea and would love to see more employers doing things like this, there is a limit that you will eventually reach.  Your employer expects you to spend your time providing a service for them.  If you truly want to have as much study time as you need, you will have to find it outside working hours.  Your boss doesn’t care what you do from 5 p.m. on.  In the case of the CCIE, it was a whole lot easier for me to try and do mock labs on Saturday than it was to try and do them on Tuesday.  The work week doesn’t afford many uninterrupted opportunities for study.  Nights and weekends do.

Make sure you take your study habits as seriously as you do your job.  It might be easy to kid yourself into thinking that you can just pick up the book for five minutes before the next TV show comes on, but we both know that won’t work.  Unless you immerse yourself in studying, all that knowledge that you gained in those scant minutes of furious reading will evaporate when the theme song to that hit sitcom starts.  You don’t have to have total silence, though.  I find that I do some of my best studying when I have some noise in the background that forces me to pay attention to what I’m doing.  However, if you don’t apply some serious consideration to your studies, you’ll probably end up much like I did in the first couple of years of my studies – adrift and listless.  If you can knuckle down and treat it just like you would a troubleshooting task or an installation project, then you’ll do just fine.

The Card Type Command – Don’t Flop

If you’ve ever found yourself staring at a VWIC2-1MFT-T1/E1 or an NM-1T3/E3 module, you know that you’ve got some configuration work ahead of you.  Whether it be for a PRI circuit to hook up that new VoIP system or a DS3 to get a faster network connection, the T1/T3 circuit still exists in many places today.  However, I’ve seen quite a few people that have been stymied in their efforts to get these humble interface cards connected to a router.  I have even returned a T1/E1 card myself when I thought that it was defective.  Imagine the egg on my face when I discovered that the error was mine.

It turns out that ordering the T1/E1 or T3/E3 module from Cisco requires a little more planning on the installation side of things.  These cards can have a dual identity because the delivery mechanism for these circuits is identical.  In the case of a T1/E1, the delivery mechanism is almost always over an unshielded twisted pair (UTP) cable.  Almost all of the T3/E3 circuits that I’ve installed have been delivered over fiber but terminated via coax cables with BNC connectors.  The magic, then, is in the location.  A T1 circuit is typically delivered in North America, while the E1 is the European version.  There are also differences in the specifics of each circuit.  A T1 is 24 channels of 64 kbit/s each.  An E1 is 32 channels of the same size.  This means that a T1 has an effective line rate of 1.544 Mbit/s (24 x 64 kbit/s = 1.536 Mbit/s of channel capacity plus 8 kbit/s of framing overhead), while an E1 is a bit faster at 2.048 Mbit/s.  There are also framing differences and a slightly different signaling structure.  The long and short of it is that T1 and E1 circuits are incompatible with each other.  So how does Cisco manage to ship a module that supports both circuit types?

The key is that you must choose which circuit you are going to support when you install the card.  The card can’t automatically flip back and forth based on circuit detection.  In my line of work, the majority of issues come from the fact that the card doesn’t show up as a configurable interface until you force a circuit type.  This is accomplished by using the card type command:

RouterA(config)#card type ?
 e1 E1
 e3 E3
 t1 T1
 t3 T3

Choose your circuit type and away you go!  As soon as you enter the card type, the appropriate serial interface is created.  You will still need to enter the controller interface to set parameters like the framing and line code.  However, the controller interface only shows up when the card type has been set as well.  So unless you’ve done the first step, there isn’t going to be a place to enter any additional commands.
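To put the whole flow in one place, here’s a rough sketch of what turning up a full-rate T1 data circuit can look like once the card is installed.  The slot numbers, framing, and line code below are placeholders for illustration, so substitute whatever your hardware and carrier actually call for:

! Force the card in slot 0, subslot 0 into T1 mode (placeholder slot numbers)
RouterA(config)#card type t1 0 0
!
! The matching controller only appears once the card type has been set
RouterA(config)#controller t1 0/0/0
RouterA(config-controller)#framing esf
RouterA(config-controller)#linecode b8zs
! Bundle all 24 timeslots into a single channel group for a full-rate data circuit
RouterA(config-controller)#channel-group 0 timeslots 1-24
RouterA(config-controller)#exit
!
! The channel group creates Serial0/0/0:0, which is configured like any other serial interface
RouterA(config)#interface serial 0/0/0:0
RouterA(config-if)#ip address 192.0.2.1 255.255.255.252
RouterA(config-if)#no shutdown

For a voice PRI, the channel-group line would be swapped for a pri-group statement (along with the appropriate ISDN switch type), but the card type and controller steps are exactly the same.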


Tom’s Take

Sometimes there are things that seem so elementary that you forget to do them.  Checking a power plug, flipping a light switch, or even remembering to look for little blinking lights.  We don’t think about doing all the easy stuff because we’re concentrating on the hard problems.  After all our hard work, we know it has to be something really messed up; otherwise it would be fixed by now.  In the case of T1/E1 cards, I made that mistake.  I forgot to check everything before declaring the card dead on arrival.  Now, I find myself spending a lot of time providing that voice of reason for others when they’re sure that it has to be something else.  The little voice of reason doesn’t always have to be loud; sometimes it just has to say something at the right time.

Device Naming Conventions

At some point or another, we’ve all been faced with the ominous screen asking us to name a device.  Whether it be a NetBIOS name or a DNS hostname, in those critical minutes we’ve been under as much pressure as any other time in our careers.  What should we call this thing?  Should I name it something memorable?  Should it be useful?  What about some kind of descriptive codename?  I wanted to share a few things with you that I’ve found over the years that might get a chuckle or two.  Hopefully, they’ll also serve as a yardstick for naming things in the future.

More often than not, desktops that are deployed straight out of the box keep the name that they were programmed with at the factory.  This can be some strange combination of manufacturer, serial number, or phases of the moon.  Unless you’re on top of things or you have a VAR doing the installation for you (yay me!), you’ve left the name alone because it’s something that you don’t necessarily care about.  Infrastructure devices, on the other hand, are devices that have to be named to function.  These are the ones that prompt the most thought about what they should be called.  My first run-in with an odd naming convention came back in high school.  When I was but a wee lad trying out this scary Internet thing for the first time (through CompuServe, no less), I started emailing a friend that went to a more tech-savvy school.  Her email address was hosted by the local university on a mail server they built.  It seems that the seven mail servers that hosted the university and its users were named after Disney’s seven dwarfs.  In particular, this server was named Bashful.  I always thought that was interesting, since my friend was anything but bashful.  As time went on, I realized that people started naming their computers funny things because they wanted to remember what each one did or give it some kind of special significance to them.  When it came time to name a whole set of networked computers, that’s when you usually delved into the depths of literature or popular culture to come up with naming sets.  Groups of collected individuals with diverse skill sets that help you remember what it is that your devices do.  It also affords you the chance to show how clever you think you might be.

Far and away, the most popular naming set for servers/routers/stuff is Greek mythology.  I’ve worked on more Apollos and Zeuses and Athenas than I have any other device in history.  Usually, you can figure out what a server is doing based on which deity it’s named after.  Zeus is the domain controller/master server.  Athena is the ticketing database or SharePoint server.  Hermes is the VoIP server.  Funny thing, though.  You hardly ever see Hades doing something.  Usually, it’s a server on the fifth or sixth reload that they don’t really care about.  Also, don’t ask what Tartarus is doing.  It’s never anything good, I assure you.  While the Greeks are popular when it comes to server naming, I’m seeing a huge uptick in Lord of the Rings characters.  This is a bit more problematic for me, since I’m not usually inclined to figure out why someone named a server Merry or Pippin.  Depending on how much server sprawl you have, you may even need to reach down to find characters that weren’t in the movies, like Tom Bombadil.  Of course, every time I see a LotR naming setup, I very much want to change the name of the primary domain controller to Mordor and then disable all user accounts on it.  Why?  Because no one simply logs into Mordor.

On the flip side, I’ve seen users that understand that naming things after Greek gods and Ian McKellen characters can be a bit confusing at times.  So they’ve swung to the complete opposite side of the spectrum and come up with their own naming convention for things.  Normally, I applaud this kind of forward-thinking approach.  However, if your code names only make sense to you, it’s not much better than naming your servers after Best Supporting Actor Academy Award winners.  For instance, does the server name SW2K332DC050 jump right out and tell you anything meaningful?  It took me many tries to finally figure out that this particular server is running Windows Server 2003 32-bit and is serving as a domain controller.  Of course, that was when the server was first installed.  Now, it’s a Windows Server 2008 R2 machine that’s not a domain controller and is instead running some web-based application.  Being faced with a whole page full of names like that is like trying to read the phone book.  Someone coming into this environment would need a cheat sheet or at least access to the server admin team to figure out what server you were working on.

I’m a huge fan of naming conventions that convey the device’s type and purpose on one short line.  Being a VAR, it’s usually critical for me to be able to scan an environment quickly and determine what exactly I’m working with.  Calling a switch 7K-Core-1 allows me to know almost instantly that I’m working on a Nexus 7000 in the core and that there should be at least one other switch (Core-2) somewhere close by.  Naming a switch 2960S-IDC1-1 is almost as effective but can lead to issues when I don’t know where IDC1 is located.  Since I work mostly with K-12 education institutions, I usually fall back on familiar location info, such as 3560-Lib-1 or 4500-Caf-2, to help me figure out where I need to start my search for these devices.  I’ve always told people that my documentation habits arise from the need for me to remember exactly what was going on when I did something six months ago.  This goes for naming conventions as well.  I may be looking at a device from a stuffy hotel room three time zones away and not have access to all of the pertinent information before a critical change must be made.  The more descriptive I can make a device name, the better the chances that I won’t accidentally remove EIGRP from the core router.
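For the sake of illustration, here’s roughly what that kind of convention looks like when it’s actually applied to a switch.  The hostname, location, and neighbor below are invented examples rather than anything from a real network:

Switch(config)#hostname 3560-Lib-1
3560-Lib-1(config)#snmp-server location Library IDF, Rack 1
3560-Lib-1(config)#interface GigabitEthernet0/1
3560-Lib-1(config-if)#description Uplink to 7K-Core-1 Eth1/1
3560-Lib-1(config-if)#end

Between a descriptive hostname, an SNMP location, and interface descriptions that name the far end, someone logging in from that stuffy hotel room three time zones away has a fighting chance of knowing exactly what they’re touching before they start typing.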

What types of naming conventions do you use?  Are you a dwarf/deity/fictional character type of person?  How about washing the hostname through an MD5 hash tool before applying it?  Maybe you just name it the first thing you see on your desk when you power it up.  I’d be curious to see what your ideas are.

VADD – Video Attention Deficit Disorder

While I was at Cisco Live, I heard a lot about video.  In fact, “video is the new voice” was the center square on my John Chambers Keynote Bingo card.  With the advances that Cisco has been making with the various Jabber clients across all platforms, Cisco really wants to drive home the idea that customers want to use video in every aspect of their life.  This may even be borne out when you think about all the social networks that have been adding video capabilities, such as Facebook or Skype.  Then there’s the new launch of AirTime from the guys that brought you Napster.  AirTime is a social network that is built entirely around video and how you can interact with complete strangers that share your interests.

I started thinking about video and the involvement that it has in everyone’s life today.  It seems that everything has a video-capable camera now.  Mobile phones, tablets, and laptops come standard with them.  They are built into desktop monitors and all-in-one computers.  It seems that video has become ubiquitous.  So too have the programs that we use to display video.  I can remember all the pain and difficulty of trying to set up programs like AIM and Yahoo! Messenger to work with a webcam not all that long ago.  Now we have Skype and FaceTime and Google+ Hangouts.  On the business side we have things like Cisco Jabber for Telepresence (formerly Movi) and WebEx.  I even have a dedicated video endpoint on my desk.  However, the more I thought about it, the more I realized that I hardly use video in my everyday life.  I’ve done maybe two FaceTime calls with my family since my wife and I purchased FaceTime-capable devices last year.  My Skype calls never involve video components.  My WebEx sessions always have video features muted.  Even my EX90 gathers dust most of the time unless it gets dialed to test a larger Telepresence unit.  If video is so great, why does it feel so neglected?

For me, the key came in an article about AirTime.  In the press conference, the founders talked about how social media today consists of “asynchronous communications”.  We leave messages on walls and timelines.  We get email or instant messages when people try to communicate with us that sit there, beckoning to us to respond.  In some cases, we even have voicemail messages or transcriptions thereof that call out for our attention.  The AirTime folks claim that this isn’t a natural method of communication and that video is how we really want to talk to people.  Nuances and body language, not text and typing.  That’s a good and noble goal, but when I thought about how many FaceTime devices are out there and how many people I knew with the capability that weren’t really using it, something didn’t add up.  Why does everyone have access to video and yet not want to use it?  Why do we prefer to stick to things like Twitter timelines or instant messages via our favorite service?

I think it’s because people today have Video Attention Deficit Disorder (VADD).  People don’t like using video because it forces them to focus.  Now that all my communications can happen without direct dependence on someone else, I find my attention drifting to other things.  I can fire off emails or tweets aimed at people I want to communicate with and go on about my other tasks without waiting for an answer.  Think about how easy it is to just say something via instant message versus waiting for a response in real time.  Twitter doesn’t really have awkward silence during a conversation.  Twitter doesn’t require me to maintain eye contact with the person I’m talking to, and that’s assuming I can even figure out whether I’m supposed to be looking at the camera or at the eyes of the projected video image.  When I’m on a video camera, I have to worry about how I look and what I’m doing when I’m not talking to someone.  Every time I watch a Google+ hangout that consists of more than two or three people, I often see the people not directly speaking with wandering attention spans.  They look around the room for something to grab their attention or get distracted by other things.  That’s why asynchronous communication is so appealing.  I can concentrate on my message and not on the way it’s delivered.  In real-time conversations, I often find myself subconsciously thinking about things like making eye contact or concentrating on the discussion instead of letting my focus drift elsewhere.  Sometimes I even miss things because I’m more focused on paying attention than on what I should be paying attention to.  Video conversation is much the same way.  Add in the fact that most conversation takes place on a computer that provides significant distraction and you can see why video is not an easy thing for people like me.


Tom’s Take

I’ve wanted to have a video phone ever since I first watched Blade Runner.  The idea that I could see the person I’m talking to while I converse with them was so far out back then that I couldn’t wait for the future.  Now, that future is here and I find myself neglecting that amazing technology in favor of things like typing out emails and tweets.  I’d much rather spend my time concentrating on the message and not the presentation.  Video calling is a hassle because I can’t hide anymore.  For those that don’t like personal interaction, video is just as bad as being there.  While I don’t deny that video will eventually win out because of all the extra communication nuances that it provides, I doubt that it will be anytime soon.  I figure it will take another generation of kids growing up with video calling being ubiquitous and commonplace for it to see any real traction.  After all, it wasn’t that long ago that the idea of using a mobile phone not tied to a landline was pretty far-fetched.  The generation prior to mine still has issues with fully utilizing those types of devices.  My generation uses them as if they’d always been around.  I figure my kids will one day make fun of me when they try to call me on their fancy video phone and their dad answers with video muted or throws a coat over the camera.  If they really want to talk to me, they can always just email me.  That’s about all the attention I can spare.

Upgrading to Cisco Unified Presence Server 8.6(4) – Caveat Jabber

With the Jabber for Everyone initiative that Cisco has been pushing as of late, the compatibility between the Jabber client and Cisco Unified Presence Server (CUPS) has come into question more than once.  Cisco has been pretty clear on the matter since May – you must upgrade to CUPS 8.6(4) [PDF Link] to take advantage of Jabber for Everyone.  This version was released on June 16th, and being the diligent network engineer that I am, I had already upgraded to 8.6(3) previously.  This week I finally had enough down time to upgrade to 8.6(4) to support Jabber for Everyone as well as some of the newer features of the Jabber client for Windows.  Of course, that’s where all my nightmares started.

I read the release notes and found that 8.6(4) was a refresh upgrade.  I had already done one of these on my CUCM server, so I knew what to expect.  I prepped the upgrade .COP file for download prior to installing the upgrade itself.  Luckily for me, 8.6(3) is the final version prior to the refresh upgrade, so it doesn’t require the upgrade .COP file to perform the upgrade.  The necessary schema extensions and notification fields are already present.  With all of the release note prerequisites satisfied, I fired up my FTP server and began the upgrade process.  As is my standard procedure, I didn’t let the server switch to the new version automatically.  I figured I’d let the upgrade run for a while and reboot afterwards.  After a couple of hours, I ordered the server to reboot and perform the upgrade.  Imagine my surprise when the server came back up with 8.6(4) loaded, but none of the critical services were running.  Instead, the server reported that only backups and restorations were possible.  I was puzzled at this, as the upgrade had appeared to work flawlessly.  After tinkering with things for a bit, I decided to revert my changes and roll back to 8.6(3).  After a quick reboot, the old version came back up.  Only this time, the critical services were stuck in the “starting” state.  It seemed that I was doomed either way.  After I verified the MD5 checksum of the upgrade file, I started the upgrade for the second time.  While I waited for the server to install the second time, I strolled over to the Internet to find out if anyone else was having issues with this particular upgrade.

After some consulting, it turns out that Cisco made a bone-headed mistake with this upgrade.  Normally, one can be certain that any hardware-specific changes will be contained to major version upgrades.  For instance, upgrading from Windows XP to Windows 7 might entail hardware requirement changes, like additional RAM.  Point releases are a little more problematic.  Cisco uses the minor version to denote its bi-annual system releases.  So CUCM 8.0 had a certain set of hardware requirements, but CUCM 8.5 had different ones.  In that particular case, it was a higher RAM requirement.  However, for CUPS 8.6(4), the RAM requirement doubled to 4 GB.  For a sub-minor point release.  Worse yet, this information didn’t actually appear in the release notes themselves.  Instead, I had to stumble across a totally separate page that listed specific hardware requirements for MCS server types.  Even within that page, the particular model of server that I am using (MCS-7825-I3) is listed as compatible (with caveats).  It turns out that any 8.6(x) release is supposed to require at least 4 GB of RAM to function correctly.  Except I was able to install 8.6(3) with no issues on 2 GB of RAM.  Since I knew I was going to need to test 8.6(4), I rummaged around the office until I was able to dig up the required RAM (PC2-5300 ECC, in case you’re curious).  Without the necessary amount of RAM, the server will only function in “bridge mode” for migrations to new hardware.  This means that your data is still intact on the CUPS server, but none of the services will start to begin processing user requests.  At least knowing that might prevent some stress.

For those of you that aren’t lucky enough to have RAM floating around the office and have gotten as far as I did, reverting the server back to 8.6(3) isn’t the easiest thing to do.  It turns out that moving back from 8.6(4) to an earlier 8.6(x) release requires a little intervention.  As found on the Cisco Support Forums, rolling back can only be accomplished by installing the ciscocm.cup.pe_db_install.cop file.  But there are two problems.  First, this file is not available anywhere on Cisco’s website.  The only way you can get your hands on it is to request it from TAC during a support call.  That’s fortunate in a way, because problem number 2 is that the file is unsigned.  That means that it will fail the installation integrity check when you try to install it on the CUPS server.  You have to have TAC remote connect to the server and work some support voodoo to get it working.  Now, I suppose if you have a way to gain root access to a Cisco Telephony OS shell, you could do something like the steps outlined in the forum post (as follows):

Here's what's required to temporarily install unsigned COP files

cd /usr/local/bin/base_scripts
mv SIGNED_FILTER SIGNED_TEMP

Here's what's required in Remote Access to remove the temporary fix

cd /usr/local/bin/base_scripts
mv SIGNED_TEMP SIGNED_FILTER

Note: This is totally unsupported by me.  I’m putting it here for posterity.  Don’t call me if you blow up your server.  Also, I don’t have the TAC .COP file either, so don’t bother asking for it.

That being said, the above instructions should get you back up and running on 8.6(3) until you can buy some RAM from Newegg or your other preferred vendor.


Tom’s Take

Yes, I should have read the release notes a little more closely.  Yes, I should have verified the compatibility before I ran wild with this upgrade.  However, having fallen on my sword for my own mistakes, I think it’s well within my rights to call Cisco out on this one as well.  How do you not put a big, huge, blinking red line in the release notes warning people that they need to check the amount of RAM in the server before performing an upgrade?  You’d figure something like this would be pretty important to know, right?  Worse yet, why do this on a sub-minor point release?  When I install Windows 7 Service Pack 1 or OS X 10.7.4, I feel pretty confident that the system requirements for the original OS version will suffice for the minor service pack.  Why up the hardware requirements for CUPS for what is a minor upgrade at best?  Especially one that you’re driving all your customers to be on to support your big Jabber initiative?  Why not hold off on the requirement until the CUCM 9 system release became final (which happened about a week later)?  If I’m moving from 8.6 to 9.0, I would at least expect a bunch of hardware to be retired and for things to not work correctly when moving to a new, big major version.  From now on, I’m going to be a lot more careful when checking the release notes.  Cisco, you should be a lot more diligent in using the release notes to call attention to things that are important for that release.  The more caveats we know about up front, the less likely we are to jabber about them afterwards.

Fix The Problem, Not The Blame

Courtesy of Zazzle.com

Ethan Banks is really turning out some good blog posts as of late.  His latest one about failure in particular really got me to thinking.  You should head over and read it before you continue.

After I read through Ethan’s post, I started thinking about why people tend to shift responsibility and fire up the “blamethrower” from time to time.  It reminded me of Rising Sun, a movie based on a Michael Crichton book of the same name.  The movie in particular stands out to me because of a quote from Sean Connery:

“The Japanese have a saying: ‘Fix the problem, not the blame.’ Find out what’s [screwed] up and fix it.  Nobody gets blamed.  We’re always after who [screwed] up.  Their way is better.”

This is the kind of thing that leads to people dodging responsibility for failure.  People are so worried about getting blamed for things that they won’t admit to them.  Whether it be for something simple like misspelling someone’s name or something major like crashing the core router, people don’t want to get blamed.  Most of the time, I can’t fault them for that.  Think about what happens when something goes wrong.  More often than not, someone higher up in the organization starts headhunting.  They stalk the halls asking, “Whose fault is this? I want them in my office now!”  How many times have you seen a situation where yelling at the responsible party took precedence over fixing things?  As a VAR providing support to many different types of customers, I can tell you that I’ve witnessed firsthand several occasions where my job couldn’t begin until the responsible parties were dealt with.  Precious seconds and minutes can tick by while blame is appropriately assigned.

Personally, I take the opposite approach to things.  When I find myself in a situation of troubleshooting or solving problems, I make sure that blame is the last thing that is discussed.  When the CxO comes stalking through the office looking for someone to yell at, I always make sure to direct attention away from the people doing the work.  In my mind, the key to any successful problem resolution lies not in assigning blame but in fixing the problem.  After the crisis is over and cooler heads prevail is the time to begin examining causes and discussing resolutions to prevent repeat performances.  The above quote from Rising Sun not only reflects my views about the uselessness of blame in a professional environment but serves to show how useful and refreshing fixing problems can be.  At times, I even assume more blame than necessary if it means moving things along.  My goal as a network engineer is problem resolution, not blame assignment.  That’s not to say that I won’t give someone a stern reprimand if necessary.  I’d just rather not have that happening in the heat of the moment when the network team is trying their best to keep the core from melting into a pile of slag.

To be an effective problem solver, make sure to focus all your efforts on fixing the problems.  By getting all the stakeholders to expend their efforts on the real source of the stress, you will build a reputation that grows into something amazing.  People will talk about your ability to solve any problem.  They’ll comment that you’re cool under pressure and great at motivating people when things are at their worst.  You’ll be known as the person that solves problems quickly and makes sure that your team knows what went wrong to prevent it from happening in the future.  These are all very desirable traits for people in a troubleshooting capacity.  They can all be yours provided you spend your time looking at the real issues and not worrying about those that are generated from them.