
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Gamification Gone Wild


VMware recently launched a new site called Cloud Credibility.  The idea is that you log in and start earning points that you can cash in for rewards such as pens, books, and even a chance to win a trip to VMWorld.  Some of the tasks are simple, like following VMware personalities on Twitter.  Others include leading VMUG sessions or hosting a podcast.  There’s been a lot of backlash in recent days about the verification of these tasks or how downright silly some of them are.  One post from Michael Ducy (@mfdii) went so far as to compare it to Klout.

This isn’t the first site to do something like this.  While Klout may be the most well known, you have to include sites like FourSquare as well.  Tech sites are not immune from this.  Cisco’s support forums have a point-earning component that plays into earning VIP status and they have announced social rewards as well.  Sites like Codecademy award badges for completing certain modules as you learn a programming language.  Even education is starting to get on the bandwagon, as this review from MIT discusses.

The term for this type of thing is gamification.  It specifically refers to the addition of game playing elements in a non-game setting.  Most often, this is expressed via points or achievement badges of some kind.  That’s how Cloud Cred works.  You do a task and you earn 10 points.  You do a bigger task and earn 100 points.  When you get to 500 or 1000, you can cash in those points for a meaningless prize or keep accruing them in hopes of winning something big.  While the currency is all virtual, the effect is quite real.

The only purpose that gamification serves to me is to hook people into staying on the site and pushing toward a lofty goal.  When you see a whitepaper, you may not be inclined to read it unless it’s something interesting to you.  If you see the same whitepaper with a quiz at the end that earns you points toward a USB drive, you might be compelled to read it more closely, if only to learn enough to pass the quiz.  Now, if you make the USB drive cost twice the number of points that the quiz offers, you can make the reader find other things on the site to do to earn those points.  You keep them on your site digging through things if only to keep their point-earning streak going.  Then, you make the plateaus for prizes rewarding in their own right but also give the earners a look at a bigger prize.  Cash in the points you’ve earned on a notebook or a USB drive, but if you earn 10,000 more you can enter in a drawing to win a laptop!
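The loop described above can be sketched as a tiny point economy.  The task names, point values, and prize costs below are invented for illustration; only the mechanic (prizes priced just beyond what any single task earns) comes from the post.

```python
# Hypothetical point economy: every prize costs more than one task pays,
# so the "player" always has a reason to do one more task.
TASKS = {"follow_on_twitter": 10, "read_whitepaper_quiz": 100, "lead_vmug_session": 250}
PRIZES = {"pen": 500, "usb_drive": 1000, "laptop_drawing_entry": 10000}

def tasks_needed(prize: str, task: str) -> int:
    """How many repetitions of a task it takes to afford a prize."""
    cost, earn = PRIZES[prize], TASKS[task]
    return -(-cost // earn)  # ceiling division

# Earning the laptop drawing entry from whitepaper quizzes alone takes
# 100 quizzes, which is the whole point of the design.
print(tasks_needed("laptop_drawing_entry", "read_whitepaper_quiz"))  # 100
```

The asymmetry between earn rate and prize cost is the hook: the gap is never closable with the task you just finished.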

Cloud Cred seems to serve dual purposes right now.  The first is to gain more social discussion of VMware and the technologies around their announced cloud computing initiatives.  The more people talking about what’s going on with VMware and cloud the better.  The second purpose looks to be peer review of whitepapers.  By having people reading over these and taking quizzes or pointing out errata, you raise the collective intelligence of your solutions and technical offerings.  Plus, rather than having to beat people over the head to get them to review the research, you just offer them some meaningless points that they will probably never cash in on tchotchkes that cost the marketing department about $.38 each.


Tom’s Take

I dislike the trend of gamification in technology and education.  Remember, this is coming from a gamer.  When I sit down after a day of working, I fire up my favorite game and play to gain levels and fake money and whatever else the developers have decided I should earn.  When I’m sitting at a desk from eight to five, I don’t want to be subjected to the same kind of rewards.  These things are designed to suck you in and keep you interacting long past the point you otherwise would.  I wasted half an hour earning Cloud Cred just while writing this article.  Every time I would go back to check something, I found myself earning a few more points to try and hit the next tier.  If we’re going to reduce all of our support and technical offerings to the electronic equivalent of rats in a maze hitting the green button for another food pellet then maybe it’s time for us to rethink our strategies.  After all, the reward for work well done is the opportunity to do more.  It shouldn’t be a pen and a flashy star next to your forum name.

VMware Partner Exchange 2013


Having been named a vExpert for 2012, I’ve been trying to find ways to get myself involved with the virtualization community. Besides joining my local VMware Users Group (VMUG), there wasn’t much success. That is, until the end of February. John Mark Troyer (@jtroyer), the godfather of the vExperts, put out a call for people interested in attending the VMware Partner Exchange in Las Vegas. This would be an all-expenses-paid trip from a vendor. Besides going to a presentation and having a one-on-one engagement with them, there were no other restrictions about what could or couldn’t be said. I figured I might as well take the chance to join in the festivities. I threw my name into the hat and was lucky enough to get selected!

Most vendors have two distinctly different conferences throughout the year. One is focused on end-users and customers and usually carries much more technical content. For Cisco, this is Cisco Live. For VMware, this is VMWorld. The other conference revolves around existing partners and resellers. Instead of going over the gory details of vMotions or EIGRP, it instead focuses on market strategies and feature sets. That is what VMware Partner Exchange (VMwarePEX) was all about for me. Rather than seeing CLI and step-by-step config guides to advanced features, I was treated to a lot of talk about differentiation and product placement. This fit right in with my new-ish role at my VAR that is focused toward architecture and less on post-sales technical work.

The sponsoring vendor for my trip was tried-and-true Hewlett Packard. Now, I know I’ve said some things about HP in the past that might not have been taken as glowing endorsements. Still, I wanted to look at what HP had to offer with an open mind. The Converged Application Systems (CAS) team specifically wanted to engage me, along with Damian Karlson (@sixfootdad), Brian Knudtson (@bknudtson), and Chris Wahl (@chriswahl) to observe and comment on what they had to offer. I had never heard of this group inside of HP, which we’ll get into a bit more here in a second.

My first real day at VMwarePEX was a day-long bootcamp from HP that served as an introduction to their product lines and how they place themselves in the market alongside Cisco, Dell, and IBM. I must admit that this was much more focused on sales and marketing than my usual presentation lineup. I found it tough to concentrate on certain pieces as we went along. I’m not knocking the presenters, as they did a great job of keeping the people in the room as focused as possible. The material was…a bit dry. I don’t think there was much that could have helped it. We covered servers, networking, storage, applications, and even management in the six hours we were in the session. I learned a lot about what HP had to offer. Based on my previous experiences, this was a very good thing. Once you feel like someone has missed on your expectations you tend to regard them with a wary eye. HP did a lot to fix my perception problem by showing they were a lot more than some wireless or switching product issues.

Definition: Software

I attended the VMwarePEX keynote on Tuesday to hear all about the “software defined datacenter.” To be honest, I’m really beginning to take umbrage with all this “software defined <something>” terminology being bandied about by every vendor under the sun. I think of it as the Web 2.0 hype of the 2010s. Since VMware doesn’t manufacture a single piece of hardware to my knowledge, of course their view is that software is the real differentiator in the data center. Their message no longer has anything to do with convincing people that cramming twenty servers into one box is a good idea. Instead, they now find themselves in a dog fight with Amazon, Citrix, and Microsoft on all fronts. They may have pioneered the idea of x86 virtualization, but the rest of the contenders are catching up fast (and surpassing them in some cases).

VMware has to spend a lot of their time now showing the vision for where they want to take their software suites. Note that I said “suite,” because VMware’s message at PEX was loud and clear – don’t just sell the hypervisor any more. VMware wants you to go out and sell the operations management and the vCloud suite instead. Gone are the days when someone could just buy a single license for ESX or download ESXi and put it on a lab system to begin a hypervisor build-out. Instead, we now see VMware pushing the whole package from soup to nuts. They want their user base to get comfortable using the ops management tools and various add-ons to the base hypervisor. While the trend may be to stay hypervisor agnostic for the most part, VMware and their competitors realize that if you feel cozy using one set of tools to run your environment, you’ll be more likely to keep going back to them as you expand.

Another piece that VMware is really driving home is the idea of the hybrid cloud. This makes sense when you consider that the biggest public cloud provider out there isn’t exactly VMware-friendly. Amazon has a huge market share among public cloud providers. They offer the ability to convert your VMware workloads to their format. But, there’s no easy way back. According to VMware’s top execs, “When a customer moves a workload to Amazon, they lose. And we lose them forever.” The first part of that statement may be a bit of a stretch, but the second is not. Once a customer moves their data and operations to Amazon, they have no real incentive to bring it back. That’s what VMware is trying to change. They have put out a model that allows a customer to build a private cloud inside their own datacenter and have all the features and functionality that they would have in Reston, VA or any other large data center. However, through the use of magic software, they can “cloudburst” their data to a VMware provider/partner in a public cloud data center to take advantage of processing surplus when needed, such as at tax time or when the NCAA tournament is taxing your servers. That message is also clear to me: Spend your money on in-house clouds first, and burst only if you must. Then, bring it all back until you need to burst again. It’s difficult to say whether or not VMware is going to have a lot of success with this model as the drive toward moving workloads into the public cloud gains momentum.

I also got the chance to sit down with the HP CAS group for about an hour with the other bloggers and talk about some of the things they are doing. The CAS group seems to be focused on taking all the pieces of the puzzle and putting them together for customers. That’s similar to what I do in the VAR space, but HP is trying to do that for their own solutions instead of forcing the customer to pay an integrator to do it. While part of me does worry that other companies doing something similar will eventually lead to the demise of the VAR, I think HP is taking the right tactic in their specific case. HP knows better than anyone else how their systems should play together. By creating a group that can give customers and integrators good reference designs and help us get past the sticky points in installation and configuration, they add a significant amount of value to the equation. I plan to dig into the CAS group a bit more to find out what kind of goodies they have that might make me a better engineer overall.


Tom’s Take

Overall, I think that VMwarePEX is well suited for the market that it’s trying to address. This is an excellent place for solution-focused people to get information and roadmaps for all kinds of products. That being said, I don’t think it’s the place for me. I’m still an old CLI jockey. I don’t feel comfortable in a presentation that has almost no code, no live demos, and not even a glory shot of a GUI tool. It’s a bit like watching a rugby game. Sure, the action is somewhat familiar and I understand the majority of what’s going on. It still feels like something’s just a bit out of place, though. I think the next VMware event that I attend will be VMWorld. With the focus on technical solutions and “nuts and bolts” detail, I think I’ll end up getting more out of it in the long run. I appreciate HP and VMware for taking the time to let me experience Partner Exchange.

Disclaimer

My attendance at VMware Partner Exchange was the result of an all-expenses-paid sponsored trip provided by Hewlett Packard and VMware. My conference attendance, hotel room, meals and incidentals were paid in full. At no time did HP or VMware propose or restrict content to be written on this blog. All opinions and analysis provided herein and on any VMwarePEX-related posts are mine and mine alone.

Every Voice Adds To The Chorus


A long time ago, I was in high school.  I wasn’t a basketball player or an artist.  My school didn’t have a computer club or a chess club.  Instead, I found myself in the choir.  Despite what my futile attempts at karaoke might otherwise indicate, I had a nice bass voice at one point in my life.  However, my school was pretty small.  Our entire choir consisted of about 12 kids.  Because we didn’t have a lot of guys in the group, we didn’t have the opportunity to split the male section into tenors (higher notes) and bass singers (lower notes).  We combined the guys into baritones, which can sing in the middle of the range but don’t usually stray to either extreme.  While this allowed us to sing in competition, we couldn’t really sing complex material written for four-part harmony.  Instead, we were forced to sing three-part harmony at a less difficult level.  It wasn’t until my senior year that we got enough people in the choir to split into four-part harmony and increase both the quality and the difficulty of our songs.  Those extra voices really did make the choir better.

When I was at VMware Partner Exchange in February, I talked to a lot of people about my activity online around social media and blogging.  A lot of people expressed both interest and discouragement at the thought of blogging.  Most of it went something like this:

“I want to blog.  I’ve got some ideas.  But I don’t want to feel obligated to do it.”

If you want to blog or write or even make witty comments, the most important thing to do is to say something.  The biggest bump in the road isn’t finding content to publish.  It’s finding the nerve to publish it in the first place.

Most people want to light the world on fire with a blog.  They want to write that single post that is going to be linked on Slashdot and Reddit and make everyone impressed.  In reality, that’s likely to never happen.  When you write for yourself and not for a “name” like the professional blogging sites, the odds of your posts getting linked to major news aggregators are slim.  In two and a half years of blogging that’s only happened to me twice.  Once was my Meraki story.  The other was when Matt Simmons linked to my post about learning why things work on the sysadmin subreddit.  I’ve never written posts for the purposes of getting linked.  I just write because I have something to say and want to share it.  Other people reading it is just an added bonus.

Bob McCouch wanted to start a blog after becoming a CCIE.  He spent lots of time trying to come up with the perfect name.  I like Herding Packets, which is what he decided on.  At first, I think Bob may have been worried about what he was going to say on his post-CCIE blog.  Some want to use it to further their studies around a specific technology.  Others use it to plan for another big certification.  The point isn’t to write about something specific.  The real point is to get the writing juices flowing.  Here’s hoping that Bob keeps all the good stuff coming.

You don’t have to write about technical stuff all the time.  Staying that focused will eventually lead you to get burned out if you aren’t careful.  I try to keep things light with goofy posts from time to time, like my software release names post.  Stephen Foskett (@SFoskett) writes about random things like hot water heaters and toilets. Jeff Fry (@fryguy_pa) is a huge Disney fan.  They find ways to work their own interests into their writing to show their many facets.  Even within their own blog ecosystems, the very diverse voices they add to their own choral composition make things unique and interesting indeed.  If you ever find yourself in need of a quick post, never overlook the mundane things you do that might be exciting to someone else.

All of the above are excellent examples of how adding new and interesting voices to the overall choir serves to make the music much more enjoyable.  When more voices join into the conversation more time can be spent on analyzing up and coming topics.  The more words dedicated to discussing things like BYOD, SDN, and a thousand other topics, the better they can be understood by everyone.  The music becomes deeper and more meaningful with more voices involved in singing.  We aren’t just limited to the same four or five arrangements (or discussions) and instead can tackle the really tough pieces because of the varied voices.


Tom’s Take

I often say that everyone has at least one good blog post in them.  Once you’ve gotten that out, one often leads to two or three.  Unlike writing book chapters, blog posts are very free form and varied.  Some are like Michael Jackson, fast and lofty.  Others are like Barry White, robust and slow.  They all make music that people enjoy in their own way, and each of their songs adds to the overall variety and beauty of music.  In much the same way, blogging can only get better when people write down their thoughts and publish them for all to see.  Maybe you only want to post once a month.  Maybe you want to try and post every day.  It doesn’t matter if you want to publish short and sweet like a commercial jingle or more long-form like a symphony.  What’s important is making your voice heard.

Is It Time To Remove the VCP Class Requirement?

While I was at VMware Partner Exchange, I attended a keynote address. This in and of itself isn’t a big deal. However, one of the bullet points that came up in the keynote slide deck gave me a bit of pause. VMware is changing some of their VSP and VTSP certifications to be more personal and direct. Being a VCP, this didn’t really impact me a whole lot. But I thought it might be time to tweet out one of my oft-requested changes to the certification program:

Oops. I started getting flooded with mentions. Many were behind me. Still others were vehemently opposed to any changes. They said that dropping the class requirement would devalue the certification. I responded as best I could in many of these cases, but the reply list soon outgrew the words I wanted to write down. After speaking with some people, both officially and unofficially, I figured it was due time I wrote a blog post to cover my thoughts on the matter.

When I took the VMware What’s New class for vSphere 5, I mentioned at the time that I thought the requirement of taking a $3,000US class for a $225 test was a bit silly. I myself took and passed the test based on my experience well before I sat the class. Because my previous VCP was on VMware ESX 3 and not on ESX 4, I still had to sit in the What’s New course before my passing score would be accepted. To this day I still consider that a silly requirement.

I now think I understand why VMware does this. Much of the What’s New and Install, Configure, and Manage (ICM) classes are hands-on lab work. VMware has gone to great lengths to build out the infrastructure necessary to allow students to spend their time practicing the lab exercises in the courses. These labs rival all but the CCIE practice lab pods that I’ve seen. That makes the course very useful to all levels of students. The introductory people that have never really touched VMware get to experience it for real instead of just looking at screenshots in a slide deck. The more experienced users that are sitting the class for certification or perhaps to refresh knowledge get to play around on a live system and polish skills.

The problem comes that investment in lab equipment is expensive. When the CCIE Data Center lab specs were released, Jeff Fry calculated the list price of all the proposed equipment and it was staggering. Now think about doing that yourself. With VMware, you’re going to need a robust server and some software. Trial versions can be used to some degree, but to truly practice advanced features (like storage vMotion or tiering) you’re going to need a full setup. That’s a bit out of reach for most users. VMware addressed this issue by creating their own labs. The user gets access to the labs for the cost of the ICM or What’s New class.

How is VMware recovering the costs of the labs? By charging for the course. Yes, training classes aren’t cheap. You have to rent a room and pay for expenses for your instructor and even catering and food depending on the training center. But $3,000US is a bit much for ICM and What’s New. VMware is using those classes to recover the costs of the lab development and operation. In order to be sure that the costs are recovered in the most timely manner, the metrics need to make sense for class attendance. Given the chance, many test takers won’t go to the training class. They’d rather study from online material like the PDFs on VMware’s site or use less expensive training options like TrainSignal. Faced with the possibility that students may elect to forego the expensive labs, VMware did what they had to do to ensure the labs would get used, and therefore the metrics worked out in their favor – they required the course (and labs) in order to be certified.

For those that say that not taking the class devalues the cert, ask yourself one question. Why does VMware only require the class for new VCPs? Why are VCPs in good standing allowed to take the test with no class requirement and get certified on a new version? If all the value is in the class, then all VCPs should be required to take a What’s New class before they can get upgraded. If the value is truly in the class, no one should be exempt from taking it. For most VCPs, this is not a pleasant thought. Many that I talked to said, “But I’ve already paid to go to the class. Why should I pay again?” This just speaks to my point that the value isn’t in the class, it’s in the knowledge. Besides VMware Education, who cares where people acquire the knowledge and experience? Isn’t a home lab just as good as the ones that VMware built?

Thanks to some awesome posts from people like Nick Marus and his guide to building an ESXi cluster on a Mac Mini, we can now acquire a small lab for very little out-of-pocket. It won’t be enough to test everything, but it should be enough to cover a lot of situations. What VMware needs to do is offer an alternate certification requirement that takes a home lab into account. While there may be ways to game the system, you could require a VMware employee or certified instructor or VCP to sign off on the lab equipment before it will be blessed for the alternate requirement. That should keep it above board for those that want to avoid the class and build their own lab for testing.

The other option would be to offer a more “entry level” certification with a less expensive class requirement that would allow people to get their foot in the door without breaking the bank. Most people see the VCP as the first step in getting VMware certified. Many VMware rock stars can’t get employed in larger companies because they aren’t VCPs. But they can’t get their VCP because they either can’t pay for the course or their employer won’t pay for it. Maybe by introducing a VMware Certified Administration (VCA) certification and class with a smaller barrier to entry, like a course in the $800-$1000US range, VMware can get a lot of entry level people on board with VMware. Then, make the VCA an alternate requirement for becoming a VCP. If the student has already shown the dedication to getting their VCA, VMware won’t need to recoup the costs from them.


Tom’s Take

It’s time to end the VCP class requirement in one form or another. I can name five people off the top of my head that are much better at VMware server administration than I am that don’t have a VCP. I have mine, but only because I convinced my boss to pay for the course. Even when I took the What’s New course to upgrade to a VCP5, I had to pull teeth to get into the last course before the deadline. Employers don’t see the return on investment for a $3,000US class, especially if the person that they are going to send already has the knowledge shared in the class. That barrier to entry is causing VMware to lose out on the visibility that having a lot of VCPs can bring. One can only hope that Microsoft and Citrix don’t beat VMware to the punch by offering low-cost training or alternate certification paths. For those just learning or wanting to take a less expensive route, having a Hyper-V certification in a world of commoditized hypervisors would fit the bill nicely. After that, the reasons for sticking with VMware become less and less important.

Aerohive Is Switching Things Up


I’ve had the good fortune to be involved with Aerohive Networks ever since Wireless Field Day 1.  Since then, I’ve been present for their launch of branch routing.  I’ve also convinced the VAR that I work for to become a partner with them, as I believe that their solutions in the wireless space are of great benefit to my customer base.  It wasn’t long ago that some interesting rumors started popping up.  I noticed that Aerohive started putting out feelers to hire a routing and switching engineer.  There was also a routing and switching class that appeared in the partner training list.  All of these signs pointed to something abuzz on the horizon.

Today, Aerohive is launching a couple of new products.  The first of these is the aforementioned switching line.  Aerohive is taking their expertise in HiveOS and HiveManager and placing it into a rack with 24 cables coming out of it.  The idea behind this came when they analyzed their branch office BR100 and BR200 models and found that a large majority of their remote/branch office customers needed more than the 4 switch ports offered in those models.  Aerohive had an “ah ha” moment and decided that it was time to start making enterprise-grade switches.  The beauty of having a switch offering from a company like Aerohive is that the great management software that is already available for their existing products is now available for wired ports as well.  All of the existing policies that you can create through HiveManager can now be attached to an Aerohive switch port.  The GUI for port role configuration is equally nice:

[Screenshot: HiveManager port role configuration]

In addition, the management dashboard has been extended and expanded to allow for all kinds of information to be pulled out of the network thanks to the visibility that HiveManager has.  You can also customize these views to your heart’s content.  If you frequently find yourself needing to figure out who is monopolizing your precious bandwidth, you’ll be happy with the options available to you.

The first of three switch models, the SR2024, is available today.  It has 24 GigE ports, 8 PoE+ ports, 4 GigE uplinks, and a single power supply.  In the coming months, there will be two additional switches that have full PoE+ capability across 24 and 48 ports, redundant power supplies, and 10 GigE SFP+ uplinks.  For those that might be curious, I asked Abby Strong about the SFPs, and Aerohive will allow you to use just about anyone’s SFPs.  I think that’s a pretty awesome idea.

The other announcement from Aerohive is software based.  A common practice in today’s wireless networks is containing application traffic via multiple SSIDs.  If you’ve got management users as well as end users and guests accessing your network all at once, you’ve undoubtedly created policies that allow them to access information differently.  Perhaps management has unfettered access to sites like Facebook while end users can only access it during break hours.  Guests are able to go where they want but are subject to bandwidth restrictions to prevent them from monopolizing resources.  In the past you would need three different SSIDs to accomplish something like this.  Having a lot of broadcasted SSIDs causes a lot of wireless congestion as well as user confusion and increased attack surface.  If only there were a way to have visibility into the applications that the users are accessing and create policies and actions based on that visibility.

Aerohive is also announcing application visibility in the newest HiveOS and HiveManager updates.  This allows administrators to peer deeply into the applications being used by users on the network and create policies on a per-user basis to allow or restrict them based on various criteria.  These policies follow the user through the network up to and including the branch office.  Later in the year, Aerohive will port these policies to their switching line.  However, when you consider that the majority of the users today are using mobile devices first and foremost, this is where the majority of the visibility needs to be.  Administrators can provide user-based controls and reporting to identify bandwidth hogs and take appropriate action to increase bandwidth for critical applications on the fly.  This allows for the most flexibility for both users and administrators.  In truth, it’s all the nice things about creating site-wide QoS policies without all the ugly wrench turning involved with QoS.  How could you not want that?
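As a rough sketch of what per-user, per-application policy looks like in practice (this is invented pseudologic, not Aerohive’s actual implementation or API; the roles, applications, and rules are all made up), consider a single SSID whose access decisions hinge on who the user is and what they’re doing rather than which network they joined:

```python
# Toy policy engine: one SSID, decisions made per role + application.
from datetime import time

POLICIES = {
    # role: {application: rule}; "*" is a catch-all for the role
    "management": {"facebook": "allow"},
    "employee":   {"facebook": "break_hours_only"},
    "guest":      {"*": "rate_limit_2mbps"},
}

def evaluate(role: str, app: str, now: time) -> str:
    rules = POLICIES.get(role, {})
    rule = rules.get(app, rules.get("*", "deny"))
    if rule == "break_hours_only":
        # Illustrative break window: noon to 1 PM
        return "allow" if time(12, 0) <= now <= time(13, 0) else "deny"
    return rule

print(evaluate("management", "facebook", time(9, 0)))   # allow
print(evaluate("employee", "facebook", time(12, 30)))   # allow
print(evaluate("guest", "youtube", time(9, 0)))         # rate_limit_2mbps
```

The point of the sketch is the shape of the decision: three broadcast SSIDs collapse into one lookup table keyed on identity and application.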


Tom’s Take

Aerohive’s dip into the enterprise switching market isn’t all that shocking.  They seem to be taking a page from Meraki and offering their software platform on a variety of hardware.  This is great for most administrators because once you’ve learned the software interface and policy creation, porting it between wired switch ports and wireless APs is seamless.  That creates an environment focused on solving problems with business decisions, not on problems with configuration guides.  The Aerohive switches are never going to outperform a Nexus 7000 or a Catalyst 4500.  For what they’ve been designed to accomplish in the branch office, however, I think they’ll fit the bill just fine.  And that’s something to be buzzing about.

Disclaimer

Aerohive provided a briefing about the release of these products.  I spoke with Jenni Adair and Abby Strong.  At no time did Aerohive or their representatives ask for any consideration in the writing of this post, nor were they assured of any of the same.  All of the analysis and opinions represented herein are mine and mine alone.

Why Is My SFP Not Working?


It’s 3 am. You’ve just finished installing your new Catalyst switches into the rack and you’re ready to turn them up and complete your cutover. You’ve been fighting for months to get the funding to get these switches so your servers can run at full gigabit speed. You had to cut some corners here and there. You couldn’t buy everything new, so you’re reusing as much of your old infrastructure as possible. Thankfully, the last network guy had the foresight to connect the fiber backbone at gigabit speeds. You turn on your switches and wait for the interminably long ASIC and port tests to complete. As you watch the console spam scroll up on your screen, you catch sight of something that makes your blood run cold:

%GBIC_SECURITY_CRYPT-4-VN_DATA_CRC_ERROR: GBIC in port 65586 has bad crc
%PM-4-ERR_DISABLE: gbic-invalid error detected on Gi1/0/50, putting Gi1/0/50 in err-disable state

Huh?!? Why aren’t my fiber connections coming up? Am I going to have to roll the install back? What is going on here?!?

You will see this error message if you have a third-party SFP inserted into the Catalyst switch. While Cisco (and many others) OEM their SFP transceivers from different companies, they all have a burned-in chip that contains info such as serial number, vendor ID, and security info like a Cyclic Redundancy Check (CRC). If any of this info doesn’t match the database on the switch, the OS will mark the SFP as not supported and disable the port. The fiber connection won’t come up and you’ll find yourself screaming at a terminal window at 3:30 in the morning.
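Cisco doesn’t document exactly how its vendor check works, but as an illustration of the general idea, standard SFP EEPROMs (per SFF-8472) protect the ID fields with a simple additive check code; if a byte changes or doesn’t match, the data is rejected. A minimal Python sketch of that style of check (the vendor data below is invented):

```python
def cc_base(page: bytes) -> int:
    """SFF-8472-style check code: low 8 bits of the sum of bytes 0-62."""
    return sum(page[0:63]) & 0xFF

def eeprom_intact(page: bytes) -> bool:
    """True if the stored check code (byte 63) matches the computed one."""
    return len(page) >= 64 and cc_base(page) == page[63]

# Build a dummy base page with a fictional vendor name (bytes 20-35).
page = bytearray(64)
page[0] = 0x03                      # identifier byte: SFP/SFP+
page[20:36] = b"ACME OPTICS     "   # 16-byte vendor name field, space padded
page[63] = cc_base(page)            # store a valid check code

print(eeprom_intact(bytes(page)))   # -> True
page[25] ^= 0xFF                    # corrupt one vendor-name byte
print(eeprom_intact(bytes(page)))   # -> False: this module gets rejected
```

Cisco’s actual database lookup adds vendor-specific security info on top of this, which is why even a transceiver from the same OEM fails the check without the right programming.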

Why do vendors do this? Some claim it’s vendor lock-in. You are stuck ordering your modules from the vendor at an inflated cost instead of buying them from a different source. Others claim it’s to help TAC troubleshoot the switch better in case of a failure. Still others say that it’s because the manufacturing tolerances on the vendor SFPs are much better than the third-party offerings, even from the same OEM. I don’t have the answer, but I can tell you that Cisco, HP, Dell, and many others do this all the time.

HP is the most curious case that I’ve run into. Their old series A SFP modules (HP calls them mini-GBICs) didn’t even have an HP logo. They bore the information from Finisar, an electronics OEM. The above scenario happened to me when I traded out a couple of HP 2848 switches for some newer 2610s. The fiber ports locked up solid and would not come alive for anything. I ended up putting the old switches back in place as glorified fiber media converters until I figured out that new SFPs were needed. While not horribly expensive, it did add a non-trivial cost to my project, not to mention all the extra hours of troubleshooting and banging my head against a wall.

Cisco has an undocumented and totally unsupported solution to this problem. Once you start getting the console spam from above, just enter these commands:

service unsupported-transceiver
no errdisable detect cause gbic-invalid

These commands are both hidden, so you can’t ? them. When you enter the first command, you get the Ominous Warning Message of Doom:

Warning: When Cisco determines that a fault or defect can be traced to the use of third-party transceivers installed by a customer or reseller, then, at Cisco’s discretion, Cisco may withhold support under warranty or a Cisco support program. In the course of providing support for a Cisco networking product Cisco may require that the end user install Cisco transceivers if Cisco determines that removing third-party parts will assist Cisco in diagnosing the cause of a support issue.

It goes without saying that calling TAC with a non-Cisco SFP in the slot is going to get you an immediate punt or request to remove said offending SFP. You’ll likely argue that you know the issue isn’t with the SFP that was working just fine an hour ago. They will counter with not being able to support non-Cisco gear. You’ll complain that removing the SFP will create additional connectivity issues and eventually you’ll hang up in frustration. So, don’t call TAC if you use this command. In fact, I would counsel that you should only use this command as a short term band-aid to get you out of the data center at 3 am so you can order genuine SFPs the next morning. Sadly, I also know how budgets work and how likely you are to get several hundred dollars of extra equipment you “forgot” to order. So caveat implementor.
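For reference, the whole short-term workaround runs like this (using port Gi1/0/50 from the error above; your port will differ). The shutdown/no shutdown bounce is what actually recovers a port already sitting in err-disable:

```
Switch# configure terminal
Switch(config)# service unsupported-transceiver
Switch(config)# no errdisable detect cause gbic-invalid
Switch(config)# interface GigabitEthernet1/0/50
Switch(config-if)# shutdown
Switch(config-if)# no shutdown
Switch(config-if)# end
```

Remember that none of this survives a TAC case: it just buys you enough uptime to order the genuine modules.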

CAS – Catchy Acronym Syndrome

If you work in the technology industry, you know the pain of acronyms.  It seems like every tech term sooner or later devolves into a jumble of letters.  For some of the longer tech terms, I don’t mind this.  I can even understand if the acronym forms a word naturally, like RIP or RAID.  What I do have a problem with is the growing trend to name something with a very unwieldy moniker solely for the purpose of giving it a cool acronym.  It’s so pervasive that I’ve given this trend its own acronym – Catchy Acronym Syndrome (CAS).

You may find yourself suffering from CAS if you go out of your way to name your product after you’ve decided on the acronym for it.  If you’ve never referred to your product or protocol by its full name you may also be guilty of CAS.  Yes, for me this means that RAdio Detection and Ranging (RADAR) and Light Amplification by Stimulated Emission of Radiation (LASER) are prime examples of CAS.  Let’s look at a few of my favorite offenders:

RFCs

SIMPLE – SIP for Instant Messaging and Presence Leveraging Extensions.  Why all those extra words?  SIP IM and Presence (SIMP) would have worked too.

RFC 6837 – NERD: Not-so-novel Endpoint ID (EID) to Routing Locator (RLOC) Database – Your acronym contains two other acronyms that are both almost as long as the one you created.  You’re not only guilty of CAS, you are the poster child for it.

RADIUS – Remote Authentication Dial-In User Service.  I’m including this one because it’s obvious to me that the original intent was to create a word first.  Given the fact that the successor protocol is called Diameter, which itself isn’t an acronym for anything and is a play on words with RADIUS, you can see how this made the list.

Business Units and Other Business Terms

Cisco High-End Routing and Optical – HERO.  I’m sure they had no ulterior motive for that one.

CARAT – Customer And Role Attribute Tracking.  Just keep sticking words in there until it makes a word.

PARTNER – Processing Automated Receivables Transactions and E-Routing.  The longest offender I could find.

(more Cisco-specific offenders here)

This practice also exists today for the purposes of media exposure.  Take Advanced Persistent Threat (APT).  What does this term actually tell you?  It’s a very complicated idea, often involving multiple attack vectors and exploits being used all at once.  Why such a simplistic acronym then?  Because the basic non-computer user reading the news can’t grasp a Persistent Attack and Theft Program, but they can get APT because it’s catchy.  Now, we’re developing acronyms like Advanced Volatile Threat (AVT) that don’t add any additional information beyond APT, but the new ones have to look similar to APT or regular people won’t understand they are security related.  When the entire purpose for making an acronym isn’t description and instead serves to link your idea to another idea or ride on another acronym’s coattails, you’ve fallen victim to CAS.


Tom’s Take

People who get started in technology hate the huge number of acronyms that must be learned.  It doesn’t help that people today seem to be more intent on creating protocols solely because they want to have a cool acronym.  I’ve made fun of acronyms for things like the Disaster Recovery Tool (DiRT) for years, but that was never an officially sanctioned acronym.  I’m sure it was more frustration from people who used it and wanted to sully the name a bit.  I get more and more irritated when the list of new RFCs comes out and some hotshot programmer named his proposal NERD or GEEK simply so he could use these common words to refer to a complex idea.  Gone are the days of descriptive names like RIP and RAID and DSLAM.  Instead, we have to deal with people trying to be catchy.  If you spend more time writing your protocol and less time trying to name it, you might not have to worry about being catchy.

Data Never Lies

lies

If you’ve been watching the media in the last couple of weeks, you’ve probably seen the spat that has developed between John Broder of the New York Times and Elon Musk of Tesla Motors.  Broder took a Tesla Model S sedan on a test drive from New Jersey to Connecticut to test out the theory that the new supercharger stations that have been installed along the way would help electric cars to take long road trips without fear of running out of electricity.  Along the way, he ran into some difficulty and ultimately needed to have the car towed to a charging station.  After the story came out, Elon Musk immediately defended his product and promised data to back up his position.  A couple of days later, he put up a long post on the Tesla blog with lots of charts, claiming that the Model S had lots of data to support longer driving distances, failure to fully charge at supercharger stations, and even that Broder was driving in circles in a parking lot.  After this post, Broder responded with another post of his own clarifying the rebuttal made by Musk and reaffirming how the test was carried out.  It’s certainly made for some interesting press releases and blog posts.  There has also been a greater discussion about how we present facts and data in a case to support our argument or prove the other party wrong.

Data Doesn’t Lie

If nothing else, Elon Musk did the right thing by attaching all manner of charts and graphs to his blog post.  He provided data (albeit collated and indexed) from the vehicle that gave a more precise picture of what went on than the recollection of a reporter who admittedly didn’t remember what he did or didn’t do during portions of the test drive.  Data never lies.  It’s a collection of facts and information that tells a single story.  If x equals 7, there’s no other value that x could be.  However, the failing in data usually doesn’t come from the data itself.  It comes from interpretation.

Data Doesn’t Lie.  People Do.

The problem with the Elon Musk post is that he used the data to support his assertion that Broder did things like taking a long detour through Manhattan and driving in circles for half a mile in a parking lot in an attempt to force the car to completely discharge its battery.  This is the part where the narrative starts to break down and where most critics are starting their analysis.  Musk was right to include the data.  However, the analysis he offers is a bit wild.  Does rapid acceleration and deceleration over a short span of distance mean Broder was driving in circles attempting to drain the car?  Or was he lost in the dark, trying to find the charging station in the middle of the night like he claims in his rebuttal?  The data can only tell us what the car did.  It can’t explain the intentions of someone that wasn’t being monitored by sensors.

Let The Data Do The Talking

How does this situation apply to us in the networking/virtualization/IT world?  We find ourselves adrift in a sea of data.  We have protocols providing us status information and feeding us statistics around the clock.  We have systems that will correlate that data and provide a big picture.  We have systems to aggregate the correlated data and sort it into action items and critical alert levels.  With all this data, it’s very easy for us to make assumptions about what we see.  The human brain wants to make patterns out of what we see in front of us.  The problem comes when the conclusion we reach is incorrect.  We may have a preconceived notion of what we want the data to say.  Sometimes it’s confirmation bias.  Other times it’s reporting bias.  We come to incorrect conclusions because we keep trying to make the data tell our story instead of listening to what the data tells us.  Elon Musk wanted the data to tell him (and us) that his car worked just fine and that the driver must have had some ulterior motive.  John Broder used the same data to support that while his recollection of some finer details wasn’t accurate in the original article, he harbored no malice during his test.  The data didn’t lie in either case.  We just have to decide whose story is more accurate.

Tom’s Take

The smartest thing that you can do when providing network data or server statistics is leave your opinion out of it.  I make it a habit to give all the data I can to the person requesting it before I ever open my mouth.  Sure, people pay me to look at all that information and make sense of it.  Yes, I’ve been biased in my conclusions before.  I realize that I’m nowhere near neutral in many of my interpretations, whether it be defending the actions of myself or my team or using the data to support the correctness of a customer’s assumptions.  The key to preventing a back-and-forth argument is to simply let the data do all the talking for you.  If the data never lies, it can’t possibly lose the argument.  Let the data help you.  Don’t make the data do your dirty work for you.

Restricted CUCM – Rated R

R-rated
If you’ve gone to download Cisco Unified Communications Manager (CUCM) software any time in the past couple of years, you’ve probably found yourself momentarily befuddled by the option to download one of two different versions – Restricted and Unrestricted.  On the surface, without any research, you might be tempted to jump into the Unrestricted version.  After all, no restrictions is usually a good thing, right?  In this case, that’s not what you want to do.  In fact, it could cause more problems than you think it might solve.

Prior to version 7.1(5), CUCM was an export-restricted product.  Why would the government care about exporting a phone system?  The offending piece of code is in fact the media and signaling encryption that CUCM can provide in a secure RTP (SRTP) implementation.  Encryption has always been a very tightly controlled subject.  Cryptography developed rapidly during World War II, and the government was determined to regulate its use afterwards.  Normally, technology export is something that is controlled by the U.S. Department of Commerce.  However, since almost all applications for cryptography were military in nature, it was classified as a munition and therefore subject to regulation via the State Department.  And regulate it they did.  They decreed that no strong encryption software would be available to be exported out of the country without a hearing and investigation.  This usually meant that companies created “international versions” that contained the maximum strength encryption key that could be exported without a hearing – 40 bits.  This affected many programs in the early days of the Internet Age, such as Internet Explorer, Netscape Navigator, and even Windows itself.

In 1996, President Bill Clinton signed an order permitting cryptography software export rulings to be transferred to the Department of Commerce.  In fact, the order said that software was no longer to be treated as “technology” for the purposes of determining restrictions for export.  The Department of Commerce decided in 2000 to create new rules governing the export of strong encryption.  These new rules were quite permissive and have allowed encryption technology to flourish all over the world.  There are still a few countries on the export restriction list, such as those classified as terrorist or rogue states by the U.S. Government.  These countries may not receive strong encryption software.  In addition, even those countries that can receive such software are subject to inspection at any time by the U.S. Department of Commerce to ensure that the software is being used in line with the originally licensed purpose.  When you think of how many companies today have a multi-national presence, this could be a nightmare for regulatory compliance.

Cisco decided in CUCM 7.1(5) to create a version of software that eliminated the media and signaling encryption for voice traffic in an effort to avoid the need to police export destinations and avoid spot audits for CUCM software.  These Export Unrestricted versions are developed in parallel with other CUCM versions so all users can have the same functionality no matter their location.  CUCM Unrestricted versions do have a price when you install them, however.  Once you have upgraded a cluster to an Unrestricted version of CUCM, you can never go back to a Restricted (High Encryption) version.  You can’t migrate or insert any Restricted servers into the cluster.  The only way to go back is to blow everything away and reload from scratch.  Hence the reason you want to be very careful before you install the software.

If you’ve been running CUCM prior to version 7.1(5), you are running the Restricted version.  Unless you find yourself in a scenario where you need to install CUCM in a country that has Department of Commerce export restrictions or has some sort of import restriction on software (Russia is specifically called out in the Cisco release notes), you should stay on the Restricted version of CUCM.  There’s no real compelling reason for you to switch.  The cost is the same.  The licensing model is the same.  The only things you lose are the media encryption and the ability to ever upgrade to a Restricted version.  Just like when going to the movies, all the good stuff is in the R-rated version.


Tom’s Take

I still get confused by the Restricted vs. Unrestricted thing from time to time.  Cisco needs to do a better job of explaining it on the download page.  I occasionally see references to the Unrestricted version being for places like Russia, but those warnings aren’t consistent between point releases, let alone minor upgrades and major versions.  I think Cisco is trying to do the right thing by making this software as available to everyone in the world as they can.  With the rise of highly encrypted communications being used to launch things like command and control networks for massive botnets and distributed denial of service campaigns, I don’t doubt that we’ll see more restriction on cryptography and encryption coming sooner or later.  Until that time, we’ll just have to ensure we download the right version of CUCM to install on our servers.

Dell and the Home Gym

DellFlex

Unless you’ve been living under a very cozy rock for the last couple of weeks, you’ve heard that Michael Dell jumped in and bought his company back with the help of Microsoft and Silver Lake Capital.  There’s more than a fair amount of buzz surrounding this leveraged buyout.  What is Michael planning on doing with his company?  Why did he suddenly want to take it private?  What stake does Microsoft have in all of this?  I think Michael Dell felt it was time to hit the home gym, so to speak.

There’s usually a large influx of people into health clubs and gymnasiums around the first of the year.  These people made a resolution to get fit and decided to go out to do it.  They probably wanted to get out of the house after being cooped up during the holidays.  Maybe they wanted to go somewhere with a treadmill or a weight bench.  Perhaps they felt the only way that they could get fit was by being around other people that motivated them to get things done.  They all have their reasons.  There does exist a subset of the population that doesn’t go to the gym for various reasons.  In this case, I’m focusing on those that don’t like having the spotlight shined on them.  They’re either afraid that they’ll look foolish in public just starting out with a workout program or scared that others will judge them for their form or exercise choices.  They’d rather follow a personalized workout program or spend the money to buy some of the equipment and set up their own gym in their garage.  They may work twice as hard in the comfort and protection of their own home to get fit.  These are the kinds of people that you don’t see for six or seven months only to run into them one day and say “Wow!  Look at you!”

To extend this metaphor to the market, Dell is in need of shaping up.  Whether they’ve acquired too many companies or their margins are getting slammed by the shift away from PCs, the fact is that Michael Dell has decided to make some changes.  However, he doesn’t want that to happen in the public health club, or stock market in this case.  Every action will be scrutinized.  Every decision will be debated by investors and talking heads on CNBC and CNN.  They will deride Dell for strategy mistakes and wonder why they made the decisions they did.  Doubt and uncertainty of direction will squeeze the life from Michael Dell’s baby.  If you don’t believe something like that could happen, why don’t you ask Meg Whitman what she thinks about the market right now?

Dell has decided to buy some home gym equipment and get fit in the privacy and comfort of their own home.  This isn’t a cheap solution by any stretch of the imagination.  Michael Dell put up a lot of his own money.  He’s borrowed from others and put his reputation and livelihood on the line.  He’s done this because he feels that he knows how to get fit.  He doesn’t need the gym rats sitting around critiquing his form and telling him he needs to do more squats.  He wants to take the time to concentrate on the “exercises” he feels are most important in order to come out looking like he wants.  Maybe that involves staff reductions or spin offs.  At this point, no one really knows.  What can be certain is that no one will know until Dell wants them to know.  No investor speculation or outside interference will drive Michael Dell to do something he doesn’t want to do.  Better still, those same dynamics won’t have an opportunity to force him out like the CEO of Chesapeake Energy or the last two CEOs of HP.  He’s only going to quit this new fitness regimen when he decides he’s done.  As for the Microsoft question?  They basically provided a treadmill for Dell’s home gym with their investment.  That way, no matter what else Michael Dell decides to work out with, Microsoft is sure he’ll be running on their treadmill for his workouts.

You can’t help but applaud Michael Dell for wanting to fix things.  He’s certainly started a firestorm among his current investors, but I think he genuinely believes he can right the ship here.  Granted, he’s known as a very private person.  That means he doesn’t want to air his business in public if he can help it.  That, to me, is the driving motivation behind the buyout.  He wants to fix things privately and come back out on the other side a stronger, better company.  He wants people to say “Wow!” when he’s finished and compliment him on his new physique.  Once he’s put in all the hard work, I can assure you that you’ll see more of Dell’s new look in public.