CAS – Catchy Acronym Syndrome

If you work in the technology industry, you know the pain of acronyms.  It seems like every tech term sooner or later devolves into a jumble of letters.  For some of the longer tech terms, I don’t mind this.  I can even understand if the acronym forms a word naturally, like RIP or RAID.  What I do have a problem with is the growing trend of naming something with a very unwieldy moniker solely for the purpose of giving it a cool acronym.  It’s so pervasive that I’ve given this trend its own acronym – Catchy Acronym Syndrome (CAS).

You may find yourself suffering from CAS if you go out of your way to name your product after you’ve decided on the acronym for it.  If you’ve never referred to your product or protocol by its full name you may also be guilty of CAS.  Yes, for me this means that RAdio Detection and Ranging (RADAR) and Light Amplification by Stimulated Emission of Radiation (LASER) are prime examples of CAS.  Let’s look at a few of my favorite offenders:

RFCs

SIMPLE – SIP for Instant Messaging and Presence Leveraging Extensions.  Why all those extra words?  SIP IM and Presence (SIMP) would have worked too.

RFC 6837 – NERD: Not-so-novel Endpoint ID (EID) to Routing Locator (RLOC) Database – Your acronym contains two other acronyms that are both almost as long as the one you created.  You’re not only guilty of CAS, you are the poster child for it.

RADIUS – Remote Authentication Dial-In User Service.  I’m including this one because it’s obvious to me that the original intent was to create a word first.  Given that the successor protocol is called Diameter, which isn’t an acronym for anything and is itself a play on the word RADIUS, you can see how this made the list.

Business Units and Other Business Terms

Cisco High-End Routing and Optical – HERO.  I’m sure they had no ulterior motive for that one.

CARAT – Customer And Role Attribute Tracking.  Just keep sticking words in there until it makes a word.

PARTNER – Processing Automated Receivables Transactions and E-Routing.  The longest offender I could find.

(more Cisco-specific offenders here)

This practice also exists today for the purposes of media exposure.  Take Advanced Persistent Threat (APT).  What does this term actually tell you?  It’s a very complicated idea, sometimes with multiple attack vectors and exploits being used all at once.  Why such a simplistic acronym then?  Because the basic non-computer user reading the news can’t grasp a Persistent Attack and Theft Program, but they can get APT because it’s catchy.  Now we’re developing acronyms like Advanced Volatile Threat (AVT) that don’t add any information beyond APT, but the new ones have to look similar to APT or regular people won’t understand they are security related.  When the entire purpose of making an acronym isn’t description, and instead serves to link your idea to another idea or ride on another acronym’s coattails, you’ve succumbed to CAS.


Tom’s Take

People who get started in technology hate the huge number of acronyms that must be learned.  It doesn’t help that people today seem more intent on creating protocols solely because they want to have a cool acronym.  I’ve made fun of acronyms for things like the Disaster Recovery Tool (DiRT) for years, but that was never an officially sanctioned acronym.  I’m sure it was more frustration from people who used it and wanted to sully the name a bit.  I get more and more irritated when the list of new RFCs comes out and some hotshot programmer has named his proposal NERD or GEEK simply so he could use these common words to refer to a complex idea.  Gone are the days of descriptive names like RIP and RAID and DSLAM.  Instead, we have to deal with people trying to be catchy.  If you spend more time writing your protocol and less time trying to name it, you might not have to worry about being catchy.

Data Never Lies


If you’ve been watching the media in the last couple of weeks, you’ve probably seen the spat that has developed between John Broder of the New York Times and Elon Musk of Tesla Motors.  Broder took a Tesla Model S sedan on a test drive from New Jersey to Connecticut to test the theory that the new supercharger stations installed along the way would let electric cars take long road trips without fear of running out of electricity.  Along the way, he ran into some difficulty and ultimately needed to have the car towed to a charging station.  After the story came out, Elon Musk immediately defended his product with a promise of data to back up that defense.  A couple of days later, he put up a long post on the Tesla blog with lots of charts, claiming that the Model S data showed longer driving distances, a failure to fully charge at supercharger stations, and even that Broder had driven in circles in a parking lot.  After this post, Broder responded with a post of his own addressing Musk’s rebuttal and reaffirming how the test was carried out.  It’s certainly made for some interesting press releases and blog posts.  There has also been a greater discussion about how we present facts and data to support our argument or prove the other party wrong.

Data Doesn’t Lie

If nothing else, Elon Musk did the right thing by attaching all manner of charts and graphs to his blog post.  He provided data (albeit collated and indexed) from the vehicle that gave a more precise picture of what went on than the recollection of a reporter who admittedly didn’t remember what he did or didn’t do during portions of the test drive.  Data never lies.  It’s a collection of facts and information that tells a single story.  If 3 + 4 equals 7, there’s nothing else it could be.  However, the failing in data usually doesn’t come from the data itself.  It comes from interpretation.

Data Doesn’t Lie.  People Do.

The problem with the Elon Musk post is that he used the data to support his assertion that Broder did things like taking a long detour through Manhattan and driving in circles for half a mile in a parking lot in an attempt to force the car to completely discharge its battery.  This is the part where the narrative starts to break down and where most critics are starting their analysis.  Musk was right to include the data.  However, the analysis he offers is a bit wild.  Does rapid acceleration and deceleration over a short span of distance mean Broder was driving in circles trying to drain the car?  Or was he lost in the dark, trying to find the charging station in the middle of the night, as he claims in his rebuttal?  The data can only tell us what the car did.  It can’t explain the intentions of someone who wasn’t being monitored by sensors.
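To make that concrete, here’s a toy version of the kind of telemetry Tesla published.  The speed log below is invented for illustration; integrating it tells you how far the car traveled, and nothing more:

```python
# Hypothetical speed log (mph, one sample per minute), invented for illustration.
speeds_mph = [5, 12, 3, 9, 0, 7, 4, 11, 2, 6]

# Each one-minute sample covers 1/60 of an hour, so summing speed * time
# over the samples gives distance in miles.
distance_miles = sum(s / 60 for s in speeds_mph)
print(f"Distance covered: {distance_miles:.2f} miles")

# The log pins down WHAT the car did (about a mile of stop-and-go driving).
# Nothing in it distinguishes "circling to drain the battery" from
# "hunting for an unlit charging station" -- that part is interpretation.
```

The sensors record motion; the motive has to be supplied by a human, and that’s exactly where both sides of this argument diverge.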

Let The Data Do The Talking

How does this situation apply to us in the networking/virtualization/IT world?  We find ourselves adrift in a sea of data.  We have protocols providing us status information and feeding us statistics around the clock.  We have systems that will correlate that data and provide a big picture.  We have systems to aggregate the correlated data and sort it into action items and critical alert levels.  With all this data, it’s very easy for us to make assumptions about what we see.  The human brain wants to find patterns in what’s in front of us.  The problem comes when the conclusion we reach is incorrect.  We may have a preconceived notion of what we want the data to say.  Sometimes it’s confirmation bias.  Other times it’s reporting bias.  We come to incorrect conclusions because we keep trying to make the data tell our story instead of listening to what the data tells us.  Elon Musk wanted the data to tell him (and us) that his car worked just fine and that the driver must have had some ulterior motive.  John Broder used the same data to show that while his recollection of some finer details wasn’t accurate in the original article, he harbored no malice during his test.  The data didn’t lie in either case.  We just have to decide whose story is more accurate.

Tom’s Take

The smartest thing that you can do when providing network data or server statistics is leave your opinion out of it.  I make it a habit to give all the data I can to the person requesting it before I ever open my mouth.  Sure, people pay me to look at all that information and make sense of it.  Yes, I’ve been biased in my conclusions before.  I realize that I’m nowhere near neutral in many of my interpretations, whether I’m defending the actions of my team or myself, or using the data to confirm a customer’s assumptions.  The key to preventing a back-and-forth argument is to simply let the data do all the talking for you.  If the data never lies, it can’t possibly lose the argument.  Let the data help you.  Don’t make the data do your dirty work for you.

Restricted CUCM – Rated R

If you’ve gone to download Cisco Unified Communications Manager (CUCM) software any time in the past couple of years, you’ve probably found yourself momentarily befuddled by the option to download one of two different versions – Restricted and Unrestricted.  On the surface, without any research, you might be tempted to jump into the Unrestricted version.  After all, no restrictions is usually a good thing, right?  In this case, that’s not what you want to do.  In fact, it could cause more problems than it solves.

Prior to version 7.1(5), CUCM was an export-restricted product.  Why would the government care about exporting a phone system?  The offending piece of code is the media and signaling encryption that CUCM can provide in a secure RTP (SRTP) implementation.  Encryption has always been a tightly controlled subject.  After cryptography development accelerated during World War II, the government wanted to keep regulating its use afterwards.  Normally, technology export is controlled by the U.S. Department of Commerce.  However, since almost all applications of cryptography were military in nature, it was classified as a munition and therefore subject to regulation by the State Department.  And regulate it they did.  They decreed that no strong encryption software could be exported out of the country without a hearing and investigation.  This usually meant that companies created “international versions” that contained the maximum strength encryption key that could be exported without a hearing – 40 bits.  This affected many programs in the early days of the Internet Age, such as Internet Explorer, Netscape Navigator, and even Windows itself.
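To see why that 40-bit ceiling amounted to a deliberately weak “international version,” the keyspace arithmetic alone tells the story.  A quick sketch, where the billion-keys-per-second sweep rate is a hypothetical modern figure rather than anything a 1990s attacker had:

```python
# Back-of-the-envelope look at the 40-bit export ceiling.
# The sweep rate below is a hypothetical modern figure, not a 1990s one.
export_bits = 40
strong_bits = 128

export_keys = 2 ** export_bits   # 1,099,511,627,776 possible keys
strong_keys = 2 ** strong_bits

rate = 10 ** 9                   # assume a billion key trials per second
minutes_to_sweep = export_keys / rate / 60

print(f"40-bit keyspace: {export_keys:,} keys")
print(f"Exhaustive search at 1e9 keys/s: about {minutes_to_sweep:.0f} minutes")
print(f"A 128-bit keyspace is 2^{strong_bits - export_bits} times larger")
```

Every extra bit doubles the work, so the jump from 40 to 128 bits isn’t three times harder, it’s 2^88 times harder.  That gap is the entire difference between “export grade” and “strong” encryption.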

In 1996, President Bill Clinton signed an order transferring cryptography software export rulings to the Department of Commerce.  In fact, the order said that software was no longer to be treated as “technology” for the purposes of determining export restrictions.  The Department of Commerce decided in 2000 to create new rules governing the export of strong encryption.  These new rules were very permissive and have allowed encryption technology to flourish all over the world.  There are still a few countries on the export restriction list, such as those the U.S. Government classifies as terrorist or rogue states.  These countries may not receive strong encryption software.  In addition, even the countries that can receive such software are subject to inspection at any time by the U.S. Department of Commerce to ensure that the software is being used in line with the originally licensed purpose.  When you think of how many companies today have a multi-national presence, this could be a nightmare for regulatory compliance.

Cisco decided in CUCM 7.1(5) to create a version of software that eliminated the media and signaling encryption for voice traffic in an effort to avoid the need to police export destinations and avoid spot audits for CUCM software.  These Export Unrestricted versions are developed in parallel with other CUCM versions so all users can have the same functionality no matter their location.  CUCM Unrestricted versions do have a price when you install them, however.  Once you have upgraded a cluster to an Unrestricted version of CUCM, you can never go back to a Restricted (High Encryption) version.  You can’t migrate or insert any Restricted servers into the cluster.  The only way to go back is to blow everything away and reload from scratch.  Hence the reason you want to be very careful before you install the software.

If you’ve been running CUCM prior to version 7.1(5), you are running the Restricted version.  Unless you find yourself in a scenario where you need to install CUCM in a country that has Department of Commerce export restrictions or some sort of import restriction on software (Russia is specifically called out in the Cisco release notes), you should stay on the Restricted version of CUCM.  There’s no real compelling reason for you to switch.  The cost is the same.  The licensing model is the same.  The only things you lose are the media encryption and the ability to ever upgrade to a Restricted version again.  Just like when going to the movies, all the good stuff is in the R-rated version.


Tom’s Take

I still get confused by the Restricted vs. Unrestricted thing from time to time.  Cisco needs to do a better job of explaining it on the download page.  I occasionally see references to the Unrestricted version being for places like Russia, but those warnings aren’t consistent between point releases, let alone minor upgrades and major versions.  I think Cisco is trying to do the right thing by making this software as available to everyone in the world as they can.  With the rise of highly encrypted communications being used to run things like command and control networks for massive botnets and distributed denial of service campaigns, I don’t doubt that we’ll see more restrictions on cryptography and encryption sooner or later.  Until that time, we’ll just have to ensure we download the right version of CUCM to install on our servers.

Dell and the Home Gym


Unless you’ve been living under a very cozy rock for the last couple of weeks, you’ve heard that Michael Dell jumped in and bought his company back with the help of Microsoft and Silver Lake.  There’s more than a fair amount of buzz surrounding this leveraged buyout.  What is Michael planning to do with his company?  Why did he suddenly want to take it private?  What stake does Microsoft have in all of this?  I think Michael Dell felt it was time to hit the home gym, so to speak.

There’s usually a large influx of people into health clubs and gymnasiums around the first of the year.  These people made a resolution to get fit and decided to go out and do it.  They probably wanted to get out of the house after being cooped up during the holidays.  Maybe they wanted to go somewhere with a treadmill or a weight bench.  Perhaps they felt the only way they could get fit was by being around other people who motivated them to get things done.  They all have their reasons.  There also exists a subset of the population that doesn’t go to the gym, for various reasons.  In this case, I’m focusing on those who don’t like having the spotlight shined on them.  They’re either afraid that they’ll look foolish in public just starting out with a workout program, or they’re scared that others will judge them for their form or exercise choices.  They’d rather follow a personalized workout program at home or spend the money to buy some equipment and set up their own gym in the garage.  They may work twice as hard in the comfort and protection of their own home to get fit.  These are the kinds of people you don’t see for six or seven months, only to run into them one day and say, “Wow!  Look at you!”

To extend this metaphor to the market, Dell is in need of shaping up.  Whether they’ve acquired too many companies or their margins are getting slammed by the shift away from PCs, the fact is that Michael Dell has decided to make some changes.  However, he doesn’t want that to happen in the public health club, or stock market in this case.  Every action will be scrutinized.  Every decision will be debated by investors and talking heads on CNBC and CNN.  They will deride Dell for strategy mistakes and wonder why they made the decisions they did.  Doubt and uncertainty of direction will squeeze the life from Michael Dell’s baby.  If you don’t believe something like that could happen, why don’t you ask Meg Whitman what she thinks about the market right now?

Dell has decided to buy some home gym equipment and get fit in the privacy and comfort of their own home.  This isn’t a cheap solution by any stretch of the imagination.  Michael Dell put up a lot of his own money.  He’s borrowed from others and put his reputation and livelihood on the line.  He’s done this because he feels that he knows how to get fit.  He doesn’t need the gym rats sitting around critiquing his form and telling him he needs to do more squats.  He wants to take the time to concentrate on the “exercises” he feels are most important in order to come out looking the way he wants.  Maybe that involves staff reductions or spin-offs.  At this point, no one really knows.  What can be certain is that no one will know until Dell wants them to know.  No investor speculation or outside interference will drive Michael Dell to do something he doesn’t want to do.  Better still, those same dynamics won’t have an opportunity to force him out like the CEO of Chesapeake Energy or the last two CEOs of HP.  He’s only going to quit this new fitness regimen when he decides he’s done.  As for the Microsoft question?  They basically provided a treadmill for Dell’s home gym with their investment.  That way, no matter what else Michael Dell decides to work out with, Microsoft is sure he’ll be running on their treadmill for his workouts.

You can’t help but applaud Michael Dell for wanting to fix things.  He’s certainly started a firestorm among his current investors, but I think he genuinely believes he can right the ship here.  Granted, he’s known as a very private person.  That means he doesn’t want to air his business in public if he can help it.  That, to me, is the driving motivation behind the buyout.  He wants to fix things privately and come back out on the other side a stronger, better company.  He wants people to say “Wow!” when he’s finished and compliment him on his new physique.  Once he’s put in all the hard work, I can assure you that you’ll see more of Dell’s new look in public.

Network Field Day 5


It’s time again for more zany fun in San Jose with the Tech Field Day crew!  I will be attending Network Field Day 5 in San Jose March 6-8.  This time, I was honored to be included as a member of the organizing committee for the event.  There were lots of discussions about the timing of the event, sessions that would be interesting to the delegates and the viewers, and even a big long list of delegates to evaluate.  That last part is never fun.  There are so many people out there who would be a great fit at any Field Day event.  Sadly, only so many people can attend.  The list for Network Field Day 5 includes the following wonderful people:

Brandon Carroll (@BrandonCarroll) – CCIE Instructor, Blogger, and Technology Enthusiast
Brent Salisbury (@NetworkStatic) – Brent Salisbury works as a Network Architect, CCIE #11972.
Colin McNamara (@ColinMcNamara) – Colin McNamara is a seasoned professional with over 15 years of experience with network and systems technologies.
Ethan Banks (@ECBanks) – Ethan Banks, CCIE #20655, is a hands-on networking practitioner who has designed, built and maintained networks for higher education, state government, financial institutions, and technology corporations.
Greg Ferro (@EtherealMind) – Over the last twenty-odd years, Greg has worked in Sales, Technical and IT Management, but mostly he delivers Network Architecture and Design. Today he works as a Freelance Consultant for F100 companies in the UK & Europe focussing on Data Centres, Security and Operational Automation.
John Herbert (@MrTugs) – John has worked in the networking industry for 14 years, and obtained his CCIE Routing & Switching in early 2001.
Josh O’Brien (@JoshOBrien77) – Josh has worked in the industry for 14 years and is now serving as CTO in the Telemedicine sector.
Paul Stewart (@PacketU) – Paul Stewart is a Network and Security Engineer, Trainer and Blogger who enjoys understanding how things really work.
Terry Slattery – Terry Slattery, CCIE #1026, is a senior network engineer with decades of experience in the internetworking industry.

There’s likely to be a couple more people on that list before all is said and done.  I really wish that we could have an event with all the potential delegates.  Maybe one day after I finally buy my own 747 we’ll have enough airline seats to fly everyone to Silicon Valley.

Network Field Day 5 Sponsors

There will be an extra full lineup of sponsors this time around.  A few of the details are still being finalized, but here’s the lineup so far:

Juniper, Secret Company, and SolarWinds

That “secret company” sounds nice and mysterious, doesn’t it? I can’t wait until they’re revealed.  I am always pleased with the lineup of sponsors at each Field Day event.  The leadership and vision provided by these vendors gives us all a great idea of where technology is headed.

What’s Field Day Like?

Network Field Day is not a vacation.  This event involves starting a day early, first thing Wednesday morning, and running full steam for two and a half days.  We get up early and retire late.  Wall-to-wall meetings and transportation to and from vendors fill the days.  When you consider that most of the time we’re discussing vendors and presentations on the car ride to the next building, there’s very little downtime.  We’ve been known to have late-night discussions about OpenFlow and automation until well after midnight.  If that’s your idea of a “vacation,” then Tech Field Day is a paradise.

Tech Field Day – Join In Now!

Everyone at home is as much a participant in Tech Field Day as the delegates on site.  At the last event we premiered the ability to watch the streaming video from the presentations on mobile devices.  This means that you can tune in from just about anywhere now.  There’s no need to stay glued to your computer screen.  If you want to tune in to our last presentations of the day from the comfort of your couch with your favorite tablet, then feel free by all means.  Don’t forget that you can also use Twitter to ask questions and make comments about what you’re seeing and hearing.  Some of the best questions I’ve seen came from the home audience.  Use the hashtag #NFD5 during the event.  Note that I’ll be tagging the majority of my tweets that week with #NFD5, so if the chatter is getting overwhelming you can always mute or filter that tag.

Standard Tech Field Day Sponsor Disclaimer

Tech Field Day is a massive undertaking that involves the coordination of many moving parts.  It’s not unlike trying to herd cats with a helicopter.  One of the most important pieces is the sponsors.  Each of the presenting companies is responsible for paying a portion of the travel and lodging costs for the delegates.  This means they have some skin in the game.  What this does NOT mean is that they get to have a say in what we do.  No Tech Field Day delegate is ever forced to write about the event due to sponsor demands.  If a delegate chooses to write about anything they see at Tech Field Day, there are no restrictions on what can be said.  Sometimes this does lead to negative discussion.  That is entirely up to the delegate.  Independence means no restrictions.  At times, some Tech Field Day sponsors have provided no-cost evaluation equipment to the delegates.  This is provided solely at the discretion of the sponsor and is never a requirement.  This evaluation equipment is also not contingent on writing a review, be it positive or negative.  The delegates are in this for the truth, the whole truth, and nothing but the truth.

Are We Living In A Culture Of Beta?

Cisco released a new wireless LAN controller last week, the 5760.  Blake and Sam have a great discussion about it over at the NSA Show.  It’s the next generation of connection speeds and AP support.  It also runs a new version of the WLAN controller code that unifies development with the IOS code team.  That last point generated a bit of conversation between wireless rock stars Scott Stapleton (@scottpstapleton) and George Stefanick (@wirelesssguru) earlier this week.  In particular, a couple of tweets stood out to me:

http://twitter.com/scottpstapleton/status/298620542603366400

Overall, the number of features missing from this new IOS-style code release is bordering on the point of concern.  I understand that porting code to a new development base is never easy.  Being a fan of video games, I’ve had to endure the pain of watching features removed because they needed to be recoded the “right way” in a new code base instead of being hacked together.  Cisco isn’t the only culprit in this whole mess.  Software quality has been going downhill for quite a while now.

Our culture is living in a perpetual state of beta testing.  There’s a lot of blame to go around on this.  We as consumers and users want cutting-edge technology.  We’re willing to sacrifice things like stability or usability for a little peek at future awesomeness.  Companies are rushing to be first to market with new technologies.  Being the first at anything is an edge when it comes to marketing and, now, patent litigation.  Producers just want to ship stuff.  They don’t really care if it’s finished or not.

Stability can be patched.  Bugs can be coded out in the next release.  What’s important is that we hit our release date.  Who cares if it’s an unrealistic arbitrary day on the calendar picked by the marketing launch team?  We have to be ready otherwise Vendor B will have their widget out and our investors will get mad and sell off the stock!  The users will all leave us for the Next Big Thing and we’ll go out of business!!!  

Okay, maybe not every conversation goes like that, but you can see the reasoning behind it.

Google is probably the worst offender of the bunch here.  How long was GMail in beta?  As it turns out…five years.  I think they probably worked out most of the bugs of getting electronic communications from one location to another after the first nine months or so.   Why keep it in beta for so long?  I think it was a combination of laziness and legality.  Google didn’t really want to support GMail beyond cursory forum discussion or basic troubleshooting steps.  By keeping it “beta” for so long, they could always fall back to the excuse that it wasn’t quite finished so it wasn’t supposed to be in production.  That also protected them from the early adopters that moved their entire enterprise mail system into GMail.  If you lost messages it wasn’t a big deal to Google.  After all, it’s still in beta, right?  Google’s reasoning for finally dropping the beta tag after five years was that it didn’t fit the enterprise model that Google was going after.  Turns out that the risk analysts really didn’t like having all their critical communication infrastructure running through a project with a “beta” tag on it, even if GMail had ceased being beta years before.

Software companies thrive off of getting code into consumers’ hands because we’ve effectively become an unpaid quality assurance (QA) department for them.  Apple beta code for iOS gets leaked onto the web hours after it’s posted to the developer site.  There’s even a cottage industry of sites that will upload your device’s UDID to a developer account so you can use the beta code.  You actually pay money to someone for the right to use code that will be released for free in a few months’ time.  In essence, you are paying money for a free product in order to find out how broken it is.  Silly, isn’t it?  Think about Microsoft.  They’ve started offering free Developer Preview versions of new Windows releases to the public.  In previous iterations, the hardy beta testers of yore would get a free license for the new version as a way of saying thanks for enduring a long string of incremental builds and constant reloading of the OS, only to hit a work-stopping bug that erased your critical data.  Nowadays, MS releases those buggy builds with a new name and people happily download them and use them on their hardware with no promise of any compensation.  Who cares if it breaks things?  People will complain about it and it will get fixed.  No fuss, no muss.  How many times have you heard someone say, “Don’t install a new version of Windows until the first service pack comes out”?  It’s become such a huge deal that MS never even released a second Service Pack for Windows 7, just an update rollup.  Even Cisco’s flagship NX-OS on the Nexus 7000 series switches has been accused of being a beta in progress by bloggers such as Greg Ferro (@etherealmind) in this Network Computing article (comment replies).  If the core of our data center is running on buggy, unreliable code, what hope have we for the desktop OS or mobile platform?

That’s not to say that every company rushes products out the door.  Two of the most stalwart defenders of full, proper releases are Blizzard and Valve.  Blizzard is notorious for letting release dates slip in order to ensure code quality.  Diablo 2 was delayed several times between its original projected date of December 1998 and its eventual release in 2000, and it went on to become one of the best-selling computer games of all time.  Missing an unrealistic milestone didn’t hurt them one bit.  Valve has one of the most famous release strategies in recent memory.  Every time someone asks founder Gabe Newell when Valve will release their next big title, his response is almost always the same – “When it’s done.”  Their apparent hesitance to ship unfinished software hasn’t run them out of business yet.  By most accounts, they are one of the most respected and successful software companies out there.  Just goes to show that you don’t have to be a slave to a release date to make it big.

Tom’s Take

The culture of beta is something I’m all too familiar with.  My iDevices run beta code most of the time.  My laptop runs developer preview software quite often.  I’m always clamoring for the newest nightly build or engineering special.  I’ve mellowed a bit over the years as my needs have gone from bleeding edge functionality to rock solid stability.  I still jump the gun from time to time and break things in the name of being the first kid on my block to play with something new.  However, I often find that when the final stable release comes out to much fanfare in the press, I’m disappointed.  After all, I’ve already been using this stuff for months.  All you did was make it stable?  Therein lies the rub in the whole process.  I’ve survived months of buggy builds, bad battery life, and driver incompatibility only to see the software finally pushed out the door and hear my mom or my wife complain that it changed the fonts on an application or the maps look funny now.  I want to scream and shout and complain that my pain was more than you could imagine.  That’s when I usually realize what’s really going on.  I’m an unpaid employee fixing problems that should never even be in the build in the first place.  I’ve joked before about software release names, but it’s sadly more true than funny.  We spend too much time troubleshooting prerelease software.  Sometimes the trouble is of our own doing.  Other times it’s because the company has outsourced or fired their whole QA department.  In the end, my productivity is wasted fixing problems I should never see.  All because our culture now seems to care more about how shiny something is and less about how well it works.

New Wrinkles in the Fabric – Cisco Nexus Updates

There’s no denying that The Cloud is an omnipresent fixture in our modern technological lives.  If we aren’t already talking about moving things there, we’re wondering why it’s crashed.  I don’t have any answers about these kinds of things, but thankfully the people at Cisco have been trying to find them.  They let me join in on a briefing about the announcements that were made today regarding some new additions to their data center switching portfolio more commonly known by the Nexus moniker.

Nexus 6000

The first of the announcements is around a new switch family, the Nexus 6000.  The 6000 is more akin to the 5000 series than the 7000, containing a set of fixed-configuration switches with some modularity.  The Nexus 6001 is the true fixed-config member of the lot.  It’s a 1U 48-port 10GbE switch with 4 40GbE uplinks.  If that’s not enough to get your engines revving, you can look at the bigger brother, the Nexus 6004.  This bad boy is a 4U switch with a fixed config of 48 40GbE ports and 4 expansion modules that can double the total count up to 96 40GbE ports.  That’s a lot of packets flying across the wire.  According to Cisco, those packets can fly at a 1 microsecond latency port-to-port.  The Nexus 6000 is also a Fibre Channel over Ethernet (FCoE) switch, as all Nexus switches are.  This one is a 40GbE-capable FCoE switch.  However, as there are no 40GbE targets available in FCoE right now, it’s going to be on an island until those get developed.  A bit of future proofing, if you will.  The Nexus 6000 also supports FabricPath, Cisco’s TRILL-based fabric technology, along with a large number of multicast entries in the forwarding table.  This is no doubt to support VXLAN and OTV in the immediate future for layer 2 data center interconnect.
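Just to put “a lot of packets” in perspective, here’s a quick back-of-the-envelope sanity check on the 6004’s numbers.  The port counts come straight from the announcement above; everything else is simple arithmetic, not a vendor-published figure.

```python
# Back-of-the-envelope forwarding capacity for a fully loaded Nexus 6004.
# Port counts are from the Cisco announcement; the totals are just math.

PORT_SPEED_GBPS = 40       # each port is 40GbE
FIXED_PORTS = 48           # fixed-config ports on the 6004
MAX_EXPANSION_PORTS = 48   # expansion modules can double the count

max_ports = FIXED_PORTS + MAX_EXPANSION_PORTS    # 96 ports total
aggregate_gbps = max_ports * PORT_SPEED_GBPS     # one direction
aggregate_tbps = aggregate_gbps / 1000

print(f"{max_ports} ports x {PORT_SPEED_GBPS}GbE = {aggregate_tbps:.2f} Tbps")
```

That works out to nearly 4 Tbps of 40GbE capacity in a 4U box, which explains why Cisco is positioning it as an aggregation platform.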

The Nexus line also gets a few little added extras.  There is going to be a new FEX, the 2248PQ, that features 10GbE downlink ports and 40GbE uplink ports.  There’s also going to be a 40GbE expansion module for the 5500 soon, so your DC backbone should be able to run at 40GbE with a little investment.  Also of interest is the new service module for the Nexus 7000.  That’s right, a real service module.  The NAM-NX1 is a Network Analysis Module (NAM) for the Nexus line of switches.  This will allow spanned traffic to be pumped through for analysis of traffic composition and characteristics without taking a huge hit to performance.  We’ve all known that the 7000 was going to be getting service modules for a while.  This is the first of many to roll off the line.  In keeping with Cisco’s new software strategy, the NAM also has a virtual cousin, not surprisingly named the vNAM.  This version lives entirely in software and is designed to serve the same function that its hardware cousin does, only in the land of virtual network switches.  Now that the Nexus line has service modules, it kind of makes you wonder: what does the Catalyst 6500 have all to itself now?  We know that the Cat6k is going to be supported in the near term, but is it going to be used as a campus aggregation or core?  Maybe as a service module platform until the SMs can be ported to the Nexus?  Or maybe with the announcement of FabricPath support for the Cat6k this venerable switch will serve as a campus/DC demarcation point?  At this point the future of Cisco’s franchise switch is really anyone’s guess.

Nexus 1000v InterCloud

The next major announcement from Cisco is the Nexus 1000v InterCloud.  This is very similar to what VMware is doing with their stretched data center concept in vSphere 5.1.  The 1000v InterCloud (1kvIC) builds a secure layer 2 GRE tunnel between your private cloud and a provider’s public cloud.  You can now use this tunnel to migrate workloads back and forth between public and private server space.  This opens up a whole new area of interesting possibilities, not the least of which is the Cloud Services Router (CSR).  When I first heard about the CSR last year at Cisco Live, I thought it was a neat idea but had some shortcomings.  The need to be deployed to a place where it was visible to all your traffic was the most worrisome.  Now, with the 1kvIC, you can build a tunnel between yourself and a provider and use CSR to route traffic to the most efficient or cost effective location.  It’s also a very compelling argument for disaster recovery and business continuity applications.  If you’ve got a category 4 hurricane bearing down on your data center, the ability to flip a switch and cold migrate all your workloads to a safe, secure vault across the country is a big sigh of relief.
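For those who haven’t built one in a while, the underlying concept is just a point-to-point GRE tunnel.  Here’s a plain IOS-style sketch of that idea – note this is the generic building block, not the 1kvIC’s actual provisioning, which adds encryption and layer 2 extension and is driven from its own management console rather than typed in by hand.  All addresses here are illustrative.

```
! Generic point-to-point GRE tunnel, IOS-style syntax.
! The 1kvIC layers security and L2 extension on top of this concept.
interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.10
 tunnel mode gre ip
```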

The 1kvIC also has its own management console, the vNMC.  Yes, I know there’s already a vNMC available from Cisco.  The 1kvIC version is a bit special though.  It not only gives you control over your side of the interconnect, but it also integrates with the provider’s management console as well.  This gives you much more visibility into what’s going on inside the provider instances beyond what we already have from simple dashboards or status screens on public web pages.  This is a great help when you think about the kinds of things you would be doing with intercloud mobility.  You don’t want to send your workloads to the provider if an engineer has started an upgrade on their core switches on a Friday night.  When it comes to the cloud, visibility is viability.

CiscoONE

In case you haven’t heard, Cisco wants to become a software company.  Not a bad idea when hardware is becoming a commodity and software is the home of the high margins.  Most of the development that Cisco has been doing along the software front comes from the Open Network Environment (ONE) initiative.  In today’s announcement, CiscoONE will now be the home for an OpenFlow controller.  In this first release, Cisco will be supporting OpenFlow and their own OnePK API extensions on the southbound side.  On the northbound side of things, the CiscoONE Controller will expose REST and Java hooks to allow interaction with flows passing through the controller.  While that’s all well and good for most of the enterprise devs out there, I know a lot of homegrown network admins that hack together their own scripts through Perl and Python.  For those of you that want support for your particular flavor of language built into CiscoONE, I highly recommend getting to their website and telling them what you want.  They are looking at adding additional hooks as time goes on, so you can get in on the ground floor now.
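To give a flavor of what a northbound REST hook looks like, here’s a minimal Python sketch of building a flow-install request.  The endpoint path and payload schema are hypothetical – Cisco hasn’t published the ONE Controller’s API details yet – but the general shape (HTTP plus a JSON flow description with a match and an action list) is how OpenFlow controllers typically expose their northbound side.

```python
import json

# Hypothetical northbound REST call: install a simple forwarding flow.
# The URL path and JSON schema are illustrative, not Cisco's actual API.

def build_flow_request(switch_id, in_port, out_port, priority=100):
    """Build the URL and JSON body for a basic port-to-port flow entry."""
    url = f"/controller/flows/{switch_id}"   # hypothetical endpoint path
    body = {
        "priority": priority,
        "match": {"in_port": in_port},       # match traffic arriving on a port
        "actions": [{"type": "OUTPUT", "port": out_port}],  # forward it out
    }
    return url, json.dumps(body)

url, body = build_flow_request("00:00:00:00:00:01", in_port=1, out_port=2)
print(url)
print(body)
```

From there it’s one `urllib` or `requests` POST away, which is exactly why REST hooks are so friendly to the Perl-and-Python crowd compared to a Java-only API.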

Cisco is also announcing OnePK support for the ISR G2 router platform and the ASR 1000 platform.  There will be OpenFlow support on the Nexus 3000 sometime in the near future, along with support in the Nexus 1000v for Microsoft Hyper-V and KVM.  And somewhere down the line, Cisco will have a VXLAN gateway for all the magical unicorn packet goodness across data centers that stretch via non-1kvIC links.


Tom’s Take

The data center is where the dollars are right now.  I’ve heard people complain that Cisco is leaving the enterprise campus behind as they charge forward into the raised floor garden of the data center.  But the data center teams are the people driving the data that produces the profits that buy more equipment.  Whether it be massive Hadoop clusters or massive private cloud projects, the accounting department has given the DC team a blank checkbook today.  Cisco is doing its best to drive some of those dollars their way by providing new and improved offerings like the Nexus 6000.  For those that don’t have a huge investment in the Nexus 7000, the 6000 makes a lot of sense as either a high-speed core aggregation switch or an end-of-row solution for a herd of FEXen.  The Nexus 1000v InterCloud is competing against VMware’s stretched data center concept in much the same way that the 1000v itself competes against the standard VMware vSwitch.  With Nicira in the driver’s seat of VMware’s networking from here on out, I wouldn’t be shocked to see more solutions come from Cisco that mirror or augment VMware solutions as a way to show VMware that Cisco can come up with alternatives just as well as anyone else.

Anatomy of a Blog Post

Did the title of the post catch your eye?  It’s probably a play on words or a quote from a movie.  If the title didn’t do it, the picture normally linked right under it should.  It’s probably something goofy or illustrative of the title.  After that, the next few sentences launch into an overview of the problem.  My blog posts all start out like my real life stories – lots of context so we’re all on the same page before we start discussing things.  Without a good setting, the rest of the story is pretty pointless.  The last sentence of the first paragraph is usually a question or statement relating the background to the main point.

This is the paragraph where the central point discussion starts.  Now that everyone is on the same page, the real analysis can start.  With the opening setting in mind, it’s time to lead into whatever the main point of this blog post will be.  I usually bring up commonly discussed aspects of the problem, such as urban legend or commonly held beliefs.  That way, people are nodding their heads as they read along.  Everything should be laid out on the table as an overview before diving into the topics in depth.

This is a section header designed to catch your eye or a central point that I want to reinforce.

Here is where I start dissecting the points from above.  Each point gets a paragraph and a discussion about the salient points.  Falsehoods are refuted.  Truths are reinforced.  If this is a review, there is discussion of a major section or general theme of the reviewed item.  Self-contained sections are easy to digest.  Plus, I’ll just keep repeating them all until I’ve brought up all the points from the introductory paragraph.  I try to keep these depth discussions to around three paragraphs because it’s easier for people to remember things with fewer than twenty-seven parts.

There's probably some code or output in this section. It's easier
to type in one of these boxes. Plus, you can usually just copy
and paste whatever it is into your device.

Here’s where I start trying to wrap everything up and bring all the points and discussion together.  That way the big picture has now been fully developed and fleshed out.  If there are any other pieces that aren’t germane to the discussion or forward-looking statements about how the situation may change in the future, I’ll put them here as things to ponder as you get up from your desk to walk around and hope they hit you later and make you want to leave a comment.


Tom’s Take

Alliteration is awesome, right?  This is the section where I offer my own opinion about things.  Yes, many of my posts are already overloaded with opinion, but here is where I relate the whole thing to me and my outlook on things.  This is also the section where I use the “I” word, whereas I try to avoid it above.  I literally draw a line on the page so people realize this is something a bit different than what comes above.  In many ways, this can serve as a too long, didn’t read portion if you’re only interested in opinion.  I freely admit that I borrowed this idea from Stephen Foskett and his “Stephen’s Stance” closers.  I’ll probably make a flippant comment here and there, but I try to keep things coherent and on point.  And finally, when I wrap up, I usually call back to the title of the post or central theme in a funny way to reinforce what I’ve just talked about.  Anatomically speaking, of course.

If you’re curious where I got the idea for this 300th blog post, you can watch the video from Da Vinci’s Notebook for “Title Of The Song”:

Incremental Awesomeness – Boiling Frogs

Frog on a Saucepan - courtesy of Wikipedia

Unless you’ve been living under a big rock for the last couple of weeks, you’ve no doubt heard about the plunge that Apple stock took shortly after releasing their numbers for the previous quarter.  Apple sold $54 billion worth of laptops, desktops, and mobile devices.  They made $13 billion in profit.  They sold 47 million iPhones and almost 23 million iPads.  For all of these record-setting numbers, the investors rewarded Apple by driving the stock down below $500 a share, shaving off a full 10% of Apple’s value in after-hours trading after the release of these numbers.  A lot of people were asking why a fickle group of investors would punish a company making as much quarterly profit as the gross domestic product of a small country.  What has it come to that a company can be successful beyond anyone’s wildest dreams and still be labeled a failure?

The world has become accustomed to incremental awesomeness.

Apple is as much to blame as anyone else in this matter, but almost every company is guilty of this in some form or another.  We’ve reached the point in our lives where we are subjected to a stream of minor improvements on things rather than huge, revolutionary changes.  This steady diet of non-life changing features has soured us on the whole idea of being amazed by things.  If you had told me even 5 years ago that I would possess a device in my pocket that had a camera, GPS, always-on Internet connection, appointment book, tape recorder, and video camera, I would have either been astounded or thought you crazy.  Today, these devices are passé.  We even call phones without these features “dumb phones” as if to demonize them and those that elect to use them.  We can no longer discern between the truly amazing and the depressingly commonplace.

When I was younger, I heard someone ask about boiling a frog alive.  I was curious as to what lesson may lie in such a practice.  If you place a frog into a pot of boiling water, it will hop right back out as a form of self-preservation.  However, if you place a frog in a pot of tepid water and slowly raise the temperature a few degrees every minute, you will eventually boil the frog alive without any resistance.  Why is that?  Well, by slowly raising the temperature of the water, the frog becomes accustomed to the change.  A few degrees one way or the other doesn’t matter to the frog.  However, those few degrees eventually add up to the boiling point.

We find ourselves in the same predicament.  Look at some of the things that users are quibbling over on the latest round of phones and other devices.  The Nexus 4 phone is a failure because it doesn’t have LTE.  The iPad Mini is useless because it doesn’t have a Retina screen.  The iPhone 5 is far from perfect because it’s missing NFC or it’s not a 5-inch phone.  The Nexus 7 needs more storage and shouldn’t be Wi-Fi only.  Look at any device out there and you will find that they are missing features that would keep them from being “perfect”.  The missing features might as well be the inability to read your mind or project information directly onto your cornea.  I’ve complained before that Google is setting up Google Glass to be a mundane gadget because they aren’t thinking outside their little box.  This kind of incremental improvement is what we’ve become accustomed to.  Think about the driverless car that Google is supposedly working on.  It’s an exciting idea, right?  Now, think about that invention in 5 years’ time when it becomes ubiquitous.  When version 6 or 7 of the driverless car is out, we’re going to be complaining about how it doesn’t anticipate traffic conditions or isn’t able to fly.  We will have become totally unimpressed with how awesome the idea of a driverless car is because we’re concentrating on the things that it doesn’t have.

We want to be impressed and surprised by things.  Even when we are confronted with groundbreaking technology, we reject it at first out of spite.  Remember how the iPad was going to be a disaster because people don’t want to use a big iPhone?  Now look at how many are being used.  People want to walk away from a product announcement with a sense of awe and wonder, not a list of features and the same case as last year.  We’ve stopped looking at each new object with a sense of wonder and amazement and instead we focus on the difference from last year’s model.  Every new software or hardware release raises the temperature a few more degrees.  Before long, we’re going to be boiling in our own contempt and discontent.  And the next generation is going to have it even worse.  Even now, I find my kids are spoiled by the ability to watch TV shows on a tablet in any room in the house on their schedule instead of waiting for an episode to air.  They no longer even need to remember to record their favorite show on the DVR.  They just launch the app on their tablet and watch the show whenever they want.  Something that seems amazing and life-changing to me is commonplace to them.  All of this has happened before.  All of this will happen again.

Instead of judging on incremental advancements, we should start looking at things on the grand scale.  Yes, I know that some companies are going to constantly underwhelm the buying public by delivering products that are slightly more advanced than the previous iteration for an increased cost.  However, when you step back and take a look at everything on a long enough time line, you’ll find that we are truly living in an age when technology is amazing and getting better every day.  Sure, I’m waiting for user interfaces like the ones from Minority Report or the Avengers.  I want a driverless car and a thought interface for my computer/phone/widget.  But after seeing what happens to companies that are successful beyond their wildest imaginations I’ll be doing a much better job of looking at things with the proper perspective.  After all, that’s the best way to keep from getting boiled.

Cisco Live 2013 CAE – Don’t Stop Believing

Cisco Live 2013 is coming to you this year from Orlando, FL.  After a 5-year absence, everyone’s favorite networking company on Tasman Drive returns to the Sunshine State to bring information and discussion to legions of network rock stars with Open Arms.  However, all work and no play makes networkers very dull.  That’s why there is an event to make us all feel appreciated.

What would Cisco Live be without the Customer Appreciation Event (CAE)?  In the past six years that I’ve attended Cisco Live, I’ve been a part of some very exciting times.  Watching Devo in the middle of San Francisco Bay.  Seeing KISS in Anaheim.  Watching the Barenaked Ladies on stage at the House of Blues in Orlando.  There’s always fun to be had and good times all around at the CAE.  This year promises to be no exception.

Universal entry with Cisco logo

The 2013 Customer Appreciation Event is going to be held inside Universal Studios Florida!  I had a great time in 2008 wandering around the Universal backlot.  I got to ride the rides, see the Back to the Future DeLorean, and watch an awesome concert.  It’s nice to have access to such a wonderful theme park and it’s super nice of them to host 10,000 invading nerds looking for geeky t-shirts and lots of pictures next to the T-800 outside the Terminator 3-D ride.  I’m going to make sure to bring an extra poncho again this year just in case we get one of those famous Florida thunderstorms, but I hope the rain holds off long enough for everyone to have a good time. With all the available attractions at Universal Studios Florida, there’s almost too much to do in one evening.  Really, there’s a good time to be had pretty much Any Way You Want It.  And that’s not even taking into consideration the star attraction for the CAE.

The headline band for the CAE always generates a lot of buzz.  Whether it’s KISS, the B-52s, or Weezer, people want to see the best.  The attendees Faithfully come to the CAE to be entertained.  In the last couple of years, Cisco Live has given fans the opportunity to vote on the headline band for the CAE.  This year’s vote was a close one that included some great artists like Beck and Jane’s Addiction.  But in the end, the fans went their Separate Ways with the other options.  I give you the Cisco Live 2013 headline band:

The Cisco Live 2013 Customer Appreciation Band – Journey!

Journey!  Folks, I can hear the karaoke now.  While I’m still a huge fan of all the other bands, I think having a headline act with such wide appeal promises to have an epic level of fun for everyone.  I’m really hoping that unlike last year, I’ll get to Stay Awhile at this CAE and enjoy all the entertainment to be had at Universal Studios.  I also hope I get to see all of the awesome attendees there as well.  I promise to keep the Touchin’, Lovin’, and Squeezin’ to a minimum.  Okay, I promise I’m done with the Journey puns.  For now.

Cisco Live 2013 is still a few months off, but stay tuned for more great info coming up.  Once I find out who the special guest keynote speaker will be, I’ll be sure to let everyone know.  We’re also in the early stages of planning the big Tweetup and I’ll have the Cisco Live 2013 Twitter list posted soon.  There may also be a few more surprises in store, so be sure to keep your eyes peeled.