Minimizing MacGyver

I’m sure at this point everyone is familiar with (Angus) MacGyver.  Lee David Zlotoff created a character, expertly played by Richard Dean Anderson, that has become beloved by geeks and nerds the world over.  This mulleted genius was able to solve any problem with a simple application of science and whatever materials he had on hand.  Mac used his brains before his brawn and always before resorting to violence of any kind.  He’s a hero to anyone who has ever had to fix an impossible problem with nothing.  My cell phone ringtone is the Season One theme song to the TV show.  It’s been that way ever since I fixed a fiber-to-Ethernet media converter with a paper clip.  So it is with great reluctance that I must insist that it’s time network rock stars move on from my dear friend MacGyver.

Don’t get me wrong.  There’s something satisfying about fixing a routing loop with a deceptively simple access list.  The elegance of connecting two switches back-to-back with a fiber patch cable that has been rigged between three different SC-to-ST connectors is equally impressive.  However, these are simply parlor tricks: last-ditch efforts of our stubborn engineer-ish brains to refuse to accept failure at any cost.  I can honestly admit that I’ve been known to say out loud, “I will not allow this project to fail because of a missing patch cable!”  My reflexes kick in, and before I know it I’m working on a switch connected to the rest of the network by a strange combination of baling wire and dental floss.  But what has this gained me in the end?

Anyone who has worked in IT knows the pain of doing a project with inadequate resources or insufficient time.  It seems to be a trademark of our profession.  We seem like miracle workers because we can do the impossible from less than nothing.  Honestly though, how many times have we put ourselves into these positions because of hubris or short-sightedness?  How many times have we convinced ourselves that a layer 2 switch will work in this design?  Or that a firewall will be more than capable of handling the load we place on it, even if we find out later that the traffic is more than triple the original design?  Why do we subject ourselves to these kinds of tribulations knowing that we’ll be unhappy unless we can use chewing gum and duct tape to save the day?

Many times, all it takes is a little planning up front to save the day.  Even MacGyver does it.  I always wondered why he carried a roll of duct tape wherever he went.  The MacGyver Super Bowl commercial from 2009 even lampooned his need for proper preparation.  I can’t tell you the number of times I’ve added an extra SFP module or fiber patch cable knowing that I would need it when I arrived on site.  These extra steps have saved me headaches and embarrassment.  And it is this prior proper planning that network engine…rock stars must rely on in order to do our jobs to the fullest possible extent.  We must move away from the baling wire and embrace the bill of materials.  No longer should we carry extra patch cables.  Instead, we should remember to place them in the packages before they ship.  Taking things for granted will end in heartache and despair.  And it will force us to rely less on our brains and more on our reflexes.

Being a Network MacGyver makes me beam with pride because I’ve done the impossible.  Never putting myself in the position to be MacGyver makes me even more pleased because I don’t have to break out the duct tape.  It means that I’ve done all my thinking up front.  I’m content because my project should just fall into place without hiccups.  The best projects don’t need MacGyver.  They just need a good plan.  I hope that all of you out there will join me in leaving dear Angus behind and instead following a good plan from the start.  We only make ourselves look like miracle workers when we’ve put ourselves in the position to need a miracle.  Instead, we should dedicate ourselves to doing the job right before we even get started.

Is Dell The New HP? Or The Old IBM?

Dell announced its intention today to acquire Sonicwall, a well-respected firewall vendor.  This is just the latest in a long line of fairly recent buys for Dell, including AppAssure, Force10, and Compellent.  There’s been a lot of speculation about the reasons behind the recent flurry of purchases coming out of Austin, TX.  I agree with the majority of what I’m hearing, but I thought I’d point out a few things that I think make a lot of sense and might give us a glimpse into where Dell might be headed next.

Dell is a wonderful supply chain company.  I’ve heard them compared to Walmart and the US military in the same breath when discussing efficiency of logistics management.  Dell has the capability of putting a box of something on your doorstep within days of ordering.  It just so happens that they make computer stuff.  For years, Dell seemed content to partner with companies to utilize their supply chain to deliver other people’s stuff.  After a while, Dell decided to start making that stuff for themselves and cut out the middleman.  This is why you see things like Dell printers and switches.  It didn’t take long for Dell to change its mind, though.  It made little sense to devote so much R&D to copying other products.  Why not just spend the money on buying those companies outright?  I mean, that’s how HP does it, right?  And so we start the acquisition phase for Dell.  Since acquiring EqualLogic in 2008, they’ve bought 5 other companies that make everything from enterprise storage to desktop management.  The only thing they’ve missed out on was acquiring 3PAR, which happened because HP threw a pile of cash at 3PAR to keep it from going to Dell.  I’m sure that was more about denying Dell an enterprise storage vendor than it was about using 3PAR to its fullest capabilities.

Dell still has a lot of OEM relationships, though.  Their wireless solution is OEMed from Aruba.  They resell Juniper and Brocade equipment as their J-series and B-series respectively.  However, Dell is trying to move into the data center to fight with HP, Cisco, and IBM.  HP already owns a data center solution top to bottom.  Cisco is currently OEMing their solution with EMC (vBlock).  I think Dell realizes that it’s not only more profitable to own the entire solution in the DC, it’s also safer in the long term.  You either support all your own equipment, or you have to support everyone’s equipment.  And if you try to support someone else’s stuff, you have to be very careful you don’t upset the apple cart.  Case in point: last year many assumed Cisco was on the outs with EMC because they started supporting NetApp and Hyper-V.  If you can’t keep your OEM DC solution partners happy, you don’t have a solution.  From Dell’s perspective, it’s much easier to appease everyone if they’re getting their paychecks from the same HR department.  Dell’s acquisitions of Force10 and, now, Sonicwall seem to support the idea that they want the “one throat to choke” model of solution delivery.  Very strategic.

The only problem that I have with this kind of Innovation by Acquisition strategy is that it only works when upper management is competent and focused.  So long as Michael Dell is running the show in Austin, I’m confident that Dell will make solid choices and bring on companies that complement their strategies.  Where the “buy it” model breaks down is when you bring in someone that runs counter to your core beliefs.  Yes, I’m looking at HP now.  Ask them how they feel about Mark Hurd basically shutting down R&D and spending their war chest on Palm/WebOS.  Ask them if they’re still okay with Leo Apotheker reversing that decision only months later and putting PSG on the chopping block because he needed some cash to buy a software company (Autonomy), because software is all he knows.  If the ship has a good captain, you get where you’re going.  If the cook’s assistant is in charge, you’re just going to steam in circles until you run out of gas.  HP is having real issues right now trying to figure out who they want to be.  A year of second guessing and trajectory reversals (and re-reversals) has left many shell shocked and gun shy, afraid to make any more bold moves until the dust settles.  The same can be said of many other vendors.  In this industry, you’re only as successful as your last failed acquisition.

On the other hand, you also have to keep moving ahead and innovating.  Otherwise the mighty giants get left behind.  Ask IBM how it feels to now be considered an also-ran in the industry.  I can remember not too long ago when IBM was a three-letter combination that commanded power and respect.  After all, as the old saying goes, “No one ever got fired for buying IBM.”  Today, the same can’t be said.  IBM has divested much of its old power to Lenovo, spinning off the personal systems and small server business to concentrate more on the data center and services divisions.  It’s made them a much leaner, meaner competitor.  However, it’s also stripped away much of what made them so unstoppable in the past.  People now look to companies like Dell and HP to provide top-to-bottom support for every part of their infrastructure.  I can speak from experience here.  I work for a company founded by an ex-IBMer.  For years we wouldn’t sell anything that didn’t have a Big Blue logo on it.  Today, I can’t tell you the last time I sold something from IBM.  It feels like the industry that IBM built passed them by because they sold off much of who they were in order to become what they wanted.  Now that they are where they want to be, no one recognizes who they were.  They will need to start fighting again to regain their relevance.  Dell would do well to avoid acquiring too much too fast lest they suffer a similar fate.  Once you grow too large, you have to start shedding things to stay agile.  That’s when you start losing your identity.


Tom’s Take

So far, reaction to the Sonicwall purchase has been overwhelmingly positive.  It sets the stage for Dell to begin to compete with the Big Boys of Networking across their product lines.  It also more or less completes Dell’s product line by bringing everything they need in-house.  The only major piece they are still missing is wireless.  They OEM from Aruba today, but if they want to seriously compete they’ll need to acquire a wireless company sooner rather than later.  Aruba is the logical target, but are they too big to swallow so soon after Sonicwall?  And what of Aruba’s new switching line?  No sense trampling on PowerConnect or Force10.  That leaves other smaller vendors like Aerohive or Meraki.  Either one might be a good fit for Dell.  But that’s a blog post for another day.  For right now, Dell needs to spend time making the transition with Sonicwall as smooth as possible.  That way, they can just be Dell.  Not the New HP.  And not the Old IBM.

2011 in Review, 2012 in Preview

2011 was a busy year for me.  I set myself some rather modest goals exactly one year ago as a way to keep my priorities focused for the coming 365 days.  How’d I do?

1. CCIE R&S: Been There. Done That. Got the Polo Shirt.

2. Upgrade to VCP4: Funny thing.  VMware went and released vSphere 5 before I could get my VCP upgraded.  So I skipped straight over 4 and went right to 5.  I even got to go to class.

3. Go for CCIE: Voice: Ha! Yeah, I was starting to have my doubts when I put that one down on the list.  Thankfully, I cleared my R&S lab.  However, the thought of a second track is starting to sound compelling…

4. Wikify my documentation: Missed the mark on this one.  Spent way too much time doing things and not enough time writing them all down.  I’ll carry this one over for 2012.

5. Spend More Time Teaching: Never got around to this one.  Seems my time was otherwise occupied for the majority of the year.

Forty percent isn’t bad, right?  Instead, I found myself spending time becoming a regular guest on the Packet Pushers podcast and attending three Tech Field Day Events: Tech Field Day 5, Wireless Field Day 1, and Network Field Day 2.  I’ve gotten to meet a lot of great people from social media and made a lot of new friends.  I even managed to keep making blog posts the whole year.  That, in and of itself, is an accomplishment.

What now?  I try to put a couple of things out there as a way to hold my feet to the fire and be accountable for my aspirations.  That way, I can look back in 2013 and hopefully hit at least 50% next time.  Looking forward to the next 366 days (356 if the Mayans were right):

1. Juniper – I think it’s time to broaden my horizons.  I’ve talked to the Juniper folks quite a bit in 2011.  They’ve given me a great overview of how their technology works and there is some great potential in it.  Juniper isn’t something I run into every day, but I think it would be in my best interest to start learning how to get around in the curly CLI.  After all, if they can convert Ivan, they must really have some good stuff.

2. Data Center – Another growth area where I feel I have a lot of catching up to do is the data center.  I feel somewhat comfortable working on NX-OS, but the lack of time I get to configure it every day makes the rust a little thick sometimes.  If it weren’t for guys like Tony Mattke and Jeff Fry, I’d have a lot more catching up to do.  When you look at how UCS is being positioned by Cisco and where Juniper wants to take QFabric, I think I need to spend some time picking up more data center technology.  Just in case I find myself stranded in there for an extended period of time.  Can’t have this turning into the Lord of the CLIs.

3. Advanced Virtualization – Since I finally upgraded my VCP to version 5, I can start looking at some of the more advanced certifications that didn’t exist back when I was a VCP3.  Namely the VCAP.  I’m a design junkie, so the DCD track would be a great way for me to add some of the above data center skills while picking up some best practices.  The DCA troubleshooting training would be ideal for my current role, since anything beyond a simple check of vCenter is all I can muster in the troubleshooting arena.  I’d rather spend some time learning how the ESXi CLI works than fighting with a mouse to admin my virtual infrastructure.

4. Head to The Cloud – No, not quite what you’re thinking.  I suffered an SSD failure this year and if it hadn’t been for me having two hard drives in my laptop, I’d probably have lost a good portion of my files as well.  I keep a lot of notes on my laptop and not all of them are saved elsewhere.  Last year I tried to wikify everything and failed miserably.   This year I think I’m going to take some baby steps and get my important documents and notes saved elsewhere and off my local drives.  I’m looking to replace my OneNote archive with Evernote and keep my important documents in Google Docs as opposed to local Microsoft Word.  By keeping my important documents in the cloud, I don’t have to sweat the next drive death quite as much.

The free time that I seem to have acquired now that I’ve conquered the lab seems to have been filled with a whole lot of nothing.  In this industry, you can’t sit still for very long or you’ll find yourself getting passed by almost everyone and everything.  I need to sharpen my focus back to these things to keep moving forward and spend less time resting on my laurels.  I hope to spend even more time debating technology with the Packet Pushers and engaging with vendors at Tech Field Day.  Given how amazing and humbling 2011 was, I can’t wait to see what 2012 has in store for me.

IT Archetypes and Tech Field Day Delegates

Thanks to Ivan Pepelnjak’s weekly link post, I found myself reading a very interesting piece this weekend entitled The Rosetta Stone of IT Industry Analysts.  Brian Sommer took a humorous look at the types of people that he sees all the time in the analyst field.  From the grouchy old Curmudgeon to the prissy-pants Egoist, I had a very good laugh since I could identify with many of those caricatures.  Then I spent a little more time thinking about what that means to me and to those affiliated with Tech Field Day.

Obviously, many of these are oversimplifications and written for the sake of laughs.  However, I also found myself going through each of them and realizing that I’ve been that person many times in the past.  Whether it be the Fish Out of Water when people start talking about advanced fibre channel configurations or the Snark when I have a chance to make a joke about something, I find myself floating in and out of these roles.  On the other hand, I do see that there are a couple that are great for those that are interested in Tech Field Day, as well as a couple that need to be avoided.

In the article, Brian specifically calls out the Rifleman as his preferred archetype for an analyst.  The Rifleman holds vendors to their word and cuts through the hype with a straight razor.  Their words are usually carefully chosen to ensure that the balloon of overpromises is deflated with a quick poke, usually followed by others jumping in to assist in the takedown.  For the Tech Field Day hopefuls (and delegates as well), this is the way to approach interactions with vendors.  If you can quickly understand where they are coming from and eliminate hype, you can gain the advantage and ensure that the audience, whether it be viewers on video or readers of your blog, can understand what makes a technology so great and grasp concepts with ease.

The Rifleman does run the risk of becoming the Curmudgeon or the Assassin without careful consideration.  It’s very easy to lose sight of the goals of being a skeptic when it comes to vendor presentations and begin thrashing presenters simply because it’s fun to be the bad guy.  In the IT analyst world, this is very similar to the Dark/Light sides of the Force in Star Wars.  The slippery slope of beating people up gives way to becoming the grump that never likes anything and is more than likely just going to verbally abuse you whether you’re selling data center switches or air fresheners.  The key to avoiding the slide down the dark path is to constantly ask yourself why you are being so sharp toward the vendors: Is it for your audience?  Or for your own glory?  I’ve been hard on some vendors before during Tech Field Day because I think they can do a better job of delivering their message or because they can make a better product.  I want to make sure the vendors understand where the audience is coming from.  I always try to put myself in the shoes of the people that will read my posts to be sure my motives are pure when I take someone to task.

I also do my best to avoid falling into the roles of the Ryan Seacrest vendor cheerleader or the stoic Unmovable Object.  If I only spend my time giving useless platitudes to presenters and vendors, my opinion isn’t worth much.  At the same time, never changing my mind or critically thinking about information being given to me is just as bad.  Without opening my mind to new ideas, I become a liability in a setting like Tech Field Day, where keeping up to date with the people bringing fresh ideas and products to market is a requirement.


Tom’s Take

The key to being a good Tech Field Day delegate is to be somewhat outgoing.  I’ve done my best to ensure I don’t spend my time at the back of the room sitting quietly and learning very little.  At the same time, I also understand that I need to be sure that my questions and commentary are carefully chosen to enhance the event and the participants rather than merely cutting them down for the sake of making myself look good.  With this list of IT analyst archetypes, I can do a much better job of identifying when I’m slipping too close to the undesirable attitudes that no one likes.  Instead, I can refocus myself on being more effective and ensuring that everyone involved, both participant and audience, gets the most they can out of the event.

Network Consumer Reports

I’m a huge fan of Consumer Reports magazine.  They do a great job of reviewing all manner of products from household appliances to SUVs.  They provide unbiased reviews for all products because they do not accept any outside advertising from companies, nor do they accept any free samples from manufacturers, instead choosing to purchase all of the items they review outright.  This gives them a substantial amount of credibility in the industry, and their opinion has been known to influence the direction of many manufacturers, especially in the automotive arena.

Why is it that reviews in the networking space don’t have the same reputation?  Networking manufacturers are quick to refer to Gartner numbers or Tolly reports that back their equipment as being superior to their competitors’.  For the most part, mention of either of these two names around network rock stars brings groans and catcalls.  The general consensus that I get from people I’ve talked to is that many of these reports are simply bought and paid for.  Joe Onisick has a great blog post about talking with the founder of Tolly on this very subject.  Many reports that are sponsored by a company are (surprisingly) critical of the sponsor’s competitors and give favorable reviews to said sponsors.  Not all that shocking when you think about it.  Even discounting the idea that the report could be massaged in favor of the sponsoring company, the odds are good that an unfavorable review would just be buried and never see the light of day.

This pattern of sponsored reports tends to leave the average network rock star jaded and distrustful of any testing that they haven’t done themselves.  Alas, when moving into a new field or testing equipment outside of the comfort zone, it becomes quite easy to get lost and begin making mistakes or missing key features or options.  Why can’t we do something about that?  Maybe we can…

I’d like to see a Consumer Reports type of service for networking.  It would have to adhere to the same rules that Consumers Union uses for Consumer Reports.  No advertising, which also means that the reports can’t be used by the vendors for the purposes of selling their product.  That means no touting of the latest scores of the newest switches from Network Consumer Reports (NCR).  Also, all the equipment would need to be purchased outright from the vendors or through distributors or value added resellers (VARs).  This would also introduce some difficulties, as many vendors require complex designs before equipment will be sold or require the interaction of a VAR in order to ensure the equipment will be installed correctly.  In order to ensure the fairness and impartiality of the tests, these people must be excluded from the configuration process and only be around for purchasing and delivery.  Only members of NCR would be allowed to install and configure the equipment.  Naturally, it’s going to take some skilled people to do that.

When the equipment for a given test or scenario arrives, it will be configured based on best practice guidelines for the vendor/manufacturer.  These practices should be found on the vendor’s website and be easily available.  No shortcuts or undocumented configurations would be allowed at first.  This is to ensure fairness as well as making the vendors responsible for the documentation that is provided to customers.  For a given test, traffic generators would be used to simulate all kinds of traffic patterns in a real world environment.  That would be similar to things that the real Consumer Reports does, like measuring fuel economy themselves rather than relying on the manufacturer’s EPA fuel economy numbers.  I’d rather see numbers I can believe with strict definitions of traffic rather than seeing tests that provide advantages, such as using different packet sizes for throughput versus latency.  Numbers you can trust are very important.
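As a purely illustrative sketch of the "strict definitions of traffic" idea (nothing here is a real test harness; the packet sizes, metric names, and the `fake_generator` stand-in are all invented for the example), the point is that every metric gets measured at the same fixed set of packet sizes, so no one can quote throughput at jumbo frames while quoting latency at 64-byte packets:

```python
# Hypothetical fair-test matrix: lock ALL metrics to the SAME packet sizes
# so results can't be cherry-picked per metric.

PACKET_SIZES = [64, 512, 1518]  # bytes; one fixed set for every metric

def run_test_matrix(measure):
    """Run one measurement function against every (metric, size) pair.

    `measure(metric, size)` stands in for a real traffic-generator call;
    here it just returns whatever number the stand-in produces.
    """
    results = {}
    for metric in ("throughput_mbps", "latency_us"):
        for size in PACKET_SIZES:
            results[(metric, size)] = measure(metric, size)
    return results

def fake_generator(metric, size):
    # Placeholder for real hardware: deterministic dummy values.
    return size * (10 if metric == "throughput_mbps" else 1)

results = run_test_matrix(fake_generator)
# Every metric was measured at every size, so the report covers the
# full matrix and no metric gets a favorable packet size to itself.
sizes_per_metric = {m for (m, _) in results}
```

The takeaway is the shape of the matrix, not the numbers: a report built this way can publish one table per metric with identical rows, which is exactly the kind of apples-to-apples comparison the sponsored reports tend to avoid.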

Once the tests are run and the reports have been generated, each vendor will be contacted with the reports and offered a small window of time to “tweak” things.  You have to offer this chance because invariably vendors begin grousing about not having a chance to fix the broken things.  Let’s say they are given 24 hours to modify the base configurations to increase throughput or reduce latency with the same traffic types used in the first test.  After the 24 hours, the tests will be run again and the results recorded.  However, any changes from the best practices will be documented.  If the new, “tweaked” configuration provides additional advantages, the report should then ask why those tweaks are not included in the best practices.  Each vendor will only be able to work on their own equipment and will not be informed of the results of any other vendor’s test.  In fact, they won’t even be informed which other vendors are being tested.  This is to ensure that no one has the opportunity to spread fear, uncertainty, or doubt (FUD) about a competing solution.  Facts only here, folks.

After all of this, the reports will be published for all to see.  Perhaps there would be some kind of subscription service to reduce the astronomical cost associated with the acquisition and setup of the equipment.  This would only be necessary to avoid the need to rely on angel investors or the independently wealthy to capitalize such a large project.  Once the reports are published, the subscribers can trust in the content and use it however they see fit to begin to plan new projects or purchase equipment.


Tom’s Take

Why is it so hard to find a voice to trust when it comes to network reviews?  Why do I have to constantly ask myself “Who is behind this report?”  I never worry about that when I read Consumer Reports.  I can trust the information they provide because I know it isn’t bought and paid for.  It would be wonderful to have something like that in the networking/storage/server space.  I’m sure the people out there right now do decent jobs of reviewing equipment, but none of them are the go-to type of publication like Consumer Reports.  Of course, bringing that kind of reporting to the IT world has a lot of huge challenges.  Between getting capitalized and trying to find a way to buy large amounts of gear without raising a fuss from vendors, it would be a large undertaking.  However, if you can provide credibility with your reports and aid people in making good decisions for their businesses, I think you could make a go of it.  Let’s hope this idea doesn’t remain a pipe dream forever.

*Note: Consumer Reports is a trademark of Consumers Union, and my use of their publications for examples in this post should not imply any kind of endorsement.

Declaration of Independence

Independence Hall, Philadelphia PA

Blogging is a very fun way to get your ideas out in the open and generate discussion about them.  It’s a great way to show people how your mind works and how you can apply critical thinking skills to problems.  It’s also a wonderful way to sneak movie references into long form prose.  But what happens when the words coming out of your mouth aren’t necessarily yours?

This blog is solely my own creation.  I take ideas from all over the place and write about them.  Some people stoke the fires of my creative mind.  Others say things that send me off on a tangent.  Ultimately though, the words that spill out on this page are mine.  They represent my thoughts and feelings.  Though inspired by many sources, in the end the posts and comments I make belong to me.  I don’t consult anyone before posting.  Only rarely do I inform anyone about a pending post, and even then it is simply to ensure I’m not revealing privileged information or breaking the law.  Sometimes I’ve been contacted about material I’ve written and questioned about my feelings on the matter.  While I do reserve the right to change my mind in relation to a subject, I do take umbrage at being told to change my mind against my wishes.

It’s no secret that some people out there are simply regurgitating information being fed to them.  Public relations people come up with creative ways to write about things that look very humble and interesting at first, only for you to find later on that you’ve been led by the nose into a sales pitch.  I don’t like this method of tricking me into forming an opinion.  At the same time, I also realize how easy it can be to fall into the same trap when I am the content creator in question.  I’ve always tried my hardest to stay independent when it comes to the opinions and information disseminated on this blog.  I’ve done this because I owe it to my readers to ensure my impartiality and disconnection from things.  I want people to know who *I* am, not who someone or something thinks I should be.  Due to my involvement with Tech Field Day and my related posts, some in the community have accused me of being a vendor shill.  The story goes that I regurgitate information I’ve been told to post without regard for accuracy, and that in return I receive some unknown benefit or use my influence to gain an army of acolytes to stroke my ever-growing ego.

I hope you all know by now nothing could be further from the truth.

When I talk to vendors or companies about my blog, I make it very clear from the start that I am independent.  What does that mean?  It means I write with three tenets in mind:

I Write What I Want – You can show me presentation after presentation and speak to me for hours on end about your product or cool new widget.  I may whittle this down to a 3 paragraph post.  That’s my prerogative as a blogger.  I tend to cut the fat away from things when I post them as a courtesy to my readers.  If I think something you have is cool, I may focus on it.  I also reserve the right to talk about your presentation and delivery methods.  The point is that I choose the topic.  I’ve been sent “suggested topic” emails in the past from companies.  I trashed them as soon as I read the subject line.  These are my words, and I’ll be the one to choose them, thank you very much.

I Write When I Want – As a rule, I respect product embargoes.  If you have a big PR campaign that is firing up next month and you give me a product briefing, I’ll respect the street date for your information.  However, don’t expect me to churn out a post timed to be out on the day of launch.  I choose when to post my information.  I do this to avoid traps like a company asking me to hold any negative opinions until months after launch to ensure the hype machine is operating at full efficiency.  Or worse yet, asking me to post the day before a competitor’s new product comes out in an attempt to steal their thunder.  I time my posts so that I don’t overload people with information.  If I need to put off talking about something for a day or two, that is my choice.  Having a pushy PR person breathing down my neck to contribute to the hype machine before launch day tends to get on my nerves.

I Write If I Want – Obligation is a funny thing.  Once you’ve been locked in by it, you have effectively lost your free will.  Tech Field Day works like this: I write if I want to.  Sure, the companies involved with Tech Field Day usually see me as a way to generate some press about them.  However, it is never expected that I am required to write about a presenting company.  I write about you because *I* want to, not because you want me to.  Requiring me to make a post about your widget is a great way to make me not do it.  Imagine if Walter Cronkite was told he must write about the president’s new social security plan if he ever wanted a one-on-one interview again. Crazy, right?  Journalists choose their topics ahead of time and do research.  The topics they don’t think will be important get shelved.  Much is the same with me.  Rather than flood your RSS feeds with useless garbage, I try to bring entertainment and information.  Inviting me to a product briefing in Rome and then telling me I have to write five posts about it to “pay” for my trip will get you a kind “no thank you”.  Persistence will be met with less kind words.

Keeping those things in mind every time I sit down to write helps me ensure that my opinion is independent and accurate.  If my opinion is bought and paid for, it serves no purpose for anyone.  If I become a mouthpiece, my mind is no longer of use to the community.

What about you?  What about the bloggers that are just starting out?  How can you remain independent?  Sometimes the choices aren’t black and white.  PR people get paid to influence opinion.  At best, they are a useful tool to help a company’s image.  At worst, they aren’t much better than con men.  The shady ones can trick you into doing what they want quite subtly.  They will give you suggested posts or offer to help you craft a more effective message.  They’ll ask you for editorial control over your writing.  They may even end up writing your message for you over the course of your communication.

The key to remaining independent is to remember: When the words coming out of your mouth aren’t yours, you are no longer independent.  If you find yourself being coached by a vendor in what they want you to post, you’re now an unpaid employee of their PR department.  Don’t give in.  Make sure your terms are clear up front.  Ensure that the vendors and manufacturers know your feelings.  Don’t start down the slippery slope of letting someone else choose your words for you.  It’s okay to ask for advice so long as you keep in mind where the advice is coming from.

I do want to make it clear that there are some vendor-employed bloggers that I consider to be independent.  Christofer Hoff and Brad Hedlund immediately come to mind.  These guys might be employed by a vendor, but they aren’t shills.  They give their thoughts freely without reservation and make sure to define themselves outside their day jobs.  I try to do much the same.  While I do work for a Value Added Reseller (VAR), I consider my blogging activities to be a totally different aspect of my career.  I refuse to kowtow to a vendor or manufacturer for preferential treatment.  I’m nice to those that have earned my respect.  I’m frosty to those that have earned a cold shoulder.  I’m consistent because there is nothing coloring my opinion beyond my own thoughts (and the occasional glass of bourbon).

Tom’s Take

I hold these blogging truths to be self-evident: all bloggers have the rights to life, liberty, and the pursuit of blogging happiness.  At the same time, they have the responsibility to be sure their voice is the one being heard, not someone else’s.  Bloggers are powerful tools in the new media because we are independent voices that express opinion without reservation.  Some won’t care what I have to say about the newest Acme Widget, but others most certainly will.  I won’t influence a product’s success or failure, but I can swing a few people one way or the other.  So I must be sure that my thoughts and words are carefully chosen to reflect what I feel.  I also need to be sure that no one else writes my words for me, either by overt action or shady inaction.  To let my readers be subjected to marketing fluff or untrue opinions that I didn’t write would be a great injustice in my mind.  To that I pledge my blogging honor.

Why I Hate The Term “Cloud”

Now that it has been determined that I Am The Cloud, I find it somewhat ironic that I don’t care much for that word. Sadly, while there really isn’t a better term out there to describe the mix of services being offered to abstract data storage and automation, it still irritates me.

I draw a lot of network diagrams. I drag and drop switches and routers hither and yon. However, when I come to something that is outside my zone of control, whether it be a representation of the Internet, the Public Switched Telephone Network (PSTN), or a private ISP circuit, how do I represent that in my drawing? That’s right, with a puffy little cloud. I use clouds when I have to show something that I don’t control. When there are a large number of devices beyond my control, I wrap them in the wispy borders of the cloud.

So when I’m talking about creating a cloud inside my network, I feel uneasy calling it that.  Why? Because these things aren’t unknown to me. I created the server farms and the switch clusters to connect my users to their data. I build the pathways between storage arrays and campus LAN. There is nothing unknown to me. I’m proud of what I’ve built. Why would I hide it in the clouds? I’d rather show you what the infrastructure looks like.

To the users though, it really is a cloud. It’s a mystical thing sitting out there that takes my data and gives me back output. Users don’t care about TRILL connections or Fibre Channel over Ethernet (FCoE). If I told them their email got ferried around on the backs of unicorns, they’d probably believe me. They don’t really want to know what’s inside the cloud, so long as they can still get to what they want. In fact, when users want to create something cloud-like today without automation and provisioning servers in place, they’re likely to come ask me to do it. Hence the reason why I’m The Cloud.

Crunch Time

Everyone in IT has been there before.  The core switches are melting down.  The servers are formatting themselves.  Packets are being shuffled off to their doom.  Human sacrifice, dogs and cats living together, mass hysteria.  You know, the usual.  What happens next?

Strangely enough, how IT people react to stressful situations such as these has become a rather interesting study of mine.  I know how I react when these kinds of things start happening.  I go into my own “panic mode”.  It’s interesting to think about what changes happen when the stress levels get turned up and problems start mounting.  I start becoming short with people.  Not yelling or screaming, per se.  I start using short declarative sentences at an elevated tone of voice to get my point across.  I begin looking for solutions to problems, however inelegant they may be.  Quick fixes rule over complicated designs.  I’ve trained myself to eliminate the source of stress or the cause of the problem.  I tend to tune out any other distractions until the issues at hand are sorted out.  Should I find myself in a situation where I can’t effect a solution to the problem, or where I’m waiting on someone or something outside my direct control, that is when the stress really starts mounting.  To those that share my “can do” attitude, this makes me look efficient and helpful in times of crisis.  To others, I look like a complete jerk.

I’ve also found that there are others in IT (and elsewhere) that have an entirely different method of dealing with stress: they shut down.  My observations have shown that these people become overwhelmed with the pressure of the situation almost immediately and begin finding ways to cope through indirect action.  Some begin blaming the problem on someone or something else.  Rather than search out the source of the trouble, they try to pin it on someone other than themselves, maybe in the hope they won’t have to deal with it.  These people begin to withdraw into their own world.  They sit down and stare off into space.  They become quiet.  Some of them even break down and start to cry (yes, I’ve seen that happen before).  Until the initial shock of the situation has passed, they find themselves incapable of rendering any kind of assistance.

How do we as IT professionals deal with these two disparate types of panic modes?  You need to work that out now, so that you aren’t improvising when the core switches are dropping packets and the CxOs are screaming for heads.  (Funny how the blamers and the shut-down types always seem to end up in management.)

For people like me, the “doers”, we need to be doing something that can impact the problem.  No busy work, no research.  We need to be attacking things head-on.  Any second we aren’t in attack mode compounds the stress we’re under.  Even if we try a hundred things and ninety-nine of them fail, we have to keep trying to keep from going crazy.  Think of these “doers” like a wind-up toy: get us working on something and let us go.  You might not want to be around us while we’re working, unless you want some curt answers followed by looks of distaste when we have to stop and explain what we’re doing.  We’ll share…when we’re done.

For the other type of people, those that have a stress-induced Blue Screen of Death (BSoD), I’ve found that you have to do something to get them out of their initial funk.  Sometimes, this involves busy work.  Have them research the problem.  Have them go get coffee.  In most cases, have them do something other than be around you while you’re troubleshooting.  Once you can get them past the blame/sulk/cry state, they can become a useful resource for whatever needs to happen to get the problem solved.  Usually, they come back to me later and thank me for letting them help.  Of course, they also usually tell me I was a bit of an ass and should really be nicer when I’m in panic mode.  Oh well…

Tom’s Take

I don’t count on anyone in a stressful situation that isn’t me.  Most often, I don’t have the luxury of time to figure out how a person is going to react.  If you can help me I’ll get you doing something useful.  If not, I’m going to ignore or marginalize you until the problem is fixed.  Over the last couple of years, though, I’ve found that I really need to start working with every different group to ensure that communications are kept alive during stressful situations and no one’s feelings get hurt (even though I don’t normally care).  By consciously realizing that people generally fall into the “doer” or “BSoD” category, I can better plan for ways to utilize them when the time comes and make sure that the only thing going CRUNCH at crunch time is the problem.  And not someone’s head.

Forest for the Trees

If you work in data center networking today, you are probably being bombarded from all sides by vendors pitching their new fabric solutions.  Every major vendor from Cisco to Juniper to Brocade has some sort of new solution that allows you to flatten your data center network and push a collapsed control plane all the way to the edges of your network.  However, the more I look at it, the more it appears to me that we’re looking at a new spin on an old issue.

Chassis switches are a common sight in high-density network deployments.  They contain multiple interfaces bundled into line cards that are all interconnected via a hardware backplane (or fabric).  There is usually one or more intelligent pieces running a control plane and making higher-level decisions (usually called a director or a supervisor).  This is the basic idea behind the switch architecture that has been driving networking for a long time now.  A while back, Denton Gentry wrote a very interesting post about the reasoning behind vendors supporting chassis-based networking the way they do.  By parking an enclosure in your networking racks that can only be populated with hardware purchased from the vendor you bought it from, they can count on you as a customer until you grow tired enough to rip the whole thing out and start all over again with Vendor B.  Innovation does come, and it allows you to upgrade your existing infrastructure over and over again with new line cards and director hardware.  However, you can’t just hop over to Vendor C’s website, buy a new module, and plug it into your Vendor A chassis.  That’s what we call “lock-in”.  Not surprisingly, this idea soon found its way into the halls of IBM, HP, and Sun to live on as the blade server enclosure.  Same principle, only revolving around the hardware that plugs into your network rather than being the network itself.  Chassis-based networking and server hardware make a fortune for vendors every year thanks to repeat business.  Hold that thought; we’ll be back to it in just a minute.

Now, every vendor is telling you that data center networking is growing bigger and faster every day.  Your old-fashioned equipment is dragging you down, and if you want to support new protocols like TRILL and 40Gig/100Gig Ethernet, you’re going to have to upgrade.  Rest assured though, because we will interoperate with the other vendors out there to keep you from spending tons of money to rip out your old network and replace it with ours.  We aren’t proprietary.  Once you get our solution up and running, everything will be wine and roses.  Promise.  I may be overselling the rosy side here, but the general message is that interoperability is king in the new fabric solutions.  No matter what you’ve got in your network right now, we’ll work with it.

Now, if you’re a customer looking at this, I’ve got a couple of questions for you to ask.  First, which port do I plug my Catalyst 4507 into on the QFabric Interconnect?  What is the command to bring up an IRF instance on my QFX3500?  Where should I put my HP 12500 in my FabricPath deployment?  Odds are good you’re going to be met with looks of shock and incredulity.  Turns out, interoperability in a fabric deployment doesn’t work quite like that.

I’m going to single out Juniper here and their QFabric solution not because I dislike them.  I’m going to do it because their solution most resembles something we already are familiar with – the chassis switch.  The QFX3500 QFabric end node switch is most like a line card where your devices plug in.  These are connected to QFX3008 QFabric Interconnect Switches that provide a backplane (or fabric) to ensure packets are forwarded at high speeds to their destinations.  There is also a supervisor on the deployment providing control plane and higher-level functions, in this case referred to as the QF/Director.  Sound familiar?  It should.  QFabric (and FabricPath and others) look just like exploded chassis switches.  Rather than being constrained to a single enclosure, the wizards at these vendors have pulled all the pieces out and spread them over the data center into multiple racks.

Juniper must get asked about QFabric and whether or not it’s proprietary a lot, because Abner Germanow wrote an article entitled “Is QFabric Proprietary?” where he says this:

Fact: A QFabric switch is no more or less proprietary than any Ethernet chassis switch on the market today.

He’s right, of course.  QFabric looks just like a really big chassis switch and behaves like one.  And, just like Denton’s blog post above, it’s going to be sold like one.

Now, instead of having a chassis welded to one rack in your data center, I can conceivably have one welded to every rack in your data center.  By putting a QFX3500/Nexus 5000 switch in the top of every rack and connecting it to QFabric/FabricPath, I provide high speed connectivity over a stretched out backplane that can run to every rack you have.  Think of it like an interstate highway system in the US, high speed roads that allow you to traverse between major destinations quickly.  So long as you are going somewhere that is connected via interstate, it’s a quick and easy trip.

What about interoperability?  It’s still there.  You just have to make a concession or two.  QFabric end nodes connect to the QF/Interconnects via 40Gbps connections.  They aren’t Ethernet, but they push packets all the same.  Since they aren’t standard Ethernet, you can only plug in devices that speak QFabric (right now, the QFX3500).  If you want to interconnect to a Nexus FabricPath deployment or a Brocade VCS cluster, you’re going to have to step down and use slower standardized connectivity, such as 10Gbps Ethernet.  Even if you bundle them into port channels, you’re going to take a performance hit for switching traffic off of your fabric.  That’s like exiting the interstate system and taking a two-lane highway.  You’re still going to get to your destination, it’s just going to take a little longer.  And if there’s a lot of traffic on that two-lane road, be prepared to wait.
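Some quick back-of-the-envelope math shows why that step-down off the fabric matters.  The link speeds here match the rates mentioned above, but the 1 TB dataset size and the 90% efficiency derating are my own illustrative assumptions, not measured QFabric or FabricPath numbers:

```python
# Rough comparison: moving 1 TB between racks on-fabric vs. stepped down
# to a 2x10GbE port channel.  Numbers are illustrative, not benchmarks.

def transfer_seconds(size_bytes, link_bps, efficiency=0.9):
    """Ideal transfer time, derated by a rough protocol-overhead factor."""
    return (size_bytes * 8) / (link_bps * efficiency)

terabyte = 10**12

on_fabric = transfer_seconds(terabyte, 40 * 10**9)   # 40Gbps fabric uplink
off_fabric = transfer_seconds(terabyte, 20 * 10**9)  # 2x10GbE port channel

print(f"On fabric:  {on_fabric / 60:.1f} minutes")   # ~3.7 minutes
print(f"Off fabric: {off_fabric / 60:.1f} minutes")  # ~7.4 minutes
```

Twice the transfer time for a single flow, before you account for any congestion on that two-lane road.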

Interoperability only exists insofar as to provide a bridge to your existing equipment.  In effect, you are creating islands of vendor solutions in the Ocean of Interoperability.  Once you install VCS/FabricPath/QFabric and see how effectively you can move traffic between two points, you’re going to start wanting to put more of it in.  When you go to turn up that new rack or deployment, you’ll buy the fabric solution before looking at other alternatives since you already have all the pieces in place.  Pretty soon, you’re going to start removing the old vendor’s equipment and putting in the new fabric hotness.  Working well with others only comes up when you mention that you’ve already got something in place.  If this was a greenfield data center deployment, vendors would be falling all over you to put their solution in place tomorrow.


Tom’s Take

Again, I’m not specifically picking on Juniper in this post.  Every vendor is guilty of the “interoperability” game (yes, the quotes are important).  Abner’s post just got my wheels spinning about the whole thing.  He’s right though.  QFabric is no more proprietary than a Catalyst 6500 or other chassis switches.  It all greatly depends on your point of view.  Being proprietary isn’t a bad thing.  Using your own technology allows you to make things work the way you want without worrying about other extraneous pieces or parts.  The key is making sure everyone knows which pieces only work with your stuff and which pieces work with other people’s stuff.

Until a technology like OpenFlow comes fully into its own and provides a standardized method for creating these large fabrics that can interconnect everything from a single rack to a whole building, we’re going to be using this generation of QFabric/FabricPath/VCS.  The key is making sure to do the research and keep an eye out for the trees so you know when you’ve wandered into the forest.

I’d like to thank Denton Gentry and Abner Germanow for giving me the ideas for this post, as well as Ivan Pepelnjak for his great QFabric dissection that helped me sort out some technical details.

You Don’t Need Gigabit, But We Do

Stacy Higginbotham wrote a thought-provoking article last week entitled “The Elephant in the Gigabit Network Room”.  Therein, she talks about how many providers are starting to bring gigabit connectivity to residential areas for prices in the $200-$300 range.  She also argues that this is overkill for most customers, since many devices today can’t sustain transfer rates above 500 Mbps and the majority of the content being consumed consists of low-speed, bandwidth-light services like Twitter.  She goes on to say that while there may be applications for gigabit broadband, they are few and far between right now and don’t justify the cost when something like a 25 Mbps downstream cable modem would suffice just as well.

Allow me to disagree here.

I think one of the reasons why this article sounded flawed to me is because it sounds based on the idea that people still use one computer at a time.  The more I thought about it, the more I realized that the supposition that gigabit residential service for a single machine is overkill is indeed correct.  However, that’s where my opinion diverges.  I would argue that today’s residential networks are starting to resemble small enterprise networks with regard to bandwidth usage.

Think about all the things that you are doing with your home networks right now.  Sure, there’s a fair amount of low-bandwidth web surfing going on.  We use Twitter and Facebook to post status updates.  We check email.  We look up things on Wikipedia to win Internet arguments.  If that was it, I would say that even 100 Mbps or 25 Mbps service would be more than you’d ever need.  But go deeper.  We now use Netflix to stream movies to our televisions.  We use iTunes to download content to all manner of devices.  Hulu, Boxee, and Vudu are all clamoring for attention and bandwidth.  Even simple BitTorrent transfers can suck up an entire pipe.  Now imagine all this coupled with the blah blah cloud services coming down the pipe.  We even use cloud-ish services today.  Gigabytes of pictures uploaded to Picasa and Flickr.  Video uploaded to YouTube and Vimeo.  Music streaming coming from Google, Amazon, Apple, and anyone else with a handheld device with a headphone jack.  We can even run our household phone system over the Internet.  Not to mention FaceTime, Telepresence, and all manner of real-time video communications.  Sounds to me like that little cable modem is starting to get a bit crowded.
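If you add up just a handful of those services running at once, that 25 Mbps pipe fills fast.  The per-service rates below are ballpark figures I’ve picked for illustration, not measurements:

```python
# Rough tally of simultaneous household traffic against a 25 Mbps cable modem.
# Per-service rates are ballpark assumptions for illustration only.

services_mbps = {
    "HD Netflix stream": 5.0,
    "Second HD stream (iTunes/Hulu)": 5.0,
    "Music streaming": 0.3,
    "Video call (FaceTime)": 1.0,
    "Photo upload to Picasa/Flickr": 4.0,
    "BitTorrent / large download": 10.0,
}

total = sum(services_mbps.values())
print(f"Concurrent demand: {total:.1f} Mbps on a 25 Mbps pipe")
```

One busy evening in an average household and the “more than you’d ever need” pipe is already oversubscribed.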

Another argument against gigabit networking is the inability of devices to use the full bandwidth.  Specifically, the lack of gigabit wireless networking is pointed out in the article.  Right now, she’s right.  However, with 802.11ac coming down the pipe and WiGig coming to the 60 GHz spectrum sooner rather than later, I think it’s better if we have the broadband infrastructure in place ahead of time.  In the article, it is stated that a generic laptop only hit 420 Mbps downstream in a test.  Okay, so with a little optimization we could probably hit 600 Mbps easily.  Did they test several sites to be sure it wasn’t a transit network issue?  Did they pull from a close FTP server with a high-speed backbone?  Or were they clocking Windows Update?  Most machines will eat any amount of bandwidth you throw at them.  Even if you peaked at 500 Mbps out of the box, that’s still five times faster than a 100 Mbps network.  Think about what would happen in your enterprise if you granted users the ability to run gigabit all the way to the desktop.  Files could be transferred faster internally.  Content could be pushed with little effort.  Imagine again what might happen if you then brought those same users back down to 100 Mbps.  You’d have a mutiny on your hands.  When driving on the highway, 80 MPH only seems fast when you first get going.  Once you’ve been cruising there for a while, 60 MPH seems like a standstill.  I think that even half a gigabit connection per machine is still amazingly fast, especially when that pipe starts getting crowded as I’ve outlined above.
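To put some numbers on the difference those line rates make, here’s the simple transfer-time arithmetic for a 4 GB file (the file size is my example; real-world throughput will of course come in lower than line rate):

```python
# How long a 4 GB file takes at various line rates, assuming ideal
# throughput.  The file size is an illustrative example.

def minutes_to_transfer(size_gb, rate_mbps):
    megabits = size_gb * 8 * 1000  # decimal GB -> megabits
    return megabits / rate_mbps / 60

for rate in (25, 100, 500, 1000):  # Mbps
    print(f"{rate:>4} Mbps: {minutes_to_transfer(4, rate):.1f} minutes")
```

That’s roughly 21 minutes at 25 Mbps versus about half a minute at a gigabit.  Once you’ve lived at the fast end of that table, the slow end feels like a standstill.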

The final argument is that there is no killer app that necessitates paying such high fees for gigabit service.  One service discussed by the author is online backup.  This, however, is dismissed as being too infrequent to be useful to a customer paying a monthly charge.  Let me ask this of you out there: how crazy did the idea of downloading music on the Internet seem when the fastest connection we could muster was 56k?  How about watching movies in our houses solely over the Internet when 128k ISDN was the fastest kid on the block (and exorbitantly high priced for its time, too)?  Why code an app if you know it can’t work to its fullest potential today?  What about continuous online backup?  If you’ve already got the pipe to handle it, why not keep a running backup of your files out in the blah blah cloud?  HD streaming video to multiple devices simultaneously?  What about the burgeoning website designs that seem to take more and more bandwidth every day with Flash landing pages, Flash ads, Shockwave menus, and more?  If we start running gigabit to our houses, I can promise you that there will be apps written to take advantage of those big fat pipes.

Tom’s Take

Yes, running a gigabit pipe into my house would probably be overkill right now.  Despite my protestations to the contrary, my wife realizes that I don’t need to have the ability to instantly download anything and everything on the Internet.  But I also see that as we start placing more and more content and information outside of our computers and in the blah blah cloud, we’re going to get very impatient to get that content quickly.  HD video, 27 megapixel images, and enough MP3s to sink an aircraft carrier stored somewhere in an online vault and we have to have it NAO!  Just because 100 Mbps would do anyone just fine today doesn’t mean that there isn’t a market for gigabit residential service.  It’s like saying that just because we can only drive 65-75 MPH on the highway there’s no need for sports cars that can do 130.  Someone out there will find a use for it if it’s available.  If nothing else, the blah blah cloud providers should be championing us to get the fastest available connections and start storing everything we have with them.  That way, we don’t have to spend so much time worrying about where our stuff is being stored.  We just click it and go.