SDN and NFV – The Ups and Downs


I was pondering the dichotomy between Software Defined Networking (SDN) and Network Function Virtualization (NFV) the other day.  I’ve heard a lot of vendors and bloggers talking about how one inevitably leads to the other.  I’ve also seen a lot of folks saying that the two couldn’t be further apart on the scale of software networking.  The more I thought about these topics, the more I realized they are two sides of the same coin.  The problem, at least in my mind, is one of perspective.

SDN – Planning The Paradigm

Software Defined Networking telegraphs everything about what it is trying to accomplish right there in the name.  Specifically, the “Defined” part of the phrase.  I’ve made jokes in the past about the lack of definition in SDN as vendors try to adapt their solutions to fit the buzzword mold.  What I finally came to realize is that the SDN folks are all about definition.  SDN is the Top Down approach to planning.  SDN seeks to decompose the network into subsystems that can be replaced or reprogrammed to suit the needs of the things that utilize the network.

As an example, SDN breaks the idea of a switch down into things like “forwarding plane” and “control plane” and seeks to replace the control plane with alternative software, whether it be a controller-based architecture like OpenFlow or an overlay network similar to that of VMware/Nicira.  We can replace the OS of a switch with a concept like OpenFlow easily.  It’s just a mechanism for determining which entries are populated in the Content Addressable Memory (CAM) tables of the forwarding plane.  In Top Down design, it’s easy to create a stub entry or “black box” to hold information that flows into it.  We don’t particularly care how the black box works from the top of the design, just that it does its job when called upon.
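To make that black box idea concrete, here is a minimal Python sketch of the separation.  This is not the actual OpenFlow protocol or any vendor API, and every class and method name is invented for illustration: the controller decides which entries get installed, and the switch does nothing but match against its table and forward.

```python
# Toy model of the control plane / forwarding plane split.  The Switch only
# holds a lookup table (standing in for CAM/TCAM entries) and forwards based
# on it.  All decisions live in the Controller, outside the box.

class Switch:
    def __init__(self):
        self.flow_table = {}                # forwarding plane state

    def install_flow(self, match, action):
        # Called by the controller; the switch itself makes no decisions.
        self.flow_table[match] = action

    def forward(self, dst_mac):
        # Unknown traffic would be punted to the controller in a real design.
        return self.flow_table.get(dst_mac, "send-to-controller")


class Controller:
    """The replaceable 'black box' control plane living outside the switch."""
    def __init__(self, switches):
        self.switches = switches

    def learn(self, dst_mac, port):
        # Push a forwarding entry down to every switch we manage.
        for sw in self.switches:
            sw.install_flow(dst_mac, f"output:{port}")


sw = Switch()
ctrl = Controller([sw])
ctrl.learn("aa:bb:cc:dd:ee:ff", 3)
print(sw.forward("aa:bb:cc:dd:ee:ff"))      # output:3
print(sw.forward("11:22:33:44:55:66"))      # send-to-controller
```

From the top of the design, swapping the Controller class for something smarter changes nothing about the Switch.  That is the whole appeal of the approach, and also the risk described below.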

Top Down designs tend to run into issues when those black boxes lack detail or are missing some critical functionality.  What happens when OpenFlow isn’t capable of processing flows fast enough to keep the CAM table of a campus switch populated with entries?  Is the switch going to fall back to process switching the packets?  That could be a big issue.  Top Down designs are usually very academic and elegant.  They also have a tendency to lack concrete examples and real world experience.  When you think about it, that says a lot about the early days of SDN – lots of definition of terminology and technology, but a severe lack of actual packet forwarding.

NFV – Working From The Ground Up

Network Function Virtualization takes a very different approach to the idea of turning hardware networks into software networks.  The driving principle behind NFV is replication of existing technology in a software state.  This is classic Bottom Up design.  Rather than spending a large amount of time planning and assembling the perfect system, Bottom Up designers tend to build as they go.  They concentrate on making something work first and making those things work together second.

NFV is great for hands-on folks because it gives concrete, real results almost immediately.  Once you’ve converted a load balancer or a router to a purely software-based construct you can see right away how it works and what the limitations might be.  Does it consume too many resources on the hypervisor?  Does it excel at forwarding small packets?  Does switching a large packet locally cause a fault?  These are problems that can be corrected in the individual system rapidly rather than waiting to modify the overall plan to account for difficulties in the virtualization process.

Bottom Up design does suffer from some issues as well.  The focus in Bottom Up is on getting things done on a case-by-case basis.  What do you do when you’ve converted all your hardware to software?  Do your NFV systems need to talk to one another?  That’s usually where Bottom Up design starts breaking down.  Without a grand plan at a higher level to ensure that systems can talk to each other, this design methodology falls back to a series of “hacks” to get them connected.  Units developed in isolation aren’t required to play nice with everyone else until they are forced to do so.  That leads to increasingly complex and fragile interconnection systems that could fail spectacularly should the wrong thread be yanked with sufficient force.


Tom’s Take

Which method is better?  Should we spend all our time planning the system and hope that our PowerPoint designs work the right way when someone codes them in a few months?  Or should we say “damn the torpedoes” and start building things left and right and hope that someone will figure out a way to tie all these individual pieces together at some point?

Surprisingly, the most successful design requires elements of both.  People need at least a basic plan when they set out to change the networking world.  Once the ideas are sketched out, you need a team of folks willing to burn the midnight oil and get those ideas implemented in real life to ensure that the plan works the right way.  The guidance from the top is essential to making sure everything works together in the end.

Whether you are leading from the top or the bottom, remember that everything has to meet in the middle sooner or later.

Layoffs – Blessing or Curse?


On August 15, Cisco announced that it would be laying off about 4,000 workers across various parts of the organization.  The timing of the announcement comes after the end of Cisco’s fiscal year.  Most of the time when Cisco has announced large layoffs of this sort, it has come in the middle of August after they analyze their previous year’s performance.  Reducing their workforce by 5% isn’t inconsequential by any means.  For the individual employee, a layoff means belt tightening and resume updating.  It’s never a good thing.  But taking the layoffs in a bigger frame of mind, I think this reduction in force will have some good benefits on both sides.

Good For The Goose

If the headline had instead read “Cisco Removes Two Product Lines No One Uses Anymore” I think folks would have cheered.  Cisco is forever being chastised that it needs to focus on the core networking strategy and stop looking at all these additional market adjacencies.  Cisco made 13 acquisitions in the last twelve months.  Some of them were rather large, like Meraki and Sourcefire.  Consolidating development and bringing that talent on board almost certainly would have required that some other talent be removed.  Suppose that the layoffs really did only come from product lines that had been removed, such as the Application Control Engine (ACE).  Is it bad that Cisco is essentially pruning away unneeded product lines?  With the storm of software defined networking on the horizon, I think a slimmer, more focused Cisco is going to come out better in the long run.  Given that the powers that be at Cisco are actively trying to transform into a software company, I’d bet that this round of layoffs likely serves to refocus the company towards that end.

Good For The Gander

Cisco’s loss is the greater industry’s gain.  You’ve got about 4,000 very bright people looking for work in the industry now.  Startups and other networking companies should be snapping those folks up as soon as they come onto the market.  I’m sure there’s a hotshot startup out there yelling at their screen as I type this about how they don’t want to hire some washed-up traditional network developer and their hidebound thinking.  You know what those old fuddy duddies bring to your environment?  Experience.  They’ve made a ton of mistakes and learned from all of them.  Odds are good they won’t be making the same ones in your startup.  They also bring a calm voice of reason that tells you not to ship this bug-ridden batch of code and instead to tell the venture capital mouthpieces to shove it for a week while you keep this API from deleting all the data in the payroll system when you query it from Internet Explorer.  But if you don’t want that kind of person keeping you from shooting yourself in the foot with a Howitzer, then you don’t really care who is being laid off this week.  Unless it just happens to be you.


Tom’s Take

Layoffs suck.  Having been a part of a couple in my life, I can honestly say that the uncertainty and doubt of not having a job tomorrow weighs heavily on the mind.  The bright side is that you have an opportunity to go out and make an impact in the world that you might not have otherwise had if you had stayed in your old position.  Likewise, if a company is laying off folks for the right reasons then things should work out well for them.  If the layoffs serve to refocus the business or change a line of thinking, that is an acceptable loss.  If it’s just a cost-cutting measure to serve the company up on a silver platter for acquisition, or a shameless attempt to boost the bottom line and get a yearly bonus, that’s not the right way to do things.  Companies and talent are never immediately better off when layoffs occur.  In the end you have to hope that it all works out for everyone.

Just One More Slide


More than one presentation that I’ve been to has been a festival of slides.  People cycle through page after page of graphics and eye chart text.  The problem with those kinds of slides is that they tend to bore the audience.  When the audience gets bored, their attention tends to wander.  And when it does, you get people asking to move through the presentation a bit faster.  They might even ask you to skip to the end.  That’s when you sometimes hear the trademark phrase of a marginal presenter:

“But, I just have one more slide.”

I really don’t like this phrase.  This smacks of a presentation that is more important than it needs to be.  I think back to a famous quote by Coco Chanel:

“Before you leave the house, look in the mirror and take something off.”

Coco has a great point here.  No matter how beautiful you think something might be, something can almost always be removed.  In the same way, there’s almost always a slide that can be removed in any presentation.  Based on some presentations that I’ve been forced to sit through in a former life, there are usually many slides that can be removed.  The point is that no one slide should be that critical to your presentation.

One More Slide is the siren call of a nervous presenter.  When someone has spent all their free time practicing a presentation because they don’t feel totally comfortable speaking in front of people, they tend to obsess over details.  They spend all their time practicing their delivery over and over again, down to making the same jokes, to be sure they don’t sound rehearsed.  That’s how they plan on making it through their presentation – by making sure that nothing can derail them.  When the time comes to present to the group they feel like they must go through every slide in the order that it was rehearsed or else they will fail.  They have absolutely no faith in their ability to ad lib if needed.

At any point during a presentation, you need to feel comfortable enough with your speaking ability to jettison the slide deck and just talk if needed.  Good speakers can work from a minimal slide deck.  The best speakers don’t need one at all.  Being able to give your presentation without your slide deck is the sign of a well prepared person.  But being able to move around in your presentation deck to different subjects shows an even greater ability.  If you get caught up in making sure that your audience sees everything that you’ve put on the screen, you’ve made yourself no better than a boring presenter who reads the bullet points back to the audience.  Each slide should be a self-contained unit that allows you to move on without it and not lose the whole point of the presentation.

Try this next time you want to practice: do your presentation backwards.  Does it still make sense?  Does it still flow easily from slide to slide without a lot of exposition?  If so, you’ve reached the point where you can skip slides with no ill effects.  If you have slides that lead into other slides, you should ask yourself what’s included on those first slides that can’t be included on the later ones.  In the event you have to ditch the last half of your presentation, will things still make sense even if you have to stop in the middle of a slide?  Slides that tease the audience by asking rhetorical questions or attempting to engage the audience usually fall into the category of Leave It Out.  If you have to ask the audience a question to get them engaged, you never had their full attention in the first place.


Tom’s Take

I have a rule of thumb when I present.  If I can’t do my presentation without a network connection, laptop, or even a projector then I’m not ready to do it yet.  My slides serve as my notecards as much as they serve to keep the audience focused.  I need to be prepared to do my talk with just my voice and my hands.  That way, if I’m forced to jettison my prepared notes to explore a discussion topic or I need to shorten my presentation to rush to the airport to beat a blizzard, I’m more than ready.  When you can give a presentation without needing to rely on aids, then you are truly ready to go without one more slide.

Under the Influencers


I’ve never really been one for titles or labels.  Pejorative terms like geek or nerd never bothered me growing up.  I never really quibbled over being called a technician or an engineer (or rock star).  And when the time came to define what it was that I did in my spare time in front of a monitor and keyboard I just settled on blogger because that was the most specific term that described what I did.  All that changed this year.

When I went to VMware Partner Exchange, I spent a lot of time hanging out with Amy Lewis (@CommsNinja) from Cisco.  Part of this was due to my filming of an IPv6-focused episode of Engineers Unplugged.  Afterwards, I spent a lot of time as a fly on the wall listening to conversations among the assembled folks.  I saw how they interacted with each other.  I took copious notes and tried to stay out of the way as much as possible.  Not that Amy made that easy at all.  She went out of her way to pull me out of the shadows and introduce me to people that mattered and made decisions on a much grander scale than I was used to.  What struck me is not that she did that.  What made me think was how she introduced me.  Not as a nerd or an engineer or even as a blogger.  She used a very specific word.

Influencer

It took some time before the enormity of what Amy was doing sank in.  Influencers are more than just a blog or a Facebook page or a Twitter handle.  They take all of those things and wrap them into a package that is greater than the sum of its parts.  They say things that other people listen to and consider.  The more I thought about it, the more it made sense.

I think of influencers as people like Stephen Foskett (@SFoskett), Greg Ferro (@etherealmind), or Ivan Pepelnjak (@IOSHints).  When those guys speak, people listen.  When they publish a podcast or write a product review, it turns heads.  Every field has influencers.  Seasoned people that have been there and done just about everything.  Those people then spend their time educating the greater whole to avoid making the same mistakes all over again or to help those with the ability to find the vision needed to do great things.  They don’t hold that knowledge to themselves and use it as capital to fight political battles or profit from those that don’t know any better.  Being a blogger or technical person on the various social media outlets involves a bit of give and take.  It requires a selfless type of attitude.  Too many analyst firms live by the maxim “Don’t give away the farm” when it comes to social media interaction.  Those firms don’t want their people giving away advice that could be locked into a report and assigned a price.  In my mind, true influencers are the exact opposite.

It struck me funny when Amy referred to me in the same way that I thought of others in the industry.  What had I done to earn that moniker?  Who in their right mind would listen to me?  I’m some kid with a keyboard and a WordPress account.  However, the truth of things was a little beyond what I was initially thinking.  It didn’t really hit me until my trip to Cisco Live.

Everyone is an influencer.

Influencers aren’t just luminaries in the industry.  They aren’t the wise old owls that dispense advice like a fortune cookie.  Instead, influencers are people that offer knowledge without reservation for the sole purpose of making the world better off than it was.  You don’t have to have a blog or a Twitter handle to be an influencer.  Those things just make it easy to identify the chatty types.  To really be an influencer, you need only have the desire to speak up when someone asks a question that you have insight into.  If two people are having a conversation about the “best” way to configure something, an influencer will share their opinion freely and without reservation.  It might not be much.  A simple caution about a technology or an opinion about where the industry is headed.  But the influence comes because those people take what you’ve said and incorporate it into their thinking.

I’ve been trying to champion people when it comes to writing and speaking out on social media.  I want more bloggers and Tweeters and Facebookers.  I’ve taken to collectively calling them influencers because of what that term really represents.  I want more influencers in the world.  I want intelligent people giving freely of themselves to advance the industry.  I want to recognize them and tell others to listen to what these people are saying.  Sure, having a blog or a Twitter handle makes it easier to point them out.  But I’m not above telling someone “Go talk to Bob.  He knows a lot about what’s troubling you”.


Tom’s Take

It doesn’t take a lot to be an influencer.  Helping someone decide between detergents at the grocery store makes you an influencer.  What’s important is taking the next step to make it bigger and better.  Make your opinions and analysis heard.  Be public.  Sure, you’re going to be wrong sometimes.  But when you’re right, people will start to listen.  Not just people wanting to know the difference between Tide and Gain.  People that have C-level titles.  Product managers.  People that want to know what the industry is thinking.  When you see that something you’ve said or done has a real impact on a tangible thing, like a website or a product, you can rest easy at night knowing that you have influence.

Nobody Cares

Writing a blog can be very fun and rewarding.  I’ve learned a lot from the things I’ve written.  I’ve had a blast with some of the more humorous posts that I’ve put up.  I’ve even managed to be anointed as the Hater of NAT.  After everything though, I’ve learned something very important about writing.  For the most part, nobody cares.

Now, before you run to your keyboard and respond that you do indeed care, allow me to expound on that idea just a bit.  I’ve written lots of different kinds of posts.  I’ve talked about educational stuff, funny lists, and even activist posts trying to get unpopular policies changed.  What I’ve found is that I can never count on something being popular.  There are days when I sit down in front of my computer and start furiously typing away as if I’m going to change the world with the words that I’m putting out.  When I hit the publish button, it’s as if I’m launching those paragraphs into a black hole.  I’m faced with a reality that maybe things weren’t as important as I thought.

A prime example is the original intent for my blog.  I wanted to write a book about teaching people structured troubleshooting.  I figured if I could get a few of those chapters down as blog posts, it would go a long way to helping me get everything sorted out in my mind.  Now, almost three years later, the two least read posts on my site are those two troubleshooting posts.  There are images on my site that have more hits than those two posts combined.  If I were strictly worried about page views, I’d probably have given up by now.

In contrast, some of the most popular posts are the ones I never put a second thought into.  How about my most popular article about the differences between HP and Cisco trunking?  I just fired that off as a way to keep it straight in my head.  Or how about my post about a throwaway line in a Star Trek movie that exploded on Reddit?  I never dreamed that those articles would be as big as they have ended up being.  I’m continually surprised by the things that end up being popular.

What does this mean for your blogging career?  It means that writing is the most important thing you can do.  You should invest time in creating good quality content.  But don’t get disappointed when people don’t find your post as fascinating as you do.  Just get right back on your blogging horse and keep turning out the content.  Eventually, you’re going to find an unintentional gem that people are going to go wild about.

Despite the old adage, lightning does indeed strike twice.  The Empire State Building is hit around two dozen times per year.  However, you never know when those strikes are going to hit.  Unless you are living in Hill Valley, California you can never know exactly when that bolt from the blue is going to come crashing down.  In much the same way, you shouldn’t second guess yourself when it comes to posting.  Just keep firing them out there until one hits it big.  Whether it comes from endless retweets or a chance encounter with the front page of a news aggregator, you just need to put virtual pen to virtual paper and hope for the best.

Devaluing Experts – A Response

I was recently reading a blog post from Chris Jones (@IPv6Freely) about the certification process from the perspective of Juniper and Cisco. He talks about his view of the value of a certification that allows you to recertify from a dissimilar track, such as the CCIE, as opposed to a certification program that requires you to use the same recertification test to maintain your credentials, such as the JNCIE. I figured that any comment I had would run much longer than the allowed length, so I decided to write it down here.

I do understand where Chris is coming from when he talks about the potential loss of knowledge in allowing CCIEs to recert from a dissimilar certification track. At the time of this writing, there are six distinct tracks, not to mention the retired tracks, such as Voice, Storage, and many others. Chris’s contention is that allowing a Routing and Switching CCIE to continue to recertify from the Data Center or Wireless track causes them to lose their edge when it comes to R&S knowledge. The counterpoint to that argument is that the method of using the same (or updated) test in the certified track as the singular recertification option is superior because it ensures the engineer is always up on current knowledge in their field.

My counterargument to that post is twofold.  The first point that I would debate is that the world of IT doesn’t exist in a vacuum.  When I started in IT, I was a desktop repair technician.  As I gradually migrated my skill set to server-based skills and then to networking, I found that my previous knowledge was important to continue forward but that not all of it was necessary.  There are core concepts that are critical to any IT person, such as the operation of a CPU or the function of RAM.  But beyond the requirement to answer a test question, is it really crucial that I remember the hex address of COM4 in DOS 5.0?  My skill set grew and changed as a VAR engineer to include topics such as storage, voice, security, and even returning to servers by way of virtualization.  I was spending my time working with new technology while still utilizing my old skills.  Does that mean that I needed to stop what I was working on every 18 months to start studying the old CCIE R&S curriculum to ensure that I remembered which OSPF LSA types are present in a totally stubby area?  Or is it more important to understand how SDN is impacting the future of networking while not having any significant concrete configuration examples from which to generate test questions?

I would argue that giving an engineer an option to maintain existing knowledge badges by allowing new technology to refresh those badges is a great idea for vendors that want to keep fresh technology flowing into their organization. The risk of forcing your engineers into a track without an incentive to stay current comes in when you have a really smart engineer that is not capable of thinking beyond their certification area. Think about the old telecommunications engineers that have spent years upon years in their wiring closets working with SS7 or 66-blocks. They didn’t have an incentive or need to learn how voice over IP (VoIP) worked. Now that their job function has been replaced by something they don’t understand many of them are scrambling to retrain or face being left behind in the market. As Steven Tyler once sang, “If you do what you’ve always done, you’ll always get what you’ve always got.”

Continuous Learning

The second part of my counterpoint is that maintaining the level of knowledge required for an expert-level certification shouldn’t rely on 50-100 multiple choice questions.  Any expert-level program should allow for the use of continuing education to recertify the credential on a yearly basis.  This is how the legal bar system works.  It’s also how (ISC)2’s CISSP program works.  By demonstrating that you are acquiring new knowledge continually and contributing to the greater knowledge base, you are automatically put into a position that allows you to continue to hold your certification.  It’s a smart concept that generates new knowledge and ensures that the holders of those certifications stay current.  Think for a moment about changing the topics of an exam.  If the exam is changed every two years, there is a potential for a gap in knowledge to occur.  If someone recertified on the last day of the CCIE version 3 exam, it would have been almost two years before they had to take an exam that required any knowledge of MPLS, which is becoming an increasingly common enterprise core protocol.  Is it fair that the person who took the written exam the next day was required to know about MPLS?  What happens if that CCIEv3 gets a job working with MPLS a few months later?  According to the current version 4 curriculum, the CCIE should know about MPLS.  Yet within the confines of the certification program, that engineer has never had to demonstrate familiarity with the topic.

Instead, if we ensure that the current certification holders are studying new topics such as MPLS or SDN or any manner of networking-related discussions we can be reasonably sure they are conversant with what the current state of the industry looks like. There is no knowledge gap because new topics can be introduced quickly as they become relevant. There is no fear that someone following the letter of the certification law and recertifying on the same material will run into something they haven’t seen before because of a timing issue. Continuous improvement is a much better method in my mind.


Tom’s Take

Recertification is going to be a sticky topic no matter how it’s sliced.  Some will favor allowing engineers to spread their wings and become conversant in many enterprise and service provider topics.  Others will insist that the only way to truly be an expert in a field is to study those topics exclusively.  Still others will say that a melding of the two approaches is needed, either through continuous improvement or true lab recertification.  I think the end result is the same no matter the case.  What’s needed is an agile group of engineers that is capable of not only being an expert in their field but is also encouraged to do things outside their comfort zone without fear of losing that which they have worked so hard to accomplish.  That’s valuable no matter how you frame it.

Note that this post was not intended to be an attack against any person or any company listed herein. It is intended as a counterpoint discussion of the topics.

Big Data? Or Big Analysis?


Unless you’ve been living under a rock for the past few years, you’ve no doubt heard all about the problem that we have with big data.  When you start crunching the numbers on data sets in the terabyte range, the amount of compute power and storage space that you have to dedicate to the endeavor is staggering.  Even at Dell Enterprise Forum, some of the talk in the keynote addresses focused on the need to split the processing of big data down into more manageable parallel sets via the use of new products such as the VRTX.  That’s all well and good.  That is, it’s good if you actually believe the problem is with the data in the first place.

Data Vs. Information

Data is just description.  It’s a raw material.  It’s no more useful to the average person than cotton plants or iron ore.  Data is just a singular point on a graph with no axes.  Nothing can be inferred from that data point unless you process it somehow.  That’s where we start talking about information.

Information is the processed form of data.  It’s digestible and coherent.  It’s a collection of data points that tell a story or support a hypothesis.  Information is actionable data.  When I have information on something, I can make a decision or present my findings to someone to make a decision.  The key is that information is a second-order product.  Information can’t exist without data upon which to perform some kind of analysis.  And therein lies the problem in our growing “big data” conundrum.
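As a toy illustration of that second-order relationship (the readings and the summary below are invented), a handful of raw data points only becomes information once some analysis has been run against them:

```python
# Raw readings are data: individual points with no inherent meaning.
readings = [21.4, 21.7, 22.1, 23.0, 23.8, 24.5]

# Analysis turns them into information: something a person can act on.
average = sum(readings) / len(readings)
trend = "rising" if readings[-1] > readings[0] else "falling or flat"

print(f"Average temperature: {average:.1f}, trend: {trend}")
```

The list itself just sits there; the average and the trend are what someone actually makes a decision with.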

Big Analysis

Data is very sedentary.  It doesn’t really do much after it’s collected.  It may sit around in a database for a few days until someone needs to generate information from it.  That’s where analysis comes into play.  A table is just a table.  It has a height and a width.  It has a composition.  That’s data.  But when we analyze that table, we start generating all kinds of additional information about it.  Is it comfortable to sit at the table?  What color lamp goes best with it?  Is it hard to move across the floor?  Would it break if I stood on it?  All of that analysis is generated from the data at hand.  The data didn’t go anywhere or do anything.  I created all that additional information solely from the data.

Look at the Wikipedia entry for big data.  The image at the top of that entry is one of the better examples of information spiraling out of control from analysis of a data set.  The picture is a visualization of Wikipedia edits.  Note that it doesn’t have anything to do with the data contained in a particular entry.  It’s just tracking what people did to describe that data or how they analyzed it.  We’ve generated terabytes of information just doing change tracking on a data set.  All that information needs to be stored somewhere.  That’s what has people in IT sales salivating.

Guilt By Association

If you want to send a DBA screaming into the night, just mention the words associative entity (or junction table).  In another lifetime, I was in college to become a DBA.  I went through Intro to Databases and learned about all the constructs that we use to contain data sets.  I might have even learned a little SQL by accident.  What I remember most was about entities.  Standard entities are regular data.  They have a primary key that describes a row of data, such as a person or a vehicle.  That data is pretty static and doesn’t change often.  Case in point – how accurate is the height and weight entry on your driver’s license?

Associative entities, on the other hand, represent borderline chaos.  These are analysis nodes.  They carry keys referencing the primary keys of at least two other tables in the database.  They are created when you are trying to perform some kind of analysis on those tables.  They can be ephemeral and are usually generated on demand by things like SQL queries.  This is the heart of my big data / big analysis issue.  We don’t really care about the standard data entities.  We only want the analysis and information that we get from the associative entities.  The more information and analysis we desire, the more of these associative entities we create.  Containing these descriptive sets is causing the explosion in storage and compute costs.  The data hasn’t really grown.  It’s our take on the data that has.
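Here is a minimal sketch of the idea using SQLite (the table and column names are made up for illustration).  The standard entities hold the slow-moving descriptive data, while the junction table exists only to relate them and is keyed by both parents – it is the thing that multiplies as you ask more questions:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Standard entities: fairly static, descriptive data.
    CREATE TABLE person  (person_id  INTEGER PRIMARY KEY, name  TEXT);
    CREATE TABLE vehicle (vehicle_id INTEGER PRIMARY KEY, model TEXT);

    -- Associative entity (junction table): exists only to relate the two,
    -- keyed by both parents.  Rows like these pile up as analysis grows.
    CREATE TABLE ownership (
        person_id  INTEGER REFERENCES person(person_id),
        vehicle_id INTEGER REFERENCES vehicle(vehicle_id),
        purchased  TEXT,
        PRIMARY KEY (person_id, vehicle_id)
    );
""")

db.execute("INSERT INTO person  VALUES (1, 'Tom')")
db.execute("INSERT INTO vehicle VALUES (10, 'Pickup')")
db.execute("INSERT INTO ownership VALUES (1, 10, '2013-06-01')")

# The interesting answers only come from joining through the junction table.
for row in db.execute("""
        SELECT p.name, v.model, o.purchased
        FROM ownership o
        JOIN person  p ON p.person_id  = o.person_id
        JOIN vehicle v ON v.vehicle_id = o.vehicle_id"""):
    print(row)
```

The person and vehicle rows rarely change; it is the ownership-style tables and the query results built on top of them that keep growing.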

Crunch Time

What can we do?  Sadly, not much.  Our brains are hard-wired to try and make patterns out of seemingly unconnected things.  It is a natural reaction that we try to bring order to chaos.  Given all of the data in the world, the first thing we are going to want to do with it is try and make sense of it.  Sure, we’ve found some very interesting underlying patterns through analysis, such as the well-worn story from last year of Target determining a girl was pregnant before her family knew.  The purpose of all that analysis was pretty simple – Target wanted to know how to better pitch products to specific focus groups of people.  They spent years of processing time and terabytes of storage, all for the lofty goal of trying to figure out what 18-24 year old males are more likely to buy during the hours of 6 p.m. to 10 p.m. on a weekday evening.  It’s a key to the business models of the future.  Rather than guessing what people want, we have magical reports that tell us exactly what they want.  Why do you think Facebook is so attached to the idea of “liking” things?  That’s an advertiser’s dream.  Getting your hands on a second-order analysis of Facebook’s Like database would be the equivalent of the advertising Holy Grail.


Tom’s Take

We are never going to stop creating analysis of data.  Sure, we may run out of things to invent or see or do, but we will never run out of ways to ask questions about them.  As long as pivot tables exist in Excel or inner joins happen in an Oracle database, people are going to be generating analysis of data for the sake of information.  We may reach a day where all that information finally buries us under a mountain of ones and zeroes.  We brought it on ourselves because we couldn’t stop asking questions about buying patterns or traffic behaviors.  Maybe that’s the secret to Zen philosophy after all.  Instead of concentrating on the analysis of everything, just let the data be.  Sometimes just existing is enough.

Glue Peddlers


There’s an old adage that says “A chain is only as strong as the weakest link.”  While people typically use this to say that teams are only as strong as their weakest member, I look at it through a different lens.  In my former life as a Value Added Reseller (VAR) engineer, I spent a lot of my time working with technologies that needed to be linked together like a chain.

You have probably seen the lamentations of a voice engineer complaining about fax machines.  If you haven’t, you should count yourself lucky.  Fax machines are the bane of the lives of many telecom folks.  They aren’t that difficult when you get right down to it.  They’re essentially printers with a 9600 baud modem attached for making phone calls.  Indeed, fax machines are probably one of the most robust pieces of technology that I’ve encountered.  I’ve seen faxes covered in dust and grime from a decade or more of use still dutifully churning out page after page of low resolution black-and-white print.

Faxes themselves aren’t the issue.  The problem is that their technology has been eclipsed to the point where interfacing them with the modern world is often difficult and time-consuming.  I usually counsel my customers to leave their fax machines plugged directly into an analog landline to avoid issues.  For those times where that can’t be done, I have a whole bag of tricks to make it work with a voice over IP (VoIP) system.  Adaptors and relays and other such tricks help me figure out how to make this decades-old tech work with a modern PRI or SIP connection.  And don’t even get me started on interfacing a fire alarm with an IP phone system.

The best VARs in the world don’t make their money from reselling a pile of hardware to a customer.  The profits aren’t found in a bill of materials.  Instead, they make money in the glue business.  Tying two disparate technologies together via custom programming or knowledge of processes needed to make dissimilar technology work the right way is their real trade.  This is their “glue.”  I can remember having discussions with people regarding the hardest parts of an implementation.  It’s not in setting up a dial plan or configuring a VM cluster with the right IP address.  It’s usually in making some old piece of technology work correctly.  A fire alarm or a Novell server or an ancient wireless access point can quickly become the focus area of an entire project and consume all your time.

If you really want to differentiate yourself from the pack of “box pushers” out there just reselling equipment, you need to concentrate on the point where the glue needs to be the stickiest.  That’s where the customer’s knowledge is the weakest.  That’s the point that will end up causing the most pain.  That’s where the money is waiting for the truly dedicated.  VARs have already figured this out.  If you want to make yourself valuable to a customer or to a VAR, be the best at gluing these technologies together.  Understand how to make old technology work with new tech.  There’s always going to be new technology coming out to replace what’s being used currently.  And there will always be a customer or two that want to keep using that old technology far past the expiration date.  If you are the one that can tie those two things together with a minimum of effort, you’ll find yourself the most popular peddler in the market.

The Microsoft Office Tablet

I’ve really tried to stay out of the Tablet Wars.  I have a first generation iPad that I barely use any more.  My kids have co-opted it from me for watching on-demand TV shows and playing Angry Birds.  Since I spend most of my time typing blog posts or doing research, I use my laptop more than anything else.  When the Surface RT and Surface Pro escaped from the wilds of Redmond I waited and watched.  I wanted to see what people were going to say about these new Microsoft tablets.  It’s been about 4 months since the release of the Surface Pro and similar machines from vendors like Dell and Asus.  I’ve been slowly asking questions and collecting information about these devices.  And I think I’ve finally come to a realization.

The primary reason people want to buy a Surface tablet is to run Microsoft Office.

Here’s the setup.  I asked everyone that expressed an interest in the Pro version of the Surface (or the Latitude 10 from Dell) a question: What is the most compelling feature of the Surface Pro for you?  The responses that I got back were overwhelming in their similarity.

1.  I want to use Microsoft Office on my tablet.

2.  I want to run full Windows apps on my tablet.

I never heard anything about portability, power, user interface, or application support (beyond full Windows apps).  I specifically excluded the RT model of the Surface from my questions because of the ARM processor and the reliance on software from the Windows App Store.  The RT functions more like Apple/Android tablets in that regard.

This made me curious.  The primary goal of Surface users is to be able to run Office?  These people have basically told me that the only reason they want to buy a tablet is to use an office suite.  One that isn’t currently available anywhere else for mobile devices.  One that has been rumored to be released on other platforms down the road.  While it may be a logical fallacy, it appears that Microsoft risks invalidating a whole hardware platform because of a single application suite.  If they end up releasing Office for iOS/Android, people would flee from the Surface to the other platforms, according to the info above.  Ergo, the only purpose of the Surface appears to be to run one application.  Which is why I’ve started calling it the Microsoft Office Tablet.  Then I started wondering about the second most popular answer in my poll.

Making Your Flow Work

As much as I’ve tried not to use the word “workflow” before, I find that it fits in this particular conversation.  Your workflow is more than just the applications you utilize.  It’s how you use them.  My workflow looks totally different from everyone else’s even though I use similar applications.  I use email and word processing for my own purposes.  I write a lot, so a keyboard of some kind is important to my workflow.  I don’t do a lot of graphics design, so a pen input tablet isn’t really a big deal to me.  The list goes on and on, but you see that my needs are my own and not those of someone else.  Workflows may be similar, but not identical.  That’s where the dichotomy comes into play for me.

When people start looking at using a different device for their workflow, they have to make adjustments of some kind.  Especially if that device is radically different from one they’ve been using before.  Your phone is different from a tablet, and a tablet is different from a laptop.  Even a laptop is different from a desktop, but those two are more similar than most.  When the time comes to adjust your workflow to a new device, there are generally two categories of people:

1.  People who adjust their workflow to the new device.

2.  People who expect the device to conform to their existing workflow.

For users of the Apple and Android tablets, option 1 is pretty much the only option you’ve got.  That’s because the workflow you’ve created likely can’t be easily replicated between devices.  Desktop apps don’t run on these tablets.  When you pick up an iPad or a Galaxy Tab you have to spend time finding apps to replicate what you’ve been doing previously.  Note taking apps, web browsing apps, and even more specialized apps like banking or ebook readers are very commonly installed.  Your workflow becomes constrained to the device you’re using.  Things like on-screen keyboards or lack of USB ports become bullet points in workflow compatibility.  On occasion, you find that a new workflow is possible with the device.  The prime example I can think of is using the camera on a phone in conjunction with a banking app to deposit checks without needing to take them into the bank.  That workflow would have been impossible just a couple of years ago.  With the increase in camera phone resolution, high speed data transfer, and secure transmission of sensitive data made possible by device advancements, we can now picture this new workflow and adopt it easily because a device made it possible.

The other category is where the majority of Surface Pro users come in.  These are the people that think their workflow must work on any device they use.  Rather than modify what they’re doing, they want the perfect device to do their stuff.  These are the people that use a tablet for about a week and then move on to something different because “it just didn’t feel right.”  When they finally do find that magical device that does everything they want, they tend to abandon all other devices and use it exclusively.  That is, until they have a new workflow or a substantial modification to their existing workflow.  Then they go on the hunt for a new device that’s perfect for this workflow.

So long as your workflow is the immutable object in the equation, you are never going to be happy with any device you pick.  My workflows change depending on my device.  I browse Twitter and read email from my phone but rarely read books.  I read books and do light web surfing from a tablet but almost never create content.  I spend a lot of time creating content on my laptop but hate reading on it.  I’ve adjusted my workflows to suit the devices I’m using.

If the single workflow you need to replicate on your tablet revolves around content creation, I think it’s time to examine exactly what you’re using a tablet for.  Is it portability beyond what a laptop can offer?  Do you prefer to hunt and peck around a touch screen instead of a keyboard?  Are you looking for better battery life or some other function of the difference in hardware?  Or are you just wanting to look cool with a tablet in the “post PC world?”  That’s the primary reason I don’t use a tablet that much any more.  My workflows conform to my phone and my laptop.  I don’t find use in a tablet.  Some people love them.  Some people swear by them.  Just make sure you aren’t dropping $800-$1000 on a one-application device.

At the end of the day, work needs to get done.  People are going to use whatever device they want to use to get their stuff done.  Some want to do stuff and move on.  Others want to look awesome doing stuff or want to do their stuff everywhere no matter what.  Use what works best for you.  Just don’t be surprised if complaining that this device doesn’t run your favorite data entry program gets a sideways glance from IT.

Disclaimer:  I own a first generation iPad.  I’ve tested a Dell Latitude 10.  I currently use an iPhone 4S.  I also use a MacBook Air.  I’ve used a Lenovo Thinkpad in the past as my primary workstation.  I’m not a hater of Microsoft or a lover of Apple.  I’ve found a setup that lets me get my job done.

Generation Lost

I’m not trying to cause a big sensation (talking about my generation) – The Who

Naming products is an art form.  When you let the technical engineering staff figure out what to call something, you end up with a model number like X440 or 8086.  When the marketing people get involved at first, you tend to get more order in the naming of things, usually in a series like the 6500 series or the MX series.  The idea that you can easily identify a product’s specs based on its name or a model number is nice for those that try to figure out which widget to use.  However, that’s all changing.

It started with mobile telephones.  Cellular technology has been around in identifiable form since the late 1970s.  The original analog signals worked on specific frequencies and didn’t have great coverage.  It wasn’t until the second generation of this technology moved entirely to digital transmission with superior encoding that the technology really started to take off.  In order to differentiate this new technology from the older analog version, many people made sure to market it as “second generation”, often shortening this to “2G” to save syllables.  When it came time to introduce a successor to the second generation personal communications service (PCS) systems, many carriers started marketing their offerings with the moniker of “3G”, skipping straight past the idea of a third generation offering in favor of the catchier marketing term.  AT&T especially loved touting the call quality and data transmission rate of 3G in advertising.  The 3G campaigns were so successful that when the successor to 3G was being decided, many companies started trying to market their incremental improvements as “4G” to get consumers to adopt them quickly.

Famously, the incremental improvement to high speed packet access (HSPA) that was being deployed en masse before the adoption of Long Term Evolution (LTE) as the official standard was known as high speed downlink packet access (HSDPA).  AT&T petitioned Apple to allow the carrier badge on the iPhone to say “4G” when HSDPA was active.  Even though speeds weren’t nearly as fast as the true 4G LTE standard, AT&T wanted a bit of marketing clout with customers over their Verizon rivals.  When the third generation iPad was released with a true LTE radio later on, Apple made sure to use the “LTE” carrier badge for it.  When the iOS 5 software release came out, Apple finally acquiesced to AT&T’s demands and rebranded the HSDPA network to be “4G” with a carrier update.  In fact, to this day my iPhone 4S still tells me I’m on 4G no matter where I am.  Only when I drop down to 2G does it say anything different.

The fact that we have started referring to carrier standards as “xG” something means the marketing is working.  And when marketing works, you naturally have to copy it in other fields.  The two most recent entries in the Generation Marketing contest come from Broadcom and Brocade.  Broadcom has started marketing their 802.11ac chipsets as “5G wireless.”  It’s somewhat accurate when you consider the original 802.11 standard through 802.11b, 802.11g, 802.11n, and now 802.11ac.  However, most wireless professionals see this more as an attempt to cash in on the current market trend of “G” naming rather than showing true differentiation.  In Brocade’s case, they recently changed the name of their 16G fibre channel solution to “Gen 5” in an attempt to shift the marketing message away from a pure speed measurement (16 gigabit), especially when putting it up against the coming 40 gigabit fibre channel over Ethernet (FCoE) offerings from their competitor Cisco.

In both of these cases, the shift has moved away from strict protocol references or speed ratings.  That’s not necessarily a bad thing.  However, the shift to naming it “something G” reeks quite frankly.  Are we as consumers and purchasers so jaded by the idea of 3G/4G/5G that we don’t get any other marketing campaigns?  What if they’d called it Revision Five or Fifth Iteration instead?  Doesn’t that convey the same point?  Perhaps it does, but I doubt more than a handful of CxO type people know what iteration means without help from a pocket dictionary.  Those same CxOs know what 4G/5G mean because they can look down at their phone and see it.  More Gs are better, right?

Generational naming should only be used in the broadest sense of the idea.  It should only be taken seriously when more than one company uses it.  Is Aruba going to jump on the 5G wireless bandwagon?  Will EMC release a 6G FC array?  If you’re shaking your head in answer to these questions, you probably aren’t the only one.  Also of note in this discussion – what determines a generation?  IT people have trouble keeping track of what constitutes the difference between a major version change and a point release update.  Why did 3 features cause this to be software version 8.0 when the 97 new features in the last version only made it go from 7.0 to 7.5?  Also, what’s to say that a company doesn’t just skip over a generation?  Why was HSDPA not good enough to be 4G?  Because the ITU said it was just an iteration of 3G and not truly a new generation.  How many companies would have looked at the advantage of jumping straight to 5G by “counting” HSDPA as the fourth generation, absent oversight from the ITU?


Tom’s Take

My mom always told me to “call a spade a spade.”  I don’t like the idea of randomly changing the name of something just to give it a competitive edge.  Fibre channel has made it this far as 2 gig, 4 gig, and 8 gig.  Why the sudden shift away from 16 gig?  Especially if you’re going to have to say it runs at 16 gig so people will know what you’re talking about?  Is it a smoke-and-mirrors play?  Why does Broadcom insist on naming their wireless 5G?  802.11a/b/g/n has worked just fine up to now.  Is this just an attempt to confuse the consumer?  We may never know.  What we need to do in the meantime is hold feet to the fire and make sure that this current naming “generation” gets lost.