
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Cisco Live 2013 Wrap Up

[Photos: Cisco Live 2013 tweetup]

Cisco Live 2013 in Orlando is in the books. I’m sitting in the airport once again thinking about what made this year so special. It’s interesting to see the huge number of people coming to events like this: all manner of folks that want to see what Cisco is bringing to the market, as well as those that want to talk to the best and brightest in the networking world.

I arrived on Saturday afternoon after taking a direct flight from Tech Field Day 9 in Austin. I made sure to pack a few extra clothes to be sure I’d have something to wear in the Orlando heat. As soon as I arrived and checked into my hotel, I headed down to the registration desk. Once I picked up my NetVet badge, I headed right next door to the Social Media Hub:

[Photo: the Cisco Live Social Media Hub]

This area has grown substantially since its introduction at Cisco Live 2012. And when you consider that my original meetup area was a corner table with three chairs, you can’t help but feel awed at this presentation. I was very impressed to see the lounge aspect fully realized, and the ample amount of seating provided a great place for attendees to hang out between sessions. Many of the Twitter folks at Cisco Live like Justin Cohen (@Grinthock) and Patrick Swackhammer (@swackhap) even used the Social Media Hub to watch the keynote addresses and comment on Twitter as they happened. They might have even exceeded Twitter’s tweet limits for a given time period and gotten temporarily silenced. It was impressive to see social media being used as the primary method of giving feedback during these big events.

Speaking of social media, the Sunday evening tweetup was a huge success. We had more than 50 people packed into our little corner of the Social Media Hub enjoying good conversation and amazing company. We even got a surprise visit from the former host of Cisco Live, Carlos Dominguez (@carlosdominguez), who stopped by to chat for a bit. We had a chef making Cherries Jubilee along with all the caffeinated and sugary snacks that one could hope for. I jumped on a chair to say a quick “thank you” to all those that attended. Events like this are the way to show the higher-ups at Cisco how important social media is to a coherent and vibrant business strategy going forward.

Transportation seemed to be a commonly discussed theme at the event this year, though not usually in a positive manner. While the hotel shuttle system kept up rather well with demand and even offered in-bus wifi connectivity, the whole system seemed to break down when forced to cope with large crowds. The CCIE party on Tuesday and the Customer Appreciation Event (CAE) on Wednesday both had large numbers of folks waiting for a very small number of buses. The most commonly heard explanation was heavy traffic around the convention center. I would love to believe that, but what I remember more than anything else is a few hundred people standing around in the oppressive Florida humidity waiting for one of the dwindling spots on the few running buses. San Francisco is a much friendlier city for walking, but I’d still rather avoid a repeat of this year’s issues.

The best part of Cisco Live is the people. I rekindled so many outstanding friendships this year and made quite a few new ones as well. I was astounded at the number of people that would stop me in the hallway to say hello or thank me for writing. Almost everyone was appreciative of the input that I gave into all the social media events. Truth be told, I didn’t really do that much. I helped out with a couple of things here and there, but for the most part I let the incredible Cisco Live Social Media team led by Kathleen Mudge (@KathleenMudge) do everything possible to make the experience amazing. I just wrote a blog post or two about things. If anyone deserves credit, it’s them.


Tom’s Take

I think Cisco is finally starting to get it when it comes to social media. They are pulling out all the stops to enhance the experience through meeting spaces, additional access, and even real-time information gathering. For once, it wasn’t an airbrushed tattoo that announced me to the world of Cisco Live 2013. It was this tweet about hotel wifi:

Others such as Blake Krone (@BlakeKrone) got their tweets in the keynotes as well. VMware has always had an edge when it comes to social media in my opinion. This year, Cisco closed that gap considerably. Some of the conversations that I had with decision makers highlighted the ability to involve large numbers of people in a very personal way. Those influencers then spread the word to others in an honest and genuine manner. They are the soul of Cisco Live.

I’m already starting to plan for Cisco Live 2014 in San Francisco. I plan on putting up a poll in the coming months so we can plan a time for the big sign picture instead of leaving it until the last minute. I want to involve everyone I can in submitting suggestions to the Cisco Live Social Media team. Anything you can think of to enhance the experience for everyone will go a long way to making the event the best it can be. From the bottom of my heart I want to say “thank you” to everyone at Cisco Live. See you next year in San Fran!

The SDNquisition

[Image: Monty Python’s Spanish Inquisition]

Network Engineer: Trouble in the data center.
Junior Admin: Oh no – what kind of trouble?
Network Engineer: VLAN PoC for VIP is SNAFU.
Junior Admin: Pardon?
Network Engineer: VLAN PoC for VIP is SNAFU.
Junior Admin: I don’t understand what you’re saying.
Network Engineer: [slightly irritatedly and with exaggeratedly clear accent] Virtual LAN Proof of Concept for the Very Important Person is…messed up.
Junior Admin: Well what on earth does that mean?
Network Engineer: *I* don’t know – the CIO just told me to come in here and say that there was trouble in the data center that’s all – I didn’t expect a kind of Spanish Inquisition.

[JARRING CHORD]

[The door flies open and an SDN Developer enters, flanked by two junior helpers. An SDN Assistant [Jones] has goggles pushed over his forehead. An SDN Blogger [Gilliam] is taking notes for the next article]

SDN Developer: NOBODY expects the SDNquisition! Our chief weapon is orchestration…orchestration and programmability…programmability and orchestration…. Our two weapons are programmability and orchestration…and Open Source development…. Our *three* weapons are programmability, orchestration, and Open Source development…and an almost fanatical devotion to disliking hardware…. Our *four*…no… *Amongst* our weapons…. Amongst our weaponry…are such elements as programmability, orchestration…. I’ll come in again.

[The Inquisition exits]

Network Engineer: I didn’t expect a kind of Inquisition.

[JARRING CHORD]

[The cardinals burst in]

SDN Developer: NOBODY expects the SDNquisition! Amongst our weaponry are such diverse elements as: programmability, orchestration, Open Source development, an almost fanatical devotion to disliking hardware, and nice slide decks – Oh damn!
[To Cardinal SDN Assistant] I can’t say it – you’ll have to say it.
SDN Assistant: What?
SDN Developer: You’ll have to say the bit about ‘Our chief weapons are …’
SDN Assistant: [rather horrified]: I couldn’t do that…

[SDN Developer bundles the cardinals outside again]

Network Engineer: I didn’t expect a kind of Inquisition.

[JARRING CHORD]

[The cardinals enter]

SDN Assistant: Er…. Nobody…um….
SDN Developer: Expects…
SDN Assistant: Expects… Nobody expects the…um…the SDN…um…
SDN Developer: SDNquisition.
SDN Assistant: I know, I know! Nobody expects the SDNquisition. In fact, those who do expect –
SDN Developer: Our chief weapons are…
SDN Assistant: Our chief weapons are…um…er…
SDN Developer: Orchestration…
SDN Assistant: Orchestration and —
SDN Developer: Okay, stop. Stop. Stop there – stop there. Stop. Phew! Ah! … our chief weapons are Orchestration…blah blah blah. Cardinal, read the paradigm shift.
SDN Blogger: You are hereby charged that you did on diverse dates claim that hardware forwarding is preferred to software definition of networking…
SDN Assistant: That’s enough.
[To Junior Admin] Now, how do you plead?
Junior Admin: We’re innocent.
SDN Developer: Ha! Ha! Ha! Ha! Ha!

[DIABOLICAL LAUGHTER]

SDN Assistant: We’ll soon change your mind about that!

[DIABOLICAL ACTING]

SDN Developer: Programmability, orchestration, and Open Source– [controls himself with a supreme effort] Ooooh! Now, Cardinal — the API!

[SDN Assistant produces an API design definition. SDN Developer looks at it and clenches his teeth in an effort not to lose control. He hums heavily to cover his anger]

SDN Developer: You….Right! Open the IDE.

[SDN Blogger and SDN Assistant make a pathetic attempt to launch a cross-platform development kit]

SDN Developer: Right! What function will you software enable?
Junior Admin: VLAN creation?
SDN Developer: Ha! Right! Cardinal, write the API [oh dear] start a NETCONF parser.

[SDN Assistant stands there awkwardly and shrugs his shoulders]

SDN Assistant: I….
SDN Developer: [gritting his teeth] I *know*, I know you can’t. I didn’t want to say anything. I just wanted to try and ignore your dependence on old hardware constructs.
SDN Assistant: I…
SDN Developer: It makes it all seem so stupid.
SDN Assistant: Shall I…?
SDN Developer: No, just pretend for Casado’s sake. Ha! Ha! Ha!

[SDN Assistant types on an invisible keyboard at the IDE screen]

[Cut to them torturing a dear old lady, Marjorie Wilde]

SDN Developer: Now, old woman — you are accused of heresy on three counts — heresy by having no API definition, heresy by failure to virtualize network function, heresy by not purchasing an SDN startup for your own needs, and heresy by failure to have a shipping product — *four* counts. Do you confess?
Wilde: I don’t understand what I’m accused of.
SDN Developer: Ha! Then we’ll make you understand! SDN Assistant! Fetch…THE POWERPOINT!

[JARRING CHORD]

[SDN Assistant launches a popular presentation program]

SDN Assistant: Here it is, lord.
SDN Developer: Now, old lady — you have one last chance. Confess the heinous sin of heresy, reject the works of the hardware vendors — *two* last chances. And you shall be free — *three* last chances. You have three last chances, the nature of which I have divulged in my previous utterance.
Wilde: I don’t know what you’re talking about.
SDN Developer: Right! If that’s the way you want it — Cardinal! Animate the slides!

[SDN Assistant carries out this rather pathetic torture]

SDN Developer: Confess! Confess! Confess!
SDN Assistant: It doesn’t seem to be advancing to the next slide, lord.
SDN Developer: Have you got all the slides using the window shade dissolve?
SDN Assistant: Yes, lord.
SDN Developer [angrily closing the application]: Hm! She is made of harder stuff! Cardinal SDN Blogger! Fetch…THE NEEDLESSLY COMPLICATED VISIO DIAGRAM!

[JARRING CHORD]

[Zoom into SDN Blogger’s horrified face]

SDN Blogger [terrified]: The…Needlessly Complicated Visio Diagram?

[SDN Assistant produces a cluttered Visio diagram — a really cluttered one]

SDN Developer: So you think you are strong because you can survive the PowerPoint. Well, we shall see. SDN Assistant! Show her the Needlessly Complicated Visio Diagram!

[They shove the diagram into her face]

SDN Developer [with a cruel leer]: Now — you will study the Needlessly Complicated Visio Diagram until lunch time, with only a list of approved OpenFlow primitives. [aside, to SDN Assistant] Is that really all it is?
SDN Assistant: Yes, lord.
SDN Developer: I see. I suppose we make it worse by shouting a lot, do we? Confess, woman. Confess! Confess! Confess! Confess!
SDN Assistant: I confess!
SDN Developer: Not you!

Devaluing Experts – A Response

I was recently reading a blog post from Chris Jones (@IPv6Freely) about the certification process from the perspective of Juniper and Cisco. He talks about his view of the value of a certification that allows you to recertify from a dissimilar track, such as the CCIE, as opposed to a certification program that requires you to use the same recertification test to maintain your credentials, such as the JNCIE. I figured that any comment I had would run much longer than the allowed length, so I decided to write it down here.

I do understand where Chris is coming from when he talks about the potential loss of knowledge in allowing CCIEs to recert from a dissimilar certification track. At the time of this writing, there are six distinct tracks, not to mention the retired tracks, such as Voice, Storage, and many others. Chris’s contention is that allowing a Routing and Switching CCIE to continue to recertify from the Data Center or Wireless track causes them to lose their edge when it comes to R&S knowledge. The counterpoint to that argument is that the method of using the same (or updated) test in the certified track as the singular recertification option is superior because it ensures the engineer is always up on current knowledge in their field.

My counterargument to that post is twofold. The first point that I would debate is that the world of IT doesn’t exist in a vacuum. When I started in IT, I was a desktop repair technician. As I gradually migrated my skill set to server-based skills and then to networking, I found that my previous knowledge was important to carry forward but that not all of it was necessary. There are core concepts that are critical to any IT person, such as the operation of a CPU or the function of RAM. But beyond the requirement to answer a test question, is it really crucial that I remember the hex address of COM4 in DOS 5.0? My skill set grew and changed as a VAR engineer to include topics such as storage, voice, security, and even a return to servers by way of virtualization. I was spending my time working with new technology while still utilizing my old skills. Does that mean that I needed to stop what I was working on every 18 months to study the old CCIE R&S curriculum and make sure I remembered which OSPF LSA types are present in a totally stubby area? Or is it more important to understand how SDN is impacting the future of networking, even though it doesn’t yet offer significant concrete configuration examples from which to generate test questions?

I would argue that giving an engineer the option to maintain existing knowledge badges by allowing new technology to refresh those badges is a great idea for vendors that want to keep fresh technology flowing into their organization. The risk of forcing your engineers into a single track without an incentive to stay current is that you end up with a really smart engineer who is incapable of thinking beyond their certification area. Think about the old telecommunications engineers that spent years upon years in their wiring closets working with SS7 or 66-blocks. They didn’t have an incentive or need to learn how voice over IP (VoIP) worked. Now that their job function has been replaced by something they don’t understand, many of them are scrambling to retrain or face being left behind in the market. As Steven Tyler once sang, “If you do what you’ve always done, you’ll always get what you’ve always got.”

Continuous Learning

The second part of my counterpoint is that maintaining the level of knowledge required for an expert-level certification shouldn’t rely on 50-100 multiple choice questions. Any expert-level program should allow for the use of continuing education to recertify the credential on a yearly basis. This is how the legal bar system works. It’s also how (ISC)2’s CISSP program works. By demonstrating that you are continually acquiring new knowledge and contributing to the greater knowledge base, you are automatically put into a position that allows you to continue to hold your certification. It’s a smart concept that generates new information and ensures that the holders of those certifications stay current on new knowledge.

Think for a moment about changing the topics of an exam. If the exam is changed every two years, there is a potential for a gap in knowledge to occur. If someone recertified on the last day of the CCIE version 3 exam, it would have been almost two years before they had to take an exam that required any knowledge of MPLS, which is becoming an increasingly common enterprise core protocol. Is it fair that the person that took the written exam the next day was required to know about MPLS? What happens if that CCIEv3 gets a job working with MPLS a few months later? According to the current version 4 curriculum, that CCIE should know about MPLS. Yet within the confines of the certification program, the user has never had to demonstrate familiarity with the topic.

Instead, if we ensure that the current certification holders are studying new topics such as MPLS or SDN or any manner of networking-related discussions we can be reasonably sure they are conversant with what the current state of the industry looks like. There is no knowledge gap because new topics can be introduced quickly as they become relevant. There is no fear that someone following the letter of the certification law and recertifying on the same material will run into something they haven’t seen before because of a timing issue. Continuous improvement is a much better method in my mind.


Tom’s Take

Recertification is going to be a sticky topic no matter how it’s sliced. Some will favor allowing engineers to spread their wings and become conversant in many enterprise and service provider topics. Others will insist that the only way to truly be an expert in a field is to study those topics exclusively. Still others will say that a melding of the two approaches is needed, either through continuous improvement or true lab recertification. I think the end result is the same in any case. What’s needed is an agile group of engineers that is capable of not only being expert in their field but is also encouraged to do things outside their comfort zone without fear of losing that which they have worked so hard to accomplish. That’s valuable no matter how you frame it.

Note that this post was not intended to be an attack against any person or any company listed herein. It is intended as a counterpoint discussion of the topics.

Big Data? Or Big Analysis?

[Image: data illustration]

Unless you’ve been living under a rock for the past few years, you’ve no doubt heard all about the problem that we have with big data.  When you start crunching the numbers on data sets in the terabyte range the amount of compute power and storage space that you have to dedicate to the endeavor is staggering.  Even at Dell Enterprise Forum some of the talk in the keynote addresses focused on the need to split the processing of big data down into more manageable parallel sets via use of new products such as the VRTX.  That’s all well and good.  That is, it’s good if you actually believe the problem is with the data in the first place.

Data Vs. Information

Data is just description.  It’s a raw material.  It’s no more useful to the average person than cotton plants or iron ore.  Data is just a singular point on a graph with no axes.  Nothing can be inferred from that data point unless you process it somehow.  That’s where we start talking about information.

Information is the processed form of data.  It’s digestible and coherent.  It’s a collection of data points that tell a story or support a hypothesis.  Information is actionable data.  When I have information on something, I can make a decision or present my findings to someone to make a decision.  The key is that it is a second-order product.  Information can’t exist without data upon which to perform some kind of analysis.  And therein lies the problem in our growing “big data” conundrum.
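To make that second-order relationship concrete, here’s a toy sketch in Python.  The readings are invented numbers, purely for illustration: the raw list is data, and only the analyzed summary is information anyone can act on.

# Toy example: raw data points become information only through analysis.
# The readings below are invented numbers, purely for illustration.
readings = [68.1, 68.4, 69.0, 69.8, 70.9]  # hourly temperatures (F)

average = sum(readings) / len(readings)
trend = "rising" if readings[-1] > readings[0] else "falling or flat"

# The raw list is data; this sentence is information someone can act on.
print(f"Average temperature {average:.1f}F and {trend} "
      f"over the last {len(readings)} hours")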

Big Analysis

Data is very sedentary.  It doesn’t really do much after it’s collected.  It may sit around in a database for a few days until someone needs to generate information from it.  That’s where analysis comes into play.  A table is just a table.  It has a height and a width.  It has a composition.  That’s data.  But when we analyze that table, we start generating all kinds of additional information about it.  Is it comfortable to sit at the table?  What color lamp goes best with it?  Is it hard to move across the floor?  Would it break if I stood on it?  All of that analysis is generated from the data at hand.  The data didn’t go anywhere or do anything.  I created all that additional information solely from the data.

Take a look at the Wikipedia entry for big data.  The image at the top of that entry is one of the better examples of information spiraling out of control from analysis of a data set.  The picture is a visualization of Wikipedia edits.  Note that it doesn’t have anything to do with the data contained in a particular entry.  It’s just tracking what people did to describe that data or how they analyzed it.  We’ve generated terabytes of information just doing change tracking on a data set.  All that information needs to be stored somewhere.  That’s what has people in IT sales salivating.

Guilt By Association

If you want to send a DBA screaming into the night, just mention the words associative entity (or junction table).  In another lifetime, I was in college to become a DBA.  I went through Intro to Databases and learned about all the constructs that we use to contain data sets.  I might have even learned a little SQL by accident.  What I remember most is entities.  Standard entities are regular data.  They have a primary key that identifies a row of data, such as a person or a vehicle.  That data is pretty static and doesn’t change often.  Case in point – how accurate is the height and weight entry on your driver’s license?

Associative entities, on the other hand, represent borderline chaos.  These are analysis nodes.  They contain foreign keys that reference the primary keys of at least two other tables in a database.  They are created when you are trying to perform some kind of analysis across those tables.  They can be ephemeral and are usually generated on demand by things like SQL queries.  This is the heart of my big data / big analysis issue.  We don’t really care about the standard data entities.  We only want the analysis and information that we get from the associative entities.  The more information and analysis we desire, the more of these associative entities we create.  Containing these descriptive sets is what’s causing the explosion in storage and compute costs.  The data hasn’t really grown.  It’s our take on the data that has.
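For anyone who skipped Intro to Databases, here’s a minimal sketch of the idea using Python’s built-in sqlite3 module (the table and column names are hypothetical).  The junction table holds nothing but references to the two standard entities, and the information we actually want only materializes when the join runs:

# A minimal sketch of a junction (associative) table using Python's
# built-in sqlite3 module. Table and column names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE person  (person_id  INTEGER PRIMARY KEY, name  TEXT);
    CREATE TABLE vehicle (vehicle_id INTEGER PRIMARY KEY, model TEXT);
    -- The associative entity: nothing but references to the two
    -- "standard" entities above.
    CREATE TABLE person_vehicle (
        person_id  INTEGER REFERENCES person(person_id),
        vehicle_id INTEGER REFERENCES vehicle(vehicle_id),
        PRIMARY KEY (person_id, vehicle_id)
    );
""")
db.execute("INSERT INTO person  VALUES (1, 'Tom')")
db.execute("INSERT INTO vehicle VALUES (10, 'Pickup')")
db.execute("INSERT INTO person_vehicle VALUES (1, 10)")

# The information we care about only exists when the join runs.
rows = db.execute("""
    SELECT p.name, v.model
    FROM person p
    JOIN person_vehicle pv ON pv.person_id = p.person_id
    JOIN vehicle v         ON v.vehicle_id = pv.vehicle_id
""").fetchall()
print(rows)  # [('Tom', 'Pickup')]

The standard entities barely change; it’s queries and junction rows like these, multiplied across every question we ask, that account for the growth.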

Crunch Time

What can we do?  Sadly, not much.  Our brains are hard-wired to try and make patterns out of seemingly unconnected things.  It is a natural reaction to try to bring order to chaos.  Given all of the data in the world, the first thing we are going to want to do with it is try and make sense of it.  Sure, we’ve found some very interesting underlying patterns through analysis, such as the well-worn story from last year of Target determining a girl was pregnant before her family knew.  The purpose of all that analysis was pretty simple – Target wanted to know how to better pitch products to specific focus groups of people.  They spent years of processing time and terabytes of storage all for the lofty goal of figuring out what 18-24 year old males are more likely to buy between the hours of 6 p.m. and 10 p.m. on weekday evenings.  It’s a key to the business models of the future.  Rather than guessing what people want, we have magical reports that tell us exactly what they want.  Why do you think Facebook is so attached to the idea of “liking” things?  That’s an advertiser’s dream.  Getting your hands on a second-order analysis of Facebook’s Like database would be the equivalent of the advertising Holy Grail.


Tom’s Take

We are never going to stop creating analysis of data.  Sure, we may run out of things to invent or see or do, but we will never run out of ways to ask questions about them.  As long as pivot tables exist in Excel or inner joins happen in an Oracle database, people are going to be generating analysis of data for the sake of information.  We may reach a day where all that information finally buries us under a mountain of ones and zeroes.  We brought it on ourselves because we couldn’t stop asking questions about buying patterns or traffic behaviors.  Maybe that’s the secret to Zen philosophy after all.  Instead of concentrating on the analysis of everything, just let the data be.  Sometimes just existing is enough.

Software Defined Cars


I think everything in the IT world has been tagged as “software defined” by this point. There’s software defined networking, software defined storage, the software defined data center, and so on. Given that the definitions of the things I just enumerated are very hard to nail down, it’s no surprise that many in the greater IT community just roll their eyes when they hear someone start talking about anything software defined.

I try to find ways to discuss advanced topics like this with people that may not understand what a hypervisor is or what a forwarding engine is really supposed to be doing. The analogies that I come up with usually relate to everyday objects that are familiar to my readers. If I can frame the Internet as a highway and help people “get it,” then I’ve succeeded.

During one particularly interesting discussion, I started trying to relate SDN to the automobile. The car is a fairly stable platform that has been iterated upon many times in the 150 years that it has been around. We’ve seen steam-powered single seat models give way to 8+ passenger units capable of hauling tons of freight. It is a platform that is very much defined by the hardware. Engines and seating are the first things that spring to mind, but also wheels and cargo areas. The difference between a sports car and an SUV is very apparent due to hardware, much in the same way that a workgroup access switch only resembles a core data center switch in the most basic terms.

This got me to thinking: what would it take to software define a car? How could I radically change the thinking behind an automobile with software? At first, I thought about software programs running in the engine that assist the driver with things like fuel consumption, or perhaps an on-demand traction and ride handling system. Those are great additional features for sure, but they don’t really add anything to the base performance of a car beyond a few extra tweaks. Even the most advanced “programming” tools offered to performance specialists, which allow for the careful optimization of transmission shifting patterns and fuel injector mixture recalibration, don’t really fall into the software defined category. While those programs offer a way to configure the car in a manner different from the original intent, they are difficult to operate and require a great deal of special knowledge to configure in the first place.

That’s when it hit me like a bolt out of the blue. We already have a software defined car. Google has been testing it for years. Only they call it a Driverless Car. Think about it in terms of our favorite topic of SDN. Google has taken the hardware that we are used to (the car) and replaced the control plane with a software construct (the robot steering mechanism). The software is capable of directing the forwarding of the hardware with no user intervention, as illustrated in this video:

That’s a pretty amazing feat when you think about it. Of course, programming a car to drive itself isn’t easy. There’s a ton of extra data that is generated as a car learns to drive itself that must be taken into account. In much the same way, the network is going to generate mountains of additional data that needs to be captured by some kind of sensor or management platform. That extra data represents the network feedback that allows you to do things like steer around obstacles, whether they be a deer in the road or a saturated uplink to a cloud provider.

In addition, the idea of a driverless software defined car is exciting because of the disruption that it represents. Once we don’t need a cockpit with a steering mechanism or direct access to propulsion controls at our fingertips (or feet), we can go about breaking away from the historical construction of a car and make it a more friendly concept. Why do I need to look forward when my car does all the work? Why can’t I twist the seats 90 degrees and facilitate conversation among passengers while the automation is occurring? Why can’t I put in an uplink to allow me to get work done, or a phone to make calls, now that my attention doesn’t need to be focused on the road? When the car is doing all the driving, there are a multitude of ideas that need to be reconsidered for how we design the automobile.

When I started bouncing this idea off of some people, Stephen Foskett (@SFoskett) mentioned to me that some people would take issue with my idea of a software defined car because it’s a self-contained, closed ecosystem. What about a software defined network that collects data and provides for greater visibility to the management layer? Doesn’t it need to be a larger system in order to really take advantage of software definition? That’s the beauty of the software defined piece. Once we have a vehicle generating large amounts of actionable data, we can collect that data and do something with it. Google has traffic data in their Maps application. What if that data was being fed in real time by the cars themselves? What if the car could automatically recognize traffic congestion and reroute on the fly instead of merely suggesting that the driver take an alternate path? What if we could load balance our highway system efficiently because every car is getting real-time data about conditions? Now Google has the capability to use their software defined endpoints to reconfigure as needed.
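As a rough illustration of what that on-the-fly rerouting decision looks like in code, here’s a minimal Python sketch (the road names and travel times are entirely hypothetical) that recomputes the fastest path whenever the live congestion reports change:

# Minimal sketch: pick the fastest route using live travel times
# reported by other cars. Road names and minutes are hypothetical.
import heapq

def fastest_route(graph, start, goal):
    # graph: {node: [(neighbor, minutes), ...]}, updated by live reports
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None

roads = {
    "home":      [("highway", 10), ("back_road", 18)],
    "highway":   [("airport", 15)],
    "back_road": [("airport", 9)],
}
print(fastest_route(roads, "home", "airport"))  # (25, highway route)
roads["home"][0] = ("highway", 30)              # congestion reported
print(fastest_route(roads, "home", "airport"))  # (27, back road route)

Swap the hand-edited dictionary for a stream of reports from the cars themselves and you have the load-balanced highway in miniature.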

What if that same car could automatically sense that you were driving to the airport and check you into your flight based on arrival time without the need to intervene? How about inputting a destination, such as a restaurant or a sporting event, and having the car instantly reserve a parking spot near the venue based on reports from cars already in the lot or from sensors that count the empty spots in a parking garage nearby? The possibilities are really limitless, even in this first or second stage. The key is that we capture the generated data from the software pieces that we install on top of existing hardware. We never knew we could do this because the interface into the data never existed before we created a software layer that we could interact with. When you look at what Google has already done with their recent acquisition of Waze, the social GPS and mapping application, it does look like Google is starting down this path. Why rely on drivers to update the Waze database when the cars can do it for you?


Tom’s Take

I have spent a very large portion of my IT career driving to and from customer sites. The idea of a driverless car is appealing, but it doesn’t really help me to just sit over in the passenger seat and watch a computer program do my old job. I still like driving long distances to a certain extent. I don’t want to lose that. It’s when I can start using the software layer to enable things that I never thought possible that I start realizing the potential. Rather than just looking at the driverless software defined car as a replacement for drivers, the key is to look at the potential that it unlocks to be more efficient and more productive on the road. That’s the key takeaway for us all. Those lessons can be applied to the world of software defined networking/storage/data center as well. We just have to remember to look past the hype and marketing and realize what the future holds in store.

Dell Enterprise Forum and the VRTX of Change

I was invited by Dell to be a part of their first ever Enterprise Forum.  You may remember this event from the past, when it was known as Dell Storage Forum, but now that Dell has a bevy of enterprise-focused products in their portfolio, a name change was in order.  The Enterprise Forum still had a fair amount of storage announcements.  There was also discussion about networking and even virtualization.  But one thing seemed to be on the tip of everyone’s tongue from the moment it was unveiled on Tuesday morning.

VRTX

Say hello to Dell’s newest server platform – VRTX (pronounced “vertex”).  The VRTX is a shift away from the centralized server clusters that you may be used to seeing from companies like Cisco, HP, or IBM.  Dell has taken the blade servers from their popular M1000e chassis and pulled them into an enclosure that bears more than a passing resemblance to the servers I deployed five or six years ago.  The VRTX is capable of holding up to 4 blade servers in the chassis alongside either 12 3.5″ hard drives or 25 2.5″ drives, for a grand total of up to 48 TB of storage space.  What sets VRTX apart from other similar designs, like the IBM BladeCenter S of yore, is the ability for expansion.

Rather than just sliding a quad-port NIC into the mezzanine slot and calling it a day, Dell developed VRTX to expand to meet the future needs of customers.  That’s why you’ll find 8 PCIe slots in VRTX (3 full height, 5 half height).  That’s the real magic in this system.  For example, the VRTX ships today with 8 1GbE ports for network connectivity.  While 10GbE is slated for a future release, you could slide in a 10GbE PCIe card and attach it to a blade if needed to gain connectivity.  You could also put in a Serial Attached SCSI (SAS) Host Bus Adapter (HBA) and gain more expansion for your on-board storage.  In the future, you could even push that to 40GbE, or maybe one of those super fast PCIe SSD cards from a company like Fusion-io.  The key is that the PCIe slots give you a ton of expandability in a small form factor, instead of limiting you to whatever mezzanine card or expansion adapter has been blessed by your server vendor’s skunkworks labs.

VRTX doesn’t come without a bit of controversy.  Dell has positioned this system as a remote office/branch office (ROBO) solution that combines everything you would need to turn up a new site into one shippable unit.  That follows along with comments made at a keynote talk on the third day about Dell believing that compute power has reached a point where it will no longer grow at the same rate.  Dell’s solution to the issue is to push more compute power to the edge instead of centralizing it in the data center.  What you lose in manageability you gain in power.

The funny thing for me was looking at VRTX and seeing the solution to a small scale data center problem I had for many years.  The schools I used to serve didn’t need an 8 or 10-slot blade chassis.  They didn’t need two Compellent SANs with data tiering and failover.  They needed a solution to virtualize their aging workloads onto a small box built for their existing power and cooling infrastructure.  VRTX fits the bill just fine.  It uses 110V power.  The maximum of four blades fits perfectly with VMware’s Essentials bundle for cheap virtualization, with the capability to expand if needed later on.  Everything is the same enterprise-grade hardware that’s being used in other solutions, just in a more SMB-friendly box.  Plus, the entry level price target of $10,000 in a half-loaded configuration fits the budget-conscious needs of a school or small office.

If there is one weakness in the first iteration of VRTX, it comes from the software side of things.  VRTX doesn’t have any software beyond what you load on it.  It will run VMware, Citrix, Hyper-V, or any manner of server software you want to install.  There’s no software to manage the platform, though.  Without that, VRTX is a standalone system.  If you truly want to use it as a “pay as you grow” data center solution, you need to find a way to expand the capabilities of the system linearly as you expand the node count.  As a counterpoint to this, take a look at Nutanix.  Many storage people at Enterprise Forum were calling the VRTX the “Dell Nutanix” solution.  You can watch an overview of what Nutanix is doing from a session at Storage Field Day 2 last November:

The key difference is that Nutanix has a software management program that allows their nodes to scale out when a new node is added.  That is what Dell needs to develop to harness the power that VRTX represents.  Dell built this as a ROBO solution, yet no one I talked to saw it that way.  They saw it as a building block for a company starting their data center build out.  What’s needed is the glue to stitch two or more VRTX systems together.  Harnessing the power of multiple discrete compute units is a very important part of breaking through all the barriers discussed at the end of Enterprise Forum.


Tom’s Take

Bigger is better.  Except when it’s not.  Sometimes good things really do come in small packages.  Considering that Dell’s VRTX was a science project built as a proof-of-concept over the last four years, I’d say that Dell has finally achieved one thing they’ve been wanting to do for a while.  It’s hard to compete against HP and IBM due to their longevity and entrenchment in the blade server market.  Now, Dell has a smaller blade server that customers are clamoring to buy to fill needs that aren’t satisfied by bigger boxes.  The missing ingredient right now is a way to tie them all together.  If Dell can multiplex their resources together, they stand an excellent chance of unseating the long-standing titans of blade compute.  And that’s a change worth fighting for.

Disclaimer

I was invited to attend Dell Enterprise Forum at the behest of Dell.  They paid for my travel and lodging expenses while on site in San Jose.  They also provided a Social Media Influencer pass to the event.  At no time did they place any requirements on my attendance or participation in this event.  They did not request that any posts be made about the event.  They did not ask for, nor were they granted, any kind of consideration in the writing of this or any other Dell Enterprise Forum post.

Tech Field Day 9

[Image: Tech Field Day logo]

It’s hard to believe that the last Tech Field Day event was held almost two years ago.  Since then, the Field Day series has branched out to cover topics like Networking, Storage, and Wireless.  The industry never stands still for long, however.  The stars aligned and the sponsors asked to bring back the granddaddy of them all.  That’s why I’m happy to announce that I’ll be attending Tech Field Day 9 from June 19-21 in Austin, TX.

There’s an all-star lineup of previous Field Day attendees with a couple of new folks sprinkled in to keep things lively:

Alastair Cooke (@DemitasseNZ) – Trainer, Writer, Consultant, Geek. From New Zealand.
Bob Plankers (@Plankers) – A hardcore IT generalist, virtualization expert, blogger, and vocal end user of technology.
Carlo Costanzo (@CCostan) – Carlo is a NYC-based Virtualization Consultant. He writes about whatever interests him at the time at vCloudInfo.com.
Chris Wahl (@ChrisWahl) – The guy who is in your data center virtualizing things.
Howard Marks (@DeepStorageNet) – Storage Analyst Extraordinary and Plenipotentiary.
John Obeto (@JohnObeto) – I like SMBs and Windows.
Justin Warren (@JPWarren) – The Anablogger: Old-school, long-form analysis with an irreverent twist.
Matthew Norwood (@MatthewNorwood)
Robert Novak (@Gallifreyan) – Writer, Photographer, System Administrator, Team Builder, Cat Herder, Comedian, Part-Time Shopkeeper.
Ryan Adzima (@RAdzima) – Ryan is an enterprise technology generalist with a tendency to always end up back in networking.
Scott D. Lowe (@OtherScottLowe)
Tony Mattke (@Tonhe) – Network engineer / geek.

The delegates are some of the best and brightest across the networking, server, and storage industries.  Which is quite fitting when you consider the sponsors that are coming your way and how they represent the new trend in converged data centers:

[Sponsor logos: CommVault, Dell, Infinio, Neverfail, Nutanix, SolarWinds, and Veeam]

In particular, Infinio is an exciting addition to the Tech Field Day series.  They will be launching during their presentation slot, so I’m sure they’re going to have a very interesting take on their topic.

Tech Field Day 9 is also a transition point for me personally.  For the first time, I’ll be attending the event as both a delegate AND a staff member.  Now that I’m a full-time employee of Foskett Services and Gestalt IT, I’m going to split my time between listening to the presenters and making sure that everything is running smoothly in the background.  It’s going to be a challenge to try and keep up with everything, but I feel that I’m more than capable of making every aspect of this event outstanding.

What’s Field Day Like?

Tech Field Day is not a vacation.  This event will involve starting a day early first thing Wednesday morning and running full steam for two and a half days.  We get up early and retire late.  Wall-to-wall meetings and transportation to and from vendors fill the days.  When you consider that most of the time we’re discussing vendors and presentations on the car ride to the next building, there’s very little downtime.  We’ve been known to have late night discussions about converged storage networking and automation until well after midnight.  If that’s your idea of a “vacation” then Tech Field Day is a paradise.  I usually crawl onto a plane late Friday night mentally and physically exhausted with a head full of blog posts and ideas.  It’s not unlike the same kind of feeling you get after running a marathon.  You don’t know if you could do it again tomorrow, but you can’t wait until the next one.

Tech Field Day – Join In Now!

Everyone at home is as much a participant in Tech Field Day as the delegates on site.  At the last event we premiered the ability to watch the streaming video from the presentations on mobile devices.  This means that you can tune in from just about anywhere now.  There’s no need to stay glued to your computer screen.  If you want to tune in to our last presentations of the day from the comfort of your couch with your favorite tablet device, then feel free by all means.  We’ll also have the videos from the sessions posted quickly afterwards on YouTube and Vimeo.  If you have to run to the store for ice cream or catch that playoff game, you can always catch up with what’s going on when you get back.  Don’t forget that you can also use Twitter to ask questions and make comments about what you’re seeing and hearing.  Some of the best questions I’ve seen came from the home audience.  Use the hashtag #TFD9 during the event.  Note that I’ll be tagging the majority of my tweets that week with #TFD9, so if the chatter is getting overwhelming you can always mute or filter that tag.

Standard Tech Field Day Sponsor Disclaimer

Tech Field Day is a massive undertaking that involves the coordination of many moving parts.  It’s not unlike trying to herd cats with an aircraft carrier.  One of the most important pieces is the sponsors.  Each of the presenting companies is responsible for paying a portion of the travel and lodging costs for the delegates.  This means they have some skin in the game.  What this does NOT mean is that they get to have a say in what we do.  No Tech Field Day delegate is ever forced to write about the event due to sponsor demands.  If a delegate chooses to write about anything they see at Tech Field Day, there are no restrictions about what can be said.  Sometimes this does lead to negative discussion.  That is entirely up to the delegate.  Independence means no restrictions.  At times, some Tech Field Day sponsors have provided no-cost evaluation equipment to the delegates.  This is provided solely at the discretion of the sponsor and is never a requirement.  This evaluation equipment is also not contingent on the writing of a review, be it positive or negative.  The delegates are in this for the truth, the whole truth, and nothing but the truth.

If you’d like to learn more about what makes Tech Field Day so special, please check out the website at http://techfieldday.com.  If you want to be a part of Tech Field Day, don’t hesitate to fill out the nomination form to become a delegate.  We’re always on the lookout for great people to become a part of the event and we’d love to have you along for the ride.

Glue Peddlers


There’s an old adage that says “A chain is only as strong as the weakest link.”  While people typically use this in terms of saying that teams are only as strong as their weakest member, I look at it through a different lens.  In my former life as a Value Added Reseller (VAR) engineer, I spent a lot of my time working with technologies that needed to be linked together like a chain.

You have probably seen the lamentations of a voice engineer complaining about fax machines.  If you haven’t, you should count yourself lucky.  Fax machines are the bane of the lives of many telecom folks.  They aren’t that difficult when you get right down to it.  They’re essentially printers with a 9600 baud modem attached for making phone calls.  Indeed, fax machines are probably one of the most robust pieces of technology that I’ve encountered.  I’ve seen faxes covered in dust and grime from a decade or more of use still dutifully churning out page after page of low resolution black-and-white print.

Faxes themselves aren’t the issue.  The problem is that their technology has been eclipsed to the point where interfacing them in the modern world is often difficult and time consuming.  I usually counsel my customers to leave their fax machines plugged directly into an analog landline to avoid issues.  For those times where that can’t be done, I have a whole bag of tricks to make it work with a voice over IP (VoIP) system.  Adaptors and relays and other such tricks help me figure out how to make this decades-old tech work with a modern PRI or SIP connection.  And don’t even get me started on interfacing a fire alarm with an IP phone system.

The best VARs in the world don’t make their money from reselling a pile of hardware to a customer.  The profits aren’t found in a bill of materials.  Instead, they make money in the glue business.  Tying two disparate technologies together via custom programming or knowledge of processes needed to make dissimilar technology work the right way is their real trade.  This is their “glue.”  I can remember having discussions with people regarding the hardest parts of an implementation.  It’s not in setting up a dial plan or configuring a VM cluster with the right IP address.  It’s usually in making some old piece of technology work correctly.  A fire alarm or a Novell server or an ancient wireless access point can quickly become the focus area of an entire project and consume all your time.

If you really want to differentiate yourself from the pack of “box pushers” out there just reselling equipment, you need to concentrate on the point where the glue needs to be the stickiest.  That’s where the customer’s knowledge is the weakest.  That’s the point that will end up causing the most pain.  That’s where the money is waiting for the truly dedicated.  VARs have already figured this out.  If you want to make yourself valuable to a customer or to a VAR, be the best at gluing these technologies together.  Understand how to make old technology work with new tech.  There’s always going to be new technology coming out to replace what’s being used currently.  And there will always be a customer or two that want to keep using that old technology far past its expiration date.  If you are the one that can tie those two things together with a minimum of effort, you’ll find yourself the most popular peddler in the market.

The Arse First Method of Technical Blogging – Review

When you tell people that you are a blogger, you tend to get a couple of generic responses.  The first is laughter or dismissal.  Some people just don’t understand how you can write all the time.  The second response is curiosity.  Usually, this is expressed as a torrent of questions about how to blog.  What do I write about?  How much should I write?  How often should I post?  And on and on.  For those of us that have been blogging long enough, the answer is almost a rote recitation of our standards and practices for blogging.  Some people have even been smart enough to turn that standard reply into a blog post.  For Greg Ferro, it was time to turn that blog post into an e-book:

[Image: The Arse First Method of Technical Blogging cover]

Cheeky, isn’t it? Weighing in at a svelte 37 pages, this little how-to guide details many of Greg’s secrets for writing blog posts over his career.  He talks about tools for screen captures and knowledge archiving.  He also discusses hosting options and content creation.  To the novice blogger, it’s a step-by-step guide in how to get started in blogging.  I would highly recommend picking it up if you aren’t sure how to get started in technical blogging, which is remarkably different than blogging about food or pictures or any other non-technical thing.

The Catch

The funny thing about this book is that, while reading more and more of it, I realized that I violate almost every one of Greg’s recommendations for writing a technical blog.  My opening paragraphs are more like story hooks.  I don’t use a lot of bullet points.  I like putting pictures in my posts.  There are many others that I ignore on a pretty regular basis as well.  But don’t think that means that I don’t appreciate what Greg is trying to do with his book.

Greg writes like he speaks in real life.  He doesn’t mince words.  He’s not in love with the sound of his voice.  He’s going to give it to you straight when you ask him a question.  His blogging style is totally reflective of his speaking style.  On the other hand, my blogging style is indicative of my speaking style as well.  I like telling stories and relating things back to universal images through metaphors.  I tend to expound on subjects and give more details to support my arguments rather than restricting that to a simple bulleted statement. People that read Greg’s blog posts and my blog posts would likely be able to pick out which of us authored a particular post.  That’s because we have our own voices.

Greg’s book is a great way to get started with technical blogging.  After you get your first couple of posts down, it’s important to think about finding your voice.  You may like using lots of pictures or video.  You may prefer to keep it short and sweet with the occasional code example.  The key is to find a style that works for you and stick with it.  Once you find a comfortable writing style, you’ll find yourself writing more often and about more complex subjects.  When you aren’t worried about getting the words down on paper, you’re free to dive right into things that are going to take a lot of thought.

The recommended price of this book is $4.99.  If that scares you off, you can pick it up for just $2.99.  For the price of a candy bar and a 20oz soda, you can learn a little more about blogging and using tools to amplify your writing ability.  If nothing else, you can read through it so you know how Greg thinks when he’s writing down information about things.  You can purchase The Arse First Method of Technical Blogging at https://leanpub.com/Technical-Blogging-Writing-Arse-First.  I promise you won’t be disappointed.

CCIE Loses Its Voice

The world we live in is constantly adapting and changing to new communications methods.  I can still remember having a party line telephone when I was a kid.  I’ve graduated to using landlines, cellular phones, email, instant messaging, text messaging, and even the occasional video call.  There are more methods to contact people than I can count on both hands.  This change is being reflected in the workforce as well.  People who just a few years ago felt comfortable having a desk phone and simple voice mail are now embracing instant messaging with presence integration and unified voice mail, as well as single number reach to their mobile devices.  It’s a brave new world that a voice engineer is going to need to understand in depth.

To that end, Cisco has decided to retire the CCIE Voice in favor of an updated track that will be christened the CCIE Collaboration.  Note that they aren’t merely changing the blueprint like they have in the past with the CCIE SP or the CCIE R&S.  This is like the CCIE Storage being moved aside for the CCIE Data Center.  The radical shift in content of the exam should be a tip-off to the candidates that this isn’t going to be the same old voice stuff with a few new bells and whistles.

Name That Tune

The lab equipment and software list (CCO account required) includes a bump to CUCM 9.1 for the call processor, as well as various 9.x versions of Unity Connection, Presence, and CUCME.  There’s also a UCS C460, which isn’t too surprising with CUCM being a virtualized product now.  The hardware is rounded out with 2921 and 3925 routers as well as a 3750-X switch.  The most curious inclusion is Cisco Jabber Video for TelePresence.  That right there is the key to the whole “collaboration” focus on this exam.  There is a 9971 phone listed as an item.  I can almost guarantee you’re going to have to make a video call from the 9971 to the video soft client in Cisco Jabber.  That’s all made possible thanks to Cisco’s integration of video in CUCM 9.1.  This has been their strategy all along.

The CCIE Voice is considered one of the hardest certifications to get, even among the CCIE family.  It’s not that there is any one specific task to configure that just wrecks candidates.  The real issue is the amount of tasks that must be configured.  Especially when you consider that a simple 3-point task to get the remote site dial plan up and running could take a couple of hours of configuration.  Add in the integrated troubleshooting section that requires you to find a problem after you’ve already configured it incorrectly and you can see why this monster is such a hard test.  One has to wonder what adding video and other advanced topics like presence integration into the lab is going to do to the amount of time the candidate has to configure things.  It was already hard to get done in 8 hours.  I’m going to guess it’s downright impossible to do it in the CCIE Collaboration.  My best guess is that you are going to see versions of the test that are video-centric as well as ones that are voice-centric.  There’s going to be a lot of overlap between the two, but you can’t go into the lab thinking you’re guaranteed to get a video lab.

Hitting the Wrong Notes

There also seems to have been a lot of discussion about the retirement of the CCIE Voice track as opposed to creating a CCIE Voice version 4 track with added video.  In fact, there are some documents out there related to the CCIE Collaboration that reference a CCIE Voice v4.  The majority of discussion seems to be around the CCIE Voice folks getting “grandfathered” into a CCIE Collaboration title.  While I realize that the change in the name was mostly driven by the marketing of the greater collaboration story, I still don’t think that there should be any automatic granting of the Collaboration title.

The CCIE Collaboration is a different test.  While the blueprint may be 75% the same, there’s still the added video component to take into account (as well as cluster configuration for multiple CUCM servers).  People want an upgrade test to let the CCIE Voice become a CCIE Collaboration.  They have one already: the CCIE Collaboration lab exam.  If the title is that important, you should take that lab exam and pass it to earn your new credential.  The fact that there is precedent for this with the migration of the Storage track to Data Center shows that Cisco wants to keep the certifications current and fresh.  While Routing & Switching and Security see content refreshes, they are still largely the same at the core.  I would argue that the CCIE Collaboration will be a different exam in feel, even if not in blueprint or technology.  The focus on IM, presence and video means that there’s going to be an entirely different tone.  Cisco wants to be sure that the folks displaying the credential are really certified to work on it according to the test objectives.  I can tell you that there was serious consideration around allowing Storage candidates to take some sort of upgrade exam to get to the CCIE Data Center, but it looks like that was ultimately dropped in favor of making everyone go through the curriculum.  The retirement of the CCIE Voice doesn’t make you any less of a CCIE.  Like it or not, it looks like the only way to earn the CCIE Collaboration is going to be in the trenches.

It Ain’t Over Until…

The sunsetting officially starts on November 20th, 2013.  That’s the last day to take the CCIE Voice written.  Starting the next day (the 21st) you can only take the Collaboration written exam.  Thankfully, you can use either the Voice written or the Collaboration written exam to qualify for either lab.  That’s good until February 13, 2014.  That’s the last day to take the CCIE Voice lab.  Starting the next day (Valentine’s Day 2014), you will only be able to take the Collaboration lab exam.  If you want to get an idea of what is going to be tested on the lab exam, check out the document on the Cisco Learning Network (CCO account required).

If you’d like to read more about the changes from professional CCIE trainers, check out Vik Malhi (@vikmalhi) on IPExpert’s blog.  You can also read Mark Snow’s (@highspeedsnow) take on things at INE’s blog.


Tom’s Take

Nothing lasts forever, especially in the technology world.  New gadgets and methods come out all the time to supplant the old guard.  In the world of communications and collaboration, Cisco is trying to blaze a trail toward business video as well as showing the industry that collaboration is more than just a desk phone and a voice mailbox.  That vision has seen some bumps along the way, but Cisco seems to have finally decided on a course.  That means the CCIE Voice has reached the apex of its potential.  It is high time for something new and different to come along and push the collaboration agenda to its logical end.  Cisco has already created a new CCIE to support their data center ambitions.  I’m surprised it took them this long to bring business video and non-voice communications to the forefront.  While I am sad to see the CCIE Voice fade away, I’m sure the CCIE Collaboration is going to be a whole new barrel of fun.