Spirent – Bringing The Tests To You

Day two of Network Field Day 4 kicked off with a visit to Spirent.  I was fairly impressed with their testing setup the last time and I wanted to see what new tricks they had in store for us this time around.  After a quick breakfast, we settled in for our first session.  Although this one wasn’t broadcast, we did get permission to talk about what they were showing us.

One of the issues that Spirent has with their setup is that it’s just so…huge.  While it is very accurate and can take just about everything you can throw at it, it’s not exactly the most convenient thing to haul around when you need to test something.  To that end, Spirent is looking at releasing a more compact unit that’s more in line with the needs of an enterprise testing setup.  The unit we saw was about the size of a desktop computer case, but Spirent says the final goal is to have a unit that’s about 1U in size and can be placed in a rack.  That way, you can grab the tester when you need to prove beyond the shadow of a doubt that it’s not the network (or the WAN connection or anything else Spirent can test).

Do remember that having a smaller version of the unit does come with a compromise or two.  The most apparent one is the reduction in testing resolution from the nanoseconds of the big Spirent setup down to a few milliseconds on the enterprise version.  Truth be told, you probably don’t need the nanosecond resolution of something like a QFabric test when you’re just trying to test an enterprise network.  If a few milliseconds really does matter, then maybe you need to look into the bigger unit.

One of the other things that interested me about their new unit was the interface of the software itself.  Spirent has gone all out to make sure that it’s easy to start a test and set the parameters.  The metaphor they are using is that of a media player.  You can drag sliders to vary the size and number of packets as well as setting other parameters.
When you’re ready to go, just press the oversized Play button and your test kicks off and runs until completion.  You’ll see some of this interface shortly.

When we picked up the stream again, I got a bit excited.  Spirent has taken everything they know about testing and applied it to some interesting use cases.  No one can deny that we’ve entered a new phase of cyber warfare.  First, it was the kids doing things for fun and reputation.  Then it was the career bad guys doing it for money.  Now we find ourselves dealing with advanced malware threats and state-sponsored cyberterrorism.  After some discussion about social engineering and other topics, we started talking about Spirent applying their testing methodologies to find vulnerabilities and alert you to them before they can be exploited.  Spirent has a huge library of thousands of tests that can be run against a multitude of applications on just about any OS platform, from Windows to iOS.

It’s demo time again!  Spirent fired up a demo environment running Linux and exploited a Jabber server with a bunch of attack traffic.  You can tell that this was a fairly thorough attack, as they went through several iterations before they finally found a vector.  Other tools that I’ve used just attack known holes and give up after one or two iterations.  Spirent has created a tool that can not only iterate on different surfaces, but you can also craft your own tests to take advantage of zero-day exploits in the wild.  That makes me a little more confident with their results, as they don’t quit until the test is finished.

Last up was Ameya Barvé with an overview of the new iTest Lab Optimizer. According to Ameya, one of the pains of lab operations involves the lack of automation.  You never know who’s in the lab or who’s reconfigured it to support some wacky sidebar case.  iTest Lab Optimizer takes care of many of these problems by creating a system for lab reservation and topology creation.  By utilizing a layer 1 switch to interconnect the devices in the lab, you can use iTest to overlay the lab topology on top of it on the fly.  I can see the allure of having this kind of capability in a larger lab environment, and should my lab ever grow to the point where it’s not a collection of cables assembled on a side table in my office, I’m sure having a software program like this would be a great boon to speed test setup and execution.

If you’d like to learn more about Spirent, you can check out their website over at http://www.spirent.com.  You can also follow them on Twitter as @Spirent.  You can find a link to the Spirent slide decks at http://www.slideshare.net/spirent.


Tom’s Take

Spirent has some amazing testing gear.  I’ve said as much previously.  What they’ve done since our last meeting is take what they have and shrink it down to the point where it makes cost-effective sense to the rest of the world not needing to test high-end network gear day in and day out.  The newer portable testing suite should appeal to those people in the data center or service provider area that have SLAs that need to be met or constantly find themselves getting into arguments over performance numbers.  The rest of their presentation seemed to be an outgrowth of their testing strategies.  For instance, the zero-day cyberwarfare testing suite shows that they can apply the methodology of executing in-depth tests to a different market that requires a specific kind of results.  That shows me that someone inside Spirent is thinking outside their small niche.  The new iTest software shows me that Spirent is trying to address a pain point that many of us weren’t sure could even be addressed.  It also tells me that Spirent isn’t just a one-trick pony and that we should expect to see more good things from them in the near future.

Tech Field Day Disclaimer

Spirent was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, they provided me with a gift bag containing a coffee mug, a pen, and a golfing tool of some sort. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

More Network Field Day Coverage

Get More Juice From Your Network Lab – Network Sherpa

Opengear – A Box Full Of Awesome

Presenter number two at Network Field Day 4 was Opengear.  This was a company that I hadn’t heard much about.  A cursory glance at their website reveals that they make console servers among other interesting management devices.  Further searching turned up a post by Jeremy Stretch over at Packetlife about using one of the devices as the core of his free community lab.  If it’s good enough for Stretch, it’s good enough to pique my interest.

As you can see from the short opening, Opengear is dedicated to making network infrastructure management equipment like console servers as well as PDU management and environmental sensors.  Most interesting to me was the ACM5004-G unit the delegates received, which is a 4-port model with a 3G radio uplink.  They also make much more dense devices like the one in Stretch’s lab for those who want something with a few more ports.  Most of the people I know that are looking at something like this for the CCIE lab use an old 2511 router with octal cables.  Those are fairly cheap on eBay, but you are taking a risk with the hardware finally wearing out and being out of warranty.  As well, there are a ton of features that you can configure in the Opengear software (we’ll get to that in a minute).

Up next…is a caution for Opengear and other would-be Tech Field Day presenters.  Yes, I understand you are proud of your customer base and want to tell the world about all the cool people that use your product.  That being said, a single slide crammed full of logos, which I affectionately call “The NASCAR Slide,” may be a better idea than slide after slide of each company broken down by industry vertical.  You have to think to yourself that filling 8-10 slides of your deck with other people’s logos is not only wasting time and space, but not doing a very good job of telling us what your product does.  All of the companies on that list probably use toilet paper as well, but we don’t see that on your slides.  Better to focus on your product.

Okay, now for awesome time.  Opengear’s management software has a bunch of bells and whistles to suit your fancy.  You can configure all manner of things, like multiple authentication methods for your users to prevent them from accessing consoles they aren’t supposed to see.  As the underpinnings of the whole Opengear system run on Linux, it’s no surprise that their monitoring software is built on top of Nagios.  This allows you to use their VCMS product to manage multiple disparate units.  Think about that.  You’re using the Opengear boxes to manage your equipment.  Now you can use their software to manage your Opengear boxes.  Those units can also be configured to “call home” over secured VPNs to ensure that your traffic isn’t flying across the Internet unencrypted.  VCMS can also use vendor-neutral commands to manage connected UPSes.  I can’t tell you the number of times having a device that could power cycle a UPS or PDU would have saved my bacon or prevented a trip across the state.  The VCMS can even script responses to events, such as triggering a power cycle if the system is hung or stops responding.
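Since the monitoring side is Nagios underneath, the “script responses to events” idea maps onto a standard Nagios-style event handler.  Here’s a minimal sketch of that logic; the decision rules are standard Nagios behavior, but the power-cycle command in the comment is purely illustrative, not Opengear’s actual API:

```python
"""Sketch of a Nagios-style event handler: power-cycle an outlet when a
monitored host goes HARD DOWN.  The outlet command is hypothetical."""


def handle_event(state: str, state_type: str, attempt: int) -> str:
    """Decide what to do for a host state change.

    Nagios passes the current state ("UP"/"DOWN"), whether the state is
    confirmed ("HARD") or transient ("SOFT"), and the check attempt number.
    """
    if state == "DOWN" and state_type == "HARD":
        # Here you'd trigger the actual power cycle, e.g. something like
        # subprocess.run(["pmshell", ...])  <- illustrative, not a real recipe
        return "power-cycle"
    # Don't react to SOFT failures; they may clear on the next check.
    return "no-op"


# Nagios would invoke this with $HOSTSTATE$, $HOSTSTATETYPE$, $HOSTATTEMPT$.
print(handle_event("DOWN", "HARD", 3))  # -> power-cycle
print(handle_event("DOWN", "SOFT", 1))  # -> no-op
```

The key design point, which carries over to the real product, is acting only on confirmed (HARD) failures so a single missed poll doesn’t reboot your gear.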

Next up is a demo of the software.  Worth a look if you’re interested in the gory details of the interface:

We finished off the day with a talk about some of the new and interesting things that Opengear is doing with their devices.  I think the most interesting was the story about configuring them to use a webcam to take pictures of people opening roadside boxes, then upload the pictures to an FTP server running on the Opengear box, which then sends the picture over 3G back to a central location.  Of course, everyone immediately seized on the salmon farm as the strangest use case.  It’s clear that Opengear has a great solution that is only really limited by your imagination.

If you’d like to learn more about Opengear and their variety of products, you can check out their website at http://opengear.com.  You can also follow them on Twitter as @Opengear.


Tom’s Take

I can’t count the number of times that I’ve needed a console server.  Just that functionality alone would save me a lot of pain in some remote deployments I’ve had.  Opengear seems to have taken this idea and run with it by adding on some great additional functionality, whether it be cellular uplinks or software controls for all manner of third party UPSes.  I think the fact that you can do so much with their boxes with a little imagination and some elbow grease means that we’re going to be hearing stories like the fish farm for a while to come.

Tech Field Day Disclaimer

Opengear was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, Opengear provided me with an ACM5004-G console server and a polo shirt. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Statseeker – Information Is Ammunition

The first presenter at Network Field Day 4 came to us from another time and place.  Stewart Reed came to us all the way from Brisbane, Australia to talk to us about his network monitoring software from Statseeker.  I’ve seen Statseeker before at Cisco Live, and you likely have too if you’ve been.  They’re the group that always gives away a Statseeker-themed Mini on the show floor.  They’ve also recently done a podcast with the Packet Pushers.

We got into the room with Stewart and he gave us a great overview of who Statseeker is and what they do:

He’s a great presenter and really hits on the points that differentiate Statseeker.  I was amazed by the fact that they said they can keep historical data for a very long period of time.  I managed to crash a network monitoring system years ago by trying to monitor too many switch ports.  Keeping up with all that information was like drinking from a firehose.  Trying to keep that data for long periods of time was a fantasy.  Statseeker, on the other hand, has managed to find a way to not only keep up with all that information but keep it around for later use.  Stewart said one of my new favorite quotes during the presentation: “Whoever has the best notes wins.”  Not only do they have notes that go back for a long time, but their notes don’t suffer from averaging abstraction.  When most systems say that they keep data for long periods of time, what they really mean is that they keep the 15 or 30 minute average data for a while.  I’ve even seen some go to day or week data points in order to reduce the amount of stored data.  Statseeker takes one minute data polls and keeps those one minute data polls for the life of the data.  I can drill into the interface specs at 8:37 on June 10th, 2008 if I want.  Do you think anyone really wants to argue with someone that keeps notes like that?
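A quick back-of-the-envelope calculation shows why keeping raw one-minute polls forever is ambitious but not crazy.  The numbers below are my own assumptions for illustration (ports, counters per port, bytes per sample), not anything Statseeker disclosed about their internals:

```python
# Rough storage estimate for raw one-minute SNMP polling.
# All sizing assumptions here are illustrative, not Statseeker's actual design.
ports = 1000                # the minimum Statseeker license size
counters_per_port = 6       # e.g. in/out octets, errors, discards (assumed)
bytes_per_sample = 8        # one stored counter value (assumed)
polls_per_day = 24 * 60     # one poll per minute

daily = ports * counters_per_port * bytes_per_sample * polls_per_day
yearly = daily * 365

print(f"{daily / 1e6:.1f} MB/day, {yearly / 1e9:.1f} GB/year")
# -> 69.1 MB/day, 25.2 GB/year
```

Even with generous assumptions, a few years of unaveraged one-minute data for a thousand ports fits on commodity disks, which is presumably why Statseeker can afford to skip the averaging that everyone else does.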

Of course, what would Network Field Day be without questions:

One of the big things that comes right out in this discussion is the idea that Statseeker doesn’t allow for custom SNMP monitoring.  By restricting the number of OIDs that can be monitored to a smaller subset, this allows for the large-scale port monitoring and long term data storage that Statseeker can provide.  I mean, when you get right down to it, how many times have you had to write your own custom SNMP query for an odd OID?  The majority of Statseeker’s customers are likely to have something like 90% overlap in what they want to look at.  Restricting the ability to get crazy with monitoring makes this product simple to install and easy to manage.  At the risk of overusing a cliche, this is more in line with the Apple model of restriction with a focus on ease of use.  Of course, if Statseeker wants to start referring to themselves as the Apple of Network Monitoring, by all means go right ahead.

The other piece from this second video that I liked was the mention that the minimum Statseeker license is 1000 units.  Stewart admits that below that price point, the argument for Statseeker begins to break down somewhat.  This kind of admission is refreshing in the networking world.  You can’t be everything to everyone.  By focusing on long term data storage and quick polling intervals, you obviously have to scale your system to hit a specific port count target.  If you really want to push that same product down into an environment that only monitors around 200 ports, you are going to have to make some concessions.  You also have to compete with smaller, cheaper tools like MRTG and Cacti. I love that they know where they compete best and don’t worry about trying to sell to everyone.

Of course, a live demo never hurts:

If you’d like to learn more about Statseeker, you can head over to their website at http://www.statseeker.com/.  You can also follow them on Twitter as @statseeker.  Be sure to tell them to change their avatar and tweet more.  You can hear about Statseeker’s presentation in the Packet Pushers Priority Queue Show 14.


Tom’s Take

Statseeker has some amazing data gathering capabilities.  I personally have never needed to go back three years to win an argument about network performance, but knowing that I can is always nice.  Add in the fact that I can monitor every port on the network and you can see the appeal.  I don’t know if Statseeker really fits into the size of environment that I typically work in, but it’s nice to know that it’s there in case I need it.  I expect to see some great things from them in the future and I might even put my name in the hat for the car at Cisco Live next year.

Tech Field Day Disclaimer

Statseeker was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Additional Network Field Day 4 Coverage:

Statseeker – The Lone Sysadmin

Statseeker – Keeping An Eye On The Little Things – Lamejournal

What’s My Cisco ATA Second Line MAC Address?

In the world of voice, not everything is wine and roses.  As much as we might want to transition everything over to digital IP phones and soft clients, the fact remains that there are some analog devices that still need connectivity on a new phone system.  The most common offender is the lowly fax machine.  Yes, even in this day and age we still need to rely on the tried-and-true facsimile machine to send photostatic copies of documents across the PSTN to a waiting party.  Never mind email or Dropbox or even carrier pigeon.  Fax machines seem to be the most important device connected to a phone system.  Normally, I leave the fax connections and their POTS lines intact without touching anything.  However, there are times when I don’t have that luxury.

In the case of the Cisco VoIP systems, that means relying on the Analog Terminal Adapter, or ATA.  The ATA allows you to connect an analog device to the unit, whether it be a fax machine or a cordless analog phone or even a fire alarm or postage machine.  It has many uses.  The configuration of the ATA is fairly straightforward under any CUCM system.  However, if you have a multitude of analog devices that you need to connect, you might opt to use the second analog port on the ATA.  The ATA 186 of the past and its current replacement, the ATA 187, both have 2 analog ports on the back.  There’s only one Ethernet port, though.  This is where the interesting part comes in to play.  If there are two analog ports but only one Ethernet port, how do I configure the MAC address for the second port?  All phone devices in CUCM must be identified by MAC address.  On an ATA, the primary MAC address printed on the bottom or the side of the box is the address for the first port.

If you want to use the second port, you’re going to have to do a little bit of disassembly.  Cisco uses a standard method to create a new MAC address:

1.  Take the MAC address for port 1.  For example, 00:00:DE:AD:BE:EF.

2.  Drop the first two digits from the MAC address.  In the example, 00:DE:AD:BE:EF.

3.  Append “01” to the end of the 10-digit address.  Example, 00:DE:AD:BE:EF:01.

Once you’ve completed those steps, take the MAC address you’ve just created and plug it into CUCM as a new ATA device.  Once you’ve completed the necessary steps to create the new device, it will register with the DN you’ve assigned to it.  Then you can start calling or faxing it to your heart’s content.
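The three steps above are simple enough to script so you don’t have to do the shuffle by hand.  A small sketch (the colon-separated output format is just for readability; CUCM itself takes the MAC without separators):

```python
def ata_second_port_mac(port1_mac: str) -> str:
    """Derive the CUCM MAC address for port 2 of a Cisco ATA.

    Method from the steps above: drop the first two hex digits of the
    port 1 MAC, then append "01" to the end.
    """
    # Normalize: strip common separators, uppercase -> "0000DEADBEEF"
    digits = port1_mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12:
        raise ValueError("expected a 12-hex-digit MAC address")
    shifted = digits[2:] + "01"  # drop first two digits, append "01"
    # Re-insert colons for readability
    return ":".join(shifted[i:i + 2] for i in range(0, len(shifted), 2))


print(ata_second_port_mac("00:00:DE:AD:BE:EF"))  # -> 00:DE:AD:BE:EF:01
```

Feed the result into CUCM as the MAC of a new ATA device and the second port registers like any other phone.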

There’s no mention of the secondary MAC address anywhere on the web interface.  You’d figure it wouldn’t be hard to write some HTML function to read the MAC address and do the above operation.  The Cisco documentation buries this information deep inside the setup document.  I’ve even searched Cisco’s very own support forums and found all manner of advice that doesn’t work correctly.  I decided that it was time to jot this information down in a handy place for the next time I need to remember how to configure the ATA’s second port.  I hope you find it useful as well.

When Demos Attack

The demo. The holy grail of live, interactive presentation. The point where the rubber meets the road. The seductive allure of a live demonstration drives most technical presentations. Slide after slide gets boring, even with cutesy animations. Audiences can quickly get lost with the droning monotony of slide recitation. However, a demo gives them something to focus on. A live system generates real data and shows what you can do. Questions come up and the answers are right there at the tips of your fingers. However, the demo’s siren song can lead to doom if you don’t navigate the waters carefully. Even the most polished demos can fail. Steve Jobs learned this during the launch of the iPhone 4. Tech presenters learn it every day when Mr. Murphy comes calling.

Demos are not inherently bad. In fact, the upside is astounding. The problem comes in the execution. Having been a veteran of many demo presentations, both good and bad as well as a presenter and demonstrator myself, I thought I’d share a couple of ideas I have about demos and how to keep yours from heading south.

1. Make Sure Your Demo Is Interesting – I can’t stress enough how important this bullet point is. Not all things make for good demos. Even things that you think may be the most awesome stuff on the planet can be boring or distracting for your audience. Watching someone type command after command into a CLI window is boring. However, watching a short command instantiate a software load balancer and kick back a list of the configuration is exciting. Watching someone pull up a screen on a phone and poke around is passe. Watching that same phone pull up live info from the Internet and book a reservation at a restaurant for you is much better. The key is to keep the audience on the edge of their seats. You must make the demo compelling and make them want to see where you’re going. The NFD4 Juniper Mykonos demo was exciting because you could see the build out of an attack from inception to execution to response. Watching them put up a Google map projection of the attacker’s area with links to local legal counsel was a hilarious moment, but it illustrates the engagement aspect. On the other hand, the Aerohive BR100 iPad provisioning demo from WFD2 missed the mark a bit. Why? Because watching someone configure an AP is pretty pedestrian to the audience. Screens full of config values make the eyes go blurry. I understand the power and awesomeness underneath the ability to provision 15 branch offices from a tablet. I just don’t want to see how the sausage is made in this case. Maybe instead having a script run automatically or making it flashier would keep attention focused on the “why” and not the “how”. And if your demo involves a task that needs some time to run to completion, please make sure to fill that time appropriately. Watching a status bar fill up on screen is like nails on a chalkboard to a presentation audience.
Avoid long pauses if you can, but if you must, kick off the first part of the demo and move on with your presentation while the magic is happening in the background. Infineta figured this out at NFD3. Since their long-distance vMotion demo was going to take twenty minutes no matter what, they let it run while they whiteboarded algorithms. Don’t leave your audience staring at nothing.

2. Test Your Demo Under Real World Conditions – This was Steve’s mistake during the iPhone 4 demo. People practice their demos and presentations religiously (or at least they should). They keep staring at screen after screen to ensure everything is automatic. But sometimes they forget that all those practice runs don’t represent reality. Yes, an iPhone will access the web just fine in an empty auditorium at Moscone. It’s a different story when an audience full of phones and tablets and laptops melts the wireless with a tidal wave of packets. Steve forgot to make sure that his practice runs looked like the audience makeup that he’d see that day. Just as important, make sure that your demo environment doesn’t do wacky things. Hiccups in dry runs should give you a hint that you need everything to be ironed out before you do it for real. Make your demo setup simple because you also have to remember that you’re under the gun and nervous as hell up there. Derick Winkworth’s SIP demo failed not because of technology, but because he was typing the wrong password into the software. Derick knew the password. But he got flustered because we gave him a hard time about his password earlier in the demo. Doing a live demo is like a trapeze act without a safety net. Be sure you’ve tested your act enough under the big top so you won’t fall.

3. Have A Backup Plan – Just like the most recent SpaceX Falcon 9 rocket launch, you can’t assume that everything is going to work right. You need a backup plan. That includes everything in your presentation. Backup slide decks in case your USB drive dies or the drivers aren’t installed. Backup video adapters in case you thought there was HDMI but there is really only VGA. However, if your presentation has a demo, you had *better* have a backup plan. As above, wireless networks can be unreliable in conference centers. VPN connections can fail at a moment’s notice. Files can get moved. Systems can be shut off. Be ready to roll when it looks like your demo is going south. Instead of tap dancing, move over to a local version. Spin up a backup VM on your laptop and show your demo from there. If your files are gone or your machine is down, have a simple animation showing what was going on. Or go for broke on the whiteboard. Diagram everything and make the audience help you out. Don’t let the hiccups derail you. Be ready to go. And in the event that even your backup plan fails, don’t tap dance around it. Apologize and move on. We’ve all seen demos that fail and we know that not everything goes right.


Tom’s Take

I love great demos. I love being engaged and seeing live systems work. But every time someone pulls out a demo at a presentation, I feel a bit hesitant. I’ve been fortunate enough to be on this side of some great demos. However, I’ve also seen and had some fail spectacularly. If you take into account the things I outlined above, you can minimize the chance that your demo will fail. That way the conversation will center around something awesome and not around shaking heads and embarrassed smiles.

Shadow IT – What Evil Lurks In The Heart Of An Admin?

I’ve been hearing the term Shadow IT quite a bit recently.  According to the Fount of All Knowledge, Shadow IT refers to networks and systems built inside organizations without official approval.  I found it curious that people started referring to this almost five years ago, yet a cursory search for “shadow IT” turns up a *ton* of articles written in the last six months.  At first, I wondered if the trend of BYOD had finally petered out a bit.  After all, once you’ve assaulted the populace with a headline every day for at least two months, they kind of grow accustomed to it and get bored seeing it all the time.  Then I wondered why a five-year-old concept should be hot now.  Then it hit me.

I’ve never heard of Shadow IT because it was never a “thing” for me.  The idea that a lab computer or a non-production testing system might be moved into production work wasn’t an obstacle to the way that I’d done things in the past.  As a matter of fact, it’s the way I’ve done things for the most part my entire career.  In order to replace our aging 3Com NBX phone system, I installed Cisco CallManager in a lab and let the sales folks use it to make conference calls one week.  They were so impressed with the quality of the call they made me rip out the old and put in the new the following month.  The whole virtualization strategy around here grew out of one box running ESX standard for a VM migration test.  After people discovered how flexible things were inside of a virtualized environment, naturally our server strategy going forward was focused around our brand new ESX cluster.  Even our network was a series of cobbled-together parts scavenged from the four corners of the globe at a time when the engineering staff needed gigabit connectivity and we had no budget to accomplish it.  Slowly, one piece at a time, we assembled our entire setup without direct authorization and formal approval.  While it was nice to be called to a meeting about a new feature and be told, “Yeah, we’ve been running that for the last three months,” there were huge weaknesses in the plan.

With a hodge-podge network assembled over the course of months or years to address tactical problems, you have huge support headaches in the event of failures.  Untangling the knots of interconnected systems becomes a lot harder when you keep uncovering devices you knew nothing about.  That new awesome voicemail server?  It’s running on ESXi on a new server that was originally provisioned for lab use.  All well and good until I’m out of the office and someone needs to restart it after a power failure.  Worse still when they have to remember to connect via VMware Client to restart the VM itself.  Extra pain and effort introduced because of the need to move quickly to implement something.  That’s just the side of things from the lab.  Let’s not talk about things like Dropbox or GMail.  Even though I know it’s not technically the right way to do things, my job is quickly reaching the point where I’m dependent on Dropbox.  I keep notes and firmware images in mine that sync between all my systems.  My presentations go in there.  So do PDFs and software images.  If someone decided to block Dropbox tomorrow, I’d be screwed.  I avoid keeping sensitive data in there as a matter of habit, but just about every other important thing is either in a Dropbox or has been copied there at some point.  GMail is another method used frequently to avoid large attachment size limitations or mailbox quotas.  That’s under the best of circumstances.  I’ve used GMail to test incoming and outgoing mail at a number of sites.  I use it to test mail routing and NAT translations of mail servers.  That’s just the legitimate uses.  Think about the number of IT people that use GMail as a way to skirt eDiscovery rules and Freedom of Information actions.  I’ve seen that several times.

BYOD has caused people in management to start looking at their networks and systems a bit closer than they have in the past.  What used to be the big, dark hole where data entered and information came out is now being scrutinized with great fervor because of the possibility of exposure.  Now, instead of turning a blind eye toward the IT department with a mantra of “just make it work,” management must take into account that running insecure devices or non-tested configurations can lead to trouble down the road.  Trouble that someone occasionally has to answer for, either in the press or in a court of law.  That makes management skittish.  That explains why this is now an important point of contention in IT.  Rather than taking the easy road of results, we must instead focus on the whole process.  Ample documentation must exist at every step of the way, not as a record of implementation, but as a way to show liability and protect people.  In essence, that’s really what Shadow IT is about.  Never mind the challenges of creating systems from untested technology.  It all comes down to who gets the blame when things go wrong and how that can be proved when the yelling starts.

I’ve already made a commitment to do my best to avoid the kinds of last-minute solutions that are implicated in the Shadow IT movement.  I’m not going to do away with my lab or with piloting solutions before implementing them.  What I will do is make sure there is a clearly defined plan in place in the event that the lab solution needs to be moved into production.  I’ll also be sure that all the involved parties agree on the best course of action before the solution is put in place so there can be no arguing or finger pointing after the fact.  The easiest way to get rid of Shadow IT is to shine the Light of Documentation on it.  Then those of us in IT aren’t looked upon as the crazy vigilantes of networking and systems and instead we can get back to being the harmless recluses that our secret identities portray.

Velcro for VAR Engineers

When I was younger, I must have watched The Delta Force about a hundred times. One of the things I loved in that movie was the uniforms the Delta guys wore. Jet black, covered in cargo pockets, and very useful. The most compelling feature, however, was the velcro on the shoulders and chest. The Delta troopers could remove the patches on their uniforms whenever they needed to be anonymous, then put them back on at will. I loved this idea. As time has gone on, I’ve noticed the same kind of capability on the new military BDUs. Rank insignia, unit affiliation, and even the name tag are all velcro patches that can be removed, reapplied, and changed as needed.

This idea of configurable uniforms finally hit home for me the other day when I was going through my closet looking for a vendor-specific shirt. Yes, I know that Greg has decried the plumage of the vendor in a previous blog post, but as a VAR I’m a bit hamstrung. Sometimes, I need to put on my Aruba shirt or my Cisco jacket or my Aerohive tuxedo. Customers feel a bit reassured when you’re wearing a shirt from a company that you’re pitching. However, I’ve noticed that all these shirts seem to start looking alike after a while. I have the same Dri-Fit Nike polo shirt with four different vendor logos. I have the same dark blue polo with three other different vendor logos. I think I have a Cisco shirt in every color of the rainbow. I even have shirts that don’t fit anymore with fun old logos, like my Master CNE. Why do I need to have that many logo shirts in my closet? Why can’t I have a little more control over my VAR uniform?

That’s when it hit me. Let’s do the velcro configurability on vendor polo shirts. A velcro patch over the left breast and maybe another couple on the sleeves. Think of the possibilities. Now, instead of worrying about what vendor shirt I’m going to wear in the morning, I can just pick out the black one or the red one. Then, when I’m ready to brand myself, I just need to pick out the appropriate patch and slap it on the velcro. No fuss, no muss. If I wear the wrong vendor shirt today, it can cause some embarrassing issues. With the patch system, I just remove the errant patch and replace it in seconds. Much easier than trying to keep track of which shirt I shouldn’t be wearing to a particular site. You could even make a big show of it. When it’s time to get work done, make a big production of taking your patch out and slapping it on. When you need to be “off the record” about something, make a theatrical gesture of ripping the identification patch off your shirt as if to say, “I’m not with this company right now. Here’s what I think.” It would be practical as well as awesome.

Sure, there are details to work out. Even getting the vendors to start offering velcro patches would be a huge step in the right direction. I’m all for this, as it means I can finally take a little more control over my wardrobe. Now where did I put that sewing machine?