Juniper – Land of Unicorns and Broccoli

The final Network Field Day 4 (NFD4) presentation was from Juniper. Juniper has been a big supporter of Tech Field Day, so getting to see some of their newest technology and advances was just another step in the wonderful partnership. We arrived Friday afternoon to a very delicious lunch before settling in for the four hour session.

We were introduced to one of our own, Derick Winkworth (@cloudtoad). Derick was a delegate at NFD2 and has recently come to Juniper as the PM of Automation. It’s always nice to see someone from Tech Field Day in front of us for the vendor. Some have said that the vendors are stealing away members of the Field Day community, but I see it more as the vendors realizing the unique opportunity to bring someone on board that “gets it.” However, I couldn’t let Derick off the hook quite that easily. At Cisco Live, Derick proved his love for Dave Ward of Cisco by jumping up during Dave’s OnePK panel and throwing a pair of men’s briefs at him with “I ❤ Dave” written on the back. Lots of laughs were had by all, and Dave seemed appreciative of his gift. Once I learned that Derick was presenting first for NFD4, I hatched my own fan boy plot. When Derick walked up front to face the NFD delegates as “the enemy,” I too proved my love for the Cloud Toad by jumping up and tossing him a pair of underwear as well. These were adorned with “I ❤ @cloudtoad” to show Derick that he too has groupies out there.

Derick then proceeded to give us a small overview of the decision he made to join Juniper and the things that he wanted to improve to make everyone’s life a bit better. I can tell that Derick is genuinely pumped about his job and really wants to make a difference. If someone is that excited about going to work every day, it really doesn’t matter if it’s for a vendor or a VAR or even a garbageman. I only wish that half the people I work with had the same passion for their jobs as Derick.

Our first presentation was a bit of a surprise. We got a firsthand look at storage from Simon Gordon. Yes, Juniper shook things up by making their first peek all about hard drives. Okay, so maybe it was more about showing how technologies like QFabric can help accelerate data transfers back and forth across your network. The two storage people in the room seemed fascinated by the peek into how Juniper handled these kinds of things. I was a bit lost with all the terminology and tried to keep up as best I could, but that’s what the recorded video archive is for, right?  It’s no surprise that Juniper is pitching QFabric as a solution for the converged data center, just like their competitors are pitching their own fabric solutions.  It just reminds me that I need to spend some more time studying these fabric systems.  Also, you can see here where the demo gremlins bit the Juniper folks.  It seemed to happen to everyone this time around.  The discussion, especially from Colin McNamara (@colinmcnamara), did a great job of filling the time while the demo gremlins were having their fun.

The second presentation was over Virtual Chassis, Juniper’s method of stacking switches together to unify control planes and create management simplicity. The idea is to take a group of switches and interconnect the backplanes to create high throughput while maintaining the ability to program them quickly. The technology is kind of interesting, especially when you extend it toward something like QFabric to create a miniature version of the large fabric deployment. However, here is where I get to be the bad guy a bit… Juniper, while this technology is quite compelling, the presentation fell a bit flat. I know that Tech Field Day has a reputation for chewing up presenters. I know that some sponsors are afraid that if they don’t have someone technical in front of the group, bedlam and chaos will erupt. That being said, make sure that the presenter is engaging as well as technical. I have nothing but respect for the presenter and I’m sure he’s doing amazing things with the technology. I just don’t think he felt all that comfortable in front of our group talking about it. I know how nervous you can be during a presentation. Little things like demo failures can throw you off your game. But in the end, a bad presentation can be saved by a good presenter. A good presentation can take a hit from a less-than-ideal presenter.  Virtual Chassis is a huge talking point for me.  Not just because it’s the way that the majority of my customers will interconnect their devices.  Not just because it uses non-proprietary connectors to interconnect switches.  It’s because Virtual Chassis is the foundation for some exciting things (that may or may not be public knowledge) around fabrics that I can’t wait to see.

Up next was Kyle Adams with Mykonos. They are a new acquisition by Juniper in the security arena. They have developed a software platform that provides a solution to the problem of web application security. Mykonos acts like a reverse proxy in front of your web servers. When it’s installed, it intercepts all of the traffic traveling to your Internet-facing servers and injects a bit of forbidden fruit to catch hackers. Things like fake debug codes, hidden text fields, and even phantom configuration files. Mykonos calls these “tar pits” and they are designed to fool the bad guys into a trail of red herrings. Because all of the tar pit data is generated on the fly and injected into the HTTP session, no modification of the existing servers is necessary. That is the piece that had eluded my understanding up until this point. I always thought Mykonos integrated into your infrastructure and sprayed fake data all over your web servers in the hope of catching people trying to footprint your network. Realizing now that it does this instead from the network level, it’s interesting to see the approaches that Mykonos can take. The tar pit data is practically invisible to the end user. Only those that are snooping with less-than-honorable intentions may even notice it. But once they take the bait and start digging a bit deeper, that’s when Mykonos has them. The software then creates a “super cookie” on the system as a method of identifying the attacker. These super cookies are surprisingly resilient, using combinations of Java and Flash and other stuff to stay persistent even if the original cookie is deleted. Services like Hulu and Netflix use them to better identify customers. Mykonos uses them to tie attacker sessions together and collect data. There are some privacy concerns naturally, but that is a discussion for a different day. Once Mykonos has tagged you, that’s when the countermeasures can start getting implemented.
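To make the tar-pit idea concrete, here’s a minimal sketch of the technique as I understood it from the presentation: rewrite outbound HTML on the fly with bait that a legitimate user never touches, then flag anyone who tampers with it. Every name and bait string here is my own illustration, not the actual Mykonos implementation.

```python
def inject_tar_pits(html: str) -> str:
    """Rewrite a response in flight, adding bait before the closing body tag."""
    bait = (
        '<input type="hidden" name="debug_mode" value="false">'
        '<!-- TODO: remove before launch: /backup/config.php.bak -->'
    )
    return html.replace("</body>", bait + "</body>")

def looks_like_probe(form_data: dict) -> bool:
    """A normal browser submits the hidden field untouched; an attacker flips it."""
    return form_data.get("debug_mode", "false") != "false"

page = "<html><body><form></form></body></html>"
served = inject_tar_pits(page)          # what the proxy actually sends out

assert 'name="debug_mode"' in served                   # bait is in the response
assert not looks_like_probe({"debug_mode": "false"})   # normal user: no alarm
assert looks_like_probe({"debug_mode": "true"})        # tampered field: hooked
```

The key point the demo drove home is in that first function: the bait lives only in the HTTP session, so the origin servers never change.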

I loved watching this in demo form. Mykonos randomly selects a response based on threat level and deploys it in an effort to prevent the attacker from compromising things. Using methods such as escalating network latency back to the attacker or creating fake .htaccess files with convincingly encrypted usernames and passwords, Mykonos sets the hook to reel in the big fish. While the attacker is churning through data and trying to compromise what he thinks is a legitimate security hole, Mykonos is collecting data the whole time to later identify the user. That way, they can either be blocked from accessing your site or perhaps even prosecuted if desired. I loved the peek at Mykonos. I can see why Christofer Hoff (@beaker) was so excited to bring them on board. This refreshing approach to web application firewalls is just crazy enough to work well. As I said on the video, Mykonos is the ultimate way to troll attackers.
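The “random response based on threat level” behavior boils down to something like the sketch below. The tier names and mappings are entirely made up for illustration; Mykonos’s real tiers and responses are their own.

```python
import random

# Hypothetical escalation tiers, loosely modeled on the demo:
# watch at low threat, slow and bait at medium, shut down at high.
COUNTERMEASURES = {
    1: ["log_session"],
    2: ["add_latency", "serve_fake_htaccess"],
    3: ["add_heavy_latency", "block_session"],
}

def pick_countermeasure(threat_level: int) -> str:
    """Clamp to a known tier, then pick one of its responses at random."""
    level = min(max(threat_level, 1), 3)
    return random.choice(COUNTERMEASURES[level])

assert pick_countermeasure(1) == "log_session"        # only one option at tier 1
assert pick_countermeasure(2) in COUNTERMEASURES[2]   # random within the tier
assert pick_countermeasure(99) in COUNTERMEASURES[3]  # out-of-range clamps high
```

The randomness matters: an attacker who always gets the same fake .htaccess file would eventually recognize the trap.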

The final presentation at Juniper once again starred Derick Winkworth along with Dan Backman. Dan had presented over workflow automation at NFD2. Today, they wanted to talk about the same topic from a slightly different perspective. Derick took the helm this time and started off with a hilarious description of the land of milk and honey and unicorns, which according to him was representative of what happens when you can have a comfortable level of workflow automation. It’s also where the title of this post came from.  As you can tell from the video, this was the best part of having a former delegate presenting to us.  He knew just how to keep us in stitches with all his whiteboarding and descriptions.  After I was done almost spitting my refreshments all over my laptop, he moved on to his only “slide”, which was actually a Visio diagram. I suppose this means that Derick has entered the Hall Of Slideless TFD Presenters. His approach to workflow automation actually got me a bit excited. He talked less about scripting commands or automating configuration tasks and instead talked about all the disparate systems out there and how the lack of communication between them can cause the silo effect present in many organizations to amplify.  I like Derick’s approach to using Junos to pull information in from various different sources to help expedite things like troubleshooting or process execution.  Leveraging other utilities like curl helps standardize the whole shooting match without reinventing the wheel.  If I can use the same utilities that I’ve always used, all my existing knowledge doesn’t become invalidated or replaced.  That really speaks to me.  Don’t make me unlearn everything.  Give me the ability to take your product and use additional tools to do amazing things.  That, to me, is the essence of SDN.
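The “use the tools you already know” idea can be sketched in a few lines: a troubleshooting script that builds curl invocations to pull state from several disparate systems during an incident. Every hostname and endpoint below is a placeholder of my own, not anything Derick showed.

```python
# Hypothetical systems a troubleshooting workflow might query.
SOURCES = {
    "ipam":       "http://ipam.example.com/api/subnet/10.1.1.0",
    "monitoring": "http://nms.example.com/api/alerts?device=edge-rtr1",
    "ticketing":  "http://helpdesk.example.com/api/tickets?state=open",
}

def curl_cmd(url: str, timeout: int = 10) -> list:
    """Build the familiar curl invocation; run it with subprocess.run() on-box."""
    return ["curl", "-s", "--max-time", str(timeout), url]

# What you'd actually execute for each source during troubleshooting:
for name, url in SOURCES.items():
    print(name, " ".join(curl_cmd(url)))
```

The appeal is exactly what the paragraph above says: nothing here is a new tool to learn, it’s the same curl you’ve always used, orchestrated from one place.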

If you’d like to learn more about the various Juniper products listed above, be sure to visit their website at http://www.juniper.net.  You can also follow their main Twitter account as @JuniperNetworks.


Tom’s Take

Juniper’s doing some neat things from what they showed us at NFD4.  They appear to be focusing on fabric technology, both from the QFabric converged networking overview and even the Virtual Chassis discussion.  Of course, protecting things is of the utmost importance, so Mykonos can prevent the bad guys from getting the goods in a very novel way.  Uniting all of this is Junos, the single OS that has all kinds of capabilities around SDN and now OpenFlow 1.3.  Sure, the demo gremlins hit them a couple of times, but they were able to keep the conversation going for the most part and present some really compelling use cases for their plans.  The key for Juniper is to get the word out about all their technology and quit putting up walls that try and “hide” the inner workings of things.  Geeks really like seeing all the parts and pieces work.  Geeks feel a lot more comfortable knowing the ins and outs of a process.  That will end up winning more converts in the long run than anything else.

Tech Field Day Disclaimer

Juniper was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, Juniper provided me with a hooded sweatshirt with the Juniper logo and some “I Wish This Ran Junos” stickers. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Cisco – Borderless Speed Dating

The first presentation of the final day of Network Field Day 4 brought us to the mothership on Tasman Drive.  The Cisco Borderless team had a lineup of eleven different presenters ready to show us everything they had.  For those of you not familiar with the term, Borderless Networks inside Cisco essentially means “everything that isn’t data center or voice.”  Yeah, that means routing and switching and security and wireless and everything else.  That also meant that we got a very diverse group of people presenting to us and a lot of short twenty minute videos of their products.  In a way, it’s very much like speed dating. With little time to get the point across, you tend to shed the unnecessary pleasantries and get right to the important stuff.

First up was the UCS team with new E-Series servers.  These are blades that are designed to slide into an ISR G2 router and provide a full-featured x86 platform.  It’s a great idea in search of an application.  I can still remember the AXP modules and how they were going to change my life.  That never really materialized.  The payoff use case that you are looking for is the second video above.  Cisco is starting to push the idea that you can contain a whole branch office in a single router and run not only the phone system and network routing and VPN, but now a light-duty server as well.  I’m not sure how many people will be looking to do that with virtualized server resources residing in the data center, but there was some discussion of using this as a temporary failover environment to push the branch server to the edge in the event of some kind of disaster or outage.  That might work better to me than running the entire branch on the router.  Of course, as you can tell, the demo gremlins found Cisco as well.

The next presentation was the new darling Cloud Services Router (CSR) 1000v.  This little gem got some face time on stage with John Chambers at Cisco Live this year.  It’s a totally virtualized router (hence the “v”) that can move workloads into the cloud when needed.  I’m really curious as to why this is included with Borderless, as this is a very data center specific play right now.  I know that Cisco is pushing this device currently as a VPN concentrator or MPLS endpoint for WAN aggregation.  It makes more sense from some of their diagrams to have it running inside a cloud provider network carving up user space.  I’m going to keep an eye on this one to see where the development goes.

Now, we get to something fun.  Cisco FlexVPN is what happens when someone finally took a look at all the different methods for configuring VPNs on the various Cisco devices and said “WTF?!?”  FlexVPN utilizes IKEv2 to help speed configuration.  You can watch the short video and see all the stuff that we have to deal with to configure a VPN today.  Cisco finally took our complaints to heart and made things a lot more simple.  Of course there are drawbacks, and with FlexVPN that means it only works with IKEv2.  There’s no backwards compatibility.  Of course, if you’re going to have to be migrating everything anyway, you might as well make a clean break and rebuild it right.  That’s going to make things like hub-and-spoke VPN configuration a whole lot less painful in the near future.  Props to Cisco for fixing a pain point for us.

Okay, so maybe I lied just a bit.  Since Cisco Unified Border Element runs on a router (even though it’s technically voice), we got a presentation about it!  I was in hog heaven here.  If you are looking at deploying a SIP trunk, you had better be looking at a CUBE box to handle the handoff.  Don’t think, just do it.  Listen to the voice of Amy Arnold (@amyengineer) and Erik Peterson (@ucgod).  You need this.  You just don’t know how much until you start banging your head against a wall.

More Voice!!!  By this point, I was practically crying tears of joy.  Two voice presentations in one day.  At a networking event no less!  This presentation on enhanced SRST shows how big of a kludge SRST really is.  I’m not a huge fan of it, but I have to configure it to be sure that the phone systems work correctly in the event of a WAN outage.  It’s all still CLI and very annoying to configure and keep in sync.  Thankfully, with the ESRST manager highlighted in the video above, we can keep those configurations in sync and even have it automagically pull the necessary configurations out of CUCM.  This software runs on a Service Engine right now in the router, but I can’t wait to see if Cisco ports it to a virtual setup to run under a CUCMBE 6000 server or even on a UCS-E blade down the road.  Anything that I can do to make SRST less painful is a welcome change.

Okay, this had to be one of the more interesting presentations I’ve been involved in at an NFD event.  We got our AppNav presentation over Webex from a remote resource.  I know this is a hot thing to do at Cisco offices to make sure we have the most talented people giving us the most up-to-date info about a particular subject.  However, I expect this when I’m in the middle of nowhere Oklahoma, not at the mothership in San Jose.  The Webex cut out now and then and there were times when we had to strain to hear what was being said in the room.  Looking back at the video, I marvel that the room mikes picked up as much as they did.  As for AppNav itself, it’s a virtual DC version of the Wide Area Application Services (WAAS).  My grasp of WAN acceleration isn’t as good as it should be, even after Infineta back at NFD3.  There’s some good info in here, I’m sure.  I’m just going to have to go back and digest it to see where it fits into my needs.

Now it’s time for some switching talk.  We got a roadmap on the Catalyst line.  There are some interesting tidbits in the slides, such as a monster 9000W power supply for the 4500 to support UPoE (more on that in a minute).  The 4500 is also going to get VSS support and ISSU support.  Those two things alone are going to make me start considering the use of the 4500 in the core of most of my smaller networks.  The fixed configuration Catalyst switches also have some nice roadmaps, including UPoE support and lots of IPv6 enhancements.  As I move forward in 2013, I’m planning on doing a lot with IPv6, so knowing that I’m going to have switching support behind me is a nice comfort.  Of all the updates, the most talked about one was probably the Catalyst 6500.  A switch that has been rumored to be on the chopping block for many years now, the venerable Cat6K is getting more updates, including FabricPath support and 100Gig module support.  I think this switch may outlast my networking career at this rate.  There are lots of rumors as to why Cisco is renovating this campus core stalwart once more, but it’s clear that they are attempting to squeeze as much life out of it as they can right now.  To me, the idea of stretching FabricPath down into the campus presents some very tantalizing opportunities to finally get rid of spanning tree on all but the user-facing links.  Let’s hope that the Cat6k sticks around long enough to get a gold watch and a nice pension for all the work it’s given us over the years.

Our next discussion was around security and using Cisco TrustSec to do things a little differently than we’re used to.  By now, I think everyone has talked your ear off about BYOD.  Even I’ve done it a couple of times.  It’s a real issue for people in the dark security caves because our traditional methods of access lists and so forth don’t work the same way when you’ve got employees bringing their own laptops or asking you to give them access to data from tablets or phones.  What this has morphed into is a need to do more role-based authorization.  That’s what TrustSec means to me.  Of course, a lot of previous attempts to do this, like NAC, haven’t really hit the mark or have been so convoluted that it was almost impossible to get them working correctly.  Today, Cisco has rolled all the functionality of NAC and ACS into the Identity Services Engine (ISE).  I’ve had a very brief encounter with ISE, so I know it has a lot of potential.  I want to see how Cisco will incorporate it into the bigger TrustSec picture to make everything work across my various platforms.

Time to turn up the juice.  Cisco brought out Universal Power over Ethernet (UPoE), which is their solution to pump up to 60 watts of power across a standard Ethernet cable to power…well, whatever it is that eats 60w of power.  Cisco’s doing this by taking 802.3at PoE+, which can pump 30w down the cable, and pushing an additional 30w of power down the other unused pairs.  Interestingly, Cisco talked to the people behind the ISO and EIA/TIA standards and found that when you have a bunch of unstructured cables each running around 50 watts (which is the 60w number above minus cable loss), you get a temperature in the cable bundle about 8-10 degrees above the ambient room temperature.  In reality, this means that 60w is the max amount of power you’re likely to ever get out of a Cat5e cable unless you chill it or have some kind of new material that can reduce the heating effect.  Cisco seems to be targeting UPoE to drive things like monitors, thin client desktops, and even those crazy command center touch pads that you see littered across the floor of a trading house or stock exchange.  This last item really makes me believe that UPoE is going to be positioned in the same vein as the ultra-low latency Nexus 3548 – financial markets.  Thin clients and command center touch panels are likely to be the kind of mission-critical devices these companies are willing to pay big bucks to power.  With the above-mentioned 9000w PS for the Catalyst 4500, you can see why we’re going to soon need to put a nuclear reactor in to drive these things.
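The UPoE power budget works out like this. The 10 W loss figure is implied by the article’s own 60 W source versus roughly 50 W delivered numbers, not pulled from the 802.3 spec itself.

```python
# Worked version of the UPoE numbers from the presentation.
poe_plus_watts = 30                  # 802.3at PoE+ over two pairs
spare_pair_watts = 30                # same again over the unused pairs
source_watts = poe_plus_watts + spare_pair_watts
cable_loss_watts = 10                # implied by the 60w vs ~50w figures
delivered_watts = source_watts - cable_loss_watts

assert source_watts == 60            # what the switch injects
assert delivered_watts == 50         # roughly what the device actually sees
```

That ~50 W per cable is what drives the 8-10 degree bundle heating figure, which in turn is why 60 W at the source is about the practical ceiling for Cat5e.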

Cisco Smart Operations dropped by to talk to us about Cisco Smart Install.  This is the feature that I tend to turn off when I see it by the telltale sign of “Error opening tftp://255.255.255.255/network-config.”  The Smart Operations team is doing its best to create an environment where an IT department that doesn’t have the headcount to send technicians to deploy remote site switches can leverage software tools to have those devices auto-provision themselves.  You can also configure them to automatically configure things like Smartport roles, which has never really been one of my favorite switch features.  Overall, I can appreciate where Cisco is wanting to go with this technology.  But, as a CLI jockey, I’m still a bit jaded when it comes to having part of my job replaced by a TFTP script.

The final Cisco NFD4 presentation was about application visibility and control.  This is a lot of the intelligence that is built into the Cisco Prime monitoring software that was demoed for us back at NFD3.  If you can identify the particular fingerprints of a given application, such as Telepresence, you can better determine when those fingerprints are out of whack.  I’m also excited because fingerprinting apps is going to be a huge part of security in the near future, as evidenced by Palo Alto’s app-based firewall and the others like Sonicwall and Watchguard that have followed along.  Even the Cisco ASA-CX is starting to come around to the idea of stopping apps and not protocols.

If you’d like to learn more about Cisco Borderless Networks, check them out at http://www.cisco.com/en/US/netsol/ns1015/index.html.  You can see an archive of the presentations and associated data sheets at http://blogs.cisco.com/borderless/networking-field-day-4-at-cisco-nfd4/.  You should also follow the Cisco Borderless team on Twitter as @CiscoEnterprise and @CiscoGeeks.


Tom’s Take

There you have it.  Lots of presenters.  Hours of video.  A couple of thousand words from me on all of it.  It’s almost exhausting to see that much information in a short span of time.  Some of the things that Cisco did with this presentation were great.  There were technologies that only needed a bit of time.  There were others that we could have spent an hour or more on.  I think that the next NFD presenters that want to try something along these lines should set up the first three hours with rapid fire presentations and reserve the last hour for us to call back to earlier presenters and hit them with additional questions.  That way, we don’t run out of time and we get to talk about the things that interest us the most.  Bravo overall to the Cisco Borderless team for breaking out of the mold and trying something new to keep the NFD delegates hooked in.

Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, they provided me with an 8GB USB drive with marketing collateral and data sheets. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Brocade – Packet Spraying and SDN Integrating

Brocade kicked off our first double session at Network Field Day 4.  We’d seen them previously at Network Field Day 2, and I’d just been to Brocade’s headquarters for their Tech Day a few weeks before.  I was pretty sure that the discussion that was about to take place was going to revolve around OpenFlow and some of the hot new hardware that Brocade had been showing off recently.  Thankfully, Lisa Caywood (@TheRealLisaC) still had some tricks up her sleeve.

I hereby dub Lisa “Queen of the Mercifully Short Introduction.”  Lisa’s overview of Brocade hit all the high points about what Brocade’s business lines revolve around.  I think by now that most people know that Brocade acquired Foundry for their ethernet switching line to add to their existing storage business that revolves around Fibre Channel.  With all that out of the way, it was time to launch into the presentations.

Jessica Koh was up first to talk to us about a technology that I hadn’t seen before – HyperEdge.  This really speaks to me because the majority of my customer base isn’t ever going to touch a VDX or an ADX or an MLXe.  HyperEdge technology is Brocade’s drive to keep the campus network infrastructure humming along to keep pace with the explosion of connectivity in the data center.  Add in the fact that you’ve got all manner of things connecting into the campus network, and you can see how things like manageability can be at the forefront of people’s minds.  To that end, Brocade is starting off the HyperEdge discussion early next year with the ability to stack dissimilar ICX switches together.  This may sound like crazy talk to those of you that are used to stacking together Cisco 3750s or 2960s.  On those platforms, every switch has to be identical.  With HyperEdge stacking, you can take an ICX 6610 and stack it with an ICX 6450 and it all works just fine.  In addition, you can place a layer 3 capable switch into the stack in order to provide a device that will get your packets off the local subnet.  That is a very nice feature that allows the customer base to buy layer 2 today if needed, then add on in the future when they’ve outgrown the single wiring closet or single VLAN.  Once you’ve added the layer 3 switch to the stack, all those features are populated across all the ports of the whole stack.  That helps to get rid of some of the idiosyncrasies of some of the first stacking switch configurations, like not being able to locally switch packets.  Add in the fact that the stacking interfaces on these switches are the integrated 10Gig Ethernet ports, and you can see why I’m kind of excited.  No overpriced stacking kits.  Standard SFP+ interfaces that can be reused in the event I need to break the stack apart.

I’m putting this demo video up to show how a demo during your presentation can be both a boon and a bane.  Clear your cache after you’re done, or log in as a different user, to be sure you’re getting a clean experience.  The demo can be a really painful part when it doesn’t run correctly.

Kelvin Franklin was up next with an overview of VCS, Brocade’s fabric solution.  This is mostly review material from my Tech Day briefing, but there are some highlights here.  Firstly, Brocade is using yet a third definition for the word “trunk”.  Unlike Cisco and HP, Brocade refers to the multipath connections into a VCS fabric as a trunk.  Now, a trunk isn’t a trunk isn’t a trunk.  You just have to remember the context of which vendor you’re talking about.  This was also the genesis of packet spraying, which I’m sure is a very apt description for what Brocade’s VCS is doing to the packets as they send them out of the bundled links, but it doesn’t sound all that appealing.  Another thing to keep in mind when looking at VCS is that it is heavily based on TRILL for the layer 2 interconnects, but it does use FSPF from Brocade’s heavy fibre channel background to handle the routing of the links instead of IS-IS as the TRILL standard calls for.  Check out Ivan’s post from last year as to why that’s both good and bad.  Brocade also takes time to call out the fact that they’ve done their own ASIC in the new VCS switches as opposed to using merchant silicon like many other competitors.  Only time will tell how effective the move to merchant silicon will be for those that choose to use it, but so long as Brocade can continue to drive higher performance from custom silicon, it may be an advantage for them.

This last part of the VCS presentation covers some of the real world use cases for fabrics and how Brocade is taking an incremental approach to building fabrics.  I’m curious to see how the VCS will begin to co-mingle with the HyperEdge strategy down the road.  Cisco has committed to bringing their fabric protocol (FabricPath) to the campus in the Catalyst 6500 in the near future.  With all the advantages of VCS that Brocade has discussed, I would like to see it extending down into the campus as well.  That would be a huge advantage for some of my customers that need the capability to do a lot of east-west traffic flows without the money to invest in the larger VCS infrastructure until their data usage can provide adequate capital.  There may not be a lot that comes out of it in the long run, but even having the option to integrate the two would be a feather in the marketing cap.

After lunch and a short OpenStack demo, we got an overview of Brocade’s involvement with the Open Networking Foundation (ONF) from Curt Beckmann.  I’m not going to say a lot about this video, but you really do need to watch it if you are at all curious to see where Brocade is going with their involvement with OpenFlow going forward.  As you’ve no doubt heard before, OpenFlow is really driving the future of networking and how we think about managing data flows.  Seeing what Brocade is doing to implement ideas and driving direction of ONF development is nice because it’s almost like a crystal ball of networking’s future.

The last two videos really go together to illustrate how Brocade is taking OpenFlow and adopting it into their model for software defined networking (SDN).  By now, I’ve heard almost every imaginable definition of SDN support.

On one end of the spectrum, you’ve got Cisco and Juniper.  A lot of their value is tied up in their software.  IOS and Junos represent huge investments for them.  Getting rid of this software so the hardware can be controlled by a server somewhere isn’t the best solution as they see it.  Their response has been to open APIs into their software and allow programmability into their existing structures.  You can use software to drive your networking, but you’re going to do it our way.  At the other extreme end of the scale, you’ve got NEC.  As I’ve said before, NEC is doubling down on OpenFlow mainly for one reason – survival.  If they don’t adapt their hardware to be fully OpenFlow compliant, they run the risk of being swept off the table by the larger vendors.  Their attachment to their switch OS isn’t as important as making their hardware play nice with everyone else.

In the middle, you’ve got Brocade.  They’ve made some significant investments in their switch software and protocols like VCS.  However, they aren’t married to the idea of their OS being the be-all, end-all of the conversation.  What they do want is Brocade equipment in place that can take advantage of all the additional features offered in areas that aren’t necessarily OpenFlow specific.  I think their idea around OpenFlow is to push the hybrid model, where you can use a relatively inexpensive Brocade switch to fulfill your OpenFlow needs while at the same time allowing that switch to perform some additional functionality above and beyond what’s defined by the ONF when it comes to VCS or other proprietary software.  They aren’t doing it for survival like NEC, but it offers them the kind of flexibility they need to get within striking distance of the bigger players in the market.

If you’d like to learn more about Brocade, you can check out their website at http://www.brocade.com.  You can also follow them on Twitter as @BRCDComm.

Tom’s Take

I’ve seen a lot of Brocade in the last couple of months.  I’ve gotten a peek at their strategies and had some good conversations with some really smart people.  I feel pretty comfortable understanding where Brocade is going with their Ethernet business.  Yes, whenever you mention them you still get questions about fibre channel and storage connectivity, but Brocade really is doing what they can to get the word out about that other kind of networking they do.  From the big iron of the VDX to the stackable ICX switches all the way to the planning in the ONF to run OpenFlow on everything they can, Brocade seems to have started looking at the long-term play in the data networking market.  Yes, they may not be falling all over themselves to go to war with Cisco or even HP right now.  However, a bit of visionary thinking can lead one to be standing on the platform when the train comes rumbling down the track.  That train probably has a whistle that sounds an awful lot like “OpenFlow,” so only time will tell who’s going to be riding on it and who’s going to be underneath it.

Tech Field Day Disclaimer

Brocade was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, Brocade provided me with a gift bag containing a 2GB USB stick with marketing information and a portable cell phone charger. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Spirent – Bringing The Tests To You

Day two of Network Field Day 4 kicked off with a visit to Spirent.  I was fairly impressed with their testing setup the last time and I wanted to see what new tricks they had in store for us this time around.  After a quick breakfast, we settled in for our first session.  Although this one wasn’t broadcast, we did get permission to talk about what they were showing us.

One of the issues that Spirent has with their setup is that it’s just so…huge.  While it is very accurate and can take just about everything you can throw at it, it’s not exactly the most convenient thing to haul around when you need to test something.  To that end, Spirent is looking at releasing a more compact unit that’s more in line with the needs of an enterprise testing setup.  The unit we saw was about the size of a desktop computer case, but Spirent says the final goal is a unit that’s about 1U in size and can be placed in a rack.  That way, you can grab the tester when you need to prove beyond the shadow of a doubt that it’s not the network (or the WAN connection or anything else Spirent can test).  Do remember that the smaller version of the unit does come with a compromise or two.  The most apparent one is the reduction in testing resolution from the nanoseconds of the big Spirent setup down to a few milliseconds on the enterprise version.  Truth be told, you probably don’t need the nanosecond resolution of something like a QFabric test when you’re just trying to test an enterprise network.  If a few milliseconds really does matter, then maybe you need to look into the bigger unit.

One of the other things that interested me about the new unit was the interface of the software itself.  Spirent has gone all out to make sure that it’s easy to start a test and set the parameters.  The metaphor they are using is that of a media player.  You can drag sliders to vary the size and number of packets as well as set other parameters.  When you’re ready to go, just press the oversized Play button and your test kicks off and runs until completion.  You’ll see a bit of this interface in a bit.

When we picked up the stream again, I got a bit excited.  Spirent has taken everything they know about testing and applied it to some interesting use cases.  No one can deny that we’ve entered a new phase of cyber warfare.  First, it was kids doing things for fun and reputation.  Then it was the career bad guys doing it for money.  Now we find ourselves dealing with advanced malware threats and state-sponsored cyberterrorism.  After some discussion about social engineering and other topics, we started talking about Spirent applying their testing methodologies to find vulnerabilities and alert you to them before they can be exploited.  Spirent has a huge library of thousands of tests that can be run against a multitude of applications on just about any OS platform, from Windows to iOS.

It’s demo time again!  Spirent fired up a demo environment running Linux and exploited a Jabber server with a bunch of attack traffic.  You can tell that this was a fairly thorough attack, as they went through several iterations before they finally found a vector.  Other tools that I’ve used just attack known holes and give up after one or two iterations.  Spirent has created a tool that can not only iterate on different surfaces, but you can also craft your own tests to take advantage of zero-day exploits in the wild.  That makes me a little more confident with their results, as they don’t quit until the test is finished.
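Spirent hasn’t shown us the internals of their engine, but the iterative behavior from the demo can be sketched as a simple loop that walks every attack surface with every payload and keeps going after the first hit, rather than quitting early.  All of the names below are my own illustration, not Spirent’s API:

```python
import itertools

def fuzz_all_surfaces(target, surfaces, payloads):
    """Try every payload against every surface and record all hits.

    Unlike a tool that gives up after one or two iterations, this
    keeps iterating until the whole test matrix is exhausted.
    """
    findings = []
    for surface, payload in itertools.product(surfaces, payloads):
        if target(surface, payload):  # True means the target misbehaved
            findings.append((surface, payload))
    return findings

# Toy stand-in for a server that chokes on oversized auth strings.
def toy_target(surface, payload):
    return surface == "auth" and len(payload) > 64

hits = fuzz_all_surfaces(
    toy_target,
    surfaces=["auth", "presence", "message"],
    payloads=["A" * 16, "A" * 128, "%s%s%s"],
)
```

The point of the sketch is the exhaustive iteration, which is what separates this approach from a scanner that only probes a list of known holes.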

Last up was Ameya Barvé with an overview of the new iTest Lab Optimizer. According to Ameya, one of the pains of lab operations is the lack of automation.  You never know who’s in the lab or who’s reconfigured it to support some wacky sidebar case.  iTest Lab Optimizer takes care of many of these problems by creating a system for lab reservation and topology creation.  By utilizing a layer 1 switch to interconnect the devices in the lab, you can use iTest to overlay the lab topology on top of it on the fly.  I can see the allure of having this kind of capability in a larger lab environment, and should my lab ever grow to the point where it’s not a collection of cables assembled on a side table in my office, I’m sure having a software program like this would be a great boon to speeding up test setup and execution.
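I don’t know how iTest Lab Optimizer models this internally, but the layer 1 trick is easy to picture: every device port gets patched once into a crosspoint switch, and a logical topology is just a set of crosspoint connections programmed on the fly.  A sketch of that idea, with invented names and structures:

```python
# Every lab device port is permanently patched to one crosspoint port.
PATCH_MAP = {
    "R1:Gi0/0": 1, "R1:Gi0/1": 2,
    "R2:Gi0/0": 3, "SW1:Gi1/0/1": 4,
}

def program_topology(links):
    """Turn a logical link list into crosspoint port pairs.

    Programming these pairs on the layer 1 switch 'wires' the lab
    without anyone touching a physical cable.
    """
    crosspoint = {}
    for a, b in links:
        pa, pb = PATCH_MAP[a], PATCH_MAP[b]
        crosspoint[pa] = pb
        crosspoint[pb] = pa
    return crosspoint

# Reserve a two-router topology for one test run:
topo = program_topology([("R1:Gi0/0", "R2:Gi0/0")])
```

The reservation system then just has to track which crosspoint ports are in use, which is a far easier problem than tracking who re-cabled what.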

If you’d like to learn more about Spirent, you can check out their website over at http://www.spirent.com.  You can also follow them on Twitter as @Spirent.  You can find a link to the Spirent slide decks at http://www.slideshare.net/spirent.


Tom’s Take

Spirent has some amazing testing gear.  I’ve said as much previously.  What they’ve done since our last meeting is take what they have and shrink it down to the point where it makes cost-effective sense for the rest of the world that doesn’t need to test high-end network gear day in and day out.  The newer portable testing suite should appeal to those people in the data center or service provider space who have SLAs to meet or constantly find themselves in arguments over performance numbers.  The rest of their presentation seemed to be an outgrowth of their testing strategies.  For instance, the zero-day cyberwarfare testing suite shows that they can apply their methodology of executing in-depth tests to a different market that requires a specific kind of results.  That shows me that someone inside Spirent is thinking outside their small niche.  The new iTest software shows me that Spirent is trying to address a pain point that many of us weren’t sure could even be addressed.  It also tells me that Spirent isn’t just a one-trick pony and that we should expect to see more good things from them in the near future.

Tech Field Day Disclaimer

Spirent was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, they provided me with a gift bag containing a coffee mug, a pen, and a golfing tool of some sort. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

More Network Field Day Coverage

Get More Juice From Your Network Lab – Network Sherpa

Opengear – A Box Full Of Awesome

Presenter number two at Network Field Day 4 was Opengear.  This was a company that I hadn’t heard much about.  A cursory glance at their website reveals that they make console servers among other interesting management devices.  Further searching turned up a post by Jeremy Stretch over at Packetlife about using one of the devices as the core of his free community lab.  If it’s good enough for Stretch, it’s good enough to pique my interest.

As you can see from the short opening, Opengear is dedicated to making network infrastructure management equipment like console servers as well as PDU management and environmental sensors.  Most interesting to me was the ACM5004-G unit the delegates received, which is a 4-port model with a 3G radio uplink.  They also make much denser devices, like the one in Stretch’s lab, for those who want a few more ports.  Most of the people I know who are looking at something like this for the CCIE lab use an old 2511 router with octal cables.  Those are fairly cheap on eBay, but you are taking a risk with the hardware finally wearing out and being out of warranty.  As well, there are a ton of features you can configure in the Opengear software (we’ll get to that in a minute).

Up next…is a caution for Opengear and other would-be Tech Field Day presenters.  Yes, I understand you are proud of your customer base and want to tell the world about all the cool people that use your product.  That being said, a single slide crammed full of logos, which I affectionately call “The NASCAR Slide,” may be a better idea than slide after slide of each company broken down by industry vertical.  You have to think to yourself that filling 8-10 slides of your deck with other people’s logos is not only wasting time and space, but also not doing a very good job of telling us what your product does.  All of the companies on that list probably use toilet paper as well, but we don’t see that on your slides.  Better to focus on your product.

Okay, now for awesome time.  Opengear’s management software has a bunch of bells and whistles to suit your fancy.  You can configure all manner of things, like multiple authentication methods for your users to prevent them from accessing consoles they aren’t supposed to see.  As the underpinnings of the whole Opengear system run on Linux, it’s no surprise that their monitoring software is built on top of Nagios.  This allows you to use their VCMS product to manage multiple disparate units.  Think about that.  You’re using the Opengear boxes to manage your equipment.  Now you can use their software to manage your Opengear boxes.  Those units can also be configured to “call home” over secured VPNs to ensure that your traffic isn’t flying across the Internet unencrypted.  VCMS can also use vendor-neutral commands to manage connected UPSes.  I can’t tell you the number of times having a device that could power cycle a UPS or PDU would have saved my bacon or prevented a trip across the state.  VCMS can even script responses to events, such as triggering a power cycle if a system is hung or stops responding.
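I don’t have Opengear’s exact syntax in front of me, but since the monitoring is Nagios underneath, the scripted power-cycle response presumably behaves like a Nagios event handler: do nothing until a check goes hard-down, then fire the recovery action.  A hypothetical sketch of that logic (the `pducontrol` command name is invented):

```python
def power_cycle_command(host, state, attempts, max_attempts=3):
    """Hypothetical event-handler logic: only power cycle a device
    once its check has gone hard-down (all retries exhausted).

    Returns the command to run, or None when no action is needed.
    """
    if state != "CRITICAL" or attempts < max_attempts:
        return None  # soft failure or recovery -- don't cycle power
    # Invented CLI: ask the console server to cycle the outlet
    # that the hung device is plugged into.
    return ["pducontrol", "--outlet", host, "--cycle"]
```

Gating the action on the final retry is the important part of the pattern; you don’t want one dropped SNMP poll yanking the power on a healthy box.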

Next up is a demo of the software.  Worth a look if you’re interested in the gory details of the interface:

We finished off the day with a talk about some of the new and interesting things that Opengear is doing with their devices.  I think the most interesting story was about configuring them to use a webcam to take pictures of people opening roadside boxes, upload the pictures to an FTP server running on the Opengear box, and then send the picture over 3G back to a central location.  Of course, everyone immediately seized on the salmon farm as the strangest use case.  It’s clear that Opengear has a great solution that is only really limited by your imagination.

If you’d like to learn more about Opengear and their variety of products, you can check out their website at http://opengear.com.  You can also follow them on Twitter as @Opengear.


Tom’s Take

I can’t count the number of times that I’ve needed a console server.  Just that functionality alone would save me a lot of pain in some remote deployments I’ve had.  Opengear seems to have taken this idea and run with it by adding some great additional functionality, whether it be cellular uplinks or software controls for all manner of third-party UPSes.  I think the fact that you can do so much with their boxes with a little imagination and some elbow grease means that we’re going to be hearing stories like the fish farm for a while to come.

Tech Field Day Disclaimer

Opengear was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4.  In addition, Opengear provided me with an ACM5004-G console server and a polo shirt. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Statseeker – Information Is Ammunition

The first presenter at Network Field Day 4 came to us from another time and place.  Stewart Reed came to us all the way from Brisbane, Australia to talk to us about his network monitoring software from Statseeker.  I’ve seen Statseeker before at Cisco Live, and you likely have too if you’ve been.  They’re the group that always gives away a Statseeker-themed Mini on the show floor.  They’ve also recently done a podcast with the Packet Pushers.

We got into the room with Stewart and he gave us a great overview of who Statseeker is and what they do:

He’s a great presenter and really hits on the points that differentiate Statseeker.  I was amazed that they said they can keep historical data for a very long period of time.  I managed to crash a network monitoring system years ago by trying to monitor too many switch ports.  Keeping up with all that information was like drinking from a firehose.  Trying to keep that data for long periods of time was a fantasy.  Statseeker, on the other hand, has managed to find a way to not only keep up with all that information but keep it around for later use.  Stewart said one of my new favorite quotes during the presentation: “Whoever has the best notes wins.”  Not only do they have notes that go back for a long time, but their notes don’t suffer from averaging abstraction.  When most systems say they keep data for long periods of time, what they really mean is that they keep the 15 or 30 minute average data for a while.  I’ve even seen some go to day or week data points in order to reduce the amount of stored data.  Statseeker takes one-minute data polls and keeps those one-minute data polls for the life of the data.  I can drill into the interface specs at 8:37 on June 10th, 2008 if I want.  Do you think anyone really wants to argue with someone who keeps notes like that?
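The averaging abstraction point deserves a concrete illustration.  Here’s a quick sketch (my numbers, not Statseeker’s) of how a 30-minute rollup average buries a one-minute saturation event that raw one-minute retention would preserve:

```python
# 29 quiet minutes at 10 Mbps, then one minute that saturated the link.
one_minute_polls = [10] * 29 + [950]

# Raw one-minute retention keeps the spike forever.
raw_peak = max(one_minute_polls)

# A system that rolls old data up into 30-minute averages keeps only
# this number -- the 950 Mbps event vanishes from the record.
rolled_up = sum(one_minute_polls) / len(one_minute_polls)
```

With the rollup, the record says the link never passed ~41 Mbps, which is exactly the argument you lose when the other side has better notes.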

Of course, what would Network Field Day be without questions:

One of the big things that comes right out in this discussion is that Statseeker doesn’t allow for custom SNMP monitoring.  Restricting the number of OIDs that can be monitored to a smaller subset is what allows for the large-scale port monitoring and long-term data storage that Statseeker provides.  I mean, when you get right down to it, how many times have you had to write your own custom SNMP query for an odd OID?  The majority of Statseeker’s customers are likely to have something like 90% overlap in what they want to look at.  Restricting the ability to get crazy with monitoring makes this product simple to install and easy to manage.  At the risk of overusing a cliche, this is more in line with the Apple model of restriction with a focus on ease of use.  Of course, if Statseeker wants to start referring to themselves as the Apple of Network Monitoring, by all means go right ahead.
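That common 90% is mostly standard IF-MIB counter math.  For example, interface utilization falls out of two ifInOctets polls taken a fixed interval apart, which is exactly the kind of well-known, fixed calculation a restricted OID set can support at scale.  A sketch of that math (mine, not Statseeker’s code; counter wrap is ignored for brevity):

```python
def utilization_pct(octets_then, octets_now, interval_s, if_speed_bps):
    """Percent utilization from two SNMP ifInOctets counter samples.

    ifInOctets counts bytes, so multiply the delta by 8 to get bits,
    then divide by the bits the link could have carried in the interval.
    """
    delta_bits = (octets_now - octets_then) * 8
    return 100.0 * delta_bits / (interval_s * if_speed_bps)

# Two one-minute polls on a 100 Mbps interface:
pct = utilization_pct(1_000_000, 76_000_000, 60, 100_000_000)
```

Run over every port once a minute, that little formula is most of what a port-monitoring product actually does all day.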

The other piece from this second video that I liked was the mention that the minimum Statseeker license is 1000 units.  Stewart admits that below that price point, the argument for Statseeker begins to break down somewhat.  This kind of admission is refreshing in the networking world.  You can’t be everything to everyone.  By focusing on long-term data storage and quick polling intervals, you obviously have to scale your system to hit a specific port count target.  If you really want to push that same product down into an environment that only monitors around 200 ports, you are going to have to make some concessions.  You also have to compete with smaller, cheaper tools like MRTG and Cacti. I love that they know where they compete best and don’t worry about trying to sell to everyone.

Of course, a live demo never hurts:

If you’d like to learn more about Statseeker, you can head over to their website at http://www.statseeker.com/.  You can also follow them on Twitter as @statseeker.  Be sure to tell them to change their avatar and tweet more.  You can also hear about Statseeker’s presentation in Packet Pushers Priority Queue Show 14.


Tom’s Take

Statseeker has some amazing data gathering capabilities.  I personally have never needed to go back three years to win an argument about network performance, but knowing that I can is always nice.  Add in the fact that I can monitor every port on the network and you can see the appeal.  I don’t know if Statseeker really fits into the size of environment that I typically work in, but it’s nice to know that it’s there in case I need it.  I expect to see some great things from them in the future and I might even put my name in the hat for the car at Cisco Live next year.

Tech Field Day Disclaimer

Statseeker was a sponsor of Network Field Day 4.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4. They did not ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Additional Network Field Day 4 Coverage:

Statseeker – The Lone Sysadmin

Statseeker – Keeping An Eye On The Little Things – Lamejournal

What’s My Cisco ATA Second Line MAC Address?

In the world of voice, not everything is wine and roses.  As much as we might want to transition everything over to digital IP phones and soft clients, the fact remains that there are some analog devices that still need connectivity on a new phone system.  The most common offender is the lowly fax machine.  Yes, even in this day and age we still need to rely on the tried-and-true facsimile machine to send photostatic copies of documents across the PSTN to a waiting party.  Never mind email or Dropbox or even carrier pigeon.  Fax machines seem to be the most important device connected to a phone system.  Normally, I leave the fax connections and their POTS lines intact without touching anything.  However, there are times when I don’t have that luxury.

In the case of the Cisco VoIP systems, that means relying on the Analog Terminal Adapter, or ATA.  The ATA allows you to connect an analog device to the unit, whether it be a fax machine or a cordless analog phone or even a fire alarm or postage machine.  It has many uses.  The configuration of the ATA is fairly straightforward under any CUCM system.  However, if you have a multitude of analog devices that you need to connect, you might opt to use the second analog port on the ATA.  The ATA 186 of the past and its current replacement, the ATA 187, both have 2 analog ports on the back.  There’s only one Ethernet port, though.  This is where the interesting part comes into play.  If there are two analog ports but only one Ethernet port, how do I configure the MAC address for the second port?  All phone devices in CUCM must be identified by MAC address.  On an ATA, the primary MAC address printed on the bottom or the side of the box is the address for the first port.

If you want to use the second port, you’re going to have to do a little bit of disassembly.  Cisco uses a standard method to create a new MAC address:

1.  Take the MAC address for port 1.  For example, 00:00:DE:AD:BE:EF.

2.  Drop the first two digits from the MAC address.  In the example, 00:DE:AD:BE:EF.

3.  Append “01” to the end of the 10-digit address.  Example, 00:DE:AD:BE:EF:01.
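The three steps above are simple enough to script.  Here’s a quick Python helper (my own, not Cisco’s) that derives the port 2 MAC from the port 1 address printed on the box:

```python
def ata_port2_mac(port1_mac: str) -> str:
    """Derive the Cisco ATA second-port MAC from the port 1 MAC:
    drop the first two hex digits, then append "01"."""
    digits = port1_mac.replace(":", "").replace(".", "").upper()
    derived = digits[2:] + "01"
    # Re-group into colon-separated octet pairs for CUCM entry.
    return ":".join(derived[i:i + 2] for i in range(0, len(derived), 2))

# The example from the steps above:
mac2 = ata_port2_mac("00:00:DE:AD:BE:EF")
```

Feed the result into CUCM as the device name for the second ATA and you’re done.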

Once you’ve completed those steps, take the MAC address you’ve just created and plug it into CUCM as a new ATA device.  Once you’ve completed the necessary steps to create the new device, it will register with the DN you’ve assigned to it.  Then you can start calling or faxing it to your heart’s content.

There’s no mention of the secondary MAC address anywhere on the web interface.  You’d figure it wouldn’t be hard to write some HTML function to read the MAC address and perform the above operation.  The Cisco documentation buries this information deep inside the setup document.  I’ve even searched Cisco’s very own support forums and found all manner of advice that doesn’t work correctly.  I decided it was time to jot this information down in a handy place for the next time I need to remember how to configure the ATA’s second port.  I hope you find it useful as well.

When Demos Attack

The demo. The holy grail of live, interactive presentation. The point where the rubber meets the road. The seductive allure of a live demonstration drives most technical presentations. Slide after slide gets boring, even with cutesy animations. Audiences can quickly get lost with the droning monotony of slide recitation. However, a demo gives them something to focus on. A live system generates real data and shows what you can do. Questions come up and the answers are right there at the tips of your fingers. However, the demo’s siren song can lead to doom if you don’t navigate the waters carefully. Even the most polished demos can fail. Steve Jobs learned this during the launch of the iPhone 4. Tech presenters learn it every day when Mr. Murphy comes calling.

Demos are not inherently bad. In fact, the upside is astounding. The problem comes in the execution. Having been a veteran of many demo presentations, both good and bad, as well as a presenter and demonstrator myself, I thought I’d share a couple of ideas about demos and how to keep yours from heading south.

1. Make Sure Your Demo Is Interesting – I can’t stress enough how important this point is. Not all things make for good demos. Even things that you think may be the most awesome stuff on the planet can be boring or distracting for your audience. Watching someone type command after command into a CLI window is boring. However, watching a short command instantiate a software load balancer and kick back a list of the configuration is exciting. Watching someone pull up a screen on a phone and poke around is passe. Watching that same phone pull up live info from the Internet and book a reservation at a restaurant for you is much better. The key is to keep the audience on the edge of their seats. You must make the demo compelling and make them want to see where you’re going. The NFD4 Juniper Mykonos demo was exciting because you could see the build-out of an attack from inception to execution to response. Watching them put up a Google map projection of the attacker’s area with links to local legal counsel was a hilarious moment, but it illustrates the engagement aspect. On the other hand, the Aerohive BR100 iPad provisioning demo from WFD2 missed the mark a bit. Why? Because watching someone configure an AP is pretty pedestrian to the audience. Screens full of config values make the eyes go blurry. I understand the power and awesomeness underneath the ability to provision 15 branch offices from a tablet. I just don’t want to see how the sausage is made in this case. Maybe having a script run automatically or making it flashier would keep attention focused on the “why” and not the “how.” And if your demo involves a task that needs some time to run to completion, please make sure to fill that time appropriately. Watching a status bar fill up on screen is like nails on a chalkboard to a presentation audience. Avoid long pauses if you can, but if you must have one, kick off the first part of the demo and move on with your presentation while the magic happens in the background. Infineta figured this out at NFD3. Since their long-distance vMotion demo was going to take twenty minutes no matter what, they let it run while they whiteboarded algorithms. Don’t make your audience stare at boredom.

2. Test Your Demo Under Real World Conditions – This was Steve’s mistake during the iPhone 4 demo. People practice their demos and presentations religiously (or at least they should). They keep staring at screen after screen to ensure everything is automatic. But sometimes they forget that all those practice runs don’t represent reality. Yes, an iPhone will access the web just fine in an empty auditorium at Moscone. It’s a different story when an audience full of phones and tablets and laptops all melt the wireless with a tidal wave of packets. Steve forgot to make sure that his practice runs looked like the audience makeup he’d see that day. Just as important, make sure that your demo environment doesn’t do wacky things. Hiccups in dry runs should give you a hint that everything needs to be ironed out before you do it for real. Make your demo setup simple, because you also have to remember that you’re under the gun and nervous as hell up there. Derick Winkworth’s SIP demo failed not because of technology, but because he was typing the wrong password into the software. Derick knew the password. But he got flustered because we gave him a hard time about his password earlier in the demo. Doing a live demo is like a trapeze act without a safety net. Be sure you’ve tested your act enough under the big top so you won’t fall.

3. Have A Backup Plan – Just like the most recent SpaceX Falcon 9 rocket launch, you can’t assume that everything is going to work right. You need a backup plan. That includes everything in your presentation. Backup slide decks in case your USB drive dies or the drivers aren’t installed. Backup video adapters in case you thought there was HDMI but there is really only VGA. However, if your presentation has a demo, you had *better* have a backup plan. As above, wireless networks can be unreliable in conference centers. VPN connections can fail at a moment’s notice. Files can get moved. Systems can be shut off. Be ready to roll when it looks like your demo is going south. Instead of tap dancing, move over to a local version. Spin up a backup VM on your laptop and show your demo from there. If your files are gone or your machine is down, have a simple animation showing what was going on. Or go for broke on the whiteboard. Diagram everything and make the audience help you out. Don’t let the hiccups derail you. Be ready to go. And in the event that even your backup plan fails, don’t tap dance around it. Apologize and move on. We’ve all seen demos that fail, and we know that not everything goes right.


Tom’s Take

I love great demos. I love being engaged and seeing live systems work. But every time someone pulls out a demo at a presentation, I feel a bit hesitant. I’ve been fortunate enough to be on this side of some great demos. However, I’ve also seen and had some fail spectacularly. If you take into account the things I outlined above, you can minimize the chance that your demo will fail. That way the conversation will center around something awesome and not around shaking heads and embarrassed smiles.

Shadow IT – What Evil Lurks In The Heart Of An Admin?

I’ve been hearing the term Shadow IT quite a bit recently.  According to the Fount of All Knowledge, Shadow IT refers to networks and systems built inside organizations without official approval.  I found it curious that people started referring to this almost five years ago, yet a cursory search for “shadow IT” turns up a *ton* of articles written in the last six months.  At first, I wondered if the trend of BYOD had finally petered out a bit.  After all, once you’ve assaulted the populace with a headline every day for at least two months, they kind of grow accustomed to it and get bored seeing it all the time.  Then I wondered why a five-year-old concept should be hot now.  Then it hit me.

I’d never heard of Shadow IT because it was never a “thing” for me.  The idea that a lab computer or a non-production testing system might be moved into production work wasn’t an obstacle to the way I’d done things in the past.  As a matter of fact, it’s the way I’ve done things for most of my career.  In order to replace our aging 3Com NBX phone system, I installed Cisco CallManager in a lab and let the sales folks use it to make conference calls one week.  They were so impressed with the quality of the calls that they made me rip out the old system and put in the new one the following month.  The whole virtualization strategy around here grew out of one box running ESX Standard for a VM migration test.  After people discovered how flexible things were inside of a virtualized environment, our server strategy going forward was naturally focused around our brand new ESX cluster.  Even our network was a series of cobbled-together parts scavenged from the four corners of the globe at a time when the engineering staff needed gigabit connectivity and we had no budget to accomplish it.  Slowly, one piece at a time, we assembled our entire setup without direct authorization and formal approval.  While it was nice to be called to a meeting about a new feature and be told, “Yeah, we’ve been running that for the last three months,” there were huge weaknesses in the plan.

With a hodge-podge network assembled over the course of months or years to address tactical problems, you have huge support headaches in the event of failures.  Untangling the knots of interconnected systems becomes a lot harder when you keep uncovering devices you knew nothing about.  That new awesome voicemail server?  It’s running on ESXi on a new server that was originally provisioned for lab use.  All well and good until I’m out of the office and someone needs to restart it after a power failure.  Worse still when they have to remember to connect via the VMware client to restart the VM itself.  Extra pain and effort introduced because of the need to move quickly to implement something.  That’s just the lab side of things.  Let’s not talk about things like Dropbox or GMail.  Even though I know it’s not technically the right way to do things, my job is quickly reaching the point where I’m dependent on Dropbox.  I keep notes and firmware images in mine that sync between all my systems.  My presentations go in there.  So do PDFs and software images.  If someone decided to block Dropbox tomorrow, I’d be screwed.  I avoid keeping sensitive data in there as a matter of habit, but just about every other important thing is either in a Dropbox or has been copied there at some point.  GMail is another method frequently used to avoid large attachment size limitations or mailbox quotas.  That’s under the best of circumstances.  I’ve used GMail to test incoming and outgoing mail at a number of sites.  I use it to test mail routing and NAT translations of mail servers.  That’s just the legitimate uses.  Think about the number of IT people that use GMail as a way to skirt eDiscovery rules and Freedom of Information actions.  I’ve seen that several times.

BYOD has caused people in management to start looking at their networks and systems a bit closer than they have in the past.  What used to be the big, dark hole where data entered and information came out is now being scrutinized with great fervor because of the possibility of exposure.  Now, instead of turning a blind eye toward the IT department with a mantra of “just make it work,” management must take into account that running insecure devices or non-tested configurations can lead to trouble down the road.  Trouble that someone occasionally has to answer for, either in the press or in a court of law.  That makes management skittish.  That explains why this is now an important point of contention in IT.  Rather than taking the easy road of results, we now instead must focus on the whole process.  Ample documentation must exist at every step of the way not as a record of implementation, but instead as a way to assign liability and protect people.  In essence, that’s really what Shadow IT is about.  Never mind the challenges of creating systems from untested technology.  It all comes down to who gets the blame when things go wrong and how that can be proved when the yelling starts.

I’ve already made a commitment to do my best to avoid the kinds of last-minute solutions that are implicated in the Shadow IT movement.  I’m not going to do away with my lab or with piloting solutions before implementing them.  What I will do is make sure there is a clearly defined plan in place in the event that the lab solution needs to be moved into production.  I’ll also be sure that all the involved parties agree on the best course of action before the solution is put in place so there can be no arguing or finger pointing after the fact.  The easiest way to get rid of Shadow IT is to shine the Light of Documentation on it.  Then those of us in IT aren’t looked upon as the crazy vigilantes of networking and systems and instead we can get back to being the harmless recluses that our secret identities portray.

Velcro for VAR Engineers

When I was younger, I must have watched The Delta Force about a hundred times. One of the things I loved in that movie was the uniforms the Delta guys wore. Jet black, covered in cargo pockets, and very useful. The most compelling feature, however, was the velcro on the shoulders and chest. The Delta troopers could remove the patches on their uniforms whenever they needed to be anonymous, then put them back on at will. I loved this idea. As time has gone on, I’ve noticed the same kind of capability on the new military BDUs. Rank insignia, unit affiliation, and even the name tag are all velcro patches that can be removed, reapplied, and changed as needed.

This idea of configurable uniforms finally hit home for me the other day when I was going through my closet looking for a vendor-specific shirt. Yes, I know that Greg has decried the plumage of the vendor in a previous blog post, but as a VAR engineer I’m a bit hamstrung. Sometimes, I need to put on my Aruba shirt or my Cisco jacket or my Aerohive tuxedo. Customers feel a bit reassured when you’re wearing a shirt from the company that you’re pitching. However, I’ve noticed that all these shirts start looking alike after a while. I have the same Dri-Fit Nike polo shirt with four different vendor logos. I have the same dark blue polo with three other vendor logos. I think I have a Cisco shirt in every color of the rainbow. I even have shirts that don’t fit anymore with fun old logos, like my Master CNE. Why do I need to have that many logo shirts in my closet? Why can’t I have a little more control over my VAR uniform?

That’s when it hit me. Let’s bring velcro configurability to vendor polo shirts. A velcro patch over the left breast and maybe another couple on the sleeves. Think of the possibilities. Now, instead of worrying about which vendor shirt I’m going to wear in the morning, I can just pick out the black one or the red one. Then, when I’m ready to brand myself, I just need to pick out the appropriate patch and slap it on the velcro. No fuss, no muss. If I wear the wrong vendor shirt today, it can cause some embarrassing issues. With the patch system, I just remove the errant patch and replace it in seconds. Much easier than trying to keep track of which shirt I shouldn’t be wearing to a particular site. You could even make a show of it. When it’s time to get work done, make a big production of taking your patch out and slapping it on. When you need to be “off the record” about something, make a theatrical gesture of ripping the identification patch off your shirt as if to say, “I’m not with this company right now. Here’s what I think.” It would be practical as well as awesome.

Sure, there are details to work out. Even getting the vendors to start offering velcro patches would be a huge step in the right direction. I’m all for this, as it means I can finally take a little more control over my wardrobe. Now where did I put that sewing machine?