The Three B’s of Innovation

Innovation is key in our lives.  It allows us to go from adding on our fingers to having a calculator capable of solving quadratic equations.  It shows that we can graduate from playing checkers all the way up to tossing disaffected fowl at inferior porcine construction workers.  Innovation, however, is hard work.  It requires significant time and investment to realize anything good.  Helen Hanson said, “Inspiration is the windfall from hard work and focus.”  However, in the tech arena, innovation doesn’t always seem to come from hard work.  I always like to think of the source of tech innovation as coming from three different options: building it, buying it, or b*tching about it.

Building It – This one is the classical source of inspiration.  Look at companies like Selsius developing CallManager, or Cognio’s spectrum analyzer chips.  More often than not, these companies are led by brilliant individuals with a kind of hyper focus that allows them to shut out all distractions and build the next better widget.  Ryan Woodings of MetaGeek is a great example.  He spent lots of time and hard work analyzing a need and creating a brilliant product to address it.  Companies that go out on a limb to build better widgets help make the world better with new ideas to address our needs and desires.  The only problem with this kind of development is the amount of time it takes to come up with these ideas.  You must be willing to invest a large amount of resources to achieve your goals.  How many inventors and innovators have gone broke trying to realize their dreams?  Even Ryan had to struggle with keeping his day job until MetaGeek took off and became lucrative enough to be his new day job.  Large companies like Cisco and Juniper suffer from the same problem on a different scale.  They must sink huge budgets into research and development in order to create new products.  Sometimes, the people in charge don’t take well to R&D budgets spiraling out of control, so they cut back and risk stifling innovation.  You also run into issues with your company being brilliant, but unknown.  How many times have you heard the phrase, “The best company no one’s ever heard of”?  Chances are the kind of company that has brilliant people with an acute ability to focus on product innovation may not have the vision to tell people about their wonderful new widget.  If that happens, they may wither and die before they get famous because no one knew who they were and how great their product was.  In many cases, this leads to…

Buying It – If you don’t have a particularly inspirational idea or an innovative team to design something new, but you happen to be sitting on a pile of cash the size of a dump truck, you always have the option of buying innovation.  This isn’t always a bad thing.  If you have a company that has a great product and no exposure, you can take your money and invest it in the technology and people and market it yourself.  It’s also faster in a lot of cases to purchase the time and effort that someone else has put in rather than reinventing the wheel.  John Chambers at Cisco has a philosophy that if Cisco can’t be the best in a market, they’ll acquire the first or second place company and make them the best.  Cisco bought Selsius and Cognio and Perfigo and Protego and a whole host of other companies.  HP has purchased Colubris and 3Com and 3PAR.  Microsoft recently purchased Skype.  Even Juniper has purchased Trapeze.  Acquisitions happen all the time.  At one point, Cisco was even funding engineers to branch out and start their own companies to perform research and development into new product lines.  If they made good, Cisco would come back and buy the company and rehire the old engineer as the new director of that particular product’s development.  That’s not to say that buying your R&D is always the best method.  Eventually, you will run out of cash to buy companies, and then your research and innovation goes kaput.  Many of the best companies are just too big to swallow, no matter how much money you have to buy them.  It’s also very tough to integrate the development teams from purchased assets into your fold.  How many times do you see a company get purchased, then have the lead team leave six to eight months later?  People who are used to having total control over the entire creative process sometimes don’t take kindly to corporate oversight.  They have a difference of opinion and out the door they go, ostensibly to form a new company around the same idea.

B*tch About It – I’m seeing this becoming a bigger and bigger method of innovation recently.  It seems that when a company comes up with an idea, a competitor does their best to knock it down several pegs.  Whether it be to buy time to bring their own strategy to market or to pitch an idea with a totally different product, the idea is that by complaining (often loudly), you can force your people to innovate while making sure the other guys can’t sell their widgets.  For a particular example, I’m going to pick on TRILL.  Right now, TRILL isn’t a standard.  Some companies, like Cisco and Brocade, have come out with proprietary features that function much like TRILL eventually will, but don’t totally interoperate.  Others have decided to do something totally different, like Juniper or HP.  Right now, the battle of PR is being waged by Juniper in regards to their QFabric being totally different from TRILL or Cisco’s FabricPath.  HP is waging a PR war against Cisco from the standpoint that FabricPath is not standards-based, while HP is content to wait for TRILL to become ratified and rely on their own proprietary IRF solution in the interim.  To me, the innovation here isn’t so much centered around what any one particular company is capable of, but rather what their competitors are incapable of doing.  FabricPath isn’t a standard.  IRF doesn’t scale to the Nth node.  QFabric is currently a pipe dream without substance.  Every vendor is guilty of attacking their competitors rather than extolling the virtues of their particular solution.  If you spend your time telling me what your competitor’s product doesn’t do rather than concentrating on what your product does, you are much more likely to convince me to buy your widget, or so the thinking goes.

Tom’s Take

Creation and innovation take resources.  Plain and simple.  You’re either going to use brainpower to invent something, money to buy something, or hot air to talk about how much something sucks.  I love people that are creative enough to think of things that no one else has thought of before.  Late night television is littered with them.  Others aren’t as creative, but have the resources to make those creative people prosper in a great environment.  These kinds of innovators are tried and true and have my utmost respect.  The last group, the whiners and complainers, not so much.  The amount of effort it takes to sell me on the idea that someone else’s hard work isn’t up to snuff could be better directed toward developing new products or ideas.  That negativity can be turned into hugely positive things if only we take the time to focus.  I make an effort to stop presenters once their message becomes all about how something isn’t great and make them tell me about how great their thing is instead.  You quickly see who the true innovators are.  Real innovators are proud of their accomplishments and won’t hesitate to talk about them.  The whiners and complainers fall apart when they aren’t allowed to use negativity to sell themselves.  Think about that the next time you get ready to innovate.

Cut Me Some SLAAC, Or Why You Need RA Guard

The Internet has been buzzing for the last couple of weeks about a new vulnerability discovered in IPv6 and the way it is interpreted by networking devices.  Firstly, head over to Sam Bowne’s excellent IPv6 site and read his assessment of the attack HERE.  What is being exploited is a “feature” in IPv6.

Since IPv6 doesn’t use Address Resolution Protocol (ARP), it relies on ICMP and Neighbor Discovery messages to determine neighbors on the network.  It also uses Router Advertisements (RA) to build a picture of how to get off the local network.  When the Stateless Address AutoConfiguration (SLAAC) flag is set in the RA, the local host will choose an address from the announced address space and begin using it.  This is a great addition to the protocol, since it allows a network admin to set up an automatic addressing scheme that isn’t reliant on a server like DHCPv6.  However, from a security standpoint, it introduces some possible problems.  If a malicious host on the network were to start sending RA packets to the LAN, it could act as a man-in-the-middle and start influencing packet routing.  Worse still, if the attacker isn’t really interested in rerouting packets, they could just take the anarchist’s approach and burn the whole network down with a specially-crafted DoS attack.  By flooding a ton of RA messages onto the local network with different network address spaces, the attacker can cause the CPUs on Windows and FreeBSD boxes to spike to 100% and stay there indefinitely.  This is because the host system continually tries to process the RAs flooding in from the network and starts trying to pick address space in every network announcement that it hears.  This consumes all its resources with updating the routing table and addressing the adapters on the system.  This could cause problems for your end users should an attacker get into a position to launch RAs into your LAN.  Right now, there are a couple of ways to fix this:

1. Disable IPv6 – Okay, this doesn’t fix the problem; it just makes it go away.

2.  Disable RAs on the local network – Again, not a fix, just hiding it.  Plus, this breaks SLAAC, which I see as a real advantage of IPv6.

3.  Install a firewall or ACL on your host-facing ports to block RAs or filter out the ICMPv6 packets carrying them.
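To make the SLAAC process described above a bit more concrete, here’s a rough sketch (in Python, purely for illustration) of the modified EUI-64 derivation a host performs to build an address from an advertised prefix.  The prefix and MAC address below are made up for the example:

```python
# Sketch: modified EUI-64 SLAAC address derivation (illustration only).
# Given an advertised /64 prefix and the host's MAC address, the host
# flips the universal/local bit of the MAC, inserts ff:fe in the middle,
# and appends the resulting 64-bit interface identifier to the prefix.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return net[iid]                        # prefix + interface identifier

print(slaac_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e"))
# -> 2001:db8:1:2:21a:2bff:fe3c:4d5e
```

Every bogus prefix in a flood of rogue RAs triggers this derivation, plus the routing table updates that go with it, which is exactly why the DoS hits host CPUs so hard.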

What I find even more interesting about this whole affair is the response of the three biggest players in the game in regards to the issue.  Let me sum it all up using their words:

Microsoft – This is Working As Intended (WAI).  We don’t plan on fixing this.

Juniper – We need to work with the IETF and figure out a standard solution to address this problem, and until we do we aren’t patching against it.

Cisco – We fixed this last year, and by the way have you heard of RA Guard?

Cisco has implemented a solution very similar to what they do with DHCP snooping on IPv4 switches.  They call it RA Guard.  As defined in RFC 6105, RA Guard can be enabled on all host-facing switchports to filter RAs from non-trusted sources.  In this case, the trusted source would be a switchport you know to contain a valid router, so you wouldn’t enable RA Guard on it.  The RFC defines a discovery method using Secure Neighbor Discovery (SeND) that made me chuckle because the four states of the discovery are the same as 802.1w Rapid Spanning Tree.  Seems we’re never going to get rid of it.  When you enable SeND-based RA Guard discovery, it can dynamically scan the network for devices broadcasting RAs and block or allow them as necessary.  That way, you don’t have to worry about misconfiguring a switchport and killing all the advertisements coming from it.  By enabling RA Guard on a Cisco switch running 12.2(50)SY, you can effectively mitigate the possibility of an unauthorized attacker DoSing your entire network with what amounts to a script-kiddie style attack.
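Conceptually, the filtering decision RA Guard makes is simple, and can be sketched as a toy model.  To be clear, this is an illustration of the idea, not Cisco’s implementation, and the port names are hypothetical:

```python
# Toy model of RA Guard's filtering decision (not Cisco's implementation):
# a Router Advertisement (ICMPv6 type 134) is only accepted if it arrives
# on a port that has been marked trusted, i.e. known to face a real router.
ICMPV6_RA = 134

def ra_guard_permits(icmpv6_type: int, ingress_port: str,
                     trusted_ports: set[str]) -> bool:
    if icmpv6_type != ICMPV6_RA:
        return True                  # RA Guard only polices RAs
    return ingress_port in trusted_ports

trusted = {"Gi1/0/48"}               # hypothetical uplink to the real router
print(ra_guard_permits(ICMPV6_RA, "Gi1/0/48", trusted))  # True
print(ra_guard_permits(ICMPV6_RA, "Gi1/0/7", trusted))   # False: host port
```

Everything else, from the SeND-based discovery to the per-port state machine, is refinement on top of that one check.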

Tom’s Take

Take a vulnerability that has been known about for two years but swept under the rug, add in a dash of vendor disregard, and shake until the Internet security community is frothing at the mouth to tell you that you should turn off IPv6 on your entire network.  Sounds like a recipe for overreaction to me.  I’m not denying that it is a serious vulnerability.  In fact, given that IPv6 is enabled by default on current Windows versions, it could cause issues.  That is, unless you are smart and take measures to fix the issue rather than sweeping it back under the rug.  Rather than just turning off IPv6 until someone other than Microsoft releases a patch, we need to work through the problem and fix the underlying security issue.  At the same time, this needs to be agreed upon by the major networking vendors sooner rather than later.  The longer this issue exists in its present form, the more security sensationalists can point to it and decry one of the advantages of IPv6 on your network, when in fact they should be focusing on the lack of security in the software that allows anyone to masquerade as an IPv6 router.  Then, maybe we can cut IPv6 a little bit of slack.

Flexing Your Muscles – HP Networking Announcements

It’s Interop week, which means a lot of new product announcements coming out on all kinds of interesting hardware and software.  HP is no different, and a couple of the products they’ve got on tap look to have interesting implications for how we think about the campus network in the coming months.

HP Flex Network Architecture

HP has announced a new network design methodology they refer to as the Flex Network Architecture.  It addresses many of the deficiencies that HP is seeing in network design related to thinking about networks in old ways.  There has been a lot of innovation in networking in recent months but it has been focused primarily on the datacenter.  This isn’t all that surprising, as most of the traction around building large networks has included buzzwords like “cloud” or “fabric” and tend to focus on centralizing the exchange of data.  HP wants to take a great number of these datacenter innovations and push them out to the campus and branch network, where most of us users live.

HP’s three primary areas of focus in the Flex Network are the Flex Fabric, the datacenter area that includes unified computing resources and storage; the Flex Campus, the primary user-facing network area where users connect via technologies such as wireless; and the Flex Branch, where HP is focusing on creating a user experience very similar to that of the Campus users.  To do this, HP is virtually eliminating the lines between what have historically been referred to as “SMB” or “Enterprise” type networks.  I believe this distinction is fairly unnecessary in today’s market, as many branch networks could technically qualify as “Enterprise”, and a lot of campus networks could realistically be considered “SMB”.  By marrying the technology more toward the needs of the site and less to a label, HP can better meet the needs of the environment.

According to HP, there are five key components of the Flex Network:

1. Standards – In order to build a fully resilient architecture, standards are important.  By keeping the majority of your network interoperable with key standards, it allows you to innovate in pockets to improve things like wireless performance without causing major disruptions in other critical areas.

2. Scalability – HP wants to be sure that the solutions they offer are scalable to the height of their capability.  Mike Neilsen summed this up rather well by saying, “What we do, we do well.  What we don’t do well, we don’t do at all.”

3. Security – Security should be enabled all over your network.  It should not be an afterthought, it should be planned in at the very beginning and at every step of the project.  This should be the mantra of every security professional and company out there.

4. Agility – The problem with growing your network is that you lose the ability to be agile.  Network layers quickly overwhelm your ability to make changes rapidly and keep latency low.  HP wants to be sure that Flex allows you to collapse networks down to at most one or two layers to keep them running in top condition.

5. Consistency – If you can’t repeat your success every time, you won’t have success.  By leveraging HP’s award-winning management tools like IMC, you can monitor and verify that your network experience is the same for all users.

The focus of the Flex Campus for this briefing is the new A10500-series switch.  This is a new A-series unit designed to go into the core of the campus network and provide high-speed switching for data bound for users.  This would most closely identify with a Cisco Catalyst 6500 switch.  The A10500 is the first switch in HP’s stable to provide IRF across up to 4 chassis.  For those not familiar, Intelligent Resilient Framework (IRF) is the HP method of providing Multi-Chassis Link Aggregation (MLAG) between core switches.  By linking multiple chassis together, one logical unit can be presented to the distribution layer to provide fault tolerance and load balancing.  HP’s current A-series switches support only 2 chassis in an IRF instance, but the 4-chassis IRF technology is planned to be ported to them in the near future.

One of the important areas where HP has done research on the campus core is multimedia.  With the world becoming more video focused and dedicating more and more bandwidth to things like HD Youtube videos and rich user-focused communications, bandwidth is being consumed like alcohol at a trade show.  HP claims the A10500 outperforms the Cat 6500 with Sup720 modules, reducing latency by 75% while increasing switching capacity by almost 250%.  There is also a focus on aggregating as many connections as possible.  The launch LPUs (or line cards in Cisco parlance) are a 16-port 10GbE module and a 48-port 1GbE module, with plans to include a 4-port 40GbE and a 48-port 10GbE module at some point next year, which should provide 270% more 10GbE density than the venerable Cat 6500.

The A10500 comes in three flavors: a 4-slot chassis that uses a single crossbar fabric to provide better than 320 Gbps of throughput, and two 8-slot chassis variants that mount the LPUs either vertically or horizontally and provide no less than 640 Gbps of throughput.  These throughput numbers are courtesy of the new MPU modules, what Cisco people might call Supervisor engines.  The n+1 fabric modules that tie all the parts and pieces together are called SFMs.  This switch isn’t really designed to go into a datacenter, so there is no current plan to provide FCoE LPUs, but there is strong consideration to support TRILL and SPB in the future to ease interoperation with datacenter devices.

Another new product that HP is launching is focused on security in the datacenter.  This comes courtesy of the new TippingPoint S6100N.  This is designed to function similarly to an IDS/IPS box, inspecting traffic flying in and out of your datacenter switches.  It has the ability to pump up to 16 Gbps of data through the box at once, but once you start inspecting more and more of that traffic, you’ll probably see something closer to 8-10 Gbps of throughput.  The S6100N also gives you the ability to have full visibility for VM-to-VM conversations, something that is currently giving many networking and security professionals grief, as much of the traffic being generated in today’s datacenter is generated by and destined for virtual machines.  I think there is going to be a real opportunity in the coming months for a device that can provide this kind of visibility without impeding traffic.  HP looks to have a great future with this kind of device.

The third new piece of the Flex Network Architecture is the addition of Intelligent Management Center (IMC) 5.0 to the Flex Management portion of the Flex Architecture.  The flagship software program for HP’s network management strategy, IMC provides Single Pane of Glass (SPoG) functionality to manage both wired and wireless networks, as well as access control and identity management for both.  It integrates with HP Network Management Center to allow total visibility into your network infrastructure, whether it consists of HP, Procurve, Cisco, 3Com, or Juniper gear.  There are over 2600 supported devices in IMC that can be monitored and controlled.  In addition, you can use the integrated access controls to control the software being run on end user workstations and monitor their bandwidth utilization to determine whether a small number of users is monopolizing your bandwidth.  IMC is available for installation on a dedicated hardware appliance or in a virtual machine for those that have an existing VMware infrastructure.  You can try out all the features for 60 days at no charge to see if it fits into your environment and helps alleviate “swivel chair” management.

Tom’s Take

The new HP Flex Architecture gives HP a great hook to provide many new services under a consistent umbrella.  The new addition of the A10500 gives HP a direct competitor to the venerable Cat 6500 platform that can provide high speed connectivity to the campus core without muddying the waters with unnecessary connectivity options, à la the A12500 or Nexus 7000 with their high-priced FCoE capabilities.  The S6100N helps HP begin to focus on the new landscape of datacenter security, where firewalls are less important than visibility into traffic flows between physical and virtual servers.  The IMC update allows all of these pieces to be managed easily from one operations center with no additional effort.  It seems that HP is gearing up to spread out from their recent success in the datacenter and take the fight to the campus and branch office.  I liked what I heard from HP on this call, as it was more of what HP could do and less of what Cisco couldn’t.  So long as HP can continue to bring new and innovative products to the networking marketplace, I think their fortunes have nowhere to go but up.

I Hate NAT! Or Do I…?

Thanks to Packet Pushers episode 43, it appears that I’m going to be known as Tom the Networking Nerd, the guy that hates NAT.  I suppose that’s as good as anything to be known for, and the trick to being a famous celebrity blogger is getting yourself a hook, like Apple Fanboy or General Grumpy Networking Guy.  I know that I made some pretty bold statements during the show, and while some of it may have been playing into an image, I’d like to take a few words to clarify my position on why I dislike NAT.

Network Address Translation goes all the way back to RFC 2663.  At its heart, it is simply a mechanism for modifying IP packets and the information in their headers when they cross a routed boundary.  As I said in my IPv6 presentation, I do really believe that basic NAT serves a purpose in a network.  As a protocol, NAT isn’t inherently bad or good.  Instead, it is the application of the protocol by people that colors it in my eyes.  Allow me to give an example that I feel is very similar to NAT: Bittorrent.  I think Bittorrent is a very useful protocol for file downloading.  By using the number of users accessing a file to amplify the speed at which it is downloaded, it helps alleviate the effect of having thousands of people downloading a file at once.  It also has built-in error correction, as each file piece is hashed upon receipt.  It’s very useful for downloading large, error-prone files like Linux ISOs or even World of Warcraft patches.  However, Bittorrent has an evil side.  Since it is so useful for downloading large files or collections of files, it has been appropriated by the underground file sharing movement as a method for distributing movies, games, and applications less-than-legally.  The opponents of this activity say that Bittorrent encourages people to break the law, much in the same way that the motion picture industry argued that VCRs encouraged movie piracy back in the 1980s.  The counter argument here is that Bittorrent isn’t good or bad; instead, the intentions of the user must be taken into account.  I feel the same way about NAT.

NAT breaks things.  End-to-end communications between nodes are disrupted by NAT boundaries.  VoIP packets die when they hit NAT walls.  VPNs don’t particularly care for it either.  We’ve had to come up with things like STUN and ICE to work around a workaround.  It’s getting embarrassing when you have to fix the fix that you put in place.  That’s not to say that NAT doesn’t have its good points, however.

NAT is great when two companies need to merge and have the same address space in use during the transition.  NAT is a good idea if you need to obscure IP addresses for some reason.  Note that I didn’t say anything about securing them.  If you hear a security “expert” claim that NAT provides security, you have my explicit permission to beat them within an inch of their life.  Beyond a few cases, I think NAT has been used as a kind of “Internet duct tape” and has been stretched long past its original use case into something dastardly.  I realize that without RFC 1918 and Port Address Translation (PAT), we would have run out of IPv4 addresses long ago.  NAT has become a necessary tool in the IPv4 world, and I accept that.  However, the paradigm of NAT is starting to be extended into IPv6 unnecessarily.  That’s the part that I don’t agree with.
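For readers who haven’t looked under the hood of PAT, here’s a minimal sketch of the translation state a PAT device keeps.  The addresses and port numbers are illustrative, and real implementations track far more (protocol, timers, TCP state):

```python
# Minimal sketch of a PAT (overload NAT) translation table: many inside
# (address, port) pairs are multiplexed onto one outside address by
# assigning each flow a unique outside source port.
import itertools

class PatTable:
    def __init__(self, outside_ip: str):
        self.outside_ip = outside_ip
        self._next_port = itertools.count(20000)   # arbitrary starting port
        self._table = {}                           # (inside_ip, port) -> outside port

    def translate(self, inside_ip: str, inside_port: int):
        key = (inside_ip, inside_port)
        if key not in self._table:
            self._table[key] = next(self._next_port)
        return (self.outside_ip, self._table[key])

pat = PatTable("203.0.113.1")
print(pat.translate("10.1.1.10", 51000))   # ('203.0.113.1', 20000)
print(pat.translate("10.1.1.11", 51000))   # same outside IP, new port: 20001
print(pat.translate("10.1.1.10", 51000))   # existing flow reuses its mapping
```

The reason end-to-end protocols like VoIP and VPNs suffer is right there in the table: the outside world only ever sees the rewritten tuple, so anything that embeds the real inside address in its payload breaks.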

People are looking at IPv6 and wondering how to apply their IPv4 knowledge to it.  That in and of itself is nothing new.  When I was first learning how to do VoIP stuff, I used my networking background to assimilate new terms.  I used to say things like, “Oh! A calling search space is just like an access list.”  While it was great for me when I was just learning, I found out later that the metaphor wasn’t quite accurate.  In much the same way, network engi…rock stars are starting to plan IPv6 deployments and they are looking for analogies for the IPv4 things they are used to.  When they don’t see things like NAT or RFC 1918, it makes IPv6 feel alien to them.  For proof of this, you need not look any further than RFC 1884 – site-local addresses.  This was an idea to reserve fec0::/10 for use only within a specific site.  While link-local addressing has many uses, the idea of what amounts to RFC 1918 addresses for IPv6 is kind of silly.  Why not just use your IPv6 global addresses instead?  RFC 3879 thankfully removed site-local addressing from our lexicon for the time being, until it was reintroduced in RFC 4193 with a slightly different name.  The same thing happened (in my opinion) with NAT-PT.  People were looking for something they saw in IPv4 in IPv6 unnecessarily.
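As an aside, Python’s standard `ipaddress` module still knows about both ranges, which makes for a quick way to see the deprecated site-local space (fec0::/10) and its unique local address successor (fc00::/7) side by side:

```python
# The deprecated site-local range (fec0::/10, removed by RFC 3879) and
# its ULA replacement (fc00::/7, RFC 4193) as seen by Python's stdlib.
import ipaddress

print(ipaddress.ip_address("fec0::1").is_site_local)        # True (deprecated)
print(ipaddress.ip_address("fd12:3456:789a::1").is_private) # True (ULA)
print(ipaddress.ip_address("fd12:3456:789a::1").is_global)  # False
```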

I feel that NAT in IPv6 will encourage laziness.  I know there is no such thing as infinite address space, but we’ve got enough address space right now to outlast this Internet and whatever the next revision is going to look like.  If we allow NAT64 to take hold in the networking world, we are giving our blessing to people to keep doing the same things they’ve been doing with IPv4 all over again.  It’s the networking equivalent of the famous Santayana quote, “Those who cannot remember the past are condemned to repeat it.”  If we allow people to implement IPv6-as-IPv4, we aren’t going to do ourselves any favors.  Many of the non-address space related reasons to go to IPv6 evaporate when NAT is introduced.  We eliminate wonderful things like end-to-end communications solely for the sake of ease-of-configuration.  Bah, I say.
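For reference, the address mangling NAT64/DNS64 performs is straightforward: the IPv4 address is embedded in the low 32 bits of a /96 prefix, typically the well-known 64:ff9b::/96 from RFC 6052.  A quick sketch:

```python
# NAT64/DNS64 address synthesis: embed the 32-bit IPv4 address in the
# low bits of the well-known prefix 64:ff9b::/96 (RFC 6052).
import ipaddress

def synthesize_nat64(ipv4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    return net[int(ipaddress.IPv4Address(ipv4))]

print(synthesize_nat64("192.0.2.33"))   # 64:ff9b::c000:221
```

Simple as the mapping is, the translator in the middle still has to keep per-flow state, which is exactly the kind of NAT baggage being carried forward.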

I’ve always been a proponent of building it right the first time.  Even if it takes a little longer to learn how to do it, as long as the implementation is done correctly, everything will work out in the end.  As an integrator, a large portion of my time is spent cleaning up after other people’s mistakes.  All too often in IT, we patch a problem and declare it “good enough”, only to spend twice as long down the road fixing what we should have fixed the last time.  If we start using NAT unnecessarily in IPv6, it will come back to bite us down the road.  People will avoid using IPv6 internally because “it’s too much trouble”, and just use NAT64 to talk to the IPv6 Internet.  A lot of good work will have been for naught due to the laziness of some.

There are already a lot of v4-to-v6 transition mechanisms out there.  F5 is looking at using load balancers to do 4-to-6 translation.  Proxy servers can help.  Tunneling can be had in any flavor you want, even the much-hated Teredo protocol.  Incidentally, Teredo was developed because Microsoft needed a specification to get around…NAT boundaries.  If we absolutely need to do something other than the ideal of running dual stack, we can use some of the other mechanisms that have been devised that will allow us to transition to where we need to be rather than coming back to our old NAT kludge.

Tom’s Take

I once was able to work on a network that didn’t have NAT.  Globally routable IP addresses as far as the eye could see.  And yet, due to a voice implementation that didn’t have enough address space, NAT still came back to bite me in the ass over and over again.  You can see why that might make me grumpy.

In the end, I really don’t hate NAT as much as I might let on.  Yes, it’s a necessary evil in IPv4.  I don’t want people to keep using it as a crutch instead of designing proper networks with good security policies.  I want people to start thinking about IPv6 as a great new opportunity instead of thinking about it in the same old way.  If designers had kept thinking about phones in the same old way, we would never have gotten the iPhone or the Nexus S.  If computer designers had thought about portable PCs in the same old way, we wouldn’t have iPads or Galaxy Tabs or Chrome Cr-48s.  I want to encourage people to solve problems with the new tools they have been given rather than falling back on broken ideas from the 90s.  I might not be so opposed to NAT64 if it had an expiration date.  Use it if you want, but be sure to have it out of your network no later than December 21, 2012 or the world will end.  I suppose I could live with that.  But it doesn’t mean I have to like NAT.

Nerd Tips – Broken Execution Association

Here’s a quick tip for those of you out there that might find yourself fighting off an offending virus or malware program that keeps coming back no matter what you try, such as Win 7 Antivirus 2011.  This particular program does have a little trick that it likes to pull in order to keep itself in memory.  When an executable file (EXE) is launched in the system, usually a set of keys in the registry are consulted to find out what to do with the file.  Most often, the file itself is run with a command string like “%1”, which calls the file.  The malware program inserts itself in front of the execution string, so that every time you try to launch a program to fight off the crapware, like Malwarebytes Anti-Malware for instance, the virus just launches instead and reinfects your system.

Should you find yourself in this quandary, unable to launch the programs needed to disinfect yourself, take heart.  An old DOS trick can be used to get yourself right as rain.  In the old days, executable files came in a format other than EXE.  DOS used a file format of COM to execute simple little programs like COMMAND.COM or DOSSHELL.COM.  COM files were originally simple, with very little code and no metadata in the header.  Likewise, when Windows 3.x was just a program executing on top of DOS, it preserved the executable format of the COM programs.  Fast forward to Windows 7, and you will see that this convention is still honored.  If you find yourself unable to launch REGEDIT.EXE or MBAM.EXE and instead keep launching the virus, do the following:

1.  Launch a command prompt (CMD.EXE or COMMAND.COM if necessary).  You might have to launch it as an administrator to make some changes to system files.

2.  Find the file you need to execute, like REGEDIT.EXE.

3.  Use the following command to rename the file: ren REGEDIT.EXE REGEDIT.COM

Sounds simple, eh?  You’ll find that when the file is displayed, it won’t have the neat icon it used to.  Instead, it will look like a generic DOS executable file.  That’s perfectly fine.  When you double-click the file to launch it, it will fire right up.  This is because the COM file association as an executable file format is usually not changed by the malware writers, since very few COM files are still used on modern systems.  Following these steps, you can get Malwarebytes to load and disinfect your system, bypassing the EXE file lockout.  Malwarebytes will even repair the EXE association for you, so when you reboot you’ll be back to normal.  Just remember to go back and rename the file you changed to a COM file back to an EXE file.

As a disclaimer, this process doesn’t work 100% of the time, and if the malware writer was smart enough to screw up the COM file association, you’re doubly screwed.  Don’t go mucking around in your system registry changing things unless you know what you’re doing, since a screwed registry will really kill your system fast.  Use caution, logic, and if all else fails, find a systems rock star to help you out.

Note, I reference Malwarebytes as a removal tool not because of any consideration on their part, but instead because it just works.  I’ve installed the trial on many computers for people that tend to get infected over and over, and it really helps cut down on their infection rates.  Try it out, and don’t forget to buy it if you find it useful.  Every penny they get goes to help cut down on the amount of crap out there trying to infect your system over and over again.

Tech Movies

On occasion, a trending hashtag pops up on Twitter that I think is quite funny.  Today, it was #TechMovies.  However, I know that most people that follow me probably don’t care to listen to my inane ramblings about these kinds of things.  I’m pretty sure that if I gave it my all, I’d have about 2 followers by the end of the day.  In an effort to spare my followers the horrors that swim around in my mind, I decided instead to post my list to my blog.  Read ahead at your own peril…

Still here?  Okay, here come the #TechMovies:


A Root Bridge Too Far

A View to a TRILL

On Her Majesty’s Shared Secret Service

Event Split Horizon

SDLC Punk!

Friday the 0xd

License to Kill -9

Fort httpd

The ’emerge world’ according to ARP (two for one)

BGP Neighbors

The Matrix “shutdown -r now”

DNS the Menace

The OS X 10.7 Lion King

SLAXers

Hard Route Target

Harry Potter and the Chamber of Enable Secrets

A League of Their PWN

Chitty Chitty ! !

DOS Bootdisk

Universal Image Soldier

Major Version League

ntop Gun

10 Fast 10 Furious

What About Microsoft Bob?

Visual BASIC Instinct

The EIGRP Sanction

Manos: Hands of Fate Sharing

Under Siege 2: Dark Fiber Territory

IPv6 for Enterprise Networks – Review

Unless you’ve been living in a cave with Tony Stark for the past several months, you are well aware that IPv6 is a reality that can’t be ignored by today’s networker.  Part of the issue affecting IPv6 adoption is the lack of good reading material.  Yes, there are mountains of RFCs out there that talk about IPv6 in nauseating detail.  However, these documents aren’t all that accessible for the average network rock star that is working 50 hours a week and doesn’t have time to pore over page after page of dry Internet-ese.  There have been some great posts about IPv6 for the common man from people like Jeremy Stretch and Chris Jones, but there is a segment of the population that would rather read about the subject from a vendor source.  Enter Shannon McFarland and company:

[Book cover: IPv6 for Enterprise Networks]

Cisco Press graciously provided a copy of this book for my evaluation.  Clocking in at a svelte 361 pages, this tome has a great wealth of IPv6 information from a design perspective.  There are some code examples for your networking gear, but much of the discussion in this book revolves around IPv6 design and building your network right the first time.

Chapter 1 starts off the same way many network rock stars will when pitching IPv6 to their company: with a discussion of the market drivers for IPv6 adoption.  Even though networking professionals know IPv6 is inevitable, the C-level executives will most likely need some additional convincing.  This chapter is great for them to hear about the reasons why IPv6 is necessary.  Chapter 2 is an overview of the Cisco hierarchical network design, now expanded to include IPv6 content.  If you’ve seen any network design documentation in the past decade, this should be a review for you.  Just note the IPv6 sections.

Chapter 3 starts the meat of the book.  This chapter discusses the coexistence mechanisms that you are likely to face when prepping your network, since we are going to need to run IPv4 alongside IPv6 no matter how much we might not want to.  Tunnels and NAT64 get some discussion, along with running IPv6 over MPLS.  Chapter 4 discusses the various network services that will need to be IPv6 aware to help run your network, such as OSPFv3 or BGP.  Multicast gets a healthy amount of discussion, since it is such a crucial component of IPv6.  Chapter 5 is a short one, discussing the planning that one will need to go through for implementing an IPv6 infrastructure.  This is more of the paperwork and staging behind the scenes that might be boring, but in an enterprise is critical for painless IPv6 deployment.

Chapter 6 is the largest chapter and will most likely be where you spend most of your time.  This is a soup-to-nuts campus IPv6 deployment.  The authors analyze the deployment from three different perspectives: the dual stack (my favorite), the hybrid model that is useful for non-IPv6 applications, and the service block model, which allows you to bring IPv6 online in sections.  Every facet of your network is analyzed in this chapter, from VLANs to routing protocols to QoS and other network services.  If you are going to be deploying IPv6 in your network in the future, you’d do well to just dog-ear Chapter 6 so you can turn back to it quickly.
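To give you a taste of the dual stack model, here’s a minimal IOS interface sketch.  The addresses come from the private and documentation ranges and the interface name and process IDs are just examples, so treat this as a shape rather than a recipe:

```
! Allow the router to forward IPv6 traffic at all
ipv6 unicast-routing
!
! One interface, both protocol stacks running side by side
interface GigabitEthernet0/1
 ip address 192.168.10.1 255.255.255.0
 ipv6 address 2001:db8:10::1/64
 ipv6 ospf 1 area 0
!
! OSPFv3 process for the IPv6 side of the house
ipv6 router ospf 1
 router-id 1.1.1.1
```

The nice thing about dual stack is visible right in the config: IPv6 is additive, so the IPv4 side keeps working untouched while you light up v6 one interface at a time.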

Chapters 7 through 10 deal with specific cases of IPv6 deployment to support use-cases, such as virtualization, branch offices, datacenters, or remote access.  They exist so that you can quickly reference these scenarios as needed, since you may not need to worry about deployment of IPv6 in a datacenter in your environment for instance.  The authors do a wonderful job of explaining all the things you might need to take into account in your deployment of ancillary technologies, such as Microsoft protocols to be aware of or application requirements that may not necessarily be network dependent.

Chapter 11 is all about managing your shiny new IPv6 network through things like Netflow and SNMP.  Careful attention should be paid if you don’t want to find yourself chasing poltergeists in your network at 3 a.m. on a Sunday.  Chapter 12 gives you a great breakdown of parts and pieces that would be great to construct a lab to pilot your IPv6 implementation before unleashing it on your live network.  That way, IPv6 doesn’t invoke the Resume Generating Event (RGE) protocol.

Tom’s Take

I liked this book quite a bit.  There is a ton of good information to be found inside for all levels of network rock star, from those just learning about deploying IPv6 to the poor souls that find themselves mired in a remote access IPv6 deployment gone wrong.  With a big focus on proper network design, IPv6 for Enterprise Networks ensures that you don’t have to rebuild your IPv6-enabled network after a short time due to bad design decisions or compromises. Every scenario I’ve seen discussed concerning IPv6 deployment is laid out in clear language, with both pros and cons for deployment.  I highly recommend picking up this book as soon as you can so your journey down the IPv6 yellow brick road starts off smoothly and you can avoid the pitfalls before you encounter them.

As a bonus, if you are going to Cisco Live 2011 in Las Vegas, Shannon McFarland is giving a session based around this book, BRKRST-2301 Enterprise IPv6 Deployment.  If you aren’t averse to 8 a.m. sessions the morning after the Customer Appreciation Event, you should sign up and check it out.  I plan on bringing my book so that Shannon can autograph it.  That way, I can claim I met him before he became a gigantic IPv6 rock star.

Laser Beam Eyes – My LASIK Experience

Just like any good nerd out there, I have vision issues.  While I’m capable of reading things close up, once you get past arm’s length it all gets blurry.  I wore glasses for a couple of years in middle school before switching to contact lenses for my primary form of vision correction.  Allow me to state for the record that I was the worst contact lens wearer imaginable.  30-day extended wear pairs would last me 8 months.  I left them in all the time, even when I slept.  The only time I wore glasses is when I couldn’t stand the contacts any longer, and that usually lasted about an hour because I couldn’t stand my glasses either.  I always wanted to be free of the plastic and glass I was forced to use to avoid bumping into large objects.

Enter laser vision correction, commonly referred to as LASIK.  I’d looked at getting it for several years, but I never looked too deeply.  I figured I’d get around to it sooner or later.  Last year, my eye doctor asked me if I’d ever considered getting LASIK.  It seems that having a stable prescription for a decade makes you a good candidate.  She did some preliminary tests in her office and found that my corneas were the proper thickness to perform the procedure.  And with that, I started investigating all the possibilities.  There are lots of different options out there for people that want to use the power of the almighty laser to fix vision issues.  Lucky enough for me, I fell into the category of “average”, meaning my prescription wasn’t crazy enough to cause issues with fixing my eyes.

For those not familiar with the process, the doctor essentially cuts a flap in your eye, peels back that flap, and uses the laser to correct your vision on the cornea itself.  In essence, the doctor is creating a permanent contact lens for your eye.  No need to take it out every night and wash it, or worry about losing it in the ocean.  Always there, always correcting your vision.  After chatting with a couple of different doctors, I settled on Dr. Gary Wilson at ClearSight Center.  His plan seemed to meet my needs and wasn’t overpriced.  While I was willing to spend whatever it took to make sure I could see at the end of the procedure, I also didn’t want to break the bank on useless add-ons.

The pre-op appointments were pretty standard.  They measured my eyes and double-checked my prescription.  They told me that I would need to have my glasses on for at least two weeks, since the eyeball needs to settle back into a normal shape if you are a long-term contact wearer.  It seems contacts deform the eye slightly.  Once I had my contacts out for the requisite two weeks, there were a few last minute checks and I thought I was off and running.  Except…since Dr. Wilson is the only eye surgeon at the center, if he’s sick the whole operation shuts down.  And since Dr. Wilson caught a bit of a stomach bug, my surgery was off the table for its original date, April 15th.  A reschedule for the following Tuesday was also met with disappointment, as Dr. Wilson was still not quite up to surgery.  As I would rather have my eye doctor performing at full capacity, I rescheduled for April 26th.  As a side note to you network people out there, this goes to show that a one-person operation can be a disaster when that one person is unavailable for any reason.  Spread out your knowledge so that having a single person down doesn’t mean having your whole business down.

Surgery day started out a little nerve wracking.  I had to fill out a few forms, including writing out a paragraph of an agreement long hand.  It had been so long since I’ve written anything in cursive I almost forgot how to write.  After the forms were filled out, the waiting began.  It took about an hour before they were ready for me.  After stepping back into the operating area, I was given a sexy shower cap to wear on my head and cool shoe covers as well.  I asked for one of those peek-a-boo hospital gowns but was met with blank stares and shivers of revulsion.  Then, the eyedrops started.  Antibiotic drops, anesthetic drops, drops to clear my redness.  All in all, I think I had eight different eyedrops administered over the course of the next twenty minutes.  Not just a drop or two either.  It felt like Niagara Falls splashing against my face.  I also got to take a steroid to aid the healing process and two different anti-anxiety medications to keep me from being jittery.  Not that they helped totally, as the idea of having my eyes operated on coupled with the hospital-like atmosphere (not my favorite of places) led me to have a small panic attack right before I went back.  Thankfully, the nurse was right there with a 7-UP and a package of delicious crackers.  Maybe the crackers had Valium hidden in them…

Once the doctor was ready, it was showtime.  I walked back into the room and laid down on what was essentially a massage table.  I fit my head back into the little headrest and the doctor and nurses explained the procedure to me.  All I really had to do was stare straight ahead and follow a little light.  Easy, right?  After taping my left eye shut to prevent me from getting hurt by errant laser blasts, the doctor placed a device over my right eye.  This was basically the most uncomfortable portion of the procedure, as it felt like someone was pressing down on my eye for about thirty seconds, during which time everything was black.  What was happening was that the device was creating the flap on my eye, slicing off a section of my cornea.  I elected to go with a bladeless cornea cut, as the idea of having someone put a razor blade close to my eyeball wasn’t pleasant.  Once the flap was created, the device was removed and my vision returned.  I then had to stare at a green light over my head so the laser could get a reference point for my eyeball.  There was a tracking system positioned around my head so that if my eye twitched even slightly, the laser would shut off instantly to prevent damage.  Not that it was entirely necessary, as the amount of medication I’d been subjected to made sure my eyes didn’t twitch.  The doctor warned me that my vision was about to get very blurry.  Boy, he wasn’t kidding.  Like, fifteen beers blurry.  The green light I was supposed to be staring at went from looking like a pinpoint to a whole constellation.  This was due to the doctor flipping my cornea flap up to laser my eye.  Once ready, a 9-second laser burst was all it took to correct 20 years of bad vision.  The laser left a chemical smell in the air like burning hair, but I tried not to think about it as I stared at the green constellation of lights above me.
Nine seconds later, the doctor flipped my cornea flap back down and smoothed it out with a little plastic tool.  As my eyeball was numbed to the point of barely existing, it was a little surreal to watch him touch my eye with something that I couldn’t feel.  He made me close my eye and taped it shut so he could move on to the left eye, since I had elected to have them both done at once.  The left eye required an eleven second laser burst, due to a slight amount of astigmatism.  Afterwards, my eyes were rinsed out with some saline solution, and I stood up for the first time in twenty years able to see without glasses or contacts.

The post-op was fairly uneventful.  I was informed I shouldn’t read or use a computer for about 24 hours.  I should only watch TV and try to take as many naps as I could so my eyes would start healing.  I was given a regimen of eye drops to take four times daily to help prevent infection.  I was told that any time I felt my eyes getting dry, I should use artificial tears to keep them wet and lubricated.  Other than that, it was pretty easy compared to other post-op instructions I’ve heard.

Tom’s Take

Overall, LASIK was a great success for me.  Twenty-four hours later my vision was 20/16, which is a step better than the average person.  I know that over the course of the next few months the healing process will cause my vision to fluctuate some.  As long as I end up with 20/20, I’ll be damn happy.  I haven’t tried to drive at night yet, so I’m not sure of the effects of night halos around light sources.  I can say that I’m a little more sensitive to sunshine.  It’s not painful, but I do notice the sun being a lot brighter than usual outside.  I hope that the next few months will prove to be as good as the last forty-eight hours.

If you are a good candidate for LASIK, I highly recommend the procedure.  The ability to not worry about glasses or contacts when you wake up in the morning is more than worth it.  There was no pain at all, and the procedure was the epitome of fast and easy.  There is no reason why everyone shouldn’t enjoy the fruits of modern technology like this.

The (Trendy) Games People Play

A few weeks ago, Twitter decided to push out an update to the iOS client software that helped them better monetize their service.  The quick bar at the top of the window that has now become infamously known as the “Dick Bar” forced me to look at all the trending topics on Twitter at the present time.  I don’t normally care for the junk that starts trending on Twitter, and after taking a good look at some of the inane things that kept popping up, I thought I was going to go insane.  It was right then and there that I had to take drastic measures to restore my sanity.

My distaste for Network Address Translation (NAT) is no secret to the people that follow me on Twitter.  That in and of itself could be a whole series of blog posts.  Instead, I decided to take my hatred of all things NAT and combine it with a trending hashtag in an attempt to have a little fun with things.  Usually, the hashtags I pick are simple questions or statements.  I figure by tacking on something about NAT, I’ll either confuse the non-network rock star people on Twitter or get a few laughs out of my followers.  Either way, it keeps me from doing more devious things in my insanity.  As such, some examples of what I have come up with so far:

#ifitwasuptome NAT would require an advanced feature license. That way, if you really want to use it, it’s gonna cost you.

NAT is a really bad idea. #SixWordFact

#saynoto NAT. It’s a gateway drug that leads to NAT-on-a-stick, policy-based NAT, and worst of all carrier-grade NAT.

As you can see, a seemingly innocuous hashtag has been corrupted for my crusade against the WD-40 of the network world.  WD-40 because if the packets are stuck on an RFC 1918 network, NAT helps get them unstuck.  I plan on having a lot more fun with this game.  I’ll even start adding in more topics, like IPv4.  If you have suggestions, don’t hesitate to shout them out.  If nothing else, it’ll help make Twitter a little more sensible for those of us in the networking profession.

Live with the Nerd – My Cisco Live 2011 Schedule

Since my friend Jeff posted his Cisco Live schedule, I thought I’d do the same so you could see what I’m going to be interested in when I get to Las Vegas in July.

Saturday
Arrive at the Mandalay Bay in Las Vegas

Sunday
4:30 PM – 6:00 PM  GENCOL-1001 Collaboration Welcome Session
6:30 PM – 8:30 PM  GENCOL-1002 Collaboration Welcome Reception

Monday
8:00 AM – 9:30 AM  CUG-4663 IP Communications Product Direction
10:00 AM – 11:30 AM  CUG-4665 Unified Communications and Messaging Product Direction
12:30 PM – 2:30 PM  BRKUCC-2006 SIP Trunk design and deployment in Enterprise UC networks

Tuesday
8:00 AM – 9:30 AM  BRKEVT-3304 Advanced CUCM – Tandberg Call Control Troubleshooting
10:00 AM – 11:30 AM  Conference Event GENKEY-4700 Keynote and Welcome Address
12:30 PM – 2:30 PM  BRKNMS-2035 Ten Cool LMS Tricks to Better Manage Your Network

Wednesday
8:00 AM – 10:00 AM  BRKUCC-3000 Advanced Dial Plan Design for Unified Communications Networks
10:30 AM – 11:30 AM  Conference Event GENKEY-4701 Cisco Technology Keynote
12:30 PM – 2:30 PM  BRKUCC-1903 Migration and Co-Existence Strategy for Unified Communications (UC) or Collaboration Applications on Unified Computing Systems (UCS)

Thursday
8:00 AM – 10:00 AM  BRKRST-2301 Enterprise IPv6 Deployment
2:30 PM – 3:30 PM  Conference Event GENKEY-4702 Closing Keynote: William Shatner
4:00 PM – 5:30 PM  BRKUCC-2061 IPv6 in Enterprise Unified Communications Networks


As you can see, I’m going to be spending a lot of my time with voice and IPv6.  Voice is because that’s what I spend most of my time doing nowadays.  IPv6 because that’s what I expect to be spending most of my time doing fairly soon.  Plus, with all the IPv6 talk on Packet Pushers, I’m going to need to stay on the cutting edge if I want to be able to hold a conversation.

A couple of highlights:

– The sessions on Monday with a “CUG” are for the Cisco Collaboration Users Group.  This is a great program that I’ve been involved in for the past couple of years that allows me to have a say in the direction of the product lines in Cisco’s collaboration space.  It also gives me the opportunity to interact directly with the business units (BUs) and get early access to beta programs.  If you aren’t already a member of the Collaboration Users Group, head over to their site and sign up now.  You’ll get access to a great group of people focused on collaboration, and if you play your cards right, maybe even an invitation to a kick-ass party on Monday night!

– BRKUCC-1903 is a class about migrating CUCM from any 4.x or newer version to the current version (8.5 as of right now).  It’s taught by Brandon Ta, who is one of the smartest people I’ve ever talked to.  He knows the ins and outs of CUCM like no one else.  His migration strategies and recommended tools have saved me so much time in the past on the migrations I’ve had to do.  He also gives tips on Unity/Unity Connection, IPCC, and Presence, so don’t hesitate to sign up if you’ve got one of these migrations coming up soon.

If you’re headed to Cisco Live, make sure to hop on over to Dane DeValcourt’s Cisco Live Twitter page and check out all the tweeps that are going to be putting in an appearance.  There’s already a great list of networking people that have affirmed they are attending.  Make sure to let him know if you’re going as well so he can put your name on the list.