Opening the Future

I’ve been a huge proponent of open source and open development for years.  I may not be as vocal about it as some of my peers, but I’ve always had my eye on things coming out of the open community.  As networking and virtualization slowly open their processes to these open development styles, I can’t help but think about how important this will be for the future.

Eyes Looking Forward

If Heartbleed taught me anything in the past couple of weeks, it’s that the future of software has to be open.  Finding a vulnerability in a software program is nearly impossible without source access or an open community built around it.  Look at how quickly the OpenSSL developers were able to patch their bug once it was found.  Now, compare that process to the recently announced zero-day bug in Internet Explorer.  While the OpenSSL issue was much bigger in terms of exposure, the IE bug is bigger in terms of user base.

Open development isn’t just about having multiple sets of eyes looking at code.  It’s about modularizing functionality to prevent issues from impacting multiple systems.  Look at what OpenStack is doing with its plugin system.  Networking is a different plugin from block storage.  Virtual machine management isn’t the same as object storage.  The plugin idea was created to allow very smart teams to work on these pieces independently of each other.  The side effect is that a bug in one of these plugins is automatically firewalled away from the rest of the system.
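
That isolation idea can be sketched generically.  The registry below is purely illustrative (the class and function names are mine, not OpenStack’s actual plugin API): each service area is a separate plugin, and a fault in one is caught at the boundary instead of taking the others down with it.

```python
# A minimal sketch of plugin-style isolation (illustrative, not OpenStack's API).
# Each service area is an independent plugin; a bug in one is contained at the
# boundary rather than crashing the whole system.

class NetworkingPlugin:
    def provision(self):
        return "network ready"

class BlockStoragePlugin:
    def provision(self):
        raise RuntimeError("storage backend unreachable")  # simulated bug

def provision_all(plugins):
    """Run each plugin independently, firewalling failures to that plugin."""
    results = {}
    for name, plugin in plugins.items():
        try:
            results[name] = plugin.provision()
        except Exception as exc:
            results[name] = f"FAILED: {exc}"  # contained; others still run
    return results

if __name__ == "__main__":
    out = provision_all({
        "networking": NetworkingPlugin(),
        "block-storage": BlockStoragePlugin(),
    })
    print(out)
```

The point is the `try`/`except` at the plugin boundary: the simulated storage bug never touches the networking plugin.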

Open development means that the best eyes in the world are looking at what you are doing and making sure you’re doing it right.  They may not catch every bug right away, but they are looking.  I would argue that even the most stringent code-checking procedures at a closed development organization would still have a similar error rate to an open project.  Of course, those same procedures and closed processes would mean we would never know there was an issue until after it was fixed.

Code of the People

Open development doesn’t necessarily mean socialism, though.  Look at all the successful commercial projects that were built using OpenSSL.  They charged for the IP built on a project that provides secure communication.  There’s no reason other commercial companies can’t do the same.  Plenty of service providers are charging for services offered on top of OpenStack.  Even Windows uses BSD code in parts of its networking stack.

Open development doesn’t mean you can’t make money.  It just means you can’t hold your methods hostage.  If someone can find a better way to do something with your project, they will.  Developers worry that someone will “steal” their code and rewrite a clone of their project.  While that might be true of a mobile app or simple game, it’s far more likely that an open developer will contribute code back to your project rather than just copying it.  You do take a risk by opening yourself up to the world, but the benefits of that risk far outweigh any issues you might run into by closing your code base.


Tom’s Take

It may seem odd for me to be talking about development models.  But as networking moves toward a discipline that requires more knowledge of programming, it will become increasingly important for a new generation of engineers to be comfortable with programming.  It’s too late for guys like me to jump on the coding bandwagon.  But at the same time, we need to ensure that the future generation doesn’t try to create new networking wonders only to lock the code away somewhere and never let it be improved.  There are enough apps in the app stores of the world that will never be updated past a certain revision because the developer ran out of time or patience with their coding.  Instead, why not teach developers that the code they write should allow for contribution and teamwork to continue?  An open future in networking means not repeating the mistakes of the past.  That alone will make the outcome wonderful.

SDN and Elephants


There is a parable about six blind men that try to describe an elephant based on touch alone.  One feels the tail and thinks the elephant is a rope.  One feels the trunk and thinks it’s a tree branch.  Another feels the leg and thinks it is a pillar.  Eventually, the men realize they have to discuss what they have learned to get the bigger picture of what an elephant truly is.  As you have likely guessed, there is a lot here that applies to SDN as well.

Point of View

Your experience with IT is going to dictate your view on SDN.  If you are a DevOps person, you are going to see all the programmability aspects of SDN as important.  If you are a network administrator, you are going to latch on to the automation and orchestration pieces as a way to alleviate the busy work of your day.  If you are a person that likes to have a big picture of your network, you will gravitate to the controller-based aspects of protocols like OpenFlow and ACI.

However, if you concentrate on the parts of SDN that are most familiar, you will miss out on the bigger picture.  Just as an elephant is more than just a trunk or a tail, SDN is more than the individual pieces.  Programmable interfaces and orchestration hooks are means to an end.  The goal is to take all the pieces and make them into something greater.  That takes vision.

The Best Policy

I like what’s going on both with Cisco’s ACI and the OpenStack Congress project.  They’ve moved past the individual parts of SDN and instead are working on creating a new framework to apply those parts. Who cares how orchestration works in a vacuum?  Now, if that orchestration allows a switch to be auto-provisioned on boot, that’s something.  APIs are a part of a bigger push to program networks.  The APIs themselves aren’t the important part.  It’s the interface that interacts with the API that’s crucial.  Congress and ACI are that interface.

Policy is something we all understand.  Think for a moment about quality of service (QoS).  Most of the time, engineers don’t know LLQ from CBWFQ or PQ.  But if you describe what you want to do, you get it quickly.  If you want a queue that guarantees a certain amount of bandwidth but has a cap, you know which implementation you need to use.  With policies, you don’t need to know even that.  You can create a policy that says to reserve a certain amount of bandwidth but not to go past a hard limit.  You don’t need to know whether that’s LLQ or some other form of queuing.  You also don’t need to worry about implementing it on specific hardware or via a specific vendor.

Policies are agnostic.  So long as the API has the right descriptors you can program a Cisco switch just as easily as a Juniper router.  The policy will do all the work.  You just have to figure out the policy.  To tie it back to the elephant example, you find someone with the vision to recognize the individual pieces of the elephant and make them into something greater than a rope and pillar.  You then realize that the elephant isn’t quite like what you were thinking, but instead has applications above and beyond what you could envision before you saw the whole picture.
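
As a rough illustration of that vendor-agnostic idea, the same declarative policy could be compiled into different vendor syntaxes.  Everything below is invented for the sketch: the policy schema and the rendered config lines are simplified stand-ins, not any real controller’s API or either vendor’s exact syntax.

```python
# Sketch: one declarative policy, multiple vendor renderings.
# The policy names the intent (bandwidth guarantee plus a hard cap);
# the renderer picks the mechanism. All syntax here is simplified.

policy = {"class": "voice", "guarantee_kbps": 512, "cap_kbps": 1024}

def render(policy, vendor):
    """Translate one abstract policy into a vendor-flavored config snippet."""
    if vendor == "cisco-like":
        return [
            f"class-map {policy['class']}",
            f" priority {policy['guarantee_kbps']}",
            f" police {policy['cap_kbps']}",
        ]
    if vendor == "juniper-like":
        return [
            f"scheduler {policy['class']} {{",
            f"  transmit-rate {policy['guarantee_kbps']}k;",
            f"  shaping-rate {policy['cap_kbps']}k;",
            "}",
        ]
    raise ValueError(f"no renderer for {vendor}")

if __name__ == "__main__":
    for vendor in ("cisco-like", "juniper-like"):
        print("\n".join(render(policy, vendor)))
```

The engineer writes the policy once; the per-vendor knowledge lives in the renderers, which is exactly the work a framework like Congress or ACI is meant to absorb.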


Tom’s Take

As far as I’m concerned, vendors that are still carrying on about individual aspects of SDN are just trying to distract from a failure to have vision.  VMware and Cisco know where that vision needs to be.  That’s why policy is the coin of the realm.  Congress is the entry point for the League of Non-Aligned Vendors.  If you want to fight the bigger powers, you need to back the solution to your problems.  If Congress could program Brocade, Arista, and HP switches effortlessly then many small-to-medium enterprise shops would rush to put those devices into production.  That would force the superpowers to take a look at Congress and interface with it.  That’s how you build a better elephant – by making sure all the pieces work together.

Security’s Secret Shame


Heartbleed captured quite a bit of news these past few days.  A hole in the most secure of web services tends to make people a bit anxious.  Racing to release patches and assess the damage consumed people for days.  While I was a bit concerned about the possibilities of exposed private keys on any number of web servers, the overriding thought in my mind was instead about the speed at which we went from “totally secure” to “Swiss cheese security” almost overnight.

Jumping The Gun

As it turns out, the release of the information about Heartbleed was a bit sudden.  The rumor is that the people that discovered the bug were racing to release the information as soon as the OpenSSL patch was ready because they were afraid that the information had already leaked out to the wider exploiting community.  I was a bit surprised when I learned this little bit of information.

Exploit disclosure has gone through many phases in recent years.  In days past, the procedure was to report the exploit to the vendor responsible.  The vendor in question would create a patch for the issue and prepare their support organization to answer questions about the patch.  The vendor would then release the patch with a report on the impact.  Users would read the report and patch the systems.

Today, researchers that aren’t willing to wait for vendors to patch systems instead perform an end-run around the system.  Rather than waiting to let the vendors release the patches on their cycle, they release the information about the exploit as soon as they can.  The nice ones give the vendors a chance to fix things.  The less savory folks want the press of discovering a new hole and project the news far and wide at every opportunity.

Shame Is The Game

Part of the reason to release exploit information quickly is to shame the vendor into quickly releasing patch information.  Researchers taking this route are fed up with the slow quality assurance (QA) cycle inside vendor shops.  Instead, they short circuit the system by putting the issue out in the open and driving the vendor to release a fix immediately.

While this approach does have its place among vendors that move at a glacial patching pace, one must wonder how much good is really being accomplished.  Patching systems isn’t a quick fix.  If you’ve ever been forced to turn out work under a crucial deadline while people were shouting at you, you know the kind of work that gets put out.  Vendor patches must be tested against released code, and there can be no issues that would cause existing functionality to break.  Imagine the backlash if a fix to the SSL module caused the web service to fail on startup.  The fix would be worse than the problem.

Rather than rushing to release news of an exploit, researchers need to understand the greater issues at play.  Releasing an exploit for the sake of saving lives is understandable.  Releasing it for the fame of being first isn’t as critical.  Instead of trying to shame vendors into releasing a patch rapidly to plug some hole, why not work with them instead to identify the issue and push the patch through?  Shaming vendors will only put pressure on them to release questionable code.  It will also alienate the vendors from researchers doing things the right way.


Tom’s Take

Shaming is the rage now.  We shame vendors, users, and even pets.  Problems have taken a back seat to assigning blame.  We try to force people to change or fix things by making a mockery of what they’ve messed up.  It’s time to stop.  Rather than pointing and laughing at those making the mistake, you should pick up a keyboard and help them fix it. Shaming doesn’t do anything other than upset people.  Let’s put it to bed and make things better by working together instead of screaming “Ha ha!” when things go wrong.

End Of The CLI? Or Just The Start?


Windows 8.1 Update 1 launches today. The latest chapter in Microsoft’s newest OS includes a new feature people have been asking for since release: the Start Menu. The biggest single UI change in Windows 8 was the removal of the familiar Start button in favor of a combined dashboard / Start screen. While the new screen is much better for touch devices, the desktop population has been screaming for the return of the Start Menu. Windows 8.1 brought the button back, although it only linked to the Start screen. Update 1 promises to add functionality to the button once more. As I thought about it, I realized there are parallels here that we in the networking world can learn from as well.

Some very smart people out there, like Colin McNamara (@ColinMcNamara) and Matt Oswalt (@Mierdin) have been talking about the end of the command line interface (CLI). With the advent of programmable networks and API-driven configuration the CLI is archaic and unnecessary, or so the argument goes. Yet, there is a very strong contingent of the networking world that is clinging to the comfortable glow of a terminal screen and 80-column text entry.

Command The Line

API-driven interfaces provide flexibility that we can’t hope to match in a human interface. There is no doubt that a large portion of the configuration of future devices will be done via API call or some sort of centralized interface that programs the end device automatically. Yet, as I’ve said before, engineers don’t like losing visibility into a system. Getting rid of the CLI for the sake of streamlining a device is a bad idea.

I’ve worked with many devices that don’t have a CLI. Cisco Catalyst Express switches leap immediately to mind. Other devices, like the Cisco UC500 SMB phone system, have a CLI but use of it is discouraged. In fact, when you configure the UC500 using the CLI, you start getting warnings about not being able to use the GUI tool any longer. Yet there are functions that are only visible through the CLI.

Non-Starter

Will the programmable networking world make the same mistake Microsoft did with Windows 8? Even a token CLI is better than cutting it out entirely. Programmable networking will allow all kinds of neat tricks. For instance, we can present a Cisco-like CLI for one group of users and a Juniper-like CLI for a different group that both accomplish the same results. We don’t need to have these CLIs sitting around in resident memory. We should be able to generate them on the fly or call the appropriate interfaces from a centralized library. Extensibility, even in the archaic interface of last resort.
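
That trick — different CLI skins over one API — can be sketched as a simple dispatch table. The command formats and the `api_set_vlan()` backend below are hypothetical, invented just to show the shape of the idea: each dialect is only a thin parser that extracts the same intent.

```python
# Sketch: two CLI dialects mapped onto the same backend API call.
# The command syntax and api_set_vlan() are invented for illustration.

def api_set_vlan(interface, vlan):
    """Stand-in for the real programmable-network API call."""
    return f"{interface} -> vlan {vlan}"

def parse_cisco_like(line):
    # e.g. "switchport access vlan 10 Gi0/1"
    parts = line.split()
    return {"interface": parts[4], "vlan": int(parts[3])}

def parse_juniper_like(line):
    # e.g. "set interfaces ge-0/0/1 vlan 10"
    parts = line.split()
    return {"interface": parts[2], "vlan": int(parts[4])}

DIALECTS = {"cisco-like": parse_cisco_like, "juniper-like": parse_juniper_like}

def run(dialect, line):
    """Translate a dialect-specific command into the common API call."""
    intent = DIALECTS[dialect](line)
    return api_set_vlan(intent["interface"], intent["vlan"])

if __name__ == "__main__":
    print(run("cisco-like", "switchport access vlan 10 Gi0/1"))
    print(run("juniper-like", "set interfaces ge-0/0/1 vlan 10"))
```

Both commands end up in the same place; the dialects are disposable front ends that can be generated or loaded on demand, which is the point: the comfortable CLI becomes a transition skin over the API rather than the thing we have to kill.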

If all our talk revolves around removing the tool people have been using for decades to program devices, we will make enemies quickly. The talk needs to shift away from the death of the CLI and toward the advantages gained through adding API interfaces to your programming. Even if our interface into calling those APIs looks similar to a comfortable CLI, you’re going to win more converts up front if you give them something they recognize as a transition mechanism.


Tom’s Take

Microsoft bit off more than they could chew when they exiled the Start Menu to the same pile as DOSShell and Microsoft Bob. People have spent almost 20 years humming the Rolling Stones’ “Start Me Up” as they click on that menu. Microsoft drove users to this approach. To pull it out from under them all at once with no transition plan made for unhappy users. Networking advocates need to be just as cognizant of the fact that we’re headed down the same path. We need to provide transition options for the die-hard engineers out there so they can learn how to program devices via non-traditional interfaces. If we try to make them quit cold turkey, you can be sure the Start Menu discussion will pale in comparison.

The Sunset of Windows XP


The end is nigh for Microsoft’s most successful operating system of all time. Windows XP is finally set to reach the end of support next Tuesday. After twelve and a half years, and having its death sentence commuted at least twice, it’s time for the sunset of the “experienced” OS. Article after article has been posted as of late discussing the impact of the end of support for XP. The sky is falling for the faithful. But is it really?

Yes, as of April 8, 2014, Windows XP will no longer be supported from a patching perspective. You won’t be able to call in and get help on a cryptic error message. But will your computer spontaneously combust? Is your system going to refuse to boot entirely and force you at gunpoint to go out and buy a new Windows 8.1 system?

No. That’s silly. XP is going to continue to run just as it has for the last decade. XP will be no less secure on April 9th than it was on April 7th, and it will still function. Rather than writing about how evil Microsoft is for abandoning an operating system after one of the longest support cycles in their history, let’s instead look at why XP is still so popular and how we can fix that.

XP is still a popular OS with manufacturing systems and things like automated teller machines (ATMs). That might be because of the ease with which XP could be installed onto commodity hardware. It could also be due to the difficulty in writing drivers for Linux for a large portion of XP’s life. For better or worse, IT professionals have inherited a huge amount of embedded systems running an OS that got the last major service pack almost six years ago.

For a moment, I’m going to take the ATMs out of the equation. I’ll come back to them in a minute. For the other embedded systems that don’t dole out cash, why is support so necessary? If it’s a manufacturing system that’s been running for the last three or four years, what is another year of support from Microsoft going to get you? Odds are good that any support that manufacturing system needs is going to be entirely unrelated to the operating system. If we treat these systems just like an embedded OS that can’t be changed or modified, we find that we can still develop patches for the applications running on top of the OS. And since XP is one of the most well-documented systems out there, finding folks to write those patches shouldn’t be difficult.

In fact, I’m surprised there hasn’t been more talk of third party vendors writing patches for XP. I saw more than a few start popping up once Windows 2000 started entering the end of its life. It’s all a matter of the money. Banks have already started negotiating with Microsoft to get an extension of support for their ATM networks. It’s funny how a few million dollars will do that. SMBs are likely to be left out in the cold for specialized systems due to the prohibitive cost of an extended maintenance contract, either from Microsoft or another third party. After all, the money to pay those developers needs to come from somewhere.


Tom’s Take

Microsoft is not the bad guy here. They supported XP as long as they could. Technology changes a lot in 12 years. The users aren’t to blame either. The myth of a fast upgrade cycle doesn’t exist for most small businesses and enterprises. Every month that your PC can keep running the accounting software is another month of profits. So whose fault is the end of the world?

Instead of looking at it as the end, we need to start learning how to cope with unsupported software. Rather than tilting at the windmills in Redmond and begging for just another month or two of token support, we should be investigating ways to transition XP systems that can’t be upgraded within 6-8 months to an embedded systems plan. We’ve reached the point where we can’t count on anyone else to fix our XP problems but ourselves. Once we have the known, immutable fact of no more XP support, we can start planning for the inevitable – life after retirement.