
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

SDNicorn and the CLUS Princess

Cisco Live is a conference full of characters. Larger than life people like Scott Morris (@ScottMorrisCCIE) and Terry Slattery. Even I’ve been known to indulge in the antics sometimes. Remember the tattoo? This year, I wanted to do something a little different. And with some help from Amazon I managed to come up with one of the best and worst ideas I’ve ever had.

The SDNicorn

Why a unicorn head? It’s actually an idea I’ve had for Networking Field Day for quite a while. The Wireless Field Day folks already have their own spirit mascot: the polar bear.


I wanted to give the networking folks their own mascot. What better animal than the unicorn? After all, when things just seem to happen for no reason, who says it isn’t because of unicorns? In addition, Amy Lewis (@CommsNinja) of Cisco is a huge unicorn fan. If you’re going to go for something, go all the way, right?

The SDNicorn Rides Again

When I pulled the mask out at the Sunday evening tweetup, it got a huge round of applause. People started lining up for pictures with me. It was a bigger reception than I could have hoped for.


I thought it would get a few laughs and then I would retire the unicorn head until the next day. That’s when it took on a life of its own. I started looking for picture ideas. Rob Novak (@Gallifreyan) got a picture of me drinking an energy drink.


It is unicorn fuel after all. The best part came when a random group of Cisco interns shooting video for an executive event asked me to put on the mask for a quick pickup shoot. That’s how you know you’ve made an impact.

The rest of the event was just as fun for me. I wore it to the CCIE party when Amy Arnold (@AmyEngineer) got a great shot of me enjoying wine.


George Chongris (@th1nkdifferent) got a nice selfie too.


A few people even remarked that I was a little bit scary. I took the mask down to the World of Solutions and ran into Mike Dvorkin (@Dvorkinista), who proceeded to borrow the mask and trot around the WoS asking for hay. I was doubled over with laughter the whole time.


I also got J. Michael Metz (@DrJMetz) to admit that Dynamic FCoE is really powered by unicorns.  Hat and all.


The Customer Appreciation Event had to be the highlight, however. I wore the mask with the CAE hat of course. But people started borrowing it to do other things. Bob McCouch (@BobMcCouch) got a great shot of Kale Blankenship (@vCabbage) in the mask enjoying a beer.


James Bowling (@vSential) was a highlight with his cloud-powered video too.

Tom’s Take

I admit the unicorn was a bit silly. But it was memorable. People have been tweeting about it and writing articles about it already. I plan on bringing the SDNicorn to Networking Field Day 8 this fall as well as VMworld in August. I think this idea has a bit more life left in it yet. At least I’m covering something up instead of showing off again, right?


Software Release Models

If you remember a while back, I wrote all about the ways that software companies name releases and what they really mean.  Then I started thinking about all the ways companies release products without actually letting you get them.  You’ve heard it before: a big release announcement followed by questions about availability that never get answered.  I thought I’d take a moment to decode a few of them for you.

Early Field Trials – This thing is still broken.  We thought we fixed all the bugs but when we let someone outside the company install it they managed to screw it up.  So we gave it to three of our biggest customers and parked an engineer in their office for the next six months to iron out all the bugs.  No, you can’t buy it.  We don’t make enough money from you to justify parking one of our people in your office.  Check back next year.

New Product Hold – Buy our stuff! We released it and you can order it.  But we don’t think it’s quite ready for production.  Plus, we want to make sure you aren’t going to install it wrong and make us look bad.  So we’re going to make you ask for permission to buy something from us.  We’ll give it to you with the understanding that you can only install it on these two platforms on the second Tuesday in July.

Generally Available – We had to release it before our CEO went on stage.  You can order it, but we haven’t actually built any of them yet.  So while it’s available, it will still take another three months for it to get to you.  We hope to have all our supply chain issues sorted out just in time for it to become obsolete.

Expanded Release – We farmed the manufacturing out to some third world country to get this thing into stores.  The quality is going to be pretty dodgy until we can get our own facilities up to snuff.  Good luck on these things lasting more than three months.  In fact, we’re probably going to exclude these from any kind of warranty or support.  It’s cheaper that way.

Continuous Release – Bigger numbers are better, right?  Instead of waiting to release it, we’re just pushing every nightly build down to you.  The release number doesn’t even mean anything any more.  What’s a version?  Oh, and we’re going to break things right and left.  If you’re in the beta channel then abandon all hope now.


Tom’s Take

I bet you all have some funny ones as well.  Leave them in the comments!

Overlay Transport and Delivery


The difference between overlay networks and underlay networks still causes issues with engineers everywhere.  I keep trying to find a visualization that boils everything down to the basics that everyone can understand.  Thanks to the magic of online ordering, I think I’ve finally found it.

Candygram for Mongo

Everyone on the planet has ordered something from Amazon (I hope).  It’s a very easy experience.  You click a few buttons, type in a credit card number, and a few days later a box of awesome shows up on your doorstep.  No fuss, no muss.  Things you want show up with no effort on your part.

Amazon is the world’s most well-known overlay network.  When you place an order, a point-to-point connection is created between you and Amazon.  Your item is tagged for delivery to your location.  It’s addressed properly and finds its way to you almost by magic.  You don’t have to worry about your location.  You can have things shipped to a home, a business, or a hotel lobby halfway across the country.  The magic of an overlay is that the packets are going to get delivered to the right spot no matter what.  You don’t need to worry about the addressing.

That’s not to say there isn’t some issue with the delivery.  With Amazon, you can pay for expedited delivery.  Amazon Prime members can get two-day shipping for a flat fee.  In overlays, your packets can take random paths depending on how the point-to-point connection is built.  You can pay to have a direct path provided the underlay cooperates with your wishes.  But unless a full mesh exists, your packet delivery is going to be at the mercy of the most optimized path.
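To stretch the analogy into code: here is a minimal, purely illustrative Python sketch of how an overlay keeps its own addressing while the underlay only sees tunnel endpoints, the way a VXLAN-style encapsulation works. Every class and field name here is my own invention for the sake of the example, not any vendor's API.

```python
# Illustrative only: overlay packets carry their own addressing and get
# wrapped in an underlay header, so delivery to the right overlay address
# happens no matter what path the underlay chooses.
from dataclasses import dataclass

@dataclass
class OverlayPacket:
    src: str      # overlay address ("the warehouse")
    dst: str      # overlay address ("your doorstep")
    payload: str

@dataclass
class UnderlayFrame:
    transport_src: str  # underlay tunnel endpoint ("the shipping depot")
    transport_dst: str
    inner: OverlayPacket

def encapsulate(pkt: OverlayPacket, local_tep: str, remote_tep: str) -> UnderlayFrame:
    """Wrap the overlay packet; the underlay only ever sees the tunnel endpoints."""
    return UnderlayFrame(transport_src=local_tep, transport_dst=remote_tep, inner=pkt)

def decapsulate(frame: UnderlayFrame) -> OverlayPacket:
    """Strip the underlay header; the overlay addressing arrives untouched."""
    return frame.inner

order = OverlayPacket(src="warehouse", dst="your-doorstep", payload="box of awesome")
frame = encapsulate(order, local_tep="10.0.0.1", remote_tep="10.0.0.2")
assert decapsulate(frame) == order  # the overlay never worried about the path
```

The point of the sketch is the separation: the overlay addresses never change even though the underlay endpoints could be swapped out entirely.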

Mongo Only Pawn In Game Of Life

Amazon only works because of the network of transports that live below it.  When you place an order, your package could be delivered any number of ways.  UPS, FedEx, DHL, and even the US Postal Service can be the final carrier for your package.  It’s all a matter of who can get your package there the fastest and the cheapest.  In many ways, the transport network is the underlay of physical shipping.

Routes are optimized for best forwarding.  So are UPS trucks.  Network conditions matter a lot to both packets and packages.  FedEx trucks stuck in traffic jams at rush hour don’t do much good.  Packets that traverse slow data center interconnects during heavy traffic volumes risk slow packet delivery.  And if the road conditions or cables are substandard?  The whole thing can fall apart in an instant.

Underlays are the foundation that higher order services are built on.  Amazon doesn’t care about roads.  But if their shipping times increase drastically due to deteriorating roadways, you can bet they’re going to get to the bottom of it.  Likewise, overlay networks don’t directly interact with the underlay, but if packet delivery is impacted people are going to take a long hard look at what’s going on down below.

Tom’s Take

I love Amazon.  It beats shopping in big box stores and overpaying for things I use frequently.  But I realize that the infrastructure in place to support the behemoth that is Amazon is impressive.  Amazon only works because the transport system in place is optimized to the fullest.  UPS has a computer system that eliminates left turns from driver routes.  This saves fuel even if it means the routes are a bit longer.

Network overlays work the same way.  They have to rely on an optimized underlay or the whole system crashes in on itself.  Instead of worrying about the complexity of introducing an overlay on top of things, we need to work on optimizing the underlay to perform as quickly as possible.  When the underlay is optimized, the whole thing works better.

Who Wants A Free Puppy?


Years ago, my wife was out on a shopping trip. She called me excitedly to tell me about a blonde shih-tzu puppy she found and just had to have. As she talked, I thought about all the things that this puppy would need to thrive. Regular walks, food, and love are paramount on the list. I told her to use her best judgement rather than flat out saying “no”. Which is also how I came to be a dog owner. Today, I’ve learned there is a lot more to puppies (and dogs) than walks and feeding. There is puppy-proofing your house. And cleaning up after accidents. And teaching the kids that puppies should be treated gently.

An article from Martin Glassborow last week made me start thinking about our puppy again. Scott McNealy is famous for having told the community that “open source is free like a puppy” back in 2005. While this was a dig at the community in regards to the investment that open source takes, I think Scott was right on the mark. I also think Martin’s recent article illustrates some of the issues that management and stakeholders don’t see with community projects.

Open software today takes care and feeding. Only instead of a single OS on a server in the back of the data center, it’s all about new networking paradigms (OpenFlow) or cloud platform plays (OpenStack). This means there are many more moving parts. Engineers and programmers get it. But go to the stakeholders and try to explain what that means. The decision makers love the price of open software. They are indifferent to the benefits to the community. However, the cost of open projects is usually much higher than the price. People have to invest to see benefits.

TNSTAAFL

At the recent SolidFire Summit, two cloud providers were talking about their software. One was hooked into the OpenStack community. He talked about having an entire team dedicated to pulling nightly builds and validating them. They hacked their own improvements and pushed them back upstream for the good of the community. He seemed to love what he was talking about. The provider next to him was just a little bit larger. When asked what his platform was he answered “CloudStack”. When I asked why, he didn’t hesitate. “They have support options. I can have them fix all my issues.”

Open projects appeal to the hobbyist in all of us. It’s exciting to build something from the ground up. It’s a labor of love in many cases. Labors of love don’t work well for some enterprises though. And that’s the part that most decision makers need to know. Support for this awesome new thing may not always be immediate or complete. To bring this back to the puppy metaphor, you have to have patience as your puppy grows up and learns not to chew on slippers.

The reward for all this attention? A loving pet in the case of the puppy. In the case of open software, you have a workable framework all your own that is customized to your needs and very much a part of your DNA. Supported by your staff and hopefully loved as much or more than any other solution. Just like dog owners that look forward to walking the dog or playing catch at the dog park, your IT organization should look forward to the new and exciting challenges that can be solved with the investment of time.


Tom’s Take

Nothing is free. You either pay for it with money or with time. Free puppies require the latter, just as free software projects do. If the stakeholders in the company look at it as an investment of time and energy then you have the right frame of mind from the outset. If everything isn’t clear up front, you will find yourself needing to defend all the time you’ve spent on your no-cost project. Hopefully your stakeholders are dog people so they understand that the payoff isn’t in the price, but the experience.

Opening the Future

I’ve been a huge proponent of open source and open development for years.  I may not be as vocal about it as some of my peers, but I’ve always had my eye on things coming out of the open community.  As networking and virtualization slowly open their processes to these open development styles, I can’t help but think about how important this will be for the future.

Eyes Looking Forward

If Heartbleed taught me anything in the past couple of weeks, it’s that the future of software has to be open.  Finding a vulnerability in a software program that doesn’t have source access or an open community built around it is nearly impossible.  Look at how quickly the OpenSSL developers were able to patch their bug once it was found.  Now, compare that process to the recently-announced zero day bug in Internet Explorer.  While the OpenSSL issue was much bigger in terms of exposure, the IE bug is bigger in terms of user base.

Open development isn’t just about having multiple sets of eyes looking at code.  It’s about modularizing functionality to prevent issues from impacting multiple systems.  Look at what OpenStack is doing with their plugin system.  Networking is a different plug in from block storage.  Virtual machine management isn’t the same as object storage.  The plugin idea was created to allow very smart teams to work on these pieces independently of each other.  The side effect is that a bug in one of these plugins is automatically firewalled away from the rest of the system.

Open development means that the best eyes in the world are looking at what you are doing and making sure you’re doing it right.  They may not catch every bug right away but they are looking.  I would argue that even the most stringent code checking procedures at a closed development organization would still have the same error rate as an open project.  Of course, those same procedures and closed processes would mean we would never know if there was an issue until after it was fixed.

Code of the People

Open development doesn’t necessarily mean socialism, though.  Look at all the successful commercial projects that were built using OpenSSL.  They charged for the IP built on a project that provide secure communication.  There’s no reason other commercial companies can’t do the same.  Plenty of service providers are charging for services offered on top of OpenStack.  Even Windows uses BSD code in parts of its networking stack.

Open development doesn’t mean you can’t make money.  It just means you can’t hold your methods hostage.  If someone can find a better way to do something with your project, they will.  Developers are worried that someone will “steal” code and rewrite a clone of your project.  While that might be true of a mobile app or simple game, it’s far more likely that an open developer will contribute code back to your project rather than just copying it.  You do take risk by opening yourself up to the world, but the benefits of that risk far outweigh any issues you might run into by closing your code base.


Tom’s Take

It may seem odd for me to be talking about development models.  But as networking moves toward a future that requires more knowledge of programming, it will become increasingly important for a new generation of engineers to be comfortable with code.  It’s too late for guys like me to jump on the coding bandwagon.  But at the same time, we need to ensure that the future generation doesn’t try to create new networking wonders only to lock the code away somewhere and never let it be improved.  There are enough apps in the app stores of the world that will never be updated past a certain revision because the developer ran out of time or patience with their coding.  Instead, why not teach developers that the code they write should allow contribution and teamwork to continue?  An open future in networking means not repeating the mistakes of the past.  That alone will make the outcome wonderful.

SDN and Elephants


There is a parable about six blind men that try to describe an elephant based on touch alone.  One feels the tail and thinks the elephant is a rope.  One feels the trunk and thinks it’s a tree branch.  Another feels the leg and thinks it is a pillar.  Eventually, the men realize they have to discuss what they have learned to get the bigger picture of what an elephant truly is.  As you have likely guessed, there is a lot here that applies to SDN as well.

Point of View

Your experience with IT is going to dictate your view on SDN.  If you are a DevOps person, you are going to see all the programmability aspects of SDN as important.  If you are a network administrator, you are going to latch on to the automation and orchestration pieces as a way to alleviate the busy work of your day.  If you are a person that likes to have a big picture of your network, you will gravitate to the controller-based aspects of protocols like OpenFlow and ACI.

However, if you concentrate on the parts of SDN that are most familiar, you will miss out on the bigger picture.  Just as an elephant is more than just a trunk or a tail, SDN is more than the individual pieces.  Programmable interfaces and orchestration hooks are means to an end.  The goal is to take all the pieces and make them into something greater.  That takes vision.

The Best Policy

I like what’s going on both with Cisco’s ACI and the OpenStack Congress project.  They’ve moved past the individual parts of SDN and instead are working on creating a new framework to apply those parts. Who cares how orchestration works in a vacuum?  Now, if that orchestration allows a switch to be auto-provisioned on boot, that’s something.  APIs are a part of a bigger push to program networks.  The APIs themselves aren’t the important part.  It’s the interface that interacts with the API that’s crucial.  Congress and ACI are that interface.

Policy is something we all understand.  Think for a moment about quality of service (QoS). Most of the time engineers don’t know LLQ from CBWFQ or PQ.  But if you describe what you want to do, you get it quickly.  If you want a queue that guarantees a certain amount of bandwidth but has a cap you know which implementation you need to use.  With policies, you don’t need to know even that.  You can create the policy that says to reserve a certain amount of bandwidth but not to go past a hard limit.  But you don’t need to know if that’s LLQ or some other form of queuing.  You also don’t need to worry about implementing it on specific hardware or via a specific vendor.

Policies are agnostic.  So long as the API has the right descriptors you can program a Cisco switch just as easily as a Juniper router.  The policy will do all the work.  You just have to figure out the policy.  To tie it back to the elephant example, you find someone with the vision to recognize the individual pieces of the elephant and make them into something greater than a rope and pillar.  You then realize that the elephant isn’t quite like what you were thinking, but instead has applications above and beyond what you could envision before you saw the whole picture.
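Here’s a rough Python illustration of that agnosticism: one policy object, two hypothetical renderers.  The config syntax below is only suggestive of each vendor’s style (and the field names are mine) — the point is that the operator writes the policy once and never picks a queuing implementation.

```python
# Illustrative only: a vendor-agnostic bandwidth policy rendered into two
# different device-config dialects. Real policy engines (Congress, ACI) are
# far richer; this just shows the intent/implementation split.
from dataclasses import dataclass

@dataclass
class BandwidthPolicy:
    name: str
    guaranteed_kbps: int  # reserve at least this much bandwidth
    cap_kbps: int         # but never exceed this hard limit

def render_cisco_like(p: BandwidthPolicy) -> str:
    # here the policy happens to map onto an LLQ-style priority queue + policer
    return (f"policy-map {p.name}\n"
            f" class {p.name}-class\n"
            f"  priority {p.guaranteed_kbps}\n"
            f"  police {p.cap_kbps * 1000}")

def render_juniper_like(p: BandwidthPolicy) -> str:
    # same intent, completely different syntax; the operator chose neither
    return (f"class-of-service {{ scheduler {p.name} {{\n"
            f"  transmit-rate {p.guaranteed_kbps}k;\n"
            f"  shaping-rate {p.cap_kbps}k; }} }}")

policy = BandwidthPolicy("voice", guaranteed_kbps=512, cap_kbps=1024)
print(render_cisco_like(policy))
print(render_juniper_like(policy))
```

The operator only ever touches `BandwidthPolicy`; which renderer fires is the controller’s problem, which is exactly the point of the policy layer.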


Tom’s Take

As far as I’m concerned, vendors that are still carrying on about individual aspects of SDN are just trying to distract from a failure to have vision.  VMware and Cisco know where that vision needs to be.  That’s why policy is the coin of the realm.  Congress is the entry point for the League of Non-Aligned Vendors.  If you want to fight the bigger powers, you need to back the solution to your problems.  If Congress could program Brocade, Arista, and HP switches effortlessly then many small-to-medium enterprise shops would rush to put those devices into production.  That would force the superpowers to take a look at Congress and interface with it.  That’s how you build a better elephant – by making sure all the pieces work together.

Security’s Secret Shame


Heartbleed captured quite a bit of news these past few days.  A hole in the most secure of web services tends to make people a bit anxious.  Racing to release patches and assess the damage consumed people for days.  While I was a bit concerned about the possibilities of exposed private keys on any number of web servers, the overriding thought in my mind was instead about the speed at which we went from “totally secure” to “Swiss cheese security” almost overnight.

Jumping The Gun

As it turns out, the release of the information about Heartbleed was a bit sudden.  The rumor is that the people that discovered the bug were racing to release the information as soon as the OpenSSL patch was ready because they were afraid that the information had already leaked out to the wider exploiting community.  I was a bit surprised when I learned this little bit of information.

Exploit disclosure has gone through many phases in recent years.  In days past, the procedure was to report the exploit to the vendor responsible.  The vendor in question would create a patch for the issue and prepare their support organization to answer questions about the patch.  The vendor would then release the patch with a report on the impact.  Users would read the report and patch the systems.

Today, researchers that aren’t willing to wait for vendors to patch systems instead perform an end-run around the system.  Rather than waiting to let the vendors release the patches on their cycle, they release the information about the exploit as soon as they can.  The nice ones give the vendors a chance to fix things.  The less savory folks want the press of discovering a new hole and broadcast the news far and wide at every opportunity.

Shame Is The Game

Part of the reason to release exploit information quickly is to shame the vendor into quickly releasing patch information.  Researchers taking this route are fed up with the slow quality assurance (QA) cycle inside vendor shops.  Instead, they short circuit the system by putting the issue out in the open and driving the vendor to release a fix immediately.

While this approach does have its place among vendors that move at a glacial patching pace, one must wonder how much good is really being accomplished.  Patching systems isn’t a quick fix.  If you’ve ever been forced to turn out work under a crucial deadline while people were shouting at you, you know the kind of work that gets put out.  Vendor patches must be tested against released code and there can be no issues that would cause existing functionality to break.  Imagine the backlash if a fix to the SSL module caused the web service to fail on startup.  The fix would be worse than the problem.

Rather than rushing to release news of an exploit, researchers need to understand the greater issues at play.  Releasing an exploit for the sake of saving lives is understandable.  Releasing it for the fame of being first isn’t as critical.  Instead of trying to shame vendors into releasing a patch rapidly to plug some hole, why not work with them instead to identify the issue and push the patch through?  Shaming vendors will only put pressure on them to release questionable code.  It will also alienate the vendors from researchers doing things the right way.


Tom’s Take

Shaming is the rage now.  We shame vendors, users, and even pets.  Problems have taken a back seat to assigning blame.  We try to force people to change or fix things by making a mockery of what they’ve messed up.  It’s time to stop.  Rather than pointing and laughing at those making the mistake, you should pick up a keyboard and help them fix it. Shaming doesn’t do anything other than upset people.  Let’s put it to bed and make things better by working together instead of screaming “Ha ha!” when things go wrong.

End Of The CLI? Or Just The Start?


Windows 8.1 Update 1 launches today. The latest chapter in Microsoft’s newest OS includes a feature people have been asking for since release: the Start Menu. The biggest single UI change in Windows 8 was the removal of the familiar Start button in favor of a combined dashboard / Start screen. While the new screen is much better for touch devices, the desktop population has been screaming for the return of the Start Menu. Windows 8.1 brought the button back, although it only linked to the Start screen. Update 1 promises to add functionality to the button once more. As I thought about it, I realized there are parallels here that we in the networking world can learn as well.

Some very smart people out there, like Colin McNamara (@ColinMcNamara) and Matt Oswalt (@Mierdin) have been talking about the end of the command line interface (CLI). With the advent of programmable networks and API-driven configuration the CLI is archaic and unnecessary, or so the argument goes. Yet, there is a very strong contingent of the networking world that is clinging to the comfortable glow of a terminal screen and 80-column text entry.

Command The Line

API-driven interfaces provide flexibility that we can’t hope to match in a human interface. There is no doubt that a large portion of the configuration of future devices will be done via API call or some sort of centralized interface that programs the end device automatically. Yet, as I’ve said before, engineers don’t like losing visibility into a system. Getting rid of the CLI for the sake of streamlining a device is a bad idea.

I’ve worked with many devices that don’t have a CLI. Cisco Catalyst Express switches leap immediately to mind. Other devices, like the Cisco UC500 SMB phone system, have a CLI but use of it is discouraged. In fact, when you configure the UC500 using the CLI, you start getting warnings about not being able to use the GUI tool any longer. Yet there are functions that are only visible through the CLI.

Non-Starter

Will the programmable networking world make the same mistake Microsoft did with Windows 8? Even a token CLI is better than cutting it out entirely. Programmable networking will allow all kinds of neat tricks. For instance, we can present a Cisco-like CLI for one group of users and a Juniper-like CLI for a different group that both accomplish the same results. We don’t need to have these CLIs sitting around resident memory. We should be able to generate them on the fly or call the appropriate interfaces from a centralized library. Extensibility, even in the archaic interface of last resort.
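A quick hypothetical Python sketch of what those on-the-fly CLI skins could look like: two familiar-looking dialects, one underlying API call.  The command syntax here is only loosely suggestive of each vendor’s style, and every function name is made up for the example.

```python
# Illustrative only: two CLI "skins" that parse different dialects but
# translate to the same underlying programmatic call.
def api_set_interface_description(device: dict, ifname: str, text: str) -> None:
    """The single real API everything else is a facade over."""
    device.setdefault(ifname, {})["description"] = text

def cisco_style(device: dict, line: str) -> None:
    # e.g. "interface Gi0/1 description uplink"
    _, ifname, _, *words = line.split()
    api_set_interface_description(device, ifname, " ".join(words))

def juniper_style(device: dict, line: str) -> None:
    # e.g. "set interfaces ge-0/0/1 description uplink"
    _, _, ifname, _, *words = line.split()
    api_set_interface_description(device, ifname, " ".join(words))

device: dict = {}
cisco_style(device, "interface Gi0/1 description uplink")
juniper_style(device, "set interfaces ge-0/0/1 description uplink")
assert device["Gi0/1"]["description"] == "uplink"  # both skins land in one place
```

Neither parser needs to live on the box permanently; a controller could generate whichever skin a given engineer is comfortable with and throw it away afterward.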

If all our talk revolves around removing the tool people have been using for decades to program devices, we will make enemies quickly. The talk needs to shift from the death of the CLI and more toward the advantages gained through adding API interfaces to your programming. Even if our interface into calling those APIs looks similar to a comfortable CLI, you’re going to win more converts up front if you give them something they recognize as a transition mechanism.


Tom’s Take

Microsoft bit off more than they could chew when they exiled the Start Menu to the same pile as DOSShell and Microsoft Bob. People have spent almost 20 years humming the Rolling Stones’ “Start Me Up” as they click on that menu. Microsoft drove users to this approach. To pull it out from under them all at once with no transition plan made for unhappy users. Networking advocates need to be just as cognizant of the fact that we’re headed down the same path. We need to provide transition options for the die-hard engineers out there so they can learn how to program devices via non-traditional interfaces. If we try to make them quit cold turkey you can be sure the Start Menu discussion will pale in comparison.

The Sunset of Windows XP


The end is nigh for Microsoft’s most successful operating system of all time. Windows XP is finally set to reach the end of support next Tuesday. After twelve and a half years, and having its death sentence commuted at least twice, it’s time for the sunset of the “experienced” OS. Article after article has been posted as of late discussing the impact of the end of support for XP. The sky is falling for the faithful. But is it really?

Yes, as of April 8, 2014, Windows XP will no longer be supported from a patching perspective. You won’t be able to call in and get help on a cryptic error message. But will your computer spontaneously combust? Is your system going to refuse to boot entirely and force you at gunpoint to go out and buy a new Windows 8.1 system?

No. That’s silly. XP is going to continue to run just as it has for the last decade. XP will be exactly as secure on April 9th as it was on April 7th, and it will still function. Rather than writing about how evil Microsoft is for abandoning an operating system after one of the longest support cycles in their history, let’s instead look at why XP is still so popular and how we can fix that.

XP is still a popular OS with manufacturing systems and things like automated teller machines (ATMs). That might be because of the ease with which XP could be installed onto commodity hardware. It could also be due to the difficulty in writing drivers for Linux for a large portion of XP’s life. For better or worse, IT professionals have inherited a huge amount of embedded systems running an OS that got the last major service pack almost six years ago.

For a moment, I’m going to take the ATMs out of the equation. I’ll come back to them in a minute. For the other embedded systems that don’t dole out cash, why is support so necessary? If it’s a manufacturing system that’s been running for the last three or four years what is another year of support from Microsoft going to get you? Odds are good that any support that manufacturing system needs is going to be entirely unrelated to the operating system. If we treat these systems just like an embedded OS that can’t be changed or modified, we find that we can still develop patches for the applications running on top of the OS. And since XP is one of the most well documented systems out there, finding folks to write those patches shouldn’t be difficult.

In fact, I’m surprised there hasn’t been more talk of third party vendors writing patches for XP. I saw more than a few start popping up once Windows 2000 started entering the end of its life. It’s all a matter of the money. Banks have already started negotiating with Microsoft to get an extension of support for their ATM networks. It’s funny how a few million dollars will do that. SMBs are likely to be left out in the cold for specialized systems due to the prohibitive cost of an extended maintenance contract, either from Microsoft or another third party. After all, the money to pay those developers needs to come from somewhere.


Tom’s Take

Microsoft is not the bad guy here. They supported XP as long as they could. Technology changes a lot in 12 years. The users aren’t to blame either. The myth of a fast upgrade cycle doesn’t exist for most small businesses and enterprises. Every month that your PC can keep running the accounting software is another month of profits. So whose fault is the end of the world?

Instead of looking at it as the end, we need to start learning how to cope with unsupported software. Rather than tilting at windmills in Redmond and begging for just another month or two of token support, we should be investigating ways to transition XP systems that can’t be upgraded within 6-8 months to an embedded systems plan. We’ve reached the point where we can’t count on anyone else to fix our XP problems but ourselves. Once we have the known, immutable fact of no more XP support, we can start planning for the inevitable – life after retirement.

The Alignment of Net Neutrality

Embed from Getty Images

Net neutrality has been getting a lot of press as of late, especially as AT&T and Netflix have been sparring back and forth in the media.  The FCC has already said they are going to take a look at net neutrality to make sure everyone is on a level playing field.  ISPs have already made their position clear.  Where is all of this posturing going to leave the users?

Chaotic Neutral

Broadband service usage has skyrocketed in the past few years.  Ideas that would never have been possible even 5 years ago are now commonplace.  Netflix and Hulu have made it possible to watch television without cable.  Internet voice over IP (VoIP) allows a house to have a phone without a phone line.  Amazon has replaced weekly trips to the local department store for all but the most crucial staple items.  All of this is made possible by high speed network connectivity.

But broadband doesn’t just happen.  ISPs must build out their networks to support the growing hunger for faster Internet connectivity.  Web surfing and email aren’t the only game in town.  Now, we have streaming video, online multiplayer, and persistently connected devices all over the home.  The Internet of Things is going to consume a huge amount of bandwidth in an average home as more smart devices are brought online.  ISPs are trying to meet the needs of their subscribers.  But are they going far enough?

ISPs want to build networks their customers will use, and more importantly pay to use.  They want to ensure that complaints are kept to a minimum while providing the services that customers demand.  Those ISP networks cost a hefty sum.  Given the choice between paying to upgrade a network and trying to squeeze another month or two out of existing equipment, you can guarantee the ISPs are going to take the cheaper route.  Coincidentally, that’s one of the reasons why the largest backers of 802.1aq Shortest Path Bridging were ISP-oriented.  Unlike TRILL, SPB doesn’t require new equipment to forward frames.  ISPs can use existing equipment to deliver SPB with no out-of-pocket expenditure on hardware.  That little bit of trivia should give you an idea why ISPs are trying to do away with net neutrality.

True Neutral

ISPs want to keep using their existing equipment as long as possible.  Every dollar they make from this cycle’s capital expenditure means a dollar of profit in their pocket before they have to replace a switch.  If there was a way to charge even more money for existing services, you can better believe they would do it.  Which is why this infographic hits home for most:

net-neutrality

Charging for service tiers would suit ISPs just fine.  After all, as the argument goes, you are using more than the average user.  Shouldn’t you shoulder the financial burden of increased network utilization?  That’s fine for corner cases like developers or large consumers of downstream bandwidth.  But with Netflix usage increasing across the board, why should the ISP charge you more on top of a Netflix subscription?  Shouldn’t their network anticipate the growing popularity of streaming video?

The other piece of the tiered offering above that should give pause is the common carrier rules for service providers.  Common carriers are absolved of liability for the things they transport because they have to agree to transport everything offered to them.  What do you think would happen if those carriers suddenly decided they wanted to discriminate about what they send?  If that discrimination revokes their common carrier status, what’s to stop them from acting like a private carrier and refusing to transport certain applications or content?  Maybe forcing a video service to negotiate a separate peering agreement with every ISP they want to use?  Who would do that?

Neutral Good

Net Neutrality has to exist to ensure that we are free to use the services we want to consume.  Sure, this means that things like Quality of Service (QoS) can’t be applied to packets, so they are all treated equally.  The inverse is to have guaranteed delivery for an additional fee.  And every service you add on top would incur more fees.  New multiplayer game launching next week?  The ISP will charge you an extra $5 per month to ensure you have a low ping time to beat the other guy.  If you don’t buy the package, your multiplayer traffic gets dumped in with Netflix and the rest of the bulk traffic.
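To make that tiering scenario concrete, here’s a hypothetical sketch of what it could look like at an ISP edge router using standard Cisco-style MQC QoS configuration.  All class and policy names here are invented for illustration, and this doesn’t reflect any real ISP’s configuration:

```
! Hypothetical subscriber tiering policy (all names invented for illustration).
! Subscribers who pay for the low-latency tier get priority queuing.
! Everyone else's gaming traffic lands in the bulk queue with streaming video.
class-map match-any PAID-LOW-LATENCY
 match dscp ef
class-map match-any BULK-TRAFFIC
 match dscp af11
policy-map SUBSCRIBER-TIER
 class PAID-LOW-LATENCY
  priority percent 20
 class BULK-TRAFFIC
  bandwidth remaining percent 10
 class class-default
  fair-queue
```

The technology to do this has existed for years.  Net neutrality is precisely the argument that this kind of per-subscriber preferential treatment shouldn’t be sold as an add-on.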

This is part of the reason why Google Fiber is such a threat to existing ISPs.  When the only options for local loop delivery are the cable company and the phone company, it’s difficult to find an option that isn’t tiered in the absence of neutrality.  With viable third party fiber buildouts like Google’s starting to spring up, users gain a bargaining chip to demand faster speeds and backbone upgrades to support heavy usage.  If you don’t believe that, look at what AT&T did immediately after Google announced Google Fiber in Austin, TX.


Tom’s Take

ISPs shouldn’t be able to play favorites with their customers.  End users are paying for a connection.  End users are also paying services to use their offerings.  Why should users have to pay for a service twice just because the ISP wants to charge more in a tiered setup?  That smells of a protection racket in many ways.  I can imagine the ISP techs sitting there in a slick suit saying, “That’s a nice connection you got there.  It would be a shame if something were to happen to it.”  Instead, it’s up to the users to demand that ISPs offer free and unrestricted access to all content.  In some cases, that will mean backing alternatives and “voting with your dollar” to make the message heard loud and clear.  I won’t sign up for services that have data usage caps or metered speed limits past a certain ceiling.  I would drop any ISP that wants me to pay extra just because I decide to start using a video streaming service or a smart thermostat.  It’s time for ISPs to understand that hardware should be an investment in future customer happiness and not a tool that’s used to squeeze another dime out of their user base.