Riding the SD-WAN Wave

Software Defined Networking has changed the way that organizations think about their network infrastructure.  Companies are looking at increasing automation of mundane tasks, orchestration of policy, and even using white box switches running new, unbundled operating systems.  A new class of technologies coming to market hopes to reduce complexity and cost for the Achilles’ heel of many enterprises: the Wide Area Network (WAN).

Do You WANt To Build A Snowman?

The WAN has always been a sore spot for enterprise networks.  It’s necessary to connect your organization to the world.  If you have remote sites or branch locations, it is critical for daily operations.  If you have an e-commerce footprint your WAN connection needs to be able to handle the generated traffic.  But good WAN connectivity costs money.  Lots of money.

WAN protocols are constantly refined to deliver the fastest possible transmission and the highest possible uptime.  Frame Relay, Asynchronous Transfer Mode (ATM), and Multi-Protocol Label Switching (MPLS) are a succession of technologies that have shaped enterprise WAN connectivity for over a decade.  They all have their strengths and weaknesses.  But it is difficult to build an enterprise WAN without one.

Some customers can’t get MPLS connectivity.  Or even Frame Relay, for that matter.  Their locations are too remote, or the cost of having the connection installed is far above the return on investment.  These customers are often forced to resort to consumer-class connections, like cable modems, Digital Subscriber Line (DSL), or even 4G/LTE modem uplinks.  While cheaper and easier to install, these solutions are often not as robust as their business-grade counterparts.  And when it comes to support on a down circuit…

Redefining the WAN

How does Software Defined WAN (SD-WAN) help?  SD-WAN technologies from companies like Silver Peak, CloudGenix, and Viptela function like overlay networks for the WAN.  They take the various uplinks that you have, such as MPLS, cable, and 4G/LTE networks, and arrange them so that you can intelligently program how traffic behaves on each link.  If you want only critical business traffic on the MPLS circuit during business hours, you can do that.  If you want to ensure the 4G/LTE uplink is only used in the event of an emergency outage, you can do that too.  You can even program costs and metrics into the system so it can decide when a given link is the better economic choice for the time of day or the amount of transferred data.
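To make that concrete, here’s a rough Python sketch of the kind of decision logic an SD-WAN policy engine might evaluate.  The link names, costs, and time windows below are all hypothetical, not any particular vendor’s implementation:

from datetime import datetime

# Hypothetical per-link attributes a controller might track.
LINKS = {
    "mpls":  {"cost_per_gb": 8.00, "business_grade": True},
    "cable": {"cost_per_gb": 0.50, "business_grade": False},
    "lte":   {"cost_per_gb": 10.00, "business_grade": False},
}

def pick_link(traffic_class, now=None, outage=()):
    """Return the preferred uplink for a flow, given the time and any outages."""
    now = now or datetime.now()
    up = [link for link in LINKS if link not in outage]
    business_hours = now.weekday() < 5 and 8 <= now.hour < 17

    # Keep critical business traffic on the MPLS circuit during business hours.
    if traffic_class == "critical" and business_hours and "mpls" in up:
        return "mpls"
    # Bulk traffic rides the cheapest link that is still up.
    if "cable" in up:
        return "cable"
    if "mpls" in up:
        return "mpls"
    # The 4G/LTE uplink is the link of last resort for emergency outages.
    return "lte" if "lte" in up else None

print(pick_link("critical"))                 # "mpls" during business hours
print(pick_link("bulk", outage=("cable",)))  # "mpls"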

You’re probably saying to yourself, “But I can do all of that today.” And you would be right.  But all of it has to happen manually, or at the very least requires a lot of programming.  If you’ve ever tried to configure OER/PfR on a Cisco router, you know what I’m talking about.  And that’s just one vendor’s equipment.  What if there are multiple devices in play?  How do you configure the edge routers for fifty sites?  What happens when a circuit goes down at 3 a.m.?  Having a simple interface for making decisions, or even the ability to script actions based on inputs, makes the system much more flexible and responsive.

It all comes down to a simple number for all parties involved.  For engineering, the amount of time spent configuring and maintaining complex WAN connectivity will be reduced.  Engineers love not needing to spend time on things.  For the decision makers (and bean counters), it all comes down to money.  SD-WAN technologies reduce costs by better utilizing existing infrastructure.  Eventually, their analysis can allow you to reduce or remove unnecessary connectivity.  That means more money in the pockets of the people that want the money.


Tom’s Take

I’ve referred to WAN applications as the “hello world” for SDN.  That’s because I saw so many people demoing them when SDN was first being talked about.  Cisco did this at Cisco Live 2012 in San Diego.  SD-WAN didn’t really become a concrete thing in my mind until it was the topic of discussion at the Spring ONUG meeting.  Those are the people with the money.  And they are looking at the cost savings and optimization from SD-WAN technologies.  You’d better believe that the first wave of SD-WAN you’ve seen in the last couple of months is just the precursor to a wider look at connectivity in general.  Better get ready to surf.

Wires Are The Exception


Last week I went to talk to a group of vocational students about networking.  While I was there, I needed to send a couple of emails.  I prefer to write emails from my laptop, so I pulled it out of my bag between talks and did the first thing that came to mind: I asked for the wireless SSID and password.  Afterwards, I started thinking about how far we’ve come with connectivity.

I can still remember working with a wireless card back in 2001 trying to get the drivers to play nice with Windows 2000.  Now, wireless cards are the rule and wired ports are the exception.  My primary laptop needs a dongle to have a wired port.  My new Mac Mini is happily churning along halfway across the room connected to my network as a server over wireless.  It would appear that the user edge quietly became wireless and no tears were shed for the wire.

It’s also funny that a lot of the big security features like 802.1x and port security became less and less of an issue once open ports started disappearing in common areas.  802.1x for wired connections is barely even talked about now.  It’s become more of an authentication mechanism for wireless.  I’ve even heard some vendors of these solutions touting the advantages of using it with wireless and then throwing in the afterthought, “We also made it easy to configure for wired connections too.”

We still need wires, of course.  Access points have to connect to the infrastructure.  Power still can’t be delivered via microwave.  But the shift toward wireless has made ubiquitous cabling unnecessary.  I used to propose a minimum of four cable drops per room to provide connectivity in a school.  I would often argue for six in case a teacher wanted to later add an IP phone and a couple of student workstations.  Now, almost everything is wireless.  The single wire powers a desk phone and an antiquated desktop.  Progressive schools are replacing the phones with soft clients and the desktops with teacher laptops.

The wire is not in any danger of becoming extinct.  But it is going to be relegated to the special purpose category.  Wires will only live behind the scenes in data centers and IDF closets.  They will be the thing that we throw in our bag for emergencies, like an extra console cable or a VGA adapter.

Wireless is the future.  People don’t walk into a coffee shop and ask, “Hey, where’s the Ethernet cable?” Users don’t crowd around wall plates with hubs to split the one network drop into four or eight so they can plug their tablets in.  Companies like Aruba Networks recognized this already when they started posing questions about all-wireless designs.  We even made a video about it:

While I don’t know that the all-wireless design is going to work, I can say with certainty that the only wires that will be running across your desktop soon will be power cables and the occasional USB cord.  Ethernet will be relegated to the same class as electrical wires connected to breaker boxes and water pipes.  Important and unseen.

The Trap of Net Neutrality


The President recently released a video and statement urging the Federal Communications Commission (FCC) to support net neutrality and ensure that there will be no “pay for play” access to websites or punishment for sites that compete against a provider’s interests.  I wholeheartedly support the idea of net neutrality.  However, I do like to stand on my Devil’s Advocate soapbox every once in a while.  Today, I want to show you why a truly neutral Internet may not be in our best interests.

Lawful Neutral

If the FCC mandates that the Internet must remain neutral, it will mean that all traffic must be treated equally.  That’s good, right?  It means that a provider can’t slow my Netflix stream or make their own webmail service load faster than Google or Yahoo.  But it also means that the provider can’t legally prioritize packets either.

Think about that for a moment.  We, as network and voice engineers, have spent many an hour configuring our networks to be as unfair as possible.  Low-latency queues for voice traffic.  Weighted fair queues for video and critical applications.  Scavenger traffic classes and VLANs for file sharers and other undesirable bulk noise.  These plans take weeks to draw up and even longer to implement properly.  It helps us make sense out of the chaos in the network.
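For a sense of what that unfairness looks like in code, here’s a minimal Python sketch that marks a voice socket with the Expedited Forwarding (EF) code point on a Linux host.  The destination address is a placeholder, and this is just one way an application might set the marking:

import socket

EF = 46  # DSCP value for Expedited Forwarding (voice)

# The DSCP value lives in the upper six bits of the legacy IP TOS byte,
# hence the two-bit shift.  The OS, or the provider edge, may still
# remark or zero this field downstream.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
sock.sendto(b"rtp payload", ("192.0.2.10", 5004))  # placeholder address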

By mandating a truly neutral net, we are saying that those carefully marked packets can’t escape from the local network with their markings intact.  We can’t prioritize voice packets once they escape the edge routers.  And if we move applications to the public cloud, we can’t ensure priority access.  Legally, the providers will be forced to remark all CoS and DSCP values at the edge and wash their hands of the whole thing.

And what about provider MPLS circuits?  If the legally mandated neutral provider is administering your MPLS circuits (as they often do in small and medium enterprises), can they copy the DSCP values into the MPLS EXP (Traffic Class) field before forwarding the packet?  Where does the law stand on prioritizing private traffic transiting a semi-public link?

Chaotic Neutral

The idea of net neutrality is that no provider should have the right to decide how your traffic should be handled.  But providers will extend that idea to say they can’t deal with any kind of marking.  They won’t legally be able to offer you differentiated service even if you were willing to pay for it.  That’s the double-edged sword of neutrality.

You can be sure that the providers have already found a “solution” to the problem.  Today, quality of service (QoS) only becomes an issue when a link becomes congested.  Packets don’t queue up if there’s bandwidth available to use.  So the provider solution is simple.  If you need differentiated service, you need to buy a bigger pipe.  Overprovision your WAN circuits!  We can’t guarantee delivery unless you have more bandwidth than you need!  Who cares what the packets are marked?  Which, of course, leads to a little gem from everyone’s favorite supervillain:

[Image: Syndrome from The Incredibles]

Of course, the increased profits from these services will line the pockets of the providers instead of going to build out the infrastructure necessary to support these overprovisioned networks.  The only way to force providers to pony up the money to build out networks is to make failure so expensive that the alternative is better.  That requires complex negotiation and penalty-laden, iron-clad service level agreements (SLAs).

The solution to the issue of no prioritized traffic is to provide a list of traffic that should be prioritized.  Critical traffic like VoIP should be allowed to be expedited, as the traffic characteristics and protections we afford it make sense.  Additionally, traffic destined for a public cloud site that functions as internal traffic of a company should be able to be prioritized across the provider network.  Tunneling or other forms of traffic protection may be necessary to ensure this doesn’t interfere with other users.  Exempt traffic should definitely be the exception, not the rule.  And it should never fall on the providers to determine which traffic is exempted from neutrality rules.


Tom’s Take

Net neutrality is key to the future of society.  The Internet can’t function properly if someone else with a vested interest in profits decides how we consume content.  It’s like the filter bubble of Google.  A blind blanket policy doesn’t do us any good, either.  Everyone involved in networking knows there are types of traffic that can be prioritized without having a detrimental effect.  We need to make smart decisions about net neutrality and know when to make exceptions.  But that power needs to be in the hands of the users and customers.  They will make decisions in their best interest.  The providers should have the capability to implement the needs of their customers.  Only then will the Internet be truly neutral.

How Do You Spell That?

I spent a bit of my career on the phone doing support for a national computer vendor. In addition to the difficulties of walking people through opening the case and diagnosing motherboard issues, I found myself needing to overcome language barriers. While I only have a hint of an accent (or so I’ve been told), spelling out acronyms was a challenge. That’s where the phonetic alphabet comes into play.

By now, almost everyone uses the NATO phonetic alphabet. It’s the most recognized in the world. The US joint Army/Navy version varies a bit but does have a lot of similarities. However, when I first started out using the NATO version, quite a few callers didn’t know what Lima was or giggled when I said Tango.

I figured that some people have much more familiarity with first names. This was borne out when I kept using Mary for “M” instead of Mike. People immediately knew it. Same for Victor, Peter, and so on. So I cobbled together my own Name Phonetic Alphabet.

A – Adam
B – Barbara
C – Charlie
D – David
E – Edward
F – Frank
G – George
H – Harold
I – Irwin
J – John
K – Kevin
L – Larry
M – Mary
N – Nancy
O – Oliver
P – Peter
Q – Quincy (or queen)
R – Roger
S – Sam
T – Tom (my favorite)
U – Umbrella
V – Victor
W – William
X – X-Ray
Y – Yellow
Z – Zebra

Finding a name for Y and Z was pretty difficult, but everyone knows Yellow and Zebra. I was tempted to use Zander, but the more popular version of that name from Buffy the Vampire Slayer is spelled Xander. No sense confusing folks. As for X, if you don’t know X from the sound, we need to have a chat.

Was it a duplication of effort? Certainly. But it works universally with everyone I’ve ever talked to, including children. It makes “Roger Adam Irwin David” easy to get across to people without trying to remember Romeo and India.
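The whole thing fits in a small lookup table. Here’s a quick Python version of the list above; the spell helper is just my own name for it:

# The name alphabet from the list above, as a lookup table.
NAMES = {
    "A": "Adam", "B": "Barbara", "C": "Charlie", "D": "David",
    "E": "Edward", "F": "Frank", "G": "George", "H": "Harold",
    "I": "Irwin", "J": "John", "K": "Kevin", "L": "Larry",
    "M": "Mary", "N": "Nancy", "O": "Oliver", "P": "Peter",
    "Q": "Quincy", "R": "Roger", "S": "Sam", "T": "Tom",
    "U": "Umbrella", "V": "Victor", "W": "William", "X": "X-Ray",
    "Y": "Yellow", "Z": "Zebra",
}

def spell(word):
    """Spell a word out loud, one name per letter."""
    return " ".join(NAMES[letter] for letter in word.upper())

print(spell("RAID"))  # Roger Adam Irwin David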

The key to communication with others is to find something that works for you.  If you can easily convey your information to someone else, the shortcuts you take don’t matter.  If first names work best, use them.  If drawing pictures works better, use those.  In the end, getting the point across is the goal.

Twitter, Please Stop Giving Me Things I Don’t Want


Last week, Twitter confirmed that they will start injecting tweets from users you don’t follow into your timeline.  The collective cry from their user base ranged from outrage to a solid “meh”.  It seems that Twitter has stumbled onto the magic formula that Facebook has perfected: create a feature the users don’t care about and force it onto them.  Why?

Twitter Doesn’t Care About Power Users

Twitter has an interesting mix of users.  They reported earlier this year that 44% of their user base has never tweeted.  That’s a lot of accounts that were created for the purpose of reserving a name or following people in read-only mode.  That must concern Twitter.  Because people that don’t tweet can’t be measured for things like advertising.  They won’t push the message of a sponsored tweet.  They won’t add their voice to the din.  But what about those users that tweet regularly?

Power users are those that tweet frequently without a large follower base.  Essentially, everyone that isn’t a celebrity with a million followers or a non-tweeting account.  You know, the real users on Twitter.  The people that make typos in their tweets and actually check to see who follows them.  The ones that don’t have a “social media team” tweeting for them.  Nothing wrong with a team tweeting for a brand, but when they’re tweeting for a person it’s a little disconcerting.

Power users keep getting screwed by Twitter.  The API changes really hurt those that use clients other than the official ones.  Given that Twitter has killed most of its “official” clients in favor of pushing people to use the web, it makes you wonder what their strategy might be.  They are entirely beholden to their investors right now.  That means user signups and ad revenue.  And it means focusing on making the message widespread.  Why worry about placating the relatively small user base that uses your product when you can create a method for reaching millions with a broadcast sponsored hashtag?  Or by injecting tweets from people you don’t follow into your timeline?

The tweet injection thing is like a popup ad.  It serves the purpose of Twitter deciding to show you some tweets from other “users”.  Anyone want to bet those users will quickly start becoming corporate accounts?  Perhaps they’ll pay Twitter to ensure their tweets show up in the timelines of a specific demographic.  It makes total sense when your users are nothing but a stream of revenue.

Making Twitter Usable Again

I mentioned some things the other day that I think Twitter needs to do to make their service usable for power users again.  I wanted to expand on them a bit here:

The Unfollow Bug – Twitter has a problem with keeping followers.  For some reason, your account will randomly unfollow a user with no notification.  You usually don’t figure it out until you want to send them a DM or notice that they’ve unfollowed you and mention it.  It’s an irritating bug that’s been going on for years with no hope of resolution.  Twitter needs to sort this one out quickly.  As a side note, if you run a service that monitors people that have unfollowed you, consider adding a digest of users that I have unfollowed this week.  If the list doesn’t match those that I purposely unfollow, at least you know when you’ve been hit by this bug.

Links in Direct Messages – Twitter disabled the ability to send a link in a direct message a few months ago.  Their argument was that it cut down on spam.  The real reason was Twitter’s attempt to turn DMs into an instant messaging platform.  Twitter experimented with a setting that enabled DMs from users you don’t follow.  They pulled it before it went live due to user feedback.  One of the arguments was that spam accounts could bombard you with URLs that led to phishing attacks and other unsavory things.  Twitter responded by disabling links in DMs even though they removed the feature it was intended to protect.  It’s time for Twitter to give us this feature back.

Token Limits – This “feature” has to go.  Restricting 3rd party clients because they exist destroys the capabilities of your power users. I use a client because it gives me easy access to features I use all the time, like conversation views and muting.  I also don’t like sitting on the garish Twitter website and constantly refreshing to see new tweets.  I’d rather use some other client. Twitter has a love/hate relationship with non-official clients.  Mostly because those clients strip out ads and sponsored tweets.  They don’t let Twitter earn money from them.  Which is why Twitter is stamping them out for “replicating official client features” left and right.  Curiously enough, I’ve never heard about HootSuite being hit with user token limits.  But considering that a lot of Twitter’s favorite celebrities use it (or at least their social media teams do), I’m not shocked they’re on the exempt list.


Tom’s Take

I still find Twitter a very useful tool.  It’s not something that can just be set into automatic and left alone.  It takes curation and attention to make it work for you.  But it also needs help from Twitter’s side.  Instead of focusing on ways to make me see things I don’t care about from people I don’t want to follow, how about making your service work the way I want it to work?  I’m more likely to use (and suggest) a service that works.  I barely check Facebook anymore because I’m constantly “fixing” their Top Posts algorithm.  Don’t turn your service into something I spend most of my time fixing.

The Great Tech Reaving

It seems as though the entire tech world is splitting up.  HP announced it is splitting off the Personal Systems Group into HP, Inc. and leaving the rest of the enterprise business in HP Enterprise.  Symantec is spinning Veritas off into a separate company as it focuses on security and leaves the backup and storage pieces to the new group.  IBM completed the sale of its x86 server business to Lenovo.  There are calls for EMC and Cisco to split as well.  It’s like the entire tech world is breaking up right before the prom.

Acquisition Fever

The Great Tech Reaving is a logical conclusion to the acquisition rush that has been going on throughout the industry for the past few years.  Companies have been amassing smaller companies like trading cards.  Some of the acquisitions have been strategic.  Buying a company that focuses on a line of work similar to the one you are working on makes a lot of sense.  For instance, EMC buying XtremIO to help bolster flash storage.

Other acquisitions look a bit strange.  Cisco buying Flip Video.  Yahoo buying Tumblr.  There’s always talk around these left-field mergers.  Is the CEO looking for synergy?  Is there a hidden play that we’re unaware of?  Sometimes that kind of thinking pays off.  Other times you end up with Zimbra.  More often than not, the company ends up writing down the assets of the acquired company and taking very little from the purchase.  Maybe not as big as the Autonomy write-down, but even getting rid of Flip can make waves.

It makes a person wonder what the point of an acquisition is if it’s just going to wind up being an accounting charge later.  Is it a tax shelter?  A way to use up outstanding cash?  Maybe even a way to buy a particularly good developer and fold them into your organization to keep them out of a competitor’s hands?  The reasons are myriad, but it appears that the fever is dying down.  And that might end up hurting innovation in the long term.

This Is Not An Exit Strategy

Think about the startup out there making a hot new technology.  They had their heart set on getting bought by a bigger company in the market.  Now they’ve just watched that company split off half of its business into a new company.  Cash is hard to find for a new acquisition.  Now the startup has to find a different way to monetize things.  Should they redouble their efforts to market the product?  Get new investors?  Go public?

I’ve said before that pinning your hopes on getting purchased isn’t the best way to run a business.  It’s like betting everything on hitting the winning lottery numbers.  It might happen, but the odds are against it.  Perhaps the end result of a market full of split companies will be a reevaluation of the idea of an exit strategy.  Rather than building a business for the sole purpose of being bought, entrepreneurs will start building businesses to make products and sell them.  It’s a radical idea, but not so radical as to be unbelievable.  Just ask Hewlett and Packard.  Or Jobs and Wozniak.  Or anyone else that had a business plan instead of an exit strategy.


Tom’s Take

Companies can be too big.  IBM has sold off most of what made it IBM.  Symantec and HP are in the process.  The next domino to fall will be EMC.  Then Cisco.  After that, the landscape will look much different.  But in a good way.  It’s like a stock split.  The same amount of knowledge is out there.  It’s just held differently.  That’s good for the industry because it forces the status quo to change.  New alliances, new partnerships, and new synergies can be found by upsetting the apple cart now and then.

API-jinks


Network programmability is a very hot topic.  Developers are looking to a future when REST APIs and Python replace the traditional command line interface (CLI).  The ability to write programs that interface with the network and build on its functionality is spurring people to integrate networking with DevOps.  But what happens if the foundation of the programmable network, the API, isn’t the rock we all hope it will be?

Shiny API People

APIs enable the world we live in today.  Whether you’re writing against POSIX, a JSON-based REST service, or even the Microsoft Windows API, you’re interacting with software to accomplish a goal.  The ability to use these standard interfaces makes software predictable and repeatable.  Think of an API as interchangeable parts for software.  By giving developers a way to extract information or interact the same way every time, we can write applications that just work.
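As a toy example, here’s what pulling interface data from a hypothetical device REST API might look like in Python.  The URL, fields, and token handling are invented for illustration and don’t reflect any real vendor’s API:

import requests  # third-party library: pip install requests

BASE = "https://switch1.example.com/api/v1"  # hypothetical device endpoint

def get_interface(name, token):
    """Fetch one interface as structured data instead of screen-scraping a CLI."""
    resp = requests.get(
        f"{BASE}/interfaces/{name}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"name": "Gi1/0/1", "admin_up": True}

The same call, with the same inputs, returns the same shape of data every time.  That predictability is the interchangeable part.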

APIs are hard work, though.  Writing and documenting those functions takes time and effort.  The API guidelines from Microsoft and Apple can run to hundreds or even thousands of pages, depending on which parts you are looking at.  They can cover exciting features like media services or mundane options like buttons and toolbars.  But each of these APIs has to be maintained, or chaos will rule the day.

APIs are ever-changing things.  New functions are added.  Old functions are deprecated.  Applications that used to work with the old version need to be updated to use new calls and methods.  That’s just the way things are done.  But what happens if the API changes aren’t above board?  What happens when API suddenly stands for “antagonistic programming interface”?
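Even above-board deprecation extracts a tax: developers end up writing defensive shims just to keep up.  A sketch of the pattern in Python, with both endpoints hypothetical:

import requests

def get_stats(base, token):
    """Try the current API first, then fall back to the deprecated call."""
    headers = {"Authorization": f"Bearer {token}"}
    for path in ("/api/v2/stats", "/api/v1/stats"):  # newest first
        resp = requests.get(base + path, headers=headers, timeout=5)
        if resp.status_code == 404:  # endpoint removed or not yet present
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("no supported stats endpoint found")

Every deprecation forces another branch into code like this, and every silent removal breaks it.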

Losing My Religion

The most obvious example of a company pulling a fast one with API changes comes from Twitter.  When they moved from version 1.0 to version 1.1 of their API, they made some procedural changes on the backend that enraged their partners.  They limited the number of user tokens that third party clients could have.  They changed the guidelines for the way that things were to be presented or requested.  And they were the final arbiters of the appeals process for getting more access.  If they thought that an application’s functionality was duplicating their client functionality, they summarily dismissed any requests for more user tokens.

Twitter has taken this to a new level lately.  They’ve introduced new functionality, like pictures in direct messages and cards, that may never be supported in the API.  They are manipulating the market to allow their own first party apps to have the most functionality.  They are locking out developers left and right and driving users to their own products at the expense of developers they previously worked arm-in-arm with.  If Twitter doesn’t outright buy your client and bury it, they just wait until you’ve hit your token limit and let you suffocate from your own popularity.

How does this translate to the world of network programmability?  Well, we are taking it for granted that the networking vendors exposing APIs to developers are doing it for all the right reasons.  Extending network flexibility is a very critical thing.  So is reducing complexity and spurring automation.  But what happens when a vendor starts deprecating functions and forcing developers to go back to the drawing board?

Networking can move at a snail’s pace, then fire right up to Ludicrous Speed.  The OpenFlow release cycle is a great example of software outpacing the rest of technology.  What happens when API development hits the same pace?  Even the most agile development team can’t keep up with a 3–6 month cycle in which old API calls are deprecated left and right.  They would just throw their hands up and stop working on apps until things settled down.

And what if the impetus is more sinister?  What if a vendor decides to say, “We’re changing the API calls around.  If you want some help rewriting your apps to function with the new ones, you can always buy our services.”  Or what if they decide to implement your functionality in their base system?  What happens when a networking app gets Sherlocked?


Tom’s Take

APIs are a good and noble thing.  We need them or else things don’t work correctly.  But those same APIs can cause problems if they aren’t documented correctly or if the vendor starts doing silly things with them.  What we need is a guarantee from vendors that their APIs are going to be around for a while so we can develop apps to help their networking gear work properly.  Microsoft wouldn’t be where it is today without robust support for APIs that has been consistent for years.  Networking needs to follow the same path.  The fewer hijinks with your APIs, the better your community will be.