Why is Lync The Killer SDN Application?

The key to showing the promise of SDN is to find a real-world application to showcase capabilities.  I recently wrote about using SDN to slice education networks.  But this is just one idea.  When it comes to real promise, you have to shelve the approach and trot out a name.  People have to know that SDN will help them fix something on their network or optimize a troublesome program.  And it appears that application is Microsoft Lync.

Missing Lync

Microsoft Lync (née Microsoft Office Communicator) is a software application designed to facilitate communications.  It includes voice calling capability, instant messaging, and collaboration tools.  The voice part is particularly appealing to small businesses.  With a Microsoft Office 365 for Business subscription, you gain access to Lync.  That means introducing a voice soft client to your users.  And if it’s available, people are going to use it.

As a former voice engineer, I can tell you that soft clients are a bit of a pain to configure.  They have their own way of doing things, especially when Quality of Service (QoS) is involved.  In the past, tagging soft client voice packets with Cisco Jabber required setting cluster-wide parameters for all clients.  It was all-or-nothing.  There were also plans to use things like Cisco MediaNet to tag Jabber packets, but that method appears to have fallen by the wayside.  It was much easier to use physical phones, set their QoS values, and leave the soft phones relegated to curiosities.

Lync doesn’t use a physical phone.  It’s all software based.  And as usage has grown, the need to categorize all that traffic for optimal network transmission has become important.  But configuring QoS for Lync is problematic at best.  Microsoft guidelines say to configure the Lync servers with QoS policies.  Some enterprising users have found ways to configure clients with Group Policy settings based on port numbers.  But it’s all still messy.
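
Just how messy?  Here’s a rough sketch of what the Group Policy hack boils down to under the hood, since Windows policy-based QoS settings are ultimately just registry keys.  Treat everything below as an illustration rather than a recipe: the value names follow the policy-based QoS layout as I understand it, and the port ranges are the commonly cited Lync client defaults, so verify both against your own deployment.

```python
# Hypothetical sketch: create Windows policy-based QoS policies with the
# stdlib winreg module.  Value names and port ranges are assumptions based
# on the commonly documented policy-based QoS registry layout.
import winreg

QOS_ROOT = r"SOFTWARE\Policies\Microsoft\Windows\QoS"

def add_qos_policy(name, app, ports, dscp):
    """Tag traffic from `app` on local `ports` with the given DSCP value."""
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, QOS_ROOT + "\\" + name)
    winreg.SetValueEx(key, "Version", 0, winreg.REG_SZ, "1.0")
    winreg.SetValueEx(key, "Application Name", 0, winreg.REG_SZ, app)
    winreg.SetValueEx(key, "Protocol", 0, winreg.REG_SZ, "*")
    winreg.SetValueEx(key, "Local Port", 0, winreg.REG_SZ, ports)
    winreg.SetValueEx(key, "DSCP Value", 0, winreg.REG_SZ, str(dscp))
    winreg.CloseKey(key)

add_qos_policy("Lync Audio", "lync.exe", "50020:50039", 46)  # EF for voice
add_qos_policy("Lync Video", "lync.exe", "50040:50059", 34)  # AF41 for video
```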

A Lync To The Future

That’s where SDN comes into play.  Dynamic QoS policies can be pushed into switches on the fly to recognize Lync traffic coming from hosts and adjust the network to suit high traffic volumes.  Video calls can be separated from audio calls and given different handling based on a variety of dynamically detected settings.  We can even guarantee end-to-end QoS and see that guarantee through the visibility that protocols like OpenFlow enable in a software defined network.
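
To make that concrete, here’s a minimal sketch of the idea using the Ryu OpenFlow controller.  The audio port is a stand-in, and a real controller would learn ports dynamically (from something like the Lync SDN API) rather than hard-coding them, so treat this as a toy:

```python
# Toy Ryu app (OpenFlow 1.3): when a switch connects, install a flow that
# rewrites the DSCP of UDP traffic to an assumed Lync audio port to EF (46),
# then hands the packet back to normal forwarding.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class LyncQoS(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=17, udp_dst=50020)
        actions = [parser.OFPActionSetField(ip_dscp=46),
                   parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```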

SDN QoS is critical to the performance of soft clients.  Separating the user traffic from the critical communication traffic requires higher-order thinking, not Group Policy hacking.  Ensuring delivery end-to-end is only possible with SDN because of its overall visibility.  Cisco has tried that with MediaNet and Cisco Prime, but it’s totally opt-in.  If there’s a device inline that Prime doesn’t know about, it will be a black hole.  SDN gives visibility into the entire network.

The Weakest Lync

That’s not to say that Lync doesn’t have its issues.  Cisco Jabber was an application built by a company with a strong networking background.  It reports information to the underlying infrastructure that allows QoS policies to work correctly.  The QoS marking method isn’t perfect, but at least it’s available.

Lync packets don’t respect the network.  Lync always assumes there will be adequate bandwidth.  Why else would it not allow for QoS tagging?  It’s also apparent when you realize that some vendors are marking Lync packets with non-standard CoS/DSCP values, and Lync will happily take priority over everything else on the network.  Why doesn’t Lync listen to the traffic conditions around it?  Why does it exist in a vacuum?
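
For contrast, here’s how little it takes for an application to tag its own traffic at the socket level.  This is a minimal sketch assuming a Linux host (Windows generally ignores IP_TOS in favor of policy-based QoS), and the destination address and port are made up:

```python
# Mark outbound UDP packets with DSCP EF (46).  The DSCP value occupies the
# top six bits of the IP TOS byte, hence the shift: 46 << 2 == 0xB8.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
sock.sendto(b"rtp-payload", ("192.0.2.10", 50020))  # example address/port
```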

Lync is an application written by application people.  It’s agnostic of the network.  It doesn’t know if it’s running on a high-speed LAN or across a slow WAN connection.  It can be ignorant of the network because that part just gets figured out.  It’s a classic example of a top-down program.  That’s why SDN holds such promise for Lync.  Because the app itself is unaware of the network, SDN allows it to keep chugging along in bliss while the controllers and forwarding tables do all the heavy lifting.  And that’s why the tie between Lync and SDN is so strong.  SDN makes Lync work better without the need to actually do anything to Lync, or your server infrastructure in general.


Tom’s Take

Lync is the poster child for bad applications that can be fixed with SDN.  And when I say poster child, I mean it.  Extreme Networks, Aruba Networks, and Meru are all talking about using SDN in concert with Lync.  Some are using OpenFlow, others are using proprietary methods.  The end result is making a smarter network to handle an application living in a silo.  Cisco Jabber is easy to program for QoS because it was made by networking folks.  Lync is a pain because it lives in the same world as Office and SQL Server.  It’s only when networks become contentious that we have to find novel ways of solving problems.  Lync is the use case for SDN for small and medium enterprises focused primarily on wireless connectivity.  Because making Lync behave in that environment is indistinguishable from magic, at least without SDN.


If you want to see some interesting conversations about Lync and SDN, especially with OpenFlow, tune into SDN Connect Live on September 18th.  Meru Networks and Tech Field Day will have a roundtable discussion about Lync, featuring Lync experts and SDN practitioners.

Moscone Madness

The Moscone Center in San Francisco is a popular place for technical events.  Apple’s World Wide Developer Conference (WWDC) is an annual user of the space.  Cisco Live and VMworld also come back every few years to keep the location lively.  This year, both conferences utilized Moscone to showcase tech advances and foster community discussion.  Having attended both this year in San Francisco, I think I can finally state the following with certainty.


It’s time for tech conferences to stop using the Moscone Center.


Let’s face it.  If your conference has more than 10,000 attendees, you have outgrown Moscone.  WWDC works in Moscone because they cap the number of attendees at 5,000.  VMworld 2014 has 22,000 attendees.  Cisco Live 2014 had well over 20,000 as well.  Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.

The main keynote hall in Moscone North is too small to hold the large number of audience members.  In an age where every keynote address is streamed live, that shouldn’t be a problem.  Except that people still want to be involved and close to the event.  At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full.  Being stuffed into a crowded room with no seating or table space is frustrating.  But those are just the challenges of Moscone.  There are others as well.

I Left My Wallet In San Francisco

San Francisco isn’t cheap.  It is one of the most expensive places in the country to live.  By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels.  Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night.  Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.

Las Vegas is built for conferences.  It has adequate inexpensive hotel options.  It is designed to handle a large number of travelers arriving at once.  While spread out geographically, it is easy to navigate.  In fact, except for the lack of Uber, Las Vegas is easier to get around in than San Francisco.  I never have a problem finding a restaurant in Vegas to take a large party.  Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won’t find a seat for hours.

The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco.  Cisco and VMware both are in Silicon Valley.  Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando.  But ease-of-transport does not make it easy on your attendees.  Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.


Tom’s Take

The Moscone Center is like the Cotton Bowl in Dallas.  While both have a history of producing wonderful events, both have passed their prime.  They are ill-suited for modern events.  They are cramped and crowded.  They are in unfavorable areas.  It is quickly becoming more difficult to hold events in them for these reasons.  But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay.  Apple will always be there.  Every new iPhone, Mac, and iPad will be launched there.  But those 5,000 attendees are comfortable in one section of Moscone.  Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.

It’s time for Cisco, VMware, and other large organizations to move away from Moscone.  It’s time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can.  Instead, conferences should be located where it makes sense.  Las Vegas, San Diego, and Orlando are conference towns.  Let’s use them as they were meant to be used.  Let’s stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.

Do We Need To Redefine Open?

There’s a new term floating around that seems to be confusing people left and right.  It’s something that’s being used to describe a methodology as well as being thrown around in marketing.  People are using it and don’t even really know what it means.  And this isn’t the first time that’s happened.  Let’s look at the word “open” and why it has become so confusing.

Talking Beer

For those at home that are familiar with Linux, “open” wasn’t the first term to come to mind.  “Free” is another word that has been used in the past with a multitude of loaded meanings.  The original idea around “free” in relation to the Open Source movement is that the software is freely available.  There are no restrictions on use and the source is always available.  The source code for the Linux kernel can be searched and viewed at any time.

Free describes the fact that the Linux kernel is available for no cost.  That’s great for people that want to try it out.  It’s not so great for companies that want to try and build a business around it, yet Red Hat has managed to do just that.  How can they sell something that doesn’t cost anything?  It’s because they keep the notion of free sharing of code alive while charging people for support and special packages that interface with popular non-free software.

The dichotomy between unencumbered software and no-cost software is so confusing that the movement coined a phrase to describe it:

Free as in freedom, not free as in beer.

When you talk about freedom, you are unrestricted.  You can use the software as the basis for anything.  You can rewrite it to your heart’s content.  That’s your right for free software. When you talk about free beer, you set the expectation that whatever you create will be available at no charge.  Many popular Linux distributions are available at no cost.  That’s like getting beer for nothing.

Open, But Not Open Open

The word “open” is starting to take on aspects of the “free” argument.  Originally, the meaning of open came from the Open Source community.  Open Source means that you can see everything about the project.  You can modify anything.  You can submit code and improve something.  Look at the OpenDaylight project as an example.  You can sign up, download the source for a given module, and start creating working code.  That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect.  They are creating the network of the future and allowing the community to do the same.

But “open” is being redefined by vendors.  Open for some means “you can work with our software via an API, but you can’t see how everything works”.  This is much like the binary-only NVIDIA driver.  The proprietary code is precompiled and available to download at no cost, but you can’t modify the source at all.  While it works with open source software, it’s not open.

A conversation I had during Wireless Field Day 7 drove home the idea of this new “open” in relation to software defined networking.  Vendors tout open systems to their customers. They standardize on northbound interfaces that talk to orchestration platforms and have API support for other systems to call them.  But the southbound interface is proprietary.  That means that only their controller can talk to the network hardware attached to it.  Many of these systems have “open” in the name somewhere, as if to project the idea that they work with any component makeup.
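
To put that in concrete terms, a “northbound-open” interaction usually looks something like the sketch below.  The endpoint, payload, and credentials are all hypothetical, because every vendor’s API is different, which is exactly the problem:

```python
# Hypothetical northbound REST call.  The URL, payload shape, and auth are
# invented for illustration; the southbound side stays the vendor's secret.
import requests

resp = requests.post(
    "https://controller.example.com/api/v1/qos-policies",
    json={"app": "voice", "match": {"udp_dst": "50020-50039"}, "dscp": 46},
    auth=("admin", "password"),
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```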

This new “open” definition of having proprietary components with an API interface feels very disingenuous.  It also makes for some very awkward conversations:

$VendorA: Our system is open!

ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?

$VendorA: What exactly do you mean by “full”?


Tom’s Take

Using “open” to market these systems is wrong.  Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong.  But we don’t have a word to describe this new idea of “open”.  It’s not exactly closed.  Perhaps we can call it something else.  Maybe “ajar”.  That might make the marketing people a bit upset.  “Try our new AjarNetworking controller.  As open as we wanted to make it without closing everything.”

“Open” will probably be dominated by marketing in the next couple of years.  Vendors will try to tell you how well they interoperate with everyone.  And I will always remember how open protocols like SIP are and how everyone uses that openness against them.  If we can’t keep the definition of “open” clean, we need to find a new term.

Security Dessert Models

I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to crème brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model of a world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today in companies that explicitly trust their user base.

The crème brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks dropped in the parking lot being activated on “trusted” computers internally.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team will realize that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top will also be extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems will have a very aggressive security posture in place to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means.  It requires you to identify assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  The communications between untrusted devices and trusted systems need to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
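
As a trivial sketch of that grouping idea (the tiers, flows, and ports below are all invented), you can treat the allowed communications as data and generate deny-by-default rules from them instead of hand-writing permissive ACLs:

```python
# Generate least-privilege rules from a declared list of allowed flows.
# Tuples are (source group, destination group, TCP port); everything not
# listed falls through to the final deny.
ALLOWED_FLOWS = [
    ("web-tier", "app-tier", 8443),
    ("app-tier", "customer-db", 5432),
]

def build_acl(flows):
    rules = [f"permit tcp {src} {dst} eq {port}" for src, dst, port in flows]
    rules.append("deny ip any any log")  # the crunchy edge around the chocolate
    return rules

print("\n".join(build_acl(ALLOWED_FLOWS)))
```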



Tom’s Take

Security is a world of protecting the important things while making sure they can be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even crème brûlée, the proportion of ingredients must be just right to make a tasty dessert.  In security you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.


2014 – Introductions Are In Order

It’s January 1 again.  Time to look back at what I said I was going to do for 2013.  Remember how there was going to be lots of IPv6 in the coming year?  Three whole posts.  Not exactly ushering in the future, is it?  What did I work on instead?

It’s been a bit of a change for me.  I’ve gone from bits and bytes to spreadsheets and event planning.  It’s a good thing.  I’m more in touch with people now than I ever was behind a console screen.  I can see the up-and-comers in the industry.  I help bring attention to people that deserve it.  People like Brent Salisbury (@NetworkStatic), Jason Edelman (@JEdelman8), and Jake Snyder (@JSnyder81).

I still get involved with technology.  It’s just at a higher architectural level.  That means I can stay grounded while at the same time interacting with the people that really know what’s going on.  In many ways, it’s the cross-discipline aspect that I’ve been preaching to my old coworkers for years, taken to a different extreme.

That means 2014 is going to look much different than I thought it would a year ago.  Almost like I need to introduce myself to the new year all over again.

I really want to spend the next year concentrating on the people.  I want to help bring bloggers and influencers along and give them a way to express themselves.  Perhaps that means social media.  Or a new blog.  Or maybe getting them on board with programs like the Solarwinds Ambassadors.  I want the smart people out there to show the world how smart they are.  I don’t want anyone to go unheard for lack of a platform.

I also really liked this article from John Mark Troyer about creating the new year you want to see.  John has some great points here.  I’ve always tried to stay away from making bold predictions for the coming year because they never pan out.  If you want to be right, you either couch the prediction with a healthy amount of uncertainty or you guess something that’s almost guaranteed to happen.  I much prefer writing about what I need to accomplish or what I think needs to happen.  You really are more likely to get something accomplished if you have a concrete goal of self-advancement.

Every new year starts out with limitless potential.  Every one of us has the ability and the desire to do something amazing.  I’ve never been one for making resolutions, as that seems to be setting yourself up for failure in many cases.  Instead, I try to do what I can every day to be awesome.  You should too.  Make 2014 an even better year than the last ten or twenty.  Learn how SDN works.  Learn a programming language.  Write a book or a blog or a funny tweet. Express yourself so that everyone knows who you are.  Make 2014 the year you introduce yourself to the world.  If you’ve already done that, make sure the world won’t forget you any time soon.

Will Dell Buy Aerohive?

One rumor I keep hearing about in the industry involves a certain buzzing wireless vendor and the world’s largest startup.  Acquisitions happen all the time.  Rumors of them are even more frequent.  But the more I thought about it, the more I realized this may be good for everyone.

Dell wants to own the stack from top to bottom.  In the past, they have had to partner with printer companies (Lexmark) and networking companies (Brocade and Juniper) to deliver parts of the infrastructure they couldn’t provide themselves.  In the case of printers, Dell found a way to build them on their own.  That reduced their reliance on Lexmark.  In the networking world, Dell shocked everyone by going outside their OEM relationship and buying Force10.  I’ve talked before about why the Force10 pickup was a better deal in the long run than Brocade.

Dell’s Desires

Dell needs specific pieces of the puzzle.  They don’t want to be encumbered with ancillary products that will need to be jettisoned later.  Buying Brocade would have required unwinding a huge Fibre Channel business.  In much the same way, I don’t think Dell will end up buying their current wireless OEM, Aruba Networks.  Aruba has decided to branch out past simple wireless and moved into wired network switches and security and identity management programs like ClearPass.  Dell doesn’t want any of that.  They already have an issue integrating the Force10 networking expertise into the PowerConnect line.  I’ve been told in the past that FTOS will eventually come to PowerConnect, but that has yet to happen.  Integrating purchased companies isn’t easy.  It becomes exponentially harder the more product lines you have to integrate.

Aruba is too expensive for Dell to buy outright.  Michael Dell spent a huge chunk of his cash to get his company back from the shareholders.  He’s going to put it on a diet pretty soon.  I would expect to see a few product lines slimmed down or outright dropped.  That makes it tough to justify buying so much from another company.  Dell needs a scalpel, not a sledgehammer.

Aerohive’s Aspirations

Aerohive is the best target for Dell.  They are clearly fighting for third place in the wireless market behind Cisco and Aruba.  Aerohive has never been shy about punching above their weight.  They have the mentality of a scrappy terrier that won’t go down without a fight.  But they are getting pressure to expand quickly across their product lines.  They took their time releasing an 802.11ac access point.  Their switching offering hasn’t caught on in the same way as that of Aruba or Meraki (now a division of Cisco).

Aerohive is on the verge of going public.  I’m sure the infusion of cash would allow them to pay off some early investors as well as fund more development for 802.11ac Phase 2 gear and maybe a firewall offering.  The risk comes when you look at what happened to Ruckus Wireless shortly after their IPO.  While they did recover, it didn’t look very good for a company that supposedly had a unique claim to fame in their antenna design.  Aerohive is a cloud management platform like many others in the market.  You have to wonder how investors would view them.  Scrappy doesn’t sell stock.

Aerohive is now fighting in the new Gartner “Wired and Wireless Access” magic quadrant, which is an absolute disaster for everyone.  An analyst firm thinks that wireless is just like wired, so naturally it makes sense for AP vendors to start making switches, right?  Except the people who are really brilliant when it comes to wireless, like Matthew Gast and Victor Shtrom couldn’t care less about bits on copper.  They’ve spent the better part of their careers solving the RF problems in the world.  And now someone tells them that interference problems aren’t that much different than spanning tree?  I would have long since planted my head permanently onto my desk if I’d been told that in their position.

Aerohive gains a huge backer in the fight if Dell acquires them.  They get the name to go up against Cisco/Meraki.  They gain R&D from Dell with expertise around cloud management.  They can start developing integration between HiveManager and Dell’s extensive SMB product line.  Switch supply becomes a thing of the past.  Their entire software offering fits well with what Dell is trying to accomplish from a device-independence perspective with regard to customers.

Tom’s Take

I don’t put much stock in random rumors.  But I’ve heard this one come up enough to make me ask some tough questions.  There are people in both camps that think it will happen sometime in 2014.  Dell has to get the books sorted out and figure out who’s in charge of buying things.  Aerohive has to see if there’s enough juice left in the market to IPO and not look foolish.  Maybe Dell needs to run the numbers and find out what it would take to cash out Aerohive’s investors and add the company to the growing Empire of Round Rock.  A little buzz for the World’s Largest Startup couldn’t hurt.