Why is Lync The Killer SDN Application?


The key to showing the promise of SDN is to find a real-world application to showcase its capabilities.  I recently wrote about using SDN to slice education networks.  But this is just one idea.  To show real promise, you have to shelve the generic approach and trot out a name.  People have to know that SDN will help them fix something on their network or optimize a troublesome program.  And it appears that application is Microsoft Lync.

Missing Lync

Microsoft Lync (née Microsoft Office Communicator) is a software application designed to facilitate communications.  It includes voice calling, instant messaging, and collaboration tools.  The voice part is particularly appealing to small businesses.  With a Microsoft Office 365 for Business subscription, you gain access to Lync.  That means introducing a voice soft client to your users.  And if it's available, people are going to use it.

As a former voice engineer, I can tell you that soft clients are a bit of a pain to configure.  They have their own way of doing things, especially when Quality of Service (QoS) is involved.  In the past, tagging soft client voice packets with Cisco Jabber required setting cluster-wide parameters for all clients.  It was all-or-nothing.  There were also plans to use things like Cisco MediaNet to tag Jabber packets, but that approach appears to have fallen by the wayside.  It was much easier to set QoS values on physical phones and leave the soft clients relegated to curiosities.

Lync doesn’t use a physical phone.  It’s all software based.  And as usage has grown, the need to categorize all that traffic for optimal network transmission has become important.  But configuring QoS for Lync is problematic at best.  Microsoft guidelines say to configure the Lync servers with QoS policies.  Some enterprising users have found ways to configure clients with Group Policy settings based on port numbers.  But it’s all still messy.

A Lync To The Future

That’s where SDN comes into play.  Dynamic QoS policies can be pushed into switches on the fly to recognize Lync traffic coming from hosts and adjust the network to suit high traffic volumes.  Video calls can be separated from audio calls and given different handling based on a variety of dynamically detected settings.  We can even guarantee end-to-end QoS and see that guarantee through the visibility that protocols like OpenFlow enable in a software defined network.
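
To make that concrete, here is a minimal sketch of what a dynamic, per-call QoS push could look like.  The controller URL, REST path, and JSON schema are assumptions made up for illustration, not any particular vendor's or OpenDaylight's actual interface; the point is how little state the change needs and how quickly it can be installed and aged back out.

```python
# A sketch, not a real product API: push a per-call QoS flow rule when a Lync
# media stream is detected. The controller URL, path, and JSON schema below
# are invented for illustration; real controllers each define their own.
import requests

CONTROLLER = "http://sdn-controller.example.com:8181"

# Assumed marking scheme: EF (46) for audio, AF41 (34) for video.
DSCP_BY_MEDIA = {"audio": 46, "video": 34}


def push_lync_qos_rule(switch_id, src_ip, dst_ip, udp_port, media_type):
    """Install a flow entry that marks and queues one Lync media stream."""
    rule = {
        "switch": switch_id,
        "priority": 500,
        "match": {
            "ip_proto": "udp",
            "ipv4_src": src_ip,
            "ipv4_dst": dst_ip,
            "udp_dst": udp_port,
        },
        "actions": [
            {"type": "set_ip_dscp", "value": DSCP_BY_MEDIA[media_type]},
            {"type": "set_queue", "queue_id": 1 if media_type == "audio" else 2},
            {"type": "output", "port": "NORMAL"},
        ],
        "idle_timeout": 60,  # age the rule out shortly after the call ends
    }
    requests.post(f"{CONTROLLER}/flows", json=rule, timeout=5).raise_for_status()


# Example: an audio stream detected between a client and a Lync front end server.
push_lync_qos_rule("openflow:1", "10.1.20.15", "10.1.5.10", 50020, "audio")
```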

SDN QoS is critical to the performance of soft clients.  Separating user traffic from critical communication traffic requires higher-order thinking, not Group Policy hacking.  Ensuring delivery end-to-end is only possible with SDN because of that overall visibility.  Cisco has tried this with MediaNet and Cisco Prime, but it's entirely opt-in.  If there's a device inline that Prime doesn't know about, it becomes a black hole.  SDN gives visibility into the entire network.

The Weakest Lync

That's not to say that Lync doesn't have its issues.  Cisco Jabber was an application built by a company with a strong networking background.  It reports information to the underlying infrastructure that allows QoS policies to work correctly.  The QoS marking method isn't perfect, but at least it's available.

Lync packets don't respect the network.  Lync always assumes there will be adequate bandwidth.  Why else would it not allow for QoS tagging?  It's also apparent when you realize that some vendors are marking Lync packets with non-standard CoS/DSCP values, and Lync will happily take that elevated priority on the network.  Why doesn't Lync listen to the traffic conditions around it?  Why does it exist in a vacuum?

Lync is an application written by application people.  It's agnostic of the network.  It doesn't know if it's running on a high-speed LAN or across a slow WAN connection.  It can be ignorant of the network because that part just gets figured out.  It's a classic example of a top-down program.  That's why SDN holds such promise for Lync.  Because the app itself is unaware of the network, SDN allows it to keep chugging along in bliss while the controllers and forwarding tables do all the heavy lifting.  And that's why the tie between Lync and SDN is so strong: SDN makes Lync work better without the need to actually do anything about Lync, or your server infrastructure in general.


Tom’s Take

Lync is the poster child for bad applications that can be fixed with SDN.  And when I say poster child, I mean it.  Extreme Networks, Aruba Networks, and Meru are all talking about using SDN in concert with Lync.  Some are using OpenFlow, others are using proprietary methods.  The end result is making a smarter network to handle an application living in a silo.  Cisco Jabber is easy to program for QoS because it was made by networking folks.  Lync is a pain because it lives in the same world as Office and SQL Server.  It’s only when networks become contentious that we have to find novel ways of solving problems.  Lync is the use case for SDN for small and medium enterprises focused primarily on wireless connectivity.  Because making Lync behave in that environment is indistinguishable from magic, at least without SDN.


If you want to see some interesting conversations about Lync and SDN, especially with OpenFlow, tune into SDN Connect Live on September 18th.  Meru Networks and Tech Field Day will have a roundtable discussion about Lync, featuring Lync experts and SDN practitioners.

An Educational SDN Use Case

During the VMUnderground Networking Panel, we had a great discussion about software defined networking (SDN) among other topics. Seems that SDN is a big unknown for many out there. One of the reasons for this is the lack of specific applications of the technology. OSPF and SQL are things that solve problems. Can the same be said of SDN? One specific question regarded how to use SDN in small-to-medium enterprise shops. I fired off an answer from my own experience:

Since then, I've had a few people cite my example as a great use case for SDN. I decided that I needed to develop it a bit more now that I've had time to think about it.

Schools are a great example of the kinds of "do more with less" organizations that are becoming more common. They have enterprise-class networks and needs, yet live off budgets that wouldn't buy janitorial supplies. In fact, if it weren't for E-Rate, most schools would have technology from the Stone Age. But all this new tech doesn't help if you can't find a way to use it to the fullest for educating students.

In my example, I talked about the shift from written standardized testing to online assessments. Oklahoma and Indiana are leading the way in getting rid of Scantrons and #2 pencils in favor of keyboards and monitors. The process works well for the most part with proper planning. My old job saw lots of scrambling to prep laptops, tablets, and lab machines for the rigors of running the test. But no amount of pre-config could prepare for the day when it was time to go live. On those days, the network was squarely in the sights of the administration.

I’ve seen emails go around banning non-testing students from the computers. I’ve seen hard-coded DNS entries on testing machines while the rest of the school had DNS taken offline to keep them from surfing the web. Redundant circuits. QoS policies that would make voice engineers cry. All in the hope of keeping the online test bandwidth free to get things completed in the testing window. All the while, I was thinking to myself, “There has got to be an easier way to do this…”

Redefining with Software

Enter SDN. The original use case for SDN at Stanford was network slicing. The Next-Gen Network Team wanted to use the spare capacity of the network for testing without crashing the whole system. Being able to reconfigure the network on the fly is a huge leap forward. Pushing policy into devices without CLI cuts down on the resume-generating events (RGE) in production equipment. So how can we apply these network slicing principles to my example?

On the day of the test, have the configuration system push down a new policy that gives the testing machines a guaranteed amount of bandwidth. This reservation will ensure each machine is able to get what it needs without being starved out. With SDN, we can set this policy on a per-IP basis to ensure it is enforced. This slice will exist separate from the production network to ensure that no one starting a huge FTP transfer or video upload will disrupt testing. By leaving the remaining bandwidth intact for the rest of the school's production network, administrators can ensure that the rest of the student body isn't impacted during testing. With the move toward flipped classrooms and online curriculum augmentation, having available bandwidth is crucial.
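
As a sketch of how small that change could be, here is what pushing and then rolling back a per-IP guarantee might look like. The controller address, REST paths, and policy schema are hypothetical and exist only to show the workflow; a real deployment would use whatever northbound API its controller actually exposes.

```python
# A sketch under assumptions: carve out a testing-day "slice" by pushing a
# per-IP bandwidth guarantee to an SDN controller, then roll it back when the
# window closes. The REST endpoint and policy schema are hypothetical.
import requests

CONTROLLER = "http://sdn-controller.school.example:8181"
TESTING_MACHINES = ["10.10.40.21", "10.10.40.22", "10.10.40.23"]


def apply_testing_slice(guaranteed_kbps=2000):
    """Guarantee a bandwidth floor (not a cap) for each testing machine."""
    for ip in TESTING_MACHINES:
        policy = {
            "name": f"state-testing-{ip}",
            "match": {"ipv4_src": ip},
            "min_rate_kbps": guaranteed_kbps,
            "priority": 400,
        }
        requests.post(f"{CONTROLLER}/policies", json=policy, timeout=5).raise_for_status()


def remove_testing_slice():
    """One call per machine and the network is back to normal."""
    for ip in TESTING_MACHINES:
        requests.delete(f"{CONTROLLER}/policies/state-testing-{ip}", timeout=5).raise_for_status()


apply_testing_slice()
# ... testing window runs ...
remove_testing_slice()
```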

Could this be done via non-SDN means? Sure. Granted, you’ll have to plan the QoS policy ahead of time and find a way to classify your end-user workstations properly. You’ll also have to tune things to make sure no one is dominating the test machine pool. And you have to get it right on every switch you program. And remove it when you’re done. Unless you missed a student or a window, in which case you’ll need to reprogram everything again. SDN certainly makes this process much easier and less disruptive.


Tom’s Take

SDN isn't a product. You can't order a part number for SDN-001 and get a box labeled SDN. Instead, it's a process. You apply SDN to an existing environment and extend its capabilities through new processes. Those processes need use cases. Use cases drive business cases. Business cases provide buy-in from the stakeholders. That's why discussing cases like the one above is so important. When you can find a use for SDN, you can get people to accept it. And that's half the battle.

SLAAC May Save Your Life


A chance dinner conversation at Wireless Field Day 7 with George Stefanick (@WirelesssGuru) and Stewart Goumans (@WirelessStew) made me think about the implications of IPv6 in healthcare.  IPv6 adoption hasn’t been very widespread, thanks in part to the large number of embedded devices that have basic connectivity.  Basic in this case means “connected with an IPv4 address”.  But that address can lead to some complications if you aren’t careful.

In a hospital environment, the units that handle medicine dosing are connected to the network.  This allows the staff to program them to properly dispense medications to patients.  Given an IP address in a room, staff can ensure that a patient is getting just the right amount of painkillers and not an overdose.  Ensuring a device gets the same IP each time is critical to making this process work.  According to George, he has recommended that the staff stop using DHCP to automatically assign addresses and instead move to static IP configuration to ensure there isn’t a situation where a patient inadvertently receives a fatal megadose of medication, such as when an adult med unit is accidentally used in a pediatric application.

This static policy does lead to network complications.  Units removed from their proper location are rendered unusable because of the wrong IP.  Worse yet, since those units don’t check in with the central system any more, they could conceivably be incorrectly configured.  At best this will generate a support call to the IT staff.  At worst…well, think lawsuit.  Not to mention what happens if there is a major change to gateway information.  That would necessitate massive manual reconfiguration and downtime until those units can be fixed.

Cut Me Some SLAAC

This is where IPv6 comes into play, especially with Stateless Address Auto Configuration (SLAAC).  By using an automatically configured address structure that never changes, this equipment will never go offline.  It will always be checked in on the network.  There will be little chance of the unit dispensing the wrong amount of medication.  The medical unit will have history available via the same IPv6 address.

There are challenges to be sure.  IPv6 support isn’t cheap or easy.  In the medical industry, innovation happens at a snail’s pace.  These devices are just now starting to have mobile connectivity for wireless use.  Asking the manufacturers to add IPv6 into their networking stacks is going to take years of development at best.

Having the equipment attached all the time also brings up issues with moving the unit to the wrong area and potentially creating a fatal situation.  Thankfully, the router advertisements can help there.  If the RA for a given subnet locks the unit into a given prefix, controls can be enacted on the central system to ensure that devices in that prefix range will never be allowed to dispense medication above or below a certain amount.  While this is more of a configuration on the medical unit side, IPv6 provides the predictability needed to ensure those devices can be found and cataloged.  Since a SLAAC-addressed device using EUI-64 will always get the same address, you never have to guess which device got a specific address.  You will always know from the last 64 bits which device you are speaking to, no matter the prefix.
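
To show why that address is so predictable, here is a short Python sketch, standard library only, that builds the modified EUI-64 interface identifier from a device's MAC address and combines it with whatever /64 prefix the local router advertises.  The MAC address and prefixes are made-up examples; the construction itself is the one SLAAC defines.

```python
# Standard-library sketch: the modified EUI-64 interface identifier a SLAAC
# host derives from its MAC address, combined with whatever /64 prefix the
# local router advertises. MAC and prefixes below are made-up examples.
import ipaddress


def eui64_interface_id(mac: str) -> int:
    """Modified EUI-64: insert ff:fe in the middle and flip the U/L bit."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return int.from_bytes(bytes(eui64), "big")


def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine an advertised /64 prefix with the EUI-64 interface ID."""
    return ipaddress.IPv6Network(prefix)[eui64_interface_id(mac)]


mac = "00:1b:63:84:45:e6"
print(slaac_address("2001:db8:1:10::/64", mac))   # ward A prefix
print(slaac_address("2001:db8:1:20::/64", mac))   # ward B prefix, same host bits
```

No matter which prefix the device hears, the last 64 bits stay the same, which is what makes it possible to find and catalog the unit wherever it lands.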

Tom’s Take

Healthcare is a very static industry when it comes to innovation.  Medical companies are trying to keep pace with technology advances while at the same time ensuring that devices are safe and do not threaten the patients they are supposed to protect.  IPv6 can give us an extra measure of safety by ensuring devices receive the same address every time.  IPv6 also gives the consistency needed to compile proper reporting about the operation of a device and even the capability of finding that device when it is moved to an improper location.  Thanks to SLAAC and IPv6, one day these networking technologies might just save your life.

Moscone Madness


The Moscone Center in San Francisco is a popular place for technical events.  Apple's Worldwide Developers Conference (WWDC) is an annual user of the space.  Cisco Live and VMworld also come back every few years to keep the location lively.  This year, both conferences utilized Moscone to showcase tech advances and foster community discussion.  Having attended both this year in San Francisco, I think I can finally state the following with certainty.


It’s time for tech conferences to stop using the Moscone Center.


Let's face it.  If your conference has more than 10,000 attendees, you have outgrown Moscone.  WWDC works in Moscone because Apple caps the number of attendees at 5,000.  VMworld 2014 had 22,000 attendees.  Cisco Live 2014 had well over 20,000 as well.  Cramming four times that number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.

The main keynote hall in Moscone North is too small to hold the large number of audience members.  In an age where every keynote address is streamed live, that shouldn’t be a problem.  Except that people still want to be involved and close to the event.  At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full.  Being stuffed into a crowded room with no seating or table space is frustrating.  But those are just the challenges of Moscone.  There are others as well.

I Left My Wallet In San Francisco

San Francisco isn't cheap.  It is one of the most expensive places in the country to live.  By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels.  Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night.  Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.

Las Vegas is built for conferences.  It has adequate inexpensive hotel options.  It is designed to handle a large number of travelers arriving at once.  While spread out geographically, it is easy to navigate.  In fact, except for the lack of Uber, Las Vegas is easier to get around than San Francisco.  I never have a problem finding a restaurant in Vegas to take a large party.  Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won't find a seat for hours.

The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco.  Cisco and VMware both are in Silicon Valley.  Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando.  But ease-of-transport does not make it easy on your attendees.  Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.


Tom’s Take

The Moscone Center is like the Cotton Bowl in Dallas.  While both have a history of producing wonderful events, both have passed their prime.  They are ill-suited for modern events.  They are cramped and crowded.  They are in unfavorable areas.  It is quickly becoming more difficult to hold events there for these reasons.  But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay.  Apple will always be there.  Every new iPhone, Mac, and iPad will be launched there.  But those 5,000 attendees are comfortable in one section of Moscone.  Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.

It's time for Cisco, VMware, and other large organizations to move away from Moscone.  It's time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can.  Instead, conferences should be located where it makes sense.  Las Vegas, San Diego, and Orlando are conference towns.  Let's use them as they were meant to be used.  Let's stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.

Do We Need To Redefine Open?


There's a new term floating around that seems to be confusing people left and right.  It's being used to describe a methodology as well as a marketing angle.  People are using it and don't even really know what it means.  And this isn't the first time that's happened.  Let's look at the word "open" and why it has become so confusing.

Talking Beer

For those at home that are familiar with Linux, “open” wasn’t the first term to come to mind.  “Free” is another word that has been used in the past with a multitude of loaded meanings.  The original idea around “free” in relation to the Open Source movement is that the software is freely available.  There are no restrictions on use and the source is always available.  The source code for the Linux kernel can be searched and viewed at any time.

Free describes the fact that the Linux kernel is available for no cost.  That’s great for people that want to try it out.  It’s not so great for companies that want to try and build a business around it, yet Red Hat has managed to do just that.  How can they sell something that doesn’t cost anything?  It’s because they keep the notion of free sharing of code alive while charging people for support and special packages that interface with popular non-free software.

The dichotomy between unencumbered software and no-cost software is so confusing that the movement created a phrase to describe it:

Free as in freedom, not free as in beer.

When you talk about freedom, you are unrestricted.  You can use the software as the basis for anything.  You can rewrite it to your heart’s content.  That’s your right for free software. When you talk about free beer, you set the expectation that whatever you create will be available at no charge.  Many popular Linux distributions are available at no cost.  That’s like getting beer for nothing.

Open, But Not Open Open

The word “open” is starting to take on aspects of the “free” argument.  Originally, the meaning of open came from the Open Source community.  Open Source means that you can see everything about the project.  You can modify anything.  You can submit code and improve something.  Look at the OpenDaylight project as an example.  You can sign up, download the source for a given module, and start creating working code.  That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect.  They are creating the network of the future and allowing the community to do the same.

But "open" is being redefined by vendors.  Open for some means "you can work with our software via an API, but you can't see how everything works".  This is much like the binary-only NVIDIA driver.  The proprietary code is pre-compiled and available to download for free, but you can't modify the source at all.  While it works with open source software, it's not open.

A conversation I had during Wireless Field Day 7 drove home the idea of this new "open" in relation to software defined networking.  Vendors tout open systems to their customers. They standardize on northbound interfaces that talk to orchestration platforms and have API support for other systems to call them.  But the southbound interface is proprietary.  That means that only their controller can talk to the network hardware attached to it.  Many of these systems have "open" in the name somewhere, as if to project the idea that they work with any mix of components.
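
To see how thin that kind of "open" can be, here is a small sketch against a made-up controller.  The address and REST paths are purely illustrative; the point is that the only thing exposed to you is the northbound API, while the conversation between the controller and its switches stays closed.

```python
# A sketch of what "open" often amounts to: a documented northbound REST API.
# The controller address and paths are made up purely for illustration.
import requests

CONTROLLER = "https://open-ish-controller.example.com/api/v1"

# Northbound: documented and reachable -- any orchestrator can call it.
topology = requests.get(f"{CONTROLLER}/topology", timeout=5).json()
requests.post(
    f"{CONTROLLER}/policies",
    json={"name": "voice-priority", "dscp": 46},
    timeout=5,
).raise_for_status()

# Southbound: there is no call to make here. The protocol between this
# controller and its switches is proprietary, so a $VendorB switch will
# never appear in that topology no matter how "open" the API above is.
print(len(topology.get("switches", [])), "switches -- all from one vendor")
```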

This new “open” definition of having proprietary components with an API interface feels very disingenuous.  It also makes for some very awkward conversations:

$VendorA: Our system is open!

ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?

$VendorA: What exactly do you mean by “full”?


Tom’s Take

Using “open” to market these systems is wrong.  Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong.  But we don’t have a word to describe this new idea of “open”.  It’s not exactly closed.  Perhaps we can call it something else.  Maybe “ajar”.  That might make the marketing people a bit upset.  “Try our new AjarNetworking controller.  As open as we wanted to make it without closing everything.”

"Open" will probably be dominated by marketing in the next couple of years.  Vendors will try to tell you how well they interoperate with everyone.  And I will always remember how open protocols like SIP are and how everyone uses that openness against it.  If we can't keep the definition of "open" clean, we need to find a new term.

Maybe MU-MIMO Matters


As 802.11ac becomes more widely deployed, I find myself looking to the next wave and the promise it brings.  802.11ac Wave 1 for me really isn't that groundbreaking.  It's an incremental improvement on 802.11n.  Wave 1 really only serves to wake up the manufacturers to the fact that 5 GHz radios are needed on devices now.  The really interesting stuff comes in Wave 2.  Wider channels, more spatial streams, and a host of other improvements are on the way.  But the most important one for me is MU-MIMO.

Me Mi Mo Mum

Multi-user Multiple-Input Multiple-Output (MU-MIMO) is a huge upgrade over the MIMO specification in 802.11n.  MIMO allowed access points to multiplex several spatial streams over the same channel to a single client.  It accomplished this via Spatial Division Multiplexing (SDM).  This means that more antennas on an access point are a very good thing.  They increase the throughput above and beyond what could be accomplished with just a single antenna.  But this approach does have a drawback.

Single-user MIMO can only talk to one client at a time.  All the work necessary to multiplex those data streams requires the full attention of a single access point for the period of time that the client is transmitting.  That means that crowded wireless networks can see reduced throughput because of shorter transmit windows.  And what wireless network isn't crowded today?

MU-MIMO solves this problem by using spare antenna capacity to transmit multiple data streams at once, each aimed at a different client.  If you have spare antennas, you can send another data stream.  The AP handles the work of keeping those simultaneous streams separate.  This means an effective increase in throughput for certain devices even though the signal strength isn't as high (thanks to FCC power limits).  Here's a great video from Wireless Field Day 7 that explains the whole process:

What I found most interesting in this video is twofold.  First, MU-MIMO is of great benefit to client devices that don't have the maximum number of spatial streams.  Laptops are going to have three-stream and four-stream cards, so their MU-MIMO benefit is minimal.  However, the majority of devices on the market using wireless are mobile.  Tablets and phones don't have multiple spatial streams, usually just one (or in some cases two).  They do this to improve battery life.  MU-MIMO will help them out considerably.
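
To put rough numbers on that gap, here is a back-of-the-envelope calculation of 802.11ac PHY rates per spatial stream.  The constants are the standard's values for an 80 MHz channel at MCS 9 with a short guard interval; real-world throughput will be lower, but the scaling is the point.

```python
# Back-of-the-envelope 802.11ac PHY rates: the standard's values for an 80 MHz
# channel, 256-QAM (MCS 9), rate-5/6 coding, short guard interval. Real-world
# throughput is lower, but the scaling per spatial stream is what matters.
DATA_SUBCARRIERS_80MHZ = 234
BITS_PER_SUBCARRIER = 8        # 256-QAM
CODING_RATE = 5 / 6
SYMBOL_TIME_S = 3.6e-6         # 3.2 us symbol + 0.4 us short guard interval


def phy_rate_mbps(spatial_streams: int) -> float:
    bits_per_symbol = DATA_SUBCARRIERS_80MHZ * BITS_PER_SUBCARRIER * CODING_RATE
    return spatial_streams * bits_per_symbol / SYMBOL_TIME_S / 1e6


for streams in (1, 2, 3):
    print(f"{streams} stream(s): {phy_rate_mbps(streams):.0f} Mbps")
# 1 stream(s): 433 Mbps   -- a typical phone or tablet
# 2 stream(s): 867 Mbps
# 3 stream(s): 1300 Mbps  -- a high-end laptop
```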

The second takeaway is that devices without a high number of receive chains will make the AP do more work.  That's because the AP has to process the transmit streams and prevent the extra streams from being transmitted toward the wrong client.  That's going to consume processing power.  That means you'll need an AP with a lot of processing power.  Or a control system that can crunch those numbers for you.

When you consider that a large number of APs in a given system are sitting idle for a portion of the time, it would be nice to be able to use that spare capacity for MU-MIMO processing.  In addition, having those extra antennas available to help with MU-MIMO sounding would be nice too.  There's already been some work done on the research side of things.  Maybe we'll soon see the ability to take the idle processing power of a wireless network and use it to boost client throughput as needed.


 

Tom’s Take

Wireless never ceases to amaze me.  When I started writing this article, I thought I knew how MU-MIMO worked.  Thankfully, Cisco set me straight at Wireless Field Day 7.  MU-MIMO is going to help clients that can’t run high-performance networking cards.  The kinds of clients that are being sold as fast as possible today.  That means that the wireless system is already being developed to support a new kind of wireless device.

A device that doesn’t have access to limitless power from a wall socket or a battery that lasts forever.  There’s been talk of tablets with increased spatial streams for a while, but the cruel mistress of battery life will always win in the end.  That’s why MU-MIMO matters the most. Because if the wireless device can’t get more powerful, maybe it’s time for the wireless network to do the heavy lifting.

IPv6 and the VCR

IPv6 isn't a fad.  It's not a passing trend that will be gone tomorrow.  When Vint Cerf is on a nationally televised non-technical program talking about IPv6, that's about as real as it's going to get.  Add in the final depletion of IPv4 address space from the RIRs and you will see that IPv6 is a necessity.  Yet there are still people in tech who deny the increasing need for IPv6 awareness, the same people who say it's not ready or that it costs too much.  It reminds me of a different argument.

IPvcr4

My house is full of technology.  Especially when it comes to movie watching.  I have DVRs for watching television, a Roku for other services, and apps on my tablet so the kids can watch media on demand.  I have a DVD player in almost every room of the house.  I also have a VCR.  It serves one purpose – to watch two movies that are only available on a video tape.  Those two movies are my wedding and the birth of my oldest son.

At first, the VCR stayed connected to our television all the time.  We owned some movies on VHS that we didn't have DVD or digital copies of.  As time wore on, those VHS movies were replaced by digital means.  Soon, the VCR only served to enable viewing of the aforementioned personal media.  We couldn't get those movies on DVD from just anywhere.  For a while the VCR stayed hooked up for those occasions when the movies needed to be watched.  Soon, though, it was too much of a hassle to reconnect the VCR, even for these family films.  Eventually, we figured out how to hook up the VCR and record the content into a digital format we could play without it.

How does this compare to IPv6?  Most people assume that the transition to IPv6 from IPv4 will be sudden and swift.  They will wake up one morning and find that all their servers and desktops are running global IPv6 addresses and IPv4 will be a distant memory.  In fact, nothing could be further from the truth.  IPv4 and IPv6 can coexist on the same system.  IPv6 can be implemented alongside IPv4 without disrupting connectivity.  As in the above example, you can watch DVDs and VHS tapes on the same TV without disruption.
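
Here is a minimal sketch of what that coexistence looks like from a host's point of view, using nothing but the Python standard library: ask the resolver for every address family and use whichever connects, preferring IPv6 when both ends have it.  The hostname is just an example.

```python
# Standard-library sketch of dual stack from a host's point of view: ask the
# resolver for every address family and use whichever connects, preferring
# IPv6 when both sides have it. The hostname is just an example.
import socket


def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try addresses in resolver order; dual-stacked hosts list IPv6 first."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err if last_err else OSError(f"no addresses found for {host}")


conn = connect_dual_stack("www.example.com", 80)
print("Connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```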

As IPv4 address availability is restricted, many engineers will find themselves scrambling to replace existing systems and deploy new ones without access to IPv4 addresses.  That's when the real cutover begins.  As these new systems are brought online, IPv6 will be the only address space available.  These systems will be connected with IPv6 first and provisions will be made for them to connect to other systems via IPv4.  Eventually, IPv4 will only exist for legacy systems that can't be upgraded or migrated.  Just like the VCR above, it will only be needed for a handful of operations.

We never have to reach a point where IPv4 will be completely eliminated.  IPv4-only hosts will still be able to connect to one another so long as the global IPv4 routing table is available.  It may be reduced in size as IPv6 gains greater adoption, but it will never truly go away.  Instead, it will be like IBM’s SNA protocol.  Relevant to a few isolated hosts at best.  The world will move on and IPv6 will be the first choice for connectivity.


Tom’s Take

I must admit that this idea was fostered from a conversation with Ed Horley (@EHorley).  The evangelism that he’s doing with both the CAv6TF and the RMv6TF is unparalleled.  They are doing their best to get the word out about IPv6 adoption.  I think it’s important for people in tech to know that IPv6 isn’t displacing IPv4.  It’s extending network functionality. It’s granting a new lease on life for systems desperately in need of address space.  And it allows IPv4-only systems to survive a little while longer.  You don’t have to watch the same old VHS tapes every day.  But you don’t have to leave the IPvcr4 hooked up all the time either.