The Atomic Weight of Policy


The OpenDaylight project put out a new element this week with their Helium release.  The second release is usually the most important, as it shows that you have a real project on your hands and not just a bunch of people coding in the back room to no avail.  Not that something like that was going to happen to ODL.  The group of people involved in the project have the force of will to change the networking world.

Helium is already having an effect on the market.  Brocade announced their Vyatta Controller last week, which is based on Helium code.  Here’s a handy video as well.  The other thing that Helium has brought forth is the ongoing debate about network policy.  And I think that little gem is going to have more weight in the long run than anything else.

The Best Policy

Helium contains group-based policies for making groups of network objects talk to each other.  It’s a crucial step to bring ODL from an engineering hobby project to a full-fledged product that can be installed by someone that isn’t a code wizard.  That’s because most of the rest of the world, including IT people, don’t speak in specific terms with devices.  They have an idea of what needs to be done and they rely on the devices to implement that idea.

Think about a firewall.  When is the last time you wrote a firewall rule by hand? Unless you are a CLI masochist, you use the GUI to craft a policy that says something like “prevent DNS queries from any device that isn’t a DNS server (referenced by this DNS Server object list)”.  We create those kinds of policies because we can’t account for every new server appearing on the network that wants to make a DNS query.  We block the ones that don’t need to be doing it, and we modify the DNS Server object list to add new servers when needed.
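The object-list idea above can be sketched in a few lines. This is a toy model of an object-based policy check, not any firewall vendor's actual syntax; the set name and rule wording are invented for illustration:

```python
# A hypothetical "DNS Server object list" and the policy that references it.
dns_servers = {"10.0.0.53", "10.0.1.53"}

def allow_dns_query(src_ip):
    """Only hosts in the object list may originate DNS queries."""
    return src_ip in dns_servers

# A new DNS server appears: update the object list once, instead of
# rewriting an access list on every device in the path.
dns_servers.add("10.0.2.53")

print(allow_dns_query("10.0.2.53"))  # True: newly listed server may query
print(allow_dns_query("10.0.9.9"))   # False: unknown host is still blocked
```

The point of the policy model is that the rule never changes; only the object list does.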

Yet, in networking we’re still playing with access lists and VLAN databases.  We must manually propagate this information throughout the network when updates occur.  Because no one relies on VTP to add and prune information in the network.  The tools we’ve been using to do this work are archaic and failure-prone at best.

Staying Neutral

The inclusion of policy components of Helium will go a long way to paving the road for more of the same in future releases.  Of course, there is already talk about Cisco’s OpFlex and how it didn’t make the cut for Helium despite being one of the most dense pieces of code proposed for ODL.  It’s good that Cisco and ODL both realized that putting out code that wasn’t ready was only going to hurt in the long run.  When you’ve had seven or eight major releases, you can lay an egg with a poorly implemented feature and it won’t be the end of the world.  But if you put out a stinker in the second release, you may not make it to the third.

But this isn’t about OpFlex.  Or Congress. Or any other policy language that might be proposed for ODL in the future.  It’s about making sure that ODL is a policy-driven controller infrastructure.  Markus Nispel of Extreme talked about SDN at Wireless Field Day 7.  In that presentation, he said that he thinks the industry will standardize on ODL as the northbound interface in the network.  For those not familiar, the northbound interface is the one that is user-facing.  We interact with the northbound controller interface while the southbound controller interface programs the devices.

ODL is making sure the southbound interface is OpenFlow.  What they need to do now is ensure the northbound interface can speak policy to the users configuring it.  We’ve all heard the rhetoric of “network engineers need to learn coding” or “non-programmers will be out of a job soon”.  But the harsher reality is that while network programmers are going to be very busy people on the backend, the day-to-day operations of the network will be handled by different teams.  Those teams don’t speak IOS, Junos, or OpenFlow.  They think in policy-based thoughts.

Ops teams don’t want to know how something is going to work when implemented.  They don’t want to spend hours troubleshooting why a VLAN didn’t populate only to find they typoed the number.  They want to plug information into a policy and let the controller do the rest.  That’s what Helium has started and what ODL represents.  An interface into the network for mortal Ops teams.  A way to make that work for everyone, whether it be an OpFlex interface into Cisco APIC programming ACI or a Congress interface to an NSX layer.  If you are the standard controller, you need to talk to everyone no matter their language.

Tom’s Take

ODL is going to be the controller in SDN.  There is too much behind it to let it fail.  Vendors are going to adopt it as their base standard for SDN.  They may add bells and whistles but it will still be ODL underneath.  That means that ODL needs to set the standard for network interaction.  And that means policy.  Network engineers complain about how fragile and difficult networking is, yet in the very same breath say they don’t want the CLI to go away.  What they need to say is that policy gives everyone the flexibility to create robust, fault-tolerant configurations with a minimum of effort while still allowing for other interface options, like CLI and API.  If you are the standard, you can’t play favorites.  Policy is going to be the future of networking interfaces.  If you don’t believe that, you’ll find yourself quickly crushed under the weight of reality.

SDN Use Case: Content Filtering

K-12 schools face unique challenges with their IT infrastructure.  Their user base needs access to a large amount of information while at the same time facing restrictions.  While that sounds like many corporate network policies, the restrictions in the education environment are legal in nature.  Schools must find new ways to provide the assurance of restricting content without destroying their network in the process.  Which led me to ask: Can SDN Help?

Online Protection

The government E-Rate program gives schools money each year under Priority 1 funding for Internet access.  Indeed, the whole point of the E-Rate program is to get schools connected to the Internet.  But we all know the Internet comes with a bevy of distractions. Many of those distractions are graphic in nature and must be eliminated in a school.  Because it’s the law.

The Children’s Internet Protection Act (CIPA) mandates that schools and libraries receiving E-Rate funding for high speed broadband Internet connections must filter those connections to remove questionable content.  Otherwise they risk losing funding for all E-Rate services.  That makes content filters very popular devices in schools, even if they aren’t funded by E-Rate (which they aren’t).

Content filters also cause network design issues.  In the old days, we had to put the content filter servers on a hub along with the outbound Internet router in order to ensure they could see all the traffic and block the bad bits.  That became increasingly difficult as network switch speeds increased.  Forcing hundreds of megabits through a 10Mbit hub was counterproductive.  Moving to switchport mirroring did alleviate the speed issues, but still caused network design problems.  Now, content filters can run on firewalls and bastion host devices or are enabled via proxy settings in the cloud.  But we all know that running too many services on a firewall causes performance issues.  Or leads to buying a larger firewall than needed.

Another issue that has crept up as of late is the use of Virtual Private Networking (VPN) as a way to defeat the content filter.  Setting up an SSL VPN to an outside, non-filtered device is pretty easy for a knowledgeable person.  And if that fails, there are plenty of services out there dedicated to defeating content filtering.  While the aim of these services is noble, such as bypassing the Great Firewall of China or the mandated Internet filtering in the UK, they can also be used to bypass the CIPA-mandated filtering in schools.  It’s a high-tech game of cat-and-mouse.  Block access to one VPN and three more pop up to replace it.

Software Defined Protection

So how can SDN help?  Service chaining allows traffic to be directed to a given device or virtual appliance before being passed on through the network.  This great presentation from Networking Field Day 7 presenter Tail-f Networks shows how service chaining can force traffic through security devices like IDS/IPS and through content filters as well.  There is no need to add hubs or mirrored switch ports in your network.  There is also no need to configure traffic to transit the same outbound router or firewall, thereby creating a single point of failure.  Thanks to the magic of SDN, the packets go to the filter automatically.  That’s because they don’t really have a choice.
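As a rough sketch of the decision a controller makes per flow (the device names and function are invented for illustration, not Tail-f's actual API), the filter gets spliced into the path only for the subnets that need it:

```python
# Subnets whose traffic must transit the content filter, e.g. K-12 student
# networks. The prefixes here are hypothetical.
FILTERED_PREFIXES = ("10.10.",)

def service_chain(src_ip):
    """Compute a flow's path, inserting the filter appliance when required."""
    path = ["edge-switch", "core-switch", "internet-router"]
    if src_ip.startswith(FILTERED_PREFIXES):
        # Splice the content filter in before the exit router.
        path.insert(2, "content-filter")
    return path

print(service_chain("10.10.4.20"))  # student host: filter is in the path
print(service_chain("172.16.8.5")) # unfiltered host: straight out
```

The packets "don't have a choice" precisely because the path itself is computed this way before any forwarding happens.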

It also works well for providers wanting to offer filtering as a service to schools.  This allows a provider to configure the edge network to force traffic to a large central content filter cluster and ensure delivery.  It also allows the service provider network to operate without impact to non-filtered customers.  That’s very useful even in ISPs dedicated to education institutions, as the filter provisions for K-12 schools don’t apply to higher education facilities, like colleges and universities.  Service chaining would allow the college to stay free and clear while the high schools are cleansed of inappropriate content.

The VPN issue is a thorny one for sure.  How do you classify traffic that is trying to hide from you?  Even services like Netflix are having trouble blocking VPN usage and they stand to lose millions if they can’t.  How can SDN help in this situation? We could build policies to drop traffic headed for known VPN endpoints.  That should take care of the services that make it easy to configure and serve as a proxy point.  But what about those tech-savvy kids that set up SSL VPNs back home?

Luckily, SDN can help there as well.  Many unified threat management appliances offer the ability to intercept SSL conversations.  This is an outgrowth of sites like Facebook defaulting to SSL to increase security.  SSL intercept essentially acts as a man-in-the-middle attack.  The firewall decrypts the SSL conversation, scans the packets, and re-encrypts it using a different certificate.  When the packets come back in, the process is reversed.  This SSL intercept capability would allow those SSL VPN packets to be dropped when detected.  The SDN component ensures that HTTPS traffic is always redirected to a device that can do SSL intercept, rather than taking a path through the network that might lead to a different exit point.
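A minimal sketch of that redirect, assuming an OpenFlow-style match/action rule format; the field names and port number are illustrative, not any controller's real schema:

```python
# Steer every outbound TCP/443 flow toward the port where the SSL-intercept
# appliance sits, so HTTPS can never exit by an uninspected path.
def https_intercept_rule(intercept_port):
    return {
        "priority": 100,
        "match": {"eth_type": 0x0800,  # IPv4
                  "ip_proto": 6,       # TCP
                  "tcp_dst": 443},     # HTTPS
        "actions": ["output:%d" % intercept_port],
    }

rule = https_intercept_rule(7)
print(rule["actions"])  # ['output:7']
```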

Tom’s Take

Content filtering isn’t fun.  I’ve always said that I don’t envy the jobs of people that have to wade through the unsavory parts of the Internet to categorize bits as appropriate or not.  It’s also a pain for network engineers that need to keep redesigning the network and introducing points of failure to meet federal guidelines for decency.  SDN holds the promise of making that easier.  In the above Tail-f example, the slide deck shows a UI that allows simple blocking of common protocols like Skype.  This could be extended to schools where student computers and wireless networks are identified and bad programs are disallowed while web traffic is pushed to a filter and scrubbed before heading out to the Wild Wild Web.  SDN can’t solve every problem we might have, but if it can make the mundane and time-consuming problems easier, it might just give people the breathing room they need to work on the bigger issues.

Why is Lync The Killer SDN Application?


The key to showing the promise of SDN is to find a real-world application to showcase capabilities.  I recently wrote about using SDN to slice education networks.  But this is just one idea.  When it comes to real promise, you have to shelve the approach and trot out a name.  People have to know that SDN will help them fix something on their network or optimize a troublesome program.  And it appears that application is Microsoft Lync.

Missing Lync

Microsoft Lync (née Microsoft Office Communicator) is a software application designed to facilitate communications.  It includes voice calling capability, instant messaging, and collaboration tools.  The voice part is particularly appealing to small businesses.  With a Microsoft Office 365 for Business subscription, you gain access to Lync.  That means introducing a voice soft client to your users.  And if it’s available, people are going to use it.

As a former voice engineer, I can tell you that soft clients are a bit of a pain to configure.  They have their own way of doing things.  Especially when Quality of Service (QoS) is involved.  In the past, tagging soft client voice packets with Cisco Jabber required setting cluster-wide parameters for all clients.  It was all-or-nothing.  There were also plans to use things like Cisco MediaNet to tag Jabber packets, but this appears to be an old method.  It was much easier to use physical phones and set their QoS value and leave the soft phones relegated to curiosities.

Lync doesn’t use a physical phone.  It’s all software based.  And as usage has grown, the need to categorize all that traffic for optimal network transmission has become important.  But configuring QoS for Lync is problematic at best.  Microsoft guidelines say to configure the Lync servers with QoS policies.  Some enterprising users have found ways to configure clients with Group Policy settings based on port numbers.  But it’s all still messy.

A Lync To The Future

That’s where SDN comes into play.  Dynamic QoS policies can be pushed into switches on the fly to recognize Lync traffic coming from hosts and adjust the network to suit high traffic volumes.  Video calls can be separated from audio calls and given different handling based on a variety of dynamically detected settings.  We can even guarantee end-to-end QoS and see that guarantee through the visibility that protocols like OpenFlow enable in a software defined network.
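What one of those dynamically pushed per-call rules might look like, sketched with invented field names; DSCP 46 (EF) for audio and 34 (AF41) for video are common conventions used here as assumptions, not values mandated by Microsoft or any vendor:

```python
# DSCP values the controller would stamp onto detected Lync media streams.
DSCP = {"audio": 46,   # EF: expedited forwarding for voice
        "video": 34}   # AF41: assured forwarding for video

def lync_qos_flow(src_ip, dst_ip, udp_dst, media):
    """Build a match/action rule marking one detected media stream."""
    return {
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip,
                  "udp_dst": udp_dst},
        "actions": ["set_ip_dscp:%d" % DSCP[media], "output:normal"],
    }

flow = lync_qos_flow("10.1.1.10", "10.1.2.20", 50020, "audio")
print(flow["actions"][0])  # set_ip_dscp:46
```

The rule lives only as long as the call does; when the stream ends, the controller withdraws it, which is exactly what static Group Policy port hacks can't do.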

SDN QoS is critical to the performance of soft clients.  Separating the user traffic from the critical communication traffic requires higher-order thinking and not group policy hacking.  Ensuring delivery end-to-end is only possible with SDN because of overall visibility.  Cisco has tried that with MediaNet and Cisco Prime, but it’s totally opt-in.  If there’s a device inline that Prime doesn’t know about, it will be a black hole.  SDN gives visibility into the entire network.

The Weakest Lync

That’s not to say that Lync doesn’t have its issues.  Cisco Jabber was an application built by a company with a strong networking background.  It reports information to the underlying infrastructure that allows QoS policies to work correctly.  The QoS marking method isn’t perfect, but at least it’s available.

Lync packets don’t respect the network.  Lync always assumes there will be adequate bandwidth.  Why else would it not allow for QoS tagging?  It’s also apparent when you realize that some vendors are marking packets with non-standard CoS/DSCP markings.  Lync will happily assume override priority on the network.  Why doesn’t Lync listen to the traffic conditions around it?  Why does it exist in a vacuum?

Lync is an application written by application people.  It’s agnostic of networks.  It doesn’t know if it’s running on a high-speed LAN or across a slow WAN connection.  It can be ignorant of the network because that part just gets figured out.  It’s a classic example of a top-down program.  That’s why SDN holds such promise for Lync.  Because the app itself is unaware of the networks, SDN allows it to keep chugging along in bliss while the controllers and forwarding tables do all the heavy lifting.  And that’s why the tie between Lync and SDN is so strong.  Because SDN makes Lync work better without the need to actually do anything about Lync, or your server infrastructure in general.

Tom’s Take

Lync is the poster child for bad applications that can be fixed with SDN.  And when I say poster child, I mean it.  Extreme Networks, Aruba Networks, and Meru are all talking about using SDN in concert with Lync.  Some are using OpenFlow, others are using proprietary methods.  The end result is making a smarter network to handle an application living in a silo.  Cisco Jabber is easy to program for QoS because it was made by networking folks.  Lync is a pain because it lives in the same world as Office and SQL Server.  It’s only when networks become contentious that we have to find novel ways of solving problems.  Lync is the use case for SDN for small and medium enterprises focused primarily on wireless connectivity.  Because making Lync behave in that environment is indistinguishable from magic, at least without SDN.

If you want to see some interesting conversations about Lync and SDN, especially with OpenFlow, tune into SDN Connect Live on September 18th.  Meru Networks and Tech Field Day will have a roundtable discussion about Lync, featuring Lync experts and SDN practitioners.

An Educational SDN Use Case

During the VMUnderground Networking Panel, we had a great discussion about software defined networking (SDN) among other topics. Seems that SDN is a big unknown for many out there. One of the reasons for this is the lack of specific applications of the technology. OSPF and SQL are things that solve problems. Can the same be said of SDN? One specific question regarded how to use SDN in small-to-medium enterprise shops. I fired off an answer from my own experience:

Since then, I’ve had a few people using my example with regards to a great use case for SDN. I decided that I needed to develop it a bit more now that I’ve had time to think about it.

Schools are a great example of the kinds of “do more with less” organizations that are becoming more common. They have enterprise-class networks and needs and live off budgets that wouldn’t buy janitorial supplies. In fact, if it weren’t for E-Rate, most schools would have technology from the Stone Age. But all this new tech doesn’t help if you can’t find a way for it to be used to the fullest for the purposes of educating students.

In my example, I talked about the shift from written standardized testing to online assessments. Oklahoma and Indiana are leading the way in getting rid of Scantrons and #2 pencils in favor of keyboards and monitors. The process works well for the most part with proper planning. My old job saw lots of scrambling to prep laptops, tablets, and lab machines for the rigors of running the test. But no amount of pre-config could prepare for the day when it was time to go live. On those days, the network was squarely in the sights of the administration.

I’ve seen emails go around banning non-testing students from the computers. I’ve seen hard-coded DNS entries on testing machines while the rest of the school had DNS taken offline to keep them from surfing the web. Redundant circuits. QoS policies that would make voice engineers cry. All in the hope of keeping the online test bandwidth free to get things completed in the testing window. All the while, I was thinking to myself, “There has got to be an easier way to do this…”

Redefining with Software

Enter SDN. The original use case for SDN at Stanford was network slicing. The Next-Gen Network Team wanted to use the spare capacity of the network for testing without crashing the whole system. Being able to reconfigure the network on the fly is a huge leap forward. Pushing policy into devices without CLI cuts down on the resume-generating events (RGE) in production equipment. So how can we apply these network slicing principles to my example?

On the day of the test, have the configuration system push down a new policy that gives the testing machines a guaranteed amount of bandwidth. This reservation will ensure each machine is able to get what it needs without being starved out. With SDN, we can set this policy on a per-IP basis to ensure it is enforced. This slice will exist separate from the production network to ensure that no one starting a huge FTP transfer or video upload will disrupt testing. By leaving the remaining bandwidth intact for the rest of the school’s production network, administrators can ensure that the rest of the student body isn’t impacted during testing. With the move toward flipped classrooms and online curriculum augmentation, having available bandwidth is crucial.
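A sketch of the testing-day push, with an invented policy format and assumed numbers (the lab subnet and the 2 Mbps per-machine reservation are illustrative):

```python
# Hypothetical testing-lab addresses: thirty machines in one subnet.
TEST_MACHINES = ["10.20.30.%d" % host for host in range(101, 131)]
GUARANTEE_KBPS = 2000  # assumed per-machine bandwidth reservation

def testing_day_policies():
    """One per-IP reservation per testing machine, pushed on test day
    and withdrawn as a batch when the testing window closes."""
    return [
        {"match": {"ipv4_src": ip},
         "guarantee_kbps": GUARANTEE_KBPS}
        for ip in TEST_MACHINES
    ]

policies = testing_day_policies()
print(len(policies))  # 30: one reservation per testing machine
```

Because the controller generated every rule from one list, removing the slice afterward is the same one-line operation in reverse, instead of touching every switch by hand.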

Could this be done via non-SDN means? Sure. Granted, you’ll have to plan the QoS policy ahead of time and find a way to classify your end-user workstations properly. You’ll also have to tune things to make sure no one is dominating the test machine pool. And you have to get it right on every switch you program. And remove it when you’re done. Unless you missed a student or a window, in which case you’ll need to reprogram everything again. SDN certainly makes this process much easier and less disruptive.

Tom’s Take

SDN isn’t a product. You can’t order a part number for SDN-001 and get a box labeled SDN. Instead, it’s a process. You apply SDN to an existing environment and extend the capabilities through new processes. Those processes need use cases. Use cases drive business cases. Business cases provide buy-in from the stakeholders. That’s why discussing cases like the one above is so important. When you can find a use for SDN, you can get people to accept it. And that’s half the battle.

SLAAC May Save Your Life


A chance dinner conversation at Wireless Field Day 7 with George Stefanick (@WirelesssGuru) and Stewart Goumans (@WirelessStew) made me think about the implications of IPv6 in healthcare.  IPv6 adoption hasn’t been very widespread, thanks in part to the large number of embedded devices that have basic connectivity.  Basic in this case means “connected with an IPv4 address”.  But that address can lead to some complications if you aren’t careful.

In a hospital environment, the units that handle medicine dosing are connected to the network.  This allows the staff to program them to properly dispense medications to patients.  Given an IP address in a room, staff can ensure that a patient is getting just the right amount of painkillers and not an overdose.  Ensuring a device gets the same IP each time is critical to making this process work.  According to George, he has recommended that the staff stop using DHCP to automatically assign addresses and instead move to static IP configuration to ensure there isn’t a situation where a patient inadvertently receives a fatal megadose of medication, such as when an adult med unit is accidentally used in a pediatric application.

This static policy does lead to network complications.  Units removed from their proper location are rendered unusable because of the wrong IP.  Worse yet, since those units don’t check in with the central system any more, they could conceivably be incorrectly configured.  At best this will generate a support call to the IT staff.  At worst…well, think lawsuit.  Not to mention what happens if there is a major change to gateway information.  That would necessitate massive manual reconfiguration and downtime until those units can be fixed.

Cut Me Some SLAAC

This is where IPv6 comes into play, especially with Stateless Address Auto Configuration (SLAAC).  By using an automatically configured address structure that never changes, this equipment will never go offline.  It will always be checked in on the network.  There will be little chance of the unit dispensing the wrong amount of medication.  The medical unit will have history available via the same IPv6 address.

There are challenges to be sure.  IPv6 support isn’t cheap or easy.  In the medical industry, innovation happens at a snail’s pace.  These devices are just now starting to have mobile connectivity for wireless use.  Asking the manufacturers to add IPv6 into their networking stacks is going to take years of development at best.

Having the equipment attached all the time also brings up issues with moving the unit to the wrong area and potentially creating a fatal situation.  Thankfully, the router advertisements can help there.  If the RA for a given subnet locks the unit into a given prefix, controls can be enacted on the central system to ensure that devices in that prefix range will never be allowed to dispense medication above or below a certain amount.  While this is more of a configuration on the medical unit side, IPv6 provides the predictability needed to ensure those devices can be found and cataloged.  Since a SLAAC addressed device using EUI-64 will always get the same address, you never have to guess which device got a specific address.  You will always know from the last 64 bits which device you are speaking to, no matter the prefix.
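The EUI-64 derivation itself is simple enough to show: flip the universal/local bit in the MAC's first octet and insert ff:fe in the middle. The MAC address below is just an example:

```python
def eui64_interface_id(mac):
    """Derive the modified EUI-64 interface identifier from a 48-bit MAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    # Insert ff:fe between the OUI and the device-specific half.
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Render as four 16-bit hextets, leading zeros suppressed.
    return ":".join("%x" % (eui[i] << 8 | eui[i + 1]) for i in range(0, 8, 2))

print(eui64_interface_id("00:1b:63:84:45:e6"))  # 21b:63ff:fe84:45e6
```

Whatever prefix the router advertises, those last 64 bits are a pure function of the burned-in MAC, which is why the device is always identifiable.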

Tom’s Take

Healthcare is a very static industry when it comes to innovation.  Medical companies are trying to keep pace with technology advances while at the same time ensuring that devices are safe and do not threaten the patients they are supposed to protect.  IPv6 can give us an extra measure of safety by ensuring devices receive the same address every time.  IPv6 also gives the consistency needed to compile proper reporting about the operation of a device and even the capability of finding that device when it is moved to an improper location.  Thanks to SLAAC and IPv6, one day these networking technologies might just save your life.

Moscone Madness


The Moscone Center in San Francisco is a popular place for technical events.  Apple’s World Wide Developer Conference (WWDC) is an annual user of the space.  Cisco Live and VMworld also come back every few years to keep the location lively.  This year, both conferences utilized Moscone to showcase tech advances and foster community discussion.  Having attended both this year in San Francisco, I think I can finally state the following with certainty.

It’s time for tech conferences to stop using the Moscone Center.

Let’s face it.  If your conference has more than 10,000 attendees, you have outgrown Moscone.  WWDC works in Moscone because they cap the number of attendees at 5,000.  VMworld 2014 has 22,000 attendees.  Cisco Live 2014 had well over 20,000 as well.  Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.

The main keynote hall in Moscone North is too small to hold the large number of audience members.  In an age where every keynote address is streamed live, that shouldn’t be a problem.  Except that people still want to be involved and close to the event.  At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full.  Being stuffed into a crowded room with no seating or table space is frustrating.  But those are just the challenges of Moscone.  There are others as well.

I Left My Wallet In San Francisco

San Francisco isn’t cheap.  It is one of the most expensive places in the country to live.  By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels.  Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night.  Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.

Las Vegas is built for conferences.  It has adequate inexpensive hotel options.  It is designed to handle a large number of travelers arriving at once.  While spread out geographically, it is easy to navigate.  In fact, except for the lack of Uber, Las Vegas is easier to get around in than San Francisco.  I never have a problem finding a restaurant in Vegas to take a large party.  Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won’t find a seat for hours.

The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco.  Cisco and VMware both are in Silicon Valley.  Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando.  But ease-of-transport does not make it easy on your attendees.  Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.

Tom’s Take

The Moscone Center is like the Cotton Bowl in Dallas.  While both have a history of producing wonderful events, both have passed their prime.  They are ill-suited for modern events.  They are cramped and crowded.  They are in unfavorable areas.  It is quickly becoming more difficult to hold events for these reasons.  But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay.  Apple will always be here.  Every new iPhone, Mac, and iPad will be launched here.  But those 5,000 attendees are comfortable in one section of Moscone.  Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.

It’s time for Cisco, VMware, and other large organizations to move away from Moscone.  It’s time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can.  Instead, conferences should be located where it makes sense.  Las Vegas, San Diego, and Orlando are conference towns.  Let’s use them as they were meant to be used.  Let’s stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.

Do We Need To Redefine Open?


There’s a new term floating around that seems to be confusing people left and right.  It’s something that’s been used to describe a methodology as well as a marketing pitch.  People are using it and don’t even really know what it means.  And this isn’t the first time that’s happened.  Let’s look at the word “open” and why it has become so confusing.

Talking Beer

For those at home that are familiar with Linux, “open” wasn’t the first term to come to mind.  “Free” is another word that has been used in the past with a multitude of loaded meanings.  The original idea around “free” in relation to the Open Source movement is that the software is freely available.  There are no restrictions on use and the source is always available.  The source code for the Linux kernel can be searched and viewed at any time.

Free describes the fact that the Linux kernel is available for no cost.  That’s great for people that want to try it out.  It’s not so great for companies that want to try and build a business around it, yet Red Hat has managed to do just that.  How can they sell something that doesn’t cost anything?  It’s because they keep the notion of free sharing of code alive while charging people for support and special packages that interface with popular non-free software.

The dichotomy between software that is free of restrictions and software that is free of cost is so confusing that the movement created a phrase to describe it:

Free as in freedom, not free as in beer.

When you talk about freedom, you are unrestricted.  You can use the software as the basis for anything.  You can rewrite it to your heart’s content.  That’s your right for free software. When you talk about free beer, you set the expectation that whatever you create will be available at no charge.  Many popular Linux distributions are available at no cost.  That’s like getting beer for nothing.

Open, But Not Open Open

The word “open” is starting to take on aspects of the “free” argument.  Originally, the meaning of open came from the Open Source community.  Open Source means that you can see everything about the project.  You can modify anything.  You can submit code and improve something.  Look at the OpenDaylight project as an example.  You can sign up, download the source for a given module, and start creating working code.  That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect.  They are creating the network of the future and allowing the community to do the same.

But “open” is being redefined by vendors.  Open for some means “you can work with our software via an API, but you can’t see how everything works”.  This is much like the binary-only NVIDIA driver.  The proprietary code is pre-compiled and available to download for free, but you can’t modify the source at all.  While it works with open source software, it’s not open.

A conversation I had during Wireless Field Day 7 drove home the idea of this new “open” in relation to software defined networking.  Vendors tout open systems to their customers. They standardize on northbound interfaces that talk to orchestration platforms and have API support for other systems to call them.  But the southbound interface is proprietary.  That means that only their controller can talk to the network hardware attached to it.  Many of these systems have “open” in the name somewhere, as if to project the idea that they work with any component makeup.

This new “open” definition of having proprietary components with an API interface feels very disingenuous.  It also makes for some very awkward conversations:

$VendorA: Our system is open!

ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?

$VendorA: What exactly do you mean by “full”?

Tom’s Take

Using “open” to market these systems is wrong.  Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong.  But we don’t have a word to describe this new idea of “open”.  It’s not exactly closed.  Perhaps we can call it something else.  Maybe “ajar”.  That might make the marketing people a bit upset.  “Try our new AjarNetworking controller.  As open as we wanted to make it without closing everything.”

“Open” will probably be dominated by marketing in the next couple of years.  Vendors will try to tell you how well they interoperate with everyone.  And I will always remember how open protocols like SIP are and how everyone uses that openness against them.  If we can’t keep the definition of “open” clean, we need to find a new term.