Who Wants A Free Puppy?

Years ago, my wife was out on a shopping trip. She called me excitedly to tell me about a blonde shih-tzu puppy she had found and just had to have. As she talked, I thought about all the things that this puppy would need to thrive. Regular walks, food, and love are at the top of the list. Rather than flat out saying "no," I told her to use her best judgment, which is also how I came to be a dog owner. Today, I've learned there is a lot more to puppies (and dogs) than walks and feeding. There is puppy-proofing your house. And cleaning up after accidents. And teaching the kids that puppies should be treated gently.

An article from Martin Glassborow last week made me start thinking about our puppy again. Scott McNealy is famous for telling the community back in 2005 that "open source is free like a puppy." While this was a dig at the community regarding the investment that open source takes, I think Scott was right on the mark. I also think Martin's recent article illustrates some of the issues with community projects that management and stakeholders don't see.

Open software today takes care and feeding. Only instead of a single OS on a server in the back of the data center, it's all about new networking paradigms (OpenFlow) or cloud platform plays (OpenStack). This means there are many more moving parts. Engineers and programmers get it. But go to the stakeholders and try to explain what that means. The decision makers love the price of open software. They are ambivalent about the benefits to the community. However, the cost of open projects is usually much higher than the price. People have to invest to see benefits.

TNSTAAFL

At the recent SolidFire Summit, two cloud providers were talking about their software. One was hooked in to the OpenStack community. He talked about having an entire team dedicated to pulling nightly builds and validating them. They hacked their own improvements and pushed them back upstream for the good of the community. He seemed to love what he was talking about. The provider next to him was just a little bit larger. When asked what his platform was, he answered "CloudStack." When I asked why, he didn't hesitate. "They have support options. I can have them fix all my issues."

Open projects appeal to the hobbyist in all of us. It's exciting to build something from the ground up. It's a labor of love in many cases. Labors of love don't work well for some enterprises, though. And that's the part that most decision makers need to know. Support for this awesome new thing may not always be immediate or complete. To bring this back to the puppy metaphor, you have to have patience as your puppy grows up and learns not to chew on slippers.

The reward for all this attention? A loving pet in the case of the puppy. In the case of open software, you have a workable framework all your own that is customized to your needs and very much a part of your DNA. It's supported by your staff and hopefully loved as much as, or more than, any other solution. Just like dog owners who look forward to walking the dog or playing catch at the dog park, your IT organization should look forward to the new and exciting challenges that can be solved with the investment of time.


Tom’s Take

Nothing is free. You either pay for it with money or with time. Free puppies require the latter, just as free software projects do. If the stakeholders in the company look at it as an investment of time and energy, then you have the right frame of mind from the outset. If everything isn't clear up front, you will find yourself needing to defend all the time you've spent on your no-cost project. Hopefully your stakeholders are dog people, so they understand that the payoff isn't in the price but in the experience.

The OpenFlow Longbow


The label of disruption seems to be thrown around quite a bit.  I’ve heard tablets being called the disruptive technology when it comes to PCs.  I’ve also heard people talking about software defined networking (SDN) as the disruptive technology to the way that we’ve been doing networking for the last decade.  Nowhere is that more true than with OpenFlow.  But what does this disruption mean?  Aren’t we still essentially forwarding packets the same way we have in the past?  To get a frame of reference, let’s look at one of my favorite disruptive technologies – the longbow.

Who Are Yew?

The Welsh yew longbow had been a staple of the English military as far back as 600 AD. The recurve shape of the bow provided a faster, longer arrow flight than the shorter bows used by foot soldiers and mounted cavalry. The wood was especially important, as the yew tree was only really found in abundance in Wales. Longbows existed in one form or another in many armies in Europe, but the Welsh bow was the only one that was feared.

The advantage was never more apparent than during the Hundred Years' War between England and France. The longbow was deployed as mid-range artillery to harass advancing troops. There are questions about whether or not the arrows were able to pierce the plate armor used by knights at the time, but the results of the archery corps can't be denied, especially at the Battle of Agincourt. The English used longbows to slow the advance of French forces in armor, tiring them out as they crossed the field and holding them at bay until the heavier foot soldiers of the English army could be repositioned to take the advancing enemy apart. Agincourt was a win for the English, and five years later the war was over.

The longbow proved itself to be a very disruptive technology. Not because it killed soldiers better or faster than a mace or broadsword. It was disruptive because it changed the way generals composed their armies. Instead of relying on heavy assault troops in armor to punch a hole in the enemy lines, the longbow forced a soldier to become more mobile with less armor so he could cross the range of the bow much more quickly and close to a distance where the technology advantage was negated. Bowmen were at a distinct disadvantage at point-blank range. Armies grew more mobile all the way up to the point where a new technology disrupted the reign of the longbow: gunpowder. Once musketeers became more prevalent, they took over the role traditionally held by the longbow archer.

Disrupting the Flow

How does this history lesson apply to OpenFlow?  OpenFlow is poised to disrupt networking in the same way as the longbow forced people to take a new look at their armies.  OpenFlow takes something we know about and turns it on its head.  It gives us much more control over how a switch forwards packets.  It also makes us ask questions about how we build our networks.

Questions about big core switches, spine-and-leaf topologies, and the intelligence of edge devices all become very pertinent in an OpenFlow design.  Should I put a larger switch at the edge since it’s going to be doing a lot of heavy lifting?  Should I use a fabric in place of a three-tier design?  Will my controller allow me to use different interconnects to ensure high-speed traffic flows east and west in the data center?

OpenFlow is making its way into some vendors' offerings. NEC and HP have already committed to OpenFlow designs. Even companies that haven't really embraced OpenFlow have decided to offer it rather than dismiss it. Arista and Cisco are offering new switches that have support for OpenFlow, even if that support may not extend to more proprietary enhancements right now. Just like the longbow, OpenFlow is forcing the opposition to reconfigure the way they fight the battle. They may not like it. They may even say in private that they're just doing it to mollify a part of the customer base looking for specific points in a proposal. But they are still dedicating time and effort to OpenFlow all the same.


Tom’s Take

Disruption happens all the time. We don't use cell phones in bags hardwired into our vehicles any more. Our computers are no longer the size of a broom closet and don't run off of punch cards. Just like weapons in the ancient world, whoever comes up with a more effective way of winning battles enjoys a distinct advantage for a time. Eventually, something comes along that disrupts the disruption. OpenFlow is currently the king of the SDN battlefield. It holds that title by virtue of how many people are racing to interoperate with it. Eventually, it will be dethroned just as the longbow was. The key will be recognizing the next new thing first and using it to your advantage. And arming your archers with it.

Should Microsoft Buy Big Switch?


Network virtualization is getting more press than ever. The current trend seems to be pitting the traditional networking companies, like Cisco and Juniper, against upstarts from the server virtualization world, like VMware and the OpenStack community. To hear the press and analysts talk about it, you would think these companies represent all there is in the industry.

Whither Microsoft?

One company that seems to have been left out of the conversation is Microsoft. The stalwarts of Redmond have been turning heads with their rapid pace of innovation to reach parity with VMware's offerings. However, when the conversation turns to networking, Microsoft is usually left out in the cold. That's because their efforts at networking in the past have been…problematic. They are very service oriented and care little for the world outside their comfortable servers. That won't last forever. VMware will be able to easily shift the conversation away from feature parity with Hyper-V and concentrate on all the networking expertise it now has that its competitor is missing.

Microsoft can fix that problem with a small investment. If you can't innovate by building it, you need to buy it. Microsoft has the cash to buy several startups, even after sinking a load of it into Nokia. But which SDN-focused company makes the most sense for Microsoft? I spent a lot of time thinking about this very question and the answer became clear to me: Microsoft needs to buy Big Switch Networks.

A Window On The Future

Microsoft needs SDN expertise. They have no current networking experience outside of creating DHCP and DNS services on their platforms. I mean, did anyone ever use their Network Access Protection solution as a NAC option? Microsoft has traditionally created bare-bones network constructs to please their server customers. They think networking is a resource outside their domain, which coincidentally is just how their competitors used to look at it as well. At least until Martin Casado changed their minds.

Big Switch is a perfect fit for Microsoft. They have the chops to talk OpenFlow. Their recent shift away from overlays to software on bare metal would play well as a marketing point against VMware and their "overlays are the best way" message. They could also help Microsoft do more development on NV-GRE, the also-ran to VxLAN. Ivan Pepelnjak (@IOSHints) was pretty impressed with NV-GRE last December, but it's dropped off the radar in the wake of VMware embracing VxLAN in NSX. I think having a bit more development work from the minds at Big Switch would put it back into the minds of some smaller network virtualization companies looking to support something other than the de facto standard. I know that Big Switch has moved away from the overlay model, but if NV-GRE can easily be adapted to the work Big Switch was doing a few months ago, it would be a great additional offering to the idea of running everything in an SDN-enabled switch OS.

Microsoft will also benefit from the pile of SDN applications that Big Switch is rumored to have sitting around and festering for lack of attention. Applications like network taps sell Big Switch products now. With NSX introducing the idea of integrated load balancers and firewalls into the base product, Big Switch is going to be hard pressed to charge extra for them. Instead, they're going to have to go out on a limb, finish developing them past the alpha stage, and hope that they are enough to sell more product and recoup the development costs. With the deep pockets in Redmond, finishing those applications would be a drop in the bucket if it means that the new product can compete directly on an even field with VMware.

Building A Bigger Switch

Big Switch gains from this partnership as well. They get to take some pressure off their overworked development team. It can't be easy switching horses in mid-stream, especially when it involves changing your entire outlook on how SDN should be done. Adding a few dozen more people to the project would allow them to branch out and investigate how software could be integrated into their ideas. Big Switch has already done a great job developing Project Floodlight. Why not let some big brains chew on other ideas in the same vein for a while?

Big Switch could also use the stability of working for an established company. They have a pretty big target on their backs now that everyone is developing an SDN strategy. Writing an OS for bare metal switches is going to bring them into contention with Cumulus Networks. Why not let an OS vendor do some of the heavy lifting? It would also allow Microsoft's well-established partner program to offer incentives to partners that want to sell white-label switches with software from Big Switch, letting them get into networking much more cheaply than before. Think about the federal or educational discounts that Microsoft already gives to customers. Do you think those customers would be excited to see the same kind of consideration when it comes to networking hardware?

Tom’s Take

Little fish either get eaten by bigger ones or they have to be agile enough to avoid being snapped up. The smartest little fish in the ocean may be the remora. It survives by attaching itself to a bigger fish and providing a benefit for them both. The remora gets the protection of not being eaten while also not taking too much from the host. Microsoft would do well to set up some kind of similar arrangement with Big Switch. They could fund future development into NV-GRE compatible options, or they could just buy the company outright. Both parties get something out of the deal: Microsoft gets the SDN component they need. Big Switch gets a backer with so much industry clout that they can no longer be dismissed.

HP Networking and the Software Defined Store


HP has had a pretty good track record with SDN, even if it's not very well known. HP has embraced OpenFlow on a good number of its ProCurve switches. Given the age of these devices, there's a good chance you can find them lying around in labs or in retired network closets to test with. But where is that going to lead in the long run?

HP Networking was kind enough to come to Interop New York and participate in a Tech Field Day roundtable. It had been a while since I talked to their team. I wanted to see how they were handling the battle being waged between OpenFlow proponents like NEC and Brocade, Cisco with their hardware focus, and VMware with NSX. Jacob Rapp and Chris Young (@NetManChris) stepped up to the plate to talk about SDN and HP's vision.

They cover a lot of ground in here.  Probably the most important piece to me is the SDN app store.

The press picked up on this quickly. HP has an interesting idea here. I should know. I mentioned it in passing in an article I wrote a month ago. The more I think about the app store model, the more I realize that many vendors are going to go down this road. Just not in the way HP is thinking.

HP wants to curate content for enterprises. They want to ensure that software works with their controller so there aren't any hiccups in implementation. Given their apparent distaste for open source efforts, it's safe to say that their efforts will only benefit HP customers. That's not to say that those same programs won't work on other controllers. So long as they operate according to the guidelines laid down by the Open Networking Foundation, all should be good.

Show Me The Money

Where's the value then? That's in positioning the apps in the store. Yes, you're going to have some developers come to HP wanting to put simple apps in the store. Odds are better that you're going to see more recognizable vendors coming to the HP SDN store. People are more likely to buy software from a name they recognize, like TippingPoint or F5. That means that those companies are going to want to have a prime spot in the store. HP is going to make something from hosting those folks.

The real revenue doesn't come from an SMB buying a load balancer once. It comes from a company offering it as a service with a recurring fee. The vendor gets a revenue stream. HP would be wise to work out a recurring fee as well. It won't be the juicy 30% cut that Apple enjoys from their walled garden, but anything would be great for the bottom line. Vendors win from additional sales. Customers win from having curated apps that work every time and are easy to purchase, install, and configure. HP wins because everyone comes to them.

Fragmentation As A Service

Now that HP has jumped on the idea of an enterprise-focused SDN app store, I wonder which company will be the next to offer one? I also worry that having multiple app stores will end up being cumbersome in the long run. Small developers won't like submitting their app to four or five different vendor-affiliated stores. More likely they'll resort to releasing code on their own rather than jump through hoops. That will eventually lead to support fragmentation. Fragmentation helps no one.


Tom’s Take

HP Networking did a great job showcasing what they've been doing in SDN. It was also nice to hear about their announcements the day before they broke wide to the press. I think HP is going to do well with OpenFlow on their devices. Integrating OpenFlow visibility into their management tools is also going to do wonders for people worried about keeping up with all the confusing things that SDN can do to a traditional network. The app store is a very intriguing concept that bears watching. We can only hope that it ends up being a well-respected entry in the long process of easing customers into the greater SDN world.

Tech Field Day Disclaimer

HP was a presenter at the Tech Field Day Interop Roundtable. In addition, they provided the delegates with a 1TB USB3 hard disk drive. They did not ask for any consideration in the writing of this review nor were they promised any. The conclusions and analysis contained in this post are mine and mine alone.

SDN 101 at ONUG Academy


Software defined networking is king of the hill these days in the greater networking world.  Vendors are contemplating strategies.  Users are demanding functionality.  And engineers are trying to figure out what it all means.  What’s needed is a way for vendor-neutral parties to get together and talk about what SDN represents and how best to implement it.  Most of the talk so far has been at vendor-specific conferences like Cisco Live or at other conferences like Interop.  I think a third option has just presented itself.

Nick Lippis (@NickLippis) has put together a group of SDN-focused people to address concerns about implementation and usage.  The Open Networking User Group (ONUG) was assembled to allow large companies using SDN to have a semi-annual meeting to discuss strategy and results.  It allows Facebook to talk to JP Morgan about what they are doing to simplify networking through use of things like OpenFlow.

This year, ONUG is taking it a step further by putting on the ONUG Academy, a day-long look at SDN through the eyes of those that implement it.  They have assembled a group of amazing people, including the founder of Cumulus Networks and Tech Field Day’s own Brent Salisbury (@NetworkStatic).  There will be classes about optimizing networks for SDN as well as writing SDN applications for the most popular controllers on the market.  Nick shares more details about the ONUG academy here:

If you're interested in attending ONUG either for the academy or for the customer-focused meetings, you need to register today. As a special bonus, if you use the code TFD10 when you sign up, you can take 10% off the cost of registration. Use that extra cash to go out and buy a cannoli or two.

I’ll be at ONUG with Tech Field Day interviewing customers and attendees about their SDN strategies as well as where they think the state of the industry is headed.  If you’re there, stop by and say hello.  And be sure to bring me one of those cannolis.

SDN and NFV – The Ups and Downs


I was pondering the dichotomy between Software Defined Networking (SDN) and Network Function Virtualization (NFV) the other day. I've heard a lot of vendors and bloggers talking about how one inevitably leads to the other. I've also seen a lot of folks saying that the two couldn't be further apart on the scale of software networking. The more I thought about these topics, the more I realized they are two sides of the same coin. The problem, at least in my mind, is the perspective.

SDN – Planning The Paradigm

Software Defined Networking telegraphs everything about what it is trying to accomplish right there in the name.  Specifically, the “Definition” part of the phrase.  I’ve made jokes in the past about the lack of definition in SDN as vendors try to adapt their solutions to fit the buzzword mold.  What I finally came to realize is that the SDN folks are all about definition. SDN is the Top Down approach to planning.  SDN seeks to decompose the network into subsystems that can be replaced or reprogrammed to suit the needs of those things which utilize the network.

As an example, SDN breaks the idea of a switch down into things like "forwarding plane" and "control plane" and seeks to replace the control plane with alternative software, whether it be a controller-based architecture like OpenFlow or an overlay network similar to that of VMware/Nicira. We can replace the OS of a switch with a concept like OpenFlow easily. It's just a mechanism for determining which entries are populated in the Content Addressable Memory (CAM) tables of the forwarding plane. In top down design, it's easy to create a stub entry or "black box" to hold information that flows into it. We don't particularly care how the black box works from the top of the design, just that it does its job when called upon.
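To make that "black box" a little more concrete, here is a minimal sketch (plain Python, not any particular vendor's API) of the split OpenFlow creates: a controller decides which entries go into the forwarding table, and the forwarding plane does nothing but match packets against those entries and apply the actions. The field names and port numbers are invented for illustration.

```python
# Toy model of the control plane / forwarding plane split.
flow_table = []  # ordered by priority, highest first

def controller_install(match, actions, priority):
    """The "control plane": push an entry into the forwarding table."""
    flow_table.append({"match": match, "actions": actions, "priority": priority})
    flow_table.sort(key=lambda entry: entry["priority"], reverse=True)

def forward(packet):
    """The "forwarding plane": apply the first matching entry, or drop on a miss."""
    for entry in flow_table:
        if all(packet.get(field) == value for field, value in entry["match"].items()):
            return entry["actions"]
    return ["drop"]  # table miss

controller_install({"eth_dst": "00:00:00:00:00:02"}, ["output:2"], priority=10)
print(forward({"eth_src": "00:00:00:00:00:01", "eth_dst": "00:00:00:00:00:02"}))
# -> ['output:2']
```

The point of the sketch is that the forwarding logic never changes; only the entries the controller chooses to install do.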

Top Down designs tend to run into issues when those black boxes lack detail or are missing some critical functionality. What happens when OpenFlow isn't capable of processing flows fast enough to keep the CAM table of a campus switch populated with entries? Is the switch going to fall back to process switching the packets? That could be a big issue. Top Down designs are usually very academic and elegant. They also have a tendency to lack concrete examples and real world experience. When you think about it, that says a lot about the early days of SDN – lots of definition of terminology and technology, but a severe lack of actual packet forwarding.

NFV – Working From The Ground Up

Network Function Virtualization takes a very different approach to the idea of turning hardware networks into software networks. The driving principle behind NFV is replication of existing technology in a software state. This is classic Bottom Up design. Rather than spending a large amount of time planning and assembling the perfect system, Bottom Up designers tend to build as they go. They concentrate on making something work first, then making those things work together second.

NFV is great for hands-on folks because it gives concrete, real results almost immediately. Once you've converted a load balancer or a router to a purely software-based construct, you can see right away how it works and what the limitations might be. Does it consume too many resources on the hypervisor? Does it excel at forwarding small packets? Does switching a large packet locally cause a fault? These are problems that can be corrected in the individual system rapidly rather than waiting to modify the overall plan to account for difficulties in the virtualization process.

Bottom Up design does suffer from some issues as well. The focus in Bottom Up is on getting things done on a case-by-case basis. What do you do when you've converted all your hardware to software? Do your NFV systems need to talk to one another? That's usually where Bottom Up design starts breaking down. Without a grand plan at a higher level to ensure that systems can talk to each other, this design methodology falls back to a series of "hacks" to get them connected. Units developed in isolation aren't required to play nice with everyone else until they are forced to do so. That leads to increasingly complex and fragile interconnection systems that could fail spectacularly should the wrong thread be yanked with sufficient force.


Tom’s Take

Which method is better? Should we spend all our time planning the system and hope that our PowerPoint designs work the right way when someone codes them in a few months? Or should we say "damn the torpedoes" and start building things left and right, hoping that someone will figure out a way to tie all these individual pieces together at some point?

Surprisingly, the most successful design requires elements of both. People need to have at least a basic plan when setting out to change the networking world. Once the ideas are sketched out, you need a team of folks willing to burn the midnight oil and get the ideas implemented in real life to ensure that the plan works the right way. The guidance from the top is essential to making sure everything works together in the end.

Whether you are leading from the top or the bottom, remember that everything has to meet in the middle sooner or later.

Brocade’s Pragmatically Defined Network

Most of the readers of my blog would agree that there is a lot of discussion in the networking world today about software defined networking (SDN) and the various parts and pieces that make up that umbrella term. There's argument over what SDN really is, from programmability to orchestration to network function virtualization (NFV). Vendors are doing their part to take advantage of some, all, or in some cases none of the above to push a particular buzzword strategy to customers. I like to make sure that everything is as clear as possible before I start discussing the pros and cons. That's why I jumped at the chance to get a briefing from Brocade around their new software and hardware releases that were announced on April 30th.

I spoke with Kelly Harrell, Brocade’s new vice president and general manager of the Software Business Unit.  If that name sounds somewhat familiar, it might be because Mr. Harrell was formerly at Vyatta, the software router company that was acquired by Brocade last year.  We walked through a presentation and discussion of the direction that Brocade is taking their software defined networking portfolio.  According to Brocade, the key is to be pragmatic about the new network.  New technologies and methodologies need to be introduced while at the same time keeping in mind that those ideas must be implemented somehow.  I think that a large amount of the frustration with SDN today comes from a lot of vaporware presentations and pie-in-the-sky ideas that aren’t slated to come to fruition for months.  Instead, Brocade talked to me about real products and use cases that should be shipping very soon, if not already.

The key for Brocade is to balance SDN against network function virtualization, something I referred to a bit in my Network Field Day 5 post about Brocade. Back then, I called NFV "Networking Done (by) Software," which was my sad attempt to point out how NFV is just the opposite of what I see SDN becoming. During our discussion, Harrell pointed out that NFV and SDN aren't totally dissimilar after all. Both are designed to increase the agility with which a company can execute on strategy and create value for shareholders. SDN is primarily focused on programmability and orchestration. NFV is tied more toward lowering costs by implementing existing technology in a flexible way.

NFV seeks to take existing appliances that have been doing tasks, such as load balancers or routers, and free their workloads from being tied to a specific piece of hardware. In fact, there has been an explosion of these types of migrations from a variety of vendors. People are virtualizing entire business lines in an effort to remove the reliance on specialized hardware or reduce the ongoing support costs. Brocade is seeking to do this with two platforms right now. The first is the Vyatta vRouter, which is the extension of what came over in the Vyatta acquisition. It's a router and a firewall and even a virtual private networking (VPN) device that can run on just about anything. It is hypervisor agnostic and cloud platform agnostic as well. The idea is that Brocade can include a copy of the vRouter with application packages that can be downloaded from an enterprise cloud app store. Once downloaded and installed, the vRouter can be fired up and pull a predefined configuration from the scripts included in the box. By making it agnostic to the underlying platform, there's no worry about support down the road.
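The bootstrap idea — a router image bundled with an application package that configures itself on first boot — can be sketched generically. Nothing below reflects Vyatta's actual provisioning mechanism; the package path, file name, and apply_config() helper are all invented for illustration.

```python
# Hypothetical first-boot provisioning for a bundled virtual router.
import json
from pathlib import Path

PACKAGE_DIR = Path("/opt/app-package")          # invented package location

def apply_config(config: dict) -> None:
    """Stand-in for whatever configuration interface the router exposes."""
    for key, value in config.items():
        print(f"set {key} {value}")

def first_boot() -> None:
    bundled = PACKAGE_DIR / "router-config.json"  # config shipped "in the box"
    if bundled.exists():
        apply_config(json.loads(bundled.read_text()))

if __name__ == "__main__":
    first_boot()
```

The appeal of the model is that the application author decides what the network configuration should look like, and the router simply applies it wherever the package lands.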

The second NFV platform Brocade told me about is the virtual ADX application delivery switch. It's basically a software load balancer, but that's not really the key point of applying the NFV template to an existing hardware platform. Instead, the idea is that we're taking something that's been historically huge and hard to manage and moving it closer to the edge where it can be of better use. Rather than sticking a huge load balancer at the entry point to the data center to ensure that flows are separated, the vADX allows the load balancer to be deployed very close to the server or servers that need to have the information flow metered. Now, the agility of SDN/NFV allows these software devices to be moved and reconfigured quickly without needing to worry about how much reprogramming is going to be necessary to pull the primary load balancer out, or how many rules will have to change to reroute traffic to a vMotioned cluster. In fact, I'm sure that we're going to see a new definition of the "network edge" begin to emerge as more software-based NFV devices are deployed closer and closer to the devices that need them.

On the OpenFlow front, Brocade told me about their new push toward something they are calling "Hybrid Port OpenFlow." OpenFlow is a great disruptive SDN technology that is gaining traction today, largely because companies like Brocade and NEC have embraced it and started pushing it out to their customer base well ahead of other manufacturers. Right now, OpenFlow support really consists of two modes – ON and OFF. OFF is pretty easy to imagine. ON is a bit more complicated. While a switch can be OpenFlow enabled and still forward normal traffic, the practice has always been to either dedicate the switch to OpenFlow forwarding, in effect turning it into a lab switch, or to enable OpenFlow selectively for a group of ports out of the whole switch, kind of like creating a lab VLAN for testing on a production box. Brocade's Hybrid Port OpenFlow model allows you to enable OpenFlow on a port and still allow it to do regular traffic forwarding sans OpenFlow. That may be the best model for adopters going forward due to one overriding factor – cost. When you take a switch or a group of ports on a switch and dedicate them to OpenFlow, you are costing the enterprise something. Every port on the switch costs a certain amount of money. Every minute an engineer spends working on a crazy lab project incurs a cost. By enabling the network engineers to turn on OpenFlow at will without disrupting the existing traffic flow, Brocade can reduce the opportunity cost of enabling OpenFlow to almost zero. If OpenFlow just becomes something that works as soon as you enable it, like IPv6 in Windows 7, you don't have to spend as much time planning for your end node configuration. You just build the core and let the end nodes figure out they have new capabilities. I figure that large Brocade networks will see their OpenFlow adoption numbers skyrocket simply because Hybrid Port mode turns the configuration into Easy Mode.
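Hybrid Port OpenFlow is a hardware feature, and Brocade's implementation details weren't covered, but the general idea of coexistence can be sketched with a generic OpenFlow 1.3 controller. A controller (the open source Ryu framework here, purely as an illustrative choice) installs a lowest-priority table-miss rule whose action is the spec's NORMAL port, so anything not claimed by an explicit OpenFlow rule falls back to the switch's traditional forwarding pipeline.

```python
# Minimal Ryu app: let OpenFlow rules coexist with normal L2/L3 forwarding.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class HybridFallback(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_fallback(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Priority-0 table-miss entry: any packet that no OpenFlow rule
        # claims is handed back to the switch's traditional pipeline.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Whether a given switch honors the NORMAL action is implementation-dependent, which is exactly why a vendor feature like Hybrid Port mode matters.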

The last interesting software piece that Brocade showed me is a prime example of the kinds of things that I expect SDN to deliver to us in the future. Brocade has created an application called the Application Resource Broker (ARB). It sits above the fray of the lower network layers and monitors indicators of a particular application's health, such as latency and load. When one of those indicators hits a specific threshold, ARB kicks in to request more resources from vCenter to balance things out. If the demand on the application continues to rise beyond the available resources, ARB can dynamically move the application to a public cloud instance with a much deeper pool of resources, a process known as cloudbursting. All of this can happen automatically without the intervention of IT. This is one of the things that shows me what SDN can really do. Software can take care of itself and dynamically move things around when abnormal demand happens. Intelligent choices about the network environment can be made on solid data. No guessing about what "might" be happening. ARB removes doubt and lag in response time to allow for seamless network repair. Try doing that with a telnet session.
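ARB's internals and its vCenter integration weren't shown, but the control loop it describes — probe application health, add local resources until they run out, then burst to a public cloud — is simple to sketch. The thresholds, capacity numbers, and helper functions below are all hypothetical.

```python
# Hypothetical sketch of an ARB-style health/scale/cloudburst loop.
import random
import time

LATENCY_LIMIT_MS = 250   # invented health threshold
LOCAL_CAPACITY = 8       # app instances the private resource pool can hold

def measure_latency_ms() -> float:
    """Stand-in for a real application health probe."""
    return random.uniform(50, 400)

def add_local_instance(count: int) -> None:
    print(f"requesting instance #{count} from the local resource pool")

def cloudburst() -> None:
    print("local pool exhausted: moving the application to a public cloud")

instances = 2
for _ in range(10):                     # a real broker would loop forever
    if measure_latency_ms() > LATENCY_LIMIT_MS:
        if instances < LOCAL_CAPACITY:
            instances += 1
            add_local_instance(instances)
        else:
            cloudburst()
    time.sleep(1)
```

The interesting part isn't the loop itself; it's that the decision is driven by measured data rather than by an engineer guessing at what "might" be happening.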

There’s a lot more to the Brocade announcement than just software.  You can check it out at http://www.brocade.com.  You can also follow them on Twitter as @BRCDComm.


Tom’s Take

The future always looks interesting at first glance. Flying cars, moving sidewalks, and 3D user interfaces are all staples of futuristic science fiction. The problem for many arises when we need to start taking steps to build those fanciful things. A healthy dose of pragmatism helps to figure out what we need to do today to make tomorrow happen. If we root our views of what we want to do in what we can do, then the future becomes that much more achievable. Even the amazing gadgets we take for granted today have a basis in the real technology of the time they were first created. By making those incremental steps, we can arrive where we want to be a whole lot sooner with a better understanding of how amazing things really are.

Brocade Defined Networking


Brocade stepped up to the plate once again to present to the assembled delegates at Network Field Day 5. I've been constantly impressed with what they bring each time they come to the party. Sometimes it's a fun demo. Other times it's a great discussion around OpenFlow. With two hours to spend, I wanted to see how Brocade would steer this conversation. I could guarantee that it would involve elements of software defined networking (SDN), as Brocade has quietly been assembling a platoon of SDN-focused luminaries. What I came away with surprised even me.

Mike Schiff takes up the reins from Lisa Caywood for the title of Mercifully Short Introductions. I'm glad that Brocade assumes that we just need a short overview, both for ourselves and for the people watching online. At this point, if you are unsure of who Brocade is, you won't get a feel for it in eight short minutes.

Curt Beckmann started off with fifteen minutes of discussion about where the Open Networking Foundation (ONF) is concentrating its development efforts. Because Curt is the chairman of the ONF, we kind of unloaded on him a bit about how the ONF should really be called the "Open-to-those-with-$30,000-to-spare Networking Foundation". That barrier to entry really makes it difficult for non-vendors to have any say in the matters of OpenFlow. Indeed, the entry fee was put in place specifically to deter those not materially interested in creating OpenFlow-based products from discussing the protocol. Instead, you have the same incumbent vendors that make non-OpenFlow devices today steering the future of the standard. Unlike the IETF, you can't just sign up for the mailing list or show up to the meetings and say your piece. You have to have buy-in, both literally and figuratively. I proposed the hare-brained idea of creating a Kickstarter project to raise the necessary $30,000 for the purpose of putting a representative of "the people" in the ONF. In discussions that I've had before with IETF folks, they all told me you tend to see the same thing over and over again. Real people don't sit on committees. The IETF is full of academics that argue over the purity of an OAM design and have never actually implemented something like that in reality. Conversely, the ONF is now filled with deep-pocketed people that are more concerned with how they can use OpenFlow to sell a few more switches than with how best to implement the protocol in reality. If you'd like to donate to an ONF Kickstarter project, just let me know and I'll fire it up. Be warned – I'm planning on putting Greg Ferro (@etherealmind) and Brent Salisbury (@networkstatic) on the board. I figure that should solve all my OpenFlow problems.

The long presentation of this hour was all about OpenFlow and hybrid switching. I've seen some of the aspects of this in my day job. One of the ISPs in my area is trying to bring a 100G circuit into the state for Internet2 SDN-enabled links. The demo that I saw in their office was pretty spiffy. You could slice off any section of the network and automatically build a path between two nodes with a few simple clicks. Brocade expanded my horizons of where these super fast circuits were being deployed with discussions of QUILT and GENI, as well as talking about projects across the ocean in Australia and Japan. I also loved the discussions around "phasing" SDN into your existing network. Brocade realizes that no one is going to drop everything they currently have and put up a full SDN network all at once. Instead, most people are going to put in a few SDN-enabled devices and move some flows to them at first, both as a test and as a way to begin a new architecture. Just like remodeling a house, you have to start somewhere and shore up a few areas before you can really begin to change the way everything is laid out. That is how the network will eventually become fully software defined down the road. Just realize that it will take time to get there.

Next up was a short update from Vyatta.  They couldn’t really go into a lot of detail about what they were doing, as they were still busy getting digested by Brocade after being acquired.  I don’t have a lot to say about them specifically, but there is one thing I thought about as I mulled over their presentation.  I’m not sure how much Vyatta plays into the greater SDN story when you think about things like full API programmability, orchestration, and even OpenFlow.  Rather than being SDN, I think products like Vyatta and even Cisco’s Nexus 1000v should instead be called NDS – Networking Done (by) Software.  If you’re doing Network Function Virtualization (NFV), how much of that is really software definition versus doing your old stuff in a new way?  I’ve got some more, deeper thoughts on this subject down the road.  I just wanted to put something out there about making sure that what you’re doing really is SDN instead of NDS, which is a really difficult moving target to hit because the definition of what SDN really does changes from day to day.

Up next is David Meyer talking about Macro Trends in Networking.  Ho-ly crap.  This is by far my favorite video from NFD5.  I can say that with comfort because I’ve watched it five times already.  David Meyer is a lot like Victor Shtrom from Ruckus at WFD2.  He broke my brain after this presentation.  He’s just a guy with some ideas that he wants to talk about.  Except those ideas are radical and cut right to the core of things going on in the industry today.  Let me try to form some thoughts out of the video above, which I highly recommend you watch in its entirety with no distractions.  Also, have a pen and paper handy – it helps.

David is talking about networks from a systems analysis perspective. As we add controls and rules and interaction to a fragile system, we increase the robustness of that system. Past a certain point, though, all those extra features end up harming the system. While we can cut down on rules and oversight, ultimately we can't create a truly robust system until we can remove a large portion of the human element. That's what SDN is trying to do. By allowing humans to interact with the rules and not the network itself, you can increase the survivability of the system. When we talk about complex systems, we really talk about increasing their robustness while at the same time adding features and flexibility. That's where things like SDN come into the discussion in the networking system. SDN allows us to constrain the fragility of a system by creating a rigid framework to reduce the complexity. That's the "bow tie" diagram about halfway in. We have lots of rules and very little interaction from agents that can cause fragility. When the outputs come out of SDN, they are flexible and unconstrained again but very unlikely to contribute to fragility in the system. That's just one of the things I took away from this presentation. There are several more that I'd love to discuss down the road once I've finished cooking them in my brain. For now, just know that I plan on watching this presentation several more times in the coming weeks. There's so much good stuff in such a short time frame. I wish I could have two hours with David Meyer to just chat about all this crazy goodness.

If you'd like to learn more about Brocade, you can check out their website at http://www.brocade.com. You can also follow them on Twitter as @BRCDComm.


Tom’s Take

Brocade gets it. They've consistently been running at the front of the pack in the whole SDN race. They understand things like OpenFlow. They see where the applications are and how to implement them in their products. They engage with the builders of what will eventually become the new SDN world. The discussions that we had with Curt Beckmann and David Meyer show that there are some deep thinkers there who are genuinely invested in the future of SDN and not just looking to productize it. Mark my words – Brocade is poised to leverage their prowess in SDN to move up the ladder when it comes to market share in the networking world. I'm not saying this lightly either. There's an adage attributed to Wayne Gretzky – "Don't skate where the puck is. Skate where the puck is going." I think Brocade is one of the few networking companies that's figured out where the puck is going.

Tech Field Day Disclaimer

Brocade was a sponsor of Network Field Day 5. As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 5. In addition, Brocade provided a USB drive of marketing material and two notepads styled after RFC 2460. At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review. The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Cisco Data Center Duel


Network Field Day 5 started off with a full day at Cisco. The Data Center group opened and closed the day, with the Borderless team sandwiched in between. Omar Sultan (@omarsultan) greeted us as we settled in for a continental breakfast before getting started.

The opening was a discussion of onePK, a popular topic as of late from Cisco. While the topic du jour in the networking world is software-defined networking (SDN), Cisco steers the conversation toward onePK. This, at its core, is API access to all the flavors of the Internetwork Operating System (IOS). While other vendors discuss how to implement protocols like OpenFlow or how to expose pieces of their underlying systems to developers, Cisco has built a platform to allow access into pieces and parts of the OS. You can write applications in Java or Python to pull data from the system or push configurations to it. The process is slowly being rolled out to the major Cisco platforms. The support for the majority of the Nexus switching line should give the reader a good idea of where Cisco thinks this technology will be of best use.

One of the specific applications that Cisco showed off to us using onePK is the use of Puppet to provision switches from bare metal to functioning with a minimum of human effort. Puppet integration was a big underlying topic at both Cisco and Juniper (more on that in the Juniper NFD5 post). Puppet is gaining steam in the networking industry as a way to get hardware up and running quickly with the least amount of fuss. Server admins have enjoyed the flexibility of Puppet for some time. It's good to see well-tested and approved software like this being repurposed for similar functionality in the world of routing and switching.

Next up was a discussion about the Cisco ONE network controller. Controllers are a very hot topic in the network world today. OpenFlow allows a central management and policy server to push information and flow data into switches. This allows network admins to get a "big picture" of the network and how the packets are flowing across it. Having the ability to view the network in its entirety also allows admins to start partitioning it in a process called "slicing." This was one of the first applications that the Stanford wiz kids used OpenFlow to accomplish. It makes sense when you think about how universities wanted to partition off their test networks to prevent this radical OpenFlow idea from crashing the production hardware. Now, we're looking at using slicing for things like multi-tenancy and security. The building blocks are there to make some pretty interesting leaps. The real key is that the central controller has the ability to keep up with the flows being pushed through the network. Cisco's ONE controller not only speaks OpenFlow, but onePK as well. This means that while the ONE controller can talk to disparate networking devices running OpenFlow, it will be able to speak much more clearly to any Cisco devices you have lying around. That's a pretty calculated play from Cisco, given that the initial target for their controller will be networks populated primarily by Cisco equipment. The use case that was given to us for the Cisco ONE controller was replacing large network taps with SDN options. Fans of NFD may remember our trip to Gigamon. Cisco hadn't forgotten, as the network tap they used as an example in their slide looked just like the orange Gigamon switch we saw at a previous NFD.
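Slicing, at its core, is just a policy check that keeps each tenant's flows inside its own partition of the flowspace, in the spirit of the original FlowVisor work at Stanford rather than any Cisco implementation. A minimal sketch (slice names and VLAN ranges are invented for illustration):

```python
# Toy flowspace slicing: each slice may only program flows in its own VLANs.
slices = {
    "research": range(100, 200),    # VLANs 100-199
    "production": range(200, 300),  # VLANs 200-299
}

def authorize_flow(slice_name: str, vlan_id: int) -> bool:
    """Allow a flow request only if it falls inside the requesting slice."""
    return vlan_id in slices.get(slice_name, range(0))

print(authorize_flow("research", 150))  # True  - inside the research slice
print(authorize_flow("research", 250))  # False - that VLAN belongs to production
```

A production controller layers much more on top of this (bandwidth, topology, and flow-table quotas), but the isolation decision has the same shape.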

After the presentations from the Borderless team, we ended the day with an open discussion around a few topics. This is where the real fun started. Here’s the video:

The first hour or so is a discussion around hybrid switching. I had some points in here about the standoff between hardware and software people not really wanting to get along right now. I termed it a Mexican Standoff because no one really wants to flinch and go down the wrong path. The software people just want to write overlays and things like that and make them run on everything. The entrenched hardware vendors, like Cisco, want to make sure their hardware is providing better performance than anyone else (because that's where their edge is). Until someone decides to take a chance and push things in different directions, we're not going to see much movement. Also, around 1:09:00 is where we talked a bit about Cisco jumping into the game with a pure OpenFlow switch without much more on top of it. This concept seemed a bit foreign to some of the Cisco folks, as they can't understand why people wouldn't want IOS and onePK. That's where I chimed in with my "If I want a pickup truck, I don't take a chainsaw to a school bus." You shouldn't have to shed all the extra stuff to get the performance you want. Start with a smaller platform and work your way up instead of starting with the kitchen sink and stripping things away.

Shortly after this is where the fireworks started. One of Cisco’s people started arguing that OpenFlow isn’t the answer. He said that the customer he was talking to didn’t want OpenFlow. He even went so far as to say that “OpenFlow is a fantasy because it promises everything and there’s nothing in production.” (about 1:17:00) Folks, this was one of the most amazing conversations I’ve ever seen at a Network Field Day event. The tension in the room was palpable. Brent and Greg were on this guy the entire time about how OpenFlow was solving real problems for customers today, and in Brent’s case he’s running it in production. I really wonder how the results of this are going to play out. If Cisco hears that their customers don’t care that much about OpenFlow and just want their gear to do SDN like in onePK then that’s what they are going to deliver. The question then becomes whether or not network engineers that believe that OpenFlow has a big place in the networks of tomorrow can convince Cisco to change their ways.

If you’d like to learn more about Cisco, you can find them at http://www.cisco.com/go/dc.  You can follow their data center team on Twitter as @CiscoDC.


Tom’s Take

Cisco's Data Center group has a lot of interesting things to say about programmability in the network. From discussions about APIs to controllers to knock-down, drag-out arguments about what role OpenFlow is going to play, Cisco has the gamut covered. I think that their position at the top of the network heap gives them a lot of insight into what's going on. I'm just worried that they are going to use that to push a specific agenda and not embrace useful technologies down the road that solve customer problems. You're going to hear a lot more from Cisco on software defined networking in the near future as they begin to roll out more and more features to their hardware in the coming months.

Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 5. As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 5. In addition, Cisco provided me with breakfast and lunch at their offices. They also provided a Moleskine notebook, a t-shirt, and a flashlight toy. At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review. The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

Additional NFD5 Blog Posts

NFD5: Cisco onePK – Terry Slattery

NFD5: SDN and Unicorn Blood – Omar Sultan

Brocade – Packet Spraying and SDN Integrating

Brocade kicked off our first double session at Network Field Day 4. We'd seen them previously at Network Field Day 2, and I'd just been to Brocade's headquarters for their Tech Day a few weeks before. I was pretty sure that the discussion that was about to take place was going to revolve around OpenFlow and some of the hot new hardware that Brocade had been showing off recently. Thankfully, Lisa Caywood (@TheRealLisaC) still has some tricks up her sleeve.

I hereby dub Lisa “Queen of the Mercifully Short Introduction.”  Lisa’s overview of Brocade hit all the high points about what Brocade’s business lines revolve around.  I think by now that most people know that Brocade acquired Foundry for their ethernet switching line to add to their existing storage business that revolves around Fibre Channel.  With all that out of the way, it was time to launch into the presentations.

Jessica Koh was up first to talk to me about a technology that I hadn't seen before – HyperEdge. This really speaks to me because the majority of my customer base isn't ever going to touch a VDX or an ADX or an MLXe. HyperEdge technology is Brocade's drive to keep the campus network infrastructure humming along to keep pace with the explosion of connectivity in the data center. Add in the fact that you've got all manner of things connecting into the campus network, and you can see how things like manageability can be at the forefront of people's minds. To that end, Brocade is starting off the HyperEdge discussion early next year with the ability to stack dissimilar ICX switches together. This may sound like crazy talk to those of you that are used to stacking together Cisco 3750s or 2960s. On those platforms, every switch has to be identical. With HyperEdge stacking, you can take an ICX 6610 and stack it with an ICX 6450 and it all works just fine. In addition, you can place a layer 3 capable switch into the stack in order to provide a device that will get your packets off the local subnet. That is a very nice feature that allows the customer base to buy layer 2 today if needed, then add on in the future when they've outgrown the single wiring closet or single VLAN. Once you've added the layer 3 switch to the stack, all those features are populated across all the ports of the whole stack. That helps to get rid of some of the idiosyncrasies of the first stacking switch configurations, like not being able to locally switch packets. Add in the fact that the stacking interfaces on these switches are the integrated 10Gig Ethernet ports, and you can see why I'm kind of excited. No overpriced stacking kits. Standard SFP+ interfaces that can be reused in the event I need to break the stack apart.

I'm putting this demo video up to show how a demo during your presentation can be both a boon and a bane. Clear your cache after you're done, or log in as a different user to be sure you're getting a clean experience. The demo can be a really painful part of a presentation when it doesn't run correctly.

Kelvin Franklin was up next with an overview of VCS, Brocade's fabric solution. This is mostly review material from my Tech Day briefing, but there are some highlights here. Firstly, Brocade is using yet another new definition for the word "trunk". Unlike Cisco and HP, Brocade refers to the multipath connections into a VCS fabric as a trunk. Now, a trunk isn't a trunk isn't a trunk. You just have to remember the context of which vendor you're talking about. This was also the genesis of packet spraying, which I'm sure is a very apt description of what Brocade's VCS is doing to the packets as they send them out of the bundled links, but it doesn't sound all that appealing. Another thing to keep in mind when looking at VCS is that it is heavily based on TRILL for the layer 2 interconnects, but it does use FSPF from Brocade's heavy fibre channel background to handle the routing of the links instead of IS-IS as the TRILL standard calls for. Check out Ivan's post from last year as to why that's both good and bad. Brocade also takes time to call out the fact that they've done their own ASIC in the new VCS switches as opposed to using merchant silicon like many other competitors. Only time will tell how effective the move to merchant silicon will be for those that choose to use it, but so long as Brocade can continue to drive higher performance from custom silicon, it may be an advantage for them.

This last part of the VCS presentation covers some of the real world use cases for fabrics and how Brocade is taking an incremental approach to building fabrics.  I’m curious to see how the VCS will begin to co-mingle with the HyperEdge strategy down the road.  Cisco has committed to bringing their fabric protocol (FabricPath) to the campus in the Catalyst 6500 in the near future.  With all the advantages of VCS that Brocade has discussed, I would like to see it extending down into the campus as well.  That would be a huge advantage for some of my customers that need the capability to do a lot of east-west traffic flows without the money to invest in the larger VCS infrastructure until their data usage can provide adequate capital.  There may not be a lot that comes out of it in the long run, but even having the option to integrate the two would be a feather in the marketing cap.

After lunch and a short OpenStack demo, we got an overview of Brocade's involvement with the Open Networking Foundation (ONF) from Curt Beckmann. I'm not going to say a lot about this video, but you really do need to watch it if you are at all curious to see where Brocade is going with their involvement in OpenFlow going forward. As you've no doubt heard before, OpenFlow is really driving the future of networking and how we think about managing data flows. Seeing what Brocade is doing to implement ideas and drive the direction of ONF development is nice because it's almost like a crystal ball of networking's future.

The last two videos really go together to illustrate how Brocade is taking OpenFlow and adopting it into their model for software defined networking (SDN).  By now, I’ve heard almost every imaginable definition of SDN support.  On one end of the spectrum, you’ve got Cisco and Juniper.  A lot of their value is tied up in their software.  IOS and Junos represent huge investments for them.  Getting rid of this software so the hardware can be controlled by a server somewhere isn’t the best solution as they see it.  Their response has been to open APIs into their software and allow programmability into their existing structures.  You can use software to drive your networking, but you’re going to do it our way.  At the other extreme end of the scale, you’ve got NEC.  As I’ve said before, NEC is doubling down on OpenFlow mainly for one reason – survival.  If they don’t adapt their hardware to be fully OpenFlow compliant, they run the risk of being swept off the table by the larger vendors.  Their attachment to their switch OS isn’t as important as making their hardware play nice with everyone else.  In the middle, you’ve got Brocade.  They’ve made some significant investments into their switch software and protocols like VCS.  However, they aren’t married to the idea of their OS being the be all, end all of the conversation.  What they do want, however, is Brocade equipment in place that can take advantage of all the additional features offered from areas that aren’t necessarily OpenFlow specific.  I think their idea around OpenFlow is to push the hybrid model, where you can use a relatively inexpensive Brocade switch to fulfill your OpenFlow needs while at the same time allowing for that switch to perform some additional functionality above and beyond that defined by the ONF when it comes to VCS or other proprietary software.  They aren’t doing it for the reasons of survival like NEC, but it offers them the kind of flexibility they need to get within striking distance of the bigger players in the market.

If you’d like to learn more about Brocade, you can check out their website at http://www.brocade.com.  You can also follow them on Twitter as @BRCDComm.

Tom’s Take

I’ve seen a lot of Brocade in the last couple of months.  I’ve gotten a peek at their strategies and had some good conversations with some really smart people.  I feel pretty comfortable understanding where Brocade is going with their Ethernet business.  Yes, whenever you mention them you still get questions about fibre channel and storage connectivity, but Brocade really is doing what they can to get the word out about that other kind of networking that they do.  From the big iron of the VDX to the ability to stack the ICX switches all the way to the planning in the ONF to run OpenFlow on everything they can, Brocade seems to have started looking at the long-term play in the data networking market.  Yes, they may not be falling all over themselves to go to war with Cisco or even HP right now.  However, a bit of visionary thinking can lead one to be standing on the platform when the train comes rumbling down the track.  That train probably has a whistle that sounds an awful lot like “OpenFlow,” so only time can tell who’s going to be riding on it and who’s going to be underneath it.

Tech Field Day Disclaimer

Brocade was a sponsor of Network Field Day 4. As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 4. In addition, Brocade provided me with a gift bag containing a 2GB USB stick with marketing information and a portable cell phone charger. They did not ask for, nor were they promised, any kind of consideration in the writing of this review. The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.