Should Microsoft Buy Big Switch?


Network virtualization is getting more press than ever.  The current trend seems to pit the traditional networking companies, like Cisco and Juniper, against the upstarts from the server virtualization world, like VMware and the OpenStack crowd.  To hear the press and analysts tell it, you’d think these companies represent all there is to the industry.

Whither Microsoft?

One company that seems to have been left out of the conversation is Microsoft.  The stalwarts of Redmond have been turning heads with their rapid pace of innovation to reach parity with VMware’s offerings.  However, when the conversation turns to networking, Microsoft is usually left out in the cold.  That’s because their efforts at networking in the past have been…problematic.  They are very service oriented and care little for the world outside their comfortable servers.  That won’t last forever.  VMware will easily be able to shift the conversation away from feature parity with Hyper-V and concentrate instead on all the networking expertise it now has that its competitor lacks.

Microsoft can fix that problem with a small investment.  If you can’t innovate by building it, you need to buy it.  Microsoft has the cash to buy several startups, even after sinking a load of it into Nokia.  But which SDN-focused company makes the most sense for Microsoft?  I spent a lot of time thinking about this very question and the answer became clear for me:  Microsoft needs to buy Big Switch Networks.

A Window On The Future

Microsoft needs SDN expertise.  They have no current networking experience outside of creating DHCP and DNS services on their platforms.  I mean, did anyone ever use their Network Access Protection solution as a NAC option?  Microsoft has traditionally created bare bones network constructs to please their server customers.  They think networking is a resource outside their domain, which coincidentally is just how their competitors used to look at it as well.  At least until Martin Casado changed their minds.

Big Switch is a perfect fit for Microsoft.  They have the chops to talk OpenFlow.  Their recent shift away from overlays to software on bare metal would play well as a marketing point against VMware and their “overlays are the best way” message.  They could also help Microsoft do more development on NV-GRE, the also-ran to VxLAN.  Ivan Pepelnjak (@IOSHints) was pretty impressed with NV-GRE last December, but it’s dropped off the radar in the wake of VMware embracing VxLAN in NSX.  I think a bit more development work from the minds at Big Switch would put it back on the radar of some smaller network virtualization companies looking to support something other than the de facto standard.  I know that Big Switch has moved away from the overlay model, but if NV-GRE can easily be adapted to the work Big Switch was doing a few months ago, it would be a great additional offering to the idea of running everything in an SDN-enabled switch OS.
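
For reference, the difference between the two encapsulations is mostly the outer wrapper.  Here’s a quick scapy sketch (my own illustration, with invented addresses and IDs): where VxLAN wraps the tenant frame in UDP with a 24-bit VNI, NV-GRE carries a 24-bit Virtual Subnet ID in the GRE key field.

```python
# Sketch only: an NV-GRE encapsulated tenant frame built with scapy.
# Addresses and the VSID are invented for illustration.
from scapy.all import Ether, IP, GRE

tenant_frame = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / \
               IP(src="10.1.1.10", dst="10.1.1.20")

vsid = 5001  # 24-bit Virtual Subnet ID, carried in the upper GRE key bits
nvgre_pkt = Ether() / IP(src="192.0.2.1", dst="192.0.2.2") / \
            GRE(key_present=1, key=vsid << 8, proto=0x6558) / tenant_frame

nvgre_pkt.show()
```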

Microsoft will also benefit from the pile of SDN applications that Big Switch is rumored to have sitting around, festering for lack of attention.  Applications like network taps sell Big Switch products now.  With NSX introducing the idea of integrated load balancers and firewalls into the base product, Big Switch is going to be hard pressed to charge extra for them.  Instead, they’re going to have to go out on a limb, finish developing them past the alpha stage, and hope that they are enough to sell more product and recoup the development costs.  With the deep pockets in Redmond, finishing those applications would be a drop in the bucket if it means that the new product can compete directly on an even field with VMware.

Building A Bigger Switch

Big Switch gains from this partnership as well.  They get to take some pressure off their overworked development team.  It can’t be easy switching horses in mid-stream, especially when it involves changing your entire outlook on how SDN should be done.  Adding a few dozen more people to the project would allow them to branch out and investigate how integrating software into their ideas could be done.  Big Switch has already done a great job developing Project Floodlight.  Why not let some big brains chew on other ideas in the same vein for a while?

Big Switch could also use the stability of working for an established company.  They have a pretty big target on their backs now that everyone is developing an SDN strategy.  Writing an OS for bare metal switches is going to bring them into contention with Cumulus Networks.  Why not let an OS vendor do some of the heavy lifting?  It would also allow Microsoft’s well established partner program to offer incentives to partners that want to sell white label switches with software from Big Switch to get into networking much more cheaply than before.  Think about federal or educational discounts that Microsoft already gives to customers.  Do you think they’d be excited to see the same kind of consideration when it comes to networking hardware?

Tom’s Take

Little fish either get eaten by bigger ones or they have to be agile enough to avoid being snapped up.  The smartest little fish in the ocean may be the remora.  It survives by attaching itself to a bigger fish and providing a benefit for them both.  The remora gets the protection of not being eaten while also not taking too much from the host.  Microsoft would do well to set up some kind of similar arrangement with Big Switch.  They could fund future development into NV-GRE compatible options, or they could just buy the company outright.  Both parties get something out of the deal: Microsoft gets the SDN component they need.  Big Switch gets a backer with so much industry clout that they can no longer be dismissed.


HP Networking and the Software Defined Store


HP has had a pretty good track record with SDN, even if it’s not very well known.  HP has embraced OpenFlow on a good number of its Procurve switches.  Given the age of these devices, there’s a good chance you can find them lying around in labs or in retired network closets to test with.  But where is that going to lead in the long run?

HP Networking was kind enough to come to Interop New York and participate in a Tech Field Day roundtable.  It had been a while since I talked to their team.  I wanted to see how they were handling the battle being waged between OpenFlow proponents like NEC and Brocade, Cisco and their hardware focus, and VMware with NSX.  Jacob Rapp and Chris Young (@NetManChris) stepped up to the plate to talk about SDN and the vision at HP.

They cover a lot of ground in the roundtable discussion.  Probably the most important piece to me is the SDN app store.

The press picked up on this quickly.  HP has an interesting idea here.  I should know.  I mentioned it in passing in an article I wrote a month ago.  The more I think about the app store model, the more I realize that many vendors are going to go down this road.  Just not in the way HP is thinking.

HP wants to curate content for enterprises.  They want to ensure that software works with their controller so there aren’t any hiccups in implementation.  Given their apparent distaste for open source efforts, it’s safe to say that their efforts will only benefit HP customers.  That’s not to say that those same programs won’t work on other controllers.  So long as they operate according to the guidelines laid down by the Open Networking Foundation, all should be good.

Show Me The Money

Where’s the value then?  That’s in positioning the apps in the store.  Yes, you’re going to have some developers come to HP wanting to submit simple apps to put in the store.  Odds are better that you’re going to see more recognizable vendors coming to the HP SDN store.  People are more likely to buy software from a name they recognize, like TippingPoint or F5.  That means that those companies are going to want to have a prime spot in the store.  HP is going to make something from hosting those folks.

The real revenue doesn’t come from an SMB buying a load balancer once.  It comes from a company offering it as a service with a recurring fee.  The vendor gets a revenue stream. HP would be wise to work out a recurring fee as well.  It won’t be the juicy 30% cut that Apple enjoys from their walled garden, but anything would be great for the bottom line.  Vendors win from additional sales.  Customers win from having curated apps that work every time and are easy to purchase, install, and configure.  HP wins because everyone comes to them.
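
To put rough numbers on that (these figures are entirely invented, not HP’s), the recurring model comes out ahead even with a much smaller cut than Apple’s:

```python
# Back-of-the-envelope comparison, with made-up numbers.
store_cut = 0.15          # assume HP takes 15%, half of Apple's 30%
one_time_price = 2000.0   # hypothetical perpetual license price
monthly_fee = 100.0       # hypothetical as-a-service price

print(f"one-time sale: ${one_time_price * store_cut:.2f}")       # $300.00
print(f"recurring, 3 yrs: ${monthly_fee * store_cut * 36:.2f}")  # $540.00
```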

Fragmentation As A Service

Now that HP has jumped on the idea of an enterprise-focused SDN app store, I wonder which company will be the next to offer one?  I also worry that having multiple app stores will end up being cumbersome in the long run.  Small developers won’t like submitting their app to four or five different vendor-affiliated stores.  More likely they’ll resort to releasing code on their own rather than jump through hoops.  That will eventually lead to support fragmentation.  Fragmentation helps no one.


Tom’s Take

HP Networking did a great job showcasing what they’ve been doing in SDN.  It was also nice to hear about their announcements the day before they broke wide to the press.  I think HP is going to do well with OpenFlow on their devices.  Integrating OpenFlow visibility into their management tools is also going to do wonders for people worried about keeping up with all the confusing things that SDN can do to a traditional network.  The app store is a very intriguing concept that bears watching.  We can only hope that it ends up being a well-respected entry in a long line of efforts to ease customers into the greater SDN world.

Tech Field Day Disclaimer

HP was a presenter at the Tech Field Day Interop Roundtable.  In addition, they also provided the delegates a 1TB USB3 hard disk drive.  They did not ask for any consideration in the writing of this review nor were they promised any.  The conclusions and analysis contained in this post are mine and mine alone.

SDN 101 at ONUG Academy


Software defined networking is king of the hill these days in the greater networking world.  Vendors are contemplating strategies.  Users are demanding functionality.  And engineers are trying to figure out what it all means.  What’s needed is a way for vendor-neutral parties to get together and talk about what SDN represents and how best to implement it.  Most of the talk so far has been at vendor-specific conferences like Cisco Live or at other conferences like Interop.  I think a third option has just presented itself.

Nick Lippis (@NickLippis) has put together a group of SDN-focused people to address concerns about implementation and usage.  The Open Networking User Group (ONUG) was assembled to allow large companies using SDN to have a semi-annual meeting to discuss strategy and results.  It allows Facebook to talk to JP Morgan about what they are doing to simplify networking through use of things like OpenFlow.

This year, ONUG is taking it a step further by putting on the ONUG Academy, a day-long look at SDN through the eyes of those that implement it.  They have assembled a group of amazing people, including the founder of Cumulus Networks and Tech Field Day’s own Brent Salisbury (@NetworkStatic).  There will be classes about optimizing networks for SDN as well as writing SDN applications for the most popular controllers on the market.  Nick shares more details about the ONUG academy here:

If you’re interested in attending ONUG either for the academy or for the customer-focused meetings, you need to register today.  As a special bonus, if you use the code TFD10 when you sign up, you can take 10% off the cost of registration.  Use that extra cash to go out and buy a cannoli or two.

I’ll be at ONUG with Tech Field Day interviewing customers and attendees about their SDN strategies as well as where they think the state of the industry is headed.  If you’re there, stop by and say hello.  And be sure to bring me one of those cannolis.

Know the Process, Not the Tool


If there is one thing that amuses me as of late, it’s the “death of CLI” talk that I’m starting to see coming from many proponents of software defined networking. They like to talk about programmatic APIs and GUI-based provisioning and how everything that network engineers have learned is going to fall by the wayside.  Like this Network World article. I think reports of the death of CLI are a bit exaggerated.

Firstly, the CLI will never go away. I learned this when I started working with an Aerohive access point I got at Wireless Field Day 2. I already had a HiveManager account provisioned thanks to Devin Akin (@DevinAkin), so all I needed to do was add the device to my account and I would be good to go. Except it never showed up. I could see it on my local network, but it never showed up in the online database. I rebooted and reset several times before flipping the device over and finding a curious port labeled “CONSOLE”. Why would a cloud-based device need a console port? In the next hour, I learned a lot about the way Aerohive APs are provisioned and how there were just some commands that I couldn’t enter in the GUI that helped me narrow down the problem. After fixing a provisioning glitch in HiveManager the next day, I was ready to go. The CLI didn’t fix my problem, but I did learn quite a bit from it.

Basic interfaces give people a great way to see what’s going on under the hood. Given that most folks in networking are from the mold of “take it apart to see why it works,” the CLI is great for them. I agree that memorizing a 10-argument command to configure something like route redistribution is a pain in the neck, but that doesn’t come from the difficulty of networking. Instead, the difficulty lies in speaking the language.

I’ve traveled to a foreign country once or twice in my life. I barely have a grasp of the English language at times. I can usually figure out some Spanish. My foreign language skills have pretty much left me at this point. However, when I want to make myself understood to people that speak another language, I don’t focus on syntax. Instead, I focus on ideas. Pointing at an object and making gestures for money usually gets the point across that I want to buy something. Pantomiming a drinking gesture will get me to a restaurant.

Networking is no different. When I started trying to learn CLI terminology for Brocade, Arista, and HP I found they were similar in some respects but very different in others. When you try to take your Cisco CLI skills to a Juniper router, you’ll find that you aren’t even in the neighborhood when it comes to syntax. What becomes important is *what* you’re trying to do. If you can think through what you’re trying to accomplish, there’s usually a help file or a Google search that can pull up the right way to do things.

This extends its way into a GUI/API-driven programming interface as well. Rather than trying to intuit the interface, just think about what you want to do. If you want two hosts to talk to each other through a low-cost link with basic security, you just have to figure out what the drag-and-drop is for that. If you want to force application-specific traffic to transit a host running an intrusion prevention system, you already know what you want to do. It’s just a matter of finding the right combination of interface programming to accomplish it. If you’re working on an API call using Python or Java, you probably have to define the constraints of the system anyway. The hard part is writing the code that interfaces with the system to accomplish the task.
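
As a toy sketch of that mindset (every name below is hypothetical, standing in for whatever vendor CLI or API you actually face), the intent stays fixed while the rendering layer changes:

```python
# A toy separation of intent from syntax. Every function and field name
# here is hypothetical; the renderers stand in for real vendor APIs/CLIs.

intent = {
    "action": "connect",
    "endpoints": ["web01", "db01"],
    "link": "low-cost",
    "security": "basic-acl",
}

def render_cli_style(intent):
    # Stand-in: would emit configuration lines in one vendor's syntax.
    hosts = " and ".join(intent["endpoints"])
    return f"! connect {hosts} via {intent['link']} with {intent['security']}"

def render_controller_call(intent):
    # Stand-in: would become a REST call to an SDN controller.
    return {"method": "POST", "path": "/policies", "body": intent}

# The intent never changes; only the rendering step does.
print(render_cli_style(intent))
print(render_controller_call(intent))
```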


Tom’s Take

Learning the process is the key to making it in networking. So many entry level folks are worried about *how* to do something. Configuring a route or provisioning a VLAN becomes the end goal. It’s only when those folks take a step back and think about their task without the commands that they begin to become real engineers. When you can visualize what you want to do without thinking about the commands you need to enter to do it, you are taking the logical step beyond being tied to a platform. Some of the smartest people I know break a task down into component parts and steps. When you spend more time on *what* you are doing and less on *how* you are doing it, you don’t need to concern yourself with radical shifts in networking, whether they be SDN, NFV, or the next big thing. Because the process will never change even if the tools might.

Disruption in the New World of Networking

This is one of the most exciting times to be working in networking. New technologies and fresh takes on existing problems are keeping everyone on their toes when it comes to learning new protocols and integration systems. VMworld 2013 served both as an announcement of VMware’s formal entry into the larger networking world as well as putting existing network vendors on notice. What follows is my take on some of these announcements. I’m sure that some aren’t going to like what I say. I’m even more sure a few will debate my points vehemently. All I ask is that you consider my position as we go forward.

Captain Over, Captain Under

VMware, through their Nicira acquisition and development, is now *the* vendor to go to when you want to build an overlay network. Their technology augments existing deployments to provide software features such as load balancing and policy deployment. In order to do this and ensure that these features are utilized, VMware uses VxLAN tunnels between the devices. VMware calls these constructs “virtual wires”. I’m going to call them vWires, since they’ll likely be called that soon anyway. vWires are deployed between hosts to provide a pathway for communications. Think of it like a GRE tunnel or a VPN tunnel between the hosts. This means the traffic rides on the existing physical network but that network has no real visibility into the payload of the transit packets.
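
For the curious, here’s roughly what that looks like on the wire, sketched with scapy (addresses and VNI invented). The underlay only sees the outer headers; the tenant frame is opaque UDP payload.

```python
# Sketch only: a vWire-style VXLAN packet built with scapy. All addresses
# and the VNI are invented.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The tenant's original frame, as built by the vSwitch.
tenant_frame = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / \
               IP(src="10.1.1.10", dst="10.1.1.20")

# What the underlay forwards: to the physical network this is just a UDP
# flow between two hypervisors on port 4789. The inner frame, and anything
# QoS might care about inside it, is invisible payload.
vwire_pkt = Ether() / IP(src="192.0.2.1", dst="192.0.2.2") / \
            UDP(sport=49152, dport=4789) / VXLAN(vni=5001) / tenant_frame

vwire_pkt.show()
```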

Nicira’s brainchild, NSX, has the ability to function as a layer 2 switch and a layer 3 router as well as a load balancer and a firewall. VMware is integrating many existing technologies with NSX to provide consistency when provisioning and deploying a new software-based network. For those devices that can’t be virtualized, VMware is working with HP, Brocade, and Arista to provide NSX agents that can decapsulate the traffic and send it to a physical endpoint that can’t participate in NSX (yet). As of the launch during the keynote, most major networking vendors are participating with NSX. There’s one major exception, but I’ll get to that in a minute.

NSX is a good product. VMware wouldn’t have released it otherwise. It is the vSwitch we’ve needed for a very long time. It also extends the ability of the virtualization/server admin to provision resources quickly. That’s where I’m having my issue with the messaging around NSX. During the second day keynote, the CTOs on stage said that the biggest impediment to application deployment is waiting on the network to be configured. Note that this is my paraphrasing of what I took their intent to be. In order to work around the lag in network provisioning, VMware has decided to build a VxLAN/GRE/STT tunnel between the endpoints and eliminate the network admin as a source of delay. NSX turns your network into a fabric for the endpoints connected to it.

Under the Bridge

I also have some issues with NSX and the way it’s supposed to work on existing networks. Network engineers have spent countless hours optimizing paths and reducing delay and jitter to provide applications and servers with the best possible network. Now none of that matters. vAdmins just have to click a couple of times and build their vWire to the other server, and all that work on the network is for naught. The underlay network exists to provide VxLAN transport. NSX assumes that everything working beneath it is running optimally. No loops, no blocked links. NSX doesn’t even participate in spanning tree. Why should it? After all, that vWire ensures that all the traffic ends up in the right location, right? And people would never bridge the network cards on a host server. Like building a VPN server, for instance. All of the things that network admins and engineers think about in regards to keeping the network from blowing up due to excess traffic are handwaved away in the presentations I’ve seen.

The reference architecture for NSX looks pretty. Prettier than any real network I’ve ever seen. I’m afraid that suboptimal networks are going to impact application and server performance now more than ever. And instead of the network using mechanisms like QoS to battle issues, those packets are now invisible bulk traffic. When network folks have no visibility into the content of the network, they can’t help when performance suffers. Who do you think is going to get blamed when that goes on? Right now, it’s the network’s fault when things don’t run right. Do you think that moving the onus for server network provisioning to NSX and vCenter is going to forgive the network people when things go south? Or are the underlay engineers going to take the brunt of the yelling because they are the only ones that still understand the black magic outside the GUI drag-and-drop that creates vWires?

NSX is for service enablement. It allows people to build network components without knowing the CLI. It also means that network admins are going to have to work twice as hard to build resilient networks that work at high speed. I’m hoping that means that TRILL-based fabrics are going to take off. Why use spanning tree now? Your application and service network sure isn’t. No sense adding any more bells and whistles to your switches. It’s better to just tie them into spine-and-leaf Clos fabrics and be done with it. It now becomes much more important to concentrate on the user experience. Or maybe the wireless network. As long as at least one link exists between your ESX box and the edge switch, let the new software networking guys worry about it.

The Recumbent Incumbent?

Cisco is the only major networking manufacturer not publicly on board with NSX right now. Their CTO Padmasree Warrior has released a response to NSX that talks about lock-in and vertical integration. Still others have released responses to that response. There’s a lot of talk right now about the war brewing between Cisco and VMware and what that means for VCE. One thing is for sure – the landscape has changed. I’m not sure how this is going to fall out on both sides. Cisco isn’t likely to stop selling switches any time soon. NSX still works just fine with Cisco as an underlay. VCE is still going to make a whole bunch of money selling vBlocks in the next few months. Where this becomes a friction point is in the future.

Cisco has been building APIs into their software for the last year. They want to be able to use those APIs to directly program the network through software like the forthcoming OpenDaylight controller. Will they allow NSX to program them as well? I’m sure they would – if VMware wrote those instructions into NSX. Will VMware demand that Cisco use the NSX-approved APIs and agents to expose network functionality to their software network? They could. Will Cisco scrap OnePK to implement NSX? I doubt that very much. We’re left with a standoff. Cisco wants VMware to use their tools to program Cisco networks. VMware wants Cisco to use the same tools as everyone else and make the network a commodity compared to the way it is now.
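
To make “programming the network through APIs” concrete, here’s roughly what pushing a flow to an OpenDaylight-style controller looks like over REST. This follows the RESTCONF model ODL eventually shipped, but treat the exact URL and JSON shape as my assumptions, not documentation:

```python
# Illustrative only: pushing a flow entry to an OpenDaylight-style
# controller via RESTCONF. URL layout and JSON body follow ODL's
# openflowplugin model as I recall it; verify against your release.
import requests

flow = {
    "flow": [{
        "id": "42",
        "table_id": 0,
        "priority": 100,
        # Match IPv4 traffic (ethertype 0x0800) destined for 10.1.1.0/24.
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},
            "ipv4-destination": "10.1.1.0/24",
        },
        # Single instruction: send matching packets out port 2.
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": "2"},
            }]},
        }]},
    }]
}

url = ("http://controller:8181/restconf/config/opendaylight-inventory:nodes"
       "/node/openflow:1/flow-node-inventory:table/0/flow/42")
resp = requests.put(url, json=flow, auth=("admin", "admin"))
print(resp.status_code)
```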

Let’s think about that last part for a moment. Aside from some speed differences, networks are largely going to look identical to NSX. It won’t care if you’re running HP, Brocade, or Cisco. Transport is transport. Someone down the road may build some proprietary features into their hardware to make NSX run better, but that day is far off. What if a manufacturer builds a switch that is twice as fast as the nearest competition? Three times? Ten times? At what point does the underlay become so important that the overlay starts preferring it exclusively?


Tom’s Take

I said a lot during the Tuesday keynote at VMworld. Some of it was rather snarky. I asked about full BGP tables and vMotioning the machines onto the new NSX network. I asked because I tend to obsess over details. Forgotten details have broken more of my networks than grand design disasters. We tend to fuss over the big things. We make more of someone that can drive a golf ball hundreds of yards than we do of the one that can consistently sink a ten-foot putt. I know that a lot of folks were pre-briefed on NSX. I wasn’t, so I’m playing catch up right now. I need to see it work in production to understand what value it brings to me. One thing is for sure – VMware needs to change the messaging around NSX to be less antagonistic towards network folks. Bring us into your solution. Let us use our years of experience to help rather than making us seem like pariahs responsible for all your application woes. Let us help you help everyone.

SDN and NFV – The Ups and Downs


I was pondering the dichotomy between Software Defined Networking (SDN) and Network Function Virtualization (NFV) the other day.  I’ve heard a lot of vendors and bloggers talking about how one inevitably leads to the other.  I’ve also seen a lot of folks saying that the two couldn’t be further apart on the scale of software networking.  The more I thought about these topics, the more I realized they are two sides of the same coin.  The problem, at least in my mind, is the perspective.

SDN – Planning The Paradigm

Software Defined Networking telegraphs everything about what it is trying to accomplish right there in the name.  Specifically, the “Definition” part of the phrase.  I’ve made jokes in the past about the lack of definition in SDN as vendors try to adapt their solutions to fit the buzzword mold.  What I finally came to realize is that the SDN folks are all about definition. SDN is the Top Down approach to planning.  SDN seeks to decompose the network into subsystems that can be replaced or reprogrammed to suit the needs of those things which utilize the network.

As an example, SDN breaks the idea of a switch down into things like “forwarding plane” and “control plane” and seeks to replace the control plane with alternative software, whether it be a controller-based architecture like OpenFlow or an overlay network similar to that of VMware/Nicira.  We can replace the OS of a switch with a concept like OpenFlow easily.  It’s just a mechanism for determining which entries are populated in the Content Addressable Memory (CAM) tables of the forwarding plane.  In top down design, it’s easy to create a stub entry or “black box” to hold information that flows into it.  We don’t particularly care how the black box works from the top of the design, just that it does its job when called upon.
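
Here’s a minimal sketch of that black box in action.  I’m using the Ryu controller framework purely for illustration; on a table miss, the controller pushes a flow entry that the switch programs into its forwarding (CAM/TCAM) tables:

```python
# A minimal sketch (Ryu chosen for illustration): an OpenFlow 1.3 app that
# answers a table miss by installing a flow entry, which the switch then
# programs into its forwarding (CAM/TCAM) tables.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class CamFiller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = ev.msg.match['in_port']

        # Toy policy: flood anything arriving on this port. A real app
        # would match on addresses learned from the packet itself.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        match = parser.OFPMatch(in_port=in_port)
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]

        # Install the flow, then release the packet that caused the miss.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
        dp.send_msg(parser.OFPPacketOut(datapath=dp,
                                        buffer_id=ev.msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=ev.msg.data))
```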

Top Down designs tend to run into issues when those black boxes lack detail or are missing some critical functionality.  What happens when OpenFlow isn’t capable of processing flows fast enough to keep the CAM table of a campus switch populated with entries?  Is the switch going to fall back to process switching the packets?  That could be a big issue.  Top Down designs are usually very academic and elegant.  They also have a tendency to lack concrete examples and real world experience.  When you think about it, that does speak a lot about the early days of SDN – lots of definition of terminology and technology, but a severe lack of actual packet forwarding.

NFV – Working From The Ground Up

Network Function Virtualization takes a very different approach to the idea of turning hardware networks into software networks.  The driving principle behind NFV is replication of existing technology in a software state.  This is classic Bottom Up design.  Rather than spending a large amount of time planning and assembling the perfect system, Bottom Up designers tend to build as they go.  They concentrate on making something work first, then making those things work together second.

NFV is great for hands-on folks because it gives concrete, real results almost immediately.  Once you’ve converted a load balancer or a router to a purely software-based construct, you can see right away how it works and what the limitations might be.  Does it consume too many resources on the hypervisor?  Does it excel at forwarding small packets?  Does switching a large packet locally cause a fault?  These are problems that can be corrected in the individual system rapidly rather than waiting to modify the overall plan to account for difficulties in the virtualization process.
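
As an illustration of how little it takes to get that first concrete result, here’s a bare-bones software load balancer.  It’s a toy with placeholder backend addresses, not a product, but running something like it immediately surfaces the resource and forwarding questions above:

```python
# A toy NFV experiment: a round-robin TCP load balancer in a few dozen
# lines of Python. Backend addresses are placeholders; error handling is
# deliberately minimal.
import itertools
import socket
import threading

BACKENDS = itertools.cycle([("10.0.0.11", 8080), ("10.0.0.12", 8080)])

def pump(src, dst):
    # Copy bytes one direction until the connection closes.
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.close()

def handle(client):
    # Pick the next backend and splice the two sockets together.
    backend = socket.create_connection(next(BACKENDS))
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    threading.Thread(target=pump, args=(backend, client), daemon=True).start()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8000))
listener.listen()
while True:
    conn, _ = listener.accept()
    handle(conn)
```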

Bottom Up design does suffer from some issues as well.  The focus in Bottom Up is on getting things done on a case-by-case basis.  What do you do when you’ve converted all your hardware to software?  Do your NFV systems need to talk to one another?  That’s usually where Bottom Up design starts breaking down.  Without a grand plan at a higher level to ensure that systems can talk to each other, this design methodology falls back to a series of “hacks” to get them connected.  Units developed in isolation aren’t required to play nice with everyone else until they are forced to do so.  That leads to increasingly complex and fragile interconnection systems that could fail spectacularly should the wrong thread be yanked with sufficient force.


Tom’s Take

Which method is better?  Should we spend all our time planning the system and hope that our Powerpoint Designs work the right way when someone codes them in a few months?  Or should we say “damn the torpedoes” and start building things left and right and hope that someone will figure out a way to tie all these individual pieces together at some point?

Surprisingly, the most successful design requires elements of both.  People need at least a basic plan when setting out to change the networking world.  Once the ideas are sketched out, you need a team of folks willing to burn the midnight oil and get the ideas implemented in real life to ensure that the plan works the right way.  The guidance from the top is essential to making sure everything works together in the end.

Whether you are leading from the top or the bottom, remember that everything has to meet in the middle sooner or later.

A Guide to SDN Spirit Animals

The world of computers and IT has always been linked with animals.  Whether you are referring to Tux the Penguin from the world of Linux or the various zoological specimens that have graced the covers of the O’Reilly Media library, you can find almost every member of the animal kingdom represented.  Many of these icons have become mascots for their users.  In the world of software defined networking (SDN), we have our own mascot as well.  However, I’m going to propose that we start considering a few more.

The Horned Wonder

If you’ve read any kind of blog post about SDN in the last year, you’ve probably seen reference to a unicorn at some point.  Unicorns are mythical creatures that are full of magic and wonder.  I referenced them once in a post concerning a network where I had trouble understanding how untagged packets were traversing VLANs without causing a meltdown.  When the network admin asked me how it was happening I replied, “They must be getting ferried around on the backs of unicorns!”  That started my association of magical things happening in networks and their subsequent attribution to unicorns.  Greg Ferro (@etherealmind) is fond of saying that new protocols without sufficient documentation must be powered by “unicorn tears”.  Ivan Pepelnjak (@ioshints) is also a huge fan of the unicorn, as evidenced by this picture:

Ivan rides his steed into battle

The unicorn is popular because it represents a fantastic explanation for a difficult problem.  However, people that I’ve talked to recently are getting tired of attributing mythical properties of various SDN-related technologies to the mighty unicorn.  I thought about it and realized that there are more suitable animals depending on what technology you’re talking about.

King of Beasts


If you ask most SDN companies, they’ll tell you that their spirit animal is the griffin.  The griffin is a mythical creature with the body and hindquarters of a lion combined with the head, wings, and front legs of an eagle.  This regal beast is regarded as a stately amalgam of the king of beasts and the king of birds.  It typically guards important and sacred treasures.  It is also a popular animal in heraldry, where it represents courage and boldness.

You can tell from that description that anyone writing an API for their existing OS or networking stack probably has one of these things hanging in their cubicle.  It stands for the best possible joining of two great ideas.  Those APIs guard the sacred treasures for those that have always wanted insight into the inner workings of a network operating system.  The griffin is the best case scenario for those that want to write an effective API or access methodology for enabling SDN.  But as we all know, sometimes the best strategies are poorly implemented.

Design by Committee


The opposite of the griffin would have to be the chimera.  A chimera is a mythical beast that has the body, head, and front legs of a lion.  It has a goat’s head jutting from the middle of the body and a snake’s head for a tail, although some sources say this is a dragon head with the associated dragon wings as well.  This nightmarish beast comes from Greek mythology, where it was an omen of disaster when spotted.

The chimera represents what happens when you try to combine things and end up with the worst possible combination.  Why is there a goat’s head in the middle?  What good does a snake head for a tail really do?  In much the same way, companies that are trying to create SDN strategies by throwing everything they can into the mix will have end results that should use a chimera for a mascot.  Rather than taking the approach of building the product with the best and most useful features, some designers feel the need to attach everything they can in an effort to replicate existing non-useful functionality.  “Better to have it and not need it” is the rallying cry most often heard.  This leads to the kind of unwieldy and bloated applications that scare people away from SDN and back to traditional networking methodology.

Tom’s Take

Every project needs a mascot.  Every product needs an icon or a fancy drawing on the product page.  Sooner or later, those mascots come to symbolize everything the project stands for.  Content penguins aside, most projects are looking for something cute or cuddly.  Security vendors are notorious for using scary looking animals to get the point across that they aren’t to be messed with.  I think that using mythological creatures other than the unicorn to symbolize SDN projects is the way to go.  It focuses the developers to ground themselves in real features.  Hopefully it helps them avoid the mentality that could create nightmarish creatures like the chimera.