SDN Use Case: Content Filtering

K-12 schools face unique challenges with their IT infrastructure.  Their user base needs access to a large amount of information while at the same time facing restrictions.  While that sounds like a typical corporate network policy, the restrictions in the education environment are legal in nature.  Schools must find new ways to guarantee that content is restricted without destroying their networks in the process.  Which leads me to ask: Can SDN help?

Online Protection

The government E-Rate program gives schools money each year under Priority 1 funding for Internet access.  Indeed, the whole point of the E-Rate program is to get schools connected to the Internet.  But we all know the Internet comes with a bevy of distractions. Many of those distractions are graphic in nature and must be eliminated in a school.  Because it’s the law.

The Children’s Internet Protection Act (CIPA) mandates that schools and libraries receiving E-Rate funding for high speed broadband Internet connections must filter those connections to remove questionable content.  Otherwise they risk losing funding for all E-Rate services.  That makes content filters very popular devices in schools, even if they aren’t funded by E-Rate (which they aren’t).

Content filters also cause network design issues.  In the old days, we had to put the content filter servers on a hub along with the outbound Internet router in order to ensure they could see all the traffic and block the bad bits.  That became increasingly difficult as network switch speeds increased.  Forcing hundreds of megabits through a 10Mbit hub was counterproductive.  Moving to switchport mirroring alleviated the speed issues but still caused network design problems.  Now, content filters can run on firewalls and bastion host devices or are enabled via proxy settings in the cloud.  But we all know that running too many services on a firewall causes performance issues.  Or leads to buying a larger firewall than needed.

Another issue that has crept up as of late is the use of Virtual Private Networking (VPN) as a way to defeat the content filter.  Setting up an SSL VPN to an outside, non-filtered device is pretty easy for a knowledgeable person.  And if that fails, there are plenty of services out there dedicated to defeating content filtering.  While the aim of these services is noble, such as bypassing the Great Firewall of China or the mandated Internet filtering in the UK, they can also be used to bypass the CIPA-mandated filtering in schools.  It’s a high-tech game of cat-and-mouse: block access to one VPN and three more pop up to replace it.

Software Defined Protection

So how can SDN help?  Service chaining allows traffic to be directed to a given device or virtual appliance before being passed on through the network.  This great presentation from Networking Field Day 7 presenter Tail-f Networks shows how service chaining can force traffic through security devices like IDS/IPS and through content filters as well.  There is no need to add hubs or mirrored switch ports in your network.  There is also no need to configure traffic to transit the same outbound router or firewall, thereby creating a single point of failure.  Thanks to the magic of SDN, the packets go to the filter automatically.  That’s because they don’t really have a choice.
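
To make that concrete, here is a minimal sketch of what pushing a service-chaining rule might look like.  The controller URL, JSON schema, and port numbers below are assumptions for illustration only, not Tail-f’s API or any real product interface.

```python
# Hypothetical sketch: force student web traffic through the content filter
# before it follows the normal forwarding path. The controller URL, JSON
# schema, and port numbers are assumptions, not a real product API.
import requests

CONTROLLER_URL = "https://sdn-controller.example.edu/api/flows"  # hypothetical
STUDENT_SUBNET = "10.20.0.0/16"   # student VLAN addressing (assumed)
FILTER_PORT = 12                  # switch port facing the content filter (assumed)

flow = {
    "priority": 500,
    "match": {
        "ipv4_src": STUDENT_SUBNET,  # only chain traffic sourced from students
        "ip_proto": 6,               # TCP
        "tcp_dst": 80,               # plain HTTP; add a second rule for 443
    },
    # Instead of following the normal L3 path, hand the packets to the filter.
    "actions": [{"type": "OUTPUT", "port": FILTER_PORT}],
}

resp = requests.post(CONTROLLER_URL, json=flow, timeout=5)
resp.raise_for_status()
print("Service-chain flow installed:", resp.status_code)
```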

It also works well for providers wanting to offer filtering as a service to schools.  This allows a provider to configure the edge network to force traffic to a large central content filter cluster and ensure delivery.  It also allows the service provider network to operate without impact to non-filtered customers.  That’s very useful even in ISPs dedicated to educational institutions, as the filter provisions for K-12 schools don’t apply to higher education facilities like colleges and universities.  Service chaining would allow the college to stay free and clear while the high schools are cleansed of inappropriate content.

The VPN issue is a thorny one for sure.  How do you classify traffic that is trying to hide from you?  Even services like Netflix are having trouble blocking VPN usage, and they stand to lose millions if they can’t.  How can SDN help in this situation?  We could build policies to drop traffic headed for known VPN endpoints.  That should take care of the services that make configuration easy and serve as proxy points.  But what about those tech-savvy kids that set up SSL VPNs back home?
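
The first half of that plan is the easy part.  As a rough sketch, dropping traffic bound for known VPN endpoints could be as simple as pushing a small blocklist of flows; the addresses and controller API below are hypothetical and used only for illustration.

```python
# Hypothetical sketch: turn a blocklist of known VPN/proxy endpoints into
# drop rules on the controller. The endpoint list and API URL are assumptions.
import requests

CONTROLLER_URL = "https://sdn-controller.example.edu/api/flows"  # hypothetical
KNOWN_VPN_ENDPOINTS = [
    "198.51.100.17",  # documentation-range addresses standing in for real ones
    "203.0.113.44",
]

def block_endpoint(ip_addr: str) -> None:
    """Install a high-priority flow that drops anything headed to ip_addr."""
    flow = {
        "priority": 900,
        "match": {"ipv4_dst": ip_addr},
        "actions": [],  # an empty action list means the packet is dropped
    }
    requests.post(CONTROLLER_URL, json=flow, timeout=5).raise_for_status()

for endpoint in KNOWN_VPN_ENDPOINTS:
    block_endpoint(endpoint)
```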

Luckily, SDN can help there as well.  Many unified threat management appliances offer the ability to intercept SSL conversations.  This is an outgrowth of sites like Facebook defaulting to SSL to increase security.  SSL intercept essentially acts as a man-in-the-middle attack.  The firewall decrypts the SSL conversation, scans the packets, and re-encrypts it using a different certificate.  When the packets come back in, the process is reversed.  This SSL intercept capability would allow those SSL VPN packets to be dropped when detected.  The SDN component ensures that HTTPS traffic is always redirected to a device that can do SSL intercept, rather than taking a path through the network that might lead to a different exit point.

Tom’s Take

Content filtering isn’t fun.  I’ve always said that I don’t envy the jobs of people that have to wade through the unsavory parts of the Internet to categorize bits as appropriate or not.  It’s also a pain for network engineers that need to keep redesigning the network and introducing points of failure to meet federal guidelines for decency.  SDN holds the promise of making that easier.  In the above Tail-f example, the slide deck shows a UI that allows simple blocking of common protocols like Skype.  This could be extended to schools where student computers and wireless networks are identified and bad programs are disallowed while web traffic is pushed to a filter and scrubbed before heading out to the Wild Wild Web.  SDN can’t solve every problem we might have, but if it can make the mundane and time-consuming problems easier, it might just give people the breathing room they need to work on the bigger issues.

An Educational SDN Use Case

During the VMUnderground Networking Panel, we had a great discussion about software defined networking (SDN) among other topics. Seems that SDN is a big unknown for many out there. One of the reasons for this is the lack of specific applications of the technology. OSPF and SQL are things that solve problems. Can the same be said of SDN? One specific question regarded how to use SDN in small-to-medium enterprise shops. I fired off an answer from my own experience:

Since then, I’ve had a few people using my example with regards to a great use case for SDN. I decided that I needed to develop it a bit more now that I’ve had time to think about it.

Schools are a great example of the kinds of “do more with less” organizations that are becoming more common. They have enterprise-class networks and needs and live off budgets that wouldn’t buy janitorial supplies. In fact, if it weren’t for E-Rate, most schools would have technology from the Stone Age. But all this new tech doesn’t help if you can’t find a way for it to be used to the fullest for the purposes of educating students.

In my example, I talked about the shift from written standardized testing to online assessments. Oklahoma and Indiana are leading the way in getting rid of Scantrons and pencils in favor of keyboards and monitors. The process works well for the most part with proper planning. My old job saw lots of scrambling to prep laptops, tablets, and lab machines for the rigors of running the test. But no amount of pre-config could prepare for the day when it was time to go live. On those days, the network was squarely in the sights of the administration.

I’ve seen emails go around banning non-testing students from the computers. I’ve seen hard-coded DNS entries on testing machines while the rest of the school had DNS taken offline to keep them from surfing the web. Redundant circuits. QoS policies that would make voice engineers cry. All in the hope of keeping the online test bandwidth free to get things completed in the testing window. All the while, I was thinking to myself, “There has got to be an easier way to do this…”

Redefining with Software

Enter SDN. The original use case for SDN at Stanford was network slicing. The Next-Gen Network Team wanted to use the spare capacity of the network for testing without crashing the whole system. Being able to reconfigure the network on the fly is a huge leap forward. Pushing policy into devices without CLI cuts down on the resume-generating events (RGE) in production equipment. So how can we apply these network slicing principles to my example?

On the day of the test, have the configuration system push down a new policy that gives the testing machines a guaranteed amount of bandwidth. This reservation will ensure each machine is able to get what it needs without being starved out. With SDN, we can set this policy on a per-IP basis to ensure it is enforced. This slice will exist separate from the production network to ensure that no one starting a huge FTP transfer or video upload will disrupt testing. By leaving the remaining bandwidth intact for the rest of the school’s production network, administrators can ensure that the rest of the student body isn’t impacted during testing. With the move toward flipped classrooms and online curriculum augmentation, having available bandwidth is crucial.
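
As a rough idea of what that push could look like, the sketch below maps each testing machine to a pre-provisioned guaranteed-bandwidth queue. The controller URL, queue numbering, and JSON schema are assumptions for illustration, not any specific product's API.

```python
# Hypothetical sketch: on test day, steer each testing machine's traffic into a
# queue that has a guaranteed minimum rate, leaving the default queue for
# everyone else. The URL, queue IDs, and schema are assumptions.
import requests

CONTROLLER_URL = "https://sdn-controller.example.edu/api/flows"  # hypothetical
TESTING_MACHINES = ["10.20.30.{}".format(host) for host in range(10, 41)]  # assumed lab IPs
GUARANTEED_QUEUE = 1  # queue pre-provisioned with a minimum rate for testing

for ip_addr in TESTING_MACHINES:
    flow = {
        "priority": 700,
        "match": {"ipv4_src": ip_addr},
        "actions": [
            # SET_QUEUE puts the machine's traffic into the guaranteed queue;
            # NORMAL lets the switch forward it the way it usually would.
            {"type": "SET_QUEUE", "queue_id": GUARANTEED_QUEUE},
            {"type": "OUTPUT", "port": "NORMAL"},
        ],
    }
    requests.post(CONTROLLER_URL, json=flow, timeout=5).raise_for_status()

print("Guaranteed-bandwidth slice applied to", len(TESTING_MACHINES), "machines")
```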

Could this be done via non-SDN means? Sure. Granted, you’ll have to plan the QoS policy ahead of time and find a way to classify your end-user workstations properly. You’ll also have to tune things to make sure no one is dominating the test machine pool. And you have to get it right on every switch you program. And remove it when you’re done. Unless you missed a student or a window, in which case you’ll need to reprogram everything again. SDN certainly makes this process much easier and less disruptive.


Tom’s Take

SDN isn’t a product. You can’t order a part number for SDN-001 and get a box labeled SDN. Instead, it’s a process. You apply SDN to an existing environment and extend its capabilities through new processes. Those processes need use cases. Use cases drive business cases. Business cases provide buy-in from the stakeholders. That’s why discussing cases like the one above is so important. When you can find a use for SDN, you can get people to accept it. And that’s half the battle.

Who Wants A Free Puppy?

Years ago, my wife was out on a shopping trip. She called me excitedly to tell me about a blonde shih-tzu puppy she found and just had to have. As she talked, I thought about all the things that this puppy would need to thrive. Regular walks, food, and love are paramount on the list. I told her to use her best judgement rather than flat out saying “no”. Which is also how I came to be a dog owner. Today, I’ve learned there is a lot more to puppies (and dogs) than walks and feeding. There is puppy-proofing your house. And cleaning up after accidents. And teaching the kids that puppies should be treated gently.

An article from Martin Glassborow last week made me start thinking about our puppy again. Scott McNealy is famous for having told the community back in 2005 that “open source is free like a puppy.” While this was a dig at the community regarding the investment that open source takes, I think Scott was right on the mark. I also think Martin’s recent article illustrates some of the issues that management and stakeholders don’t see with community projects.

Open software today takes care and feeding. Only instead of a single OS on a server in the back of the data center, it’s all about new networking paradigms (OpenFlow) or cloud platform plays (OpenStack). This means there are many more moving parts. Engineers and programmers get it. But go to the stakeholders and try to explain what that means. The decision makers love the price of open software. They are ambivalent about the benefits to the community. However, the cost of open projects is usually much higher than the price. People have to invest to see benefits.

TNSTAAFL

At the recent SolidFire Summit, two cloud providers were talking about their software. One was hooked in to the OpenStack community. He talked about having an entire team dedicated to pulling nightly builds and validating them. They hacked their own improvements and pushed them back upstream for the good of the community. He seemed to love what he was talking about. The provider next to him was just a little bit larger. When asked what his platform was, he answered “CloudStack”. When I asked why, he didn’t hesitate. “They have support options. I can have them fix all my issues.”

Open projects appeal to the hobbyist in all of us. It’s exciting to build something from the ground up. It’s a labor of love in many cases. Labors of love don’t work well for some enterprises though. And that’s the part that most decision makers need to know. Support for this awesome new thing may not always be immediate or complete. To bring this back to the puppy metaphor, you have to have patience as your puppy grows up and learns not to chew on slippers.

The reward for all this attention? A loving pet in the case of the puppy. In the case of open software, you have a workable framework all your own that is customized to your needs and very much a part of your DNA. Supported by your staff and hopefully loved as much as or more than any other solution. Just like dog owners that look forward to walking the dog or playing catch at the dog park, your IT organization should look forward to the new and exciting challenges that can be solved with the investment of time.


Tom’s Take

Nothing is free. You either pay for it with money or with time. Free puppies require the latter, just as free software projects do. If the stakeholders in the company look at it as an investment of time and energy then you have the right frame of mind from the outset. If everything isn’t clear up front, you will find yourself needing to defend all the time you’ve spent on your no-cost project. Hopefully your stakeholders are dog people so they understand that the payoff isn’t in the price, but the experience.

End Of The CLI? Or Just The Start?

Windows 8.1 Update 1 launches today. The latest chapter in Microsoft’s newest OS includes a feature people have been asking for since release: the Start Menu. The biggest single UI change in Windows 8 was the removal of the familiar Start button in favor of a combined dashboard / Start screen. While the new screen is much better for touch devices, the desktop population has been screaming for the return of the Start Menu. Windows 8.1 brought the button back, although it only linked to the Start screen. Update 1 promises to add functionality to the button once more. As I thought about it, I realized there are parallels here that we in the networking world can learn from as well.

Some very smart people out there, like Colin McNamara (@ColinMcNamara) and Matt Oswalt (@Mierdin), have been talking about the end of the command line interface (CLI). With the advent of programmable networks and API-driven configuration, the CLI is archaic and unnecessary, or so the argument goes. Yet there is a very strong contingent of the networking world that is clinging to the comfortable glow of a terminal screen and 80-column text entry.

Command The Line

API-driven interfaces provide flexibility that we can’t hope to match in a human interface. There is no doubt that a large portion of the configuration of future devices will be done via API call or some sort of centralized interface that programs the end device automatically. Yet, as I’ve said before, engineers don’t like losing visibility into a system. Getting rid of the CLI for the sake of streamlining a device is a bad idea.

I’ve worked with many devices that don’t have a CLI. Cisco Catalyst Express switches leap immediately to mind. Other devices, like the Cisco UC500 SMB phone system, have a CLI but use of it is discouraged. In fact, when you configure the UC500 using the CLI, you start getting warnings about not being able to use the GUI tool any longer. Yet there are functions that are only visible through the CLI.

Non-Starter

Will the programmable networking world make the same mistake Microsoft did with Windows 8? Even a token CLI is better than cutting it out entirely. Programmable networking will allow all kinds of neat tricks. For instance, we can present a Cisco-like CLI for one group of users and a Juniper-like CLI for a different group, with both accomplishing the same results. We don’t need to have these CLIs sitting around resident in memory. We should be able to generate them on the fly or call the appropriate interfaces from a centralized library. Extensibility, even in the archaic interface of last resort.
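
As a toy example of what generating those dialects on the fly might look like, the sketch below renders one abstract intent into two CLI flavors from a central template library. The command strings are rough approximations for illustration, not authoritative vendor syntax.

```python
# Toy sketch: render one abstract intent ("create VLAN 42 named TESTING") into
# different CLI dialects from a central template library. The command strings
# are rough approximations, not authoritative vendor syntax.

VLAN_TEMPLATES = {
    "cisco-like": "vlan {vlan_id}\n name {name}",
    "juniper-like": "set vlans {name} vlan-id {vlan_id}",
}

def render_vlan(dialect: str, vlan_id: int, name: str) -> str:
    """Return the dialect-flavored commands for a single VLAN intent."""
    return VLAN_TEMPLATES[dialect].format(vlan_id=vlan_id, name=name)

if __name__ == "__main__":
    for dialect in VLAN_TEMPLATES:
        print("--- {} ---".format(dialect))
        print(render_vlan(dialect, 42, "TESTING"))
```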

If all our talk revolves around removing the tool people have been using for decades to program devices, we will make enemies quickly. The talk needs to shift away from the death of the CLI and more toward the advantages gained through adding API interfaces to your programming. Even if our interface into calling those APIs looks similar to a comfortable CLI, you’re going to win more converts up front if you give them something they recognize as a transition mechanism.


Tom’s Take

Microsoft bit off more than they could chew when they exiled the Start Menu to the same pile as DOSShell and Microsoft Bob. People have spent almost 20 years humming the Rolling Stones’ “Start Me Up” as they click on that menu. Microsoft drove users to this approach. To pull it out from under them all at once with no transition plan made for unhappy users. Networking advocates need to be just as cognizant of the fact that we’re headed down the same path. We need to provide transition options for the die-hard engineers out there so they can learn how to program devices via non-traditional interfaces. If we try to make them quit cold turkey you can be sure the Start Menu discussion will pale in comparison.

The OpenFlow Longbow

The label of disruption seems to be thrown around quite a bit.  I’ve heard tablets being called the disruptive technology when it comes to PCs.  I’ve also heard people talking about software defined networking (SDN) as the disruptive technology to the way that we’ve been doing networking for the last decade.  Nowhere is that more true than with OpenFlow.  But what does this disruption mean?  Aren’t we still essentially forwarding packets the same way we have in the past?  To get a frame of reference, let’s look at one of my favorite disruptive technologies – the longbow.

Who Are Yew?

The Welsh yew longbow had been a staple of the English military as far back as 600 AD.  The recurve shape of the bow provided a faster, longer arrow flight than the shorter bows used by foot soldiers and mounted cavalry.  The wood was especially important, as the yew tree was only really found in abundance in Wales.  Longbows existed in one form or another in many armies in Europe, but the Welsh bow was the only one that was feared.

The advantage was never more apparent than during the Hundred Years War between England and France.  The longbow was deployed as mid-range artillery to harass advancing troops.  There are questions about whether or not the arrows were able to pierce the plate armor used by knights at the time, but the results of the archery corps can’t be denied.  Especially in the Battle of Agincourt.  The English used longbows to slow the advance of French forces in armor, tiring them out as they crossed the field and holding them at bay until the heavier foot soldiers of the English army could be repositioned to take the advancing enemy apart.  Agincourt was a win for the English, and five years later the war was over.

The longbow proved itself to be a very disruptive technology.  Not because it killed soldiers better or faster than a mace or broadsword.  It was disruptive because it changed the way generals composed their armies.  Instead of relying on heavy assault troops in armor to punch a hole in the enemy lines, the longbow forced a soldier to become more mobile with less armor, able to cross the range of the bow much more quickly and close to a range where the technology advantage was negated.  Bowmen were at a distinct disadvantage at point-blank range.  Armies grew more mobile all the way up to the point where a new technology disrupted the reign of the longbow: gunpowder.  Once musketeers became more prevalent, they replaced the role traditionally held by the longbow archer.

Disrupting the Flow

How does this history lesson apply to OpenFlow?  OpenFlow is poised to disrupt networking in the same way as the longbow forced people to take a new look at their armies.  OpenFlow takes something we know about and turns it on its head.  It gives us much more control over how a switch forwards packets.  It also makes us ask questions about how we build our networks.

Questions about big core switches, spine-and-leaf topologies, and the intelligence of edge devices all become very pertinent in an OpenFlow design.  Should I put a larger switch at the edge since it’s going to be doing a lot of heavy lifting?  Should I use a fabric in place of a three-tier design?  Will my controller allow me to use different interconnects to ensure high-speed traffic flows east and west in the data center?

OpenFlow is taking over some vendors’ offerings.  NEC and HP have already committed to OpenFlow designs.  Even companies that haven’t really embraced OpenFlow have decided to offer it rather than dismiss it.  Arista and Cisco are offering new switches that have support for OpenFlow, even if that support may not extend to more proprietary enhancements right now.  Just like the longbow, OpenFlow is forcing the opposition to reconfigure the way they fight the battle.  They may not like it.  They may even say in private that they’re just doing it to mollify a part of the customer base looking for specific points in a proposal.  But they are still dedicating time and effort to OpenFlow all the same.


Tom’s Take

Disruption happens all the time.  We don’t use cell phones in bags hardwired into our vehicles any more.  Our computers are no longer the size of a broom closet and don’t run off of punch cards.  Just like weapons in the ancient world, whoever comes up with a more effective way of winning battles enjoys a distinct advantage for a time.  Eventually, something comes along that disrupts the disruption.  OpenFlow is currently the king of the SDN battlefield.  It holds that title by virtue of how many people are racing to interoperate with it.  Eventually, it will be dethroned just as the longbow was.  The key will be recognizing the next new thing first and using it to your advantage.  And arming your archers with it.

HP Networking and the Software Defined Store

HP has had a pretty good track record with SDN, even if it’s not very well-known.  HP has embraced OpenFlow on a good number of its ProCurve switches.  Given the age of these devices, there’s a good chance you can find them lying around in labs or in retired network closets to test with.  But where is that going to lead in the long run?

HP Networking was kind enough to come to Interop New York and participate in a Tech Field Day roundtable.  It had been a while since I talked to their team.  I wanted to see how they were handling the battle being waged between OpenFlow proponents like NEC and Brocade, Cisco and their hardware focus, and VMware with NSX.  Jacob Rapp and Chris Young (@NetManChris) stepped up to the plate to talk about SDN and the vision of HP.

They cover a lot of ground in here.  Probably the most important piece to me is the SDN app store.

The press picked up on this quickly.  HP has an interesting idea here.  I should know.  I mentioned it in passing in an article I wrote a month ago.  The more I think about the app store model, the more I realize that many vendors are going to go down this road.  Just not in the way HP is thinking.

HP wants to curate content for enterprises.  They want to ensure that software works with their controller to be sure that there aren’t any hiccups in implementation.  Given their apparent distaste for open source efforts, it’s safe to say that their efforts will only benefit HP customers.  That’s not to say that those same programs won’t work on other controllers.  So long as they operate according to the guidelines laid down by the Open Networking Foundation, all should be good.

Show Me The Money

Where’s the value then?  That’s in positioning the apps in the store.  Yes, you’re going to have some developers come to HP wanting to put simple apps in the store.  Odds are better that you’re going to see more recognizable vendors coming to the HP SDN store.  People are more likely to buy software from a name they recognize, like TippingPoint or F5.  That means those companies are going to want a prime spot in the store.  HP is going to make something from hosting those folks.

The real revenue doesn’t come from an SMB buying a load balancer once.  It comes from a company offering it as a service with a recurring fee.  The vendor gets a revenue stream. HP would be wise to work out a recurring fee as well.  It won’t be the juicy 30% cut that Apple enjoys from their walled garden, but anything would be great for the bottom line.  Vendors win from additional sales.  Customers win from having curated apps that work every time that are easy to purchase, install, and configure.  HP wins because everyone comes to them.

Fragmentation As A Service

Now that HP has jumped on the idea of an enterprise-focused SDN app store, I wonder which company will be the next to offer one?  I also worry that having multiple app stores will end up being cumbersome in the long run.  Small developers won’t like submitting their app to four or five different vendor-affiliated stores.  More likely they’ll resort to releasing code on their own rather than jump through hoops.  That will eventually lead to support fragmentation.  Fragmentation helps no one.


Tom’s Take

HP Networking did a great job showcasing what they’ve been doing in SDN.  It was also nice to hear about their announcements the day before they broke wide to the press.  I think HP is going to do well with OpenFlow on their devices.  Integrating OpenFlow visibility into their management tools is also going to do wonders for people worried about keeping up with all the confusing things that SDN can do to a traditional network.  The app store is a very intriguing concept that bears watching.  We can only hope that it ends up being a well-respected entry in a long line of efforts easing customers into the greater SDN world.

Tech Field Day Disclaimer

HP was a presenter at the Tech Field Day Interop Roundtable.  In addition, they also provided the delegates a 1TB USB3 hard disk drive.  They did not ask for any consideration in the writing of this review nor were they promised any.  The conclusions and analysis contained in this post are mine and mine alone.

SDN 101 at ONUG Academy

Software defined networking is king of the hill these days in the greater networking world.  Vendors are contemplating strategies.  Users are demanding functionality.  And engineers are trying to figure out what it all means.  What’s needed is a way for vendor-neutral parties to get together and talk about what SDN represents and how best to implement it.  Most of the talk so far has been at vendor-specific conferences like Cisco Live or at other conferences like Interop.  I think a third option has just presented itself.

Nick Lippis (@NickLippis) has put together a group of SDN-focused people to address concerns about implementation and usage.  The Open Networking User Group (ONUG) was assembled to allow large companies using SDN to have a semi-annual meeting to discuss strategy and results.  It allows Facebook to talk to JP Morgan about what they are doing to simplify networking through use of things like OpenFlow.

This year, ONUG is taking it a step further by putting on the ONUG Academy, a day-long look at SDN through the eyes of those that implement it.  They have assembled a group of amazing people, including the founder of Cumulus Networks and Tech Field Day’s own Brent Salisbury (@NetworkStatic).  There will be classes about optimizing networks for SDN as well as writing SDN applications for the most popular controllers on the market.  Nick shares more details about the ONUG academy here:

If you’re interested in attending ONUG either for the academy or for the customer-focused meetings, you need to register today.  As a special bonus, if you use the code TFD10 when you sign up, you can take 10% off the cost of registration.  Use that extra cash to go out and buy a cannoli or two.

I’ll be at ONUG with Tech Field Day interviewing customers and attendees about their SDN strategies as well as where they think the state of the industry is headed.  If you’re there, stop by and say hello.  And be sure to bring me one of those cannolis.

Know the Process, Not the Tool

If there is one thing that amuses me as of late, it’s the “death of CLI” talk that I’m starting to see coming from many proponents of software defined networking. They like to talk about programmatic APIs and GUI-based provisioning and how everything that network engineers have learned is going to fall by the wayside.  Like this Network World article. I think reports of the death of CLI are a bit exaggerated.

Firstly, the CLI will never go away. I learned this when I started working with an Aerohive access point I got at Wireless Field Day 2. I already had a HiveManager account provisioned thanks to Devin Akin (@DevinAkin), so all I needed to do was add the device to my account and I would be good to go. Except it never showed up. I could see it on my local network, but it never showed up in the online database. I rebooted and reset several times before flipping the device over and finding a curious port labeled “CONSOLE”. Why would a cloud-based device need a console port? In the next hour, I learned a lot about the way Aerohive APs are provisioned and how there were just some commands that I couldn’t enter in the GUI that helped me narrow down the problem. After fixing a provisioning glitch in HiveManager the next day, I was ready to go. The CLI didn’t fix my problem, but I did learn quite a bit from it.

Basic interfaces give people a great way to see what’s going on under the hood. Given that most folks in networking are from the mold of “take it apart to see why it works,” the CLI is great for them. I agree that memorizing a 10-argument command to configure something like route redistribution is a pain in the neck, but that doesn’t come from the difficulty of networking. Instead, the difficulty lies in speaking the language.

I’ve traveled to a foreign country once or twice in my life. I barely have a grasp of the English language at times. I can usually figure out some Spanish. My foreign language skills have pretty much left me at this point. However, when I want to make myself understood to people that speak another language, I don’t focus on syntax. Instead, I focus on ideas. Pointing at an object and making gestures for money usually gets the point across that I want to buy something. Pantomiming a drinking gesture will get me to a restaurant.

Networking is no different. When I started trying to learn CLI terminology for Brocade, Arista, and HP I found they were similar in some respects but very different in others. When you try to take your Cisco CLI skills to a Juniper router, you’ll find that you aren’t even in the neighborhood when it comes to syntax. What becomes important is *what* you’re trying to do. If you can think through what you’re trying to accomplish, there’s usually a help file or a Google search that can pull up the right way to do things.

This extends its way into a GUI/API-driven programming interface as well. Rather than trying to intuit the interface, just think about what you want to do instead. If you want two hosts to talk to each other through a low-cost link with basic security, you just have to figure out what the drag-and-drop is for that. If you want to force application-specific traffic to transit a host running an intrusion prevention system, you already know what you want to do. It’s just a matter of finding the right combination of interface programming to accomplish it. If you’re working on an API call using Python or Java, you probably have to define the constraints of the system anyway. The hard part is writing the code that interfaces with the system to accomplish the task.
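
To illustrate the point, here is a hypothetical sketch of the *what* expressed as an API call: steer web traffic between two hosts through an IPS before it goes anywhere else. The controller URL and payload schema are assumptions used only to show the shape of the task, not a real controller's interface.

```python
# Hypothetical sketch: express the intent "send this web traffic through the
# IPS" as a single API call. The URL and payload schema are assumptions.
import requests

CONTROLLER_URL = "https://sdn-controller.example.com/api/policies"  # hypothetical

policy = {
    "name": "web-through-ips",
    "match": {
        "src_host": "10.1.1.10",
        "dst_host": "10.2.2.20",
        "tcp_dst": 443,
    },
    # The intent: insert the IPS into the path. How each individual switch
    # gets programmed to make that happen is the controller's problem.
    "service_chain": ["ips-01"],
}

resp = requests.post(CONTROLLER_URL, json=policy, timeout=5)
resp.raise_for_status()
print("Policy accepted with status", resp.status_code)
```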


Tom’s Take

Learning the process is the key to making it in networking. So many entry level folks are worried about *how* to do something. Configuring a route or provisioning a VLAN is the end goal. It’s only when those folks take a step back and think about their task without the commands that they begin to become real engineers. When you can visualize what you want to do without thinking about the commands you need to enter to do it, you are taking the logical step beyond being tied to a platform. Some of the smartest people I know break a task down into component parts and steps. When you spend more time on *what* you are doing and less on *how* you are doing it, you don’t need to concern yourself with radical shifts in networking, whether they be SDN, NFV, or the next big thing. Because the process will never change even if the tools might.

A Guide to SDN Spirit Animals

The world of computers and IT has always been linked with animals.  Whether you are referring to Tux the Penguin from the world of Linux or the various zoological specimens that have graced the covers of the O’Reilly Media library, you can find almost every member of the animal kingdom represented.  Many of these icons have become mascots for their users.  In the world of software defined networking (SDN), we have our own mascot as well.  However, I’m going to propose that we start considering a few more.

The Horned Wonder

If you’ve read any kind of blog post about SDN in the last year, you’ve probably seen reference to a unicorn at some point.  Unicorns are mythical creatures that are full of magic and wonder.  I referenced them once in a post concerning a network where I had trouble understanding how untagged packets were traversing VLANs without causing a meltdown.  When the network admin asked me how it was happening I replied, “They must be getting ferried around on the backs of unicorns!”  That started my association of magical things happening in networks and their subsequent attribution to unicorns.  Greg Ferro (@etherealmind) is fond of saying that new protocols without sufficient documentation must be powered by “unicorn tears”.  Ivan Pepelnjak (@ioshints) is also a huge fan of the unicorn, as evidenced by this picture:

Ivan rides his steed into battle

The unicorn is popular because it represents a fantastic explanation for a difficult problem.  However, people that I’ve talked to recently are getting tired of attributing mythical properties of various SDN-related technologies to the mighty unicorn.  I thought about it and realized that there are more suitable animals depending on what technology you’re talking about.

King of Beasts

If you ask most SDN companies, they’ll tell you that their spirit animal is the griffin.  The griffin is a mythical creature with the body and hindquarters of a lion combined with the head, wings, and front legs of an eagle.  This regal beast is regarded as a stately amalgam of the king of beasts and the king of birds.  It typically guards important and sacred treasures.  It is also a popular animal in heraldry, where it represents courage and boldness.

You can tell from that description that anyone writing an API for their existing OS or networking stack probably has one of these things hanging in their cubicle.  It stands for the best possible joining of two great ideas.  Those APIs guard the sacred treasures for those that have always wanted insight into the inner workings of a network operating system.  The griffin is the best case scenario for those that want to write an effective API or access methodology for enabling SDN.  But as we all know, sometimes the best strategies are poorly implemented.

Design by Committee

The opposite of the griffin would have to be the chimera.  A chimera is a mythical beast that has the body, head, and front legs of a lion.  It has a goat’s head jutting from the middle of the body and a snake’s head for a tail, although some sources say this is a dragon head with the associated dragon wings as well.  This nightmarish beast comes from Greek mythology, where it was an omen of disaster when spotted.

The chimera represents what happens when you try to combine things and end up with the worst possible combination.  Why is there a goat’s head in the middle?  What good does a snake head for a tail really do?  In much the same way, companies that are trying to create SDN strategies by throwing everything they can into the mix will have end results that should use a chimera for a mascot.  Rather than taking the approach of building the product with the best and most useful features, some designers feel the need to attach everything they can in an effort to replicate existing non-useful functionality.  “Better to have it and not need it” is the rallying cry most often heard.  This leads to the kind of unwieldy and bloated applications that scare people away from SDN and back to traditional networking methodology.

Tom’s Take

Every project needs a mascot.  Every product needs an icon or a fancy drawing on the product page.  Sooner or later, those mascots come to symbolize everything the project stands for.  Content penguins aside, most projects are looking for something cute or cuddly.  Security vendors are notorious for using scary-looking animals to get the point across that they aren’t to be messed with.  I think that using mythological creatures other than the unicorn to symbolize SDN projects is the way to go.  It forces the developers to ground themselves in real features.  Hopefully it helps them avoid the mentality that could create nightmarish creatures like the chimera.