The Walls Are On Fire

There’s no denying the fact that firewalls are a necessary part of modern perimeter security. NAT isn’t a security construct. With access to so many DDoS networks, attackers have the equivalent of megaton nuclear arsenals. Security admins have to do everything they can to prevent these problems from happening. But one look at the firewall market tells you something is terribly wrong.

Who’s Protecting First?

Take a look at this recent magic polygon from everyone’s favorite analyst firm:

FW Magic Polygon. Thanks to @EtherealMind.

I won’t deny that Checkpoint is on top. That’s mostly due to the fact that they have the biggest install base in enterprises. But I disagree with the rest of this mystical tesseract. How is Palo Alto a leader in the firewall market? I thought their devices were mostly designed around mitigating internal threats? And how is everyone not named Cisco, Palo Alto, or Fortinet relegated to the Niche Players corral?

The issue comes down to purpose. Most firewalls today aren’t packet filters. They aren’t designed to keep the bad guys out of your networks. They are unified threat management systems. That’s a fancy way of saying they have a whole bunch of software built on top of the packet filter to monitor outbound connections as well.

Insider threats are a huge security issue. People on the inside of your network have access. They sometimes have motive. And their power goes largely unchecked. They do need to be monitored by something, whether it be an IDS/IPS system or a data loss prevention (DLP) system that keeps sensitive data from leaking out. But how did all of those devices get lumped together?

Deploying security monitoring tools is as much art as it is science. IPS sensors can be placed in strategic points of your network to monitor traffic flows and take action on them. If you build it correctly you can secure a huge enterprise with relatively few systems.

But more is better, right? If three IPS units make you secure, six would make you twice as secure, right? No. What you end up with is twice as many notifications. Those start getting ignored quickly. That means the real problems slip through the cracks because no one pays attention any more. So rather than deploying multiple smaller units throughout the network, the new mantra is to put an IPS in the firewall in the front of the whole network.

The firewall is the best place for those sensors, right? All the traffic in the network goes through there after all. Well, except for the user-to-server traffic. Or traffic that is internally routed without traversing the Internet firewall. Crafty insiders can wreak havoc without ever touching an edge IPS sensor.

And that doesn’t even begin to describe the processing burden placed on the edge device by loading it down with more and more CPU-intensive software. Consider the following conversation:

Me: What is the throughput on your firewall?

Them: It’s 1 Gbps!

Me: What’s the throughput with all the features turned on?

Them: Much less than 1 Gbps…

When a selling point of your UTM firewall is that the numbers are “real”, you’ve got a real problem.

What’s Protecting Second?

There’s an answer out there to fix this issue: disaggregation. We now have the capability to break out the various parts of a UTM device and run them all in virtual software constructs thanks to Network Function Virtualization (NFV). And they will run faster and more efficiently. Add in the ability to use SDN service chaining to ensure packet delivery and you have a perfect solution. For almost everyone.
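To make that disaggregation idea a little more concrete, here’s a rough Python sketch. Every function name and rule below is made up for illustration and isn’t any vendor’s implementation; it just models each UTM feature as its own virtual function and uses a service chain to decide which functions a given traffic flow traverses:

```python
# Toy model of NFV disaggregation with a service chain: each UTM feature
# becomes its own virtual function, and a chain decides which ones a
# packet traverses. Names and rules here are illustrative only.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Packet:
    src: str
    dst: str
    port: int
    payload: str = ""


def packet_filter(pkt: Packet) -> bool:
    # Basic stateless filter: only allow web ports.
    return pkt.port in (80, 443)


def ips_inspect(pkt: Packet) -> bool:
    # Stand-in signature check; a real IPS engine is far more involved.
    return "attack-pattern" not in pkt.payload


def dlp_scan(pkt: Packet) -> bool:
    # Stand-in data loss check on outbound content.
    return "CONFIDENTIAL" not in pkt.payload


@dataclass
class ServiceChain:
    """An ordered list of virtual functions applied to each packet."""
    functions: List[Callable[[Packet], bool]] = field(default_factory=list)

    def process(self, pkt: Packet) -> bool:
        # A packet is forwarded only if every function in the chain passes it.
        return all(fn(pkt) for fn in self.functions)


# Inbound traffic gets the filter and IPS; outbound traffic gets DLP instead.
inbound_chain = ServiceChain([packet_filter, ips_inspect])
outbound_chain = ServiceChain([packet_filter, dlp_scan])

print(inbound_chain.process(Packet("203.0.113.5", "10.0.0.10", 443)))   # True
print(outbound_chain.process(Packet("10.0.0.10", "198.51.100.7", 443,
                                    payload="CONFIDENTIAL report")))    # False
```

The point of the sketch is that each function can be scaled, placed, or replaced independently instead of living inside one overloaded edge box.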

Who’s not going to like it? The big UTM vendors. The people that love selling oversize boxes to customers to meet throughput goals. Vendors that emphasize that their solution is the best because there’s one dashboard to see every alert and issue, even if those alerts don’t have anything to do with each other.

UTM firewalls that can reliably scan traffic at 1 Gbps are rare. Firewalls that can scan 10 Gbps traffic streams are practically non-existent. What is out there costs a not-so-small fortune, and if you want to protect your data center you’re going to need a few of them. That’s a mighty big check to write.


Tom’s Take

There’s a reason why we call it Network Function Virtualization. The days of trying to cram every possible feature you can think of onto a single piece of hardware are over. We don’t need complicated all-in-one boxes with insanely large CPUs. We have software constructs that can take care of all of that now.

While the engineers will like this brave new world, there are those that won’t. Vendors of the single box solutions will still tell you that their solution runs better. Analyst firms with a significant interest in the status quo will tell you NFV solutions are too far out or don’t address all the necessary features. It’s up to you to sort out the smoke from the flame.

SDN and NFV – The Ups and Downs

I was pondering the dichotomy between Software Defined Networking (SDN) and Network Function Virtualization (NFV) the other day.  I’ve heard a lot of vendors and bloggers talking about how one inevitably leads to the other.  I’ve also seen a lot of folks saying that the two couldn’t be further apart on the scale of software networking.  The more I thought about these topics, the more I realized they are two sides of the same coin.  The problem, at least in my mind, is the perspective.

SDN – Planning The Paradigm

Software Defined Networking telegraphs everything about what it is trying to accomplish right there in the name.  Specifically, the “Definition” part of the phrase.  I’ve made jokes in the past about the lack of definition in SDN as vendors try to adapt their solutions to fit the buzzword mold.  What I finally came to realize is that the SDN folks are all about definition. SDN is the Top Down approach to planning.  SDN seeks to decompose the network into subsystems that can be replaced or reprogrammed to suit the needs of those things which utilize the network.

As an example, SDN breaks the idea of a switch down into things like “forwarding plane” and “control plane” and seeks to replace the control plane with alternative software, whether it be a controller-based architecture like OpenFlow or an overlay network similar to that of VMware/Nicira.  We can replace the OS of a switch with a concept like OpenFlow easily.  It’s just a mechanism for determining which entries are populated in the Content Addressable Memory (CAM) tables of the forwarding plane.  In top down design, it’s easy to create a stub entry or “black box” to hold information that flows into it.  We don’t particularly care how the black box works from the top of the design, just that it does its job when called upon.
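As a thought experiment only (this is not OpenFlow itself or any real controller API), here’s a tiny Python model of that separation: the forwarding plane is a black box that just looks up table entries, and a swappable controller decides what goes into the table:

```python
# Conceptual sketch of control/forwarding plane separation. The forwarding
# plane only looks up entries in its table; a separate controller decides
# what goes into that table. This models the idea only -- it is not
# OpenFlow or any real controller API.

class ForwardingPlane:
    def __init__(self):
        self.table = {}  # dst MAC -> egress port, stands in for the CAM/flow table

    def forward(self, dst_mac: str) -> str:
        # The box doesn't know or care who programmed the entry.
        return self.table.get(dst_mac, "punt-to-controller")


class Controller:
    """The replaceable control plane: it owns the policy, not the switch."""

    def __init__(self, plane: ForwardingPlane):
        self.plane = plane

    def learn(self, dst_mac: str, port: str) -> None:
        # Push a forwarding entry down into the data plane.
        self.plane.table[dst_mac] = port


plane = ForwardingPlane()
ctrl = Controller(plane)

print(plane.forward("00:11:22:33:44:55"))   # punt-to-controller (table miss)
ctrl.learn("00:11:22:33:44:55", "eth3")
print(plane.forward("00:11:22:33:44:55"))   # eth3
```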

Top Down designs tend to run into issues when those black boxes lack detail or are missing some critical functionality.  What happens when OpenFlow isn’t capable of processing flows fast enough to keep the CAM table of a campus switch populated with entries?  Is the switch going to fall back to process switching the packets?  That could be a big issue.  Top Down designs are usually very academic and elegant.  They also have a tendency to lack concrete examples and real world experience.  When you think about it, that says a lot about the early days of SDN – lots of definition of terminology and technology, but a severe lack of actual packet forwarding.

NFV – Working From The Ground Up

Network Function Virtualization takes a very different approach to the idea of turning hardware networks into software networks.  The driving principle behind NFV is replication of existing technology in a software state.  This is classic Bottom Up design.  Rather than spending a large amount of time planning and assembling the perfect system, Bottom Up designers tend to build as they go.  They concentrate on making something work first, then making those things work together second.

NFV is great for hands-on folks because it gives concrete, real results almost immediately. Once you’ve converted a load balancer or a router to a purely software-based construct you can see right away how it works and what the limitations might be.  Does it consume too many resources on the hypervisor?  Does it excel at forwarding small packets?  Does switching a large packet locally cause a fault?  These are problems that can be corrected in the individual system rapidly rather than waiting to modify the overall plan to account for difficulties in the virtualization process.

Bottom Up design does suffer from some issues as well.  The focus in Bottom Up is on getting things done on a case-by-case basis.  What do you do when you’ve converted all your hardware to software?  Do your NFV systems need to talk to one another?  That’s usually where Bottom Up design starts breaking down.  Without a grand plan at a higher level to ensure that systems can talk to each other, this design methodology falls back to a series of “hacks” to get them connected.  Units developed in isolation aren’t required to play nice with everyone else until they are forced to do so.  That leads to increasingly complex and fragile interconnection systems that could fail spectacularly should the wrong thread be yanked with sufficient force.


Tom’s Take

Which method is better?  Should we spend all our time planning the system and hope that our PowerPoint designs work the right way when someone codes them in a few months?  Or should we say “damn the torpedoes” and start building things left and right and hope that someone will figure out a way to tie all these individual pieces together at some point?

Surprisingly, the most successful design requires elements of both.  People need at least a basic plan when they set out to change the networking world.  Once the ideas are sketched out, you need a team of folks willing to burn the midnight oil and get the ideas implemented in real life to ensure that the plan works the right way.  The guidance from the top is essential to making sure everything works together in the end.

Whether you are leading from the top or the bottom, remember that everything has to meet in the middle sooner or later.

Brocade’s Pragmatically Defined Network

Most of the readers of my blog would agree that there is a lot of discussion in the networking world today about software defined networking (SDN) and the various parts and pieces that make up that umbrella term.  There’s argument over what SDN really is, from programmability to orchestration to network function virtualization (NFV).  Vendors are doing their part to take advantage of some, all, or in some cases none of the above to push a particular buzzword strategy to customers.  I like to make sure that everything is as clear as possible before I start discussing the pros and cons.  That’s why I jumped at the chance to get a briefing from Brocade around their new software and hardware releases that were announced on April 30th.

I spoke with Kelly Harrell, Brocade’s new vice president and general manager of the Software Business Unit.  If that name sounds somewhat familiar, it might be because Mr. Harrell was formerly at Vyatta, the software router company that was acquired by Brocade last year.  We walked through a presentation and discussion of the direction that Brocade is taking their software defined networking portfolio.  According to Brocade, the key is to be pragmatic about the new network.  New technologies and methodologies need to be introduced while at the same time keeping in mind that those ideas must be implemented somehow.  I think that a large amount of the frustration with SDN today comes from a lot of vaporware presentations and pie-in-the-sky ideas that aren’t slated to come to fruition for months.  Instead, Brocade talked to me about real products and use cases that should be shipping very soon, if not already.

The key to Brocade is to balance SDN against network function virtualization, something I referred to a bit in my Network Field Day 5 post about Brocade.  Back then, I called NFV “Networking Done (by) Software,” which was my sad attempt to point out how NFV is just the opposite of what I see SDN becoming.  During our discussion, Harrell pointed out that NFV and SDN aren’t totally dissimilar after all.  Both are designed to increase the agility with which a company can execute on strategy and create value for shareholders.  SDN is primarily focused on programmability and orchestration.  NFV is tied more toward lowering costs by implementing existing technology in a flexible way.

NFV seeks to take existing appliances that have been doing tasks, such as load balancers or routers, and free their workloads from being tied to a specific piece of hardware.  In fact, there has been an explosion of these types of migrations from a variety of vendors.  People are virtualizing entire business lines in an effort to remove the reliance on specialized hardware or reduce the ongoing support costs.  Brocade is seeking to do this with two platforms right now.  The first is the Vyatta vRouter, which is an extension of what came over in the Vyatta acquisition.  It’s a router and a firewall and even a virtual private networking (VPN) device that can run on just about anything.  It is hypervisor agnostic and cloud platform agnostic as well.  The idea is that Brocade can include a copy of the vRouter with application packages that can be downloaded from an enterprise cloud app store.  Once downloaded and installed, the vRouter can be fired up and pull a predefined configuration from the scripts included in the box.  By making it agnostic to the underlying platform, there’s no worry about support down the road.
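Purely as an illustration of that “pull a predefined configuration from the package” workflow, here’s a hypothetical bootstrap sketch. The file name, keys, and apply step are invented for the example; this is not Brocade’s actual packaging format or the Vyatta CLI:

```python
# Hypothetical bootstrap illustrating a router pulling its intended
# configuration from data shipped inside an application package.
# File names, keys, and the apply step are invented for illustration.

import json
from pathlib import Path

CONFIG_FILE = Path("app-package/router-config.json")   # shipped alongside the app


def load_packaged_config(path: Path) -> dict:
    # The app package carries the router's intended state as plain data.
    with path.open() as f:
        return json.load(f)


def apply_config(config: dict) -> None:
    # A real virtual router would translate this into its own configuration
    # commands; here we just print the intended state.
    for iface, addr in config.get("interfaces", {}).items():
        print(f"configure interface {iface} with address {addr}")
    for rule in config.get("firewall_rules", []):
        print(f"install firewall rule: {rule}")


if __name__ == "__main__":
    if CONFIG_FILE.exists():
        apply_config(load_packaged_config(CONFIG_FILE))
    else:
        print("no packaged config found; starting with an empty configuration")
```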

The second NFV platform Brocade told me about is the virtual ADX application delivery switch.  It’s basically a software load balancer, but that’s not really the point of applying the NFV template to an existing hardware platform.  Instead, the idea is that we’re taking something that’s been historically huge and hard to manage and moving it closer to the edge where it can be of better use.  Rather than sticking a huge load balancer at the entry point to the data center to ensure that flows are separated, the vADX allows the load balancer to be deployed very close to the server or servers that need to have the information flow metered.  Now, the agility of SDN/NFV allows these software devices to be moved and reconfigured quickly without needing to worry about how much reprogramming is going to be necessary to pull the primary load balancer out or change a ton of rules to reroute traffic to a vMotioned cluster.  In fact, I’m sure that we’re going to see a new definition of the “network edge” begin to emerge as more software-based NFV devices are deployed closer and closer to the devices that need them.

On the OpenFlow front, Brocade told me about their new push toward something they are calling “Hybrid Port OpenFlow.”  OpenFlow is a great disruptive SDN technology that is gaining traction today, in large part because of companies like Brocade and NEC that have embraced it and started pushing it out to their customer base well ahead of other manufacturers.  Right now, OpenFlow support really consists of two modes – ON and OFF.  OFF is pretty easy to imagine.  ON is a bit more complicated.  While a switch can be OpenFlow enabled and still forward normal traffic, the practice has always been to either dedicate the switch to OpenFlow forwarding, in effect turning it into a lab switch, or to enable OpenFlow selectively for a group of ports out of the whole switch, kind of like creating a lab VLAN for testing on a production box.  Brocade’s Hybrid Port OpenFlow model allows you to enable OpenFlow on a port and still allow it to do regular traffic forwarding sans OpenFlow.  That may be the best model for adopters going forward due to one overriding factor – cost.  When you take a switch or a group of ports on a switch and dedicate them to OpenFlow, you are costing the enterprise something.  Every port on the switch costs a certain amount of money.  Every minute an engineer spends working on a crazy lab project incurs a cost.  By enabling network engineers to turn on OpenFlow at will without disrupting the existing traffic flow, Brocade can reduce the opportunity cost of enabling OpenFlow to almost zero.  If OpenFlow just becomes something that works as soon as you enable it, like IPv6 in Windows 7, you don’t have to spend as much time planning for your end node configuration.  You just build the core and let the end nodes figure out they have new capabilities.  I figure that large Brocade networks will see their OpenFlow adoption numbers skyrocket simply because Hybrid Port mode turns the configuration into Easy Mode.
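Here’s a toy model of the hybrid-port idea, with invented table contents: a port with OpenFlow enabled checks its flow table first and, on a miss, simply falls back to the normal forwarding path instead of punting or dropping:

```python
# Toy model of hybrid-port forwarding: check controller-programmed flow
# entries first, then fall back to traditional forwarding on a miss.
# Table contents and actions are invented for illustration.

NORMAL_FIB = {"10.0.1.0/24": "port1", "10.0.2.0/24": "port2"}   # stand-in routing table


class HybridPort:
    def __init__(self, openflow_enabled: bool = False):
        self.openflow_enabled = openflow_enabled
        self.flow_table = {}   # dst prefix -> action, programmed by a controller

    def handle(self, dst_prefix: str) -> str:
        if self.openflow_enabled and dst_prefix in self.flow_table:
            return self.flow_table[dst_prefix]        # controller-programmed path
        return NORMAL_FIB.get(dst_prefix, "drop")     # unchanged traditional forwarding


port = HybridPort(openflow_enabled=True)
print(port.handle("10.0.1.0/24"))                # port1 -- normal forwarding still works
port.flow_table["10.0.1.0/24"] = "redirect-to-port9"
print(port.handle("10.0.1.0/24"))                # redirect-to-port9 -- OpenFlow takes over
```

Existing traffic keeps flowing exactly as before until a controller actually programs an entry, which is why the opportunity cost of turning the feature on approaches zero.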

The last interesting software piece that Brocade showed me is a prime example of the kinds of things that I expect SDN to deliver to us in the future.  Brocade has created an application called the Application Resource Broker (ARB).  It sits above the fray of the lower network layers and monitors indicators of a particular application’s health, such as latency and load.  When one of those indicators hits a specific threshold, ARB kicks in to request more resources from vCenter to balance things out.  If the demand on the application continues to rise beyond the available resources, ARB can dynamically move the application to a public cloud instance with a much deeper pool of resources, a process known as cloudbursting.  All of this can happen automatically without the intervention of IT.  This is one of the things that shows me what SDN can really do.  Software can take care of itself and dynamically move things around when abnormal demand happens.  Intelligent choices about the network environment can be made on solid data.  No guessing about what “might” be happening.  ARB removes doubt and lag in response time to allow for seamless network repair.  Try doing that with a telnet session.
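To show the broker pattern being described, here’s a rough sketch with invented thresholds and pool sizes (this is not the ARB API): watch an application’s latency, pull capacity from the local pool while it lasts, and cloudburst when the pool runs dry:

```python
# Rough sketch of a resource-broker decision loop: monitor application
# latency, scale from the local pool first, then burst to a public cloud.
# Thresholds, pool sizes, and actions are all invented for illustration.

LATENCY_THRESHOLD_MS = 200
LOCAL_POOL_INSTANCES = 4


def scale_decision(latency_ms: float, local_instances: int) -> str:
    """Return the action a broker would take for one monitoring interval."""
    if latency_ms <= LATENCY_THRESHOLD_MS:
        return "healthy: no action"
    if local_instances < LOCAL_POOL_INSTANCES:
        return "add instance from local pool"       # e.g. request capacity from the hypervisor manager
    return "cloudburst: launch public cloud instance"


# One simulated monitoring run with rising load.
samples = [(120, 1), (250, 1), (260, 2), (310, 4)]
for latency, instances in samples:
    print(f"{latency} ms with {instances} instance(s) -> {scale_decision(latency, instances)}")
```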

There’s a lot more to the Brocade announcement than just software.  You can check it out at http://www.brocade.com.  You can also follow them on Twitter as @BRCDComm.


Tom’s Take

The future always looks interesting at first glance.  Flying cars, moving sidewalks, and 3D user interfaces are all staples of futuristic science fiction.  The problem for many arises when we need to start taking steps to build those fanciful things.  A healthy dose of pragmatism helps to figure out what we need to do today to make tomorrow happen.  If we root our views of what we want to do in what we can do, then the future becomes that much more achievable.  Even the amazing gadgets we take for granted today have a basis in the real technology of the time they were first created.  By making those incremental steps, we can arrive where we want to be a whole lot sooner with a better understanding of how amazing things really are.