Cisco and OpenDNS – The Name Of The Game?


This morning, Cisco announced their intent to acquire OpenDNS, a security-as-a-service provider built around the idea of using the Domain Name System (DNS) to prevent the spread of malware and other exploits. I’ve used the OpenDNS free offering in the past to give basic web filtering to schools without funds, and I’ve used OpenDNS at home for speedy name resolution when my local name servers have failed me miserably.

This acquisition is curious to me. It seems to be a line of business that is totally alien to Cisco right now. But a couple of interesting opportunities have come out of the discussions around it.

Internet of Things With Names

The first and most obvious synergy between Cisco and OpenDNS is around the Internet of Things (IoT), or the Internet of Everything (IoE) as Cisco has branded their offering. IoT/IoE has gotten a huge amount of attention from Cisco in the past 18 months as more and more devices come online, from thermostats to appliances to light sockets. The number of formerly dumb devices that now have wireless radios and computers to send information is staggering.

All of those devices depend on certain services to work properly. One of those services is DNS. IoT/IoE devices aren’t going to use raw IP addresses to communicate with cloud servers, because IoT relies on public cloud offerings to connect devices and dashboards. As I said last year, capacity and mobility can be ensured by using AWS, Google Cloud, or Azure to host the servers to which IoT/IoE devices communicate.

The easiest way to communicate with AWS instances is via DNS. This ensures that a service can be mobile and fault tolerant. That’s critical to ensure the service never goes down. Losing your laptop or your phone for a few minutes is annoying but survivable. Losing a thermostat or a smoke detector is a safety hazard. Services that need to be resilient need to use DNS.
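To make that concrete, here’s a minimal sketch of why a device should resolve a name at connect time instead of caching an IP address. The hostname is just example.com standing in for a hypothetical vendor service endpoint; the point is that the cloud side can move or scale the service and the device keeps working because it only ever knew the name.

```python
# Minimal sketch: resolve the service name at connect time rather than
# hard-coding an IP. "example.com" stands in for a hypothetical vendor
# service endpoint.
import socket

SERVICE_NAME = "example.com"

def current_endpoints(name: str, port: int = 443) -> list[str]:
    """Ask DNS right now; the answer can change as the backend moves or scales."""
    results = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for *_, sockaddr in results})

if __name__ == "__main__":
    print(current_endpoints(SERVICE_NAME))
```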

More than that, with control of OpenDNS Cisco now has a walled DNS garden that they can populate with Cisco service entries. Rather than allowing IoT/IoE devices to inherit local DNS resolution from a home ISP, they can hard code the name servers in the device and ensure that the only resolution used will be controlled by Cisco. This means they can activate new offerings and services and ensure that they are reachable by the devices. It also allows them to police the entries in DNS and prevent people from creating “workarounds” to enable or disable features and functions. Walled-garden DNS is as important to IoT/IoE as the walled-garden app store is to mobile devices.
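Here’s a rough sketch of what that pinned-resolver behavior looks like from the device side. The resolver addresses are OpenDNS’s public anycast resolvers; the hostname is example.com standing in for a vendor endpoint, and the snippet assumes the third-party dnspython package.

```python
# Rough sketch of "walled garden" resolution: ignore the resolvers handed
# out by the home router and always ask a pinned set of name servers.
# Requires dnspython (pip install dnspython).
import dns.resolver

PINNED_RESOLVERS = ["208.67.222.222", "208.67.220.220"]  # OpenDNS public resolvers

def resolve_via_pinned(name: str) -> list[str]:
    resolver = dns.resolver.Resolver(configure=False)  # skip the OS resolver config
    resolver.nameservers = PINNED_RESOLVERS
    answer = resolver.resolve(name, "A")
    return [rr.address for rr in answer]

if __name__ == "__main__":
    print(resolve_via_pinned("example.com"))
```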

Predictive Protection

The other offering hinted at in the acquisition post from Cisco talks about the professional offerings from OpenDNS. The OpenDNS Umbrella security service helps enterprises protect themselves from malware and security breaches through control and visibility. There is also a significant amount of security intelligence available due to the amount of traffic OpenDNS processes every day. This gives them insight into the state of the Internet as well as the ability to trace infection vectors and identify threats at their origin.

Cisco hopes to utilize this predictive intelligence in their security products to aid in fast identification and mitigation of threats. By combining OpenDNS with Sourcefire and IronPort, the hope is that this giant software machine will be able to protect customers even faster, before they get exposed, embarrassed, or even sued for negligence.

The part that worries me about that superior predictive intelligence is how it’s gathered. If the only source of that information comes from paying OpenDNS customers then everything should be fine. But I can almost guarantee that users of the free OpenDNS service (like me) are also information sources. It makes the most sense for them. Free users provide information for the paid service. Paid users are happy at the level of intelligence they get, and those users pay for the free users to be able to keep using those features at no cost. Win/win for everyone, right?

But what happens if Cisco decides to end the free offering from OpenDNS? Let’s think about that a little. If free users are locked out of OpenDNS or required to pay even a small nominal fee, that source of information is lost from the database. Losing that information reduces the visibility OpenDNS has into the Internet and slows their ability to identify and vector threats quickly. Paying users then lose effectiveness of the product and start leaving in droves. That loss accelerates the failure of that intelligence. Any products relying on this intelligence also become less effective. A downward spiral of disaster.

Tom’s Take

The solution for Cisco is very easy. In order to keep the effectiveness of OpenDNS and their paid intelligence offerings, Cisco needs to keep the free offering and not lock users out of using their DNS name servers for free. Adding IoT/IoE into the equation helps somewhat, but Cisco has to have the information from the small enterprises and schools that use OpenDNS. It benefits everyone for Cisco to let OpenDNS operate just as they have for the past few years. Cisco gains significant intelligence for their security offerings. They also gain the OpenDNS customer base to sell new security devices to. And free users gain the staying power of a brand like Cisco.

Thanks to Greg Ferro (@EtherealMind), Brad Casemore (@BradCasemore) and many others for the discussion about this today.

The Walls Are On Fire

There’s no denying the fact that firewalls are a necessary part of modern perimeter security. NAT isn’t a security construct. With access to so many DDoS networks, attackers have the equivalent of megaton nuclear arsenals. Security admins have to do everything they can to prevent these problems from happening. But one look at the firewall market tells you something is terribly wrong.

Who’s Protecting First?

Take a look at this recent magic polygon from everyone’s favorite analyst firm:

FW Magic Polygon. Thanks to @EtherealMind.

I won’t deny that Checkpoint is on top. That’s mostly due to the fact that they have the biggest install base in enterprises. But I disagree with the rest of this mystical tesseract. How is Palo Alto a leader in the firewall market? I thought their devices were mostly designed around mitigating internal threats? And how is everyone not named Cisco, Palo Alto, or Fortinet relegated to the Niche Players corral?

The issue comes down to purpose. Most firewalls today aren’t packet filters. They aren’t designed to keep the bad guys out of your networks. They are unified threat management systems. That’s a fancy way of saying they have a whole bunch of software built on top of the packet filter to monitor outbound connections as well.

Insider threats are a huge security issue. People on the inside of your network have access. They sometimes have motive. And their power goes largely unchecked. They do need to be monitored by something, whether it be an IDS/IPS system or a data loss prevention (DLP) system that keeps sensitive data from leaking out. But how did all of those devices get lumped together?

Deploying security monitoring tools is as much art as it is science. IPS sensors can be placed in strategic points of your network to monitor traffic flows and take action on them. If you build it correctly you can secure a huge enterprise with relatively few systems.

But more is better, right? If three IPS units make you secure, six would make you twice as secure, right? No. What you end up with is twice as many notifications. Those start getting ignored quickly. That means the real problems slip through the cracks because no one pays attention any more. So rather than deploying multiple smaller units throughout the network, the new mantra is to put an IPS in the firewall in the front of the whole network.

The firewall is the best place for those sensors, right? All the traffic in the network goes through there after all. Well, except for the user-to-server traffic. Or traffic that is internally routed without traversing the Internet firewall. Crafty insiders can wreak havoc without ever touching an edge IPS sensor.

And that doesn’t even begin to describe the processing burden placed on the edge device by loading it down with more and more CPU-intensive software. Consider the following conversation:

Me: What is the throughput on your firewall?


Them: It’s 1 Gbps!


Me: What’s the throughput with all the features turned on?


Them: Much less than 1 Gbps…

When a selling point of your UTM firewall is that the numbers are “real”, you’ve got a real problem.
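As a back-of-the-envelope illustration of that conversation, here’s what happens to a datasheet number once inspection engines start stacking up. The per-feature penalties are made-up placeholders, not any vendor’s figures.

```python
# Illustrative only: each inspection engine takes a bite out of whatever
# throughput the previous one left over. The penalty values are invented.
DATASHEET_GBPS = 1.0

FEATURE_PENALTIES = {
    "ips": 0.40,
    "anti_malware": 0.30,
    "ssl_inspection": 0.50,
}

def effective_throughput(base_gbps: float, penalties: dict) -> float:
    rate = base_gbps
    for penalty in penalties.values():
        rate *= (1.0 - penalty)
    return rate

print(f"{effective_throughput(DATASHEET_GBPS, FEATURE_PENALTIES):.2f} Gbps")
# 0.21 Gbps -- "much less than 1 Gbps", just like the sales call.
```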

What’s Protecting Second?

There’s an answer out there to fix this issue: disaggregation. We now have the capability to break out the various parts of a UTM device and run them all in virtual software constructs thanks to Network Function Virtualization (NFV). And they will run faster and more efficiently. Add in the ability to use SDN service chaining to ensure packet delivery and you have a perfect solution. For almost everyone.
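To sketch the idea, here’s a conceptual model of a disaggregated UTM expressed as a service chain of virtual functions, each of which scales out independently. The classes and numbers are illustrative only, not any controller’s actual NFV API.

```python
# Conceptual sketch: the monolithic UTM becomes an ordered chain of virtual
# functions, and each function scales horizontally to meet demand instead of
# forcing a bigger box for everything.
import math
from dataclasses import dataclass

@dataclass
class VirtualFunction:
    name: str
    capacity_gbps: float   # throughput of a single instance
    instances: int = 1

    def scale_to(self, demand_gbps: float) -> None:
        self.instances = max(1, math.ceil(demand_gbps / self.capacity_gbps))

SERVICE_CHAIN = [
    VirtualFunction("packet-filter", capacity_gbps=10.0),
    VirtualFunction("ips", capacity_gbps=2.0),
    VirtualFunction("dlp", capacity_gbps=1.0),
]

for vnf in SERVICE_CHAIN:
    vnf.scale_to(demand_gbps=10.0)
    print(f"{vnf.name}: {vnf.instances} instance(s) for 10 Gbps")
```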

Who’s not going to like it? The big UTM vendors. The people that love selling oversize boxes to customers to meet throughput goals. Vendors that emphasize that their solution is the best because there’s one dashboard to see every alert and issue, even if those alerts don’t have anything to do with each other.

UTM firewalls that can reliably scan traffic at 1 Gbps are rare. Firewalls that can scan 10 Gbps traffic streams are practically non-existent. And what is out there costs a not-so-small fortune. And if you want to protect your data center, you’re going to need a few of them. That’s a mighty big check to write.

Tom’s Take

There’s a reason why we call it Network Function Virtualization. The days of trying to cram every possible feature you can think of onto a single piece of hardware are over. We don’t need complicated all-in-one boxes with insanely large CPUs. We have software constructs that can take care of all of that now.

While the engineers will like this brave new world, there are those that won’t. Vendors of the single box solutions will still tell you that their solution runs better. Analyst firms with a significant interest in the status quo will tell you NFV solutions are too far out or don’t address all the necessary features. It’s up to you to sort out the smoke from the flame.

Insecurity Guards


Pick a random headline related to security today and you’ll see lots of exclamation points and dire warnings about the insecurity of something we thought was inviolate, such as Apple Pay or TLS. It’s enough to make you jump out of your skin and crawl into a dark hole somewhere, never to use electricity again. Until you read the article, that is. After a couple of paragraphs, you realize that a click-bait headline about a new technology actually underscores an age-old problem: people are the weakest link.

Engineered To Be Social

We can engineer security for protocols and systems until the cows come home. We can use ciphers so complicated that even Deep Thought couldn’t figure them out. We can create a system so secure that it could never be hacked. But in the end that system needs to be used by people. And people are where everything breaks down.

Take the most recent Apple Pay “exploit” that’s been making all the headlines. The problem has nothing to do with Apple Pay itself, or the way the device interacts with the point-of-sale terminal. It has everything to do with enterprising crooks calling in to banks and impersonating users to get a live, breathing person on the other end of the phone to override security safeguards and break the system down. An hourly employee of the bank can put all the defense-in-depth research to naught in a matter of keystrokes.

It is the way it is because people are dumb, panicky, and dangerous. When confronted with situations that are outside their norm they tend to freeze up and do the wrong thing. Take this scene from Sneakers (which is an excellent movie you should go watch right now):

When I originally started writing this post, that scene stuck out in my mind as a brilliant way to illustrate how less-than-savory people get around high technology with simple solutions, like kicking in a door protected by a keypad. But then I watched the scene again and found an even better example of my point. Look how Robert Redford and River Phoenix work together to distract and eventually overwhelm the security guard. The guard knows that no one should be able to get through the gate without the right keycard. With a bit of distraction, some added stress, and an apparently helpless but irritated user, Redford is able to social engineer his way into the building with little effort. The movie is full of these kinds of scenes.

The point is not that Robert Redford can talk his way into a building. The point is that people override security decisions every day. Writing down passwords. Ignoring security warnings. Clicking on believable but malicious links. It’s done because it’s quicker or easier, or to get a screaming customer off the other end of the phone. Policies are ignored and shortcuts are taken to make things easy. So how do we fix it?

Teach It, Don’t Tech It

The absolute last thing you should do when trying to fix these issues is to create another layer of technology to insulate the issue. That leads to two problems. The first is that people will begin to see the new solution as yet another problem and try to create shortcuts to work around it. The second, which is a more sinister issue, is that you’ve essentially told those people that they can’t understand why this is a problem and you’ve decided to marginalize them instead of teaching them. They may not realize it, but you’ve silently placed them lower on the intelligence ladder than a few bytes of code.

People need to know why things are the way they are. If the policy says not to write down a password, tell people why that is. If the rules say you don’t override a lockout for a PIN or add a card to a person’s account without certain information then you need to tell people why you don’t do that. A policy or security feature without an explanation is merely an annoyance. One that will be circumvented. Making your users aware of the reason for a policy makes it something that’s hard to ignore. You’re more likely to get traction by treating your users like people, not automatons.

Tom’s Take

Kevin Mitnick (@KevinMitnick) wrote an entire book about social engineering and how easy it is to accomplish. As security systems become more complicated and much less simple to fool, the majority of miscreants aren’t going to spend hours upon hours trying to hack a handshake protocol or create hash collisions. Instead, they will attack the weakest link in the chain. That will almost undoubtedly be the users of the system. We have to make our users smart enough to know when people are trying to take advantage of them and close that loop. Or at least make that loop as difficult to breach as the rest of the system. That’s the only way to be sure that the security measures we put in place can be used to their fullest potential. Just make sure that everyone knows that Eddie Vedder doesn’t work in accounting.


Rules Shouldn’t Have Exceptions


On my way to Virtualization Field Day 4, I ran into a bit of a snafu at the airport that made me think about policy and application. When I put my carry-on luggage through the X-ray, the officer took it to the back and gave it a thorough screening. During that process, I was informed that my double-edged safety razor would not be able to make the trip (or the blade at least). I was vexed, as this razor had flown with me for at least a whole year with nary a peep from security. When I related as much to the officer, the response was “I’m sorry no one caught it before.”

Everyone Is The Same, Except For Me

This incident made me start thinking about policies in networking and security and how often they are arbitrarily enforced. We see it every day. The IT staff comes up with a new plan to reduce mailbox sizes or reduce congestion by enforcing quality of service (QoS). Everyone is all for the plan during the discussion stages. When the time comes to implement the idea, the exceptions start happening. Upper management won’t have mailbox limitations. The accounting department is exempt from the QoS policy. The list goes on and on until it’s larger than the policy itself.

Why does this happen? How can a perfect policy go from planning to implementation before it falls apart? Do people sit around making up rules they know they’ll never follow? That does happen in some cases, but more often the folks the policy will impact most have no representation in the planning process.

Take mailboxes, for example. The IT department, being diligent technology users, strives for inbox zero every day. They process and deal with messages. They archive old mail. They keep their mailbox a barren wasteland of in-process items and shuffle everything else off to the static archive. Now, imagine an executive. These people are usually overwhelmed by email. They process what they can, but the wave will always overtake them. In addition, they have no archive. Their read mail sits around in folders for easy searching and quick access when a years-old issue resurfaces.

In modern IT, any policies limiting mailbox sizes would be decided by the IT staff based on their mailbox size. Sure, a 1 GB limit sounds great. Then, when the policy is implemented the executive staff pushes back with their 5 GB (or larger) mailboxes and says that the policy does not apply to them. IT relents and finds a way to make the executives exempt.

In a perfect world, the executive team would have been consulted or had representation on the planning team prior to the decision. The idea of having huge mailboxes would have been figured out in the planning stage and dealt with early instead of making exceptions after the fact. Maybe the IT staff needed to communicate more. Perhaps the executive team needed to be more involved. Those are problems that happen every day. So how do we fix them?

Exceptions Are NOT The Rule

The way to increase buy-in for changes and increase communication between stakeholders is easy but not without pain. When policies are implemented, no deviations are allowed. It sounds harsh. People are going to get mad at you. But you can’t budge an inch. If a policy exception is not documented in the policy it will get lost somewhere. People will continue to be uninvolved in the process as long as they think they can negotiate a reprieve after the fact.

IT needs to communicate up front exactly what’s going into the change before the implementation. People need to know how they will be impacted. Ideally, that will mean people have talked about the change up front so there are no surprises. But we all know that doesn’t happen. So making a “no exceptions” policy or rule change will get them involved. Because not being able to get out of a rule means you want to be there when the rules get decided, so you can make your position clear and ensure the needs of you and your department are met.

Tom’s Take

As I said yesterday on Twitter, people don’t mind rules and policies. They don’t even mind harsh or restrictive rules. What they have a problem with is when those rules are applied in an arbitrary fashion. If the corporate email policy says that mailboxes are supposed to be no more than 1 GB in size, then people in the organization will have a problem if someone has a 20 GB mailbox. The rules must apply to everyone equally to be universally adopted. Likewise, rules must encompass as many outlying cases as possible in order to prevent one-off exceptions for almost everyone. Planning and communication are more important than ever when making those rules.

Expiring The Internet

An article came out this week that really made me sigh.  The title was “Six Aging Protocols That Could Cripple The Internet”.  I dove right in, expecting to see how things like Finger were old and needed to be disabled and removed.  Imagine my surprise when I saw things like BGP4 and SMTP on the list.  I really tried not to smack my own forehead as I flipped through the slideshow of how the foundation of the Internet is old and is at risk of meltdown.

If It Ain’t Broke

Engineers love the old adage “If it ain’t broke, don’t fix it!”  We spend our careers planning and implementing.  We also spend a lot of time not touching things afterwards in order to prevent them from collapsing in a big heap.  Once something is put in place, it tends to stay that way until something necessitates a change.

BGP is a perfect example.  The basics of BGP remain largely the same from when it was first implemented years ago.  BGP4 has been in use since 1994 even though RFC 4271 didn’t officially formalize it until 2006.  It remains a critical part of how the Internet operates.  According to the article, BGP is fundamentally flawed because it’s insecure and trust based.  BGP hijacking has been occurring with more frequency, even as resources to combat it are being hotly debated.  Is BGP to blame for the issue?  Or is it something more deeply rooted?
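To show what “trust based” means in practice, here’s a toy origin-validation check in the spirit of RPKI: nothing in BGP itself stops a peer from announcing someone else’s prefix, so the defense is an out-of-band table of who is allowed to originate what.  The prefixes and AS numbers below are documentation values invented for the example.

```python
# Toy illustration of origin validation. Real deployments use RPKI data;
# this table is invented for the example.
EXPECTED_ORIGINS = {
    "192.0.2.0/24": 64500,
    "198.51.100.0/24": 64501,
}

def validate_announcement(prefix: str, origin_as: int) -> str:
    expected = EXPECTED_ORIGINS.get(prefix)
    if expected is None:
        return "unknown"  # no published authority data for this prefix
    return "valid" if origin_as == expected else "invalid (possible hijack)"

print(validate_announcement("192.0.2.0/24", 64500))  # valid
print(validate_announcement("192.0.2.0/24", 64666))  # invalid (possible hijack)
```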

Don’t Fix It

The issues with BGP and other protocols mentioned in the article, including IPv6, aren’t due to the way the protocol was constructed.  It is due in large part to the humans that implement those protocols.  BGP is still in use in the current insecure form because it works.  And because no one has proposed a simple replacement that accomplishes the goal of fixing all the problems.

Look at IPv6.  It solves the address exhaustion issue.  It solves hierarchical addressing issues.  It restores end-to-end connectivity on the Internet.  And yet adoption numbers still languish in the single-digit percentages.  Why?  Is it because IPv6 isn’t technically superior?  Or because people don’t want to spend the time to implement it?  It’s expensive.  It’s difficult to learn.  Reconfiguring infrastructures to support new protocols takes time and effort.  Time and effort better spent answering user problems or taking on additional tasks from a management team that doesn’t care about BGP insecurity until the Internet goes down.

It Hurts When I Do This

Instead of complaining about how protocols are insecure, the solution should be twofold.  First, we need to start building security into protocols and expiring their older, insecure versions.  POODLE exploited SSLv3, an older version that served as a fallback to TLS.  While some old browsers still used SSLv3, the simple solution was to disable it and force people to upgrade to TLS-capable clients.  In much the same way, protocols like NTP and BGP can be modified to use more security.  Instead of merely suggesting that people use the newer versions, architects and engineers need to implement them and discourage use of the old insecure versions by disabling them.  It’s not going to be easy at first.  But as the movement gains momentum, the solution will work.
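In code, “expire the old version instead of falling back to it” can be as blunt as pinning a protocol floor.  Here’s a small Python example that refuses anything older than TLS 1.2; modern Python already rejects SSLv3 by default, so the explicit floor is about stating intent rather than relying on library defaults.

```python
# Refuse old protocol versions outright instead of silently falling back.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/TLS 1.1

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. "TLSv1.3" -- downlevel clients are simply locked out
```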

The next step in the process is to build easy-to-configure replacements.  Bolting security onto a protocol after the fact does stop the bleeding.  But to fix the underlying disease, the security needs to be baked into the protocol from the beginning.  Doing this with an entirely new protocol that has no backwards compatibility, however, will be the death of that new protocol.  Just look at how horrible the transition to IPv6 has been.  Lack of an easy transition, coupled with no monetary incentive and no imminent problem, caused the migration to drag out until the eleventh hour.  And even then there is significant pushback against an issue that can no longer be ignored.

Building the next generation of secure Internet protocols is going to take time and transparent effort.  People need to see what’s going into something to understand why it’s important.  The next generation of engineers needs to understand why things are being built the way they are.  We’re lucky in that many of the people responsible for building the modern Internet are still around.  When asked about limitations in protocols the answer remains remarkably the same – “We never thought it would be around this long.”

The longevity of quick fixes seems to be the real issue.  When the next generation of Internet protocols is built, there needs to be a built-in expiration date.  A point of no return beyond which the protocol will cease to function.  And there should be no method for extending the shelf life of a protocol to forestall its demise.  In order to ensure that security can’t be compromised, we have to resign ourselves to the fact that old things need to be put out to pasture.  And the best way to ensure that new things are put in place to supplant them is to make sure the old things go away on time.
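As a thought experiment, the expiration idea might look something like the sketch below: every protocol version carries a sunset date, and an implementation refuses to negotiate anything past its date instead of quietly falling back to it.  The version names and dates are invented for illustration.

```python
# Thought experiment: protocol versions with hard expiration dates.
from datetime import date

PROTOCOL_SUNSETS = {
    "v1": date(2020, 1, 1),   # long expired
    "v2": date(2030, 1, 1),   # still valid for now
}

def negotiate(offered_versions, today=None):
    today = today or date.today()
    usable = [v for v in offered_versions if PROTOCOL_SUNSETS.get(v, date.min) > today]
    if not usable:
        raise RuntimeError("no unexpired protocol version available -- upgrade required")
    return max(usable)  # prefer the newest unexpired version

print(negotiate(["v1", "v2"]))  # "v2" -- v1 cannot be extended or revived
```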

Tom’s Take

The Internet isn’t fundamentally broken.  It’s a collection of things that work well in their roles that maybe have been continued a little longer than necessary.  The probability of an exploit being created for something rises with every passing day it is still in use.  We can solve the issues of the current Internet with some security engineering.  But to make sure the problem never comes back again, we have to make a hard choice to expire protocols on a regular basis.  It will mean work.  It will create strife.  And in the end we’ll all be better for it.

Security Dessert Models


I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to creme brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model of the world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today where companies explicitly trust their user base.

The creme brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks in the parking lot being activated on “trusted” computers internally.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team will realize that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top will also be extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems will have a very aggressive security posture in place to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means.  It requires you to identify assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  Communication between untrusted devices and trusted systems needs to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you make sure to identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
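As a minimal sketch of that idea, here’s a default-deny allow-list of which zones can reach the protected “chocolate” segments.  The zone names and flows are invented for the example.

```python
# Default deny: only explicitly listed zone-to-zone flows are permitted.
ALLOWED_FLOWS = {
    ("user-byod", "web-frontend"): {"tcp/443"},
    ("web-frontend", "customer-db"): {"tcp/5432"},
    # Deliberately absent: anything from "user-byod" straight to "customer-db".
}

def is_permitted(src_zone: str, dst_zone: str, service: str) -> bool:
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_permitted("web-frontend", "customer-db", "tcp/5432"))  # True
print(is_permitted("user-byod", "customer-db", "tcp/5432"))     # False -- the crunchy edge holds
```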


Tom’s Take

Security is a world of protecting the important things while making sure they can be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even creme brûlée the proportion of ingredients must be just right to make a tasty dessert.  In security you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.