Setting Sail on Secret Seas with Trireme


Container networking is a tough challenge to solve. Creating virtual networks that allow inter-container communication is difficult enough, and ensuring security at the same time is enough to make you pull your hair out. Lots of companies are taking a crack at it, as demonstrated recently by microsegmentation offerings from Cisco, VMware NSX, and many others. But a new development on this front set sail today. And the captain is an old friend.

Sailing the Security Sea

Dimitri Stiladis did some great things in his time at Nuage Networks. He built an overlay network solution that not only worked well for software-defined systems but also extended into the container world as more and more people started looking at containers as the new way to deliver application services. He watched plenty of vendors rush into this space, either repurposing their existing products or building new ones, but those solutions were all based on technology and methods that don’t translate well to containers. If you ever heard someone say, “Oh, containers are just lightweight VMs…” you know exactly the kind of thinking I’m talking about.

Late last year, Dimitri got together with some of his friends to build a new security solution for containers. He founded Aporeto, which comes from the Greek for “confidential”. That really informs the whole idea of what they are trying to build. Container communications should be easy to secure. All the right pieces are already in place. What’s missing is a way to do it easily and scale it up quickly, and that’s exactly where existing solutions miss the point by recycling old ideas and constructs.

Enter Trireme. This project, an open source version of the technology Aporeto is working on, was released yesterday to help container admins understand why securing communications between containers is critical and yet simple to do. I got a special briefing from Dimitri yesterday, and once he helped me understand it I immediately saw the power of what they’ve done.

In The Same Boat

Trireme works by doing something very simple. Every container has a certificate that is generated at creation, which allows it to be verified for consistency and other attributes. Trireme uses a TCP Authorization Proxy to grab the digital identity of the container and insert it into the TCP SYN setup messages. Now the receiving container knows who the sender is, because the sender’s confirmed identity is encoded right in the setup message. If the sender is authorized to talk to the receiver, the connection is set up. Otherwise it is dropped.
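If you want to see the shape of the idea, here’s a minimal sketch in Python. This is not Trireme’s actual code or API, and it uses a shared HMAC key purely as a stand-in for the per-container certificates described above, but it shows both halves of the exchange: the sender attaches a signed copy of its attributes to the handshake, and the receiver verifies that signature before deciding whether the connection is allowed to proceed.

```python
import hashlib
import hmac
import json
from typing import Optional

# Assumption: a shared key stands in for the per-container certificates a real system would use.
SHARED_KEY = b"demo-key-for-illustration"

def signed_identity(labels: dict) -> bytes:
    """Serialize the container's identity attributes and sign them."""
    payload = json.dumps(labels, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_identity(blob: bytes) -> Optional[dict]:
    """Return the sender's attributes if the signature checks out, else None."""
    payload, _, tag = blob.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or tampered identity: drop the connection
    return json.loads(payload)

# Sender side: attach the signed identity to the connection setup.
hello = signed_identity({"app": "web", "env": "dev"})

# Receiver side: verify the identity before allowing the connection to proceed.
sender = verify_identity(hello)
print("drop connection" if sender is None else f"sender attributes: {sender}")
```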

This is one of those “so simple I can’t believe I missed it” moments. If there is already a secure identity set up for the container, it should be used. And adding that information to the TCP setup means we don’t just take for granted that containers with similar attributes are allowed to talk to each other because they happen to be on the same network. This truly is microsegmentation with the addition of identity protection. Even if you spin up a new container with identical attributes, it won’t have the same digital identity as the previous container, which means it has to be authorized all over again.

Right now the security model is simple: if the attributes of two containers match, they are allowed to talk. You can set up some different labels and try it yourself. But with the power of Kubernetes as the management platform, you can extend this metaphor quite a bit. Imagine a policy that allows containers with the “dev” label to communicate if and only if they also carry the “shared” label. Or one that makes sure “dev” containers can never talk to “prod” containers for any reason, even if they are on the same network. It’s an extension of a lot of ideas already being explored in the container world, but it has the benefit of built-in identity confirmation as well as scalability.
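Here’s a toy version of that kind of policy check in Python. The labels and rules are invented for illustration and this isn’t Kubernetes’ or Trireme’s actual policy format, but it shows the sort of logic a label-based policy engine evaluates:

```python
def allowed(sender: dict, receiver: dict) -> bool:
    """Toy label-based policy: invented rules, not a real policy API."""
    # "dev" workloads may only talk to each other if both also carry "shared".
    if sender.get("env") == "dev" and receiver.get("env") == "dev":
        return "shared" in sender.get("roles", []) and "shared" in receiver.get("roles", [])
    # "dev" and "prod" may never talk, even on the same network.
    if {sender.get("env"), receiver.get("env")} == {"dev", "prod"}:
        return False
    # Default: only identical environments may communicate.
    return sender.get("env") == receiver.get("env")

print(allowed({"env": "dev", "roles": ["shared"]}, {"env": "dev", "roles": ["shared"]}))  # True
print(allowed({"env": "dev", "roles": []}, {"env": "prod", "roles": []}))                 # False
```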

How does Trireme scale? Well, it doesn’t run a central controller or database of any kind. Instead, the heavy lifting is done by a local process on each container host. That’s how Trireme can scale. There’s no dependency on a central process or device failing and leaving everyone stranded, and no need to communicate with anything other than the local container host. Kubernetes has the infrastructure to push policy changes down to each host, where they are checked by the Trireme process. That means Trireme never has to leave the local host to make decisions. Everything that is needed is right on deck.


Tom’s Take

It took me a bit to understand what Dimitri and his group are trying to do with Trireme and later with their Aporeto solution. Creating digital signatures and signing communications between containers is going to be a huge leap forward for security. If all communications are secured by default then security becomes the kind of afterthought that we need.

The other thing that Aporeto illustrates to me is the need for containers to be isolated processes, not heavy VMs. By creating a process boundary per container, Trireme and other solutions can help keep things as close to completely secure as possible. Lowering the attack surface of a construct down to the process level is making it a tiny target in a big ocean.

The People Versus Security


It all comes back to people. People are the users of the system. They are the source of great imagination and great innovation. They are also the reason why security professionals pull their hair out day in and day out. Because computer systems don’t have the capability to bypass, invalidate, and otherwise screw up security quite like a living, breathing human being.

Climb Every Mountain

Security is designed to make us feel safe. Door locks keep out casual prowlers. Alarm systems alert us when our home or business is violated. That warm fuzzy feeling we get when we know the locks are engaged and we are truly secure is one of bliss.

But when security gets in our way, it’s annoying. Think of all the things in your life that would be easier if people just stopped trying to make you secure. Airport security is the first that comes to mind. Or the annoying habit of needing to show your ID when you make a credit card purchase. How about systems that scan your email for data loss prevention (DLP) purposes and kick back emails with sensitive data that you absolutely need to share?

Security only benefits us when it’s unobtrusive yet visibly reassuring. People want security that works yet doesn’t get in their way. And when it does, they will go out of their way to do anything they can to bypass it. Some of the most elaborate procedures I’ve ever seen to get around security lockouts happened because people pushed back against the system.

Cases in point? The US Air Force was forced to put a code on nuclear missiles to protect them from being accidentally launched at the height of the Cold War. What did they make that code? 00000000. No, really. How about the more recent spate of issues with the US transition to Chip-and-Signature credit card authentication as opposed to the old swipe method? Just today I was confronted with a card reader that had a piece of paper shoved in the chip reader slot saying “Please Swipe”. Reportedly it’s because transactions are taking 10 seconds or more to process. Much more secure for sure, but far too slow for busy people on the go, I guess.

Computers don’t get imaginative when it comes to overcoming security. They follow the rules. When something happens that violates a rule or triggers a policy to deny an action, that policy rule is executed. No exceptions. When an incoming connection is denied at a firewall, that connection is dropped. When the rule says to allow it, it is allowed. Computers are very binary like that (yes, pun intended).
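That black-and-white behavior is easy to see in a toy first-match rule walk. The rule table and fields below are hypothetical, but the behavior is the point: the first rule that matches decides, and the computer doesn’t negotiate.

```python
import ipaddress

# Hypothetical rule table: the first rule that matches decides, no exceptions.
RULES = [
    {"src": "10.0.0.0/8", "port": 22,   "action": "deny"},
    {"src": "any",        "port": 443,  "action": "allow"},
    {"src": "any",        "port": None, "action": "deny"},  # the implicit default deny
]

def evaluate(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        src_match = rule["src"] == "any" or ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        port_match = rule["port"] is None or rule["port"] == dst_port
        if src_match and port_match:
            return rule["action"]  # first match wins; no judgment calls
    return "deny"

print(evaluate("10.1.2.3", 22))      # deny -- the rule says so, end of story
print(evaluate("203.0.113.7", 443))  # allow
```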

Bring The Mountain To Them

We’ve spent a huge amount of time and effort making security unobtrusive. Think of Apple’s Touch ID. It created a novel and secure way to encourage users to put passcode locks on their phones. People can now just unlock their phone with a thumbprint instead of a long passcode. Yet even Touch ID was slow at first and took some acclimation. Then it was sped up to the point where it caused issues for the way people checked their phones for notifications. Apple has even gone to greater lengths in iOS 10, introducing features to work around the fast Touch ID authentication times caused by the new sensors.

Technology will always be one or more steps ahead of where people want it to be. It will always work faster than people think and cause headaches when it behaves in a contrary way. The key to solving security issues related to people is not to try and outsmart them with a computer. People are far too inventive to lose that battle. Even the most computer illiterate person can find a way to bypass a lockout or write a domain administrator password on a sticky note.


Tom’s Take

We need to teach people to think about security from a perspective of need. Why do we have complex passwords? Why do we need to rotate them? Why do the doors of a mantrap open separately? People can understand security when it’s explained in a way that makes them understand the purpose. They still may not like it, but at least they won’t be trying to circumvent it any longer. We hope.

Thoughts On Encryption


The debate on encryption has heated up significantly in the last couple of months. Most of the recent discussion has revolved around a particular device in a specific case, but encryption is older than that. Modern encryption systems represent the culmination of centuries of work on keeping things from being seen.

Encryption As A Weapon

Did you know that twenty years ago the U.S. Government classified encryption as a munition? Data encryption was classified as a military asset and placed on the U.S. Munitions List as an auxiliary asset. The control of encryption as a military asset meant that exporting strong encryption to foreign countries was against the law. For a number of years the only thing that could be exported without fear of legal impact was regular old Data Encryption Standard (DES) methods. Even 3DES, which is theoretically much stronger but practically not much better than its older counterpart, was restricted for export to foreign countries.

While the rules around encryption export have been relaxed since the early 2000s, there are still some restrictions in place. Those rules apply to countries that are on U.S. Government watch lists as terror states or governments deemed “rogue” states. This is why you must attest to not living in or doing business with one of those named countries when you download any software that contains encryption protocols. The problem today is that almost every piece of software includes some form of encryption thanks to ubiquitous libraries like OpenSSL.

Even the father of Pretty Good Privacy (PGP) was forced to circumvent U.S. law to get PGP into the hands of the world. Phil Zimmermann took the novel approach of printing the entirety of PGP’s source code in book form and selling it far and wide. Since books are protected speech, no one could stop them from being sold. The only barrier to recreating PGP from the text was how fast one could type. Today, PGP is one of the most popular forms of encrypting written communications, such as email.

Encryption As A Target

Today’s issues with encryption are rooted in the idea that it shouldn’t be available to people that would use it for “bad” reasons. However, instead of being able to draw a line around a place on a map and say “The people inside this line can’t get access to strong encryption”, we now live in a society where strong encryption is available on virtually any device to protect the growing amount of data we store there. Twenty years ago no one would have guessed that we could use a watch to pay for coffee, board an airplane, or communicate with loved ones.

All of that capability comes with a high information cost. Our devices need to know more and more about us in order to seem so smart. The amount of data contained in innocuous things causes no end of trouble should that data become public. Take the amount of data contained on the average boarding pass. That information is enough to know more about you than is publicly available in most places. All from one little piece of paper.

Keeping that information hidden from prying eyes is the mission of encryption. The spotlight right now is on the government and its predilection for looking at communications. Even the NSA once stated that strong encryption abroad would weaken the ability of its own technology to crack signals intelligence (SIGINT) communications. Instead, the NSA embarked on a policy of sniffing data before it was ever encrypted, installing backdoors at ISPs and other points to grab the data in flight. Add in the recent vulnerabilities found in the key exchange process and you can see why robust encryption is critical to protecting data.

Weakening encryption so it can be easily overcome by brute force is asking for a huge Pandora’s box to be opened. Perhaps in the early nineties it was unthinkable for someone to command enough compute resources to overcome the large-number math behind encryption. Today it’s not unheard of to have control over resources vast enough to reverse engineer simple problems in a matter of hours or days instead of weeks or years. Every time a new vulnerability comes out that uses vast computing power to break that math, it weakens us all.
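Some back-of-the-envelope arithmetic shows why deliberately shortened keys are the ones that fall to brute force. The guesses-per-second figure below is an assumed round number for illustration, not a benchmark of any real attacker:

```python
# The guesses-per-second rate is an assumed round number, not a benchmark.
GUESSES_PER_SECOND = 1e12                 # one trillion key attempts per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for name, bits in [("40-bit export grade", 40), ("56-bit DES", 56), ("128-bit AES", 128)]:
    years_to_exhaust = (2 ** bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{name}: ~{years_to_exhaust:.1e} years to exhaust the keyspace")
```

At that assumed rate the 40-bit and 56-bit keyspaces fall in seconds to hours, while the 128-bit keyspace still takes longer than the age of the universe. The weakness has to be designed in for brute force to matter.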


Tom’s Take

Encryption isn’t about one device. It’s not about one person. It’s not even about a group of people that live in a place with lines drawn on a map that believe a certain way. It’s about the need for people to protect their information from exposure and harm. It’s about the need to ensure that that information can’t be stolen or modified or rerouted. It’s not about setting precedents or looking good for people in the press.

Encryption comes down to one simple question: if you dropped your phone on the floor of DefCon or Black Hat, would you feel comfortable knowing that breaking into it would take longer than the average attacker would care to spend before moving on to an easier target or a more reliable method? If the answer is “yes”, then perhaps you know which side of the encryption debate you’re on without even asking the question.

Will Cisco Shine On?


Cisco announced their new Digital Ceiling initiative today at Cisco Live Berlin. Here’s the marketing part:

And here’s the breakdown of protocols and stuff:

Funny enough, here’s a presentation from just three weeks ago at Networking Field Day 11 on a very similar subject:

Cisco is moving into the Internet of Things (IoT) big time. They have at least learned that the consumer side of IoT isn’t a fun space to play in. With the growth of cloud connectivity and other developments on that side of the market, Cisco knows that is an uphill battle not worth fighting. It seems they’ve learned from Linksys and Flip Video. Instead, they are targeting the industrial side of the house. That means trying to break into some networks that are very well put together today, even if they aren’t exactly Internet-enabled.

Digital Ceiling isn’t just about the PoE lighting that was announced today. It’s a framework that allows all kinds of otherwise dumb devices to be configured and attached to networks that have intelligence built in. The Constrained Application Protocol (CoAP) is designed to provide data about a great number of device types, not just lights. Yet lights are the launch “thing” for this line. And it could be lights out for Cisco.
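For a sense of what talking CoAP actually looks like, here’s a minimal sensor read using the third-party aiocoap Python library. The device address and resource path are made up, and this is plain CoAP rather than anything specific to Digital Ceiling:

```python
# Requires the third-party aiocoap library (pip install aiocoap).
# The device hostname and resource path are made up for illustration.
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://light-fixture.example.local/sensors/lux")
    response = await ctx.request(request).response
    print("response code:", response.code)
    print("payload:", response.payload.decode())

asyncio.run(read_sensor())
```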

A Light In The Dark

Cisco wants in on the possibility that PoE lighting will be a huge market. No other networking vendor that I know of is moving into the market, and no other building automation company has the manufacturing chops to pull off an entire connected infrastructure for lighting. But lighting isn’t something to take lightly (pun intended).

There’s a lot that goes into proper lighting planning. The locations of fixtures and the power levels of devices aren’t accidents. They require a lot of planning and preparation, which means there are teams of architects and lighting designers with formulas and expertise about where everything should go. Those people don’t work on the networking team. Any change to the lighting plan is going to require input from these folks to make sure the illumination patterns don’t change. It’s not exactly like changing a lightbulb.

The other thing that is going to cause problems is the electricians’ union. These folks are trained and certified to install anything that has power running to it. They aren’t just going to step aside and let untrained networking people start pulling down light fixtures and putting up something new. Finding new 60-watt LED lights in a building that they didn’t install is going to raise concerns, and it will take a lot of investigation to determine whether it’s even legal in certain areas for non-union, non-certified employees to install equipment that only electricians handle today.

The next item of concern is the fact that you now have two parallel networks running in the building. Because everyone that I’ve talked to about PoE Lighting and Digital Ceiling has had the same response: Not On My Network. The switching infrastructure may be the same, but the location of the closets is different. The requirements of the switches are different. And the air gap between the networks is crucial to avoid any attackers compromising your lighting infrastructure and using it as an on-ramp into causing problems for your production data network.

The last issue in my mind is the least technically challenging, but the most concerning from the standpoint of longevity of the product line – Where’s the value in PoE lighting? Every piece of collateral I’ve seen and every person I’ve heard talk about it comes back to the same points. According to the experts, it’s effectively the same cost to install intelligent PoE lighting as it is to stick with traditional offerings. But that “effective” word makes me think of things like Tesla’s “Effective Lease Payment”.

By saying “effective”, what Cisco is telling you is that the up-front cost of a Digital Ceiling deployment is likely to be expensive. That large initial number comes down through things like electricity cost savings and increased efficiencies, or any one of a number of clever things we tell each other to pretend that it doesn’t cost lots of money to buy new things. It’s important to evaluate the cost of a Digital Ceiling deployment completely on its own before you start factoring in savings that won’t show up for months or years.


Tom’s Take

I’m not sure where IoT is going. There’s a lot of learning that needs to happen before I feel totally comfortable talking about the pros and cons of having billions of devices connected and talking to each other. But in this time of baby steps toward solutions, I can honestly say that I’m not entirely sold on Digital Ceiling. It’s clever. It’s catchy. But it ultimately feels like Cisco is headed down a path that will lead to ruin. If they can get CoAP working on many other devices and start building frameworks and security around all these devices then there is a chance that they can create a lasting line of products that will help them capitalize on the potential of IoT. What worries me is that this foray into a new realm will be fraught with bad decisions and compromises and eventually we’ll fondly remember Digital Ceiling as yet another Cisco product that had potential and not much else.

Cisco and OpenDNS – The Name Of The Game?


This morning, Cisco announced their intent to acquire OpenDNS, a security-as-a-service (SaaS) provider based around the idea of using the Domain Name System (DNS) as a method for preventing the spread of malware and other exploits. I’ve used the OpenDNS free offering in the past as a way to offer basic web filtering to schools without funds, as well as using OpenDNS at home for speedy name resolution when my local name servers have failed me miserably.

This acquisition is curious to me. It seems to be a line of business that is totally alien to Cisco at this time. There are a couple of interesting opportunities that have arisen from the discussions around it, though.

Internet of Things With Names

The first and most obvious synergy between Cisco and OpenDNS is around the Internet of Things (IoT), or the Internet of Everything (IoE) as Cisco has branded their offering. IoT/IoE has gotten a huge amount of attention from Cisco in the past 18 months as more and more devices come online, from thermostats to appliances to light sockets. The number of formerly dumb devices that now have wireless radios and computers to send information is staggering.

All of those devices depend on certain services to work properly. One of those services is DNS. IoT/IoE devices aren’t going to use raw IP addresses to communicate with cloud servers, because IoT relies on public cloud offerings to connect devices and dashboards. As I said last year, capacity and mobility can be ensured by using AWS, Google Cloud, or Azure to host the servers that IoT/IoE devices communicate with.

The easiest way to communicate with AWS instances is via DNS. This ensures that a service can be mobile and fault tolerant. That’s critical to ensure the service never goes down. Losing your laptop or your phone for a few minutes is annoying but survivable. Losing a thermostat or a smoke detector is a safety hazard. Services that need to be resilient need to use DNS.
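The mechanics here are mundane, which is exactly the point. One name can resolve to several addresses, and those addresses can change without anyone touching the device. A quick sketch using nothing but the Python standard library (the hostname is just an example):

```python
import socket

# One name, potentially many addresses -- and they can change without touching the device.
# The hostname is just an example.
addresses = {info[4][0] for info in socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)}
print(addresses)
```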

More than that, with control of OpenDNS, Cisco now has a walled DNS garden that they can populate with Cisco service entries. Rather than allowing IoT/IoE devices to inherit local DNS resolution from a home ISP, they can hard code the DNS name servers in the device and ensure that the only resolution used is controlled by Cisco. This means they can activate new offerings and services and ensure that they are reachable by the devices. It also allows them to police the entries in DNS and prevent people from creating “workarounds” to enable or disable features and functions. Walled-garden DNS is as important to IoT/IoE as the walled-garden app store is to mobile devices.
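From the device’s point of view, hard-coded resolvers look something like the sketch below, which ignores whatever DNS the local ISP hands out and asks a pinned set of name servers instead. It uses the third-party dnspython library and OpenDNS’s well-known public resolver addresses; the queried name is just an example.

```python
# Uses the third-party dnspython library (pip install dnspython, 2.x API).
# The queried name is just an example; the resolver addresses are OpenDNS's public ones.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)            # ignore the ISP-supplied resolvers
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]  # pinned OpenDNS name servers

for record in resolver.resolve("example.com", "A"):
    print(record.address)
```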

Predictive Protection

The other thing hinted at in the acquisition post from Cisco is the professional offerings from OpenDNS. The OpenDNS Umbrella security service helps enterprises protect themselves from malware and security breaches through control and visibility. There is also a significant amount of security intelligence available thanks to the volume of traffic OpenDNS processes every day. This gives them insight into the state of the Internet as well as the ability to trace infection vectors and identify threats at their origin.

Cisco hopes to utilize this predictive intelligence in their security products to aid in fast identification and mitigation of threats. By combining OpenDNS with Sourcefire and IronPort, the hope is that this giant software machine will be able to protect customers faster, before they get exposed, embarrassed, or even sued for negligence.

The part that worries me about that superior predictive intelligence is how it’s gathered. If the only source of that information comes from paying OpenDNS customers then everything should be fine. But I can almost guarantee that users of the free OpenDNS service (like me) are also information sources. It makes the most sense for them. Free users provide information for the paid service. Paid users are happy at the level of intelligence they get, and those users pay for the free users to be able to keep using those features at no cost. Win/win for everyone, right?

But what happens if Cisco decides to end the free offering from OpenDNS? Let’s think about that a little. If free users are locked out of OpenDNS or required to pay even a small nominal fee, that source of information disappears from the database. Losing that information reduces the visibility OpenDNS has into the Internet and slows their ability to identify and vector threats quickly. Paying users then lose effectiveness of the product and start leaving in droves. That loss accelerates the failure of that intelligence, and any products relying on it become less effective too. A downward spiral of disaster.


Tom’s Take

The solution for Cisco is very easy. In order to keep the effectiveness of OpenDNS and their paid intelligence offerings, Cisco needs to keep the free offering and not lock users out of using their DNS name servers at no cost. Adding IoT/IoE into the equation helps somewhat, but Cisco has to have the information from the small enterprises and schools that use OpenDNS. It benefits everyone for Cisco to let OpenDNS operate just as they have for the past few years. Cisco gains significant intelligence for their security offerings. They also gain the OpenDNS customer base to sell new security devices to. And free users gain the staying power of a brand like Cisco.

Thanks to Greg Ferro (@EtherealMind), Brad Casemore (@BradCasemore) and many others for the discussion about this today.

The Walls Are On Fire

There’s no denying the fact that firewalls are a necessary part of modern perimeter security. NAT isn’t a security construct. Attackers have the equivalent of megaton nuclear arsenals, with access to so many DDoS networks. Security admins have to do everything they can to prevent these problems from happening. But one look at the firewall market tells you something is terribly wrong.

Who’s Protecting First?

Take a look at this recent magic polygon from everyone’s favorite analyst firm:

FW Magic Polygon. Thanks to @EtherealMind.

I won’t deny that Checkpoint is on top. That’s mostly due to the fact that they have the biggest install base in enterprises. But I disagree with the rest of this mystical tesseract. How is Palo Alto a leader in the firewall market? I thought their devices were mostly designed around mitigating internal threats? And how is everyone not named Cisco, Palo Alto, or Fortinet relegated to the Niche Players corral?

The issue comes down to purpose. Most firewalls today aren’t packet filters. They aren’t designed to keep the bad guys out of your networks. They are unified threat management systems. That’s a fancy way of saying they have a whole bunch of software built on top of the packet filter to monitor outbound connections as well.

Insider threats are a huge security issue. People on the inside of your network have access. They sometimes have motive. And their power goes largely unchecked. They do need to be monitored by something, whether it be an IDS/IPS system or a data loss prevention (DLP) system that keeps sensitive data from leaking out. But how did all of those devices get lumped together?

Deploying security monitoring tools is as much art as it is science. IPS sensors can be placed in strategic points of your network to monitor traffic flows and take action on them. If you build it correctly you can secure a huge enterprise with relatively few systems.

But more is better, right? If three IPS units make you secure, six would make you twice as secure, right? No. What you end up with is twice as many notifications. Those start getting ignored quickly. That means the real problems slip through the cracks because no one pays attention any more. So rather than deploying multiple smaller units throughout the network, the new mantra is to put an IPS in the firewall in the front of the whole network.

The firewall is the best place for those sensors, right? All the traffic in the network goes through there after all. Well, except for the user-to-server traffic. Or traffic that is internally routed without traversing the Internet firewall. Crafty insiders can wreak havoc without ever touching an edge IPS sensor.

And that doesn’t even begin to describe the processing burden placed on the edge device by loading it down with more and more CPU-intensive software. Consider the following conversation:

Me: What is the throughput on your firewall?

Them: It’s 1 Gbps!

Me: What’s the throughput with all the features turned on?

Them: Much less than 1 Gbps…

When a selling point of your UTM firewall is that the numbers are “real”, you’ve got a real problem.

What’s Protecting Second?

There’s an answer out there to fix this issue: disaggregation. We now have the capability to break out the various parts of a UTM device and run them all in virtual software constructs thanks to Network Function Virtualization (NFV). And they will run faster and more efficiently. Add in the ability to use SDN service chaining to ensure packet delivery and you have a perfect solution. For almost everyone.

Who’s not going to like it? The big UTM vendors. The people that love selling oversize boxes to customers to meet throughput goals. Vendors that emphasize that their solution is the best because there’s one dashboard to see every alert and issue, even if those alerts don’t have anything to do with each other.

UTM firewalls that can reliably scan traffic at 1 Gbps are rare. Firewalls that can scan 10 Gbps traffic streams are practically non-existent. And what is out there costs a not-so-small fortune. And if you want to protect your data center, you’re going to need a few of them. That’s a mighty big check to write.


Tom’s Take

There’s a reason why we call it Network Function Virtualization. The days of trying to cram every possible feature you could think of onto a single piece of hardware are over. We don’t need complicated all-in-one boxes with insanely large CPUs. We have software constructs that can take care of all of that now.

While the engineers will like this brave new world, there are those that won’t. Vendors of the single box solutions will still tell you that their solution runs better. Analyst firms with a significant interest in the status quo will tell you NFV solutions are too far out or don’t address all the necessary features. It’s up to you to sort out the smoke from the flame.