Friction as a Network Security Concept

I had the recent opportunity to record a podcast with Curtis Preston about security, data protection, and networking. I loved being a guest, and in the episode we covered quite a bit about how networking operates and how to address ransomware issues when they arise. I wanted to expand on some of those concepts here to flesh out the advice we discussed.

Compromise is Inevitable

If there’s one thing I could say that would make everything make sense it’s this: you will be compromised. It’s not a question of if. You will have your data stolen or encrypted at some point. The question is really more about how much gets taken or how effectively attackers are able to penetrate your defenses before they get caught.

Defenses are designed to keep people out. But they also need to be designed to contain damage. Think about a ship on the ocean. Those giant bulkheads aren't just there for looks. They're designed to act as compartments that seal off areas in case of catastrophic damage. The ship doesn't assume it's never going to have a leak. Instead, the designers built it so that when a leak does happen the damage is contained and the ship keeps floating. Without those containment systems even the smallest problem can bring the whole ship down.

Likewise, you need to design your network to be able to contain areas that could be impacted. One giant flat network is a disaster waiting to happen. A network with a DMZ for public servers is a step in the right direction. However, you need to take it further than that. You need to isolate critical hosts. You need to put devices on separate networks if they have no need to directly talk to each other. You need to ensure management interfaces are in a separate, air-gapped network that has strict access controls. It may sound like a lot of work but the reality is that failure to provide isolation will lead to disaster. Just like a leak on the ocean.
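The bulkhead idea can be sketched as data: a set of zones plus an explicit list of which zone-to-zone flows are permitted, with everything else denied. Here's a minimal illustration in Python; the zone names and flows are hypothetical examples, not a recommendation for any particular topology.

```python
# A minimal sketch of segmentation as data: zones and the flows allowed
# between them. Everything not listed is denied, like a sealed bulkhead.
ALLOWED_FLOWS = {
    ("internet", "dmz"),       # public traffic terminates in the DMZ
    ("dmz", "app"),            # DMZ servers may reach the app tier
    ("app", "database"),       # only the app tier touches the database
    ("mgmt", "app"),           # management network reaches hosts one-way
    ("mgmt", "database"),
}

def is_allowed(src_zone: str, dst_zone: str) -> bool:
    """Deny by default: a flow is permitted only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A compromised DMZ host cannot reach the database directly...
assert not is_allowed("dmz", "database")
# ...and nothing reaches the management network at all.
assert not is_allowed("app", "mgmt")
```

The point of modeling it this way is that the deny-by-default stance is structural: a flow you never thought about is automatically blocked, which is exactly the containment a flat network lacks.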

The key here is that the controls you put in place create friction with your attackers. That’s the entire purpose of defense in depth. The harder it is for attackers to get through your defenses the more likely they are to give up earlier or trigger alarms designed to warn you when it happens. This kind of friction is what you want to see. However, it’s not the only kind of friction you face.

Failing Through Friction

Your enemy in this process isn't nefarious actors. It's not technology. Instead, it's the bad kind of friction. Security by its very nature creates friction with systems. Networks are designed to transmit data. Security controls are designed to prevent the transmission of data. Bad friction arises where those two purposes collide. Did you open the right ports? Are the access control lists denying a protocol that should be working? Did you allow the right VLANs on the trunk port?

Friction between controls is maddening but it’s a solvable problem with time. The real source of costly friction comes when you add people into the mix. Systems don’t complain about access times. They don’t call you about error messages. And, worst of all, they don’t have the authority to make you compromise your security controls for the sake of ease-of-use.

Everyone in IT has been asked at some point to remove a control or piece of software for the sake of users. In organizations where the controls are strict or regulatory issues are at stake, those requests are usually refused. However, when the executives are particularly insistent or the IT environment is more carefree, you can find yourself putting in a shortcut to get the CEO's laptop connected faster or to let their fancy new phone connect without a captive portal. Often the results are happy and have no impact. That is, until someone finds out they can get in through your compromised control and create a lot of additional friction.

How can you reduce friction? One way is to create more friction in the planning stages. Ask lots of questions about ports, protocols, and access list requirements before something is implemented. Do your homework ahead of time instead of trying to figure it out on the fly. If you know that a software package needs to communicate with four specific addresses on eight specific ports, then anything outside of that list should be treated as suspect and examined. Likewise, if someone can't tell you what ports need to be opened for a package to work, push back until they can give you that information. Better to spend time up front learning than to spend more time later triaging.
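That homework can be captured as a simple allowlist check. Here's a hedged sketch in Python using made-up addresses from the documentation ranges; the idea is simply that any observed flow outside the documented requirements gets flagged for examination rather than silently permitted.

```python
from ipaddress import ip_address, ip_network

# Hypothetical vendor requirements gathered up front: the package talks
# to these networks on these ports, and nothing else.
REQUIRED = [
    (ip_network("203.0.113.0/28"), {443, 8443}),   # app servers
    (ip_network("198.51.100.16/30"), {5432}),      # database replicas
]

def flow_is_expected(dst: str, port: int) -> bool:
    """True only if the destination/port pair appears in the documented list."""
    addr = ip_address(dst)
    return any(addr in net and port in ports for net, ports in REQUIRED)

# Anything outside the documented list should be treated as suspect.
observed = [("203.0.113.5", 443), ("198.51.100.17", 5432), ("192.0.2.9", 25)]
suspect = [flow for flow in observed if not flow_is_expected(*flow)]
```

Run against the sample flows above, only the undocumented SMTP connection to `192.0.2.9` ends up in `suspect`, which is exactly the kind of traffic worth questioning before it becomes a triage exercise.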

The other way to reduce friction in implementation is to shift the friction to policy. If the executives want you to compromise a control for the sake of their own use, make them document it. Have them put in writing that you have been directed to add a special configuration just for them. Keep that information stored in your DR plan and note it in your configuration repositories as well. Even a comment in the access list can help explain why you had to do something a certain way. Often the mere request to document the special change will have the executives questioning the choice. More importantly, if something does go sideways you have evidence of why the change was made. And for executives who don't like to look like fools, nothing stops these one-off policy changes faster than the prospect of answering a reporter's questions when something goes wrong.


Tom’s Take

Friction is the real secret of security. When properly applied it prevents problems. When it’s present in too many forms it causes frustration and eventually leads to abandonment of controls or short circuits to get around them. The key isn’t to eliminate it entirely. Instead you need to apply it properly and make sure to educate about why it exists in the first place. Some friction is important, such as verifying IDs before entering a secure facility. The more that people know about the reasons behind your implementation the less likely they are to circumvent it. That’s how you keep the bad actors out and the users happy.

Mind the Air Gap

I recently talked to some security friends on a CloudBytes podcast recording that will be coming out in a few weeks. One of the things that came up was the idea of an air gapped system or network that represents the ultimate in security. I had a couple of thoughts that felt like a great topic for a blog post.

The Gap is Wide

I can think of a ton of classical air gapped systems that we've seen in the media. Think about Mission: Impossible and the system that holds the NOC list.

Makes sense, right? Totally secure, unless you have Tom Cruise in your ductwork. It's about as safe as you can make your data. It's also about as inconvenient as you can make your data. Want to protect a file so no one can ever steal it? Make it so no one can ever access it! That works great for data that doesn't need to be updated regularly or even analyzed at any point. It's offline for all intents and purposes.

Know what works great as an air gapped system? Root certificate authority servers. Even Microsoft agrees. So secure that you have to dig it out of storage to ever do anything that requires root. Which means you’re never going to have to worry about it getting corrupted or stolen. And you’re going to spend hours booting it up and making sure it’s functional if you ever need to do anything with it other than watch it collect dust.

Air gapping systems feels like the last line of defense to me. This data is so important that it can never be exposed in any way to anything that might ever read it. However, the concept of a gap between systems is more appealing from a security perspective. Because you can create gaps that aren’t physical in nature and accomplish much of the same idea as isolating a machine in a cool vault.

Gap Insurance

By now you probably know that concepts like zero-trust network architecture are the key to isolating systems on your network. You don't remove them from being accessed. Instead, you restrict their access levels. You make users prove they need to see the system instead of just creating a generic list. If you think of ZTNA as a classification system, its workings become clearer: it's not just that you have clearance to see the system or the data. You also need to prove your need to access it.

Building on these ideas of ZTNA and host isolation, you can create virtual air gaps that work without the need to physically isolate devices. I've seen some crazy stories about IDS sensors that have the transmit pairs of their Ethernet cables cut so they don't inadvertently trigger a response to an intruder. That's the kind of crazy preparedness that creates more problems than it solves.

Instead, by creating robust policies that prevent unauthorized users from accessing systems you can ensure that data can be safely kept online as well as making it available to anyone that wants to analyze it or use it in some way. That does mean that you’re going to need to spend some time building a system that doesn’t have inherent flaws.

In order to ensure your new virtual air gap is secure, you need to use policy-driven programming instead of manual configuration. Why? Because a well-defined policy is more restrictive than randomly opening ports or creating access lists without context. It's more painful at first because you have to build the right policy before you can implement it. However, a properly built policy can be automated and reused instead of just pasting lines from a text document into a console.

If you want to be sure your system is totally isolated, start with the basics. No traffic to the system. Then you build out from there. Who needs to talk to it? What user or class of users? How should they access it? What levels of security do you require? Answering each of those questions also gives you the data you need to define and maintain policy. By relying on these concepts you don’t need to spend hours hunting down user logins or specific ACL entries to lock someone out after they leave or after they’ve breached a system. You just need to remove them from the policy group or change the policy to deny them. Much easier than sweeping the rafters for super spies.
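The answers to those questions map naturally onto a policy structure. A minimal sketch, assuming hypothetical system and group names: everything starts at deny-all, access is granted through group membership and an approved method, and offboarding becomes a one-line group removal instead of an ACL hunt.

```python
# Deny-by-default policy built by answering: who needs to talk to the
# system, as what class of user, and how? All names are hypothetical.
policy = {
    "finance-db": {
        "allowed_groups": {"dba", "finance-app"},
        "allowed_methods": {"tls"},   # required access method
    },
}
groups = {"dba": {"alice"}, "finance-app": {"svc-reports"}}

def can_access(user: str, system: str, method: str) -> bool:
    rule = policy.get(system)
    if rule is None:
        return False  # no rule defined means no access at all
    in_group = any(user in groups.get(g, set()) for g in rule["allowed_groups"])
    return in_group and method in rule["allowed_methods"]

assert can_access("alice", "finance-db", "tls")
# Locking someone out is one line: remove them from the policy group.
groups["dba"].discard("alice")
assert not can_access("alice", "finance-db", "tls")
```

The design choice worth noticing is that the question "can this user reach this system?" is answered by data you maintain in one place, so revoking access after a departure or a breach doesn't require sweeping individual device configurations.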


Tom’s Take

The concept of an air gapped system works for spy movies. The execution of one needs a bit more thought. Thankfully we've graduated from workstations that need biometric access to a world where we can control the ways that users and groups can access things. By thinking about those systems in new ways, treating them like isolated islands that must be accessed on purpose instead of building an overly permissive infrastructure in the first place, you'll get ahead of a lot of old, technical-debt-laden thinking and build a more secure enterprise, whether on premises or in the cloud.

Trust Will Do You In

If you’re a fan of the Gestalt IT Rundown that I do every week on the Gestalt IT YouTube channel, you have probably heard about the recent hacks of NVIDIA and Samsung. The original investigation into those hacks talked about using MDM platforms and other vectors to gain access to the information that was obtained by the hacking groups. An interesting tweet popped up on my feed yesterday that helped me reframe the attacks.

It would appear that the group behind these attacks is going after its targets the old-fashioned way: with people. For illustration, see the classic XKCD from 2009 about the $5 wrench.

The Weakest Links

People are always the weakest link in any security situation. They choose to make something insecure through bad policy or by trying to evade the policy. Perhaps they are trying to do harm to the organization, or even trying to shine a light on corrupt practices. Whatever the reason, people are the weak link. You can change hardware or software to eliminate failures and bugs. You can't reprogram people.

We’ve struggled for years to keep people out of our systems. Perimeter security and bastion hosts were designed to make sure that the bad actors stayed off our stage. Alas, as we’ve gotten more creative about stopping them we’ve started to realize that more and more of the attacks aren’t coming from outside but instead from inside.

There are whole categories of solutions dedicated to stopping internal attackers now. Data Loss Prevention (DLP) can catch data being exfiltrated by sophisticated attackers, but it is more often used to prevent people from leaking sensitive data either accidentally or purposefully. There are solutions to monitor access to systems and replay logs to find out how internal systems folks were able to get privileges they shouldn't have.

To me, this is the crux of the issue. As much as we want to create policies that prevent people from exceeding their authority we seem to have a hard time putting them into practice. For every well-meaning solution or rule that is designed to prevent someone from gaining access to something or keep them secure you will have someone making a shortcut around it. I’ve done it myself so I know it’s pretty common. For every rule that’s supposed to keep me safe I have an example of a way I went around it because it got in my way.

Who Do YOU Trust?

This is one of the reasons why a Zero Trust Network Architecture (ZTNA) appeals to me. At its core it makes a very basic assumption that I learned from the X-Files: Trust No One. If you can’t verify who you are you can’t gain access. And you only gain access to the bare minimum you need and have no capability for moving laterally.

We have systems that operate like this today. Call your doctor and ask them a question. They will first need to verify that you are who you say you are with information like your birthdate or some other key piece of Personally Identifiable Information (PII). Why? Because if they make a recommendation for a medication and you're not the person they think they're talking to, they can create a big problem.

Computer systems should work the same way, shouldn’t they? We should need to verify who we are before we can access important data or change a configuration setting. Yet we constantly see blank passwords or people logging in to a server as a super user to “just make it easier”. And when someone gains access to that system through clever use of a wrench, as above, you should be able to see the potential for disaster.

ZTNA just says you can't do that. Period. If the policy says no superuser logins from remote terminals, it means it. If the policy says no sensitive data access from public networks, then that is the law. And no amount of work to short-circuit the system is going to get around it.

This is where I think the value of ZTNA is really going to help modern enterprises. It’s not the nefarious actor that is looking to sell their customer lists that creates security issues. It does happen but not nearly as often as an executive that wants a special exception for the proxy server because one of their things doesn’t work properly. Or maybe it’s a developer that created a connection from a development server into production because it was easier to copy data back and forth that way. Whatever the reasons the inadvertent security issues cause chaos because they are configured and forgotten. At least until someone hacks you and you end up on the news.

ZTNA forces you to look at your organization and justify why things are the way they are. Think of it like a posture audit with immediate rule creation. If development servers should never talk to production units, then that is the rule. If you want that to happen for some strange reason, you have to explicitly configure it. And your name is attached to that configuration, so everyone knows who did it. Ideally such a change either requires sign-off from multiple teams or triggers a notification for the SOC, which then comes in to figure out why policy was violated. At best it's an accident or a training situation. At worst you may have just caught a bad actor before they step into the limelight.
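One way to picture that attribution requirement: an exception to the default-deny rule can only be created through a path that records the owner's name and immediately fires a notification. This is a hypothetical sketch, not any particular vendor's API; the `notify` hook stands in for whatever alerting the SOC actually uses.

```python
import datetime

# Sketch: exceptions to "dev never talks to prod" must be explicit,
# attributed, and noisy. All names and the notify hook are hypothetical.
exceptions = []  # each entry records who created the override and why

def add_exception(src, dst, owner, reason, notify):
    """Create a policy exception with a name attached, and alert on it."""
    entry = {
        "flow": (src, dst),
        "owner": owner,   # a name is attached to every override
        "reason": reason,
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    exceptions.append(entry)
    notify(f"Policy exception by {owner}: {src} -> {dst} ({reason})")
    return entry

alerts = []  # stand-in for the SOC notification channel
add_exception("dev", "prod", "tom", "one-time data refresh", alerts.append)
assert any("tom" in alert for alert in alerts)
```

Because the only way to open the flow is through `add_exception`, there is no anonymous "configured and forgotten" path: the record exists the moment the exception does, and someone gets asked about it right away.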


Tom’s Take

Security isn’t perfect. We can always improve. Every time we build a good lock someone can build a better way to pick it. The real end goal is to make things sufficiently secure that we don’t have to worry about them being compromised with no effort while at the same time keeping it easy enough for people to get their jobs done. ZTNA is an important step because it creates rules and puts teeth behind them to prevent easy compromise of the rules by people that are trying to err on the side of easy. If you don’t already have plans to include ZTNA in your enterprise now you really should start looking at them. I’d tell you to trust me, but not trusting me is the point.