Trust Will Do You In

If you’re a fan of the Gestalt IT Rundown that I do every week on the Gestalt IT YouTube channel, you have probably heard about the recent hacks of NVIDIA and Samsung. The original investigation into those hacks talked about MDM platforms and other vectors being used to gain access to the information the hacking groups obtained. An interesting tweet popped up on my feed yesterday that helped me reframe the attacks:

It would appear that the group behind these attacks is going after its targets the old-fashioned way. With people. For illustration, see this XKCD strip from 2009:

The Weakest Links

People are always the weakest link in any security situation. They choose to make something insecure through bad policy or by trying to evade the policy. Perhaps they are trying to do harm to the organization, or even trying to shine a light on corrupt practices. Whatever the reason, people are the weak link, because you can change hardware or software to eliminate failures and bugs. You can’t reprogram people.

We’ve struggled for years to keep people out of our systems. Perimeter security and bastion hosts were designed to make sure that the bad actors stayed off our stage. Alas, as we’ve gotten more creative about stopping them, we’ve started to realize that more and more of the attacks aren’t coming from outside but instead from inside.

There are whole categories of solutions dedicated to stopping internal attackers now. Data Loss Prevention (DLP) can catch data being exfiltrated by sophisticated attackers, but it is more often used to prevent people from leaking sensitive data either accidentally or purposefully. There are also solutions that monitor access to systems and replay logs to find out how internal systems folks were able to get privileges they shouldn’t have.

To me, this is the crux of the issue. As much as we want to create policies that prevent people from exceeding their authority, we seem to have a hard time putting them into practice. For every well-meaning solution or rule designed to keep someone from gaining access to something, or to keep them secure, you will have someone making a shortcut around it. I’ve done it myself, so I know it’s pretty common. For every rule that’s supposed to keep me safe, I have an example of a way I went around it because it got in my way.

Who Do YOU Trust?

This is one of the reasons why a Zero Trust Network Architecture (ZTNA) appeals to me. At its core it makes a very basic assumption that I learned from The X-Files: Trust No One. If you can’t verify who you are, you can’t gain access. And even then you only gain access to the bare minimum you need, with no ability to move laterally.
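As a thought experiment, here’s a minimal Python sketch of that default-deny mindset. The identities, resources, and allow-list are all hypothetical; the point is simply that nothing is granted unless the caller is verified and the specific resource is explicitly on their list, with no inherited or lateral access.

```python
# Minimal default-deny sketch (hypothetical names and data).
# Every request must carry a verified identity, and that identity
# only ever sees the exact resources it was granted -- nothing else.

VERIFIED_IDENTITIES = {"alice@corp.example", "build-bot@corp.example"}

# Bare-minimum grants: identity -> set of resources it may touch.
ALLOW_LIST = {
    "alice@corp.example": {"hr-portal"},
    "build-bot@corp.example": {"artifact-store"},
}

def authorize(identity: str, resource: str) -> bool:
    """Default deny: unverified callers and unlisted resources both fail."""
    if identity not in VERIFIED_IDENTITIES:
        return False  # can't verify who you are -> no access
    return resource in ALLOW_LIST.get(identity, set())

# There is no "trusted network" or "already inside" shortcut here:
# being allowed onto hr-portal says nothing about artifact-store.
assert authorize("alice@corp.example", "hr-portal") is True
assert authorize("alice@corp.example", "artifact-store") is False
assert authorize("mallory@evil.example", "hr-portal") is False
```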

We have systems that operate like this today. Call your doctor and ask them a question. They will first need to verify that you are who you say you are with information like your birthdate or some other key piece of Personally Identifiable Information (PII). Why? Because if they make a recommendation for a medication and you’re not the person they think they’re talking to, there can be a big problem.

Computer systems should work the same way, shouldn’t they? We should need to verify who we are before we can access important data or change a configuration setting. Yet we constantly see blank passwords or people logging in to a server as a superuser to “just make it easier”. And when someone gains access to that system through clever use of a wrench, as in the comic above, you should be able to see the potential for disaster.

ZTNA just says you can’t do that. Period. If the policy says no superuser logins from remote terminals, it means it. If the policy says no sensitive data access from public networks, then that is the law. And no amount of effort to short-circuit the system is going to work.
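To make “the policy means it” concrete, here’s a hedged sketch of those two rules expressed as hard deny rules that get evaluated before anything else and have no bypass flag. The request fields and values are invented for illustration, not taken from any particular ZTNA product.

```python
from dataclasses import dataclass

# Hypothetical request attributes; a real ZTNA broker would derive these
# from device posture, network location, and an authenticated identity.
@dataclass
class Request:
    identity: str
    role: str            # e.g. "superuser", "analyst"
    source: str          # e.g. "remote-terminal", "corp-lan", "public-network"
    resource_class: str  # e.g. "sensitive-data", "wiki"

def violates_hard_policy(req: Request) -> bool:
    """Deny rules are checked first and cannot be short-circuited."""
    if req.role == "superuser" and req.source == "remote-terminal":
        return True   # no superuser logins from remote terminals
    if req.resource_class == "sensitive-data" and req.source == "public-network":
        return True   # no sensitive data access from public networks
    return False

def decide(req: Request) -> str:
    if violates_hard_policy(req):
        return "deny"      # there is no bypass parameter, on purpose
    return "continue"      # hand off to the normal allow-list checks

print(decide(Request("root@corp.example", "superuser", "remote-terminal", "wiki")))
# -> deny, no matter how inconvenient that is for whoever is asking
```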

This is where I think the value of ZTNA is really going to help modern enterprises. It’s not the nefarious actor looking to sell customer lists that creates most security issues. That does happen, but not nearly as often as an executive who wants a special exception for the proxy server because one of their applications doesn’t work properly. Or maybe it’s a developer who created a connection from a development server into production because it was easier to copy data back and forth that way. Whatever the reasons, these inadvertent security issues cause chaos because they are configured and forgotten. At least until someone hacks you and you end up on the news.

ZTNA forces you to look at your organization and justify why things are the way they are. Think of it like a posture audit with immediate rule creation. If development servers should never talk to production units, then that is the rule. If you want that to happen for some strange reason, you have to explicitly configure it. And your name is attached to that configuration, so you know who did it. Hopefully something like this either requires sign-off from multiple teams or triggers a notification to the SOC, which then comes in to figure out why the policy was violated. At best it’s an accident or a training situation. At worst you may have just caught a bad actor before they step into the limelight.
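Here’s a rough sketch of that exception-with-a-name idea, assuming a hypothetical workflow: an exception is a record that must carry its requester, it only takes effect with multi-team sign-off, and creating it notifies the SOC. The names and the notification hook are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PolicyException:
    rule: str                    # e.g. "dev-servers-may-not-reach-prod"
    justification: str
    requested_by: str            # a name is always attached to the change
    approved_by: list[str] = field(default_factory=list)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def notify_soc(message: str) -> None:
    # Stand-in for a real alerting integration (ticket, pager, SIEM event).
    print(f"[SOC ALERT] {message}")

def create_exception(exc: PolicyException, required_approvals: int = 2) -> PolicyException:
    """Exceptions only take effect with multi-team sign-off, and the SOC hears about it."""
    if len(exc.approved_by) < required_approvals:
        raise PermissionError(f"'{exc.rule}' exception needs {required_approvals} approvals")
    notify_soc(f"{exc.requested_by} enabled an exception to '{exc.rule}': {exc.justification}")
    return exc

create_exception(PolicyException(
    rule="dev-servers-may-not-reach-prod",
    justification="one-time data copy for migration testing",
    requested_by="dev.lead@corp.example",
    approved_by=["netops@corp.example", "secops@corp.example"],
))
```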


Tom’s Take

Security isn’t perfect. We can always improve. Every time we build a good lock, someone can build a better way to pick it. The real end goal is to make things sufficiently secure that we don’t have to worry about them being compromised with no effort, while at the same time keeping them easy enough for people to get their jobs done. ZTNA is an important step because it creates rules and puts teeth behind them, preventing easy compromise by people who are just trying to err on the side of easy. If you don’t already have plans to include ZTNA in your enterprise, you really should start looking into it now. I’d tell you to trust me, but not trusting me is the point.