Mind the Air Gap

I recently talked to some security friends on a CloudBytes podcast recording that will be coming out in a few weeks. One of the things that came up was the idea of an air gapped system or network that represents the ultimate in security. I had a couple of thoughts that felt like a great topic for a blog post.

The Gap is Wide

I can think of a ton of classical air gapped systems that we’ve seen in the media. Think about Mission: Impossible and the system that holds the NOC list:

Makes sense right? Totally secure unless you have Tom Cruise in your ductwork. It’s about as safe as you can make your data. It’s also about as inconvenient as you can make your data too. Want to protect a file so no one can ever steal it? Make it so no one can ever access it! Works great for data that doesn’t need to be updated regularly or even analyzed at any point. It’s offline for all intents and purposes.

Know what works great as an air gapped system? Root certificate authority servers. Even Microsoft agrees. So secure that you have to dig it out of storage to ever do anything that requires root. Which means you’re never going to have to worry about it getting corrupted or stolen. And you’re going to spend hours booting it up and making sure it’s functional if you ever need to do anything with it other than watch it collect dust.

Air gapping systems feels like the last line of defense to me. This data is so important that it can never be exposed in any way to anything that might ever read it. However, the concept of a gap between systems is more appealing from a security perspective. Because you can create gaps that aren’t physical in nature and accomplish much of the same idea as isolating a machine in a cool vault.

Gap Insurance

By now you probably know that concepts like zero-trust network architecture are the key to isolating systems on your network. You don’t remove them from being accessed. Instead, you restrict their access levels. You make users prove they need to see the system instead of just creating a generic list. If you consider the ZTNA system as a classification method, you can better understand how it works. It’s not just that you have clearance to see the system or the data. You also need to prove your need to access it.
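The clearance-plus-need idea can be sketched in a few lines of Python. All the names, levels, and resources here are invented for illustration; a real ZTNA implementation evaluates far richer context, but the core check is the same: both conditions must hold.

```python
# Hypothetical sketch of "clearance plus need": access requires both a
# sufficient clearance level AND a demonstrated need for this resource,
# not just membership on a generic access list.

def authorize(user, resource):
    """Grant access only if the user passes both checks."""
    has_clearance = user["clearance"] >= resource["classification"]
    has_need = resource["name"] in user["approved_needs"]
    return has_clearance and has_need

analyst = {"clearance": 3, "approved_needs": {"fraud-db"}}
fraud_db = {"name": "fraud-db", "classification": 2}
hr_db = {"name": "hr-db", "classification": 2}

print(authorize(analyst, fraud_db))  # True: clearance and proven need
print(authorize(analyst, hr_db))     # False: clearance, but no proven need
```

Note that the analyst is denied the HR database even though their clearance level is high enough to see it; clearance alone never opens the door.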

Building on these ideas of ZTNA and host isolation, you can create virtual air gaps that work effectively without the need to physically isolate devices. I’ve seen some crazy stories about IDS sensors that have the transmit pairs of their Ethernet cables cut so they don’t inadvertently trigger a response to an intruder. That’s the kind of crazy preparedness that creates more problems than it solves.

Instead, by creating robust policies that prevent unauthorized users from accessing systems you can ensure that data can be safely kept online as well as making it available to anyone that wants to analyze it or use it in some way. That does mean that you’re going to need to spend some time building a system that doesn’t have inherent flaws.

In order to ensure your new virtual air gap is secure, you need to use policy, not manual programming. Why? Because a well-defined policy is more restrictive than just randomly opening ports or creating access lists without context. It’s more painful at first because you have to build the right policy before you can implement it. However, a properly built policy can be automated and reused instead of just pasting lines from a text document into a console.

If you want to be sure your system is totally isolated, start with the basics. No traffic to the system. Then you build out from there. Who needs to talk to it? What user or class of users? How should they access it? What levels of security do you require? Answering each of those questions also gives you the data you need to define and maintain policy. By relying on these concepts you don’t need to spend hours hunting down user logins or specific ACL entries to lock someone out after they leave or after they’ve breached a system. You just need to remove them from the policy group or change the policy to deny them. Much easier than sweeping the rafters for super spies.
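The "start from no traffic and build out" workflow can be modeled as a toy policy object. Everything here is invented for illustration; the point is that default-deny plus group-based grants makes offboarding a one-line change rather than an ACL hunt.

```python
# Toy model of a virtual air gap: default deny, then build out by policy
# group. All system and group names are hypothetical.

class VirtualAirGap:
    def __init__(self, system):
        self.system = system
        self.allowed_groups = set()   # default deny: no traffic at all

    def permit(self, group):
        """Answering 'who needs to talk to it?' becomes a grant."""
        self.allowed_groups.add(group)

    def revoke(self, group):
        # Removing the group IS the offboarding story -- no hunting
        # down stray user logins or specific ACL entries.
        self.allowed_groups.discard(group)

    def can_access(self, user_groups):
        return bool(self.allowed_groups & set(user_groups))

gap = VirtualAirGap("finance-db")
print(gap.can_access({"analysts"}))   # False -- nobody talks to it yet
gap.permit("analysts")
print(gap.can_access({"analysts"}))   # True -- explicit, documented grant
gap.revoke("analysts")                # someone leaves: one change
print(gap.can_access({"analysts"}))   # False again
```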

Tom’s Take

The concept of an air gapped system works for spy movies. The execution of one needs a bit more thought. Thankfully we’ve graduated from workstations that need biometric access to a world where we can control the ways that users and groups can access things. By thinking about those systems in new ways and treating them like isolated islands that must be accessed on purpose, instead of building an overly permissive infrastructure in the first place, you’ll get ahead of a lot of old technical debt-laden thinking and build a more secure enterprise, whether on premises or in the cloud.

Trust Will Do You In

If you’re a fan of the Gestalt IT Rundown that I do every week on the Gestalt IT YouTube channel, you have probably heard about the recent hacks of NVIDIA and Samsung. The original investigation into those hacks talked about using MDM platforms and other vectors to gain access to the information that was obtained by the hacking groups. An interesting tweet popped up on my feed yesterday that helped me reframe the attacks:

It would appear that the group behind these attacks is going after its targets the old-fashioned way: with people. For illustration, see this XKCD from 2009:

The Weakest Links

People are always the weakest link in any security situation. They choose to make something insecure through bad policy or by trying to evade the policy. Perhaps they are trying to do harm to the organization or even trying to shine a light on corrupt practices. Whatever the reason, people are the weak link, because you can change hardware or software to eliminate failures and bugs. You can’t reprogram people.

We’ve struggled for years to keep people out of our systems. Perimeter security and bastion hosts were designed to make sure that the bad actors stayed off our stage. Alas, as we’ve gotten more creative about stopping them we’ve started to realize that more and more of the attacks aren’t coming from outside but instead from inside.

There are whole categories of solutions dedicated to stopping internal attackers now. Data Loss Prevention (DLP) can catch data being exfiltrated by sophisticated attackers but it is more often used to prevent people from leaking sensitive data either accidentally or purposefully. There are solutions to monitor access to systems and replay logs to find out how internal systems folks were able to get privileges they shouldn’t have had.

To me, this is the crux of the issue. As much as we want to create policies that prevent people from exceeding their authority we seem to have a hard time putting them into practice. For every well-meaning solution or rule that is designed to prevent someone from gaining access to something or keep them secure you will have someone making a shortcut around it. I’ve done it myself so I know it’s pretty common. For every rule that’s supposed to keep me safe I have an example of a way I went around it because it got in my way.

Who Do YOU Trust?

This is one of the reasons why a Zero Trust Network Architecture (ZTNA) appeals to me. At its core it makes a very basic assumption that I learned from the X-Files: Trust No One. If you can’t verify who you are you can’t gain access. And you only gain access to the bare minimum you need and have no capability for moving laterally.

We have systems that operate like this today. Call your doctor and ask them a question. They will first need to verify that you are who you say you are with information like your birthdate or some other key piece of Personally Identifiable Information (PII). Why? Because if they make a recommendation for a medication and you’re not the person they think they’re talking to you can create a big problem.

Computer systems should work the same way, shouldn’t they? We should need to verify who we are before we can access important data or change a configuration setting. Yet we constantly see blank passwords or people logging in to a server as a super user to “just make it easier”. And when someone gains access to that system through clever use of a wrench, as above, you should be able to see the potential for disaster.

ZTNA just says you can’t do that. Period. If the policy says no super user logins from remote terminals, it means it. If the policy says no sensitive data access from public networks then that is the law. And no amount of work to short circuit the system is going to work.

This is where I think the value of ZTNA is really going to help modern enterprises. It’s not the nefarious actor that is looking to sell their customer lists that creates security issues. It does happen but not nearly as often as an executive that wants a special exception for the proxy server because one of their things doesn’t work properly. Or maybe it’s a developer that created a connection from a development server into production because it was easier to copy data back and forth that way. Whatever the reasons the inadvertent security issues cause chaos because they are configured and forgotten. At least until someone hacks you and you end up on the news.

ZTNA forces you to look at your organization and justify why things are the way they are. Think of it like a posture audit with immediate rule creation. If development servers should never talk to production units then that is the rule. If you want that to happen for some strange reason you have to explicitly configure it. And your name is attached to that configuration so you know who did it. Hopefully something like this either requires sign off from multiple teams or triggers a notification for the SOC that then comes in to figure out why policy was violated. At best it’s an accident or a training situation. At worst you may have just caught a bad actor before they step into the limelight.
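The "your name is attached to that configuration" idea can be sketched as an attributed-exception log. This is a hypothetical model, not any vendor's API: every deliberate deviation from default policy records who made it and raises a flag for the SOC to review.

```python
# Sketch: every explicit policy exception carries the name of the person
# who created it and is flagged for SOC review. Names are invented.

from datetime import datetime, timezone

def add_exception(policy, src, dst, created_by):
    """Record a deliberate deviation from default policy, with attribution."""
    entry = {
        "rule": f"allow {src} -> {dst}",
        "created_by": created_by,                       # name on the change
        "created_at": datetime.now(timezone.utc).isoformat(),
        "needs_review": True,                           # SOC follow-up trigger
    }
    policy.append(entry)
    return entry

policy = []  # baseline posture: dev never talks to prod
exc = add_exception(policy, "dev-server-12", "prod-db", "jsmith")

print(exc["created_by"])    # jsmith -- we know exactly who opened the hole
print(exc["needs_review"])  # True -- someone has to justify it
```

A real system would route the `needs_review` flag into a ticketing or SIEM workflow; the important part is that the exception can never be anonymous or silent.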

Tom’s Take

Security isn’t perfect. We can always improve. Every time we build a good lock someone can build a better way to pick it. The real end goal is to make things sufficiently secure that we don’t have to worry about them being compromised with no effort while at the same time keeping it easy enough for people to get their jobs done. ZTNA is an important step because it creates rules and puts teeth behind them to prevent easy compromise of the rules by people that are trying to err on the side of easy. If you don’t already have plans to include ZTNA in your enterprise now you really should start looking at them. I’d tell you to trust me, but not trusting me is the point.

Fast Friday Thoughts From Security Field Day

It’s a busy week for me thanks to Security Field Day but I didn’t want to leave you without some thoughts that have popped up this week from the discussions we’ve been having. Security is one of those topics that creates a lot of thought-provoking ideas and makes you seriously wonder if you’re doing it right all the time.

  • Never underestimate the value of having plumbing that connects all your systems. You may look at a solution and think to yourself “All this does is aggregate data from other sources”. Which raises the question: How do you do it now? Sure, antivirus fires alerts like a car alarm. But when you get breached and find out that those alerts caught it weeks ago you’re going to wish you had a better idea of what was going on. You need a way to send that data somewhere to be dealt with and cataloged properly. This is one of the biggest reasons why machine learning is being applied to the massive amount of data we gather in security. Having an algorithm working to find the important pieces means you don’t miss things that are important to you.
  • Not every solution is going to solve every problem you have. My dishwasher does a horrible job of washing my clothes or vacuuming my carpets. Is it the fault of the dishwasher? Or is it my issue with defining the problem? We need to scope our issues and our solutions appropriately. Just because my kitchen knives can open a package in a pinch doesn’t mean that the makers need to include package-opening features in a future release, as though I used them exclusively for that purpose. Once we start wanting the vendors to build a one-stop-shop kind of solution we’re going to create the kind of technical debt that we need to avoid. We also need to remember to scope problems so that they’re solvable. Postulating that there are corner cases with no clear answers is important for threat hunting or policy creation. Not so great when shopping through a catalog of software.
  • Every term in every industry is going to have a different definition based on who is using it. A knife to me is either a tool used on a campout or a tool used in a kitchen. Others see a knife as a tool for spreading butter or even doing surgery. It’s a matter of perspective. You need to make sure people know the perspective you’re coming from before you decide that the tool isn’t going to work properly. I try my best to put myself in the shoes of others when I’m evaluating solutions or use cases. Just because I don’t use something in a certain way doesn’t mean it can’t be used that way. And my environment is different from everyone else’s. Which means best practices are really just recommended suggestions.
  • Whatever acronym you’ve tied yourself to this week is going to change next week because there’s a new definition of what you should be doing according to some expert out there. Don’t build your practice on whatever is hot in the market. Build it on what you need to accomplish and incorporate elements of new things into what you’re doing. The story of people ripping and replacing working platforms because of an analyst suggestion sounds horrible but happens more often than we’d like to admit. Trust your people, not the brochures.
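The "plumbing" point in the first bullet can be made concrete with a tiny aggregation sketch. The alert sources and indicators here are invented; the idea is that one antivirus alert scrolling by is noise, but the same indicator surfaced by several independent tools is signal worth a human's time.

```python
# Sketch of security "plumbing": catalog raw alerts from many sources so
# repeated indicators surface instead of scrolling past unnoticed.

from collections import Counter

def catalog(alerts):
    """Aggregate raw alerts from all sources by their indicator."""
    return Counter(a["indicator"] for a in alerts)

feed = [
    {"source": "antivirus", "indicator": "beacon-to-10.0.0.66"},
    {"source": "ids",       "indicator": "beacon-to-10.0.0.66"},
    {"source": "antivirus", "indicator": "macro-dropper"},
    {"source": "edr",       "indicator": "beacon-to-10.0.0.66"},
]

counts = catalog(feed)
# Three independent tools flagged the same beacon weeks before a breach
# would otherwise be noticed.
print(counts.most_common(1))  # [('beacon-to-10.0.0.66', 3)]
```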

Tom’s Take

Security changes faster than any area that I’ve seen. Cloud is practically a glacier compared to EPP, XDR, and SOPV. I could even make up an acronym and throw it on that list and you might not even notice. You have to stay current but you also have to trust that you’re doing all you can. Breaches are going to happen no matter what you do. You have to hope you’ve done your best and that you can contain the damage. Remember that good security comes from asking the right questions instead of just plugging tools into the mix to solve issues you don’t have.

Getting Blasted by Backdoors

Open Door from http://viktoria-lyn.deviantart.com/

I wanted to take a minute to talk about a story I’ve been following that’s had some new developments this week. You may have seen an article talking about a backdoor in Juniper equipment that caused some issues. The issue at hand is complicated, and the linked article does a good job of explaining some of the nuance. Here’s the short version:

  • The NSA develops a version of Dual EC random number generation that includes a pretty substantial flaw.
  • That flaw? If you know the pseudorandom value used to start the process you can figure out the values, which means you can decrypt any traffic that uses the algorithm.
  • NIST proposes the use of Dual EC and makes it a requirement for vendors to be included on future work. Don’t support this one? You don’t get to even be considered.
  • Vendors adopt the standard per the requirement but don’t make it the default for some pretty obvious reasons.
  • Netscreen, a part of Juniper, does use Dual EC as part of their default setup.
  • The Chinese APT 5 hacking group figures out the vulnerability and breaks into Juniper to add code to Netscreen’s OS.
  • They use their own seed value, which allows them to decrypt packets being encrypted through the firewall.
  • Hilarity does not ensue and we spend the better part of a decade figuring out what has happened.
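The second bullet is the whole ballgame, and a toy generator makes it tangible. This is NOT the real Dual EC algorithm, just a trivial stand-in to show the failure mode: a deterministic generator is only as secret as its internal state, so anyone who knows the seed reproduces the "random" keystream and decrypts everything built on it.

```python
# Toy illustration of the Dual EC failure mode (not the real algorithm):
# whoever knows the seed can regenerate the keystream and decrypt traffic.

def keystream(seed, n):
    """Tiny linear congruential generator standing in for Dual EC."""
    state = seed
    out = []
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append(state & 0xFF)   # emit low byte as "random" output
    return out

def xor(data, stream):
    return bytes(b ^ s for b, s in zip(data, stream))

secret_seed = 424242   # the constant the attacker planted in the firmware
ciphertext = xor(b"NOC list", keystream(secret_seed, 8))

# The eavesdropper who planted the seed recovers the plaintext exactly:
print(xor(ciphertext, keystream(secret_seed, 8)))  # b'NOC list'
```

Real Dual EC hides the relationship behind elliptic-curve points rather than a constant, but the structure of the attack is the same: control the seed material and the "randomness" is an open book.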

That any of this even came to light is impressive considering the government agencies involved have stonewalled reporters and it took a probe from a US Senator, Ron Wyden, to get as far as we have in the investigation.

Protecting Your Platform

My readers know that I’m a pretty staunch advocate for not weakening encryption. Backdoors and “special” keys for organizations that claim they need them are a horrible idea. The safest lock is one that can’t be bypassed. The best secret is one that no one knows about. Likewise, the best encryption algorithms are ones that can’t be reversed or calculated by anyone other than the people using them to send messages.

I get that the flood of encrypted communications today is making life difficult for law enforcement agencies all over the world. It’s tempting to make it a requirement to allow them a special code that will decrypt messages to keep us safe and secure. That’s the messaging I see every time a politician wants to compel a security company to create a vulnerability in their software just for them. It’s all about being safe.

Once you create that special key you’ve already lost. As we saw above, the intentions of creating a backdoor into an OS so that we could spy on other people using it backfired spectacularly. Once someone else figured out that you could guess the values and decrypt the traffic they set about doing it for themselves. I can only imagine the surprise at the NSA when they realized that someone had changed the values in the OS and that, while they themselves were no longer able to spy with impunity, someone else could be decrypting their communications at that very moment. If you make a key for a lock someone will figure out how to make a copy. It’s that simple.

We focus so much on the responsible use of these backdoors that we miss the bigger picture. Sure, maybe we can make it extremely difficult for someone in law enforcement to get the information needed to access the backdoor in the name of national security. But what about other nations? What about actors not tied to a political process or bound by oversight from the populace. I’m more scared that someone that actively wishes to do me harm could find a way to exploit something that I was told had to be there for my own safety.

The Juniper story gets worse the more we read into it, but they were only the unlucky dancer left standing when the music stopped. Any one of the other companies that were compelled to include Dual EC by government order could have gotten the short straw here. It’s one thing to create a known-bad version of software and hope that someone installs it. It’s an entirely different matter to force people to include it. I’m honestly shocked the government didn’t try to mandate that it must be used exclusively of other algorithms. In some other timeline Cisco or Palo Alto or even Fortinet are having very bad days unwinding what happened.

Tom’s Take

The easiest way to avoid having your software exploited is not to create your own exploit for it. Bugs happen. Strange things occur in development. Even the most powerful algorithms must eventually yield to Moore’s Law or Shor’s Algorithm. Why accelerate the process by cutting a master key? Why weaken yourself on purpose by repeating over and over again that this is “for the greater good”? Remember that the greater good may not include people that want the best for you. If you’re willing to hand them a key to unlock the chaos that we’re seeing in this case then you have overestimated your value to the process and become the very bad actor you hoped to stop.

Pegasus Pisses Me Off


In this week’s episode of the Gestalt IT Rundown, I jumped on my soapbox a bit regarding the latest Pegasus exploit. If you’re not familiar with Pegasus you should catch up with the latest news.

Pegasus is a toolkit designed by NSO Group from Israel. It’s designed for counterterrorism investigations. It’s essentially a piece of malware that can be dropped on a mobile phone through a series of unpatched exploits that allows you to create records of text messages, photos, and phone calls and send them to a location for analysis. On the surface it sounds like a tool that could be used to covertly gather intelligence on someone of interest and ensure that they’re known to law enforcement agencies so they can be stopped in the event of some kind of criminal activity.

Letting the Horses Out

If that’s where Pegasus stopped, I’d probably not care one way or the other. A tool used by law enforcement to figure out how to stop things that are tough to defend against. But because you’re reading this post you know that’s not where it stopped. Pegasus wasn’t merely a tool developed by intelligence agencies for targeted use. If I had to guess, I’d say the groundwork for it was laid when the creators did work in some intelligence capacity. Where things went off the rails was when they no longer did.

I’m sure that all of the development work on the tool that was done for the government they worked for stayed there. However, things like Pegasus evolve all the time. Exploits get patches. Avenues of installation get closed. And some smart targets figure out how to avoid getting caught or even how to detect that they’ve been compromised. That means that work has to continue for this to be effective in the future. And if the government isn’t paying for it, who is?

If you guessed interested parties you’d be right! Pegasus is for sale for anyone that wants to buy it. I’m sure there are cursory checks done to ensure that people that aren’t supposed to be using it can’t buy it. But I also know that in those cases a few extra zeros at the end of a wire transfer can work wonders to alleviate those concerns. Whether or not it was supposed to be sold to everyone or just a select group of people, it got out.

Here’s where my hackles get raised a bit. The best way to prevent a tool like this from escaping is to never have created it in the first place. Just like a biological or nuclear weapon, the only way to be sure it can never be used is to never have it. Weapons are a temptation. Bombs were built to be dropped. Pegasus was built to be installed somewhere. Sure, the original intentions were pure. This tool was designed to save lives. What happens when the intentions aren’t so pure? What happens when your enemies aren’t terrorists but politicians with different views? You might scoff at the suggestion of using a counterterrorism tool to spy on your ideological opponents, but look around the world today and ask yourself if your opponents are so inclined.

Once Pegasus was more widely available I’m sure it became a very tempting way to eavesdrop on people you wanted to know more about. A journalist getting leaks from someone in your government? Just drop Pegasus on that phone and find out who it is. Annoying activist making the media hate you? Text him the Pegasus installer and dump his phone looking for incriminating evidence to shut him up. Suspect your girlfriend of being unfaithful? Pegasus can tell you for sure! See how quickly we went from “necessary evil to protect the people” to “petty personal reasons”?

The danger of the slippery slope is that once you’re on it you can’t stop. Pegasus may have saved some lives but it has undoubtedly cost many others too. It has been detected as far back as 2014. That means every source that has been compromised or every journalist killed doing their work could have been found out thanks to this tool. That’s an awful lot of unknowns to carry on your shoulders. I’m sure that NSO Group will protest and say that they never knowingly sold it to someone that used it for less-than-honorable purposes. Can they say for sure that their clients never shared it? Or that it was never stolen and used by the very people that it was designed to be deployed against?

Closing the Barn Door

The escalation of digital espionage is only going to increase. In the US we already have political leaders calling on manufacturers and developers to create special backdoors for law enforcement to use to detect criminals and arrest them as needed. This is along the same lines as Pegasus, just formalized and legislated. It’s a terrible idea. If the backdoor is created it will be misused. Count on that. Even if the people that developed it never intended to use it improperly someone without the same moral fortitude will eventually. Oppenheimer and Einstein may have regretted the development of nuclear weapons but you can believe that by 1983 the powers that held onto them weren’t so opposed to using them if the need should arise.

I’m also not so naive as to believe for an instant that the governments of the world are just going to agree to play nice and not develop these tools any longer. They represent a competitive advantage over their opponents and that’s not something they’re going to give up easily. The only thing holding them back is oversight and accountability to the people they protect.

What about commercial entities though? If governments are restrained by the people then businesses are only restrained by their stakeholders and shareholders. And those people only seem to care about making money. So if the best tool to do the thing appears and it can make them a fortune, would they forgo their profits to take a stand against categorically evil behavior? Can you say for certain that would always be the case?

Tom’s Take

Governments may not ever stop making these weapons but perhaps it’s time for the private sector to stop. The best way to keep the barn doors closed so the horses can’t get out is not to build doors in the first place. If you build a tool like Pegasus it will get out. If you sell it, even to the most elite clientele, someone you don’t want to have it will end up with it. It sounds like a pretty optimistic viewpoint for sure. So maybe the other solution is to have them install their tool on their own devices and send the keys to a random person. That way they will know they are being watched and that whoever is watching them can decide when and where to expose the things they don’t want known. And if that doesn’t scare them into no longer developing tools like this then nothing will.

Building Backdoors and Fixing Malfeasance

You might have seen the recent news this week that there is an exploitable backdoor in Zyxel hardware that has been discovered and is being exploited. The backdoor admin account with the clever name ‘zyfwp’ is not something that has been present in the devices forever. The account was put in during firmware version 4.60, which was released in Q4 2020.

Zyxel is rushing to patch the devices and remove the backdoor account. Users are being advised to disable remote administration until the accounts can be deactivated and proven to be removed. However, the bigger question in my mind relates to the addition of the user account in the first place. Why would you knowingly install a backdoor?

Hello, Joshua

Backdoors are nothing new in the computer world. I’d argue the most famous backdoor account in the history of computer hacking belongs to Joshua, the dormant login for the War Operations Programmed Response (WOPR) computer system in the 1983 movie WarGames. Joshua was an old login for the creator to access the system outside of the military chain of command. When the developer was removed from the project the account was forgotten about until a kid discovered it and kicked off the plot of the movie.

Joshua tells us a lot about developers and their desire to have access to the system. I’ll admit I’ve been in the same boat before. I’ve created my own logins to systems with elevated access to get tasks accomplished. I’ve also notified the users and administrators of those systems about my account and let them deal with it as needed. Most were okay with it being there. Some were hesitant and required it to be disabled after my work was done. Either way, I was up front about what was going on.

Joshua and zyfwp are examples of what happens when those systems are installed outside of the knowledge of the operators. What would have happened if the team in the Netherlands hadn’t found the account? What if Zyxel devices were getting hacked and networks breached without anyone knowing the vector? I’m sure the account showed up in all the admin dashboards, right?
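A check like the one the operators never got to run is almost embarrassingly simple: diff the accounts that actually exist on a device against the documented account list. The account names below come from the story above; the rest is an invented sketch, since real gear would expose accounts through its own CLI or API.

```python
# Sketch of a backdoor-account audit: flag any login present on the box
# that is absent from the documentation. The device inventory here is
# hypothetical; a real audit would pull accounts via the vendor's API.

def undocumented_accounts(on_device, documented):
    """Return accounts that exist on the device but aren't documented."""
    return sorted(set(on_device) - set(documented))

documented = {"admin", "monitoring"}
on_device = {"admin", "monitoring", "zyfwp"}   # the hidden backdoor login

print(undocumented_accounts(on_device, documented))  # ['zyfwp']
```

The hard part was never the comparison; it was that the operators had no way to know the hidden account existed to compare against.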

Easter Egg Hunts

Do you remember the Windows 3.1 Bear? It was a hidden reference in the credits to the development team’s mascot. You had to jump through a hoop to find it by holding down a keystroke combination and clicking a specific square in the Windows logo. People loved finding those little nuggets in the software all the way up to Windows 98.

What changed? Turns out, as part of Microsoft’s Trustworthy Computing Initiative in 2002, they removed all undocumented features and code that could cause these kinds of things. It also might have had something to do with the antitrust investigations into Microsoft in the 1990s and how undocumented features in Windows and Office might have given the company a competitive advantage. Whatever the reason, Microsoft has committed to removing undocumented code.

Easter eggs are fun to find but represent the bright side of the dark issue above. What happens when the easter egg in question isn’t a credit roll but an undocumented account? What if the keystroke doesn’t bring up a teddy bear but instead gives the current user account full admin access? You scoff at the possibility but there’s nothing stopping a developer from making that happen.

These issues are part of the reason why all code and features need to be documented. We need to know what’s going on in the program and how it could impact us. This means no backdoors. If there is a way to access the system aside from the controls built in already, it needs to be known and capable of being disabled if necessary. If it can’t be disabled then the users need to be aware of that fact and make the choice to not use the software because of security issues.

If you’re following along closely, you should have picked up on the fact that this same logic applies to backdoors that have been mandated by the government too. The current slate of US Senators seem to believe that we need to create keys that allow end-to-end encryption to be weakened and readable by law enforcement. However, as stated by companies like Apple for years, if you create a key for a lock that should only ever be opened under special circumstances you have still created a weakness that can be unlocked. We’ve seen the tools used by intelligence agencies stolen and used to create malware unlike anything we’ve ever seen before. What do you think might happen if they get the backdoor keys to go through encrypted messaging systems?

Tom’s Take

I don’t run Zyxel equipment in my home or anywhere I used to work. But if I did there would be a pile of it in the dumpster after this mess. Having a backdoor is one thing. Purposely making one is another. And having that backdoor discovered and exploited by the Internet is an entirely different conversation. The only way to be sure that you’ve fixed your backdoor problem is to not have one in the first place. Joshua and zyfwp are what we need to get away from, not what we need to work toward. Malfeasance only stops when you don’t do it in the first place.

Securing Your Work From Home


Wanna make your security team’s blood run cold? Remind them that all that time and effort they put in to securing the enterprise from attackers and data exfiltration is currently sitting unused while we all work from home. You might have even heard them screaming at the sky just now.

Enterprise security isn’t easy, nor should it be. We constantly have to be on the offensive to find new attack vectors and hunt down threats and exploits. We have spent years and careers building defense-in-depth into an art form not unlike making buttery croissants. It’s all great when that apparatus is protecting our enterprise data center and cloud presence like a Scottish castle repelling invaders. Right now we’re in the wilderness with nothing but a tired sentry to protect us from the marauders.

During Security Field Day 4, I led a discussion panel with the delegates about the challenges of working from home securely. Here’s a link to our discussion that I wanted to spend some time elaborating on:

Home Is Where the Exploits Are

BYOD was a huge watershed moment for the enterprise because we realized for the first time that we had to learn to secure other people’s devices. We couldn’t rely on locked-down laptops and company-issued phones to keep us safe. Security exploded when we no longer had control of the devices we were trying to protect. We all learned hard lessons about segmenting networks and stopping lateral attacks from potentially compromised machines. It’s all for naught now because we’re staring at those protections gathering dust in an empty office. With the way that commercial real estate agents are pronouncing a downturn in their market, we may not see them again soon.

Now, we have to figure out how to protect devices we don’t own on networks we don’t control. For all the talk of VPNs for company devices and SD-WAN devices at the edge to set up on-demand protection, we’re still in the dark when it comes to the environment around our corporate assets. Sure, the company Thinkpad is safe and sound and isolated at the CEO’s house. But what about his wife’s laptop? Or the kids and their Android tablets? Or even the smart speakers and home IoT devices around it? How can we be sure those are all safe?

Worse still, how do you convince the executives of a company that their networks aren’t up to par? How can you tell someone that controls your livelihood they need to install more firewalls or segment their network for security? If the PlayStation suddenly needs to authenticate to the wireless and is firewalled away from the TV to play movies over AirPlay, you’re going to get a lot of panicked phone calls.

Security As A Starting Point

If we’re going to make Build Your Own Office (BYOO) security work for our enterprise employees, we need to reset our goals. Are we really trying to keep everyone 100% safe and secure 100% of the time? Are we trying for total control over all assets? Or is there a level of insecurity we are willing to accept to make things work more smoothly?

On-demand VPNs are a good example. It’s fine to require them to access company resources behind a firewall in the enterprise data center. But does it need to be enabled to access things in the public cloud? Should the employee have to have it enabled if they decide to work on the report at 8:00pm when they haven’t ever needed it on before? These challenges are more about policy than technology.

Policy is the heart of how we need to rebuild BYOO security. We need to examine which policies are in place now and determine if they make sense for people that may never come back into the office. Don’t want to use the VPN for connectivity? Fine. However, you will need to enable two-factor authentication (2FA) on your accounts and use a software token on your phone to access our systems. Don’t want to install the apps on your laptop to access cloud resources? We’re going to lock you out until we’ve evaluated everything remotely for security purposes.
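To make the idea concrete, here is a minimal sketch of what a policy evaluation like the one described above might look like in code. All the names, labels, and rules here are hypothetical illustrations, not any particular vendor's implementation: the point is that "VPN required for data center resources, 2FA required everywhere" becomes an explicit, testable decision function rather than tribal knowledge.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """A single access attempt against a corporate resource."""
    user: str
    has_2fa: bool            # did the user present a second factor?
    on_vpn: bool             # is the session coming over the VPN?
    resource_location: str   # hypothetical labels: "datacenter" or "cloud"

def evaluate(req: AccessRequest) -> bool:
    """Return True if the request satisfies policy, False otherwise."""
    # Resources behind the data center firewall always require the VPN.
    if req.resource_location == "datacenter" and not req.on_vpn:
        return False
    # Everything, on the VPN or off it, requires a second factor.
    return req.has_2fa
```

Writing policy this way also makes the audit discussed below easier: each rule is a line of code you can point at and justify, or delete.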

Policy has an important role to play. It is the reason for our technology and the driver for our work. Policy is why I have 2FA enabled on all my corporate accounts. Policy is why I don’t have superuser rights to certain devices but instead authenticate changes as needed with suitable logging. Policy is why I can’t log in to a corporate email server from a vacation home in the middle of nowhere because I’m not using a secured connection. It’s all relevant to the way we do business.

Pushing Policy Buttons

You, as a security professional, need to spend the rest of 2020 doing policy audits. You’re going to get cross-eyed. You’re going to hate it. So will anyone you contact about it. Trust me, they hate it just like you do. But you have to make it happen. You have to justify why you’re doing things the way you’re doing them. “This is how we’ve always done it” is no longer justification for a policy. We’re still trying to pull through a global pandemic that has cost thousands their jobs and displaced thousands more to a home they never thought was going to support actual work. Now is not the time to get squeamish.

It’s time to scrub your policies down to the baseboards and get to cleaning and building them back up. Figure out what you need and what is required. Implement changes you’ve always needed to make, like software updates or applications that enhance security. If you want to make it stick in this new world of working from home you need to put it in place at the deepest levels now. And it needs to stick for everyone. No executive workarounds. No grace extensions to keep their favorite insecure apps, and no waivers that let them skip 2FA on anything. They need to lead by example from the front, not lag in the back being insecure.

Tom’s Take

I loved the talk at Security Field Day about security at home. We weighed a lot of things that people aren’t really thinking about right now because we haven’t had a major breach in “at home” security. Yet. We know it’s coming and if it happens the current state of network segmentation isn’t going to be kind to whoever is under the gun. Definitely watch the video above and tell me your thoughts, either on the video comments or here. We can keep things safe and secure no matter where we are. We just need to think about what that looks like at the lowest policy level and build up from there.

Security and Salt

One of the things I picked up during the quarantine is a new-found interest in cooking. I’ve been spending more time researching recipes and trying to understand how my previous efforts to be a four-star chef have fallen flat. Thankfully, practice does indeed make perfect. I’m slowly getting better, which is to say that my family will actually eat my cooking now instead of just deciding that pizza for the fourth night in a row is a good choice.

One of the things I learned as I went on was about salt. Sodium chloride is a magical substance. Someone once told me that if you taste a dish and you know it needs something but you’re not quite sure what that something is, the answer is probably salt. It does a lot to tie flavors together. But it’s also a fickle substance. It has the power to make or break a dish in very small amounts. It can be the difference between perfection and disaster. As it turns out, it’s a lot like security too.

Too Much is Exactly Enough

Security and salt are alike in the first way because you need the right amount to make things work. You have to have a minimum amount of both to make something viable. If you don’t have enough salt in your dish you won’t be able to taste it. But you also won’t be able to pull the flavors in the dish together with it. So you have to work with a minimum. Whether it’s a dash of salt or a specific minimum security threshold, you have to have enough to matter otherwise it’s the same as not having it at all.

To The Salt Mines

Likewise, the opposite effect is also detrimental. If you need to have the minimum amount to be effective, the maximum amount of both salt and security is bad. We all know what happens when we put too much salt into a dish. You can’t eat it at all. While there are tricks to getting too much salt out of a dish they change the overall flavor profile of whatever you’re making. Even just a little too much salt is very apparent depending on the dish you’re trying to make. Likewise, too much security is a deterrent to getting actual work done. Restrictive controls get in the way of productivity and ultimately lead to people trying to work out solutions that don’t solve the problem but instead try to bypass the control.

Now you may be saying to yourself, “So, the secret is to add just the right amount of security, right?” And you would be correct. But what is the right amount? Well, it’s not unlike trying to measure salt by sight instead of using a measuring device. Have you ever seen a chef or TV host pour an amount of salt into their hands and say it needs “about that much”? Do you know how they know how much salt to add? It’s not rocket science. Instead, it’s the tried-and-true practice of practice. They know about how much salt a dish needs for a given cooking time or flavor profile. They may have even made the dish a few times in order to understand when it might need more or less salt. They know that starches need more salt and delicate foods need less. Most importantly, they measured how much salt they can hold in their cupped hand. So they know what a teaspoon and tablespoon of salt look like in their palm.

How is this like security? Most Infosec professionals know inherently how to make things more secure. Their experience and their training tell them how much security to add to a system to make it more secure without putting too much in place to impede the operations of the system. They know where to put an IPS to provide maximum coverage without creating too many false positives. And they can do that because they have the experience to know how to do it right without guessing. Because the odds are good they’ve done it wrong at least one time.

The last salty thing to remember is that even when you have the right amounts down to a science you’re still going to need to figure out how to make it perfect. Potato soup is a notoriously hard dish to season properly. As mentioned above, starchy foods tend to soak up salt. You can fix a salty dish by putting a piece of a potato in it to soak up the salt. But it also means that it’s super hard to get it right when everything in your dish soaks up salt. But the best chefs can get it right. Because they know where to start and they know to test the dish before they do more. They know they need to start from a safe setup and push out from there without ruining everything. They know that no exact amount is the same between two dishes and the only way to make sure it’s right is to test until you get it right. Then make notes so you know how to make it better the next time.

Tom’s Take

Salt is one of my downfalls. I tend to like things salty, so I put too much in when I taste things. It’s never too salty for me unless my mouth shrinks up like a desiccated dish. That’s why I also have to rely on my team at home to help me understand when something is just right for them so I don’t burn out their taste buds either. Security is the same. You need a team that understands everything from their own perspective so they can help you get it right all over. You can’t take salt out of a dish without a massive crutch. And you can’t really reduce too much security without causing issues like budget overruns or costly meetings to decide what to remove. It’s better to get your salt and your security right in the first place.

Eventually Secure?

I have a Disney+ account. I have kids and I like Star Wars, so it made sense. I got it all set up the day it came out and started binge watching the Mandalorian. However, in my haste to get things up and running I reused an old password instead of practicing good hygiene. As the titular character might scold me, “This is not the way.” I didn’t think anything about it until I got a notification that someone from New Jersey logged into my account.

I panicked and reset my password like a good security person should have done in the first place. I waited for the usual complaints that people had been logged out of the app and prepared to log everyone in again and figure out how to remove my New Jersey interloper. Imagine my surprise when no one came to ask me to turn Phineas and Ferb back on. Imagine my further surprise when I looked in the app and on the Disney+ website and couldn’t find a way to see which devices were logged in to this account. Nor could I find a way to disconnect a rogue device as I could with Netflix or Hulu.

I later found out that this functionality exists but you have to call the Disney+ support team to make it happen. I also have no doubts that the functionality will eventually come to the app as more and more people are sharing account information so they can binge watch Clone Wars. However, this eventual security planning has me a bit concerned. And that concern extends beyond Mice and Mandalorians.
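The device management features missing from the Disney+ app are, mechanically, not complicated. As a rough illustration (this is a minimal sketch with hypothetical names, not how any streaming service actually implements it), a service only needs to track active sessions per account, expose the list to the user, allow revoking a single session, and invalidate everything on a password change:

```python
import secrets

class SessionStore:
    """Track active sessions per account so rogue devices can be revoked."""

    def __init__(self):
        # account -> {session token: human-readable device name}
        self._sessions = {}

    def login(self, account: str, device: str) -> str:
        """Register a new device session and return its token."""
        token = secrets.token_hex(16)
        self._sessions.setdefault(account, {})[token] = device
        return token

    def devices(self, account: str) -> list:
        """What the 'manage devices' screen would show the user."""
        return list(self._sessions.get(account, {}).values())

    def revoke(self, account: str, token: str) -> None:
        """Kick one rogue device without disturbing the others."""
        self._sessions.get(account, {}).pop(token, None)

    def password_changed(self, account: str) -> None:
        """Force logoff everywhere after a password reset."""
        self._sessions.pop(account, None)
</antml;>```

With something like this in place, resetting a password actually evicts the interloper from New Jersey instead of leaving their session alive.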

Minimum Secure Product

If you’re figuring out how to secure your newest application or a new building or even just a new user, you first have to figure out what “secure” looks like. If you have trouble figuring that out, all you need to do is look at your closest competitor. They will usually have a good baseline of the security and accessibility features you should have.

Maybe it’s basic device and user controls like the Disney+ example above. Maybe it’s encryption of your traffic end-to-end, as Zoom learned a couple of weeks ago. Or maybe it’s something as simple as ensuring that you don’t have a hard-coded backdoor password for SSH, like Fortinet remembered earlier this year. The real point is that you can survey the landscape and figure out what you need to do to make your product or app meet a minimum standard.

On the extremely off-chance that you’re developing something new and unique and never-before-seen in the world, you have a different problem. For one, you need to chill on the marketing. Maybe you’re using something in a novel and different way. But unless you’ve developed psychic powers or anti-gravity boosters or maybe teleportation you haven’t come up with anything completely unique. Secondly, you still have some references to draw on. You can look for similar things and use similar security controls.

If your teleport requires a login by a qualified person to operate you should look at login security for other industries that are similar to determine what is appropriate. Maybe it’s like a medical facility where you have two-factor authentication (2FA) with smart cards or tokens as well as passwords or biometrics. Maybe it’s a lockout system with two operators required to engage the mechanism so someone’s arm doesn’t actually get teleported away without the rest of them. Even if your teleport produces massive amounts of logs you should keep them lest someone show up on the other pad with a different color hair than when they left. Those logs may be different from anything ever seen before, but even Airbus knows how to store the flight data from every A380 flight.
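The two-operator lockout mentioned above is a real pattern (often called the two-person rule) and it is simple to express in code. Here is a minimal sketch under the assumption of a single-shot interlock; the class and operator names are hypothetical:

```python
class TwoOperatorInterlock:
    """Two-person rule: require two distinct authorized operators to arm
    the mechanism before it can engage."""

    def __init__(self, authorized):
        self.authorized = set(authorized)
        self.armed_by = set()

    def arm(self, operator: str) -> None:
        # Unauthorized operators are silently ignored; arming twice
        # by the same person still only counts once.
        if operator in self.authorized:
            self.armed_by.add(operator)

    def engage(self) -> bool:
        # Only fires when two different authorized operators have armed it.
        return len(self.armed_by) >= 2
```

The design choice that matters is that `armed_by` is a set: one operator repeating the action can never satisfy the rule, which is the entire point of requiring two people.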

Security isn’t a hard problem. It’s a series of challenges that must be overcome. All of them are rooted in common sense and discovery. Sure, you may not know all the problems right now. But you know what they look like in general and you also know what the outcome should look like. Common sense comes into play when you start thinking like a bad actor. If I were able to get into this app, what would I want to do? Maybe I want to sign up for the all-inclusive package without a confirmation ever reaching the account holder. So put a control in place that requires confirming that change. It reduces the likelihood that someone is going to sign up for something without realizing what they’ve done. And the side effect is that you also have happier customers, because they were stopped from doing something they may not have wanted to do. Your security controls served a double purpose.

Tom’s Take

Ultimately, security should be about preventing bad or unwanted outcomes. Theft, destruction, and impersonation are all undesired outcomes of something. If your platform doesn’t protect against those you are not secure. If your process requires intervention to make those outcomes happen you’re not there yet. Disney+ could have launched with device reports and the ability to force logoff after password change. But the developers were focused on other things. It’s time for developers to learn how to examine what the minimum requirements are to be secure and ensure they’re included in the process along the way. We shouldn’t have to hope that we might one day become eventually secure.

Denial of Services as a Service

Hacking isn’t new. If you follow the 2600 Magazine culture or know the names Mitnick or Draper, you know that hacking has been a part of systems as long as there have been systems. What has changed in recent years is the malicious aspect of what’s going on in the acts themselves. The pioneers of hacking culture were focused on short term gains or personal exploitation. It was more about proving you could break into a system and getting the side benefit of free phone calls or an untraceable mobile device. Today’s hacking cultures are driven by massive amounts of theft and exploitation of resources to a degree that would make any traditional hacker blush.

It’s much like the difference between petty street crime and “organized” crime. With a patron and a purpose, the organizers of the individual members can coordinate to accomplish a bigger goal than was ever thought possible by the person on the street. Just like a wolf pack or jackals, you can take down a much bigger target with some coordination. I talked a little bit about how the targets were going to start changing almost seven years ago and how we needed to start figuring out how to protect soft targets like critical infrastructure. What I didn’t count on was how effectively people would create systems that can cripple us without total destruction.

Deny, Deny, Deny

During RSA Conference this year, I had a chance to speak briefly with Tom Kellerman of Carbon Black. He’s a great guy and I loved the chance to chat with him about some of the crazy stuff that Carbon Black has been seeing in the wild. He gave me a peek at their 2020 Cybersecurity Report and some of the new findings they’ve been seeing. A couple of things jumped out at me during our discussion though.

The first is that the bad actors that have started pushing attacks toward critical infrastructure have realized that denying that infrastructure to users is just as good as destroying it. Why should I take the time to craft a beautiful and elegant piece of malware like Stuxnet to deny a key piece of SCADA systems when I can just use a cryptolocker to infect all the control boxes for a tenth of the cost? And, if the target does pay up to get things unlocked, just leave them there in a state of shutdown!

A recent episode of the Risky Business podcast highlights this to great effect. A natural gas processing plant system was infected and needed to be cleaned. However, when gas is flowing through the pipelines you can’t just shut off one site to have it cleaned. You have to do a full system shutdown! That meant knocking the entire facility offline for two days to restore one site’s systems. That’s just the tip of the iceberg.

Imagine if you could manage to shut down a hospital like the accidental spanning tree meltdown at Beth Israel Deaconess Medical Center in 2002. Now, imagine a cryptolocker or a wiper that could shut down all the hospitals in California during a virus outbreak. Or maybe one that could infect and wipe out the control systems for all the dams providing power for the Tennessee Valley Authority. Getting worried yet? Because the kinds of people that are targeting these installations don’t care about $5,000 worth of Bitcoin to unlock stuff. They care about causing damage. They want stuff knocked offline. Or someone that organizes them does. And the end goal is the same: chaos. It doesn’t matter if the system is out because of the malware or down for days or weeks to clean it. The people looking to benefit from the chaos win no matter what.

Money, Money, Money

The biggest key to this kind of attack is the same as it always has been. If you want to know where the problems are coming from, follow the money. In the past, it was following the money to the people that are getting paid to do the attacks. Today, it’s more about following the money to the people that make the money from these kinds of attacks. It’s not enough to get Bitcoin or some other amount of peanuts in an untraceable wallet. If you can do something that manipulates global futures markets or causes fluctuations in commodity prices on the order of hundreds of thousands or even millions of dollars you suddenly don’t care about whether or not some company’s insurance is going to pay out to unlock their HR files.

Think about it in the most simple terms. If I could pay someone to shut down half the refineries in the US for a month to spike oil prices for my own ends would that be worth paying a few thousand dollars to a hacking team to pull off? Better yet, if that same hacking team was now under my “protection” from retaliation from the target, do you think they’d continue to work for me to ensure that they couldn’t be caught in the future? Sure, go ahead and freelance when you want. Just don’t attack my targets and be on-call when I need something big done. It’s not unlike all those crazy spy movies where a government agency keeps a Rolodex full of assassins on tap just in case.

Tom’s Take

The thought of what would happen in a modern unrestricted war scares me. Even 4-5 years from now would be a massive problem. We don’t secure things from the kind of determined attackers that can cause mass chaos. Let’s just shut down all the autonomous cars or connected vehicles in NY and CA. Let’s crash all the hospital MRI machines or shut down all the power plants in the US for a day or four. That’s the kind of coordination that can really upset the balance of power in a conflict. And we’re already starting to see that kind of impact with freelance groups. You don’t need a war to deny access to a service. Sometimes you just need to hire it out to the right people for the right price.