Friction as a Network Security Concept

I recently had the opportunity to record a podcast with Curtis Preston about security, data protection, and networking. I loved being a guest, and we covered quite a bit in the episode about how networking operates and how to address ransomware issues when they arise. I wanted to expand on some of those concepts here to flesh out the advice I gave.

Compromise is Inevitable

If there’s one thing I could say that would make everything make sense it’s this: you will be compromised. It’s not a question of if. You will have your data stolen or encrypted at some point. The question is really more about how much gets taken or how effectively attackers are able to penetrate your defenses before they get caught.

Defenses are designed to keep people out. But they also need to be designed to contain damage. Think about a ship on the ocean. Those giant bulkheads aren’t just there for looks. They’re designed to act as compartments that seal off areas in case of catastrophic damage. The ship doesn’t assume that it’s never going to have a leak. Instead, the designers built it so that when a leak does happen you can contain the damage and keep the ship afloat. Without those containment systems even the smallest problem can bring the whole ship down.

Likewise, you need to design your network to be able to contain areas that could be impacted. One giant flat network is a disaster waiting to happen. A network with a DMZ for public servers is a step in the right direction. However, you need to take it further than that. You need to isolate critical hosts. You need to put devices on separate networks if they have no need to directly talk to each other. You need to ensure management interfaces are in a separate, air-gapped network that has strict access controls. It may sound like a lot of work but the reality is that failure to provide isolation will lead to disaster. Just like a leak on the ocean.
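To make the bulkhead idea concrete, here’s a minimal Python sketch that treats segmentation as data: a default-deny matrix of which zones may talk to which. The zone names and flows are invented for illustration, not taken from any real design.

```python
# A minimal sketch of segmentation as data: which zones may talk to which.
# Zone names and flows are illustrative, not from any real network.
ALLOWED_FLOWS = {
    ("user", "dmz"),       # users may reach public-facing servers
    ("dmz", "app"),        # DMZ servers may reach the app tier
    ("app", "database"),   # only the app tier may reach the database
}

def is_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# The management zone never appears as a destination, so every request
# into it is denied by default -- that's the bulkhead doing its job.
assert not is_permitted("user", "management")
assert is_permitted("app", "database")
```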

The key here is that the controls you put in place create friction with your attackers. That’s the entire purpose of defense in depth. The harder it is for attackers to get through your defenses the more likely they are to give up earlier or trigger alarms designed to warn you when it happens. This kind of friction is what you want to see. However, it’s not the only kind of friction you face.

Failing Through Friction

Your enemy in this process isn’t nefarious actors. It’s not technology. Instead, it’s the bad kind of friction. Security by its very nature creates friction with systems. Networks are designed to transmit data. Security controls are designed to prevent the transmission of data. The bad friction comes when those two goals collide. Did you open the right ports? Are the access control lists denying a protocol that should be working? Did you allow the right VLANs on the trunk port?

Friction between controls is maddening but it’s a solvable problem with time. The real source of costly friction comes when you add people into the mix. Systems don’t complain about access times. They don’t call you about error messages. And, worst of all, they don’t have the authority to make you compromise your security controls for the sake of ease-of-use.

Everyone in IT has been asked at some point to remove a control or piece of software for the sake of users. In organizations where the controls are strict or regulatory issues are at stake these requests are usually disregarded. However, when the executives are particularly insistent or the IT environment is more carefree you can find yourself putting in a shortcut to get the CEO’s laptop connected faster or to allow their fancy new phone to connect without a captive portal. The results often seem happy and harmless. That is, until someone finds out they can get in through your compromised control and create a lot of additional friction.

How can you reduce friction? One way is to create more friction in the planning stages. Ask lots of questions about ports and protocols and access list requirements before something is implemented. Do your homework ahead of time instead of trying to figure it out on the fly. If you know that a software package needs to communicate to these four addresses on these eight ports then anything outside of that list should be suspect and be examined. Likewise, if someone can’t tell you what ports need to be opened for a package to work you should push back until they can give you that info. Better to spend time up front learning than spend more time later triaging.
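Here’s a hypothetical sketch in Python of that homework paying off: the documented addresses and ports for a package become an allowlist, and any observed flow outside it gets flagged for examination. The addresses and ports are made up for the example.

```python
# Documented requirements for one package: four addresses, eight ports,
# gathered *before* deployment. All values are hypothetical.
DOCUMENTED = {
    "10.0.10.5": {443, 8443},
    "10.0.10.6": {443, 8443},
    "10.0.20.7": {5432, 6432},
    "10.0.20.8": {5432, 6432},
}

def audit_flows(observed: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Return every observed (address, port) pair outside the documented list."""
    return [
        (addr, port)
        for addr, port in observed
        if port not in DOCUMENTED.get(addr, set())
    ]

suspects = audit_flows([("10.0.10.5", 443), ("203.0.113.9", 4444)])
print(suspects)  # [('203.0.113.9', 4444)] -- examine before allowing
```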

The other way to reduce friction in implementation is to shift the friction to policy. If the executives want you to compromise a control for the sake of their own use, make them document it. Have them write down that you have been directed to add a special configuration just for them. Keep that information stored in your DR plan and note it in your configuration repositories as well. Even a comment in the access list can help you understand why you had to do something a certain way. Often the request to document the special changes will have the executives questioning the choice. More importantly, if something does go sideways you have evidence of why the change was made. And for executives that don’t like to look like fools, the prospect of answering a reporter’s questions when something goes wrong is a great way to get these kinds of one-off policy changes stopped quickly.


Tom’s Take

Friction is the real secret of security. When properly applied it prevents problems. When it’s present in too many forms it causes frustration and eventually leads to abandonment of controls or short circuits to get around them. The key isn’t to eliminate it entirely. Instead you need to apply it properly and make sure to educate about why it exists in the first place. Some friction is important, such as verifying IDs before entering a secure facility. The more that people know about the reasons behind your implementation the less likely they are to circumvent it. That’s how you keep the bad actors out and the users happy.

Getting Tough with Cyberinsurance

I’ve been hearing a lot of claims recently about how companies are starting to rely more and more on cyberinsurance policies to cover them in the event of a breach or other form of disaster. While I’m a fan of insurance policies in general, I think the companies trying to rely on these payouts to avoid doing any real security work are in for a big surprise in the future.

Due Diligence

The first issue that I see is that companies are so worried about getting breached that they think taking out big insurance policies is the key to avoiding any big liability. Think about an organization that holds personally identifiable information (PII) and how likely it is that it would get sued in the event of a breach. The idea is that cyberinsurance would pay out for the breach and be used as a way to pay off the damages in a lawsuit.

The issue I have with this is that companies are expecting to get paid. They see cyberinsurance as a guaranteed payout instead of a last resort. In the initial days of taking out these big policies the insurers were happy to pay out because they were getting huge premiums in return. However, as soon as the flood of payouts started happening the insurers had to come back down to earth and realize they had to put safeguards in place to keep themselves from going bankrupt.

For anyone out there hoping to take out a big insurance policy and get paid when you inevitably get compromised you’re about to face a harsh reality. Gone are the days of insuring your enterprise against cyber threats without doing some form of due diligence on the setup. You’re going to have to prove you did everything you could to prevent this from happening before you get to collect. And if you’ve ever filed an insurance claim for a car or a house you know that it can take weeks for them to investigate everything to find out if there is a way for them to not pay out.

There is a very reasonable chance your policy will exclude certain conditions that could have easily been prevented. It would be potentially embarrassing for your executives to find out you are unable to collect on an insurance policy because it specifically doesn’t cover social engineering or business email compromise (BEC).

Getting Ahead of the Insurance Game

How can you prevent this from happening? What steps can you take today to make sure you’re not going to find yourself on the losing end of a security incident?

  1. Check Your Coverage – It’s boring and reads like stereo instructions but you really do need to check your insurance policy completely, especially the parts that mention things that are specifically excluded from coverage. You need to know what isn’t going to be covered in a breach and have a response for those things. You need to know how to respond in areas that are potential weak points and be ready to confirm that you didn’t end up getting attacked there.
  2. Look for Suggestions From the Insurer – I know people that will only buy cars based on safety reports from industry groups. They’d rather have something less flashy or cool in favor of a car that is going to keep them protected in the event of an accident. The insurance companies love to publish those reports because more sales of those cars mean smaller payouts on claims. Likewise, more companies that provide cyberinsurance are starting to publish lists of software that they encourage or outright require in an organization in order to have coverage or be eligible for a payout. If your company has such a list you should really get it and make sure you’ve checked the boxes. You don’t want to find yourself in a situation where one missed avenue of attack costs you the whole policy.
  3. Make Sure Your Reports Are Working – In the event that everything does go wrong you’re going to need to provide proof to people that you did all you could to prevent it. That means logs and incident reports and even more data about what went wrong and when. You don’t want to go and pull up that reporting data after the worst day of your cybersecurity life only to find the reporting system hasn’t been working properly for months. Then you’re not only behind on getting the incident dealt with but you’re also slowing down the potential recovery on the policy. The insurance company is happy for you to take as much time as you need because every day that they don’t pay you is one more day they’re making money off their investments. Don’t delay yourself any more than you need to. A minimal sketch of a reporting heartbeat check follows this list.
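Here’s one way that health check might look, as a minimal Python sketch with hypothetical log paths and a made-up silence threshold. The point is that feed health gets verified continuously instead of after the worst day of your cybersecurity life.

```python
import time
from pathlib import Path

# Hypothetical paths and threshold -- adjust to your own log pipeline.
LOG_FEEDS = [Path("/var/log/ids/alerts.json"), Path("/var/log/fw/export.log")]
MAX_SILENCE_SECONDS = 6 * 3600  # alarm if a feed is quiet for six hours

def stale_feeds() -> list[str]:
    """Report any log feed that is missing or hasn't been written recently."""
    now = time.time()
    problems = []
    for path in LOG_FEEDS:
        if not path.exists():
            problems.append(f"{path}: missing entirely")
        elif now - path.stat().st_mtime > MAX_SILENCE_SECONDS:
            problems.append(f"{path}: silent for too long")
    return problems

# Run this on a schedule and page someone on any output.
for problem in stale_feeds():
    print("REPORTING GAP:", problem)
```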

Tom’s Take

The best insurance is the kind you don’t need. That doesn’t mean you don’t get it, especially if it’s a requirement. However, even if you do have it you need to act like you don’t. The safety net you’re assuming will catch you may not be there when it comes with conditions that can pull the rug out from under you. You need to know what your potential exposure could be and what could prevent you from collecting. You need to be prepared to put new mechanisms in place to protect your enterprise and have a plan for what exactly to do when things go wrong. That should be paramount even without the policy. If you have everything ready to go you won’t need to worry about what happens when disaster strikes.

Mind the Air Gap

I recently talked to some security friends on a CloudBytes podcast recording that will be coming out in a few weeks. One of the things that came up was the idea of an air gapped system or network that represents the ultimate in security. I had a couple of thoughts that felt like a great topic for a blog post.

The Gap is Wide

I can think of a ton of classic air gapped systems that we’ve seen in the media. Think about Mission: Impossible and the system that holds the NOC list.

Makes sense, right? Totally secure unless you have Tom Cruise in your ductwork. It’s about as safe as you can make your data. It’s also about as inconvenient as you can make your data. Want to protect a file so no one can ever steal it? Make it so no one can ever access it! That works great for data that doesn’t need to be updated regularly or even analyzed at any point. It’s offline for all intents and purposes.

Know what works great as an air gapped system? Root certificate authority servers. Even Microsoft agrees. So secure that you have to dig it out of storage to ever do anything that requires root. Which means you’re never going to have to worry about it getting corrupted or stolen. And you’re going to spend hours booting it up and making sure it’s functional if you ever need to do anything with it other than watch it collect dust.

Air gapping systems feels like the last line of defense to me. This data is so important that it can never be exposed in any way to anything that might ever read it. However, the concept of a gap between systems is more appealing from a security perspective, because you can create gaps that aren’t physical in nature and accomplish much the same thing as isolating a machine in a cool vault.

Gap Insurance

By now you probably know that concepts like zero-trust network architecture are the key to isolating systems on your network. You don’t remove them from being accessed. Instead, you restrict their access levels. You make users prove they need to see the system instead of just creating a generic list. If you think of the ZTNA system as a classification method it’s easier to understand how it works. It’s not just that you have clearance to see the system or the data. You also need to prove your need to access it.

Building on these ideas of ZTNA and host isolation, you can create virtual air gaps that work without the need to physically isolate devices. I’ve seen some crazy stories about IDS sensors that have the transmit pairs of their Ethernet cables cut so they don’t inadvertently trigger a response to an intruder. That’s the kind of crazy preparedness that creates more problems than it solves.

Instead, by creating robust policies that prevent unauthorized users from accessing systems you can ensure that data can be safely kept online as well as making it available to anyone that wants to analyze it or use it in some way. That does mean that you’re going to need to spend some time building a system that doesn’t have inherent flaws.

In order to ensure your new virtual air gap is secure, you need to use policy instead of manual programming. Why? Because a well-defined policy is more restrictive than just randomly opening ports or creating access lists without context. It’s more painful at first because you have to build the right policy before you can implement it. However, a properly built policy can be automated and reused instead of just pasting lines from a text document into a console.

If you want to be sure your system is totally isolated, start with the basics. No traffic to the system. Then you build out from there. Who needs to talk to it? What user or class of users? How should they access it? What levels of security do you require? Answering each of those questions also gives you the data you need to define and maintain policy. By relying on these concepts you don’t need to spend hours hunting down user logins or specific ACL entries to lock someone out after they leave or after they’ve breached a system. You just need to remove them from the policy group or change the policy to deny them. Much easier than sweeping the rafters for super spies.
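As an illustration of those questions turning into configuration, here’s a minimal Python sketch of a default-deny access check built around groups instead of individual logins. Every name, group, and system in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    group: str    # who: a class of users, never an individual
    system: str   # what they may reach
    method: str   # how they must access it
    mfa: bool     # what level of security is required

# Start from "no traffic to the system" and add only what the answers justify.
POLICY = [
    Rule(group="dba", system="finance-db", method="bastion", mfa=True),
    Rule(group="reporting", system="finance-db", method="read-replica", mfa=True),
]

GROUP_MEMBERS = {"dba": {"alice"}, "reporting": {"bob"}}

def allowed(user: str, system: str, method: str, has_mfa: bool) -> bool:
    """Default deny: access requires a matching rule and group membership."""
    return any(
        user in GROUP_MEMBERS.get(rule.group, set())
        and rule.system == system
        and rule.method == method
        and (has_mfa or not rule.mfa)
        for rule in POLICY
    )

# Offboarding is one line: drop the user from the group, not from N ACLs.
GROUP_MEMBERS["dba"].discard("alice")
print(allowed("alice", "finance-db", "bastion", True))  # False
```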


Tom’s Take

The concept of an air gapped system works for spy movies. The execution of one needs a bit more thought. Thankfully we’ve graduated from workstations that need biometric access to a world where we can control the ways that users and groups can access things. By thinking about those systems in new ways and treating them like isolated islands that must be accessed on purpose, instead of building an overly permissive infrastructure in the first place, you’ll get ahead of a lot of old technical-debt-laden thinking and build a more secure enterprise, whether on premises or in the cloud.

Trust Will Do You In

If you’re a fan of the Gestalt IT Rundown that I do every week on the Gestalt IT YouTube channel, you have probably heard about the recent hacks of NVIDIA and Samsung. The original investigation into those hacks talked about using MDM platforms and other vectors to gain access to the information that was obtained by the hacking groups. An interesting tweet popped up on my feed yesterday that helped me reframe the attacks.

It would appear that the group behind these attacks is going after its targets the old-fashioned way: with people. For illustration, see the classic 2009 XKCD about the $5 wrench.

The Weakest Links

People are always the weakest link in any security situation. They choose to make something insecure through bad policy or by trying to evade the policy. Perhaps they are trying to do harm to the organization or even to shine a light on corrupt practices. Whatever the reason, people are the weak link. You can change hardware or software to eliminate failures and bugs. You can’t reprogram people.

We’ve struggled for years to keep people out of our systems. Perimeter security and bastion hosts were designed to make sure that the bad actors stayed off our stage. Alas, as we’ve gotten more creative about stopping them we’ve started to realize that more and more of the attacks aren’t coming from outside but instead from inside.

There are whole categories of solutions dedicated to stopping internal attackers now. Data Loss Prevention (DLP) can catch data being exfiltrated by sophisticated attackers but it is more often used to prevent people from leaking sensitive data either accidentally or purposefully. There are solutions to monitor access to systems and replay logs to find out how internal systems folks were able to get privileges they shouldn’t have.

To me, this is the crux of the issue. As much as we want to create policies that prevent people from exceeding their authority we seem to have a hard time putting them into practice. For every well-meaning solution or rule that is designed to prevent someone from gaining access to something or keep them secure you will have someone making a shortcut around it. I’ve done it myself so I know it’s pretty common. For every rule that’s supposed to keep me safe I have an example of a way I went around it because it got in my way.

Who Do YOU Trust?

This is one of the reasons why a Zero Trust Network Architecture (ZTNA) appeals to me. At its core it makes a very basic assumption that I learned from the X-Files: Trust No One. If you can’t verify who you are you can’t gain access. And you only gain access to the bare minimum you need and have no capability for moving laterally.

We have systems that operate like this today. Call your doctor and ask them a question. They will first need to verify that you are who you say you are with information like your birthdate or some other key piece of Personally Identifiable Information (PII). Why? Because if they make a recommendation for a medication and you’re not the person they think they’re talking to you can create a big problem.

Computer systems should work the same way, shouldn’t they? We should need to verify who we are before we can access important data or change a configuration setting. Yet we constantly see blank passwords or people logging in to a server as a super user to “just make it easier”. And when someone gains access to that system through clever use of a wrench, as above, you should be able to see the potential for disaster.

ZTNA just says you can’t do that. Period. If the policy says no super user logins from remote terminals, it means it. If the policy says no sensitive data access from public networks, then that is the law. And no amount of work to short circuit the system is going to change that.

This is where I think the value of ZTNA is really going to help modern enterprises. It’s not the nefarious actor that is looking to sell their customer lists that creates security issues. It does happen but not nearly as often as an executive that wants a special exception for the proxy server because one of their things doesn’t work properly. Or maybe it’s a developer that created a connection from a development server into production because it was easier to copy data back and forth that way. Whatever the reasons the inadvertent security issues cause chaos because they are configured and forgotten. At least until someone hacks you and you end up on the news.

ZTNA forces you to look at your organization and justify why things are the way they are. Think of it like a posture audit with immediate rule creation. If development servers should never talk to production units then that is the rule. If you want that to happen for some strange reason you have to explicitly configure it. And your name is attached to that configuration so you know who did it. Hopefully something like this either requires sign off from multiple teams or triggers a notification for the SOC that then comes in to figure out why policy was violated. At best it’s an accident or a training situation. At worst you may have just caught a bad actor before they step into the limelight.
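A sketch of what “explicitly configured with your name attached” might look like in Python. The fields, team names, and notification hook are all invented for illustration; the point is that an exception can’t exist without an owner, sign-off, and an alert.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PolicyException:
    src: str
    dst: str
    reason: str
    owner: str                  # the name attached to the rule
    approvers: tuple[str, ...]  # sign-off from multiple teams
    review_by: date             # exceptions expire; they are not forever

def notify_soc(message: str) -> None:
    print("[SOC ALERT]", message)  # stand-in for a real ticket or pager call

def add_exception(exc: PolicyException, policy: list) -> None:
    """Record the exception, enforce sign-off, and wake up the SOC."""
    if len(exc.approvers) < 2:
        raise ValueError("dev-to-prod exceptions need sign-off from two teams")
    policy.append(exc)
    notify_soc(f"Policy violated by design: {exc.src} -> {exc.dst}, "
               f"owner={exc.owner}, reason={exc.reason!r}")

policy: list = []
add_exception(PolicyException(
    src="dev-server-12", dst="prod-db",
    reason="one-time data copy for migration testing",
    owner="j.developer", approvers=("netops", "security"),
    review_by=date(2022, 6, 1)), policy)
```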


Tom’s Take

Security isn’t perfect. We can always improve. Every time we build a good lock someone can build a better way to pick it. The real end goal is to make things sufficiently secure that we don’t have to worry about them being compromised with no effort while at the same time keeping it easy enough for people to get their jobs done. ZTNA is an important step because it creates rules and puts teeth behind them to prevent easy compromise of the rules by people that are trying to err on the side of easy. If you don’t already have plans to include ZTNA in your enterprise now you really should start looking at them. I’d tell you to trust me, but not trusting me is the point.

Fast Friday Thoughts From Security Field Day

It’s a busy week for me thanks to Security Field Day but I didn’t want to leave you without some thoughts that have popped up this week from the discussions we’ve been having. Security is one of those topics that creates a lot of thought-provoking ideas and makes you seriously wonder if you’re doing it right all the time.

  • Never underestimate the value of having plumbing that connects all your systems. You may look at a solution and think to yourself “All this does is aggregate data from other sources”. Which raises the question: how do you do it now? Sure, antivirus fires alerts like a car alarm. But when you get breached and find out that those alerts caught it weeks ago you’re going to wish you had a better idea of what was going on. You need a way to send that data somewhere to be dealt with and cataloged properly (see the sketch after this list). This is one of the biggest reasons why machine learning is being applied to the massive amount of data we gather in security. Having an algorithm working to find the important pieces means you don’t miss things that are important to you.
  • Not every solution is going to solve every problem you have. My dishwasher does a horrible job of washing my clothes or vacuuming my carpets. Is it the fault of the dishwasher? Or is it my issue with defining the problem? We need to scope our issues and our solutions appropriately. Just because my kitchen knives can open a package in a pinch doesn’t mean that the makers need to include package-opening features in a future release because I use them exclusively for that purpose. Once we start wanting the vendors to build a one-stop-shop kind of solution we’re going to create the kind of technical debt that we need to avoid. We also need to remember to scope problems so that they’re solvable. Postulating that there are corner cases with no clear answers is important for threat hunting or policy creation. Not so great when shopping through a catalog of software.
  • Every term in every industry is going to have a different definition based on who is using it. A knife to me is either a tool used on a campout or a tool used in a kitchen. Others see a knife as a tool for spreading butter or even doing surgery. It’s a matter of perspective. You need to make sure people know the perspective you’re coming from before you decide that the tool isn’t going to work properly. I try my best to put myself in the shoes of others when I’m evaluating solutions or use cases. Just because I don’t use something in a certain way doesn’t mean it can’t be used that way. And my environment is different from everyone else’s. Which means best practices are really just recommended suggestions.
  • Whatever acronym you’ve tied yourself to this week is going to change next week because there’s a new definition of what you should be doing according to some expert out there. Don’t build your practice on whatever is hot in the market. Build it on what you need to accomplish and incorporate elements of new things into what you’re doing. The story of people ripping and replacing working platforms because of an analyst suggestion sounds horrible but happens more often than we’d like to admit. Trust your people, not the brochures.
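On the first point above, the plumbing can be as simple as normalizing every tool’s alerts into one stream and shipping them somewhere central to be cataloged. A minimal Python sketch, with invented sources and field names:

```python
import json
import time

# Normalize alerts from different tools into one schema so nothing sits
# unread in a product-specific console. Field names are hypothetical.
def normalize(source: str, raw: dict) -> dict:
    return {
        "ts": raw.get("time", time.time()),
        "source": source,
        "severity": raw.get("severity", "unknown"),
        "summary": raw.get("msg") or raw.get("description", ""),
    }

def ship(event: dict) -> None:
    # Stand-in for a real forwarder (syslog, message queue, SIEM API).
    print(json.dumps(event))

ship(normalize("antivirus", {"severity": "high", "msg": "ransomware signature"}))
ship(normalize("ids", {"time": 1647000000, "description": "lateral movement"}))
```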

Tom’s Take

Security changes faster than any area that I’ve seen. Cloud is practically a glacier compared to EPP, XDR, and SOPV. I could even make up an acronym and throw it on that list and you might not even notice. You have to stay current but you also have to trust that you’re doing all you can. Breaches are going to happen no matter what you do. You have to hope you’ve done your best and that you can contain the damage. Remember that good security comes from asking the right questions instead of just plugging tools into the mix to solve issues you don’t have.

Getting Blasted by Backdoors


I wanted to take a minute to talk about a story I’ve been following that’s had some new developments this week. You may have seen an article talking about a backdoor in Juniper equipment that caused some issues. The issue at hand is complicated and the linked article does a good job of explaining some of the nuance. Here’s the short version:

  • The NSA develops a version of Dual EC random number generation that includes a pretty substantial flaw.
  • That flaw? If you know the pseudorandom value used to start the process you can figure out every value that follows, which means you can decrypt any traffic that uses the algorithm (a toy sketch of this failure mode follows this list).
  • NIST proposes the use of Dual EC and makes supporting it a requirement for vendors that want to be considered for future work. Don’t support this one? You don’t even get considered.
  • Vendors adopt the standard per the requirement but don’t make it the default for some pretty obvious reasons.
  • Netscreen, a part of Juniper, does use Dual EC as part of their default setup.
  • The Chinese APT 5 hacking group figures out the vulnerability and breaks into Juniper to add code to Netscreen’s OS.
  • They use their own seed value, which allows them to decrypt packets being encrypted through the firewall.
  • Hilarity does not ensue and we spend the better part of a decade figuring out what has happened.
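To see why a known seed is fatal, here’s a toy Python sketch. It uses Python’s ordinary random module and XOR rather than Dual EC and a real cipher, so it’s only an analogy for the failure mode: whoever knows (or replaces) the seed can regenerate the keystream and decrypt at will.

```python
import random

# Toy analogy only -- not Dual EC -- but the failure mode is the same:
# whoever knows the seed knows every "random" byte that follows.
SEED = 0xC0FFEE  # the secret starting value baked into the generator

def keystream(seed: int, length: int) -> bytes:
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(length))

plaintext = b"route table update"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(SEED, len(plaintext))))

# An attacker who learns the seed reproduces the exact keystream and
# decrypts with no brute force at all.
recovered = bytes(c ^ k for c, k in zip(ciphertext, keystream(SEED, len(ciphertext))))
print(recovered)  # b'route table update'
```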

That any of this even came to light is impressive considering the government agencies involved have stonewalled reporters and it took a probe from a US Senator, Ron Wyden, to get as far as we have in the investigation.

Protecting Your Platform

My readers know that I’m a pretty staunch advocate for not weakening encryption. Backdoors and “special” keys for organizations that claim they need them are a horrible idea. The safest lock is one that can’t be bypassed. The best secret is one that no one knows about. Likewise, the best encryption algorithms are ones that can’t be reversed or calculated by anyone other than the people using them to send messages.

I get that the flood of encrypted communications today is making life difficult for law enforcement agencies all over the world. It’s tempting to make it a requirement to allow them a special code that will decrypt messages to keep us safe and secure. That’s the messaging I see every time a politician wants to compel a security company to create a vulnerability in their software just for them. It’s all about being safe.

Once you create that special key you’ve already lost. As we saw above, the intentions of creating a backdoor into an OS so that we could spy on other people using it backfired spectacularly. Once someone else figured out that you could guess the values and decrypt the traffic they set about doing it for themselves. I can only imagine the surprise at the NSA when they realized that someone had changed the values in the OS and that, while they themselves were no longer able to spy with impunity, someone else could be decrypting their communications at that very moment. If you make a key for a lock someone will figure out how to make a copy. It’s that simple.

We focus so much on the responsible use of these backdoors that we miss the bigger picture. Sure, maybe we can make it extremely difficult for someone in law enforcement to get the information needed to access the backdoor in the name of national security. But what about other nations? What about actors not tied to a political process or bound by oversight from the populace? I’m more scared that someone that actively wishes to do me harm could find a way to exploit something that I was told had to be there for my own safety.

The Juniper story gets worse the more we read into it, but they were just the unlucky player left without a chair when the music stopped. Any one of the other companies that were compelled to include Dual EC by government order could have drawn the short straw here. It’s one thing to create a known-bad version of software and hope that someone installs it. It’s an entirely different matter to force people to include it. I’m honestly shocked the government didn’t try to mandate that it be used to the exclusion of other algorithms. In some other timeline Cisco or Palo Alto or even Fortinet are having very bad days unwinding what happened.


Tom’s Take

The easiest way to avoid having your software exploited is not to create your own exploit for it. Bugs happen. Strange things occur in development. Even the most powerful algorithms must eventually yield to Moore’s Law or Shor’s Algorithm. Why accelerate the process by cutting a master key? Why weaken yourself on purpose by repeating over and over again that this is “for the greater good”? Remember that the greater good may not include people that want the best for you. If you’re willing to hand them a key to unlock the chaos that we’re seeing in this case then you have overestimated your value to the process and become the very bad actor you hoped to stop.

Pegasus Pisses Me Off


In this week’s episode of the Gestalt IT Rundown, I jumped on my soapbox a bit regarding the latest Pegasus exploit. If you’re not familiar with Pegasus you should catch up with the latest news.

Pegasus is a toolkit designed by NSO Group from Israel. It’s designed for counterterrorism investigations. It’s essentially a piece of malware that can be dropped on a mobile phone through a series of unpatched exploits, allowing the operator to create records of text messages, photos, and phone calls and send them to a location for analysis. On the surface it sounds like a tool that could be used to covertly gather intelligence on someone of interest and ensure that they’re known to law enforcement agencies so they can be stopped in the event of some kind of criminal activity.

Letting the Horses Out

If that’s where Pegasus stopped, I’d probably not care one way or the other. A tool used by law enforcement to figure out how to stop things that are tough to defend against. But because you’re reading this post you know that’s not where it stopped. Pegasus wasn’t merely a tool developed by intelligence agencies for targeted use. If I had to guess, I’d say the groundwork for it was laid when the creators did work in some intelligence capacity. Where things went off the rails was when they no longer did.

I’m sure that all of the development work on the tool that was done for the government they worked for stayed there. However, things like Pegasus evolve all the time. Exploits get patched. Avenues of installation get closed. And some smart targets figure out how to avoid getting caught or even how to detect that they’ve been compromised. That means that work has to continue for this to be effective in the future. And if the government isn’t paying for it, who is?

If you guessed interested parties you’d be right! Pegasus is for sale to anyone that wants to buy it. I’m sure there are cursory checks done to ensure that people that aren’t supposed to be using it can’t buy it. But I also know that in those cases a few extra zeros at the end of a wire transfer can work wonders to alleviate those concerns. Whether or not it was supposed to be sold to everyone or just a select group of people, it got out.

Here’s where my hackles get raised a bit. The best way to prevent a tool like this from escaping is to never have created it in the first place. Just like a biological or nuclear weapon, the only way to be sure it can never be used is to never have it. Weapons are a temptation. Bombs were built to be dropped. Pegasus was built to be installed somewhere. Sure, the original intentions were pure. This tool was designed to save lives. What happens when the intentions aren’t so pure? What happens when your enemies aren’t terrorists but politicians with different views? You might scoff at the suggestion of using a counterterrorism tool to spy on your ideological opponents, but look around the world today and ask yourself if your opponents are so inclined.

Once Pegasus was more widely available I’m sure it became a very tempting way to eavesdrop on people you wanted to know more about. A journalist getting leaks from someone in your government? Just drop Pegasus on their phone and find out who it is. An annoying activist making the media hate you? Text him the Pegasus installer and dump his phone looking for incriminating evidence to shut him up. Suspect your girlfriend of being unfaithful? Pegasus can tell you for sure! See how quickly we went from “necessary evil to protect the people” to “petty personal reasons”?

The danger of the slippery slope is that once you’re on it you can’t stop. Pegasus may have saved some lives but it has undoubtedly cost many others too. It has been detected as far back as 2014. That means every source that has been compromised or every journalist killed doing their work could have been found out thanks to this tool. That’s an awful lot of unknowns to carry on your shoulders. I’m sure that NSO Group will protest and say that they never knowingly sold it to someone that used it for less-than-honorable purposes. Can they say for sure that their clients never shared it? Or that it was never stolen and used by the very people that it was designed to be deployed against?

Closing the Barn Door

The escalation of digital espionage is only going to increase. In the US we already have political leaders calling on manufacturers and developers to create special backdoors for law enforcement to use to detect criminals and arrest them as needed. This is along the same lines as Pegasus, just formalized and legislated. It’s a terrible idea. If the backdoor is created it will be misused. Count on that. Even if the people that developed it never intended to use it improperly someone without the same moral fortitude will eventually. Oppenheimer and Einstein may have regretted the development of nuclear weapons but you can believe that by 1983 the powers that held onto them weren’t so opposed to using them if the need should arise.

I’m also not so naive as to believe for an instant that the governments of the world are just going to agree to play nice and not develop these tools any longer. They represent a competitive advantage over their opponents and that’s not something they’re going to give up easily. The only thing holding them back is oversight and accountability to the people they protect.

What about commercial entities though? If governments are restrained by the people, then businesses are only restrained by their stakeholders and shareholders. And those people only seem to care about making money. So if the best tool to do the thing appears and it can make them a fortune, would they forgo the profits to take a stand against categorically evil behavior? Can you say for certain that would always be the case?


Tom’s Take

Governments may not ever stop making these weapons but perhaps it’s time for the private sector to stop. The best way to keep the barn doors closed so the horses can’t get out is to not build doors in the first place. If you build a tool like Pegasus it will get out. If you sell it, even to the most elite clientele, someone you don’t want to have it will end up with it. It sounds like a pretty optimistic viewpoint for sure. So maybe the other solution is to have them install their tool on their own devices and send the keys to a random person. That way they will know they are being watched and that whoever is watching them can decide when and where to expose the things they don’t want known. And if that doesn’t scare them into no longer developing tools like this then nothing will.

Building Backdoors and Fixing Malfeasance

You might have seen the recent news this week that there is an exploitable backdoor in Zyxel hardware that has been discovered and is being exploited. The backdoor admin account with the clever name ‘zyfwp’ is not something that has been present in the devices forever. The account was put in during firmware version 4.60, which was released in Q4 2020.

Zyxel is rushing to patch the devices and remove the backdoor account. Users are being advised to disable remote administration until the accounts can be deactivated and proven to be removed. However, the bigger question in my mind relates to the addition of the user account in the first place. Why would you knowingly install a backdoor?
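You can’t stop a vendor from shipping a backdoor, but you can audit for accounts you never agreed to. Here’s a hypothetical Python sketch that diffs the accounts found in a device config against the accounts your documentation says should exist; the config format and names are invented for the example.

```python
# Accounts your documentation says should exist on the device.
DOCUMENTED_ACCOUNTS = {"admin", "netops", "monitoring"}

def undocumented_accounts(device_config: str) -> set[str]:
    """Pull 'username <name> ...' lines and flag anything not documented."""
    found = {
        line.split()[1]
        for line in device_config.splitlines()
        if line.strip().startswith("username ")
    }
    return found - DOCUMENTED_ACCOUNTS

config = """\
username admin privilege 15
username monitoring privilege 1
username zyfwp privilege 15
"""
print(undocumented_accounts(config))  # {'zyfwp'} -- start asking questions
```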

Hello, Joshua

Backdoors are nothing new in the computer world. I’d argue the most famous backdoor account in the history of computer hacking belongs to Joshua, the dormant login for the War Operation Plan Response (WOPR) computer system in the 1983 movie WarGames. Joshua was an old login for the creator to access the system outside of the military chain of command. When the developer was removed from the project the account was forgotten about until a kid discovered it and kicked off the plot of the movie.

Joshua tells us a lot about developers and their desire to have access to the system. I’ll admit I’ve been in the same boat before. I’ve created my own logins to systems with elevated access to get tasks accomplished. I’ve also notified the users and administrators of those systems about my account and let them deal with it as needed. Most were okay with it being there. Some were hesitant and required it to be disabled after my work was done. Either way, I was up front about what was going on.

Joshua and zyfwp are examples of what happens when those backdoors are installed outside of the knowledge of the operators. What would have happened if the team in the Netherlands hadn’t found the account? What if Zyxel devices were getting hacked and networks breached without anyone knowing the vector? I’m sure the account showed up in all the admin dashboards, right?

Easter Egg Hunts

Do you remember the Windows 3.1 Bear? It was a hidden reference in the credits to the development team’s mascot. You had to jump through a hoop to find it by holding down a keystroke combination and clicking a specific square in the Windows logo. People loved finding those little nuggets in the software all the way up to Windows 98.

What changed? Turns out, as part of Microsoft’s Trustworthy Computing initiative in 2002 they removed all undocumented features and code that could cause these kinds of things. It also might have had something to do with the antitrust investigations into Microsoft in the 1990s and how undocumented features in Windows and Office might have given the company a competitive advantage. Whatever the reason, Microsoft has committed to removing undocumented code.

Easter eggs are fun to find but represent the bright side of the dark issue above. What happens when the Easter egg in question isn’t a credit roll but an undocumented account? What if the keystroke doesn’t bring up a teddy bear but instead gives the current user account full admin access? You scoff at the possibility but there’s nothing stopping a developer from making that happen.

These issues are part of the reason why all code and features need to be documented. We need to know what’s going on in the program and how it could impact us. This means no backdoors. If there is a way to access the system aside from the controls built in already it needs to be known and be able to be disabled if necessary. If it can’t be disabled then the users need to be aware of that fact and make the choice to not use the software because of security issues.

If you’re following along closely, you should have picked up on the fact that this same logic applies to backdoors that have been mandated by the government too. The current slate of US Senators seems to believe that we need to create keys that allow end-to-end encryption to be weakened and read by law enforcement. However, as companies like Apple have stated for years, if you create a key for a lock that should only ever be opened under special circumstances you have still created a weakness that can be unlocked. We’ve seen the tools used by intelligence agencies stolen and used to create malware unlike anything we’ve ever seen before. What do you think might happen if they get the backdoor keys to go through encrypted messaging systems?


Tom’s Take

I don’t run Zyxel equipment in my home or anywhere I used to work. But if I did there would be a pile of it in the dumpster after this mess. Having a backdoor is one thing. Purposely making one is another. And having that backdoor discovered and exploited by the Internet is an entirely different conversation. The only way to be sure that you’ve fixed your backdoor problem is to not have one in the first place. Joshua and zyfwp are what we need to get away from, not what we need to work toward. Malfeasance only stops when you don’t do it in the first place.

Securing Your Work From Home


Wanna make your security team’s blood run cold? Remind them that all that time and effort they put in to securing the enterprise from attackers and data exfiltration is currently sitting unused while we all work from home. You might have even heard them screaming at the sky just now.

Enterprise security isn’t easy, nor should it be. We constantly have to be on the offensive to find new attack vectors and hunt down threats and exploits. We have spent years and careers building defense-in-depth into an art form not unlike making buttery croissants. It’s all great when that apparatus is protecting our enterprise data center and cloud presence like a Scottish castle repelling invaders. Right now we’re in the wilderness with nothing but a tired sentry to protect us from the marauders.

During Security Field Day 4, I led a discussion panel with the delegates about the challenges of working from home securely. Here’s a link to our discussion that I wanted to spend some time elaborating on.

Home Is Where the Exploits Are

BYOD was a huge watershed moment for the enterprise because we realized for the first time that we had to learn to secure other people’s devices. We couldn’t rely on locked-down laptops and company-issued phones to keep us safe. Security exploded when we no longer had control of the devices we were trying to protect. We all learned hard lessons about segmenting networks and stopping lateral attacks from potentially compromised machines. It’s all for naught now because we’re staring at those protections gathering dust in an empty office. With the way that commercial real estate agents are pronouncing a downturn in their market, we may not see them again soon.

Now, we have to figure out how to protect devices we don’t own on networks we don’t control. For all the talk of VPNs for company devices and SD-WAN devices at the edge to set up on-demand protection, we’re still in the dark when it comes to the environment around our corporate assets. Sure, the company Thinkpad is safe and sound and isolated at the CEO’s house. But what about his wife’s laptop? Or the kids and their Android tablets? Or even the smart speakers and home IoT devices around it? How can we be sure those are all safe?

Worse still, how do you convince the executives of a company that their networks aren’t up to par? How can you tell someone that controls your livelihood they need to install more firewalls or segment their network for security? If the PlayStation suddenly needs to authenticate to the wireless and is firewalled away from the TV to play movies over AirPlay, you’re going to get a lot of panicked phone calls.

Security As A Starting Point

If we’re going to make Build Your Own Office (BYOO) security work for our enterprise employees, we need to reset our goals. Are we really trying to keep everyone 100% safe and secure 100% of the time? Are we trying for total control over all assets? Or is there a level of insecurity we are willing to accept to make things work more smoothly?

On-demand VPNs are a good example. It’s fine to require them to access company resources behind a firewall in the enterprise data center. But does it need to be enabled to access things in the public cloud? Should the employee have to have it enabled if they decide to work on the report at 8:00pm when they haven’t ever needed it on before? These challenges are more about policy than technology.

Policy is the heart of how we need to rebuild BYOO security. We need to examine which policies are in place now and determine if they make sense for people that may never come back into the office. Don’t want to use the VPN for connectivity? Fine. However, you will need to enable two-factor authentication (2FA) on your accounts and use a software token on your phone to access our systems. Don’t want to install the apps on your laptop to access cloud resources? We’re going to lock you out until we’ve evaluated everything remotely for security purposes.
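Since software tokens come up here, it may help to see how little magic is involved: a TOTP code (RFC 6238) is just an HMAC over a time counter. This is a minimal sketch for illustration only; use a vetted library for anything real.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret; the server and the phone both hold it.
print(totp("JBSWY3DPEHPK3PXP"))
```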

Policy has an important role to play. It is the reason for our technology and the driver for our work. Policy is why I have 2FA enabled on all my corporate accounts. Policy is why I don’t have superuser rights to certain devices but instead authenticate changes as needed with suitable logging. Policy is why I can’t log in to a corporate email server from a vacation home in the middle of nowhere because I’m not using a secured connection. It’s all relevant to the way we do business.

Pushing Policy Buttons

You, as a security professional, need to spend the rest of 2020 doing policy audits. You’re going to go cross-eyed. You’re going to hate it. So will anyone you contact about it. Trust me, they hate it just like you do. But you have to make it happen. You have to justify why you’re doing things the way you’re doing them. “This is how we’ve always done it” is no longer justification for a policy. We’re still trying to pull through a global pandemic that has cost thousands their jobs and displaced thousands more to a home they never thought was going to support actual work. Now is not the time to get squeamish.

It’s time to scrub your policies down to the baseboards and get to cleaning and building them back up. Figure out what you need and what is required. Implement changes you’ve always needed to make, like software updates or applications that enhance security. If you want to make it stick in this new world of working from home you need to put it in place at the deepest levels now. And it needs to stick for everyone. No executive workarounds. No grace extensions that let them keep their favorite insecure apps or skip having 2FA enabled on everything. They need to lead by example from the front, not lag in the back being insecure.


Tom’s Take

I loved the talk at Security Field Day about security at home. We weighed a lot of things that people aren’t really thinking about right now because we haven’t had a major breach in “at home” security. Yet. We know it’s coming and when it happens the current state of network segmentation isn’t going to be kind to whoever is under the gun. Definitely watch the video above and tell me your thoughts, either in the video comments or here. We can keep things safe and secure no matter where we are. We just need to think about what that looks like at the lowest policy level and build up from there.

Security and Salt

One of the things I picked up during the quarantine is a new-found interest in cooking. I’ve been spending more time researching recipes and trying to understand how my previous efforts to be a four-star chef have fallen flat. Thankfully, practice does indeed make perfect. I’m slowly getting better, which is to say that my family will actually eat my cooking now instead of just deciding that pizza for the fourth night in a row is a good choice.

One of the things I learned as I went on was about salt. Sodium Chloride is a magical substance. Someone once told me that if you taste a dish and you know it needs something but you’re not quite sure what that something is, the answer is probably salt. It does a lot to tie flavors together. But it’s also a fickle substance. It has the power to make or break a dish in very small amounts. It can be the difference between perfection and disaster. As it turns out, it’s a lot like security too.

Too Much is Exactly Enough

Security and salt are alike in the first way because you need the right amount to make things work. You have to have a minimum amount of both to make something viable. If you don’t have enough salt in your dish you won’t be able to taste it. But you also won’t be able to pull the flavors in the dish together with it. So you have to work with a minimum. Whether it’s a dash of salt or a specific minimum security threshold, you have to have enough to matter otherwise it’s the same as not having it at all.

To The Salt Mines

Likewise, the opposite effect is also detrimental. If you need to have the minimum amount to be effective, the maximum amount of both salt and security is bad. We all know what happens when we put too much salt into a dish. You can’t eat it at all. While there are tricks to getting too much salt out of a dish they change the overall flavor profile of whatever you’re making. Even just a little too much salt is very apparent depending on the dish you’re trying to make. Likewise, too much security is a deterrent to getting actual work done. Restrictive controls get in the way of productivity and ultimately lead to people trying to work out solutions that don’t solve the problem but instead try to bypass the control.

Now you may be saying to yourself, “So, the secret is to add just the right amount of security, right?” And you would be correct. But what is the right amount? Well, it’s not unlike trying to measure salt by sight instead of using a measuring device. Have you ever seen a chef or TV host pour an amount of salt into their hands and say it needs “about that much”? Do you know how they know how much salt to add? It’s not rocket science. Instead, it’s the tried-and-true practice of practice. They know about how much salt a dish needs for a given cooking time or flavor profile. They may have even made the dish a few times in order to understand when it might need more or less salt. They know that starches need more salt and delicate foods need less. Most importantly, they measured how much salt they can hold in their cupped hand. So they know what a teaspoon and tablespoon of salt look like in their palm.

How is this like security? Most Infosec professionals know inherently how to make things more secure. Their experience and their training tell them how much security to add to a system to make it more secure without putting too much in place to impede the operations of the system. They know where to put an IPS to provide maximum coverage without creating too many false positives. And they can do that because they have the experience to know how to do it right without guessing. Because the odds are good they’ve done it wrong at least one time.

The last salty thing to remember is that even when you have the right amounts down to a science you’re still going to need to figure out how to make it perfect. Potato soup is a notoriously hard dish to season properly. As mentioned above, starchy foods tend to soak up salt. You can fix a salty dish by putting a piece of a potato in it to soak up the salt. But it also means that it’s super hard to get it right when everything in your dish soaks up salt. But the best chefs can get it right. Because they know where to start and they know to test the dish before they do more. They know they need to start from a safe setup and push out from there without ruining everything. They know that no exact amount is the same between two dishes and the only way to make sure it’s right is to test until you get it right. Then make notes so you know how to make it better the next time.


Tom’s Take

Salt is one of my downfalls. I tend to like things salty, so I put too much in when I taste things. It’s never too salty for me unless my mouth dries out like a desiccated dish. That’s why I also have to rely on my team at home to help me understand when something is just right for them so I don’t burn out their taste buds either. Security is the same. You need a team that understands everything from their own perspective so they can help you get it right all over. You can’t take salt out of a dish without a massive crutch. And you can’t really reduce too much security without causing issues like budget overruns or costly meetings to decide what to remove. It’s better to get your salt and your security right in the first place.