User Discomfort As A Security Function

If you grew up in the 80s watching movies like I did, you’ll remember WarGames. I could spend hours lauding this movie, but for the purpose of this post I want to call out the sequence at the beginning when two airmen are trying to operate the nuclear missile launch computer. It requires two keys, one in the possession of each airman, inserted into two locks located more than ten feet apart. The reason is that launching the missile requires two people to agree to do something at the same time. The two-key scene appears in a number of movies as a way to show that so much power needs controls.

However, one thing I wanted to talk about in this post is the notion that those controls need to be visible to be effective. The two key solution is pretty visible. You carry a key with you but you can also see the locks that are situated apart from each other. There is a bit of challenge in getting the keys into the locks and turning them simultaneously. That not only shows that the process has controls but also ensures the people doing the turning understand what they’re about to do.

Consider a facility that is so secure that you must leave your devices in a locker or secured container before entering. I’ve been in a couple before and it’s a weird feeling to be disconnected from the world for a while. Could the facility do something to ensure that the device didn’t work inside? Sure they could. Technology has progressed to the point where we can do just about anything. But leaving the device behind is as much about informing the user that they aren’t supposed to be sharing things as it is about controlling the device. Controlling a device is easy. Controlling a person isn’t. Sometimes you have to be visible.

Discomfort Design

Security solutions that force the user out of a place of comfort are important. Whether it’s a SCIF for sharing sensitive data or forcing someone to log in with a more secure method, the purpose of that friction is attention. You need the user to know they’re doing something important and understand the risks. If the user doesn’t know they’re doing something that could cause problems or expose something crucial, you will end up doing damage control at some point.

Think of something as simple as sitting in the exit row on an airplane. In my case, it’s for Southwest Airlines. There’s more leg room but there’s also a responsibility to open the door and assist in evacuation if needed. That’s why the flight attendants need to hear you acknowledge that warning with a verbal “yes” before you’re allowed to sit in those seats. You have admitted you understand the risks and responsibilities of sitting there and you’re ready to do the job if needed.

Security has tried to become unobtrusive in recent years to reduce user friction. I’m all for features like using SSL/TLS by default on websites, easing restrictions on account sharing, or using passkeys in place of passwords. But there comes a point when hiding the security reduces its effectiveness. What about phishing emails that put lock emojis next to URLs to make them seem secure even when they aren’t? How about cleverly crafted login screens for services that are almost indistinguishable from the real thing unless you bother to check the URL? It could even be the tried-and-true cloned account on Facebook or Instagram asking a friend for help unlocking their account, only to steal your login info and start scamming everyone on your friends list.

The solution is to make sure users know they’re secure. Make it uncomfortable for them so they are acutely aware of heightened security. We deal with this all the time in areas of our lives outside of IT. Airport screenings are a great example. So are heightened security measures at federal buildings. You know you’re going somewhere that has placed an emphasis on security.

Why do we try to hide it in IT? Is it because IT causes stress due to it being advanced technology? Are we worried that users are going to drop our service if it is too cumbersome to use the security controls? Or do we think that the investment in making that security front and center isn’t worth the risk of debugging it when it goes wrong? I would argue that these are solved problems in other areas of the world and we have just accepted them over time. IT shouldn’t be any different.

Note that discomfort shouldn’t lead to a complete lack of usability. It’s very easy to engineer a system that needs you to reconfirm your credentials every 10 minutes to ensure that no one has hacked you. And you’d quit using it because you don’t want to type in a password that often. You have to strike the right balance between user friendly and user friction. You want them to notice they’re doing something that needs their attention to security but not so much that they’re unable to do their job or use the service. That’s where the attention should be placed, not in cleverly hiding a biometric scanning solution or certificate-based service for the sake of saying it’s secure.


Tom’s Take

I’ll admit that I tend to take things for granted. I had to deal with a cloned Facebook profile this past weekend and I worried that someone might try to log in and do something with my account. Then I remembered that I have two-factor authentication turned on and my devices are trusted so no one can impersonate me. But that made me wonder if the “trust this device” setting was a bit too easy to trust. I think making sure that your users know they’re protected is more critical. Even if it means they have to do something more performative from time to time. They may gripe about changing a password every 30 days or having to pull out a security token but I promise you that discomfort will go away when it saves them from a very bad security day.

Victims of Success

It feels like the cybersecurity space is getting more and more crowded with breaches in the modern era. I joke on our weekly Gestalt IT Rundown news show that we could include a breach story every week and still not cover them all. Even Risky Business can’t keep up. However, the defenders seem to be gaining on the attackers, and that means the battle lines are shifting again.

Don’t Dwell

A recent article from The Register noted that dwell times for the detection of ransomware and malware have dropped almost a full day in the last year. Dwell time is especially important because detecting ransomware early means you can take preventative measures before it can be deployed. I’ve seen all manner of early detection systems, such as data protection companies measuring the entropy of data-at-rest to determine when it is no longer able to be compressed, meaning it has likely been encrypted and should be restored.
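That entropy check is simple enough to sketch. The following is a minimal illustration of the idea, not any vendor’s actual implementation: encrypted data looks like random noise, so its byte-level Shannon entropy sits near the 8-bits-per-byte maximum. The 7.5 threshold here is an arbitrary assumption for illustration, and already-compressed data trips the same signal, which is why real products combine this with other heuristics.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def likely_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag data whose entropy is near the 8-bit maximum. Encrypted and
    compressed data both look like random noise, so this is a hint that
    the data can no longer be compressed, not proof of encryption."""
    return shannon_entropy(data) >= threshold
```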

Likewise, XDR companies are starting to reduce the time it takes to catch behaviors on the network that are out of the ordinary. When a user starts scanning for open file shares and doing recon on the network you can almost guarantee they’ve been compromised somehow. You can start limiting access and begin cleanup right away to ensure that they aren’t going to get much further. This is an area where zero trust network architecture (ZTNA) is shining. The less a particular user has access to without additional authentication, the less they can give up before the controls in place in the system catch them doing something out of the ordinary. This holds true even if the user hasn’t been tricked into giving up their credentials but instead is working with the attackers through monetary compensation or misguided ire toward the company.
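As a toy example of the kind of behavioral signal an XDR product might key on, here is a sketch that flags users who suddenly touch an unusually large number of distinct hosts. The event format and the fixed threshold are assumptions for illustration; real systems baseline each user’s normal behavior rather than using a single hard cutoff.

```python
from collections import defaultdict

def flag_scanners(events, threshold=50):
    """events: iterable of (user, dest_host) pairs for file-share
    connection attempts. Returns users contacting an unusually large
    number of distinct hosts, a rough signal of share-scanning recon.
    The fixed threshold stands in for a learned per-user baseline."""
    seen = defaultdict(set)
    for user, dest in events:
        seen[user].add(dest)
    return {user for user, dests in seen.items() if len(dests) >= threshold}
```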

Thanks to the advent of technologies like AI, machine learning, and automation we can now put controls in place quickly to prevent the spread of disaster. You might be forgiven for thinking that kind of response will eradicate this vector of attack. After all, killing off the nasty things floating in our systems means we’re healthier overall, right? It’s not like we’re breeding a stronger strain of disease?

Breeding Grounds

Ask a frightening question and get a frightening answer, right? In the same linked Register article the researchers point out that while dwell times have been reduced the time it takes attackers to capitalize on their efforts has also been accelerated. In addition, attackers are looking at multiple vectors of persistence in order to accomplish their ultimate goal of getting paid.

Let’s assume for the moment that you are an attacker that knows the company you’re going after is going to notice your intrusion much more quickly than before. Do you try to sneak in and avoid detection for an extra day? Or do you crash in through the front door and cause as much chaos as possible before anyone notices? Rather than taking the sophisticated approach of persistence and massive system disruption, attackers are instead taking a more low-tech approach to grabbing whatever they can before they get spotted and neutralized.

If you look at the most successful attacks so far in 2023 you might notice they’ve gone for a “quantity over quality” approach. Sure, a heist like Ocean’s Eleven is pretty impressive. But so is smashing the display case and running out with the jewels. Maybe it’s not as lucrative, but when you hit twenty jewelry stores a week you make up the smaller per-heist take with volume.

Half of all intrusion attempts now come by way of stolen or compromised credentials. There are a number of impressive tools out there that can search for weak points in a system and expose bugs you never even dreamed could exist. But there are also much easier ways: phishing knowledge workers for their passwords, or simply bribing them to gain access to restricted resources. Think of it as the crowbar approach to the heist scenario above.

Lock It Down

Luckily, even the fastest attackers still have to gain access to the system to do damage. I know we all harp on it constantly but the best way to prevent attacks is to minimize the ways that attack vectors get exploited in the first place. Rotate credentials frequently. Have knowledge workers use generated passwords in place of ones that can be tied back to them. Invest in password management systems or, more broadly, identity management solutions in the enterprise. You can’t leak what you don’t know or can’t figure out quickly.
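Generated passwords don’t require special tooling; every mainstream language ships a cryptographically secure random source. A minimal sketch in Python, where the character set and length are arbitrary choices (real deployments should follow whatever policy the identity system enforces):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from a cryptographically secure random source.
    The alphabet and length here are arbitrary illustrative choices."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```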

After that, look at how attackers capitalize on leaks or collusion. I know it’s a tale as old as time but you shouldn’t be running anything with admin access that doesn’t absolutely need it. Yes, even YOUR account. You can’t be the vector for a breach if you are just as unimportant as everyone else. Have a separate account with a completely different password for doing those kinds of tasks. Regularly audit accounts that have system-level privilege and make sure they’re being rotated too. Another great reason for having an identity solution is that the passwords can be rotated quickly without disruption. Oh, and make sure the logins to the identity system are as protected as anything else.
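Auditing privileged accounts for stale credentials is also easy to automate. This sketch assumes a hypothetical account record with `name`, `is_admin`, and `last_rotated` fields; the point is the shape of the check, not any particular directory’s API.

```python
from datetime import datetime, timedelta

def stale_admin_accounts(accounts, max_age_days=90, now=None):
    """accounts: iterable of dicts with hypothetical 'name', 'is_admin',
    and 'last_rotated' (datetime) fields. Returns names of privileged
    accounts whose credentials are older than the rotation window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [acct["name"] for acct in accounts
            if acct["is_admin"] and acct["last_rotated"] < cutoff]
```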

Lastly, don’t make the mistake of thinking you’re an unappealing target. Just because you don’t deal with customer data or have personally identifiable information (PII) stored in your system doesn’t mean you’re not going to get swept up in the next major attack. With the quantity approach the attackers don’t care what they grab as long as they can get out with something. They can spend time analyzing it later to figure out how to best take advantage of what they’ve stolen. Don’t give them the chance. Security through obscurity doesn’t work well in an age where you can be targeted and exposed before you realize what’s going on.


Tom’s Take

Building a better mousetrap means you catch more mice. However, the ones that you don’t catch just get smarter and figure out how to avoid the bait. That’s the eternal game in security. You stamp out the low-level threats quickly but that means the ones that aren’t ensnared become more resistant to your efforts. You can’t assume every attack is going to be a sophisticated nation state attempt to steal classified info. You may just be the unlucky target of a smash-and-grab with stolen passwords. Don’t become a victim of your own success. Keep tightening the defenses and make sure you don’t wind up missing the likely while looking for the impossible.

The Essence of Cisco and Splunk

You no doubt noticed that Cisco bought Splunk last week for $28 billion. It was a deal that had been rumored for at least a year if not longer. The purchase makes a lot of sense from a number of angles. I’m going to focus on a couple of them here with some alliteration to help you understand why this may be one of the biggest signals of a shift in the way that Cisco does business.

The S Stands for Security

Cisco is now a premier security company. The addition of the most powerful SIEM on the market means that Cisco’s security strategy now has a completeness of vision. SecureX has been a very big part of the sales cycle for Cisco as of late, and having all the parts to make it work top to bottom is a big win. XDR is a great thing for organizations, but it doesn’t work without massive amounts of data to analyze. Guess where Splunk comes in?

Aside from some very specialized plays, Cisco now has an answer for just about everything a modern enterprise could want in a security vendor. They may not be number one in every market but they’re making a play for number two in as many places as possible. More importantly it’s a stack that is nominally integrated together to serve as a single source for customers. I’ll be the first person to say that the integration of Cisco software acquisitions isn’t seamless. However, when the SKUs all appear together on a bill of materials most organizations won’t look beyond that. Especially if there are professional services available to just make it work.

Cisco is building out their security presence in a big way. All thanks to a big investment in Splunk.

The S Stands for Software

When the pundits said that Cisco could never really transform themselves from a hardware vendor to a software company there was a lot of agreement. Looking back at the transition that Chuck Robbins has led since then what would you say now? Cisco has aggressively gone after software companies across the board to build up a portfolio of recurring revenue that isn’t dependent on refresh cycles or silicon innovations.

Software is the future for Cisco. Iteration on their core value products is going to increase their profit far beyond what they could hope to realize through continuing to push switches and routers. That doesn’t mean that Cisco is going to abandon the hardware market. It just means that Cisco is going to spend more time investing in things with better margins. The current market for software subscriptions and recurring licensing revenue is hot and investors want to see those periodic returns instead of a cycle-based push for companies to adopt new technologies.

What makes more sense to you? Betting on a model where customers need to pay per gigabyte of data stored, or on a technology that may be dead on the vine? Taking the nerd hat off for a moment means you need to see the value that companies want to realize, not the hope that something is going to be big in the future. Hardware will come along when the software is ready to support it. Blu-ray didn’t win over HD DVD because it was technically superior. It won because Sony supported it and convinced Disney to move their media onto it exclusively.

Software is the way forward for Cisco. Software that provides value for enterprises and drives upgrades and expansion. The hardware itself isn’t going to pull more software onto the bottom line.

The S Stands for Synergy

The word synergy is overused in business vernacular. Jack Welch burned us out on the idea that we can get more out of things together by finding hidden gems that aren’t readily apparent. I think that the real value in the synergy between Cisco and Splunk can be found in the value of creating better code and better programmers.

A bad example of synergy is Cisco’s purchase of Flip Video. When it became clear that the market for consumer video gear wasn’t going to blow up quite like Cisco had hoped, they pivoted to talking about using the optics inside the cameras to improve video quality for their video collaboration products, which never struck me as a good argument. They bet on something that didn’t pay off and had to salvage it the best they could. How many people went on to use Cisco cameras outside of the big telepresence rooms? How many are using them today instead of phone cameras or cheap webcams?

Real synergy comes when the underlying processes are examined and improved or augmented. Cisco is gaining a group of developers and product people that succeed based on the quality of their code. If Splunk didn’t work it wouldn’t be a market leader. The value of what that team can deliver to Cisco across the organization is important. The ongoing critiques of Cisco’s code quality have created a group of industry professionals that are guarded when it comes to using Cisco software on their devices or for their enterprises. I think adding the expertise of the Splunk teams will go a long way toward helping Cisco write more stable code in the long term.


Tom’s Take

Cisco needed Splunk. They needed a SIEM to compete in the wider security market and rather than investing in R&D to build one they decided to pick up the best that was available. The long courtship between the two companies finally paid off for the investors. Now the key is to properly integrate Splunk into the wider Cisco Security strategy and also figure out how to get additional benefits from the development teams to offset the big purchase price. The essence of those S’s above is that Cisco is continuing their transformation away from a networking hardware company and becoming a more diversified software firm. It will take time for the market to see if that is the best course of action.

Don’t Let the Cybersecurity Trust Mark Become Like Food Labeling

I got several press releases this week about the newest program from the US Federal government for cybersecurity labeling. The program is designed to help consumers understand how secure IoT devices are and the challenge of keeping your network safe from the large number of smart devices being deployed today. Consumer Reports has been pushing for something like this for a while and lauded the move with some caution. I’m going to take it a little further. We need to be very careful so this doesn’t become as worthless as the nutrition labels mandated by the government.

Absolute Units

Having labels is certainly better than not having them. Knowing how much sugar a sports drink has is way more helpful than when I was growing up and we had to guess. Knowing where to find that info on a package means I’m not having to go find it somewhere on the Internet1. However, all is not sunshine and roses. That’s because of the way that companies choose to fudge their numbers.

Food companies spent years working the numbers on those nutrition labels. The most common trick is to adjust the serving size listed on the box. For example, a 20-ounce soda bottle wasn’t labeled as a single serving of liquid. It was 2.5 servings at 8 ounces each. To find the true nutritional value of the whole bottle you had to read closely enough to do the math and discover it was more sugar and calories than you were expecting. The whole game was so bad the FDA forced companies to change labeling in 2022.
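The math the label makes you do is trivial, which is exactly why hiding it behind a serving size works. A quick sketch of that whole-bottle calculation, where the calorie and sugar numbers are hypothetical values roughly typical for a soda:

```python
def whole_package_totals(per_serving, servings_per_container):
    """Scale per-serving label values up to the whole package."""
    return {k: v * servings_per_container for k, v in per_serving.items()}

# Hypothetical label values for a 20-ounce soda sold as 2.5 servings:
label = {"calories": 100, "sugar_g": 27}
totals = whole_package_totals(label, 2.5)
# totals is {"calories": 250.0, "sugar_g": 67.5}
```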

One of the other ways that labeling guidelines have allowed companies to get away with misinformation is through clever interpretation. Did you know that Tic Tacs are sugar free? If you look at the nutrition label they contain zero sugar despite being made of almost nothing but sugar. How can they accurately say that? Because the serving size is so small it rounds down to zero. You’re probably groaning now, but this is what has happened for years unless some group steps in to fix the issue.

The Fine Print

Now let’s look at how this could be adapted to go horribly wrong with IoT devices. One of the simple ways that I could see it being an issue is with something like a baby monitor. These devices are usually low-cost and don’t have much security built in. If you know the address of the device you can often connect to it and watch the video feed. Adding more software controls on top of the hardware is going to increase the price significantly. So are the manufacturers going to add pricey software to meet labeling guidelines? Or are they going to pull a Tic Tac? Say, for example, labeling the device as secure against remote access with an asterisk saying it’s only secure if you turn off the Wi-Fi and only look at it in the same room?

The label is going to be a valuable thing to add to the box to differentiate the product from competitors. Given the choice between a box without a label and one with a label, which one would you pick, Tommy Boy? That being said, how far do you think someone would go to put the label on the box? The program is voluntary but it still has requirements that need to be met. Someone could potentially create specific scenarios that allow them to meet the guidelines under specific circumstances and include the label despite not being the most secure device.

If the government wants to ensure that users aren’t getting attacked and have their data stolen, they need to put explicit guidelines in place to specify how the labels need to be created. No creative interpretation. No asterisks or fine print. It needs to be a table that has simple answers. If you don’t meet the guidelines you don’t get the check mark. Don’t let the manufacturers interpret your rules in their favor. It’s a bit more of a pain for those administering the program but a little sweat equity up front is going to be more comforting than the news articles after the fact.


Tom’s Take

I want this program to work. I really do. I also know how capitalism works. Companies are going to work this label as much as possible in their favor, including some creative thoughts on the requirements. I’d rather have some fussing now that leads to proper implementation in the future than lots of bad press about how the labels are worthless. If the industry is going to take steps to make things better for consumers, let’s make sure it’s really better and not some sugar-free version.


  1. Provided the packaging is big enough for it to be printed, that is. ↩︎

Cross Training for Career Completeness

Are you good at your job? Have you spent thousands of hours training to be the best at a particular discipline? Can you configure things with your eyes closed and are finally on top of the world? What happens next? Where do you go if things change?

It sounds like an age-old career question. You’ve mastered a role. You’ve learned all there is to learn. What more can you do? It’s not something specific to technology either. One of my favorite stories about this struggle comes from the iconic martial artist Bruce Lee. He spent his formative years becoming an expert at Wing Chun and no one would argue he wasn’t one of the best. As the story goes, in 1964 he engaged in a sparring match with a practitioner of a different art and, although he won, he was exhausted and felt the fight had gone on far too long. This is what encouraged him to develop Jeet Kune Do as a way to incorporate different styles for more efficiency, an approach that eventually influenced the development of mixed martial arts (MMA).

What does Bruce Lee have to do with tech? Cross training across different tech disciplines is critical to your longevity as a technology practitioner.

Time Marches On

A great example of this came up during Mobility Field Day back in May. During the Fortinet presentation there was a discussion about wireless and SASE. I’m sure a couple of the delegates were shrugging their shoulders in puzzlement about this inclusion. After all, what does SASE have to do with SNR or Wi-Fi 6E? Why should they care about software running on an AP when the real action is in the spectrum?

To me, as someone who sees the bigger picture, the value of talking about SASE is crucial. Access points are no longer radio bridges. They are edge computing devices that run a variety of software programs. In the old days it took everything the CPU had to process the connection requests and forward the frames to the right location. Today there is a whole suite of security being done at the edge to keep users safe and reduce the amount of traffic being forwarded into the network.

Does that mean that every wireless engineer needs to become a security expert? No. Far from it. There is specialized knowledge in both areas that people will spend years perfecting. Does that mean that wireless people need to ignore the bigger security picture? That’s also a negative. APs are going to be running more and more software in the modern IT world because it makes sense to put it there and not in the middle of the enterprise or the cloud. Why process traffic if you don’t have to?

It also means that people need to look outside their specific skillset to understand the value of cross training. Some areas have easy crossover potential. Networking and wireless have a lot in common. So do storage, virtualization, and cloud. We constantly talk about the importance of including security in the discussion everywhere, from implementation to development. Yet when we talk about the need to understand these technologies at a basic level we often face resistance from operations teams that just want to focus on their own area and not the bigger picture.

New Approaches

Jeet Kune Do is a great example of why cross training has valuable lessons to teach us about disruption. In a traditional martial arts fight, you attack your opponent. The philosophy of Jeet Kune Do is to attack your opponent’s attacks. You spend time defending by keeping them from attacking you. That’s a pretty different approach.

Likewise, in IT we need to examine how we secure users and operate networks. Fortinet believes security needs to happen at the edge. Their philosophy is informed by their expertise in developing edge hardware for that role. Other companies would say this is best performed in the cloud using their software, which is often their strength. Which approach is better? There is no right answer. I will say that I am personally a proponent of doing the security work as close to the edge as possible to reduce the need for more complexity in the core. It might be a remnant of my old “three tier” network training, but I feel the edge is the best place to do the hard work, especially given the power of the modern edge compute node CPU.

That doesn’t mean it’s always going to be the best way to do things. That’s why you have to continuously learn and train on new ways of doing things. SASE itself came from SD-WAN which came from SDN. Ten years ago most of this was theoretical or in the very early deployment stage. Today we have practical applications and real-world examples. Where will it go in five years? You only know if you learn how it works now.


Tom’s Take

I’ve always been a voracious learner and training myself on different aspects of technology has given me the visibility to understand the importance of how it all works together. Like Bruce Lee I always look for what’s important and incorporate it into my knowledge base and discard the rest. I know that learning about multiple kinds of technology is the key to having a long career in the industry. You just have to want to see the bigger picture for cross training to be effective.

Disclaimer: This post mentions Fortinet, a presenter at Mobility Field Day 9. The opinions expressed in this post reflect my own perspective and were not influenced by consideration from any companies mentioned.

Using AI for Attack Attribution

While I was hanging out at Cisco Live last week, I had a fun conversation with someone about the use of AI in security. We’ve seen a lot of companies jump in to add AI-enabled services to their platforms and offerings. I’m not going to spend time debating the merits of it or trying to argue for AI versus machine learning (ML). What I do want to talk about is something that I feel might be a little overlooked when it comes to using AI in security research.

Whodunnit?

After a big breach notification or a report that something has been exposed there are two separate races that start. The most visible is the one to patch the exploit and contain the damage. Figure out what’s broken and fix it so there’s no more threat of attack. The other race involves figuring out who is responsible for causing the issue.

Attribution is something that security researchers value highly in the post-mortem of an attack. If the attack is the first of its kind the researchers want to know who caused it. They want to see if the attackers are someone new on the scene that have developed new tools and skills or if it is an existing person or group that has expanded their target list or repertoire. If you think of a more traditional definition of crime from legal dramas and police procedurals you are wondering if this is a one-off crime or if this is a group expanding their reach.

Attribution requires analysis. You need to look for the digital fingerprints of a group in the attack patterns. Did they favor a particular entry point? Are they looking for the same kinds of accounts to do privilege escalation? Did they deface the web servers with the same digital graffiti? For attackers looking to make a name for themselves, attribution is pretty easy to figure out. They want to make a splash. However, for state-sponsored crews or organizations looking to keep a low profile it is much more likely they’re going to obfuscate their methods to avoid detection as long as possible. They might even throw out a few red herrings to make people attribute the attack to a different group.

Picking Out Patterns

If the methodology of doing attribution requires pattern matching and research, why not use AI to assist? We already use AI and ML to help us detect the breaches. Why not apply it to figuring out who is doing the breaching? We already know that AI can help us identify people based on a variety of characteristics. Just look at any market research done by advertising agencies and you can see how frighteningly well they can predict buyer behavior based on all kinds of pattern recognition.

Let’s apply that same methodology to attack attribution. AI and ML are great at not only sifting through the noise when it comes to pattern recognition but they can also build a profile of the patterns to confirm those suspicions. Imagine profiling an attacker by seeing that they use one or two methods for gaining entry, such as spearphishing, to gain access to start privilege escalation. They always go after the same service accounts and move laterally to the same servers after gaining it. This is all great information for predicting attacks and stopping them. But it’s super valuable for tracking down who is doing it.
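As a toy illustration of how that profile matching might work, here is a sketch that represents each attack as a set of observed techniques and scores it against known group profiles using Jaccard similarity. The group names and technique labels are entirely made up; a real system would use richer features (timing, infrastructure, tooling) and something more robust than simple set overlap.

```python
def jaccard(a, b):
    """Set-overlap similarity between two technique sets (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def most_similar_group(observed, profiles):
    """Return the (group, score) pair whose known technique set best
    matches the observed attack."""
    return max(((group, jaccard(observed, techniques))
                for group, techniques in profiles.items()),
               key=lambda pair: pair[1])

# Made-up group names and technique labels, purely for illustration.
profiles = {
    "GroupA": {"spearphishing", "service-account-abuse", "lateral-smb"},
    "GroupB": {"sql-injection", "webshell", "defacement"},
}
observed = {"spearphishing", "service-account-abuse", "rdp-bruteforce"}
# most_similar_group(observed, profiles) returns ("GroupA", 0.5)
```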

Assuming that crews bring new attackers on board frequently to keep their crime pipeline full you can also see how much of the attack profile is innate talent versus training. One could assume that these organizations aren’t terribly different from your average IT shop when it comes to training. It’s just the result of that training that differs. If you start seeing a large influx of attacks that use repetition of similar techniques from different locations it could be assumed that there is some kind of training going on somewhere in the loop.

The other thing that provides value is determining when someone is trying to masquerade as a different group using techniques to obfuscate or misattribute breaches. Building a profile of an attacker means you know how long it takes them to move to new targets or how likely they are to take certain actions within a specific window. If you work out the details of an attack you can see quickly if someone is following a script or if they’re doing something in a specific way to make it look like someone else is trying to get in. This especially applies at the level of nation-state sponsored groups, since creating doubt in the attribution can prevent your detection or even cause diplomatic sanctions against the wrong country.

Of course, the real challenge is that AI and ML aren’t foolproof. They aren’t the ultimate arbiter of attack recognition and attribution. Instead, they are tools that should be introduced into the kit to help speed identification and provide assurance that you’ve got the right group before you publicize what you’ve found.


Tom’s Take

There’s a good chance that some security companies out there are already looking at or using AI to do attribution. I think it’s important to broaden our toolkits and use of models in all areas of cybersecurity. It also provides a baseline for normalizing investigations. There have been too many cases where a researcher rushed to pin attribution on a given group only to find out it wasn’t them at all. Using tools to confirm your suspicions not only reduces the likelihood that you’ll name the wrong attacker but also reduces the pressure to publicize quickly to claim credit for the identification. This should be about protection, not publicity.

Friction as a Network Security Concept

I recently had the opportunity to record a podcast with Curtis Preston about security, data protection, and networking. I loved being a guest, and we covered quite a bit in the episode about how networking operates and how to address ransomware issues when they arise. I wanted to expand on a few of those concepts here to flesh out the advice I gave.

Compromise is Inevitable

If there’s one thing I could say that would make everything make sense it’s this: you will be compromised. It’s not a question of if. You will have your data stolen or encrypted at some point. The question is really more about how much gets taken or how effectively attackers are able to penetrate your defenses before they get caught.

Defenses are designed to keep people out. But they also need to be designed to contain damage. Think about a ship on the ocean. Those giant bulkheads aren’t just there for looks. They’re designed to act as compartments to seal off areas in case of catastrophic damage. The ship doesn’t assume that it’s never going to have a leak. Instead, the designers created it in such a way as to be sure that when it does you can contain the damage and keep the ship floating. Without those containment systems even the smallest problem can bring the whole ship down.

Likewise, you need to design your network to be able to contain areas that could be impacted. One giant flat network is a disaster waiting to happen. A network with a DMZ for public servers is a step in the right direction. However, you need to take it further than that. You need to isolate critical hosts. You need to put devices on separate networks if they have no need to directly talk to each other. You need to ensure management interfaces are in a separate, air-gapped network that has strict access controls. It may sound like a lot of work but the reality is that failure to provide isolation will lead to disaster. Just like a leak on the ocean.
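To make the isolation idea concrete, here is a minimal, hypothetical sketch of a default-deny segmentation check in Python using the standard `ipaddress` module. The zone names and subnets are invented for illustration; a real deployment would enforce this in firewalls and ACLs, not application code:

```python
import ipaddress

# Hypothetical segmentation policy: which source networks may reach which zones.
POLICY = {
    "mgmt": [ipaddress.ip_network("10.99.0.0/24")],  # management VLAN only
    "dmz":  [ipaddress.ip_network("0.0.0.0/0")],     # public-facing servers
    "prod": [ipaddress.ip_network("10.10.0.0/16")],  # internal users only
}

def allowed(src_ip: str, zone: str) -> bool:
    """Default deny: a source may reach a zone only if a rule explicitly covers it."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in POLICY.get(zone, []))

print(allowed("10.99.0.5", "mgmt"))   # True: request from the management VLAN
print(allowed("192.0.2.44", "mgmt"))  # False: outside address, denied by default
```

The key design choice is that an unknown zone or an unmatched source falls through to deny, mirroring the bulkhead idea: anything not explicitly permitted is contained.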

The key here is that the controls you put in place create friction with your attackers. That’s the entire purpose of defense in depth. The harder it is for attackers to get through your defenses the more likely they are to give up earlier or trigger alarms designed to warn you when it happens. This kind of friction is what you want to see. However, it’s not the only kind of friction you face.

Failing Through Friction

Your enemy in this process isn’t nefarious actors. It’s not technology. Instead, it’s the bad kind of friction. Security by its very nature is designed to create friction with systems. Networks are designed to transmit data. Security controls are designed to prevent the transmission of data. The bad friction comes when these two aspects interact. Did you open the right ports? Are the access control lists denying a protocol that should be working? Did you allow the right VLANs on the trunk port?

Friction between controls is maddening but it’s a solvable problem with time. The real source of costly friction comes when you add people into the mix. Systems don’t complain about access times. They don’t call you about error messages. And, worst of all, they don’t have the authority to make you compromise your security controls for the sake of ease-of-use.

Everyone in IT has been asked at some point to remove a control or piece of software for the sake of users. In organizations where the controls are strict or regulatory issues are at stake, those requests are usually disregarded. However, when the executives are particularly insistent or the IT environment is more carefree, you can find yourself putting in a shortcut to get the CEO’s laptop connected faster or to let their fancy new phone connect without a captive portal. The results often seem happy and harmless. That is, until someone finds out they can get in through your compromised control and create a lot of additional friction.

How can you reduce friction? One way is to create more friction in the planning stages. Ask lots of questions about ports and protocols and access list requirements before something is implemented. Do your homework ahead of time instead of trying to figure it out on the fly. If you know that a software package needs to communicate with these four addresses on these eight ports, then anything outside of that list should be suspect and examined. Likewise, if someone can’t tell you what ports need to be opened for a package to work, you should push back until they can give you that info. Better to spend time up front learning than spend more time later triaging.
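That up-front homework can even be encoded. A small, hypothetical Python sketch of vetting a requested flow against the documented list (the addresses and ports here are invented) might look like:

```python
# Hypothetical documented requirements for a package: four hosts, eight ports.
DOCUMENTED_FLOWS = {
    ("203.0.113.10", 443), ("203.0.113.10", 8443),
    ("203.0.113.11", 443), ("203.0.113.11", 5671),
    ("203.0.113.12", 22),  ("203.0.113.12", 443),
    ("203.0.113.13", 443), ("203.0.113.13", 9092),
}

def vet_request(requested: set) -> set:
    """Return the requested (host, port) flows that fall outside the documented list."""
    return requested - DOCUMENTED_FLOWS

# One documented flow and one surprise RDP request: only the surprise survives vetting.
suspect = vet_request({("203.0.113.10", 443), ("198.51.100.9", 3389)})
print(suspect)  # the undocumented flow, flagged for examination
```

Anything that survives the set difference is exactly the "outside of that list" traffic the paragraph says should be suspect.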

The other way to reduce friction in implementation is to shift the friction to policy. If the executives want you to compromise a control for the sake of their own use, make them document it. Have them write down that you were directed to add a special configuration just for them. Keep that information stored in your DR plan and note it in your configuration repositories as well. Even a comment in the access list can help explain why you had to do something a certain way. Often the request to document the special changes will have the executives questioning the choice. More importantly, if something does go sideways you have evidence of why the change was made. And for executives that don’t like to look like fools, this is a great way to have these kinds of one-off policy changes stopped quickly when something goes wrong and they have to answer questions from a reporter.


Tom’s Take

Friction is the real secret of security. When properly applied it prevents problems. When it’s present in too many forms it causes frustration and eventually leads to abandonment of controls or short circuits to get around them. The key isn’t to eliminate it entirely. Instead you need to apply it properly and make sure to educate about why it exists in the first place. Some friction is important, such as verifying IDs before entering a secure facility. The more that people know about the reasons behind your implementation the less likely they are to circumvent it. That’s how you keep the bad actors out and the users happy.

Getting Tough with Cyberinsurance

I’ve been hearing a lot of claims recently about how companies are starting to rely more and more on cyberinsurance policies to cover them in the event of a breach or other form of disaster. While I’m a fan of insurance policies in general, I think the companies trying to rely on these payouts to avoid doing any real security work are in for a big surprise in the future.

Due Diligence

The first issue that I see is that companies are so worried about getting breached that they think taking out big insurance policies is the key to avoiding any big liability. Think about an organization that holds personally identifiable information (PII) and how likely it is that they would get sued in the event of a breach. The idea is that cyberinsurance would pay out for the breach and be used as a way to pay off the damages in a lawsuit.

The issue I have with this is that companies are expecting to get paid. They see cyberinsurance as a guaranteed payout instead of a last resort. In the initial days of taking out these big policies the insurers were happy to pay out because they were getting huge premiums in return. However, as soon as the flood of payouts started happening the insurers had to come back down to earth and realize they had to put safeguards in place to keep themselves from going bankrupt.

For anyone out there hoping to take out a big insurance policy and get paid when you inevitably get compromised you’re about to face a harsh reality. Gone are the days of insuring your enterprise against cyber threats without doing some form of due diligence on the setup. You’re going to have to prove you did everything you could to prevent this from happening before you get to collect. And if you’ve ever filed an insurance claim for a car or a house you know that it can take weeks for them to investigate everything to find out if there is a way for them to not pay out.

There is a very reasonable chance your policy will exclude certain conditions that could have easily been prevented. It would be potentially embarrassing for your executives to find out you are unable to collect on an insurance policy because it specifically doesn’t cover social engineering or business email compromise (BEC).

Getting Ahead of the Insurance Game

How can you prevent this from happening? What steps can you take today to make sure you’re not going to find yourself on the losing end of a security incident?

  1. Check Your Coverage – It’s boring and reads like stereo instructions but you really do need to check your insurance policy completely, especially the parts that mention things that are specifically excluded from coverage. You need to know what isn’t going to be covered in a breach and have a response for those things. You need to know how to respond in areas that are potential weak points and be ready to confirm that you didn’t end up getting attacked there.
  2. Look for Suggestions From the Insurer – I know people that will only buy cars based on safety reports from industry groups. They’d rather have something less flashy or cool in favor of a car that is going to keep them protected in the event of an accident. The insurance companies love to publish those reports because more sales of those cars mean smaller payouts on claims. Likewise, more companies that provide cyberinsurance are starting to publish lists of software that they would encourage or outright require an organization to run in order to have coverage or be eligible for a payout. If your insurer has such a list you should really get it and make sure you’ve checked the boxes. You don’t want to find yourself in a situation where one missed avenue of attack costs you the whole policy.
  3. Make Sure Your Reports Are Working – In the event that everything does go wrong you’re going to need to provide proof to people that you did all you could to prevent it. That means logs and incident reports and even more data about what went wrong and when. You don’t want to go and pull up that reporting data after the worst day of your cybersecurity life only to find the reporting system hasn’t been working properly for months. Then you’re not only behind on getting the incident dealt with but you’re also slowing down the potential recovery on the policy. The insurance company is happy for you to take as much time as you need because every day that they don’t pay you is one more day they’re making money off their investments. Don’t delay yourself any more than you need to.
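One way to catch a silent reporting pipeline before the worst day of your cybersecurity life is a simple freshness check. This is a hedged Python sketch, with invented source names and thresholds, that flags any log source that has gone quiet for too long:

```python
import time

def stale_sources(last_seen: dict, max_age_sec: float, now: float) -> list:
    """Flag any reporting source whose newest record is older than max_age_sec."""
    return sorted(src for src, ts in last_seen.items() if now - ts > max_age_sec)

now = time.time()
# Hypothetical last-write timestamps (epoch seconds) from each reporting source.
last_seen = {
    "firewall_syslog": now - 120,   # wrote two minutes ago: healthy
    "edr_events": now - 90,         # healthy
    "vpn_audit": now - 8 * 86400,   # silent for over a week: broken
}

# Anything quiet for more than an hour gets flagged for investigation.
print(stale_sources(last_seen, max_age_sec=3600, now=now))  # ['vpn_audit']
```

Running a check like this on a schedule means you discover a dead feed in hours, not months, keeping both your incident response and your insurance claim on track.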

Tom’s Take

The best insurance is the kind you don’t need. That doesn’t mean you don’t get it, especially if it’s a requirement. However, even if you do have it you need to act like you don’t. The safety net you assume will catch you may not be there when it comes with conditions that can pull it out from under you. You need to know what your potential exposure could be and what could prevent you from collecting. You need to be prepared to put new mechanisms in place to protect your enterprise and have a plan for what exactly to do when things go wrong. That should be paramount even without the policy. If you have everything ready to go you won’t need to worry about what happens when disaster strikes.

Mind the Air Gap

I recently talked to some security friends on a CloudBytes podcast recording that will be coming out in a few weeks. One of the things that came up was the idea of an air gapped system or network that represents the ultimate in security. I had a couple of thoughts that felt like a great topic for a blog post.

The Gap is Wide

I can think of a ton of classical air gapped systems that we’ve seen in the media. Think about Mission: Impossible and the system that holds the NOC list:

Makes sense right? Totally secure unless you have Tom Cruise in your ductwork. It’s about as safe as you can make your data. It’s also about as inconvenient as you can make your data too. Want to protect a file so no one can ever steal it? Make it so no one can ever access it! Works great for data that doesn’t need to be updated regularly or even analyzed at any point. It’s offline for all intents and purposes.

Know what works great as an air gapped system? Root certificate authority servers. Even Microsoft agrees. So secure that you have to dig it out of storage to ever do anything that requires root. Which means you’re never going to have to worry about it getting corrupted or stolen. And you’re going to spend hours booting it up and making sure it’s functional if you ever need to do anything with it other than watch it collect dust.

Air gapping systems feels like the last line of defense to me. This data is so important that it can never be exposed in any way to anything that might ever read it. However, the concept of a gap between systems is more appealing from a security perspective, because you can create gaps that aren’t physical in nature and accomplish much the same thing as isolating a machine in a cool vault.

Gap Insurance

By now you probably know that concepts like zero-trust network architecture are the key to isolating systems on your network. You don’t remove them from being accessed. Instead, you restrict their access levels. You make users prove they need to see the system instead of just creating a generic list. If you consider the ZTNA system as a classification method you understand more how it works. It’s not just that you have clearance to see the system or the data. You also need to prove your need to access it.

Building on these ideas, ZTNA and host isolation let you create virtual air gaps that work effectively without the need to physically isolate devices. I’ve seen some crazy stories about IDS sensors that have the transmit pairs of their Ethernet cables cut so they don’t inadvertently trigger a response to an intruder. That’s the kind of crazy preparedness that creates more problems than it solves.

Instead, by creating robust policies that prevent unauthorized users from accessing systems you can ensure that data can be safely kept online as well as making it available to anyone that wants to analyze it or use it in some way. That does mean that you’re going to need to spend some time building a system that doesn’t have inherent flaws.

In order to ensure your new virtual air gap is secure, you need to use policy, not manual programming. Why? Because a well-defined policy is more restrictive than randomly opening ports or creating access lists without context. It’s more painful at first because you have to build the right policy before you can implement it. However, a properly built policy can be automated and reused instead of just pasting lines from a text document into a console.

If you want to be sure your system is totally isolated, start with the basics. No traffic to the system. Then you build out from there. Who needs to talk to it? What user or class of users? How should they access it? What levels of security do you require? Answering each of those questions also gives you the data you need to define and maintain policy. By relying on these concepts you don’t need to spend hours hunting down user logins or specific ACL entries to lock someone out after they leave or after they’ve breached a system. You just need to remove them from the policy group or change the policy to deny them. Much easier than sweeping the rafters for super spies.
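The deny-by-default, group-based approach described above can be sketched in a few lines of Python. The groups, users, and resources are hypothetical; the point is that revoking access is a single membership change rather than a hunt through ACL entries:

```python
# Hypothetical default-deny policy: access exists only through group membership.
GROUPS = {"db_admins": {"alice"}, "auditors": {"bob"}}

# (group, resource) pairs that are explicitly allowed; everything else is denied.
POLICY = {
    ("db_admins", "payroll_db"),
    ("auditors", "payroll_db_readonly"),
}

def can_access(user: str, resource: str) -> bool:
    """A user gets access only via a group that is explicitly allowed the resource."""
    return any((g, resource) in POLICY and user in members
               for g, members in GROUPS.items())

print(can_access("alice", "payroll_db"))  # True: allowed through db_admins
GROUPS["db_admins"].discard("alice")      # off-boarding: one change revokes everything
print(can_access("alice", "payroll_db"))  # False: no membership, no access
```

Notice the starting position is "no traffic to the system": absence of a policy entry means denial, and each allow answers the who/what/how questions asked above.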


Tom’s Take

The concept of an air gapped system works for spy movies. The execution of one needs a bit more thought. Thankfully we’ve graduated from workstations that need biometric access to a world where we can control the ways that users and groups can access things. By thinking about those systems in new ways, treating them like isolated islands that must be accessed on purpose instead of building an overly permissive infrastructure in the first place, you’ll get ahead of a lot of old, technical-debt-laden thinking and build a more secure enterprise, whether on premises or in the cloud.

Trust Will Do You In

If you’re a fan of the Gestalt IT Rundown that I do every week on the Gestalt IT YouTube channel, you have probably heard about the recent hacks of NVIDIA and Samsung. The original investigation into those hacks talked about using MDM platforms and other vectors to gain access to the information that was obtained by the hacking groups. An interesting tweet popped up on my feed yesterday that helped me reframe the attacks:

It would appear that the group behind these attacks are going after their targets the old fashioned way. With people. For illustration, see XKCD from 2009:

The Weakest Links

People are always the weakest link in any security situation. They choose to make something insecure through bad policy or by trying to evade the policy. Perhaps they are trying to do harm to the organization, or even trying to shine a light on corrupt practices. Whatever the reason, people are the weak link, because you can change hardware or software to eliminate failures and bugs. You can’t reprogram people.

We’ve struggled for years to keep people out of our systems. Perimeter security and bastion hosts were designed to make sure that the bad actors stayed off our stage. Alas, as we’ve gotten more creative about stopping them we’ve started to realize that more and more of the attacks aren’t coming from outside but instead from inside.

There are whole categories of solutions dedicated to stopping internal attackers now. Data Loss Prevention (DLP) can catch data being exfiltrated by sophisticated attackers, but it is more often used to prevent people from leaking sensitive data either accidentally or purposefully. There are solutions to monitor access to systems and replay logs to find out how internal systems folks were able to get privileges they shouldn’t have.

To me, this is the crux of the issue. As much as we want to create policies that prevent people from exceeding their authority we seem to have a hard time putting them into practice. For every well-meaning solution or rule that is designed to prevent someone from gaining access to something or keep them secure you will have someone making a shortcut around it. I’ve done it myself so I know it’s pretty common. For every rule that’s supposed to keep me safe I have an example of a way I went around it because it got in my way.

Who Do YOU Trust?

This is one of the reasons why a Zero Trust Network Architecture (ZTNA) appeals to me. At its core it makes a very basic assumption that I learned from the X-Files: Trust No One. If you can’t verify who you are you can’t gain access. And you only gain access to the bare minimum you need and have no capability for moving laterally.

We have systems that operate like this today. Call your doctor and ask them a question. They will first need to verify that you are who you say you are with information like your birthdate or some other key piece of Personally Identifiable Information (PII). Why? Because if they make a recommendation for a medication and you’re not the person they think they’re talking to you can create a big problem.

Computer systems should work the same way, shouldn’t they? We should need to verify who we are before we can access important data or change a configuration setting. Yet we constantly see blank passwords or people logging in to a server as a super user to “just make it easier”. And when someone gains access to that system through clever use of a wrench, as above, you should be able to see the potential for disaster.

ZTNA just says you can’t do that. Period. If the policy says no super user logins from remote terminals they mean it. If the policy says no sensitive data access from public networks then that is the law. And no amount of work to short circuit the system is going to work.

This is where I think the value of ZTNA is really going to help modern enterprises. It’s not the nefarious actor that is looking to sell their customer lists that creates security issues. It does happen but not nearly as often as an executive that wants a special exception for the proxy server because one of their things doesn’t work properly. Or maybe it’s a developer that created a connection from a development server into production because it was easier to copy data back and forth that way. Whatever the reasons the inadvertent security issues cause chaos because they are configured and forgotten. At least until someone hacks you and you end up on the news.

ZTNA forces you to look at your organization and justify why things are the way they are. Think of it like a posture audit with immediate rule creation. If development servers should never talk to production units then that is the rule. If you want that to happen for some strange reason you have to explicitly configure it. And your name is attached to that configuration so you know who did it. Hopefully something like this either requires sign off from multiple teams or triggers a notification for the SOC that then comes in to figure out why policy was violated. At best it’s an accident or a training situation. At worst you may have just caught a bad actor before they step into the limelight.
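One way to picture the "your name is attached" idea is a rule object that simply refuses to exist without an owner and a written justification. This is a hypothetical sketch, not any particular product’s API; the zones, names, and ticket reference are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExceptionRule:
    """A policy exception is only valid with a named owner and a written reason."""
    src_zone: str
    dst_zone: str
    owner: str
    justification: str

def add_exception(rules: list, rule: ExceptionRule) -> None:
    """Reject anonymous or unjustified exceptions before they reach the rule base."""
    if not rule.owner or not rule.justification:
        raise ValueError("exceptions require a named owner and a written reason")
    rules.append(rule)  # in practice this step would also notify the SOC

rules = []
add_exception(rules, ExceptionRule("dev", "prod", "j.smith",
                                   "temporary data copy, ticket CHG-0042"))
print(len(rules))  # the documented exception is recorded
```

An undocumented request raises an error instead of silently landing in production, which is exactly the accountability the policy audit is meant to create.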


Tom’s Take

Security isn’t perfect. We can always improve. Every time we build a good lock someone can build a better way to pick it. The real end goal is to make things sufficiently secure that we don’t have to worry about them being compromised with no effort while at the same time keeping it easy enough for people to get their jobs done. ZTNA is an important step because it creates rules and puts teeth behind them to prevent easy compromise of the rules by people that are trying to err on the side of easy. If you don’t already have plans to include ZTNA in your enterprise now you really should start looking at them. I’d tell you to trust me, but not trusting me is the point.