The Future of Hidden Features

 

You may have noticed last week that Ubiquiti added a new “feature” to their devices in a firmware update. According to this YouTube video from @TomLawrenceTech, Ubiquiti built a new service that contacts a URL to “phone home” and check in with their servers. It got some heavy discussion going, especially on Reddit.

The consensus is that Ubiquiti screwed up here by not informing people up front that they were adding the feature and by not allowing users to opt out initially. The support people at Ubiquiti even posted a quick workaround of blocking the URL at a perimeter firewall to prevent the communications until they could patch in an opt-out option. If this were an isolated incident I could see some manner of outcry about it, but the fact of the matter is that companies are adding these hidden features more and more every day.

The first issue comes from the fact that most release notes for apps anymore are little more than platitudes. “Hey, we fixed some bugs and stuff, so turn on automatic updates so you get the best version of our stuff!” is a pretty common changelog now. That has a lot to do with application developers doing unannounced A/B testing on people. It’s not uncommon to have two identical version numbers of an app running two wildly different code releases, or sporting dissimilar UIs, just because some bean counter wants to know how well Feature X polls with a certain demographic.

Slipstreaming Stuff

While that’s all well and good for consumer applications, the trend is starting to seep into enterprise software as well. While we still get an exhaustive list of things that have been fixed in a release, we’re getting more than we bargained for on occasion as well. Doing a scrub of the code to ensure that it actually fixes the bugs you have in your environment is important for reliability purposes. When the code you’re trying to deploy is of less-than-stellar quality, you have to spend more time ensuring that the stated fixes aren’t going to cause issues during implementation.

But what about when the stated features aren’t the only things included? Could you imagine the nightmare of installing a piece of software to fix an issue only to find out that something that was hidden in the code, like a completely undocumented feature, caused some other issue elsewhere? And when I say “undocumented feature” I’m not using it as a euphemism for another bug. I’m talking about a service that got installed that no one knows about. Like a piece of an app that will be enabled at a later time for instance.

Remember Microsoft back in the early 90s? They were invincible. They had everything they wanted in the palm of their hand. And then their world exploded. They got sued by the US government in an anti-trust case. One of the things that came out of this lawsuit was a focus on undocumented features in Microsoft software. The government said that Microsoft was adding features to their application software that gave them performance advantages over other vendors. And only they knew about them.

Microsoft agreed to get rid of all undocumented features, including some of the Easter eggs that were hidden in their software. That irritated some people who enjoyed finding these fun little gifts. But in 2005, there was a great blog post that discussed why Microsoft now has a strict “no Easter eggs” policy. And reading it almost 15 years later makes it sound so prescient.

Security Sounds Simple, Right?

Undocumented features are a security risk. If there is code in your app that isn’t executing or hasn’t been completely checked against the interactions in your environment, you’re adding a huge amount of risk. There can be interactions that someone doesn’t understand or that you have no way of seeing. And you’d better believe that if the regulators of your industry find them before you do, you’re going to have a lot of explaining to do.

That’s even if you notice it in the first place. How many times have you been told to whitelist a specific URL or IP block to make something work? I can remember being told to whitelist 17.0.0.0/8 for Apple to make their push notification service work. That’s an awful lot of IP addresses just to enable push notifications! And that’s a lot of data that could be corrupted or misused by a smart threat actor.
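Just to put a number on it, here’s a quick back-of-the-envelope check (a minimal sketch using Python’s standard library) of how many addresses that single /8 actually covers:

```python
import ipaddress

# The 17.0.0.0/8 block mentioned above
block = ipaddress.ip_network("17.0.0.0/8")

# A /8 leaves 24 host bits, so 2**24 addresses
print(block.num_addresses)  # 16777216
```

Over sixteen million addresses whitelisted just so push notifications keep working.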

The solution we need to fix the problem is actually pretty simple. We need to push back against the idea that vendors can slip undocumented features into updates. Release notes need to list everything that comes in the software package. We need to tell our vendors and companies that we have to have a full listing of the software features. And regulatory bodies need to be ready to share the blame when someone breaks those rules instead of punishing the people who had no idea there was something misbehaving in the code.


Tom’s Take

I’m not a fan of finding things I wasn’t expecting. At one point my wife and I both had the latest Facebook app update, and somehow her UI looked radically different from mine. But if it’s on my phone or my home computer I don’t really have much to complain about. Finding undocumented apps and features in my enterprise software is a huge issue, though. Security is paramount, and undocumented code is an entry point to disaster. We are the only ones who can stem the tide by pushing back against this practice now, before it becomes commonplace in the future.

The End of SD-WAN’s Party In China

As I was listening to Network Break Episode 257 from my friends at Packet Pushers, I heard Greg and Drew talking about a new development in China that could be the end of SD-WAN’s big influence there.

China has a new policy in place, according to Axios, that enforces a stricter cybersecurity stance for companies. Companies doing business in China or with offices in China must now allow Chinese officials into their networks to check for security issues as well as to verify the supply chain for network security.

In essence, this is saying that Chinese officials can have access to your networks at any time to check for security threats. But the subtext is a little less clear. Do they get to control the CPE as well? What about security constructs like VPNs? This article seems to indicate that as of January 1, 2020, there will be no intra-company VPNs authorized for any companies in China, whether Chinese firms or foreign businesses operating in China.

Tunnel Collapse

I talked with a company doing some SD-WAN rollouts globally, including in China, all the way back in 2018. One of the things that came up in that interview was that China was an unknown for American companies because of the likelihood that the rules would change in the future. MPLS was the go-to connectivity for branch offices there. However, because you could put an SD-WAN head-end unit on-site and build an encrypted tunnel back to your overseas HQ, it wasn’t a huge deal.

SD-WAN is a wonderful way to ensure your branches are secure by default. Since CPE devices “phone home” and automatically build encrypted tunnels back to a central location, such as an HQ, you can be sure that as soon as a device powers on and establishes connectivity, all traffic will be secured over your VPN until you change that policy.
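To make the “phone home” behavior a little more concrete, here’s a minimal conceptual sketch of a CPE bootstrap flow. This isn’t any vendor’s actual zero-touch provisioning API; the orchestrator URL, device serial, and policy fields are all hypothetical placeholders.

```python
import json
import urllib.request

# Hypothetical orchestrator endpoint and device identity -- illustrative only
ORCHESTRATOR_URL = "https://orchestrator.example.com/api/v1/bootstrap"
DEVICE_SERIAL = "CPE-0001"

def phone_home() -> dict:
    """Ask the central orchestrator for this branch device's tunnel policy."""
    req = urllib.request.Request(
        ORCHESTRATOR_URL,
        data=json.dumps({"serial": DEVICE_SERIAL}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_tunnel(policy: dict) -> None:
    """Stand-in for bringing up the encrypted tunnel (IPsec, WireGuard, etc.)."""
    print(f"Bringing up tunnel to {policy['hub_address']} "
          f"with profile {policy['crypto_profile']}")

if __name__ == "__main__":
    policy = phone_home()   # the device announces itself as soon as it has connectivity
    build_tunnel(policy)    # branch traffic is then steered into the encrypted tunnel
```

The point is that the secure tunnel comes up with essentially zero local configuration, which is exactly the behavior the new policy calls into question.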

Now, what happens with China’s new policy? All traffic must transit outside of a VPN. Things like web traffic aren’t as bad, but what about email? Or traffic destined for places like AWS or Azure? It was an unspoken fact that using SD-WAN VPNs to pass through the content filters in place in China was a way around the issues that might arise from accessing resources inside of a very well secured country-wide network.

With the policy change and enforcement guidelines set to be enacted in 2020, this could be a very big deal for companies hoping to use SD-WAN in China. First and foremost, you can’t use your intra-company VPN functions any longer. That effectively means that your branch office can’t connect to the HQ or the rest of your corporate network. Given some of the questions around intellectual property issues in China, that might not be a bad thing. However, it is going to cause issues for your users trying to access email and other support services, especially if they are hosted somewhere that is going to attract additional scrutiny.

The other potential issue is whether or not Chinese officials are even going to allow you to use CPE of your own choosing in the future. If the mandate is that officials should have access to your network for security concerns, who is to say they can’t just dictate what CPE you should use in order to facilitate that access? Larger companies can probably negotiate for some kind of on-site server that does network scanning. But smaller branches are likely going to need an all-in-one device at the head end doing all the work. The additional benefit for the Chinese is that control of the head-end CPE ensures that you can’t build a site-to-site VPN anywhere.

Peering Into The Future

Greg and Drew pontificate a bit on what this means going forward for organizations from foreign countries doing business in China. I tend to agree with them on a few points. I think you’re going to see a push for major companies to treat their Chinese offices like zero-trust endpoints. All communications will trade minimal information. Networks won’t be directly connected, either by VPN substitute or otherwise.

Looking further down the road makes the plans even more murky. Is there a way that you can certify yourself to a standard for cybersecurity? We have something similar with regulations here in the US, where we can submit compliance reports for various agencies and submit to audits or have audits performed by third parties. But if the government won’t take that as an answer, how do you even go about providing the level of detail they want? If the answer is “you can’t”, then the larger discussion becomes whether or not you can comply with their regulations and reduce your business exposure while still making money in this market. And that’s a conversation no technology can solve.


Tom’s Take

SD-WAN gives us a wonderful set of features included in the package. Things like application inspection are wonderful to look at on a dashboard but I’ve always been a bigger fan of the automatic VPN service. I like knowing that as soon as I turn up my devices they become secure endpoints for all my traffic. Alas, all the technology in the world can be defeated by business or government regulation. If the rules say you can’t have a feature, you either have to play by the rules or quit playing the game. It’s up to businesses to decide how they’ll react going forward. But SD-WAN’s greatest feature may now have to be an unchecked box on that dashboard.

Fast Friday- Black Hat USA 2019

I just got back from my first Black Hat and it was an interesting experience. It was crazy to see three completely different security-focused events going on in town all at once. There was Black Hat, B-Sides Las Vegas, and DEFCON all within the space of a day or so of each other. People were flowing back and forth between them all and it was quite amazing.

I wanted to share a few quick thoughts about the event from my perspective as a first timer.

  • The show floor wasn’t as big as VMworld or Cisco Live, but it was as big as it needed to be. Lots of companies that I’ve heard of, but several more that were new to me. That’s usually a good sign of lots of investment in the security space.
  • Speaking of which, I talked to quite a few companies about a variety of analytics, telemetry, and insider threat monitoring solutions. And almost all of them had a founder from Israel or someone that was involved in the cybersecurity areas of the IDF. That’s a pretty good track record for where the investment is going.
  • The Vegas booth gimmicks never change. I think I’ve spent too much time at Vegas conferences because I’m starting to recognize the magicians and other “performers” at the booths. I’m glad they can get some work but I don’t know if the companies realize that there needs to be some new blood out there.
  • I found it very different that you could print pretty much any name on your badge that you wanted. I saw a few El Chapos, Pablo Escobars, and even a generic “IT Buyer”. Consequently, people were a little curious about my Twitter badge flag. I guess the idea of announcing your identity to people is a bit strange at a security conference.
  • Being on the press list for the event meant that I got to see some cool briefings. But it also meant sorting through some things that didn’t make sense. And then there was the Quasi-Prime Number presentation spam that I got. I won’t go into much more detail other than to point you to this Twitter thread, which is a comedy goldmine of commentary on the presentation referenced in said email. Thanks to @MalwareJake for pointing out the original thread and all the amazing comments about how the harmony of music can be an input into crypto randomization.
  • Lastly, I wish I would have had more time to go down and check out DEFCON. A lot of my friends that were in town were there and seemed to be having the time of their lives. DEFCON seems more in line with my Batman job instead of my Bruce Wayne job though. Guess I’ll have to take some vacation to check out DEFCON next year.

Ultimately, I had a great time checking out Black Hat. There were some parts that needed polish, and there are some things about having 20,000+ people in Vegas that I’m not keen on. But it’s a successful conference and likely will be one I attend in 2020. If for no other reason than to give my VPN a workout again!

 

Home on the Palo Alto Networks Cyber Range

You’ve probably heard many horror stories by now about the crazy interviews that companies in Silicon Valley put you through. Sure, some of the questions are downright silly. How would I know how to weigh the moon? But the most insidious are the ones designed to look like skills tests. You may have to spend an hour optimizing a bubble sort or writing some crazy code that honestly won’t have much impact on the outcome of what you’ll be doing for the company.

Practical skills tests have always been the joy and the bane of people the world over. Many disciplines require you to pass a practical examination before you can be certified. Doctors are one example. The Cisco CCIE is probably the most well-known in IT. But what is the test really quizzing you on? Most people will admit that the CCIE is an imperfect representation of a network at best. It’s a test designed to get people to think about networks in different ways. But what about other disciplines? What about the ones where time is even more of the essence than it was in the CCIE lab?

Red Team Go!

I was at Palo Alto Networks Ignite19 this past week and I got a chance to sit down with Pamela Warren. She’s the Director of Government and Industry Initiatives at Palo Alto Networks. She and her team have built a very interesting concept that I loved to see in action. They call it the Cyber Range.

The idea is simple enough on the surface. You take a classroom setting with some workstations and some security devices racked up in the back. You have your students log into a dashboard to a sandbox environment. Then you have your instructors at the front start throwing everything they can at the students. And you see how they respond.

The idea for the Cyber Range came out of military exercises that NATO used to run for their members. They wanted to teach their cyberwarfare people how to stop sophisticated attacks and see what their skill levels were with regard to stopping the people that could do potential harm to nation-state infrastructure or, worse, to critical military assets during a war. Palo Alto Networks got involved in helping years ago and Pamela grew the idea into something that could be offered as a class.

Cyber Range has a couple of different levels of interaction. Level 1 is basic stuff. It’s designed to teach people how to respond to incidents and stop common exploits from happening. The students play the role of a security operations team member from a fictitious company that’s having a very bad week. You learn how to see the log files, collect forensics data, and ultimately how to identify and stop attackers across a wide range of exploits.

If Level 1 is the undergrad work, Cyber Range Level 2 is postgrad in spades. You dig into some very specific and complicated exploits, some of which have only recently been discovered. During my visit the instructors were teaching everyone about the exploits used by OilRig, a persistent group of criminals that love to steal data through things like DNS exfiltration tunnels. Level 2 of the Cyber Range takes you deep down the rabbit hole to see inside specific attacks and learn how to combat them. It’s a great way to keep up with current trends in malware and exploitive behavior.

Putting Your Money Where Your Firewall Is

To me, the most impressive part of this whole endeavor is how Palo Alto Networks realizes that security isn’t just about sitting back and watching an alert screen. It’s about knowing how to recognize the signs that something isn’t right. And it’s about putting an action plan into place as soon as that happens.

We talk a lot about automation of alerts and automated incident response. But at the end of the day we still need a human being to take a look at the information and make a decision. We can winnow that decision down to a simple Yes or No with all the software in the world, but we still need a brain doing the hard work after the automation and data analytics pieces give us all the information they can find.
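As a rough illustration of that division of labor, here’s a small sketch in which the analytics pipeline scores an alert and the code only decides how far to narrow the choice before a human weighs in. The field names and thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    event: str
    score: float  # confidence score produced by the analytics pipeline

def triage(alert: Alert) -> str:
    """Automation narrows the decision; a person still makes the final call."""
    if alert.score < 0.2:
        return "auto-close (clearly benign)"
    if alert.score > 0.9:
        return "contain automatically, then queue for human review"
    return "escalate to an analyst for a yes/no decision"

print(triage(Alert("203.0.113.7", "possible-dns-exfiltration", 0.55)))
```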

More importantly, this kind of pressure-cooker testing is a great way to learn how to spot the important things without failing in reality. Sure, we’ve heard all the horror stories about CCIE candidates that typed debug ip packet detail on a core switch in production and watched it melt down. But what about watching an attacker recon your entire enterprise and start exfiltrating data, and being unable to stop them because you either don’t recognize the attack vector or don’t know where to find the right info to lock everything down? That’s the value of training like the Cyber Range.

The best part for me? Palo Alto Networks will bring a Cyber Range to your facility to run the experience for your group! There are details on the page above about how to set this up, but I got a great pic of everything that’s involved here (sans tables to sit at):

How can you turn down something like this? I would have loved to put something like this on for some of my education customers back in the day!


Tom’s Take

I really wish I would have had something like the Cyber Range for myself back when I was fighting virus outbreaks and trying to tame Conficker infections. Because having a sandbox to test myself against scripted scenarios with variations run by live people beats watching a video about how to “easily” fix a problem you may never see in that form. I applaud Palo Alto Networks for their approach to teaching security to folks and I can’t wait to see how Pamela grows the Cyber Range program!

For more information about Palo Alto Networks and Cyber Range, make sure to visit http://Paloaltonetworks.com/CyberRange/

Increasing Entropy with Crypto4A

Have you ever thought about the increasing disorder in your life? Sure, it may seem like things are constantly getting crazier every time you turn around, but did you know that entropy is always increasing in the universe? It’s a Law of Thermodynamics!

The idea that organized systems want to fall into disorder isn’t too strange when you think about it. Maintaining order takes a lot of effort and disorder is pretty easy to accomplish by just giving up. Anyone with a teenager knows that the amount of disorder that can be accomplished in a bedroom is pretty impressive.

One place where we don’t actually see a lot of disorder is in the computing realm. Computers are based on the idea that there is order and rationality in everything that we do. This is so prevalent that finding a way to be random is actually pretty hard. Computer programmers have tried a number of ways to build random number generators that take a variety of inputs into a formula and produce something that looks sufficiently random. For most people just wanting the system to guess a number between 1 and 100, that’s not too bad. But when it comes to really, really large numbers like the ones used in cryptography, those pseudorandom numbers aren’t good enough.
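To make that distinction concrete, here’s a small Python sketch contrasting the pseudorandom numbers that are fine for a guessing game with the cryptographically secure randomness that key generation actually needs:

```python
import random
import secrets

# Fine for "guess a number between 1 and 100" -- but deterministic if you know the seed
random.seed(42)
print(random.randint(1, 100))

# Re-seeding with the same value reproduces the exact same "random" number
random.seed(42)
print(random.randint(1, 100))  # identical to the first call

# For cryptographic keys, use a source backed by OS/hardware entropy instead
print(secrets.token_hex(32))   # 256 bits of cryptographically secure randomness
```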

This All Looks So Familiar…

One of the reasons for this comes down to good old-fashioned efficiency. In the old days, computer programmers could rely on people to generate pseudorandom input. By sampling mouse clicks or the delay between keyboard keystrokes you could easily come up with a number that looks nice and random. However, we’ve taken people out of the loop now. Thanks to the cloud, automation, and any one of a number of new ways to reduce human input, we’ve managed to remove mouse clicks and keystrokes.

That’s fine for running scripts and programs. It’s even good for building things at a huge scale. But it’s really bad when you need something that looks relatively random. And it’s really, really bad when your program relies on that randomness to keep you secure. Kind of like key generation in a Public Key Infrastructure (PKI).

A group of security researchers working for the National Institute of Standards and Technology (NIST) found out a few years ago that public keys were colliding at greater rates than random chance would suggest. The study, conducted in 2012, found that 5% of HTTPS and 10% of SSH public keys were duplicates. A collision in a hashing algorithm is when two inputs produce the same output, which renders that hashing function broken. In PKI, having two different inputs produce the same public key is really bad, because it could lead to key collisions that impact a variety of services.

What caused it? As it turns out, a lack of orderly disorder. Because automation and non-human interaction have led to other pseudorandom inputs being used in key generation, it appeared to the researchers that the same inputs were being used all over the place. That meant that a lot of the public keys being generated were created in such a way as to make collisions more likely. When you look at how many things rely on automated sources to generate keys, it can be quite scary. Think about a smart lightbulb or other IoT device that’s trying to generate pseudorandom input from a CPU that’s just big enough to turn things on. Now imagine that CPU multiplied by the number of smart lightbulbs out there. Not a pleasant thought, is it?
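Here’s a toy illustration of that failure mode. It’s a sketch only, not how any real library derives keys: two devices that seed their generators from the same predictable input end up with identical key material.

```python
import hashlib
import random

def weak_keygen(seed: int) -> str:
    """Toy key derivation: seed a PRNG and hash its output. NOT real key generation."""
    rng = random.Random(seed)
    material = bytes(rng.getrandbits(8) for _ in range(32))
    return hashlib.sha256(material).hexdigest()

# Two devices booting with the same predictable input (e.g. a default counter value)
device_a = weak_keygen(seed=1)
device_b = weak_keygen(seed=1)

print(device_a == device_b)  # True -- a key collision waiting to happen
```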

Disorder In The Court

This fascinating discussion came from an interview I had with Bruno Couillard, the President and CTO of Crypto4A. Crypto4A is a company that provides Entropy-as-a-Service. What exactly does that mean?

Crypto4A has an appliance they call QAOS. QAOS is designed to give you the best possible disorder that you can get. It does this the old fashioned way. Instead of trying to use software as a Random Number Generator (RNG) QAOS instead uses hardware sources to generate entropy for their RNG. This includes a quantum RNG, which produces high quality disorder that’s difficult to fake any other way.

QAOS is designed to feed software with entropy, generating randomness sufficient to prevent PKI public key collisions. Software developers can follow the NIST guidelines on EaaS to have their program call an entropy source. QAOS, acting as that entropy source, will seed the RNG on the target system with good randomness and allow it to generate good keys. For older programs that can’t be modified to call anything other than a system-based RNG source like /dev/random, the OS kernel could also be configured to call a system like QAOS on boot and start the seed value with a good amount of random entropy.
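As a rough sketch of what that call pattern might look like from the software side: the entropy endpoint URL below is hypothetical, and the fallback behavior is my own assumption rather than a description of how QAOS itself works.

```python
import os
import urllib.request

# Hypothetical entropy-as-a-service endpoint -- illustrative only
EAAS_URL = "https://entropy.example.com/api/seed?bytes=64"

def get_seed(n_bytes: int = 64) -> bytes:
    """Fetch high-quality entropy from the service, falling back to the local OS pool."""
    try:
        with urllib.request.urlopen(EAAS_URL, timeout=2) as resp:
            seed = resp.read()
            if len(seed) >= n_bytes:
                return seed[:n_bytes]
    except OSError:
        pass
    # Fallback: whatever entropy the local OS has gathered (e.g. /dev/urandom)
    return os.urandom(n_bytes)

print(f"seeded with {len(get_seed())} bytes of entropy")
```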


Tom’s Take

The NIST guidelines around EaaS are constantly evolving, but the idea that companies are already racing to fill the void created by insufficient randomness in cryptography is telling. When you think about the number of devices that are going to be using PKI for secure communications, the need for something like Crypto4A QAOS is pretty clear. If we are going to rely on automated systems to run our daily lives, we need to have the resources in place to ensure they have a solid foundation of randomness to build on.

What Makes IoT A Security Risk?

IoT security is a pretty hot topic in today’s world. That’s because the increasing number of smart devices is causing issues for security professionals everywhere. Consumer IoT devices are expected to top 20 billion by 2020. And each of these smart devices represents an attack surface. Or does it?

Hello, Dave

Adding intelligence to a device increases the number of ways that it can be compromised. Take a simple thermostat, for example. The most basic thermostat is about as dumb as you can get. It uses the expansion properties of metal to trigger switches inside of the housing. You set a dial or a switch and it takes care of the rest. Once you start adding things like programmability or cloud connectivity, you increase the number of ways that you can access the device. Maybe it’s a webpage or an app. Maybe you can access it via wireless or Bluetooth. No matter how you do it, it’s more available than the simple version of the thermostat.

What about industrial IoT devices? The same rule applies. In this case, we’re often adding remote access to Supervisory Control And Data Acquisition (SCADA) systems. There’s a big push among enterprise IT providers to create secured equipment that allows access to existing industrial equipment from centralized control dashboards. It makes these devices “smart” and makes them easier to manage.

Industrial IoT has the same kind of issues that consumer devices do. We’re increasing the number of access avenues to these devices. But does that mean they’re a security risk? The question could be as simple as asking if the devices are any easier to hack than their dumb counterparts. If that is our only yardstick, then the answer is most assuredly yes they are a security risk. My fridge does not have the ability for me to access it over the internet. By installing an operating system and connecting it to the wireless network in my house I’m increasing the attack surface.

Another good example of this increasing attack surface is in home devices that aren’t consumer focused. Let’s take a look at the electrical grid. Our homes are slowly being upgraded with so-called “smart” electrical meters that allow us to have more control over power usage in our homes. It also allows the electric companies to monitor the devices more closely and read the meters remotely instead of needing to dispatch humans to read them. These smart meters often operate on Wi-Fi networks for ease of access. If all we do is add the meters to a wireless network, are we really creating security issues?

Bigfoot-Sized Footprints

No matter how intelligent the device, increasing access avenues to the device creates security access issues. A good example of this is the “hidden” diagnostic port on the original Apple Watch. Even though the port had no real use beyond internal diagnostics at Apple, it was a tempting target for people to try and get access to the system. Sometimes these hidden ports can dump hidden data or give low-level access to areas of the system that aren’t normally available. While the Apple Watch port didn’t have this kind of access, other devices can offer it.

Any access you give to a device is an avenue an attacker can use to get at data they’re not supposed to have. Sure, a smart speaker is a very simple device. But what if someone found a way to remotely access the device and capture the data stream? Or the recording buffer? Most smart speakers are monitoring audio, listening for their trigger word to take commands. Normally this data stream is dumped. But what if someone found a way to reconstruct it? Do you think that could qualify as a hack? All it takes is an enterprising person to figure out how to get low-level access. And before you say it’s impossible, remember that we allow access to these devices in other ways. It’s only a matter of time before someone finds a hole.

As for industrial machines, these are even more tempting. By gaining access to the master control systems, you can cause some pretty serious havoc with their programming. You can shut down all manner of industrial devices. Stuxnet was a great example of a very specific piece of malware designed to cause problems for a specific kind of industrial equipment. Because of the nature of the program it was very difficult to figure out exactly what was causing the issues. All it took was access to the systems, which was reportedly gained by hiding the program on USB drives and seeding them in parking lots where they would be picked up and installed in the target facilities.

IoT devices, whether consumer or enterprise, represent potential threat vectors. You can’t simply assume that a simple device is safe because there isn’t much to hack. The Mirai botnet exploited bad password hygiene in devices to allow them to be easily hacked. It wasn’t a complicated silicon-level hack or a coordinated nation-state effort. It was the result of someone cracking a hard-coded password and exploiting that for their own needs. Smart devices can be made to make dumb decisions when used improperly.


Tom’s Take

IoT security is both simple and hard at the same time. Securing these devices is a priority for your organization. You may never have them compromised, but you have to treat them just like you would any other device that could potentially be hacked and turned against you. Zero-trust security models are a great way to account for this, but you need to make sure you’re not overlooking IoT when you build that model. Because the invisible devices helping us get our daily work done could quickly become the vector for hacking attacks that bring our day to a grinding halt.

Facebook’s Mattress Problem with Privacy

If you haven’t had a chance to watch the latest episode of the Gestalt IT Rundown that I do with my co-workers every Wednesday, make sure you check this one out. Because it’s the end of the year it’s customary to do all kinds of fun wrap up stories. This episode focused on what we all thought was the biggest story of the year. For me, it was the way that Facebook completely trashed our privacy. And worse yet, I don’t see a way for this to get resolved any time soon. Because of the difference between assets and liabilities.

Contact The Asset

It’s no secret that Facebook knows a ton about us. We tell it all kinds of things every day we’re logged into the platform. We fill out our user profiles with all kinds of interesting details. We click Like buttons everywhere, including the one for the Gestalt IT Rundown. Facebook then keeps all the data somewhere.

But Facebook is collecting more data than that. They track where our mouse cursors are on the desktop when we’re logged in. They track the amount of time we spend with the mobile app open. They track information in the background. And they collect all of this secret data and store it somewhere as well.

This data allows them to build an amazingly accurate picture of who we are. And that kind of picture is extremely valuable to the right people. At first, I thought it might be the advertisers that crave this kind of data. Advertisers are the people that want to know exactly who is watching their programs. The more data they have about demographics the better they can tailor the message. We’ve already seen that with specific kinds of targeted posts on Facebook.

But the people that really salivate over this kind of data live in the shadows. They look at the data as a way to offer new kinds of services. Don’t just sell people things. Make them think differently. Change their opinions about products or ideas without them even realizing it. The really dark and twisted stuff. Like propaganda on a whole new scale. Enabled by the fact that we have all the data we could ever want on someone without even needing to steal it from them.

The problem with Facebook collecting all this data about us is that it’s an asset. It’s not too dissimilar from an older person keeping all their money under a mattress. We scoff at that person because a mattress is a terrible place to keep money. It’s not safe. And a bank will pay you to keep your money there, right?

On the flip side, depending on the age of that person, they may not believe that banks are safe. Before the FDIC, there was no guarantee your money would be repaid in a pinch. And if the bank goes out of business you can’t get your investment back. For a person who lived through the Great Depression and had to endure bank holidays and the like, keeping your asset under a mattress is way safer than giving it to someone else.

As an aside here, remember that banks don’t like leaving your money laying around either. If you deposit money in a bank, they take that money and invest it in other places. They put the money to work for them making money. The interest that you get paid for savings accounts and the like is just a small bonus to encourage you to keep your money in the bank and not to pull it out. That’s why they even have big disclaimers saying that your money may not be available to withdraw at a moment’s notice. Because if you do decide to get all of your money out of the bank at once, they need to go find the money to give you.

Now, let’s examine our data. Or, at least the data that Facebook has been storing on us. How do you think Facebook looks at that data? Do you believe they want to keep it under the mattress where it’s safe from the outside world? Do you think that Facebook wants to keep all this information locked in a vault somewhere where no one can get to it?

Or perhaps Facebook looks at your data as an asset like a bank does. Instead of keeping it around and letting it sit fallow they’d rather put it to work. That’s the nature of a valuable asset. To the average person, their privacy is one of the most important parts of their lives. To Facebook, your privacy is simply an asset. It can either sit by itself and make them nothing. Or it can be put to use by Facebook or third-party companies to make more money from the things that they can do with good data sources. To believe that a company like Facebook has your best interests at heart when it comes to privacy is not a good bet to make.

Would I Lie-ability To You?

In fact, the only thing that can make Facebook really sit up and pay attention is if that asset they have farmed out and working for them were to suddenly become a liability for some reason. Liabilities are a problem for companies because they are the exact opposite of making money. They cost money. Just as the grandmother in the above example sees an insolvent bank as a liability, so too would someone see a bad asset as a possible exposure.

Liabilities are a problem. Anything that can be an exposure is an issue for a company, especially one with investors that like to get dividends. Any reduction in profit equals a loss. Liabilities on a balance sheet are giant red flags for anyone taking a close look at the operations of a business.

Turning Facebook’s data assets into a liability is the only way to make them sit up and realize that what they’re doing is wrong. Selling access to our data to anyone that wants it is a horrible idea. But it won’t stop until there is some way to make them pay through the nose for screwing up. Up until this year, that was a long shot at best. Most fines were in the thousands of dollars range, whereas most companies would pay millions for access to data. A carefully crafted statement admitting no fault after the exposure was uncovered means that Facebook and the offending company get away without a black mark and get to pocket all their gains.

The European GDPR law is a great step in the right direction. It clearly spells out what has to happen to keep a person’s data safe. That eliminates wiggle room in the laws. It also puts a stiff fine in place to ensure that any violations can be compounded quickly to drain a company and turn data into a liability instead of an asset. There are moves in the US to introduce legislation similar to GDPR, either at the federal level or in individual states like California, the location of Facebook’s headquarters.

That’s not to say that these laws are going to get it right every time. There are people out there that live to find ways to turn liabilities into assets. They want to find ways around the laws and make it so that they can continue to take their assets and make money from them even if the possibility of exposure is high. It’s one thing when that exposure is the money of people that invested in them. It’s another thing entirely when it’s personally identifiable information (PII) or protected information about people. We’re not imaginary money. We live and breathe and exist long past losses. And trying to get our lives back on track after an exposure is not easy, for sure.


Tom’s Take

If I sound grumpy, it’s because I am tired of this mess. When I was researching my discussion for the Gestalt IT Rundown I simply Googled “Facebook data breach 2018” looking for examples that weren’t Cambridge Analytica. The number was more than it should have been. We cry about Target and Equifax and many other exposures that have happened in the last five years, but we also punish those companies by not doing business with them or by moving our information elsewhere. Facebook has everyone hooked. We share photos on Facebook. We RSVP to events on Facebook. And we talk to people on Facebook as much or more than we do on the phone. That kind of reach requires a company to be more careful with who has access to our data. And if the solution is building the world’s biggest mattress to keep it all safe, put me down for a set of box springs.