Security Dessert Models


I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to creme brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model of the world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today where companies explicitly trust their user base.

The creme brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks dropped in the parking lot and then plugged into “trusted” computers internally.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team will realize that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top will also be extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems will have a very aggressive security posture in place to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means. It requires you to identify assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  The communications between untrusted devices and trusted systems need to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you make sure to identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
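To make the “crunchy edges around the important parts” idea concrete, here’s a minimal Python sketch of a default-deny, zone-to-zone policy.  The zone names and ports are made up for illustration; the point is simply that any flow from an untrusted zone to a protected system that isn’t explicitly listed gets dropped.

```python
# Hypothetical zone-to-zone allow list: only the minimum flows needed.
ALLOWED_FLOWS = {
    ("user_wifi", "web_frontend"): {443},      # users may reach the web tier over HTTPS
    ("web_frontend", "customer_db"): {1433},   # only the web tier may talk to the database
}

def is_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default deny: a flow passes only if it is explicitly listed."""
    return dst_port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

assert is_permitted("web_frontend", "customer_db", 1433)
assert not is_permitted("user_wifi", "customer_db", 1433)   # no direct path to the "chocolate"
```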


 

Tom’s Take

Security is a world of protecting the important things while making sure they can be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even creme brûlée, the proportion of ingredients must be just right to make a tasty dessert.  In security you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.

 

Don’t Track My MAC!


The latest technology in mobile seems to be identification.  It has nothing to do with credentials.  Instead, it has everything to do with creating a database of who you are and where you are.  Location-based identification is the new holy grail for marketing people.  And the privacy implications are frightening.

Who Are You?

The trend now is to get your device MAC address and store it in a database.  This allows the location tracking systems, like Aruba Meridian or Cisco CMX, to know that they’ve seen you in the past.  They can see where you’ve been in the store with a resolution of a couple of feet (much better than GPS).  They now know which shelf you are standing in front of.  Coupled with new technologies like Apple iBeacon, the retailer can push information to your mobile device like a coupon or a price comparison with competitors.

It’s a fine use of mobile technology.  Provided I wanted that in the first place.  The model should be opt-in.  If I download your store’s app and connect to your wifi, then I’ve clicked the little “agree” box that allows you to send me that information.  If I opt in, feel free to track me and email me coupons.  Or even to pop them up on store displays when my device gets close to a shelf that contains a featured item.  I knew what I was getting into when I opted in.  But what happens when you didn’t?

Wifi, Can You Hear Me?

The problem comes when the tracking system is listening to devices when it shouldn’t be. When my mobile device walks into a store, it will start beaconing for available wifi access points.  It will interrogate them about the SSIDs that they have and whether my device has associated with them.  That’s the way wifi works.  You can’t stop that unless you shut off your wireless.
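For the curious, passive probe-request collection is roughly this simple.  Below is a minimal sketch using the scapy library, assuming a wireless adapter already in monitor mode (the interface name is a placeholder); it just prints the client MAC and the SSID each probe request asks about.

```python
from scapy.all import sniff, Dot11, Dot11Elt, Dot11ProbeReq

def handle(pkt):
    # Probe requests are sent by clients looking for known or nearby SSIDs
    if pkt.haslayer(Dot11ProbeReq):
        client_mac = pkt[Dot11].addr2          # transmitter address = the probing device
        ssid = "<broadcast>"
        elt = pkt.getlayer(Dot11Elt)
        if elt is not None and elt.ID == 0 and elt.info:
            ssid = elt.info.decode(errors="ignore")
        print(client_mac, "probed for", ssid)

# "wlan0mon" is a placeholder for a monitor-mode interface name
sniff(iface="wlan0mon", prn=handle, store=False)
```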

If the location system is listening to the devices beaconing for wifi, it could be enabled to track those MAC addresses that are beaconing for connectivity even if they don’t connect.  So now, my opt-in is worthless.  If the location system knows about my MAC address even when I don’t connect, they can push information to iBeacon displays without my consent.  I would see a coupon for a camping tent based on the fact that I stood next to the camp stoves last week for five minutes.  It doesn’t matter that I was on a phone call and don’t have the slightest care about camping.  Now the system has started building a profile of me based on erroneous information it gathered when it shouldn’t have been listening.

Think about Minority Report.  When Tom Cruise is walking through the subway, retinal scanners read his eyes and start showing him very directed advertising.  While we’re still years away from that technology, being able to fingerprint a mobile device when it enters the store is the next best thing.  If I look down to text my wife about which milk to buy, I could get a full screen coupon telling me about a sale on bread.

My (MAC) Generation

This is such a huge issue that Apple has taken a step to “fix” the problem in the beta release for iOS 8.  As reported by The Verge, iOS 8 randomizes the MAC address used when probing for wifi SSIDs.  This means that the MAC used to probe for wifi requests won’t be the same as the one used to connect to the actual AP.  That’s huge for location tracking.  It means that the only way people will know who I am for sure is for me to connect to the wifi network.  Only then will my true MAC address be revealed.  It also means that I have to opt-in to the location tracking.  That’s a great relief for privacy advocates and tin foil hat aficionados everywhere.
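Apple hasn’t published the exact scheme it uses, but the conventional way to randomize a MAC is to set the “locally administered” bit in the first octet so the address can never collide with a real vendor-assigned OUI.  A quick sketch of that idea, offered as an assumption about how it probably works rather than a description of Apple’s implementation:

```python
import random

def random_probe_mac() -> str:
    """Generate a random, locally administered, unicast MAC address.

    Bit 0x02 of the first octet marks the address as locally administered;
    bit 0x01 must stay clear so the address remains unicast.
    """
    first = (random.getrandbits(8) | 0x02) & 0xFE
    rest = [random.getrandbits(8) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

print(random_probe_mac())   # e.g. "da:3f:19:8c:44:b1" - different on every probe cycle
```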

It does make iBeacon configuration a bit more time consuming.  But you’ll find that customers will be happier overall knowing their information isn’t being stored without consent.  Because there’s never been a situation where customer data was leaked, right? Not more than once, right?  Oh, who am I kidding.  If you are a retailer, you don’t want that kind of liability on your hands.

Won’t Get Fooled Again

If you’re one of the retailers deploying location based solutions for applications like iBeacon, now is the time to take a look at what you’re doing.  If you’re collecting MAC address information from probing mobile devices you should turn it off now.  Yes, privacy is a concern.  But so is your database.  Assuming iOS randomizes the entire MAC address string including the OUI and not just the 24-bit NIC at the end, your database is going to fill up quickly with bogus entries.  Sure, there may be a duplicate here and there from the random iOS strings, but they will be few and far between.
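If the randomized addresses do follow that locally administered convention (again, an assumption; the beta behavior isn’t documented), a tracking system could at least filter the obvious noise before the database fills up:

```python
def is_locally_administered(mac: str) -> bool:
    """True when the locally administered bit of the first octet is set."""
    return bool(int(mac.split(":")[0], 16) & 0x02)

observed_probes = [
    "3c:15:c2:aa:bb:01",   # looks like a real, vendor-assigned address
    "da:a1:19:45:9f:02",   # locally administered - likely a randomized probe
]

keepers = [mac for mac in observed_probes if not is_locally_administered(mac)]
print(keepers)             # only the vendor-assigned address survives
```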

More likely, your database will overflow from the sheer number of MACs being reported by iOS 8 devices.  And since iOS 7 adoption was at 87% of compatible devices just 8 months after release, you can guarantee there will be a large number of iOS devices coming into your environment running with obfuscated MAC addresses.


Tom’s Take

I don’t like the idea of being tracked when I’m not opted in to a program.  Sure, I realize that my usage statistics are being used for research.  I know that clicking those boxes in the EULA gives my data to parties unknown for any purpose they choose.  And I’m okay with it.  Provided that box is checked.

When I find out my data is being collected without my consent, it gives me the creeps.  When I learned about the new trends in data collection for the grand purposes of marketing and sales, I wanted to scream from the rooftops that vendors need to put a halt to this right away.  Thankfully, Apple must have heard my silent screams.  We can only hope that other manufacturers start following suit and giving us a method to prevent this from happening.  This tweet from Jan Dawson sums it up nicely:

Security’s Secret Shame


Heartbleed captured quite a bit of news these past few days.  A hole in the most secure of web services tends to make people a bit anxious.  Racing to release patches and assess the damage consumed people for days.  While I was a bit concerned about the possibilities of exposed private keys on any number of web servers, the overriding thought in my mind was instead about the speed at which we went from “totally secure” to “Swiss cheese security” almost overnight.

Jumping The Gun

As it turns out, the release of the information about Heartbleed was a bit sudden.  The rumor is that the people that discovered the bug were racing to release the information as soon as the OpenSSL patch was ready because they were afraid that the information had already leaked out to the wider exploiting community.  I was a bit surprised when I learned this little bit of information.

Exploit disclosure has gone through many phases in recent years.  In days past, the procedure was to report the exploit to the vendor responsible.  The vendor in question would create a patch for the issue and prepare their support organization to answer questions about the patch.  The vendor would then release the patch with a report on the impact.  Users would read the report and patch the systems.

Today, researchers that aren’t willing to wait for vendors to patch systems instead perform an end-run around the system.  Rather than waiting to let the vendors release the patches on their cycle, they release the information about the exploit as soon as they can.  The nice ones give the vendors a chance to fix things.  The less savory folks want the press of discovering a new hole and project the news far and wide at every opportunity.

Shame Is The Game

Part of the reason to release exploit information quickly is to shame the vendor into quickly releasing patch information.  Researchers taking this route are fed up with the slow quality assurance (QA) cycle inside vendor shops.  Instead, they short circuit the system by putting the issue out in the open and driving the vendor to release a fix immediately.

While this approach does have its place among vendors that move at a glacial patching pace, one must wonder how much good is really being accomplished.  Patching systems isn’t a quick fix.  If you’ve ever been forced to turn out work under a crucial deadline while people were shouting at you, you know the kind of work that gets put out.  Vendor patches must be tested against released code and there can be no issues that would cause existing functionality to break.  Imagine the backlash if a fix to the SSL module caused the web service to fail on startup.  The fix would be worse than the problem.

Rather than rushing to release news of an exploit, researchers need to understand the greater issues at play.  Releasing an exploit for the sake of saving lives is understandable.  Releasing it for the fame of being first isn’t as critical.  Instead of trying to shame vendors into releasing a patch rapidly to plug some hole, why not work with them instead to identify the issue and push the patch through?  Shaming vendors will only put pressure on them to release questionable code.  It will also alienate the vendors from researchers doing things the right way.


Tom’s Take

Shaming is the rage now.  We shame vendors, users, and even pets.  Problems have taken a back seat to assigning blame.  We try to force people to change or fix things by making a mockery of what they’ve messed up.  It’s time to stop.  Rather than pointing and laughing at those making the mistake, you should pick up a keyboard and help them fix it. Shaming doesn’t do anything other than upset people.  Let’s put it to bed and make things better by working together instead of screaming “Ha ha!” when things go wrong.

Google+ And The Quest For Omniscience


When you mention Google+ to people, you tend to get a very pointed reaction. Outside of a select few influencers, I have yet to hear anyone say good things about it. This opinion isn’t helped by the recent moves by Google to make Google+ the backend authentication mechanism for their services. What’s Google’s aim here?

Google+ draws immediate comparisons to Facebook. Most people will tell you that Google+ is a poor implementation of the world’s most popular social media site. I would tend to agree for a few reasons. I find it hard to filter things in Google+. The lack of a real API means that I can’t interact with it via my preferred clients. I don’t want to log into a separate web interface simply to ingest endless streams of animated GIFs with the occasional nugget of information that was likely posted somewhere else in the first place.

It’s the Apps

One thing the Google of old was very good at doing was creating applications that people needed. GMail and Google Apps are things I use daily. Youtube gets visits all the time. I still use Google Maps to look things up when I’m away from my phone. Each of these apps represents a separate development train and unique way of looking at things. They were more integrated than some of the attempts I’ve seen to tie applications together at other vendors. They were missing one thing as far as Google was concerned: you.

Google+ isn’t a social network. It’s a database. It’s an identity store that Google uses to nail down exactly who you are. Every +1 tells them something about you. However, that’s not good enough. Google can only prosper if they can refine their algorithms.  Each discrete piece of information they gather needs to be augmented by more information.  In order to do that, they need to increase their database.  That means they need to drive adoption of their social network.  But they can’t force people to use Google+, right?

That’s where the plan to integrate Google+ as the backend authentication system makes nefarious sense. They’ve already gotten you hooked on their apps. You comment on Youtube or use Maps to figure out where the nearest Starbucks is. Google wants to know that. They want to figure out how to structure AdWords to show you more ads for local coffee shops or categorize your comments on music videos to sell you Google Play music subscriptions. Above all else, they can use that information as a product to advertisers.

Build It Before They Come

It’s devilishly simple. It’s also going to be more effective than Facebook’s approach. Ask yourself this: when’s the last time you used Facebook Mail? Facebook started out with the lofty goal of gathering all the information that it could about people. Then they realized the same thing that Google did: You have to collect information on what people are using to get the whole picture. Facebook couldn’t introduce a new system, so they had to start making apps.

Except people generally look at those apps and push them to the side. Mail is a perfect example. Even when Facebook tried to force people to use it as their primary communication method their users rebelled against the idea. Now, Facebook is being railroaded into using their data store as a backend authentication mechanism for third party sites. I know you’ve seen the “Log In With Facebook” buttons already. I’ve even written about it recently. You probably figured out this is going to be less successful for a singular reason: control.

Unlike Google+ and its integration with all Google apps, third parties that utilize Facebook logins can choose to restrict the information that is shared with Facebook. Given the climate of privacy in the world today, it stands to reason that people are going to start being very selective about the information that is shared with these kinds of data sinks. Thanks to the Facebook login API, a significant portion of the collected information never has to be shared back to Facebook. On the other hand, Google+ is just making a simple backend authorization. Given that they’ve turned on Google+ identities for Youtube commenting without a second thought, it does make you wonder what other data they’re collecting without really thinking about it.


Tom’s Take

I don’t use Google+. I post things there via API hacks. I do it because Google as a search engine is too valuable to ignore. However, I don’t actively choose to use Google+ or any of the apps that are now integrated into it. I won’t comment on Youtube. I doubt I’ll use the Google Maps functions that are integrated into Google+. I don’t like having a half-baked social media network forced on me. I like it even less when it’s a ham-handed attempt to gather even more data on me to sell to someone willing to pay to market to me. Rather than trying to be the anti-Facebook, Google should stand up for the rights of their product…uh, I mean customers.

The Slippery Slope of Social Sign-In


At the recent Wireless Field Day 6, we got a chance to see a presentation from AirTight Networks about their foray into Social Wifi. The idea is that businesses can offer free guest wifi for customers in exchange for a Facebook Like or by following the business on Twitter. AirTight has made the process very seamless by allowing an integrated Facebook login button. Users can just click their way to free wifi.

I’m a bit guarded about this whole approach. It has nothing to do with AirTight’s implementation. In fact, several other wireless companies are racing to have similar integration. It does have everything to do with the way that data is freely exchanged in today’s society. Sometimes more freely than it should be.

Don’t Forget Me

Facebook knows a lot about me. They know where I live. They know who my friends are. They know my wife and how many kids we have. While I haven’t filled out the fields, there are others that have indicated things like political views and even more personal information like relationship status or sexual orientation. Facebook has become a social data dump for hundreds of millions of people.

For years, I’ve said that Facebook holds the holy grail of advertising – a searchable database of everything a given demographic “likes”. Facebook knows this, which is why they are so focused on growing their advertising arm. Every change to the timeline and every misguided attempt to “fix” their profile security has a single aim: convincing businesses to pay for access to your information.

Now, with social wifi, those businesses can get access to a large amount of data easily. When you create the API integration with Facebook, you can indicate a large number of discrete data points easily. It’s just a bunch of checkboxes. Having worked in IT before, I know the siren call that could cause a business owner to check every box he could with the idea that it’s better to collect more data rather than less. It’s just harmless, right?
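Those checkboxes map to OAuth permission scopes on the Facebook login dialog.  Here’s a hedged illustration in Python – the app ID and redirect URL are placeholders, and the permission names should be treated as period-appropriate examples rather than a current API reference.  The only difference between a restrained integration and a greedy one is the scope list.

```python
from urllib.parse import urlencode

APP_ID = "YOUR_APP_ID"                                   # placeholder
REDIRECT = "https://guest-wifi.example.com/callback"     # placeholder

minimal_scopes = ["public_profile"]                      # enough to record the visit or Like
greedy_scopes = ["public_profile", "email", "user_likes",
                 "user_friends", "user_birthday"]        # every checkbox ticked

def login_url(scopes):
    """Build the OAuth dialog URL behind a "Log In With Facebook" button."""
    params = {"client_id": APP_ID, "redirect_uri": REDIRECT, "scope": ",".join(scopes)}
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

print(login_url(minimal_scopes))
print(login_url(greedy_scopes))
```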

Give It Away Now

People don’t safeguard their social media permissions and data like they should. If you’ve ever gotten DM spam from a follower or watched a Facebook wall swamped with “on behalf of” postings you know that people are willing to sign over the rights to their accounts for a 10% discount coupon or a silly analytics game. And that’s after the warning popup telling the user what permissions they are signing away. What if the data collection is more surreptitious?

The country came unglued when it was revealed that a government agency was collecting metadata and other discrete information about people that used online services. The uproar led to hearings and debate about how far-reaching that program was. Yet many of those outraged people don’t think twice about letting a coffee shop have access to a wealth of data that would make the NSA salivate.

Providers are quick to say that there are ways to limit how much data is collected. It’s trivial to disable the ability to see how many children a user has. But what if that’s the data the business wants? Who is to say that Target or Walmart won’t collect that information for an innocent purpose today only to use it to target advertisements to users at a later date? That’s the exact kind of thing that people don’t think about.

Big data and our analytic integrations are allowing it to happen with ease today. The abundance of storage means we can collect everything and keep it forever without needing to worry about when we should throw things away. Ubiquitous wireless connectivity means we are never truly disconnected from the world. Services that we rely on to tell us about appointments or directions collect data they shouldn’t because it’s too difficult to decide how to dispose of it. It may sound a bit paranoid, but you would be shocked to see what people are willing to trade away without realizing it.


Tom’s Take

Given the choice between paying a few dollars for wifi access or “liking” a company’s page on Facebook, I’ll gladly fork over the cash. I’d rather give up something of middling value (money) instead of giving up something more important to me (my identity). The key for vendors investigating social wifi is simple: transparency. Don’t just tell me that you can restrict the data that a business can collect. Show me exactly what data they are collecting. Don’t rely on the generalized permission prompts that Facebook and Twitter provide. If businesses really want to know how I voted in the last election, then the wifi provider has a social responsibility to tell me that before I sign up. If shady businesses are forced to admit they are overstepping their data collection bounds, then they might just change their tune. Let’s make technology work to protect our privacy for once.

Why An iPhone Fingerprint Scanner Makes Sense


It’s hype season again for the Cupertino Fruit and Phone Company.  We are mere days away from a press conference that should reveal the specs of a new iPhone, likely to be named the iPhone 5S.  As is customary before these events, the public is treated to all manner of Wild Mass Guessing as to what will be contained in the device.  Will it have dual flashes?  Will it have a slow-motion camera?  NFC? 802.11ac?  The list goes on and on.  One of the most spectacular rumors comes in a package the size of your thumb.

Apple quietly bought a company called AuthenTec last year.  AuthenTec made fingerprint scanners for a variety of companies, including those that put the technology in some Android devices.  After the $365 million acquisition, AuthenTec disappeared into a black hole.  No one (including Apple) said much of anything about them.  Then a few weeks ago, a patent application was revealed that came from Apple and included fingerprint technology from AuthenTec.  This sent the rumor mill into overdrive.  Now all signs point to a convex sapphire home button that contains a fingerprint scanner that will allow iPhones to use biometrics for security.  A developer even managed to ferret out a link to a BiometricKitUI bundle in one of the iOS 7 beta releases (which was quickly removed in the next beta).

Giving Security The Finger

I think adding a fingerprint scanner to the hardware of an iDevice is an awesome idea.  Passcode locks are good for a certain amount of basic device security, but the usefulness of a passcode is inversely proportional to its security level.  People don’t make complex passcodes because they take far too long to type in.  If you make a complex alphanumeric code, typing the code in quickly one-handed isn’t easy.  That leaves most people choosing to use a 4-digit code or forgoing it altogether.  That doesn’t bode well for people whose phones are lost or stolen.
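The usability-versus-strength tradeoff is easy to put numbers on.  A quick back-of-the-envelope comparison of the search space for a 4-digit PIN versus an 8-character alphanumeric passcode:

```python
pin_space = 10 ** 4        # four digits, 0-9
alnum_space = 62 ** 8      # eight characters drawn from a-z, A-Z, 0-9

print(f"4-digit PIN combinations:      {pin_space:,}")           # 10,000
print(f"8-char alphanumeric passcodes: {alnum_space:,}")         # ~218 trillion
print(f"the longer passcode's space is {alnum_space // pin_space:,}x larger")
```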

Apple has already publicly revealed that it will include enhanced security in iOS 7 in the form of an activation lock that prevents a thief from erasing the phone and reactivating it for themselves.  This makes sense in that Apple wants to discourage thieves.  But that step only makes sense if you consider that Apple wants to beef up the device security as well.  Biometric fingerprint scanners are a quick method of inputting a unique unlock code.  Enabling this technology on a new phone should show a sharp increase in the number of users that have enabled an unlock code (or finger, in this case).

Not all people think fingerprint scanners are a good idea.  A link from Angelbeat says that Apple should forget about the finger and instead use a combination of picture and voice to unlock the phone.  The writer says that this would provide more security because it requires your face as well as your voice.  The writer also says that it’s more convenient than taking a glove off to use a finger in cold weather.  I happen to disagree on a couple of points.

A Face For Radio

Facial recognition unlock for phones isn’t new.  It’s been in Android since the release of Ice Cream Sandwich.  It’s also very easy to defeat.  This article from last year talks about how flaky the system is unless you provide it several pictures to reference from many different angles.  This video shows how snapping a picture on a different phone can easily fool the facial recognition.  And that’s only the first video of several that I found on a cursory search for “Android Facial Recognition”.  I could see this working against the user if the phone is stolen by someone that knows their target.  Especially if there is a large repository of face pictures online somewhere.  Perhaps in a “book” of “faces”.

Another issue I have is Siri.  As far as I know, Siri can’t be trained to recognize a user’s voice.  In fact, I don’t believe Siri can distinguish one user from another at all.  To prove my point, go pick up a friend’s phone and ask Siri to find something.  Odds are good Siri will comply even though you aren’t the phone’s owner.  In order to defeat the old, unreliable voice command systems that have been around forever, Apple made Siri able to recognize a wide variety of voices and accents.  In order to cover that wide use case, Apple had to sacrifice resolution of a specific voice.  Apple would have to build in a completely new set of Siri APIs that query a user to speak a specific set of phrases in order to build a custom unlock code.  Based on my experience with those kinds of old systems, if you didn’t utter the phrase exactly the way it was originally recorded it would fail spectacularly.  What happens if you have a cold?  Or there is background noise?  Not exactly easier than putting your thumb on a sensor.

Don’t think that means that fingerprints are infallible.  The Mythbusters managed to defeat an unbeatable fingerprint scanner in one episode.  Of course, they had access to things like ballistics gel, which isn’t something you can pick up at the corner store.  Biometrics are only as good as the sensors that power them.  They also serve as a deterrent, not a complete barrier.  Lifting someone’s fingerprints isn’t easy and neither is scanning them into a computer to produce a sharp enough image to fool the average scanner.  The idea is that a stolen phone with a biometric lock will simply be discarded and a different, more vulnerable phone would be exploited instead.


Tom’s Take

I hope that Apple includes a fingerprint scanner in the new iPhone.  I hope it has enough accuracy and resolution to make biometric access easy and simple.  That kind of implementation across so many devices will drive the access control industry to take a new look at biometrics and begin integrating them into more products.  Hopefully that will spur things like home door locks, vehicle locks, and other personal devices to begin using these same kinds of sensors to increase security.  Fingerprints aren’t perfect by any stretch, but they are the best option of the current generation of technology.  One day we may reach the stage of retinal scanners or brainwave pattern matches for security locks.  For now, a fingerprint scanner on my phone will get a “thumbs up” from me.

Cisco ASA CX 9.1 Update

Every day I seem to get three or four searches looking for my ASA CX post even though it was written over a year ago.  I think that’s due in part to the large amount of interest in next-generation firewalls and also to the lack of information that Cisco has put out there about the ASA CX in general.  Sure, there’s a lot of marketing.  When you try to dig down into the tech side of things though, you find yourself quickly running out of release notes and whitepapers to read.  I wanted to write a bit about the things that have changed in the last year that might shed some light on the positioning of the ASA CX now that it has had time in the market.

First and foremost, the classic ASA as you know it is gone.  Cisco made the End of Sale announcement back in March.  After September 16, 2013 you won’t be able to buy one any longer.  Considering the age of the platform this isn’t necessarily a bad thing.  Firstly, the software that’s been released since version 8.3 has required more RAM than the platform initially shipped with.  That makes keeping up with the latest patches difficult.  Also, there was a change in the way that NAT is handled around the 8.3/8.4 timeframe.  That led to some heartache from people that were just getting used to the way that it worked prior to that code release.  Even though it behaves more like IOS now (i.e. the right way), it’s still confusing to a lot of people.  When you’ve got an underpowered platform that requires expensive upgrades to function at a baseline level, it’s time to start looking at replacing it.  Cisco has already had the replacement available for a while in the ASA-X line, but there hasn’t been a compelling reason to cause customers to upgrade their existing boxes.  The End of Sale/End of Life notice is the first step in migrating the existing user base to the ASA-X line.

The second reason the ASA-X line is looking more attractive to people today is the inclusion of ASA CX functionality in the entire ASA-X line.  If you recall from my previous post, the only ASA capable of running the CX module was the 5585.  It had the spare processing power needed to work the kinks out of the system during the initial trial runs.  Now that the ASA CX software is up to version 9.1, you can install it on any ASA-X appliance.  As always, there is a bit of a catch.  While the release notes tell you that the ASA CX for the mid-range (non 5585) platforms is software based, please note that you need to have a secondary solid state drive (SSD) installed in the chassis in order to even download the software.  If you are running ASA OS 9.1 and try to pull down the ASA CX software, you’re going to get an error about a missing storage device.  Even if you purchased the software licensing for the ASA CX, you won’t get very far without some hardware.  The part you’re looking for is ASA5500X-SSD120=, which is a spare 120GB SSD that you can install in the ASA chassis.  If you don’t already have an ASA-X and want the ASA CX functionality, you’re much better off ordering one of the bundle part numbers.  That’s because it includes the SSD in the chassis preloaded with a copy of the ASA CX software.  Save yourself some effort and just order the bundle.

Another thing that I found curious about the 9.1 release of the ASA CX software was in the release notes.  As previously mentioned, the UI for the ASA CX is a copy of Cisco Prime Security Manager (PRSM), also pronounced “prism.”  At first, I just thought this meant that Cisco had borrowed concepts from PRSM to make the ASA CX UI a bit more familiar to people.  Then I read the 9.1 release notes.  Those notes are combined for the ASA CX and PRSM 9.1.  You’d almost never know it though, outside of a couple of mentions for the ASA CX.  Almost the entire document references PRSM, which makes sense when you think about it.  That really did clear up a lot of the questions I had about the ASA CX functionality.  I wondered what kind of strange parallel development track Cisco had used to come up with their answer in the next generation firewall space.  I was also worried that they had either borrowed or licensed software from a third party and that their effort would end up as doomed as the ASA UTM module that died a painful death thanks to Trend Micro‘s strange licensing.

ASA CX isn’t really a special kit.  It’s an on-box copy of PRSM.  The ASA is configured with a rule to punt packets to PRSM for inspection before being shunted back for forwarding.  No magic.  No special sauce.  Just placing one product inside another.  When you think about how IDS/IPS has worked in the ASA for the past several years I suppose it shouldn’t come as too big of a shock.  While vendors like Palo Alto and Sonicwall have rewritten their core OS to take advantage of fast next generation processing, Cisco is still going back to their tried-and-true method of passing all that traffic to a module.  In this case, I’m not even sure what that “module” is in the midrange devices, as it just appears to be an SSD for storing the software and not actually doing any of the processing.  That means that the ASA CX is likely a separate context on the ASA-X.  All the processing for both packet forwarding and next generation inspection is done by the firewall processor.  I know that the ASA-X has much more in the processing department than its predecessor, but I wonder how much traffic those boxes are going to be able to take before they give out.
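As a mental model only – this is emphatically not how the ASA’s software is actually written – the punt-and-return flow described above looks something like this: traffic matching the CX service policy gets handed to the inspection engine, and only what the policy allows comes back to be forwarded.

```python
def cx_inspection(packet: dict) -> bool:
    """Stand-in for the next-generation policy check (application/URL/identity)."""
    blocked_apps = {"bittorrent", "tor"}
    return packet.get("app") not in blocked_apps

def asa_forward(packet: dict) -> str:
    """Classic ASA forwarding path with an optional punt to the CX engine."""
    if packet.get("matches_cx_policy"):          # traffic selected by the service policy
        if not cx_inspection(packet):
            return "dropped by CX policy"
    return "forwarded"

print(asa_forward({"app": "http", "matches_cx_policy": True}))         # forwarded
print(asa_forward({"app": "bittorrent", "matches_cx_policy": True}))   # dropped by CX policy
```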


Tom’s Take

Cisco is playing catch up in the next generation market.  Yes, I understand that the term didn’t even really exist until Palo Alto started using it to differentiate their offering.  Still, when you look at vendors like Sonicwall, Fortinet, and even Watchguard, you see that they are embracing the idea of expanding unified threat management (UTM) into a specific focus designed to let IT people root out traffic that’s doing something it’s not supposed to be.  Cisco needs to take a long hard look at the ASA-X platform.  If it is selling well enough against units like the Juniper SRX and the various Checkpoint boxes then the next generation piece needs to be spun out into a different offering.  If the ASA-X is losing ground, what harm could there be in pushing the reset button and turning the whole box into something a bit more grand than a high speed packet filter?  The ASA CX is a great first step.  But given the lack of publicity and difficulty in finding information about it, I think Cisco is in danger of stumbling before the race even gets going.

Blog Posts and CISSP CPE Credit

Among my more varied certifications, I’m a Certified Information Systems Security Professional (CISSP).  I got it a few years ago since it was one of the few non-vendor specific certifications available at the time.  I studied my tail off and managed to pass the multiple choice Scantron-based exam.  One of the things about the CISSP that appealed to me was the idea that I didn’t need to keep taking that monster exam every three years to stay current.  Instead, I could submit evidence that I had kept up with the current state of affairs in the security world in the form of Continuing Professional Education (CPE) credits.

CPEs are nothing new to some professions.  My lawyer friends have told me in the past that they need to attend a certain number of conferences and talks each year to earn enough CPEs to keep their license to practice law.  For a CISSP, there are many things that can be done to earn CPEs.  You can listen to webcasts and podcasts, attend major security conferences like RSA Conference or the ISC2 Security Congress, or even give a security presentation to a group of people.  CPEs can be earned from a variety of research tasks like reading books or magazines.  You can even earn a mountain of CPEs from publishing a security book or article.

That last point is the one I take a bit of umbrage with.  You can earn 5 CPEs for having a security article published in a print magazine or other established publishing house.  You can write all you want, but you still have to wait on an old-fashioned editor to decide that your material was worthy of publication before it can be counted.  Notice that “blog post” is nowhere on the list of activities that can earn credit.  I find that rather interesting considering that the majority of security related content that I read today comes in the form of a blog post.

Blog posts are topical.  With the speed that things move in the security world, the ability to react quickly to news as it happens means you’ll be able to generate much more discussion.  For instance, I wrote a piece for Aruba titled Is It Time For a Hacking Geneva Convention?  It was based on the idea that the new frontier of hacking as a warfare measure is going to need the same kinds of protections that conventional non-combat targets are offered today.  I wrote it in response to a NY Times article about the Chinese calling for Global Hacking Rules.  A week later, NATO released a set of rules for cyberwarfare that echoed my ideas that dams and nuclear plants should be off limits due to potential civilian casualties.  Those ideas developed in the span of less than two weeks. How long would it have taken to get that published in a conventional print magazine?

I spend time researching and gathering information for my blog posts.  Even those that are primarily opinion still have facts that must be verified.  I spend just as much time writing my posts as I do writing my presentations.  I have a much wider audience for my blog posts than I do for my in-person talks.  Yet those in-person talks count for CPEs while my blog posts count for nothing.  Blogs are the kind of rapid response journalism that gets people talking and debating much faster than an article in a security magazine that may be published once a quarter.

I suppose there is something to be said for the relative ease with which someone can start a blog and write posts that may be inaccurate or untrue.  As a counter to that, blog posts exist and can be referenced and verified.  If submitted as a CPE, they should need to stay up for a period of time.  They can be vetted by a committee or by volunteers.  I’d even volunteer to read over blog post CPE submissions.  There’s a lot of smart people out there writing really thought provoking stuff.  If those people happen to be CISSPs, why can’t they get credit for it?

To that end, it’s time for (ISC)^2 to start allowing blog posts to count for CPE credit.  There are things that would need to change on the backend to ensure that the content that is claimed is of high quality.  The desire to have only written material allowed for CPEs is more than likely due to the idea that an editor is reading over it and ensuring that it’s top notch.  There’s nothing to prevent the same thing from occurring for blog authors as well.  After all, I can claim CPE credits for reading a lot of posts.  Why can’t I get credit for writing them?

The company that oversees the CISSP, (ISC)^2, has taken their time in updating their tests to the modern age.  I’ve not only taken the pencil-and-paper version, I’ve proctored it as well.  It took until 2012 before the CISSP was finally released as a computer-based exam that could be taken in a testing center as opposed to being herded into a room with Scantrons and #2 pencils.  I don’t know whether or not they’re going to be progressive enough to embrace new media at this time.  They seem to be getting around to modernizing things on their own schedule, even with recent additions of more activist board members like Dave Lewis (@gattaca).

Perhaps the board doesn’t feel comfortable allowing people to post whatever they want without oversight or editing.  Maybe reactionary journalism from new media doesn’t meet the strict guidelines needed for people to learn something.  It’s tough to say if blogs are more popular than the print magazines that they forced into email distribution models and quarterly publication as opposed to monthly.  What I will be willing to guarantee is that the quality of security-related blog posts will continue to be high and can only get higher as those that want to start claiming those posts for CPE credit really dig in and begin to write riveting and useful articles.  The fact that they don’t have to be wasted on dead trees and overpriced ink just makes the victory that much sweeter.

Cisco Borderless Idol


Day one of Network Field Day 5 (NFD5) included presentations from the Cisco Borderless team. You probably remember their “speed dating” approach at NFD4 which gave us a wealth of information in 15 minute snippets. The only drawback to that lineup is when you find a product or a technology that interests you there really isn’t any time to quiz the presenter before they are ushered off stage. Someone must have listened when I said that before, because this time they brought us 20 minute segments – 10 minutes of presentation, 10 minutes of demo. With the switching team, we even got to vote on our favorite to bring back for the next round (hence the title of the post). More on that in a bit.

6500 Quad Supervisor Redundancy

First up on the block was the Catalyst 6500 team. I swear this switch is the Clint Howard of networking, because I see it everywhere. The team wanted to tell us about a new feature available in the ((verify code release)) code on the Supervisor 2T (Sup2T). Previously, the supervisor was capable of performing a couple of very unique functions. The first of these was Stateful Switch Over (SSO). During SSO, the redundant supervisor in the chassis can pick up where the primary left off in the event of a failure. All of the traffic sessions can keep on trucking even if the active sup module is rebooting. This gives the switch a tremendous uptime, as well as allowing for things like hitless upgrades in production. The other existing feature of the Sup2T is Virtual Switching System (VSS). VSS allows two Sup2Ts to appear as one giant switch. This is helpful for applications where you don’t want to trust your traffic to just one chassis. VSS allows for two different chassis to terminate Multi-Chassis EtherChannel (MLAG) connections so that distribution layer switches don’t have a single point of failure. Traffic looks like it’s flowing to one switch when in actuality it may be flowing to one or the other. In the event that a Supervisor goes down, the other one can keep forwarding traffic.

Enter the Quad Sup SSO ability. Now, instead of having an RPR-only failover on the members of a VSS cluster, you can setup the redundant Sup2T modules to be ready and waiting in the event of a failure. This is great because you can lose up to three Sup2Ts at once and still keep forwarding while they reboot or get replaced. Granted, anything that can take out 3 Sup2Ts at once is probably going to take down the fourth (like power failure or power surge), but it’s still nice to know that you have a fair amount of redundancy now. This only works on the Sup2T, so you can’t get this if you are still running the older Sup720. You also need to make sure that your linecards support the newer Distributed Forwarding Card 3 (DFC3), which means you aren’t going to want to do this with anything less than a 6700-series line card. In fact, you really want to be using the 6800 series or better just to be on the safe side. As Josh O’Brien (@joshobrien77) commented, this is a great feature to have. But it should have been there already. I know that there are a lot of technical reasons why this wasn’t available earlier, and I’m sure the increased fabric speed of the Sup2T, not to mention the increased capability of the DFC3, are necessary components of the solution. Still, I think this is something that probably should have shipped in the Sup2T on the first day. I suppose that given the long road the Sup2T took to get to us that “better late than never” is applicable here.

UCS-E

Next up was the Cisco UCS-E series server for the ISR G2 platform. This was something that we saw at NFD4 as well. The demo was a bit different this time, but for the most part this is similar info to what we saw previously.


Catalyst 3850 Unified Access Switch

The Catalyst 3850 is Cisco’s new entry into the fixed-configuration switch arena. They are touting this as a “Unified Access” solution for clients. That’s because the 3850 is capable of terminating up to 50 access points (APs) per stack of four. This thing can basically function as a wiring closet wireless controller. That’s because it’s using the new IOS wireless controller functionality that’s also featured in the new 5760 controller. This gets away from the old Airespace-like CLI that was so prominent on the 2100, 2500, 4400, and 5500 series controllers. The 3850, which is based on the 3750X, also sports a new 480Gbps Stackwise connector, appropriately called Stackwise480. This means that a stack of 3850s can move some serious bits. All that power does come at a cost – Stackwise480 isn’t backwards compatible with the older Stackwise v1 and v2 from the 3750 line. This is only an issue if you are trying to deploy 3850s into existing 3750X stacks, because Cisco has announced the End of Sale (EOS) and End of Life (EOL) information for those older 3750s. I’m sure the idea is that when you go to rip them out, you’ll be more than happy to replace them with 3850s.

The 3850 wireless setup is a bit different from the old 3750 Access Controller that had a 4400 controller bolted on to it. The 3850 uses Cisco’s IOS-XE model of virtualizing IOS into a sort of VM state that can run on one core of a dual-core processor, leaving the second core available to do other things. Previously at NFD4, we’d seen the Catalyst 4500 team using that other processor core for doing inline Wireshark captures. Here, the 3850 team is using it to run the wireless controller. That’s a pretty awesome idea when you think about it. Since I no longer have to worry about IOS taking up all my processor and I know that I have another one to use, I can start thinking about some interesting ideas.

The 3850 does have a couple of drawbacks. Aside from the above Stackwise limitations, you have to terminate the APs on the 3850 stack itself. Unlike the CAPWAP connections that tunnel all the way back to the Airespace-style controllers, the 3850 needs to have the APs directly connected in order to decapsulate the tunnel. That does provide for some interesting QoS implications and applications, but it doesn’t provide much flexibility from a wiring standpoint. I think the primary use case is to have one 3850 switch (or stack) per wiring closet, which would be supported by the current 50 AP limitation. The other drawback is that the 3850 is currently limited to a stack of four switches, as opposed to the increased six switch limit on the 3750X. Aside from that, it’s a switch that you probably want to take a look at in your wiring closets now. You can buy it with an IP Base license today and then add on the AP licenses down the road as you want to bring them online. You can even use the 3850s to terminate CAPWAP connections and manage the APs from a central controller without adding the AP license.

Here is the deep dive video that covers a lot of what Cisco is trying to do from a unified wired and wireless access policy standpoint. Also, keep an eye out for the cute Unified Access video in the middle.

Private Data Center Mobility

I found it interesting that this demo was in the Borderless section and not the Data Center presentation. This presentation dives into the world of Overlay Transport Virtualization (OTV). Think of OTV like an extra layer of 802.1Q-in-Q tunneling with some IS-IS routing mixed in. OTV is Cisco’s answer to extending the layer 2 boundary between data centers to allow VMs to be moved to other sites without breaking their networking. Layer 2 everywhere isn’t the most optimal solution, but it’s the best thing we’ve got to work with the current state of VM networking (until Nicira figures out what they’re going to do).
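Conceptually, OTV boils down to a control-plane-learned table of which remote site “owns” a given MAC address, with frames encapsulated in IP toward that site’s edge device. A toy sketch of the lookup (real OTV builds this table with IS-IS; the addresses below are made up):

```python
# MAC-to-site table that OTV would learn via its IS-IS control plane
MAC_TO_SITE = {
    "00:50:56:aa:bb:01": "dc-east",
    "00:50:56:aa:bb:02": "dc-west",
}

# IP address of the OTV edge device at each site (the encapsulation target)
SITE_TO_EDGE_IP = {"dc-east": "198.51.100.10", "dc-west": "203.0.113.10"}

def overlay_next_hop(dst_mac: str):
    site = MAC_TO_SITE.get(dst_mac)
    if site is None:
        return None                  # unknown unicast stays local instead of flooding the overlay
    return SITE_TO_EDGE_IP[site]     # encapsulate the L2 frame in IP toward this edge device

print(overlay_next_hop("00:50:56:aa:bb:02"))   # 203.0.113.10
```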

We loved this session so much that we asked Mostafa to come back and talk about it more in depth.

The most exciting part of this deep dive to me was the introduction of LISP. To be honest, I haven’t really been able to wrap my head around LISP the first couple of times that I saw it. Now, thanks to the Borderless team and Omar Sultan (@omarsultan), I’m going to dig into a lot more in the coming months. I think there are some very interesting issues that LISP can solve, including my IPv6 Gordian Knot.


Tom’s Take

I have to say that I liked Cisco’s approach to the presentations this time.  Giving us discussion time along with a demo allowed us to understand things before we saw them in action.  The extra five minutes did help quite a bit, as it felt like the presenters weren’t as rushed this time.  The “Borderless Idol” style of voting for a presentation to get more info out of was brilliant.  We got to hear about something we wanted to go into depth about, and I even learned something that I plan on blogging about later down the line.  Sure, there was a bit of repetition in a couple of areas, most notably UCS-E, but I can understand how those product managers have invested time and effort into their wares and want to give them as much exposure as possible.  Borderless hits all over the spectrum, so keeping the discussion focused in a specific area can be difficult.  Overall, I would say that Cisco did a good job, even without Ryan Seacrest hosting.

Tech Field Day Disclaimer

Cisco was a sponsor of Network Field Day 5.  As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 5.  In addition, Cisco provided me with a breakfast and lunch at their offices.  They also provided a Moleskine notebook, a t-shirt, and a flashlight toy.  At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review.  The opinions and analysis provided within are my own and any errors or omissions are mine and mine alone.

The Coming Cyber Cold War?

This week a couple of interesting tidbits landed in my security news feed.  The first comes from the Middle East where security researchers have uncovered a new infection known by the cutesy moniker of “The Flame”.  It’s a very advanced attack that seems to function more as a collection of infection vectors organized into scripted modules than a plain virus.  It’s notable for two things – first, the collection of files is almost 20MB, which is huge in terms of malware or spyware payloads.  Generally, the idea is that the smaller the package is, the less likely it is to be detected before delivery.  Also curious is that the writers of this nasty little bug decided to think outside the box and use the Lua scripting language.  This allows not only for some pretty high-level programming logic but also enables the writers to extend the functions of the program by utilizing C code at some point down the road.  Lua isn’t typically seen in malware today due to the complexity of writing code.  Even the ~3000 lines of Lua code in “The Flame” would take the average Lua programmer about a month to work out.  Most researchers are calling “The Flame” one of the most complicated pieces of malicious code ever encountered.

The second piece of news that caught my attention was the uncovering of a “backdoor” in some military Field Programmable Gate Array (FPGA) chips.  At first, many were scrambling to accuse the Chinese of putting this particular hole into the hardware.  However, a very detailed analysis by Robert David Graham (@ErrataRob) has shown that in all likelihood the Chinese had nothing to do with this.  Instead, a debugging interface that is normally disabled when a device ships was instead found to have capabilities of accessing the system in an unintended way.  You know, kind of like the point of having a debugger in the first place?  Rob goes on to pick apart the other pieces of the released story, taking special consideration to downplay any involvement the Chinese government may or may not have had in “planting” the backdoor in the first place.  This also isn’t the first time I’ve heard about the idea that the Chinese government was installing backdoors or other kinds of monitoring technology in things being shipped to the US.  I’ve also heard that even travelers headed behind the Great Wall take extra precautions not to expose too much information or technology while abroad.  It honestly sounded like something out of a James Bond film with all the formatting and burner phones.

After reading both of these items, I started thinking a bit more.  All of this discussion and rhetoric seems vaguely familiar.  To me, it sounds an awful lot like the Cold War-era rhetoric that I heard as a kid.  Sure, I’ve seen Red Dawn a few times.  I can remember watching the Berlin Wall fall down.  I enjoy watching movies and reading books about when the Russkies were the bad guys.  All of the discussion about state-sponsored cyber espionage and discussions about the Chinese hacking everything in sight bring me back to those times.  I do believe that there will soon be a Cyber Cold War if it’s not already upon us.  However, instead of the interactions of spies in places like Berlin and the moves and countermoves from Langley and Moscow, all of the conflict in this Cold War will take place in the ether(net).  Information seems to be fairly accessible now to anyone that wants it.  Organized groups of malcontents seem to be amusing themselves with hacking every kind of database imaginable and spilling the contents far and wide in an attempt to make a name for themselves.  These people don’t really worry me.  As I said before when I talked about Stuxnet, the real concern in my mind comes from organized groups of state-sponsored agents that spend a large amount of time attacking cyber infrastructure quietly for the purpose of stealing and not getting caught.  It’s the kind of feeling you get when you read about old-school spy stories like those of Aldrich Ames and Robert Hanssen.  The Advanced Persistent Threat (APT) technology of today allows programs to sit in place for months (if not years) and quietly exfiltrate data back to interested parties, with the victim having little to no clue about what might be going on.  APTs don’t go out and buy fancy cars or new houses. APTs don’t make suspicious phone calls (usually) or get tailed by FBI agents hot on their trail.  They just collect data and send it away for someone else to look at.  APTs are low profile on purpose.  And they scare me a lot more than the worst spies in history.

At the rate things are headed right now, it won’t be long before the new Berlin Wall is instead a firewall doing a horrible job of separating your network from those that would seek to take all the data they can find.  Instead of the CIA looking for moles, it’s going to be security researchers and IT admins looking for all manner of programs lurking around, stealing data.  With access to big data technology, it wouldn’t take long for someone in the know to start crunching data and finding out things they aren’t supposed to know.  Yeah, it sure does sound like the plot of a TV show or some movie.  But back in 1985, the idea that the Russians would be our friends was pretty far-fetched as well.  I’m very interested to see what happens in the coming months in regards to advances in state-sponsored hacking.  I think things are only going to escalate from here.  The question is whether or not those of us in the private sector are in the crosshairs as well.  And if we are, how quickly we can adapt.