Security Dessert Models


I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to crème brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model of the world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today where companies explicitly trust their user base.

The crème brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks in the parking lot being activated on “trusted” computers internally.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team will realize that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top will also be extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems will face a very aggressive security posture designed to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means.  It requires you to identify assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  The communications between untrusted devices and trusted systems need to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you make sure to identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
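The M&M cookie model boils down to two independent layers of crunchy shell.  Here’s a minimal sketch of the idea; the host names, ports, and policy structure are invented for illustration and aren’t any vendor’s API.

```python
# Hypothetical sketch of "defense in depth": a flow must pass BOTH the
# perimeter policy and the per-asset policy guarding the system it targets.

PERIMETER_ALLOWED_PORTS = {80, 443}

# Each protected "chocolate chip" gets its own minimal ACL: only the named
# sources may talk to it, and only on the listed ports.
ASSET_ACLS = {
    "customer-db": {"sources": {"app-server"}, "ports": {5432}},
    "payroll":     {"sources": {"hr-app"},     "ports": {443}},
}

def permitted(src, dst, port):
    """Return True only if every layer the flow crosses allows it."""
    if dst in ASSET_ACLS:                      # inner crunchy edge
        acl = ASSET_ACLS[dst]
        return src in acl["sources"] and port in acl["ports"]
    return port in PERIMETER_ALLOWED_PORTS     # soft middle: perimeter only

# A guest device that slipped past the perimeter still can't reach the DB:
assert permitted("guest-laptop", "customer-db", 5432) is False
assert permitted("app-server", "customer-db", 5432) is True
```

The point of the inner ACLs is exactly the grouping described above: getting past the outer crust grants nothing about the chocolate parts.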



Tom’s Take

Security is a world of protecting the important things while making sure they can be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even crème brûlée, the proportion of ingredients must be just right to make a tasty dessert.  In security you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.


Don’t Track My MAC!


The latest technology in mobile seems to be identification.  It has nothing to do with credentials.  Instead, it has everything to do with creating a database of who you are and where you are.  Location-based identification is the new holy grail for marketing people.  And the privacy implications are frightening.

Who Are You?

The trend now is to get your device MAC address and store it in a database.  This allows the location tracking systems, like Aruba Meridian or Cisco CMX, to know that they’ve seen you in the past.  They can see where you’ve been in the store with a resolution of a couple of feet (much better than GPS).  They now know which shelf you are standing in front of.  Coupled with new technologies like Apple iBeacon, the retailer can push information to your mobile device like a coupon or a price comparison with competitors.

It’s a fine use of mobile technology.  Provided I wanted that in the first place.  The model should be opt-in.  If I download your store’s app and connect to your wifi, then I’ve clicked the little “agree” box that allows you to send me that information.  If I opt-in, feel free to track me and email me coupons.  Or even to pop them up on store displays when my device gets close to a shelf that contains a featured item.  I knew what I was getting into when I opted in.  But what happens when you didn’t?

Wifi, Can You Hear Me?

The problem comes when the tracking system is listening to devices when it shouldn’t be.  When my mobile device walks into a store, it will start beaconing for available wifi access points.  It sends probe requests, both broadcast probes and directed probes for the SSIDs it has associated with in the past.  That’s the way wifi works.  You can’t stop that unless you shut off your wireless.

If the location system is listening to the devices beaconing for wifi, it could be enabled to track those MAC addresses that are beaconing for connectivity even if they don’t connect.  So now, my opt-in is worthless.  If the location system knows about my MAC address even when I don’t connect, they can push information to iBeacon displays without my consent.  I would see a coupon for a camping tent based on the fact that I stood next to the camp stoves last week for five minutes.  It doesn’t matter that I was on a phone call and don’t have the slightest care about camping.  Now the system has started building a profile of me based on erroneous information it gathered when it shouldn’t have been listening.
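How little the system needs to build that erroneous profile is worth spelling out.  This sketch is hypothetical (the MAC addresses, aisle names, and record format are invented), but the mechanism is the one described above: a probing MAC plus a timestamped location, no association and no consent required.

```python
from collections import defaultdict

# Hypothetical passive-tracking sketch: every probe request a store radio
# hears carries the client MAC, even when the device never associates.
sightings = [
    # (mac, aisle, minutes observed)
    ("aa:bb:cc:11:22:33", "camp-stoves", 5),
    ("aa:bb:cc:11:22:33", "checkout", 2),
    ("dd:ee:ff:44:55:66", "dairy", 1),
]

# Accumulate dwell time per MAC per aisle -- a shopping "profile".
profiles = defaultdict(lambda: defaultdict(int))
for mac, aisle, minutes in sightings:
    profiles[mac][aisle] += minutes

# The shopper never opted in, yet the system "knows" they like camping gear.
assert profiles["aa:bb:cc:11:22:33"]["camp-stoves"] == 5
```

Five minutes on a phone call next to the camp stoves and the database is convinced you’re an outdoorsman.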

Think about Minority Report.  When Tom Cruise is walking through the subway, retinal scanners read his print and start showing him very directed advertising.  While we’re still years away from that technology, being able to fingerprint a mobile device when it enters the store is the next best thing.  If I look down to text my wife about which milk to buy, I could get a full screen coupon telling me about a sale on bread.

My (MAC) Generation

This is such a huge issue that Apple has taken a step to “fix” the problem in the beta release for iOS 8.  As reported by The Verge, iOS 8 randomizes the MAC address used when probing for wifi SSIDs.  This means that the MAC used to probe for wifi requests won’t be the same as the one used to connect to the actual AP.  That’s huge for location tracking.  It means that the only way people will know who I am for sure is for me to connect to the wifi network.  Only then will my true MAC address be revealed.  It also means that I have to opt-in to the location tracking.  That’s a great relief for privacy advocates and tin foil hat aficionados everywhere.
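The mechanics of a randomized probe MAC are simple.  Apple hasn’t published the exact algorithm iOS 8 uses, so treat this as a sketch of the general technique: pick random octets, set the locally administered bit, and clear the multicast bit so the result is a valid unicast address.

```python
import random

def random_probe_mac():
    """Generate a random locally-administered, unicast MAC address.

    This is an illustration of the technique, not Apple's implementation.
    """
    first = random.randrange(256)
    first = (first | 0x02) & ~0x01   # set locally-administered bit, clear multicast bit
    rest = [random.randrange(256) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

mac = random_probe_mac()
first_octet = int(mac.split(":")[0], 16)
assert first_octet & 0x02          # locally administered, not a real OUI
assert not (first_octet & 0x01)    # unicast
```

A device can burn through one of these per probe burst, and none of them maps back to the MAC it uses once it actually associates.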

It does make iBeacon configuration a bit more time consuming.  But you’ll find that customers will be happier overall knowing their information isn’t being stored without consent.  Because there’s never been a situation where customer data was leaked, right? Not more than once, right?  Oh, who am I kidding.  If you are a retailer, you don’t want that kind of liability on your hands.

Won’t Get Fooled Again

If you’re one of the retailers deploying location based solutions for applications like iBeacon, now is the time to take a look at what you’re doing.  If you’re collecting MAC address information from probing mobile devices you should turn it off now.  Yes, privacy is a concern.  But so is your database.  Assuming iOS randomizes the entire MAC address string including the OUI and not just the 24-bit NIC at the end, your database is going to fill up quickly with bogus entries.  Sure, there may be a duplicate here and there from the random iOS strings, but they will be few and far between.
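That “few and far between” claim holds up to a quick birthday-problem estimate.  With the multicast and locally administered bits pinned, a fully randomized MAC still has 46 free bits, and the expected number of duplicate pairs among n random draws is roughly n(n−1)/2 divided by the size of the space.

```python
# Back-of-envelope: expected duplicate pairs among n random MACs drawn
# from a 46-bit space (two of the 48 bits are fixed by the address format).
SPACE = 2 ** 46

def expected_collisions(n):
    return n * (n - 1) / (2 * SPACE)

# Even a million distinct probing devices yields well under one duplicate:
assert expected_collisions(1_000_000) < 0.01
```

So the real operational problem isn’t duplicates; it’s the flood of unique, meaningless entries.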

More likely, your database will overflow from the sheer number of MACs being reported by iOS 8 devices.  And since iOS 7 adoption was at 87% of compatible devices just 8 months after release, you can guarantee there will be a large number of iOS devices coming into your environment running with obfuscated MAC addresses.


Tom’s Take

I don’t like the idea of being tracked when I’m not opted in to a program.  Sure, I realize that my usage statistics are being used for research.  I know that clicking those boxes in the EULA gives my data to parties unknown for any purpose they choose.  And I’m okay with it.  Provided that box is checked.

When I find out my data is being collected without my consent, it gives me the creeps.  When I learned about the new trends in data collection for the grand purposes of marketing and sales, I wanted to scream from the rooftops that vendors need to put a halt to this right away.  Thankfully, Apple must have heard my silent screams.  We can only hope that other manufacturers start following suit and give us a method to prevent this from happening.  This tweet from Jan Dawson sums it up nicely:

Security’s Secret Shame


Heartbleed captured quite a bit of news these past few days.  A hole in the most secure of web services tends to make people a bit anxious.  Racing to release patches and assess the damage consumed people for days.  While I was a bit concerned about the possibilities of exposed private keys on any number of web servers, the overriding thought in my mind was instead about the speed at which we went from “totally secure” to “Swiss cheese security” almost overnight.

Jumping The Gun

As it turns out, the release of the information about Heartbleed was a bit sudden.  The rumor is that the people that discovered the bug were racing to release the information as soon as the OpenSSL patch was ready because they were afraid that the information had already leaked out to the wider exploiting community.  I was a bit surprised when I learned this little bit of information.

Exploit disclosure has gone through many phases in recent years.  In days past, the procedure was to report the exploit to the vendor responsible.  The vendor in question would create a patch for the issue and prepare their support organization to answer questions about the patch.  The vendor would then release the patch with a report on the impact.  Users would read the report and patch the systems.

Today, researchers that aren’t willing to wait for vendors to patch systems instead perform an end-run around the system.  Rather than waiting to let the vendors release the patches on their cycle, they release the information about the exploit as soon as they can.  The nice ones give the vendors a chance to fix things.  The less savory folks want the press of discovering a new hole and project the news far and wide at every opportunity.

Shame Is The Game

Part of the reason to release exploit information quickly is to shame the vendor into quickly releasing patch information.  Researchers taking this route are fed up with the slow quality assurance (QA) cycle inside vendor shops.  Instead, they short circuit the system by putting the issue out in the open and driving the vendor to release a fix immediately.

While this approach does have its place among vendors that move at a glacial patching pace, one must wonder how much good is really being accomplished.  Patching systems isn’t a quick fix.  If you’ve ever been forced to turn out work under a crucial deadline while people were shouting at you, you know the kind of work that gets put out.  Vendor patches must be tested against released code and there can be no issues that would cause existing functionality to break.  Imagine the backlash if a fix to the SSL module caused the web service to fail on startup.  The fix would be worse than the problem.

Rather than rushing to release news of an exploit, researchers need to understand the greater issues at play.  Releasing an exploit for the sake of saving lives is understandable.  Releasing it for the fame of being first isn’t as critical.  Instead of trying to shame vendors into releasing a patch rapidly to plug some hole, why not work with them instead to identify the issue and push the patch through?  Shaming vendors will only put pressure on them to release questionable code.  It will also alienate the vendors from researchers doing things the right way.


Tom’s Take

Shaming is the rage now.  We shame vendors, users, and even pets.  Problems have taken a back seat to assigning blame.  We try to force people to change or fix things by making a mockery of what they’ve messed up.  It’s time to stop.  Rather than pointing and laughing at those making the mistake, you should pick up a keyboard and help them fix it. Shaming doesn’t do anything other than upset people.  Let’s put it to bed and make things better by working together instead of screaming “Ha ha!” when things go wrong.

Google+ And The Quest For Omniscience


When you mention Google+ to people, you tend to get a very pointed reaction. Outside of a select few influencers, I have yet to hear anyone say good things about it. This opinion isn’t helped by the recent moves by Google to make Google+ the backend authentication mechanism for their services. What’s Google’s aim here?

Google+ draws immediate comparisons to Facebook. Most people will tell you that Google+ is a poor implementation of the world’s most popular social media site. I would tend to agree for a few reasons. I find it hard to filter things in Google+. The lack of a real API means that I can’t interact with it via my preferred clients. I don’t want to log into a separate web interface simply to ingest endless streams of animated GIFs with the occasional nugget of information that was likely posted somewhere else in the first place.

It’s the Apps

One thing the Google of old was very good at doing was creating applications that people needed. GMail and Google Apps are things I use daily. Youtube gets visits all the time. I still use Google Maps to look things up when I’m away from my phone. Each of these apps represents a separate development train and a unique way of looking at things. They were more integrated than some of the attempts I’ve seen to tie together applications at other vendors. They were missing one thing as far as Google was concerned: you.

Google+ isn’t a social network. It’s a database. It’s an identity store that Google uses to nail down exactly who you are. Every +1 tells them something about you. However, that’s not good enough. Google can only prosper if they can refine their algorithms.  Each discrete piece of information they gather needs to be augmented by more information.  In order to do that, they need to increase their database.  That means they need to drive adoption of their social network.  But they can’t force people to use Google+, right?

That’s where the plan to integrate Google+ as the backend authentication system makes nefarious sense. They’ve already gotten you hooked on their apps. You comment on Youtube or use Maps to find the nearest Starbucks. Google wants to know that. They want to figure out how to structure AdWords to show you more ads for local coffee shops or categorize your comments on music videos to sell you Google Play music subscriptions. Above all else, they can use that information as a product to advertisers.

Build It Before They Come

It’s devilishly simple. It’s also going to be more effective than Facebook’s approach. Ask yourself this: when’s the last time you used Facebook Mail? Facebook started out with the lofty goal of gathering all the information that it could about people. Then they realized the same thing that Google did: You have to collect information on what people are using to get the whole picture. Facebook couldn’t introduce a new system, so they had to start making apps.

Except people generally look at those apps and push them to the side. Mail is a perfect example. Even when Facebook tried to force people to use it as their primary communication method their users rebelled against the idea. Now, Facebook is being railroaded into using their data store as a backend authentication mechanism for third party sites. I know you’ve seen the “Log In With Facebook” buttons already. I’ve even written about it recently. You probably figured out this is going to be less successful for a singular reason: control.

Unlike Google+ and its integration with all Google apps, third parties that utilize Facebook logins can choose to restrict the information that is shared with Facebook. Given the climate of privacy in the world today, it stands to reason that people are going to start being very selective about the information that is shared with these kinds of data sinks. Thanks to the Facebook login API, a significant portion of the collected information never has to be shared back to Facebook. On the other hand, Google+ is just a simple backend authorization. Given that they’ve turned on Google+ identities for Youtube commenting without a second thought, it does make you wonder what other data they’re collecting without really thinking about it.


Tom’s Take

I don’t use Google+. I post things there via API hacks. I do it because Google as a search engine is too valuable to ignore. However, I don’t actively choose to use Google+ or any of the apps that are now integrated into it. I won’t comment on Youtube. I doubt I’ll use the Google Maps functions that are integrated into Google+. I don’t like having a half-baked social media network forced on me. I like it even less when it’s a ham-handed attempt to gather even more data on me to sell to someone willing to pay to market to me. Rather than trying to be the anti-Facebook, Google should stand up for the rights of their product…uh, I mean customers.

The Slippery Slope of Social Sign-In


At the recent Wireless Field Day 6, we got a chance to see a presentation from AirTight Networks about their foray into Social Wifi. The idea is that businesses can offer free guest wifi for customers in exchange for a Facebook Like or by following the business on Twitter. AirTight has made the process very seamless by allowing an integrated Facebook login button. Users can just click their way to free wifi.

I’m a bit guarded about this whole approach. It has nothing to do with AirTight’s implementation. In fact, several other wireless companies are racing to have similar integration. It does have everything to do with the way that data is freely exchanged in today’s society. Sometimes more freely than it should be.

Don’t Forget Me

Facebook knows a lot about me. They know where I live. They know who my friends are. They know my wife and how many kids we have. While I haven’t filled out the fields, there are others that have indicated things like political views and even more personal information like relationship status or sexual orientation. Facebook has become a social data dump for hundreds of millions of people.

For years, I’ve said that Facebook holds the holy grail of advertising – a searchable database of everything a given demographic “likes”. Facebook knows this, which is why they are so focused on growing their advertising arm. Every change to the timeline and every misguided attempt to “fix” their profile security has a single aim: convincing businesses to pay for access to your information.

Now, with social wifi, those businesses can get access to a large amount of data easily. When you create the API integration with Facebook, you can request a large number of discrete data points easily. It’s just a bunch of checkboxes. Having worked in IT before, I know the siren call that could cause a business owner to check every box he could with the idea that it’s better to collect more data rather than less. It’s just harmless, right?
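What those checkboxes amount to can be sketched in a few lines.  The field names and the login flow here are invented for illustration, not Facebook’s actual API; the point is that every extra checkbox widens the slice of the profile handed over.

```python
# Hypothetical social-login sketch: each checked box is one more field
# the business receives when the user taps "log in for free wifi".
user_profile = {
    "name": "Pat Example",
    "email": "pat@example.com",
    "hometown": "Norman, OK",
    "relationship_status": "married",
    "children": 2,
}

def shared_data(profile, requested_scopes):
    """Return only the profile fields the business checked the box for."""
    return {k: v for k, v in profile.items() if k in requested_scopes}

minimal = shared_data(user_profile, {"name", "email"})
greedy = shared_data(user_profile, set(user_profile))

assert "children" not in minimal
assert greedy == user_profile   # every box checked: the whole profile leaks
```

Nothing stops the integration from being minimal.  The siren call is that checking every box costs the business nothing up front.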

Give It Away Now

People don’t safeguard their social media permissions and data like they should. If you’ve ever gotten DM spam from a follower or watched a Facebook wall swamped with “on behalf of” postings you know that people are willing to sign over the rights to their accounts for a 10% discount coupon or a silly analytics game. And that’s after the warning popup telling the user what permissions they are signing away. What if the data collection is more surreptitious?

The country came unglued when it was revealed that a government agency was collecting metadata and other discrete information about people that used online services. The uproar led to hearings and debate about how far reaching that program was. Yet many of those outraged people don’t think twice about letting a coffee shop have access to a wealth of data that would make the NSA salivate.

Providers are quick to say that there are ways to limit how much data is collected. It’s trivial to disable the ability to see how many children a user has. But what if that’s the data the business wants? Who is to say that Target or Walmart won’t collect that information for an innocent purpose today, only to use it to target advertisements to users at a later date? That’s the exact kind of thing that people don’t think about.

Big data and our analytic integrations are allowing it to happen with ease today. The abundance of storage means we can collect everything and keep it forever without needing to worry about when we should throw things away. Ubiquitous wireless connectivity means we are never truly disconnected from the world. Services that we rely on to tell us about appointments or directions collect data they shouldn’t because it’s too difficult to decide how to dispose of it. It may sound a bit paranoid but you would be shocked to see what people are willing to trade without realizing.


Tom’s Take

Given the choice between paying a few dollars for wifi access or “liking” a company’s page on Facebook, I’ll gladly fork over the cash. I’d rather give up something of middling value (money) instead of giving up something more important to me (my identity). The key for vendors investigating social wifi is simple: transparency. Don’t just tell me that you can restrict the data that a business can collect. Show me exactly what data they are collecting. Don’t rely on the generalized permission prompts that Facebook and Twitter provide. If businesses really want to know how I voted in the last election then the wifi provider has a social responsibility to tell me that before I sign up. If shady businesses are forced to admit they are overstepping their data collection bounds then they might just change their tune. Let’s make technology work to protect our privacy for once.

Why An iPhone Fingerprint Scanner Makes Sense


It’s hype season again for the Cupertino Fruit and Phone Company.  We are mere days away from a press conference that should reveal the specs of a new iPhone, likely to be named the iPhone 5S.  As is customary before these events, the public is treated to all manner of Wild Mass Guessing as to what will be contained in the device.  Will it have dual flashes?  Will it have a slow-motion camera?  NFC?  802.11ac?  The list goes on and on.  One of the most spectacular rumors comes in a package the size of your thumb.

Apple quietly bought a company called AuthenTec last year.  AuthenTec made fingerprint scanners for a variety of companies, including those that included the technology in some Android devices.  After the $365 million acquisition, AuthenTec disappeared into a black hole.  No one (including Apple) said much of anything about them.  Then a few weeks ago, a patent application was revealed that came from Apple and included fingerprint technology from AuthenTec.  This sent the rumor mill into overdrive.  Now all signs point to a convex sapphire home button that contains a fingerprint scanner that will allow iPhones to use biometrics for security.  A developer even managed to ferret out a link to a BiometricKitUI bundle in one of the iOS 7 beta releases (which was quickly removed in the next beta).

Giving Security The Finger

I think adding a fingerprint scanner to the hardware of an iDevice is an awesome idea.  Passcode locks are good for a certain amount of basic device security, but the usefulness of a passcode is inversely proportional to its security level.  People don’t make complex passcodes because they take far too long to type in.  If you make a complex alphanumeric code, typing the code in quickly one-handed isn’t easy.  That leaves most people choosing to use a 4-digit code or forgoing it altogether.  That doesn’t bode well for people whose phones are lost or stolen.
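That trade-off is easy to quantify.  A uniformly random 4-digit PIN carries about 13 bits of entropy, while even a short mixed-case alphanumeric code dwarfs it, which is exactly why nobody wants to type one on a phone keyboard.

```python
import math

def passcode_bits(alphabet_size, length):
    """Entropy in bits of a uniformly random passcode."""
    return length * math.log2(alphabet_size)

pin_bits = passcode_bits(10, 4)        # 4-digit PIN
alnum_bits = passcode_bits(62, 8)      # 8-char mixed-case alphanumeric

assert round(pin_bits, 1) == 13.3
assert alnum_bits > 47                 # far stronger, far slower to type one-handed
```

A fingerprint sidesteps the whole curve: the unlock gesture takes the same split second regardless of how hard the credential is to guess.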

Apple has already publicly revealed that it will include enhanced security in iOS 7 in the form of an activation lock that prevents a thief from erasing the phone and reactivating it for themselves.  This makes sense in that Apple wants to discourage thieves.  But that step only makes sense if you consider that Apple wants to beef up the device security as well.  Biometric fingerprint scanners are a quick method of inputting a unique unlock code quickly.  Enabling this technology on a new phone should show a sharp increase in the number of users that have enabled an unlock code (or finger, in this case).

Not all people think fingerprint scanners are a good idea.  A link from Angelbeat says that Apple should forget about the finger and instead use a combination of picture and voice to unlock the phone.  The writer says that this would provide more security because it requires your face as well as your voice.  The writer also says that it’s more convenient than taking a glove off to use a finger in cold weather.  I happen to disagree on a couple of points.

A Face For Radio

Facial recognition unlock for phones isn’t new.  It’s been in Android since the release of Ice Cream Sandwich.  It’s also very easy to defeat.  This article from last year talks about how flaky the system is unless you provide it several pictures to reference from many different angles.  This video shows how snapping a picture on a different phone can easily fool the facial recognition.  And that’s only the first video of several that I found on a cursory search for “Android Facial Recognition”.  I could see this working against the user if the phone is stolen by someone that knows their target.  Especially if there is a large repository of face pictures online somewhere.  Perhaps in a “book” of “faces”.

Another issue I have is Siri.  As far as I know, Siri can’t be trained to recognize a user’s voice.  In fact, I don’t believe Siri can distinguish one user from another at all.  To prove my point, go pick up a friend’s phone and ask Siri to find something.  Odds are good Siri will comply even though you aren’t the phone’s owner.  In order to defeat the old, unreliable voice command systems that have been around forever, Apple made Siri able to recognize a wide variety of voices and accents.  In order to cover that wide use case, Apple had to sacrifice resolution of a specific voice.  Apple would have to build in a completely new set of Siri APIs that query a user to speak a specific set of phrases in order to build a custom unlock code.  Based on my experience with those kinds of old systems, if you didn’t utter the phrase exactly the way it was originally recorded it would fail spectacularly.  What happens if you have a cold?  Or there is background noise?  Not exactly easier than putting your thumb on a sensor.

Don’t think that means that fingerprints are infallible.  The Mythbusters managed to defeat an unbeatable fingerprint scanner in one episode.  Of course, they had access to things like ballistics gel, which isn’t something you can pick up at the corner store.  Biometrics are only as good as the sensors that power them.  They also serve as a deterrent, not a complete barrier.  Lifting someone’s fingerprints isn’t easy and neither is scanning them into a computer to produce a sharp enough image to fool the average scanner.  The idea is that a stolen phone with a biometric lock will simply be discarded and a different, more vulnerable phone would be exploited instead.


Tom’s Take

I hope that Apple includes a fingerprint scanner in the new iPhone.  I hope it has enough accuracy and resolution to make biometric access easy and simple.  That kind of implementation across so many devices will drive the access control industry to take a new look at biometrics and begin integrating them into more products.  Hopefully that will spur things like home door locks, vehicle locks, and other personal devices to begin using these same kinds of sensors to increase security.  Fingerprints aren’t perfect by any stretch, but they are the best option of the current generation of technology.  One day we may reach the stage of retinal scanners or brainwave pattern matching for security locks.  For now, a fingerprint scanner on my phone will get a “thumbs up” from me.

Cisco ASA CX 9.1 Update

Every day I seem to get three or four searches looking for my ASA CX post even though it was written over a year ago.  I think that’s due in part to the large amount of interest in next-generation firewalls and also in the lack of information that Cisco has put out there about the ASA CX in general.  Sure, there’s a lot of marketing.  When you try to dig down into the tech side of things though, you find yourself quickly running out of release notes and whitepapers to read.  I wanted to write a bit about the things that have changed in the last year that might shed some light on the positioning of the ASA CX now that it has had time in the market.

First and foremost, the classic ASA as you know it is gone.  Cisco made the End of Sale announcement back in March.  After September 16, 2013 you won’t be able to buy one any longer.  Considering the age of the platform this isn’t necessarily a bad thing.  Firstly, the software that’s been released since version 8.3 has required more RAM than the platform initially shipped with.  That makes keeping up with the latest patches difficult.  Also, there was a change in the way that NAT is handled around the 8.3/8.4 timeframe.  That led to some heartache from people that were just getting used to the way that it worked prior to that code release.  Even though it behaves more like IOS now (i.e. the right way), it’s still confusing to a lot of people.  When you’ve got an underpowered platform that requires expensive upgrades to function at a baseline level, it’s time to start looking at replacing it.  Cisco has already had the replacement available for a while in the ASA-X line, but there hasn’t been a compelling reason to cause customers to upgrade their existing boxes.  The End of Sale/End of Life notice is the first step in migrating the existing user base to the ASA-X line.

The second reason the ASA-X line is looking more attractive to people today is the inclusion of ASA CX functionality in the entire ASA-X line.  If you recall from my previous post, the only ASA capable of running the CX module was the 5585.  It had the spare processing power needed to work the kinks out of the system during the initial trial runs.  Now that the ASA CX software is up to version 9.1, you can install it on any ASA-X appliance.  As always, there is a bit of a catch.  While the release notes tell you that the ASA CX for the mid-range (non 5585) platforms is software based, please note that you need to have a secondary solid state drive (SSD) installed in the chassis in order to even download the software.  If you are running ASA OS 9.1 and try to pull down the ASA CX software, you’re going to get an error about a missing storage device.  Even if you purchased the software licensing for the ASA CX, you won’t get very far without some hardware.  The part you’re looking for is ASA5500X-SSD120=, which is a spare 120GB SSD that you can install in the ASA chassis.  If you don’t already have an ASA-X and want the ASA CX functionality, you’re much better off ordering one of the bundle part numbers.  That’s because it includes the SSD in the chassis preloaded with a copy of the ASA CX software.  Save yourself some effort and just order the bundle.

Another thing that I found curious about the 9.1 release of the ASA CX software was in the release notes.  As previously mentioned, the UI for the ASA CX is a copy of Cisco Prime Security Manager (PRSM), also pronounced "prism."  At first, I just thought this meant that Cisco had borrowed concepts from PRSM to make the ASA CX UI a bit more familiar to people.  Then I read the 9.1 release notes.  Those notes are combined for the ASA CX and PRSM 9.1.  You'd almost never know it though, outside of a couple of mentions of the ASA CX.  Almost the entire document references PRSM, which makes sense when you think about it.  That really did clear up a lot of the questions I had about the ASA CX functionality.  I had wondered what kind of strange parallel development track Cisco had used to come up with their answer in the next generation firewall space.  I was also worried that they had either borrowed or licensed software from a third party and that their effort would end up as doomed as the ASA UTM module that died a painful death thanks to Trend Micro's strange licensing.

ASA CX isn't really a special kit.  It's an on-box copy of PRSM.  The ASA is configured with a rule to punt packets to PRSM for inspection before they are shunted back for forwarding.  No magic.  No special sauce.  Just one product placed inside another.  When you think about how IDS/IPS has worked in the ASA for the past several years, I suppose it shouldn't come as too big of a shock.  While vendors like Palo Alto and Sonicwall have rewritten their core OS to take advantage of fast next generation processing, Cisco is still going back to their tried-and-true method of passing all that traffic to a module.  In this case, I'm not even sure what that "module" is in the midrange devices, as it just appears to be an SSD for storing the software rather than hardware doing any of the processing.  That means the ASA CX is likely a separate context on the ASA-X.  All the processing for both packet forwarding and next generation inspection is done by the firewall processor.  I know that the ASA-X has much more in the processing department than its predecessor, but I wonder how much traffic those boxes are going to be able to take before they give out.
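For the curious, the punt-and-shunt arrangement is just ordinary Modular Policy Framework configuration on the ASA side.  A rough sketch of the redirect looks something like this (the class-map name is my own invention; `global_policy` is the ASA's default global policy map, and `cxsc` is the action that hands matched traffic to the CX service):

```
! Match the traffic you want the CX module to inspect
class-map cx-traffic
 match any
!
! Add the CX redirect action to the global policy
policy-map global_policy
 class cx-traffic
  cxsc fail-open
!
! Apply the policy to all interfaces (usually already present)
service-policy global_policy global
```

The `fail-open` keyword tells the ASA to keep forwarding traffic if the CX service goes down; `fail-close` would drop it instead.  Either way, the base ASA config makes it pretty clear this is a redirect into a co-resident service, not a separate forwarding path.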


Tom’s Take

Cisco is playing catch up in the next generation market.  Yes, I understand that the term didn't even really exist until Palo Alto started using it to differentiate their offering.  Still, when you look at vendors like Sonicwall, Fortinet, and even Watchguard, you see that they are embracing the idea of expanding unified threat management (UTM) into a specific focus designed to let IT people root out traffic that's doing something it's not supposed to be.  Cisco needs to take a long hard look at the ASA-X platform.  If it is selling well enough against units like the Juniper SRX and the various Checkpoint boxes, then the next generation piece needs to be spun out into a different offering.  If the ASA-X is losing ground, what harm could there be in pushing the reset button and turning the whole box into something a bit more grand than a high speed packet filter?  The ASA CX is a great first step.  But given the lack of publicity and the difficulty in finding information about it, I think Cisco is in danger of stumbling before the race has even started.