Is Dell The New HP? Or The Old IBM?

Dell announced its intention today to acquire Sonicwall, a well-respected firewall vendor.  This is just the latest in a long line of fairly recent buys for Dell, including AppAssure, Force10, and Compellent.  There’s been a lot of speculation about the reasons behind the recent flurry of purchases coming out of Austin, TX.  I agree with the majority of what I’m hearing, but I thought I’d point out a few things that make a lot of sense and might give us a glimpse into where Dell might be headed next.

Dell is a wonderful supply chain company.  I’ve heard them compared to Walmart and the US military in the same breath when discussing efficiency of logistics management.  Dell has the capability of putting a box of something on your doorstep within days of ordering.  It just so happens that they make computer stuff.  For years, Dell seemed content to partner with other companies, using its supply chain to deliver their stuff.  After a while, Dell decided to start making that stuff for themselves and cut out the middleman.  This is why you see things like Dell printers and switches.  It didn’t take long for Dell to change its mind, though.  It made little sense to devote so much R&D to copying other products.  Why not just spend the money on buying those companies outright?  I mean, that’s how HP does it, right?  And so began the acquisition phase for Dell.  Since acquiring EqualLogic in 2008, they’ve bought five other companies that make everything from enterprise storage to desktop management.  The only one they’ve missed out on was 3PAR, which happened because HP threw a pile of cash at 3PAR to keep it away from Dell.  I’m sure that was more about denying Dell an enterprise storage vendor than about using 3PAR to its fullest capabilities.

Dell still has a lot of OEM relationships, though.  Their wireless solution is OEMed from Aruba.  They resell Juniper and Brocade equipment as their J-series and B-series respectively.  However, Dell is trying to move into the data center to fight with HP, Cisco, and IBM.  HP already owns a data center solution top to bottom.  Cisco is currently OEMing their solution with EMC (vBlock).  I think Dell realizes that it’s not only more profitable to own the entire solution in the DC, it’s also safer in the long term.  You either support all your own equipment, or you have to support everyone’s equipment.  And if you try to support someone else’s stuff, you have to be very careful you don’t upset the apple cart.  Case in point: last year many assumed Cisco was on the outs with EMC because they started supporting NetApp and Hyper-V.  If you can’t keep your OEM DC solution partners happy, you don’t have a solution.  From Dell’s perspective, it’s much easier to appease everyone if they’re getting their paychecks from the same HR department.  Dell’s acquisitions of Force10 and, now, Sonicwall seem to support the idea that they want the “one throat to choke” model of solution delivery.  Very strategic.

The only problem I have with this kind of Innovation by Acquisition strategy is that it only works when upper management is competent and focused.  So long as Michael Dell is running the show in Austin, I’m confident that Dell will make solid choices and bring on companies that complement their strategies.  Where the “buy it” model breaks down is when you bring in someone that runs counter to your core beliefs.  Yes, I’m looking at HP now.  Ask them how they feel about Mark Hurd basically shutting down R&D and spending their war chest on Palm/WebOS.  Ask them if they’re still okay with Leo Apotheker reversing that decision only months later and putting PSG on the chopping block because he needed some cash to buy a software company (Autonomy), because software is all he knows.  If the ship has a good captain, you get where you’re going.  If the cook’s assistant is in charge, you’re just going to steam in circles until you run out of gas.  HP is having real issues right now trying to figure out who they want to be.  A year of second-guessing and trajectory reversals (and re-reversals) has left many shell-shocked and gun-shy, afraid to make any more bold moves until the dust settles.  The same can be said of many other vendors.  In this industry, you’re only as successful as your last failed acquisition.

On the other hand, you also have to keep moving ahead and innovating.  Otherwise even the mighty giants get left behind.  Ask IBM how it feels to now be considered an also-ran in the industry.  I can remember not too long ago when IBM was a three-letter combination that commanded power and respect.  After all, as the old saying goes, “No one ever got fired for buying IBM.”  Today, the same can’t be said.  IBM has divested much of its old power to Lenovo, spinning off the personal systems and small server business to concentrate more on the data center and services divisions.  It’s made them a much leaner, meaner competitor.  However, it’s also stripped away much of what made them so unstoppable in the past.  People now look to companies like Dell and HP to provide top-to-bottom support for every part of their infrastructure.  I can speak from experience here.  I work for a company founded by an ex-IBMer.  For years we wouldn’t sell anything that didn’t have a Big Blue logo on it.  Today, I can’t tell you the last time I sold something from IBM.  It feels like the industry that IBM built passed them by because they sold off much of who they were in order to be what they wanted.  Now that they are where they want to be, no one recognizes who they were.  They will need to start fighting again to regain their relevance.  Dell would do well to avoid acquiring too much too fast, lest it suffer a similar fate.  Once you grow too large, you have to start shedding things to stay agile.  That’s when you start losing your identity.


Tom’s Take

So far, reaction to the Sonicwall purchase has been overwhelmingly positive.  It sets the stage for Dell to begin to compete with the Big Boys of Networking across their product lines.  It also more or less completes Dell’s product line by bringing everything they need in-house.  The only major piece they are still missing is wireless.  They OEM from Aruba today, but if they want to seriously compete they’ll need to acquire a wireless company sooner rather than later.  Aruba is the logical target, but are they too big to swallow so soon after Sonicwall?  And what of Aruba’s switching line?  No sense trampling on PowerConnect or Force10.  That leaves smaller vendors like Aerohive or Meraki.  Either one might be a good fit for Dell.  But that’s a blog post for another day.  For right now, Dell needs to spend time making the transition with Sonicwall as smooth as possible.  That way, they can just be Dell.  Not the New HP.  And not the Old IBM.

Backdoors By Design

I was listening to the new No Strings Attached Wireless podcast on my way to work, and Andrew von Nagy (@revolutionwifi) and his guests were talking about the new exploit in WiFi Protected Setup (WPS).  Essentially, a hacker can brute force the 8-digit setup PIN in WPS, which was invented in the first place because people needed help figuring out how to set up more secure WiFi at home.  Of course, that got me thinking about other types of hacks that involve ease-of-use features being exploited.  Ask Sarah Palin how the password reset functionality in Yahoo mail could be exploited for nefarious purposes.  Talk to Paris Hilton about why not having a PIN on your cell phone’s voice mail account when calling from a known number (i.e. your own phone) is a bad idea when there are so many caller ID spoofing tools in the wild today.
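
The reason the WPS brute force is so devastating is structural: the eighth digit of the PIN is just a checksum of the first seven, and the protocol confirms each half of the PIN independently during a failed handshake.  A rough sketch of the arithmetic (the checksum routine follows the published WPS weighted-sum algorithm; the example PIN is arbitrary):

```python
def wps_checksum(first7: int) -> int:
    """Check digit for a 7-digit WPS PIN prefix, per the WPS
    specification's weighted-sum algorithm."""
    accum, t = 0, first7
    while t:
        accum += 3 * (t % 10)   # odd positions (from the right) weigh 3
        t //= 10
        accum += t % 10         # even positions weigh 1
        t //= 10
    return (10 - accum % 10) % 10

# A valid PIN is 7 free digits plus the check digit:
print(f"{1234567}{wps_checksum(1234567)}")   # 12345670

# Because the access point acknowledges the first and second halves of
# a wrong PIN separately, an attacker searches them independently:
naive_space = 10 ** 8             # what an 8-digit PIN appears to offer
split_space = 10 ** 4 + 10 ** 3   # first 4 digits, then last 3 (checksum is free)
print(naive_space, split_space)   # 100000000 vs 11000
```

Eleven thousand guesses against a protocol with no lockout is an afternoon's work, which is why disabling WPS entirely is the usual advice.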

Security isn’t fun or glamorous.  In the IT world, the security people are pariahs.  We’re the mean people that make you have strong passwords or limit access to certain resources.  Everyone thinks we’re a bunch of wet blankets.  Why is that exactly?  Why do the security people insist on following procedures or protecting everything with an extra step or two of safety?  Wouldn’t it just be easier if we didn’t have to?

The truth is that security people act the way we do because users have been trying for years to make things easy on themselves.  The issues with WPS highlight how a relatively secure protocol like WPA can be undermined by a convenience feature bolted on to make things easy for the users.  We spend an inordinate amount of time taking a carefully constructed security measure and eviscerating it so that users can understand it.  We spend almost zero time educating users about why they should follow these procedures.  At the end of the day, users circumvent them because they don’t understand why they should be followed, and complain that they are forced to do so in the first place.

Kevin Mitnick had a great example of this kind of exploitation in his book The Art of Intrusion.  All of the carefully planned security for accessing a facility through the front doors was invalidated because there was a side door into the building for smokers that had no guard or even a secure entrance mechanism.  They even left it propped open most of the time!  Given the chance, people will circumvent security in a heartbeat if it means their jobs are easier to do.  Can you imagine if the US military decided during the Cold War to move the missile launch key systems closer together so that one man could operate them in case the other guy was in the bathroom?  Or what if RSA allowed developers to access the seed code for their token system from a non-secured terminal?  I mean, what would happen if someone accessed the code from a terminal that had been infected with an APT trojan horse?  Oh, wait…

We have been living in the information age for more than a generation now.  We can’t use ignorance as an excuse any longer.  There is no reason why people shouldn’t be educated about proper security and why it’s so important to prevent not only exposure of our own information but possible exposure of the information of others as well.  In the same manner, it’s definitely time that we stop coddling users by creating hacking points in technology deemed “too complicated” for them to understand.  The average user has a good grasp of technology.  Why not give them the courtesy of explaining how WPA works and how to set it up on their router?  If we claim that it’s “too hard” to set up, or that the user interface is too difficult to navigate to set up a WPA key, isn’t that more an indictment of the user interface design than of the user’s technical capabilities?

Tom’s Take

I resolve to spend more time educating people and less time making their lives easy.  I resolve to tell people why I’ve forced them to use a regular user account instead of giving them admin privileges.  I promise to spend as much time as it takes with my mom explaining how wireless security works and why she shouldn’t use WPS no matter how easy it seems to be. I look at it just like exercise.  Exercise shouldn’t be easy.  You have to spend time applying yourself to get results.  The same goes for users.  You need to spend some time applying yourself to learn about things in order to have true security.  Creating backdoors and workarounds does nothing but keep those that need to learn ignorant and make those that care spend more time fixing problems than creating solutions.

If you’d like to learn more about the WPS hack, check out Dan Cybulsike’s blog or follow him on Twitter (@simplywifi).

Ghost in the Wires – Review

Anyone who is old enough to remember the heady days of the formation of what we recognize as today’s Internet knows the name Kevin Mitnick.  Depending on who you ask, Mitnick is either a curious computer user that was wrongfully accused of horrendous crimes or he’s the most evil person to ever sit behind a keyboard and is capable of causing Armageddon with nothing more than a telephone.  Of course, the truth lies somewhere in the middle.

Mitnick has written books before that discuss social engineering.  The Art of Intrusion and The Art of Deception are both interesting books for security professionals that talk about the myriad ways hackers can exploit trust and other factors to compromise networks and systems.  However, both books lack something.  Deception is written as a series of “what if” methods of social engineering.  Intrusion uses real examples from a variety of sources, but not from Mitnick.  I’m sure there were lots of things that prevented him from talking about his past in those two books.  What people have really been waiting for, though, is the story of the World’s Most Wanted Hacker.  Well, wait no longer:

Ghost in the Wires is the autobiography of Kevin Mitnick.  Now that I’ve finished my CCIE studies, I have a couple of hours of free time to enjoy reading something that isn’t a whitepaper or a lab workbook.  I picked this up as soon as it was available on Amazon and cracked it open right away.  I took my time going through it, enjoying each chapter as it built up the story of Mitnick from his early years onward.  As the story progressed more into his social engineering stories and hacking exploits, I found myself spending more and more time reading about them.  I was drawn into the book not only because of the content, but the writing style as well.  Mitnick and his co-author William Simon decided to keep the content at a fairly non-technical level.  Other than a couple of expositions about gaining access via .rhosts files or spoofing IPs, the book as a whole doesn’t really go much deeper than programming a VCR.

What you do get from this book is a sense of what drives Mitnick.  It’s not wealth or fame or anarchy.  It’s the pursuit of knowledge.  Unlike the fame-seeking kids today, Mitnick outlines that he only went after the targets he did because of the challenge of breaking into them.  He didn’t do it to steal credit card numbers or to hold computers for ransom in some strange blackmail scheme.  Sure, he gained from his knowledge by virtue of his unfettered access to the phone company or his ability to clone his cell phone’s ESN whenever he wished.  However, rather than exploit this on a grand scale or sell his access privileges on the Internet, he held on to them and used them as capital only for bragging rights with other hackers.

Mitnick also takes some time to address the “Myth of Kevin Mitnick”, the legend that has grown up and been propagated about his crimes.  Stories of his flight from early prosecution to another country, or of his “ability” to whistle launch codes into pay phones, elicit laughter but also show how the legal system in the early days of personal computing was ill-equipped to deal with people like Mitnick who pushed systems to their boundaries and used them for their own purposes.  At times, it seems like the legal system in this book is run by a collection of scaremongers, ready at a moment’s notice to say whatever it takes to keep their suspects locked in solitary confinement and safely away from any form of communication, electronic or otherwise.  The second half of the book details his flight from the federal authorities and the ease with which Mitnick was able to create a new identity for himself.  Back in 1993 he was able to create a string of identities to elude his pursuers.  Today, however, I wonder if it would be as easy as before, given all the linking of databases and sharing of information among the different departments that Mitnick used to set himself up as someone else.  I’m sure it would be a very difficult challenge, which is just the kind Mitnick admits he loves.

Tom’s Take

I loved this book.  I’m a sucker for computer history, especially from someone as famous as Kevin Mitnick.  Yes, he violated laws and treated security procedures like suggestions instead of rules.  In truth, his crimes consisted of theft of things like source code or free telephone calls.  He did it because he liked the challenge of getting things he wasn’t supposed to have.  He was like the kid who takes his toys apart to see how they work.  I can identify with this kind of mentality, as I’m sure many of you can.  Mitnick chose to express this desire in ways that ended up bringing him into conflict with law and order.  In the end, he paid for his crimes.  However, he has paid us all back with the wealth of knowledge that he has shared about his methods of social engineering and computer hacking.  I recommend this book not only to those that are interested in the history of hacking but also to anyone that might ever take a telephone call or use a computer.  A little education about how easily Mitnick was able to gain the trust of unsuspecting people and get them to give him whatever info he wanted is worth the ounce of prevention that it will provide.  If nothing else, you’ll know what a nuclear launch code sounds like when it’s whistled in your general direction.

A Case of Mistaken Identity

It appears as though the carefully crafted hierarchy of trust that we’ve built in public key encryption is in danger of unraveling like a cheap suit.  Thanks to DigiNotar, the heretofore unknown certificate authority for the government of the Netherlands, we’ve got ourselves another fake certificate floating around out there.  This time, someone generated a certificate for google.com (yes, the whole domain) back on July 19th.  According to DigiNotar, their certification authority (CA) infrastructure was breached and used to generate the false certificate.  Based on some defaced pages on DigiNotar’s site, there are strong rumors that a foreign government attempted to use the certificate as the catalyst in a man-in-the-middle (MITM) interception attack that would allow nefarious things, like Gmail sessions being snooped or search results being cataloged.

Most security-conscious users are already doing the smart thing.  They are removing DigiNotar from their trust lists even as Microsoft, Mozilla, and Google remove the rogue certificate.  I’m in the camp of completely removing DigiNotar from my list of trusted CAs.  I’ve also done the same with Comodo after their little rogue certificate problem a few months ago.  To me, once a CA starts issuing false certificates, it has effectively erased any kind of trust it might once have built up.  Even worse, by admitting that it was a security breach and not an honest mistake on the part of a careless employee or an admin with a grudge, they have moved from the realm of carelessness into the ocean of stupidity.  If the CAs that sign our most trusted pieces of information, the ones that identify trustworthy organizations, can be so easily compromised, how are we to trust the information we are presented?  Granted, this kind of MITM does require a chokepoint, such as a country with only one or two regulated Internet terminus points.  The risk of something similar happening in a country like the US or the UK is reduced due to our infrastructure, but it’s still something that could cause problems should a certificate like this be issued and then installed by a large ISP.
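
One defense that doesn’t depend on trusting hundreds of CAs is certificate pinning: remember the fingerprint of the certificate you expect and refuse anything else, even if it carries a perfectly valid CA signature.  A minimal sketch of the idea (the certificate bytes below are placeholders, not real DER certificates):

```python
import hashlib
import hmac

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a (DER-encoded) certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def matches_pin(cert_der: bytes, pinned: str) -> bool:
    """Accept a certificate only if it matches the pinned fingerprint,
    using a constant-time comparison."""
    return hmac.compare_digest(fingerprint(cert_der), pinned)

# Placeholder bytes standing in for real certificates.
genuine = b"genuine google.com certificate"
forged = b"google.com certificate signed by a compromised CA"

pinned = fingerprint(genuine)   # recorded on a known-good connection
assert matches_pin(genuine, pinned)
assert not matches_pin(forged, pinned)   # valid CA signature is irrelevant here
```

A pinned client would have rejected the DigiNotar certificate outright, which is roughly how Google’s own Chrome browser caught the forgery in the wild.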

At Cisco Live, the 15,000 attendees hammered the Interop block providing Internet access to the point where the BGP peerings started freaking out.  Some of our traffic was getting rerouted to Japan.  A few noticed the strange google.co.jp pages popping up but thought nothing of them.  That same mentality causes people to click through certificates without much thought to where they were issued from or whether or not they should be trusted.  Now, compound that with a trusted provider not causing a certificate warning and you’ve got a recipe for disaster.

I think we need to take a hard look at all of these trusted CAs that are issuing certificates like I hand out candy at Halloween.  Someone needs to provide real oversight and not just allow anyone to start signing identities.  If you get caught issuing bad certificates, you should be shut down until you can prove you have implemented strict security measures somewhere other than on paper.  If you can’t, you stay shut down and all your certificates get invalidated permanently.  It would suck mightily, especially for a CA that signs government certificates.  However, faced with the alternative, I think a little bit of trouble in rooting out the bad CAs is worth not having to face what could happen.

Tom’s Take

If you haven’t already, rip DigiNotar out of your trusted certificate list.  Just search for your particular OS and there are lots of instructions.  Update your browser, as all the major players have already removed the rogue certificate.  Show DigiNotar that the price of being compromised is high.  Maybe a few people protesting like this is equal to a bucket of water missing from the Pacific Ocean, but the more people that remove that trusted certificate, the bigger the message that can be sent to all these “trusted” companies that they had better keep the keys to their kingdom safe and sound.  The alternative is a situation that doesn’t sit well with me at all.

Adrift In A Sea of Lulz

As I write this, it’s been about 24 hours since the hacking collective known as Lulzsec scuttled their ship and scattered to the four winds.  There’s been a lot of speculation as to what motivated the 50 days of hacking that has stirred up quite a bit of talk about exploiting security holes as well as what would cause the poster children of anti-sec hacking to disappear as quickly as they emerged.

Lulzsec emerged almost two months ago from the fires of the now-infamous Sony PSN hack.  It appears to have been formed by some of the Gn0sis people that hacked into the Gawker Media database and some other disaffected members of Anonymous.  After they popped up on the radar, they started posting a lot of supposedly secured information about all manner of things, from X-Factor contestant databases to FBI security contractors.  They also participated in other hacks, like taking cia.gov offline for a few hours.  Most recently, they posted a dump of the Arizona Department of Public Safety servers and some 750,000 AT&T subscriber accounts.  Their activities have caused a lot of questions about perimeter security and probably cost a few security professionals their jobs.

To Lulzsec, this was all a game.  A giant F-you to the security community at large.  Their manifesto reads a lot like some teenagers I know.  They do what they want, how they want, when they want.  At first, there was no rhyme or reason to their attacks.  Later, they started talking about their “anti-sec” agenda, the idea that information shouldn’t be buried and needs to be disseminated by whatever means necessary.  Indeed, their anti-sec agenda also extended to the idea that people with inadequate security needed to be exposed and publicly embarrassed to resolve these issues.

Just as soon as they burst into the limelight, Lulzsec announced they were disbanding.  Theories abound as to the reason for their dissolution.  Did the feds get too close?  Was the lifting of the anonymous veil through leaked personal information the last straw?  Did they simply get bored?  Answers won’t be forthcoming from the members themselves.  They seem to have faded right back into the anonymity they spawned from.  I think the answer to what is going on probably lies somewhere in the middle of all these things.

The Lulzsec hacks appear to have mostly centered around SQL injections.  The time-honored tradition of exploiting databases with carefully crafted query strings continues to be quite popular even today.  I think Lulzsec used this attack vector to great success against Sony and a couple of other choice targets up front.  After their initial success, their patterns seemed haphazard.  I think this is due to the nature of using their one attack against a variety of sites rather than targeting specific ones.  It was a brute force method of anarchy, kind of like using a screwdriver for all your tool-related tasks.  It works really well for screwing, but not so well for hammering or sawing.  Once they managed to expose the FBI partner databases and take down the CIA’s small public-facing webserver, that brought significant attention from all angles, not typically something you want if you are trying to stay anonymous.  Then, other groups inside the scene started either getting jealous of the attention or decided to fight fire with fire.  That led to d0xing, the term used to describe the leaking of personal information that can be used to identify someone.  Between the public exposure and the looming investigation from some upset three-letter agencies, I think this is where the first members jumped off the Lulz Boat.  Rather than face what might be coming, they ducked out and headed back to the darkness that had protected them so well.  This has been somewhat confirmed in interviews with the publicly-known members.
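
For anyone who hasn’t seen one up close, a SQL injection needs nothing more exotic than a quote character in user input, and a parameterized query closes the hole by treating that input as data rather than as SQL.  A minimal sqlite3 sketch (table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: input concatenated straight into the statement.  The
# injected OR clause is true for every row, so the whole table leaks.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: a parameterized query treats the input as a literal name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(leaked)   # [('s3cret',)] -- injection worked
print(safe)     # []            -- no user is literally named that
```

Every database driver in every mainstream language has supported placeholders like the `?` above for years, which is exactly why these breaches were so embarrassing for the sites involved.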

The remaining Lulzsec members then seem to have gone on a recruitment drive.  They tried to bring more talent into the fold.  I don’t think this newer group was quite as determined or successful as the first, though.  That led to a slowdown in target penetration.  You might argue that they were releasing stuff right up to the end.  True…but all we know is that those sites were hacked; we don’t know when.  For all we know, AT&T could have been the second site they hacked.  AT&T was their ace in the hole.  If they were for real and ready to keep going for a long time to come, they would have released AT&T right away.  By putting it away and saving it for last, it appears they wanted one big splash before they were forgotten.  A vigorous, active Lulzsec would have been able to keep hitting bigger targets than AT&T.  After their success rate started dropping off, I think the remaining “old” members of Lulzsec probably did get bored.  Without new conquests to fuel their fame, the rush wasn’t there any more.  They decided to go out with a bang and quit while they were still ahead.  The remaining new recruits will probably go on to be folded into newer organizations that spring up in place of Lulzsec, the new breed of SQL injectors (or whatever is next), just like the Lulz Boat appears to have sailed with many Gn0sis crew members on board.

Tom’s Take

The black hat in me cheered Lulzsec for what they’ve accomplished.  The white hat is appalled.  Again, the truth lies somewhere in the middle.  I look at Lulzsec like the Joker in The Dark Knight.  A group of anarchist hackers that just want to have fun and burn everything down around them.  No agenda, no statements, just exploitation for fun.  A group of chaotic neutral script kiddies.  However, the very limelight they sought burned them enough to force them back into the shadows.  The way I look at it, the key to being a successful hacker is to not get caught.  Don’t get famous, and definitely don’t draw attention to yourself.  Kevin Mitnick had to learn that the hard way.  Something tells me that more than one of the passengers of the good ship Lulz will learn it the same way sooner rather than later.

The Seedless Garden

After weeks of speculation on the matter, it appears that RSA has finally decided to admit the obvious: the SecurID token system has been compromised.  Honestly, I’m not shocked.  In fact, I said as much almost two months ago when debating the subject with the other Packet Pushers.  I remember hearing the original disclosure and thinking to myself, “How could these hackers NOT have the keys to the kingdom?”  RSA categorized this hack as an Advanced Persistent Threat (APT), which is a great new umbrella term to describe hacks that persist for weeks or months without detection.  Of course, I don’t think clicking on an Excel spreadsheet pulled out of your junk mail folder qualifies as a particularly advanced penetration method, but as we’ve seen in the past few months (if not years), social engineering is a much more reliable infection vector.  That’s because you can always count on people to do things they aren’t supposed to.

RSA covered up the worst of the attack.  They put up a good smoke screen about needing to figure out what was stolen in the breach.  They even went so far as to talk about having the budget to implement new security that they wouldn’t have been able to afford before, which to me smacks of fixing the gate after the horses have gotten out.  RSA didn’t admit up front that the seeds of the SecurID tokens could have been compromised, although they did admit that some information relating to the SecurID system might have been involved.  They really didn’t admit much more than that.  In return, we got months of second-guessing, supposition, and ultimately delays that caused Lockheed Martin, Northrop Grumman, and L-3 Communications to suffer penetration attempts.  RSA never publicly told their customers to ditch their tokens, even though security professionals said that the worst-case scenario of seed exposure was probably the reality.  In fact, Steve Gibson eerily said as much back on March 19th.

RSA should have come clean the day after the attack.  Even if it didn’t admit that it (likely) stored the token serial numbers in a database along with the seed used to generate each token’s output, it should have at least advised customers to begin replacing the older tokens with newer ones to ensure that the old tokens couldn’t be used as an attack vector.  Why?  Well, if you have access to the customer database, it doesn’t take much guesswork to figure out user IDs (first initial, last name).  Once you have the serial number, you can figure out which seed was used for the token, since it appears RSA stored this data somewhere or made it easily accessible.  Given that information, brute force becomes the tool used to try and penetrate a vulnerable network.  There has been some speculation of foreign governmental interference in this whole mess, due to the fact that the three targets were all defense contractors.  While I won’t discount this possibility, it’s more likely that these targets were chosen due to their heightened aura of security, which almost guaranteed they would use RSA tokens in their remote access strategy.  Since US defense contractors probably buy these things by the truckload, their information was probably all over the hacked database.  This lit them up like a Christmas tree in the eyes of potential hackers.
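
SecurID’s actual algorithm is proprietary, but the standard HOTP/TOTP construction shows the shape of the problem: the displayed code is a pure function of a secret seed and a counter, so whoever holds the seed database can compute every token’s output.  A sketch using the public HOTP truncation scheme as a stand-in (the seed value is invented):

```python
import hashlib
import hmac
import struct

def token_code(seed: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style one-time code (RFC 4226 truncation).  A stand-in for
    SecurID's proprietary algorithm, which rests on the same principle:
    code = f(secret seed, counter)."""
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Invented seed standing in for one record from the stolen database.
seed = b"seed-for-token-serial-000123456789"

# The token in the user's pocket and an attacker holding the seed file
# compute exactly the same codes -- there is no secret left to guess.
assert token_code(seed, 4242) == token_code(seed, 4242)
```

That determinism is the whole point of the design, and it is also why seed exposure cannot be patched: the only remediation is new hardware with new seeds.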

If you’re using an RSA token right now, put it down.  Drop it in a thirty-three foot hole in the ground.  Bury it completely (rocks and boulders should be fine).  Then demand that RSA replace it with a new one.  Granted, you aren’t going to throw out your whole remote access strategy and rip out all the RSA equipment.  That would cost you a small fortune.  Better to make RSA replace the tokens for you (at their cost) and investigate alternatives down the road.  While I believe that RSA may be able to recover from this with enough time and some management changes, the fact that they let it happen in the first place will sting them for a long time to come.

Tom’s Take

Security breaches are always a wonderful game of ‘worst case scenario’.  It tends to make most security professionals a little cynical, but it also keeps us from shooting ourselves in the foot.  If you are a respected company like RSA (was), there is no excuse for this cover-up.  You should always assume the worst-case scenario in a situation like this.  The new replacement tokens should have started shipping weeks ago.  The newly-keyed devices should have been in the hands of your critical customers before they had the chance to ask why their keychain ornaments needed to be replaced.  Even if the seeds hadn’t been compromised (which we now know they were), a little proactive goodwill may cost money up front, but it won’t come anywhere near the total cost of a black eye like this in the long run.  Sony may have a big black eye from its security fiasco, but RSA is actually a security company.  Companies like Sony trust them to secure data.  Finding out that they were hacked and their code stolen to leverage attacks on their customers is like shooting a cop with his own gun.  RSA should have known better and done the right thing up front.  No grandiose PR moves backed with vague statements that “something happened, we think”.  Come clean, fix the issue, and be ready to meet the fallout head on rather than being blindsided in the press after the fact when your customers are getting the Sony Treatment.  Better to have a garden of crops that will eventually grow back than the barren salted earth you’ve got now.

A Moment of Silence for Sony

It goes without saying that Sony currently has a target the size of Iowa painted on its back.  Between the breaches in the Playstation Network, Sony Online Entertainment, and now Sony Pictures, you would be hard pressed to find a company that has been more thoroughly embarrassed when it comes to user data security.  Every day brings word of another incursion.  I’m thinking that something is going to have to give sooner or later.

Sony started out this whole mess by going after George Hotz, a famous hacker who goes by the online name Geohot.  Geohot has done all manner of things, including a simple jailbreak for the iPhone known as Limera1n.  Geohot also had his eyes on rooting the Playstation 3, Sony’s premier gaming console.  While Sony had given users the option to install a Linux-based OS onto the console from the start, Geohot wanted to take it a step further and unlock the ability to run other kinds of code, as well as gain access to the memory contents and hypervisor level of the console.  This would allow users to do things like emulate Playstation 2 games, an original feature of the console that was later dropped due to complexity and memory constraints.  Geohot also started work on creating a custom firmware for the console that would allow users to do as they wished, while still keeping certain features of the OS intact.  In April of 2010, Geohot announced that he was not pursuing the development any further, but in January 2011 he posted the root signing keys of the PS3 online.  This was probably the straw that broke the camel’s back for Sony.  The root key would give anyone the ability to sign code and execute it on the console without raising any suspicion.  Sony sued Geohot and, after some legal maneuvering and lots of publicity, eventually settled the lawsuit in April 2011.  This was the catalyst for the difficulties that Sony has faced over the past two months.
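To see why the leaked root key was such a catastrophe, it helps to sketch what a code-signing check does.  The PS3 actually uses ECDSA; the HMAC stand-in and key value below are purely illustrative, but the logic is the same: the loader runs anything whose signature verifies against the root key material, so once that key is public, homebrew and official firmware look identical to it.

```python
import hashlib
import hmac

# Hypothetical stand-in for the console's real (ECDSA-based) signing scheme.
ROOT_KEY = b"leaked-root-key"   # illustrative value only

def sign(payload: bytes, key: bytes) -> bytes:
    """Produce a signature for a firmware payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def console_will_run(payload: bytes, signature: bytes) -> bool:
    """The loader's check: run anything that verifies against the root key."""
    return hmac.compare_digest(sign(payload, ROOT_KEY), signature)

# With the root key public, anyone can sign arbitrary code that passes:
homebrew = b"arbitrary unsigned code"
print(console_will_run(homebrew, sign(homebrew, ROOT_KEY)))  # -> True
```

Unlike a password, a burned root key can’t be rotated without breaking every legitimately signed game already in the field, which is why the leak was effectively unrecoverable.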

In late April, Sony shut down large portions of the Playstation Network (PSN) for an extended period of time due to what was later termed an “external intrusion”.  After Sony rushed to bring back online the network that controls the majority of Playstation online multiplayer capabilities, Sony Online Entertainment was breached as well in early May.  Rather than rushing things back online this time, more care was taken to excise any possible problems, and no ETA was set for bringing the services back to the public.  In the interim, Sony profusely apologized for the problems and even testified before the US Congress about the breaches.  Sony recently enabled PSN once more, only to fall victim to another hacking group exposing portions of the Sony Pictures online customer database.  In all, close to 40 million Sony customers have had their personal information exposed in one form or another in the past two months.  Email addresses, birthdates, and credit card numbers with their Card Verification Value (CVV) codes have all been stolen.

What started out as a showdown in the desert between Sony and a group of hackers angered by the treatment of Geohot has now taken on the appearance of a rotting carcass slowly being picked over by anyone who wants to come along and poke it.  The question now isn’t whether Sony will be hacked again, but what might get stolen this time, and where it will be stolen from.  As a former customer of Sony Online Entertainment, I can be certain that some of my information is probably out in the wild.  I’ve since changed passwords and credit card numbers to head off any possible wrongdoing, but other customers haven’t been so lucky.  I’ve lost all confidence in Sony and their ability to keep my information secure.  While many point to the infamous rootkit incident as the point where Sony started to sour in the eyes of their customer base, I think the PSN outage points to a bigger issue.  If Sony wants to install software on my computer to monitor whether or not I’m ripping CDs, that’s their business.  I can dislike them for doing something they shouldn’t and be done with it.  The only harm done was their ham-handed attempt to sneak something onto my PC.  But with this series of hacks, Sony has taken their corporate image and dragged it through the filthiest mud imaginable.  I no longer merely dislike Sony because they do things they shouldn’t; I’ve lost confidence in their ability to keep me safe.  Just like a bank failure, when a company can no longer assure me it can do business the way it should be conducted, it’s time to move my business elsewhere.

Sony faces some pretty rough territory in the coming months.  First, they really need to find out what raised the ire of their intruders and apologize for it.  Profusely.  It may be a little late now, but if they show a little remorse for whatever wrong they may have done, it might call off the dogs for a bit.  Sony needs time to recover and reassess their security posture.  Second, Sony needs to can their security team and bring a set of fresh eyes into the picture.  It’s quite apparent that the current team wouldn’t know security if it bit them in the ass.  Passwords stored in clear text, arbitrary account recovery mechanisms, and general incompetence seem to abound.  It’s time to get a new CISO and make some drastic and public changes.  Announce what you are going to do and make sure your now-burned customers are aware of your new commitment to security.  You aren’t going to win anyone back by implementing new security features and burying them on page 20 of a 21-page press release.  Face it, Sony: your reputation is shot either way.  Why not make the most of it and try to win back some fans by admitting you screwed up and then fixing it?

Tom’s Take

There’s no doubt Sony makes good technology.  Even when it fails.  However, a series of organizational policies that have left their customer base more violated than the speed limit is their worst failure to date.  There isn’t going to be a cool new feature to save them from this disaster.  No hope for a new version of software to work out these bugs.  It’s time to rewrite the security posture from the ground up.  Find an executive or two to fall on their swords for this whole mess and move on.  Make sure to keep your former customers in the loop about how you’re going to ensure that this never happens again.  I, for one, am done with Sony until I see some major changes in their handling of customer data.  No more TVs, cameras, Walkmen, or games until they prove to me that filling out an online profile isn’t going to expose me to all manner of dastardly things on the Internet and beyond.  Sony’s had their moment of silence in all this by refusing to come clean about the hack in the first place.  Again and again, they’ve kept their mouths shut about timetables and countermeasures.  And until I hear something from them about all this, they won’t hear anything from me at all.

Cut Me Some SLAAC, Or Why You Need RA Guard

The Internet has been buzzing for the last couple of weeks about a new vulnerability discovered in IPv6 and the way it is interpreted by networking devices.  Firstly, head over to Sam Bowne’s excellent IPv6 site and read his assessment of the attack HERE.  What is being exploited is a “feature” in IPv6.

Since IPv6 doesn’t use Address Resolution Protocol (ARP), it relies on ICMPv6 Neighbor Discovery messages to determine neighbors on the network.  It also uses Router Advertisements (RAs) to build a picture of how to get off the local network.  When the Stateless Address Autoconfiguration (SLAAC) flag is set in the RA, the local host will choose an address from the announced address space and begin using it.  This is a great addition to the protocol, since it allows a network admin to set up automatic addressing that isn’t reliant on a server like DHCPv6.  From a security standpoint, however, it introduces some possible problems.  If a host on the network were to start sending RA packets onto the LAN, that host could act as a man-in-the-middle and start influencing packet routing.  Worse still, if the attacker isn’t really interested in rerouting packets, they could just take the anarchist’s approach and burn the whole network down with a specially-crafted DoS attack.  By flooding a ton of RA messages onto the local network with different network address spaces, the attacker can cause the CPUs on Windows and FreeBSD boxes to spike to 100% and stay there indefinitely.  The host system continually tries to process the RAs flooding in from the network and picks an address in every network announcement that it hears, consuming all its resources updating the routing table and addressing the adapters on the system.  This could cause problems for your end users should an attacker get into a position to launch RAs into your LAN.  Right now, there are a few ways to fix this:

1. Disable IPv6 – Okay, this doesn’t fix the problem; it just makes it go away.

2. Disable RAs on the local network – Again, not a fix, just hiding it.  Plus, this breaks SLAAC, which I see as a real advantage of IPv6.

3. Install a firewall or ACL on your host-facing ports to block RAs or filter out the ICMPv6 packets carrying them.
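For the curious, the address-selection step the flood abuses is easy to sketch.  With classic EUI-64 SLAAC, every advertised prefix deterministically yields one more address the host must configure and track, so each bogus RA creates real work.  A stdlib-only sketch (the MAC and prefixes below are made up for illustration):

```python
def eui64_slaac(prefix: str, mac: str) -> str:
    """Derive a classic EUI-64 SLAAC address from a /64 prefix (given with a
    trailing colon for simplicity) and the interface MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    groups = ["%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

mac = "00:1b:63:84:45:e6"   # made-up host MAC
print(eui64_slaac("2001:db8:0:1:", mac))  # -> 2001:db8:0:1:021b:63ff:fe84:45e6

# Every bogus prefix in an RA flood forces this derivation -- plus kernel
# routing-table and interface updates -- all over again:
for i in range(3):
    print(eui64_slaac("2001:db8:%x:1:" % i, mac))
```

The derivation itself is cheap; it’s the per-prefix kernel bookkeeping behind it, repeated thousands of times a second, that pins the CPU.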

What I find even more interesting about this whole affair is the response of the three biggest players in the game regarding the issue.  Let me sum it all up using their words:

Microsoft – This is Working As Intended (WAI).  We don’t plan on fixing this.

Juniper – We need to work with the IETF and figure out a standard solution to address this problem, and until we do we aren’t patching against it.

Cisco – We fixed this last year, and by the way have you heard of RA Guard?

Cisco has implemented a solution very similar to what they do with DHCP snooping on IPv4 switches.  They call it RA Guard.  As defined in RFC 6105, RA Guard can be enabled on all host-facing switchports to filter RAs from non-trusted sources.  In this case, the trusted source would be a switchport you know to contain a valid router, so you wouldn’t enable RA Guard on it.  The RFC defines a discovery method using Secure Neighbor Discovery (SeND) that made me chuckle, because the four states of the discovery are the same as 802.1w Rapid Spanning Tree.  Seems we’re never going to get rid of it.  When you enable SeND-based RA Guard discovery, it can dynamically scan the network for devices broadcasting RAs and block or allow them as necessary.  That way, you don’t have to worry about misconfiguring a switchport and killing all the advertisements coming from it.  By enabling RA Guard on a Cisco switch running IOS 12.2(50)SY, you can effectively mitigate the possibility of an unauthorized attacker DoSing your entire network with what amounts to a script-kiddie style attack.
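On a switch that supports it, the basic (non-SeND) form of RA Guard is roughly a one-liner per host port.  This is a hedged sketch only: the interface names and descriptions are placeholders, and exact syntax varies by platform and IOS release, so check your own documentation before trusting it.

```
! Hedged sketch of RFC 6105-style RA Guard on a Cisco switch.
! Interface names here are placeholders.
interface GigabitEthernet0/1
 description host-facing port - drop rogue RAs
 ipv6 nd raguard
!
! The uplink to the legitimate router is left alone, so real RAs
! still reach the hosts on the protected ports.
interface GigabitEthernet0/24
 description uplink to router - RA Guard not applied
```

The design mirrors DHCP snooping exactly: untrusted edge ports get the filter, the known-good router port does not.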

Tom’s Take

Take a vulnerability that has been known about for two years but swept under the rug, add in a dash of vendor disregard, and shake until the Internet security community is frothing at the mouth to tell you that you should turn off IPv6 on your entire network.  Sounds like a recipe for overreaction to me.  I’m not denying that it is a serious vulnerability.  In fact, since IPv6 is enabled by default on the current Windows version, it could cause real issues.  That is, unless you are smart and take measures to fix the issue rather than sweeping it back under the rug.  Rather than just turning off IPv6 until someone other than Microsoft releases a patch, we need to work through the issue and fix the underlying security problem.  At the same time, this needs to be agreed upon by the major networking vendors sooner rather than later.  The longer this issue exists in its present form, the more security sensationalists can point to it and decry one of the advantages of IPv6, when in fact they should be focusing on the lack of security in the software that allows anyone to masquerade as an IPv6 router.  Then, maybe we can cut IPv6 a little bit of slack.

PKI Uncovered – My Review

Security is a very important element in today’s networks.  The number of people trying to penetrate and disrupt your network is growing by the day, both internally and externally.  The consolidation of servers into the data center is especially bothersome, as it tends to place your high-priority targets in one location.  It’s very important to find a way to keep that data secure from as many intruders as possible.

The trend recently has been to use virtual private networks (VPNs) to secure communications between users and critical data sources.  Whether it be a remote access VPN for teleworkers or an internal VPN for HIPAA or PCI compliance, securing data with an encrypted tunnel is the fastest and easiest method of protection.  However, in many cases administrators use inherently insecure or non-scalable methods of VPN authentication, such as pre-shared keys (PSKs).  PSKs work well with very small deployments or with very static equipment that requires few changes or little turnover/replacement.  The main problem with using PSKs is that they don’t scale very well, and the method of distribution leaves a lot to be desired.  Write the PSK down in a file for someone to configure and it’s just as insecure as writing it down on a sticky note.  In order to have a truly secure and scalable design, you must involve a public key infrastructure (PKI) at some point.  I was somewhat familiar with PKI from my security training, but my depth of knowledge at implementing it on Cisco equipment was rather shallow.
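The scaling argument is easy to quantify.  Assuming one unique key per pair of peers, a full mesh of n sites needs n(n-1)/2 pre-shared keys to generate, distribute, and rotate, while a PKI needs only one certificate per site (plus the CA’s).  A quick back-of-the-envelope sketch:

```python
def psk_keys(sites: int) -> int:
    """Unique PSKs needed for a full mesh, one key per peer pair."""
    return sites * (sites - 1) // 2

def pki_certs(sites: int) -> int:
    """Certificates needed with a PKI: one per site, plus the CA's own."""
    return sites + 1

for n in (5, 50, 500):
    print(n, psk_keys(n), pki_certs(n))
# At 500 sites: 124,750 keys to manage by hand, versus 501 certificates
# all enrolled against the same CA.
```

Add one site to the mesh and the PSK count grows by n-1 keys, every one of which has to be configured on an existing box; the PKI grows by exactly one enrollment.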

As luck would have it, Cisco Press asked for volunteers to review books and I jumped at the chance. Imagine my surprise when a shiny new book showed up on my desk. PKI Uncovered is a new book from Cisco Press that looks to give the average Cisco enginee….rock star a crash course in PKI and the many implementations it has in the networking space. What follows is my review of this book.

PKI Uncovered Cover - Image courtesy of Cisco Press

The first section is an overview of PKI basics for the non-security crowd. If you are a CISSP, CCSP, or any other conglomeration of security acronyms, these chapters will be review.  The importance of using PKI, along with the differences between it and symmetric key encryption, is covered here. The hierarchy of certification authorities (CAs) is also laid out in great detail. Once we get past the review, it’s time to delve into the nuts and bolts of implementation.

The second section of the book looks at specific deployment scenarios where PKI would be useful. Chapter 5 is the generic model that the other chapters build on, so the most basic ideas of deployment and chaining CAs are presented there. In the following chapters, more specific needs are addressed, from large-scale implementations of PKI used with GETVPN in site-to-site designs to remote access with ASAs and IOS VPN. More application-focused examples on 802.1X NAC and CUCM phone security are presented as well. These chapters give great examples to follow along with, as well as detailed output of the process at each step. The troubleshooting sections at the end of each chapter are also well written and could be very useful if you find yourself staring down a real head-scratcher.  The final two chapters are presented more as case studies, where the previous examples are used to illustrate deployments with Cisco Virtual Office or Cisco Security Manager.  They help tie everything together and allow you to see the building blocks in action.

Tom’s Take

Overall, I found this book a very quick and easy read. It clocks in at less than 250 pages, which is practically a white paper.  It never assumes that you are a PKI expert and does a great job of letting you wade in before you get to the real meat of the example deployments.

The middle of the book will be the most used section, dog-eared and well-worn from hours of reference. I think this will be how I use it the most, as a quick reference guide for my future PKI deployments.  It’s a simple matter to work through the configuration examples and make sure your output matches the generous output examples. The case studies at the end are less compelling, as I doubt I’ll find myself in those kinds of deployment scenarios any time soon.

Overall, I’d recommend this as one to pick up if you have any desire to learn about PKI and its implementation on Cisco devices or feel that you’ll be implementing it any time in the immediate future.

If you’d like to pick up a copy, you can find it on http://www.ciscopress.com or at http://www.amazon.com.

Disclaimer

This book was provided to me by Cisco Press at no cost for evaluation. It came with no promise of consideration for a review. The ideas and opinions expressed in this review are mine and mine alone and provided freely for the use and consideration of my audience.

Tune In and Switch Off

As I sit here right now, the country of Egypt is a black hole on the Internet.  All 3,500 prefixes originated by Egypt’s four major ISPs have been withdrawn from the global BGP table.  There is no route into or out of the country, save the one ISP utilized by the Egyptian Stock Market, most likely in an effort to keep the country’s economy from collapsing.  This follows on the heels of other government interference in cyber communications in Tunisia this past month and Iran last year.  Egypt, however, is the first country to completely darken the Internet in an effort to keep services such as Twitter and Facebook from being used to coordinate resistance and disseminate information to the world at large.  I learned a very long time ago that arguing about politics never leads anywhere.  What I would like to comment on, however, is the trend toward censoring information by disrupting network communication.

Egypt yanked all Internet access for its citizens in an effort to control information.  Tunisia has been accused of tampering with Internet traffic for its citizens as well, blocking certain routes and causing outages on the Web.  Iran limited access to social media and even attempted to severely rate limit Internet traffic during the election protests last year.  This trend shows that governments are starting to realize the power that the Internet provides to disaffected groups of people.  No longer do “subversives” need to meet in underground basements or abandoned warehouses.  Those places have been replaced by chat rooms and e-mail.  Relying on one or two trustworthy individuals to get the word out by smuggling rolls of film to the mass media has been replaced with instant pictures being uploaded from a cell phone to Twitter or Flickr.  The speed with which protests can become revolutions has accelerated frighteningly.  So too has the speed with which the affected government can slam the door shut on these revolutionaries’ ability to use the very media they rely on to spread the word.  Egypt was able to successfully cut off access within a few hours of the first rumors that such a thing was being contemplated.

For those of you that think that something like that could never happen here (here being the US), let me direct your attention to the Protecting Cyberspace as a National Asset Act.  This hotly debated bill would give the government more ability to combat large-scale cyber warfare and allow them to protect assets deemed vital to the national interest.  The biggest concern comes from a provision inserted that would give the president the ability to enact “emergency measures” to prevent a wide-reaching cyber attack.  This includes the power to shut down major networks for a period of up to 120 days.  After that time, Congress must either approve an extension, or the networks must be reactivated.  I won’t delve into some of the wilder conspiracy theories I’ve seen surrounding this bill, but the idea that our networks could be shut down without our consent to protect us is troubling.  According to my research, there is no provision that defines the situation that could cause a national shutdown.  The president, acting through the National Center for Cybersecurity and Communications (NCCC) Director, is supposed to inform the affected networks to enact their emergency measures and ensure the emergency actions represent the least disruptive means feasible to operations.  In other words, the NCCC director just has to tell you he shut you down and you should try to make things work as well as you can.

Using this as a possible scenario, assume some kind of external driver causes the president and the NCCC director to shut down a large portion of Internet traffic.  It doesn’t have to be a revolution or something so sinister.  It could be a Stuxnet-type attack on critical power infrastructure, or maybe even a coordinated cyber attack like something out of a Tom Clancy novel.  In an attempt to deter the attack or mitigate the damage, let’s say the unprecedented step of withdrawing a large number of BGP prefixes is taken, similar to what Egypt has done.  What kind of global chaos might this cause?  How many transit ASes exist in the US that pass traffic around the world?  I’ve seen stories of how the World Trade Center attacks in 2001 caused a global Internet slowdown due to the amount of traffic that passed through the networks located there.  That was two buildings.  Imagine withdrawing even half the traffic that flows through the US and the networks located here.  What impact would that have?  The possibilities are mind-boggling.  Even a carefully coordinated network shutdown would have far-reaching impacts that no one could foresee.  Chaos is funny like that.
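One way to make that chaos concrete is to model the Internet as a graph and delete a transit node: every path that relied on it vanishes at once.  A toy sketch with a made-up five-AS topology (the AS names and links are entirely hypothetical, stdlib only):

```python
from collections import deque

def reachable_pairs(adjacency: dict) -> int:
    """Count ordered pairs (a, b), a != b, where a path exists from a to b."""
    count = 0
    for start in adjacency:
        seen = {start}
        queue = deque([start])
        while queue:                          # plain BFS from each node
            node = queue.popleft()
            for neighbor in adjacency.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        count += len(seen) - 1
    return count

# Hypothetical topology where "US" is the only transit between two regions.
topology = {
    "EU1":   {"EU2", "US"},
    "EU2":   {"EU1", "US"},
    "US":    {"EU1", "EU2", "ASIA1", "ASIA2"},
    "ASIA1": {"US", "ASIA2"},
    "ASIA2": {"US", "ASIA1"},
}
print(reachable_pairs(topology))     # -> 20: every AS reaches every other

# Withdraw the transit AS, as Egypt's ISPs withdrew their prefixes:
partitioned = {a: n - {"US"} for a, n in topology.items() if a != "US"}
print(reachable_pairs(partitioned))  # -> 4: two isolated islands remain
```

Even in this five-node toy, removing one transit node destroys 80% of the reachable paths; on the real Internet, with thousands of interdependent ASes, nobody can predict where the collateral damage ends.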

The Internet, or cyberspace or whatever your term for it, is now something of a curiosity.  It exists on its own, independent of the laws of nations or man.  Those who seek to control information flow or restrict access find themselves quickly thwarted by the fact that packets and frames do not respect political boundaries.  For every attempt to shut down The Pirate Bay, a simple move to a different location allowed them to stay active.  Even when pressure was applied to the people behind the site, it quickly became clear that their creation had taken on a life of its own and would persist no matter what.  Consider the Wikileaks saga, where the attempt to behead the organization by targeting its leader has only fanned its flames and most likely ensured its survival no matter what may happen to Julian Assange.  Those of us who live our lives in this electronic realm see differences in the way culture is developing.  There are lawless places on the Internet where mob rule is the law of the cyberland.  Information is never truly forgotten, merely pigeonholed away until it is needed again.  Attempts to impose political will upon the citizens of the Internet are usually met with force, protest, and in some cases, retribution.  I keep wondering when organizations are going to figure out that attempting to erase information is tantamount to daring the Internet to publicize it.  In the same way, attempting to shut down access to the Internet and social media at large is a sure way to force people to circumvent the restrictions.  As we watched Egypt vanish from the cyber landscape last night, many of my friends remarked that it would only be a matter of time before someone challenged the blockade and won.  Someone could hack the edge routers and reestablish BGP peering with the rest of the world, and the floodgates would be opened again.  Whether or not that happens in the next few days remains to be seen.

As the world becomes more reliant on the Internet to provide information to everyone, we as cyber citizens must also remain vigilant to keep the information flowing freely.  The Internet by design lends itself to surviving major disruptions without totally crashing.  It is our responsibility to show the world that information wants to be learned and shared and no amount of meddling will change that.