
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Presenting To The D-Suite

Do you present to an audience? Odds are good that most of us have had to do it more than once in our life or career. Some of us do it rather often. And there’s no shortage of advice out there about how to present to an audience. A lot of it is aimed at people that are trying to speak to a general audience. Still more of it is designed as a primer on how to speak to executives, often from a sales pitch perspective. But, how do you present to the people that get stuff done? Instead of honing your skills for the C-Suite, let’s look at what it takes to present to the D-Suite.

1. No Problemo

If you’ve listened to a presentation aimed at execs any time recently, such as on Shark Tank or Dragon’s Den, you know all about The Problem. It’s a required part of every introduction. You need to present a huge problem that needs to be solved. You need to discuss why this problem is so important. Once you’ve got every head nodding, that’s when you jump in with your solution. You highlight why you are the only person that can do this. It’s a home run, right?

Except when it isn’t. Executives love to hear about problems. Because, often, that’s what they see. They don’t hear about technical details. They just see challenges. Or, if they don’t, then they are totally unaware of this particular issue. And problems tend to have lots of nuts and bolts. And the more you’re forced to summarize them, the less impact they have.

Now, what happens when you try this approach with the Do-ers? Do they nod their heads? Or do they look bored because it’s a problem they’ve already seen a hundred times? Odds are good if you’re telling me that WANs are complicated or software is hard to write or the cloud is expensive I’m already going to know this. Instead of spending a quarter of your presentation painting the Perfect Problem Picture, just acknowledge there is a problem and get to your solution.

Hi, we’re Widgets Incorporated. We make widgets that fold spacetime. Why? Are you familiar with the massive distance between places? Well, our widget makes travel instantaneous.

With this approach, you tell me what you do and confirm that I already know about the problem. If I don’t, I can stop you and tell you I’m not familiar with it. Cue the exposition. Otherwise, you can get to the real benefits.

2. Why Should I Care?

Execs love to hear about Return on Investment (ROI). When will I make my investment back? How much time will this save me? Why will this pay off down the road? C-Suite presentations are heavy on the monetary aspects of things because that’s how execs think. Every problem is a resource problem. It costs something to make a thing happen. And if that resource is something other than money, it can quickly be quoted in those terms for reference (see also: man hours).

But what about the D-Suite? They don’t care about costs. Managers worry about blowing budgets. People that do the work care about time. They care about complexity. I once told a manager that the motivation to hit my budgeted time for a project was minimal at best. When they finished gasping at my frankness, I hit them with the uppercut: My only motivation for getting a project done quickly was going home. I didn’t care if it took me a day or a week. If I got the installation done and never had to come back I was happy.

Do-ers don’t want to hear about your 12% year-over-year return. They don’t want to hear about recurring investment paying off as people jump on board. Instead, they want to hear about how much time you’re going to save them. How you’re going to end repetitive tasks. Give them control of their lives back. And how you’re going to reduce the complexity of dealing with modern IT. That’s how you get the D-Suite to care.

3. Any Questions? No? Good!

Let me state the obvious here: if no one is asking questions about your topic, you’re not getting through to them. Take any course on active listening and they’ll tell you flat out that you need to rephrase. You need to reference what you’ve heard. Because if you’re just listening passively, you’re not going to get it.

Execs ask pointed questions. If they’re short, they are probably trying to get it. If they’re long-winded, they probably stopped caring three slides ago. So most conventional wisdom says you need to leave a little time for questions at the end. And you need to have the answers at your fingertips. You need to anticipate everything that might get asked but not put it into your presentation for fear of boring people to tears.

But what about the Do-ers? You better be ready to get stopped. Practitioners don’t like to wait until the end to summarize. They don’t like to expend effort thinking through things only to find out they were wrong in the first place. They are very active listeners. They’re going to stop you. Reframe the conversation. Explore tangent ideas quickly. Pick things apart at a detail level. Because that’s how Do-ers operate. They don’t truly understand something until they take it apart and put it back together again.

But Do-ers hate being lied to more than anything else. Don’t know the answer? Admit it. Can’t think of the right number? Tell them you’ll get it. But don’t make something up on the spot. Odds are good that if a D-Suite person asks you a leading question, they have an idea of the answer. And if your response is outside their parameters they’re going to pin you to the wall about it. That’s not a comfortable place to get grilled for precious minutes.

4. Data, Data, Data

Once you’re finished, how should you proceed? Summarize? Thank you? Go on about your life? If you’re talking to the C-Suite that’s generally the answer. You boil everything down to a memorable set of bullet points and then follow up in a week to make sure they still have it. Execs have data points streamed into the brains on a daily basis. They don’t have time to do much more than remember a few talking points. Why do you think elevator pitches are honed to an art?

Do-ers in the D-Suite are a different animal. They want all the data. They want to see how you came to your conclusions. Send them your deck. Give them reference points. They may even ask who your competitors are. Share that info. Let them figure out how you came to the place where you are.

Remember how I said that Do-ers love to disassemble things? Well, they really understand everything when they’re allowed to put it back together again. If they can come to your conclusion independently of you then they know where you’re coming from. Give them that opportunity.


Tom’s Take

I’ve spent a lot of time in my career both presenting and being presented to. And one thing remains the same: You have to know your audience. If I know I’m presenting to executives I file off the rough edges and help them make conclusions. If I know I’m talking to practitioners I know I need to go a little deeper. Leave time for questions. Let them understand the process, not the problem. That’s why I love Tech Field Day. Even before I went to work there I enjoyed being a delegate. Because I got to ask questions and get real answers instead of sales pitches. People there understood my need to examine things from the perspective of a Do-er. And as I’ve grown with Tech Field Day, I’ve tried to help others understand why this approach is so important. Because the C-Suite may make the decisions, but it’s up the D-Suite to make things happen.

Clear Skies for IBM and Red Hat

There was a lot of buzz this week when IBM announced they were acquiring Red Hat. A lot has been discussed about this in the past five days, including some coverage that I recorded with the Gestalt IT team on Monday. What I wanted to discuss quickly here are the aspirations that IBM now has for the cloud. Or, more appropriately, what they aren’t going to be doing.

Build Your Own Cloud

It’s funny how many cloud providers started springing from the earth as soon as AWS started turning a profit. Microsoft and Google seem to be doing a good job of challenging for the crown. But the next tier down is littered with people trying to make a go of it. VMware with vCloud Air before they sold it. Oracle. Digital Ocean. IBM. And that doesn’t count the number of companies offering a specific function, like storage, and are calling themselves a cloud service provider.

IBM was well positioned to be a contender in the cloud service provider (CSP) market. Except they started the race with a huge disadvantage. IBM was a company that was focused on selling solutions to their customers. Just like Oracle, IBM’s primary customer was external. The people they cared most about wrote them checks and they shipped out eServers and OS/2 Warp boxes.

Compare and contrast that with Amazon. Who is Amazon’s customer? Well, anyone that wants to buy something. But who consumes the products that Amazon builds for IT? Amazon people. Their infrastructure is built to provide a better experience for the people using their site. Amazon is very good at building systems that can handle a large number of users and heavy load without buckling.

Now, contrary to the myth, Amazon did not start AWS with spare capacity. While that makes for a good folk tale, Amazon probably didn’t have a lot of spare capacity lying around. Instead, they had a lot of great expertise in building large-scale reliable systems and they parlayed that into a solution that could be used to bring even more people into the Amazon ecosystem. They built AWS with an eye toward selling services, not hardware.

Likewise, Microsoft’s biggest customers are their developers. They are focused on making the best applications and operating systems they can. They don’t sell hardware, aside from the occasional foray into phones or laptops. But they wanted their customers to benefit from the years of work they had put into developing robust systems. That’s where Azure came from.

IBM is focused on customers buying their hardware and their expertise for installing it. AWS and Microsoft want to rent their expertise and software for building platforms. That difference in perspective is why IBM’s cloud aspirations were never going to take off to new heights. They couldn’t challenge for the top three places unless Google suddenly decided to shut down Google Cloud Platform. And no matter how hard they tried, Larry Ellison was always going to be nipping at their heels by pouring money into his cloud offerings to be on top. He may never get there but he is determined to make the best showing he can.

Putting On The Red Hat

Where does that leave IBM after buying Red Hat? Well, Red Hat sells software and the services to support it. But those services are all focused on integration. Red Hat has never built their own cloud platform. Instead, they work effectively on everyone else’s platforms. They can deploy an OS or a container system on Amazon or Azure with no hiccups.

IBM has to realize now that they will never unseat Amazon. The momentum behind this 850-lb gorilla is just too much to challenge. The remaining players are fighting for a small piece of third or fourth place at this point. And yes, while Google has a comfortable hold on third place right now, they do have a tendency to kill projects that aren’t AdWords or the search engine homepage. Anything else lives in a world of uncertainty.

So, how does IBM compete? They need to leverage their expertise. They’ve sold off anything that has blinking lights, save for the mainframe division. They need to embrace their Global Services heritage and shepherd the SMEs that are afraid of the cloud. They need to help enterprises in the mid-range build into AWS and Azure instead of trying to make a huge profit off them and leave them high and dry. The days of making a fortune from Fortune 100 companies with no cloud aspirations are over. Just like the fight for cloud dominance, the battle lines are drawn and the prize isn’t one or two big companies. It’s a bunch of smaller ones.

The irony isn’t lost on me that IBM’s future lies in smaller companies. The days of “No one ever got fired for buying IBM” are long past in the rearview mirror. Instead, companies need the help of smart people to move into the cloud. But they also need to do it natively. They don’t need to keep running their old hybrid infrastructure. They need a trusted advisor that can help them build something that will last. IBM could be that company with the help of Red Hat. They could reinvent themselves all over again and beat the coming collapse of providers of infrastructure. As more companies start to look toward the cloud, IBM can help them along the path. But it’s going to take some realistic looks at what IBM can provide. And the end of IBM’s hope of running their own CSP.


Tom’s Take

I’m an old IBMer. At least, I interned there in 2001. I was around for all the changes that Lou Gerstner was trying to implement. I worked in IBM Global Services where they made the AS/400. As I’m fond of saying over and over again, IBM today is not Tom Watson’s IBM. It’s a different animal that changed with the times at just the right time. IBM is still changing today, but they aren’t as nimble as they were before. Their expertise lies all over the landscape of hot new tech, but people don’t want blockchain-enabled AI for IoT edge computing. They want a trusted partner than can help them with the projects they can’t get done today. That’s how you keep your foot in the door. Red Hat gives IBM that advantage. They key is whether or not IBM can see that the way forward for them isn’t as cloudy as they had first imagined.

How High Can The CCIE Go?

Congratulations to Michael Wong, CCIE #60064! And yes, you’re reading that right. Cisco has certified 30,000 new CCIEs in the last nine years. The next big milestone for CCIE nerds will be 65,536, otherwise known as CCIE 0x10000. How did we get here? And what does this really mean for everyone in the networking industry?

A Short Disclaimer

Before we get started here, a short disclaimer. I am currently on the Cisco CCIE Advisory Board for 2018 and 2019. My opinions here do not reflect those of Cisco, only me. No insider information has been used in the crafting of this post. Any sources are freely available or represent my own opinions.

Ticket To Ride

Why the push for a certified workforce? It really does make sense when you look at it in perspective. More trained people means more people that know how to implement your system properly. More people implementing your systems means more people that will pick that solution over others when they’re offered. And that means more sales. And, hopefully, less support time spent by your organization because the trained people did the job right in the first place.

You can’t fault people for wanting to show off their training programs. CWNP just announced at Wi-Fi Trek 2018 that they’ve certified CWNE #300, Robert Boardman (@Robb_404). Does that mean that any future CWNEs won’t know what they’re doing compared to the first one, Devin Akin? Or does it mean that CWNP has hit critical mass with their certification program and their 900-page tome of wireless knowledge? I’d like to believe it’s the latter.

You can’t fault Cisco for their successes in getting people certified. Just like Novell and Microsoft, Cisco wants everyone installing their products to be trained. Which would you rather deal with? A complete novice who has no idea how the command line works? Or someone competent that makes simple mistakes that cause issues down the road? I know I’d rather deal with a semi-professional instead of a complete amateur.

The only way that we can get to a workforce that has pervasive knowledge of a particular type of technology is if the certification program expands. When someone claims they want to keep their numbers small, you should have a healthy bit of doubt. Either they don’t want to spend the money to expand their program or they don’t have the ability to expand it. Because a rising tide lifts all boats. When everyone knows more about your solutions, the entire community and industry benefit from that knowledge.

Tradition Is An Old Word

Another criticism of the CCIE today is that it doesn’t address the changing way we’re doing our jobs. Every month I hear people asking for a CCIE Automation or CCIE SDN or something like that. I also remember years ago hearing people clamoring for CCIE OnePK, so just take that with a grain of salt.

Why is the CCIE so slow to change? Think about it from the perspective of the people writing the test. It takes months to get single changes made to questions. It takes many, many months to get new topics added to the test via blueprints. And it could take at least two years (or more) to expand the number of topics tested by introducing a new track. So why would Cisco or any other company spend time introducing new and potentially controversial topics into one of their most venerable and traditional tests without vetting things thoroughly before finalizing them?

Cisco took some flak for introducing the CCIE Data Center with the Application Control Engine (ACE) module in version 1. Many critics felt that the solution was outdated and no one used it in real life. Yet it took a revision or two before it was finally removed. Imagine what would happen if something like that were to occur as someone was developing a new test.

Could you imagine the furor if Cisco had decided to build a CCIE OpenFlow exam? What would be tested? Which version would have been used? How would you test integration with non-Cisco devices? Which controller would you use? Why aren’t you testing this esoteric feature in 1.1 that hasn’t officially been deprecated yet? Why don’t you just forget it because OpenFlow is a failure? I purposely picked a controversial topic to highlight how silly it would have been to build an OpenFlow test, but feel free to attach that to the technology du jour, like IoT.


Tom’s Take

The CCIE is a bellwether. It changes when it needs to change. When the CCIE Voice became the CCIE Collaboration, it was an endorsement of the fact that the nature of communications was changing away from a focus on phones and more toward presence and other methods. When the CCIE Data Center was announced, Cisco formalized their plans to stay in the data center instead of selling a few servers and then exiting the market. The CCIE doesn’t change to suit the whims of everyone in the community that wants to wear a badge that’s shiny or has a buzzword on it. Just like the retired CCIE tracks like ISP Dial or Design, you don’t want to wear that yoke around your neck going into the future of technology.

I’m happy that Cisco has a force of CCIEs. I’m deeply honored to know quite a few of them going all the way back to Terry Slattery. I can tell you that every person that has earned their number has done so with the kind of study and intense concentration that is necessary to achieve this feat. Whether they get it through self-study, bootcamp practice, or good old fashioned work experience you can believe that, no matter what their number might be, they’re there because they want to be there.

What Makes a Security Company?

When you think of a “security” company, what comes to mind? Is it a software house making leaps in technology to save us from DDoS attacks or malicious actors? Maybe it’s a company that makes firewalls or intrusion detection systems that stand guard to keep the bad people out of places they aren’t supposed to be. Or maybe it’s something else entirely.

Tradition Since Twenty Minutes Ago

What comes to mind when you think of a traditional security company? What kinds of technology do they make? Maybe it’s a firewall. Maybe it’s an anti-virus program. Or maybe it’s something else that you’ve never thought of.

Is a lock company like Schlage a security company? Perhaps they aren’t a “traditional” IT security company but you can guarantee that you’ve seen their products protecting data centers and IDF closets. What about a Halon system manufacturer? They may not be a first thought for security, but you can believe that a fire in your data center is going to cause security issues. Also, I remember that I learned more about Halon and wet/dry pipe fire sprinkler systems from my CISSP study than anywhere else.

The problem with classifying security companies as “traditional” or “non-traditional” is that it doesn’t reflect the ways that security can move and change over the course of time. Even for something as cut-and-dried as anti-virus, tradition doesn’t mean a lot. Symantec is a traditional AV vendor according to most people. But the product that used to be called Norton Antivirus and the product suite that now includes it are worlds apart in functionality. Even though Symantec is “traditional”, what they do isn’t. And when you look at companies that are doing more advanced threat protection mechanisms like deception-based security or using AI and ML to detect patterns, the lines blur considerably.

But, it doesn’t obviate the fact that Symantec is a security company. Likewise, a company can be a security company even if they security isn’t their main focus. Like the Schlage example above, you can have security aspects to your business model without being totally and completely focused on security. And there’s no bigger example of this than a company like Cisco.

A Bridge Not Far Enough?

Cisco is a networking company, right? Or are they a server company now? Maybe they’re a wireless company? Or do they do cloud now? There are many aspects to their business model, but very few people think of them as a security company. Even though they have firewalls, identity management, mobile security, malware protection, VPN products, email and web security, DNS protection, and even threat detection. Does that mean they aren’t really a security company?

It could be rightfully pointed out that Cisco isn’t a security company because many of the technologies they have were purchased over the years from other companies. But does that mean that their solutions aren’t useful or maintained? As I was doing research for this post, a friend pointed out the story of Cisco MARS and how it was purchased and ultimately retired by Cisco. However, the Cisco acquisition of Protego that netted them MARS happened in 2004. The EOL announcement was in 2011, and the final end-of-support was in 2016. Twelve years is a pretty decent lifetime for any security product.

The other argument is that Cisco doesn’t have a solid security portfolio because they have trouble integrating their products together. A common criticism of large companies like Cisco or Dell EMC is that it is too difficult to integrate their products together. This is especially true in situations where the technologies were acquired over time, just like Cisco.

However, is the converse true? Are standalone products easier to integrate? Is it simpler to take solutions from six different companies and integrate them together in some fashion? I’d be willing to bet that, outside of robust API support, most people will find that integrating security products from different vendors is as difficult (if not more so) than integrating products from one vendor. Does Cisco have a perfect integration solution? No, they don’t. But why should they? Why should it be expected that companies that acquire solutions immediately burn cycles to make everything integrate seamlessly? Sure, that’s on the roadmap. But integration with other products is on everyone’s roadmap.

The last argument that I heard in my research is that Cisco isn’t a security company because they don’t focus on it. They’re a networking (or wireless or server) company. Yet, when you look at the number of people that Cisco has working on a product in a specific business unit, it can often be a higher headcount than some independent firms have working on their entire solution. Does that mean that Cisco doesn’t know what they’re doing? Or does it mean that individual organizations can have multiple focuses? That’s a question for the customers to answer.


Tom’s Take

I take issue with the definition of “traditional” versus non-traditional, for the simple reason that Apple is a traditional computer company and so is Wang Computers. Guess which one is still making computers? And even in the case of Apple, you could argue that their main line of business is mobile devices now. But does anyone dispute Apple’s ability to make a laptop? Would a company that does nothing but make laptops be a “better” computer company? The trap of labels like that is that they ignore a significant amount of business investment in favor of a quick and easy label. What makes a company a computer company or a security company isn’t how they label themselves. It’s what they do with the technology they have.

Security Is Bananas

I think we’ve reached peak bombshell report discussion at this point. It all started this time around with the big news from Bloomberg that China implanted spy chips into SuperMicro boards in the assembly phase. Then came the denials from Amazon and Apple and event SuperMicro. Then started the armchair quarterbacking from everyone, including TechCrunch. From bad sources to lack of technical details all the way up to the crazy conspiracy theories that someone at Bloomberg was trying to goose their quarterly bonus with a short sale or that the Chinese planted the story to cover up future hacking incidents, I think we’ve covered the entire gamut of everything that the SuperMicro story could and couldn’t be.

So what more could there be to say about this? Well, nothing about SuperMicro specifically. But there’s a lot to say about the fact that we were both oblivious and completely unsurprised about an attack on the supply chain of a manufacturer. While the story moved the stock markets pretty effectively for a few days, none of the security people I’ve talked to were shocked by the idea of someone with the power of a nation state inserting themselves into the supply chain to gain the kind of advantage needed to execute a plan to collect data. And before you scoff, remember we’re only four years removed from the allegation that the NSA had Cisco put backdoors into IOS.

Why are we not surprised by this idea? Well, for one because security is getting much, much better at what it’s supposed to be doing. You can tell that because the attacks are getting more and more sophisticated. We’ve gone from 419 scam emails being deliberately bad to snare the lowest common denominator to phishing attacks that fool some of the best and brightest out there thanks to a combination of assets and DNS registrations that pass the initial sniff test. Criminals have had to up their game because we’re teaching people how to get better at spotting the fakes.

Likewise, technology is getting better at nabbing things before we even see them. Take the example of Forcepoint. I first found out about them at RSA this year. They have a great data loss prevention (DLP) solution that keeps you from doing silly things like emailing out Social Security Numbers or credit card information that would violate PCI standards. But they also have an AI-powered analysis engine that is constantly watching for behavioral threats. If someone does this by accident once, it could just be a mistake. But a repeated pattern of behavior could indicate a serious training issue or even a malicious actor.
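To make that concrete, here is a minimal sketch of the kind of pattern matching a DLP engine runs against outbound content before it leaves the network. The regular expressions and the Luhn checksum below are generic illustrations I picked for the example, not Forcepoint’s actual detection logic:

```python
# Illustrative DLP-style scan: flag likely SSNs and card numbers in outbound text.
# The patterns and rules here are examples only, not any vendor's real detection logic.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")       # e.g. 123-45-6789
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # loose 13-16 digit runs

def luhn_valid(candidate: str) -> bool:
    """Return True if the digits pass the Luhn checksum used by payment cards."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_outbound(body: str) -> list[str]:
    """Return a list of findings that a policy engine could block or alert on."""
    findings = [f"possible SSN: {m}" for m in SSN_PATTERN.findall(body)]
    findings += [f"possible card number: {m.strip()}"
                 for m in CARD_PATTERN.findall(body) if luhn_valid(m)]
    return findings

print(scan_outbound("Card 4111 1111 1111 1111 and SSN 123-45-6789 attached."))
```

Raw pattern hits like these are only the starting point; the behavioral analysis the vendors talk about layers policy and repeated-pattern context on top of them.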

Forcepoint is in a category of solutions that are making the infrastructure smarter so we don’t have to be as vigilant. Sure, we’re getting much better at spotting things that don’t look right. But we also have a lot of help from our services. When Google can automatically filter spam and then tag the messages it does deliver as potentially phishing (proceed with caution), it helps me start my first read-through as a skeptic. I don’t have to exhaust my vigilance for every email that comes across the wire.

The Dark Side Grows Powerful Too

Just because the infrastructure is getting smarter doesn’t mean we’re on the road to recovery. It means the bad actors are now exploring new vectors for their trade. Instead of 419 or phishing emails they’re installing malware on systems to capture keystrokes. iOS 12 now has protection from fake software keyboards that could capture information when something is trying to act as a keyboard on-screen. That’s a pretty impressive low-level hack when you think about it.

Now, let’s extrapolate the idea that the bad actors are getting smarter. They’re also contending with more data being pushed to cloud providers like Amazon and Azure. People aren’t storing data on their local devices. It’s all being pushed around in Virginia and Oregon data centers. So how do you get to that data? You can’t install bad software on a switch or even a class of switches or even a single vendor, since most companies are buying from multiple vendors now or even looking to build their own networking stacks, ala Facebook.

If you can’t compromise the equipment at the point of resale, you have to get to it before it gets into the supply chain. That’s why the SuperMicro story makes sense in most people’s heads, even if it does end up not being 100% true. By getting to the silicon manufacturer you have a entry point into anything they make. Could you imagine if this was Accton or Quanta instead of SuperMicro? If there was a chip inside every whitebox switch made in the last three years? If that chip had been scanning for data or relaying information out-of-band to a nefarious third-party? Now you see why supply chain compromises are so horrible in their potential scope.

This Is Bananas

Can it be fixed? That’s a good question that doesn’t have a clear answer. I look at it like the problem with the Cavendish banana. The Cavendish is the primary variety of banana in the world right now. But it wasn’t always that way. The Gros Michel used to be the most popular all the way into the 1950s. That ended because of a disease that infected the Gros Michel and caused entire crops to rot and die. That could happen because bananas are not grown through traditional reproductive methods like other crops. Instead, they are propagated from cuttings of existing plants. In a way, that makes almost all bananas clones of each other. And if a disease affects one of them, it affects them all. And there are reports that the Cavendish is starting to show signs of a fungus that could wipe them out.

How does this story about bananas relate to security? Well, if you can’t stop bananas from growing everywhere, you need to take them on at the source. And if you can get into the source, you can infect them without hope of removal. Likewise, if you can get into the supply chain and start stealing or manipulating data at a low level, you don’t need to worry about all the crazy protections put in at higher layers. You’ll just bypass them all and get what you want.


Tom’s Take

I’m not sold on the Bloomberg bombshell about SuperMicro. The vehement denials from Apple and Amazon make this a more complex issue than we may be able to solve in the next couple of years. But now that the genie is out the bottle, we’re going to start seeing more and more complicated methods of attacking the merchant manufacturers at the source instead of trying to get at them further down the road. Maybe it’s malware that’s installed out-of-the-box thanks to a staging server getting compromised. Maybe it’s a hard-coded backdoor like the Xiamoi one that allowed webcams to become DDoS vectors. We can keep building bigger and better protections, but eventually we need to realize that we’re only one threat away from extinction, just like the banana.

The Why of Security

Security is a field of questions. We find ourselves asking all kinds of them all the time. Who is trying to get into my network? What are they using? How can I stop them? But I feel that the most important question is the one we ask the least. And the answer to that question provides the motivation to really fix problems while conserving the effort necessary to do so.

The Why’s Old Sage

If you’re someone with kids, imagine a conversation like this one for a moment:
Your child runs into the kitchen with a lit torch in their hands and asks “Hey, where do we keep the gasoline?”
Now, some of you are probably laughing. And some of you are probably imagining all kinds of crazy going on here. But I’m sure that most of you probably started asking a lot of questions like:
  • Why does my child have a lit torch in the house?
  • Why do they want to know where the gasoline is?
  • Why do they want to put these two things together?
  • Why am I not stopping this right now?
Usually, the rest of the Five Ws follow soon afterward. But Why is the biggest question. It provides motivation and understanding. If your child had walked in with a lit torch it would have triggered one set of responses. Or if they had asked for the location of combustible materials it might have elicited another set. But Why is so often overlooked in a variety of different places that we often take it for granted. Imagine this scenario:
An application developer comes to you and says, “I need you to open all the ports on the firewall and turn off the AV on all the machines in the building.”
You’d probably react with an immediate “NO”. You’d get cursed at and IT would live another day as the obstruction in “real development” at your company. As security pros, we are always trying to keep things safe. Sometimes that safety means we must prevent people from hurting themselves, as in the above example. But, let’s apply the Why here:
  • Why do they need all the firewall ports opened?
  • Why does the AV need to be disabled on every machine?
  • Why didn’t they tell me about this earlier instead of coming to me right now?
See how each Why question has some relevance? If you start asking, I’d bet you would figure some interesting things out very quickly. Such as why the developer doesn’t know what ports their application uses. Or why they don’t understand how AV heuristics are triggered by software that appears to be malicious. Or the value of communicating with the security team ahead of time about requests that are going to be big!
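To put a little code behind that first Why, here is a hedged sketch of how a developer (or the security team sitting with them) could find out which ports the application actually listens on, turning “open all the ports” into “open these two.” It assumes the third-party psutil package, and the process name is purely hypothetical:

```python
# Sketch: enumerate the TCP ports a named process is actually listening on.
# Assumes `pip install psutil`; may need elevated privileges to see other users' processes.
import psutil

def listening_ports(process_name: str) -> set[int]:
    """Return the TCP ports on which processes matching process_name are listening."""
    ports: set[int] = set()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_LISTEN or conn.pid is None:
            continue
        try:
            if process_name.lower() in psutil.Process(conn.pid).name().lower():
                ports.add(conn.laddr.port)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return ports

print(listening_ports("myapp"))   # hypothetical app name; might print {8080, 8443}
```

Armed with an answer like that, the firewall conversation changes from a flat “no” to a scoped change request.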

Digging Deeper

It’s always a question of motivation. More than networking or storage or any other facet of IT, security must understand Why. Other disciplines are easy to figure out. Increased connectivity and availability. Better data retention and faster recall. But security focuses on safety. On restriction. And allowing people to do things against their better nature means figuring out why they want to do them in the first place. Too much time is spent on the How and the What. If you look at the market for products, they all focus on that area. It makes sense at a basic level. Software designed to stop people from stealing your files is necessarily simple and focused on prevention, not intent. It does the job it was designed to do and no more. In other cases, the software could be built into a larger suite that provides other features and still not address the intent. And if you’ve been following along in security in the past few months, you’ve probably seen the land rush of companies talking about artificial intelligence (AI) in their solutions. RSA’s show floor was full of companies that took a product that did something last year and now magically does the same thing this year but with AI added in! Except, it’s not really AI. AI provides the basis for intent. Well, real AI does at least. The current state of machine learning and advanced analytics provides a ton of data (the what and the who) but fails to provide the intent (the why). That’s because Why is difficult to determine. Why requires extrapolation and understanding. It’s not as simple as just producing output and correlating. While machine learning is really good at correlation, it still can’t make the leap beyond analysis. That’s why humans are going to be needed for the foreseeable future in the loop. People provide the Why. They know to ask beyond the data to figure out what’s going on behind it. They want to understand the challenges. Until you have a surefire way of providing that capability, you’re never going to be able to truly automate any kind of security decision making system.

Tom’s Take

I’m a huge fan of Why. I like making people defend their decisions. Why is the one question that triggers deeper insight and understanding. Why concentrates on things that can’t be programmed or automated. Instead, why gives us the data we really need to understand the context of all the other decisions that get made. Concentrating on Why is how we can provide invaluable input into the system and ensure that all the tools we’ve spent thousands of dollars to implement actually do the job correctly.

Outing Your Outages

How are you supposed to handle outages? What happens when everything around you goes upside down in an instant? How much communication is “too much”? Or “not enough”? And is all of this written down now instead of being figured out when the world is on fire?

Team Players

You might have noticed this week that Webex Teams spent most of the week down. Hard. Well, you might have noticed if you used Microsoft Teams, Slack, or any other messaging service that wasn’t offline. Webex Teams went offline about 8:00pm EDT Monday night. At first, most people just thought it was a momentary outage and things would be back up. However, as the hours wore on and Cisco started updating the incident page with more info it soon became apparent that Teams was not coming back soon. In fact, it took until Thursday for most of the functions to be restored from whatever knocked them offline.

What happened? Well, most companies don’t like to admit what exactly went wrong. For every CloudFlare or provider that posts full disclosures of outages on their site, there are many more companies that will eventually release a statement with the least amount of technical detail possible to avoid any embarrassment. Cisco is currently in the latter category, with most guesses landing on some sort of errant patch that mucked things up big time behind the scenes.

It’s easy to see when big services go offline. If Netflix or Facebook are down hard then it can impact the way we go about our lives. On the occasions when our work tools like Slack or Google Docs are inoperable it impacts our productivity more than our personal pieces. But each and every outage does have some lessons that we can take away and learn for our own IT infrastructure or software operations. Don’t think that companies that are that big and redundant everywhere can’t be affected by outages regularly.

Stepping Through The Minefield

How do you handle your own outage? Well, sometimes it does involve eating some humble pie.

  1. Communicate – This one is both easy and hard. You need to tell people what’s up. You need to let everyone know things aren’t working right and you’re working to make them right. Sometimes that means telling people exactly what’s affected. Maybe you can log into Facebook but not Chat or Messages. Tell people what they’re going to see. If you don’t communicate, you’re going to have people guessing. That’s not good.
  2. Triage – Figure out what’s wrong. Make educated guesses if nothing stands out. Usually, the culprits are big changes that were just made or perhaps there is something external that is affecting your performance. The key is to isolate and get things back as soon as possible. That’s why big upgrades always have a backout plan. In the event that things go sideways, you need to get back to functional as soon as you can. New features that are offline aren’t as good as tried-and-true stuff that’s reachable.
  3. Honest Post-Mortem – This is the hardest part. Once you have things back in place, you have to figure out why they broke. This is usually where people start running for the hills and getting evasive. Did someone apply a patch at the wrong time? Did a microcode update get loaded to the wrong system? How can this be prevented in the future? The answers to these questions are often hard to get because the people that were affected and innocent often want to find the guilty parties and blame someone so they don’t look suspect. The guilty parties want to avoid blame and hide in public with the rest of the team. You won’t be able to get to the bottom of things unless you find out what went wrong and correct it. If it’s a process, fix it. If it’s a person, help them. If it’s a strange confluence of unrelated events that created the perfect storm, make sure that can never happen again.
  4. Communicate (Again) – This is usually where things fall over for most companies. Even the best ones get really good at figuring out how to prevent problems. However, most of them rarely tell anyone else what happened. They hide it all and hope that no one ever asks about anything. Yet, transparency is key in today’s world. Services that bounce up and down for no reason are seen as unstable. Communicating about that volatility is the only way you can make sure that people have faith that they’re going to stay available. Once you’ve figured out what went wrong and who did it, you need to tell someone what happened. Because the alternative is going to be second guessing and theories that don’t help anyone. A rough sketch of the kind of record that keeps all four steps honest follows this list.
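Here is that sketch of a structured incident record. Every class, field, and message below is my own invention for illustration, not a reference to any particular incident-management tool:

```python
# Hypothetical incident record; the fields mirror the four steps above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentUpdate:
    timestamp: datetime
    status: str    # e.g. "investigating", "identified", "monitoring", "resolved"
    message: str   # exactly what users will see

@dataclass
class Incident:
    title: str
    affected_services: list[str]
    updates: list[IncidentUpdate] = field(default_factory=list)
    root_cause: str = ""                                  # filled in during the post-mortem
    corrective_actions: list[str] = field(default_factory=list)

    def communicate(self, status: str, message: str) -> None:
        """Steps 1 and 4: record an update that gets published to users."""
        self.updates.append(IncidentUpdate(datetime.now(timezone.utc), status, message))

# Open the incident, post updates as triage progresses, then close it out honestly.
outage = Incident("Messaging service offline", ["chat", "meetings"])
outage.communicate("investigating", "Chat and meetings are unavailable. We are investigating.")
outage.communicate("identified", "A recent change has been rolled back; functionality is returning.")
outage.root_cause = "Errant patch applied to production without a tested backout plan."
outage.corrective_actions.append("Require a tested backout plan for every production change.")
outage.communicate("resolved", "All services restored. Post-mortem to follow.")
```

The structure matters less than the habit: a record like this forces you to write down what users were told, what actually broke, and what will keep it from happening again.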

Tom’s Take

I don’t envy the people at Cisco that spent their entire week working to get Webex Teams back up and running. I do appreciate their work. But I want to figure out where they went wrong. I want to learn. I want to say to myself, “Never do that thing that they did.” Or maybe it’s a strange situation that can be avoided down the road. The key is communication. We have to know what happened and how to avoid it. That’s the real learning experience when failure comes around. Not the fix, but the future of never letting it happen again.

Writing Is Hard

Writing isn’t the easiest thing in the world to do. There are a lot of times that people sit down to pour out their thoughts onto virtual paper and nothing happens. Or they spend hours and hours researching a topic only to put something together that falls apart because of assumptions about a key point that aren’t true.

The world is becoming more and more enamored with other forms of media. We like listening to podcasts instead of reading. We prefer short videos instead of long articles. Visual aids beat a wall of text any day. Even though each of these content types has a script it still feels better having a conversation. Informal chat beats formal prose every day.

Written Wringers

I got into blogging because my typing fingers are way more eloquent than the thoughts running through my brain. I had tons of ideas that I needed to put down on paper and the best way to do that was to build a simple blog and get to it. It’s been eight years of posting and I still feel like I have a ton to say. But it’s not easy to make the words flow all the time.

I find that my blogging issues boil down into two categories. The first is when there is nothing to write about. That’s how most people feel. They see the same problems over and over and there’s nothing to really discuss. The second issue is when a topic has been absolutely beaten to a pulp. SD-WAN is a great example. I’ve written a lot about SD-WAN in a bunch of places. And as exciting as the technology is for people implementing it for the first time, I feel like I’ve said everything there is to say about SD-WAN. I know that because it feels like the articles are all starting to sound the same.

There are some exciting new technologies on the horizon. 802.11ax is one of them. So too is the new crop of super fast Ethernet. We even have crazy stuff like silicon photonics and machine learning and AI invading everything we do. There’s a lot of great stuff just a little ways out there. But it’s all going to take research and time. And learning. And investment. And that takes time to suss everything out. Which means a lot of fodder for blog posts as people go through the learning process.

Paper Trail

The reason why blogging is still so exciting for me is because of all the searches that I get that land in my neighborhood. Things like fixing missing SFPs or sending calls directly to voicemail. These are real problems that people have that need to be solved.

As great as podcasts and video series are, they aren’t searchable. Sure, the show notes can be posted that discuss some of the topics in general. But those show notes are basically a blog post without prose. They’re a bullet point list of reference material and discussion points. That’s where blogs are still very important. They are the sum total of knowledge that we have in a form that people can see.

If you look at Egyptian hieroglyphs or even Ancient Greek writings you can see what their society was like. You get a feel for who they were. And you can read it because it was preserved over time. The daily conversations didn’t stand the test of time unless they were committed to memory somehow. Sure, podcasts and videos are a version of this as well, but they’re also very difficult to maintain.

Think back to all the video that you have that was recorded before YouTube existed. Think about all the recordings that exist on VHS, Super8, or even reel-to-reel tape. One of the biggest achievements of humanity was the manned landing on the moon in 1969. Now, just 50 years later we don’t have access to the video records of that landing. A few grainy copies of the records exist, but not the original media. However, the newspaper articles are still preserved in both printed and archive form. And those archives are searchable for all manner of information.


Tom’s Take

Written words are important. Because they will outlast us. As much as we’d like to believe that our videos are going to be our breakthrough and those funny podcasts are going to live forever, the truth is that people are going to forget our voices and faces long after we’re gone. Our words will live forever though. Because of archiving and searchability future generations will be able to read our thoughts just like we read those of philosophers and thinkers from years past. But in order to do that, we have to write.

A Matter of Perspective

Have you ever taken the opportunity to think about something from a completely different perspective? Or seen someone experience something you have seen through new eyes? It’s not easy for sure. But it is a very enlightening experience that can help you understand why people sometimes see things entirely differently even when presented with the same information.

Overcast Networking

The first time I saw this in action was with Aviatrix Systems. I first got to see them at Cisco Live 2018. They did a 1-hour presentation about their solution and gave everyone an overview of what it could do. For the networking people in the room, it was pretty straightforward. Aviatrix did a lot of the things that networking should do. It was just in the cloud instead of in a data center. It’s not that Aviatrix wasn’t impressive. It’s that networking people have a very clear idea of what a networking platform should do.

Fast forward two months to Cloud Field Day 4. Aviatrix presented again, only this time to a group of cloud professionals. The message was a little more refined from their first presentation. They included some different topics to appeal more to a cloud audience, such as AWS encryption or egress security. The reception from the delegates was the difference between night and day. Rather than just being satisfied with the message that Aviatrix put forward, the Cloud Field Day delegates were completely blown away! They loved everything that Aviatrix had to say. They loved the way that Aviatrix approached a problem they had seen and couldn’t quite solve: how to extend networking into the cloud and take control of it.

Did Aviatrix do something different? Why was the reaction between the two groups so stark? How did it happen this way? I think it is in part because networking people talk to a networking company and see networking. They find the things they expect to find and don’t look any deeper. But when the same company presents to an audience that hasn’t had networking on the brain for their entire career, it’s something entirely different. While a networking audience may understand the technology, a cloud audience may understand how to make it work better for their needs because they can see the advantages. Perspective matters here because people exposed to new ideas find ways to make them work that can only be seen with fresh eyes.

Letting Go of Wires

The second time I saw an example of perspective at play was at Mobility Field Day 3 with Arista Networks. Arista is a powerhouse in the data center networking space. They have gone up against Cisco head-to-head in a lot of deals. They have been gaining market share from Cisco in a narrow range of products focused on the data center. But they’re also now moving into campus switching as well as wireless with the acquisition of Mojo Networks.

When Arista stepped up to present at Mobility Field Day 3, the audience wasn’t a group of networking people that wanted to hear about CloudVision or 400GbE or even EOS. The audience of wireless and mobility professionals wanted to hear how Arista is going to integrate the Mojo product line into their existing infrastructure. The audience was waiting for a message that everything would work together and the way forward would be clear. I don’t know that they heard that message, but it wasn’t because of anything that Arista did on purpose.

Arista is very much trying to understand how they’re going to integrate Mojo Networks into what they do. They’re also very focused on the management and control plane of the access points. These are solved problems in the wireless world right now. When you talk to a wireless professional about centralized management of the device or a survivable control plane that can keep running if the management system is offline, they’ll probably laugh. They’ve been able to experience this for the past several years. They know what SDN should look like because it’s the way that CAPWAP controllers have always operated. Wireless pros can tell you the flaws behind backhauling all your traffic through a controller and why there are much better options to keep from overwhelming the device.

Wireless pros have a different perspective from networking people right now. Things that networking pros are just now learning about are old news to wireless people. Wireless pros are focused more on the radio side of the equation than the routing and switching side. That perspective gives the wireless crowd a very narrow focus on solving some very hard problems. But it can also make them miss the point that their expertise is invaluable in helping both networking pros and networking companies see how to take the best elements of wireless control mechanisms and implement them in a way that benefits everyone.


Tom’s Take

For me, the difficulty in seeing things differently doesn’t come from having an open mind. Instead, it comes from the fact that most people don’t have a conception of anything outside their frame of reference. We can’t really comprehend things we can’t conceive of. What you need to do to really understand what it feels like to be in someone else’s shoes is have someone show you what it looks like to be in them. Observe people learning something for the first time. Or see how they react to a topic you know well. Odds are good you’ll come away knowing it better because of how they helped you see it.

A Review of Ubiquiti Wireless

About six months ago, I got fed up with my Meraki MR34 APs. They ran just fine, but they needed attention. They needed licenses. They needed me to pay for a dashboard I rarely used yet had to keep up yearly. And that dashboard had most of the “advanced” features hidden away under lock and key. I was beyond frustrated. I happened to be at the Wireless LAN Professionals Conference (WLPC) and told Darrell DeRosia (@Darrell_DeRosia) about my plight. His response was pretty simple:

“Dude, you should check out Ubiquiti.”

Now, my understanding of Ubiquiti up to that point was practically nothing. I knew they sold into the SMB side of the market. They weren’t “enterprise grade” like Cisco or Aruba or even Meraki. I didn’t even know the specs on their APs. After a conversation with Darrell and some of the fine folks at Ubiquiti, I replaced my MR34s with a UniFi AP-AC-HD and an AP-AC-InWall-Pro. I also installed one of their UniFi Security Gateways to upgrade my existing Linksys connection device.

You may recall my issue with redundancy and my cable modem battery when I tried to install the UniFi Security Gateway for the first time. After I figured out how to really clear the ARP entries in my cable modem, I got to work. I was able to install the gateway and get everything back up and running on the new Ubiquiti APs. How easy was it? Well, after renaming the SSID on the new APs to match the old one, every device in the house reconnected without anyone having to touch a thing. As far as they knew, nothing changed. Except for the slightly brighter blue light in my office.

I installed the controller software on a spare machine I had running. No more cloud controllers for me. I knew that I could replicate those features with a Ubiquiti Cloud Key, but my need to edit wireless settings away from home was pretty rare.

Edit: As pointed out by my fact checker Marko Milivojevic, you don’t need a Cloud Key for remote access. The Cloud Key functions more as a secure standalone controller instance that has remote access capabilities. You can still run the UniFi controller on lots of different servers, including dedicated rack-mount gear or a Mac Mini (like I have).

I logged into my new wireless dashboard for the first time.

It’s lovely! It gives me all the info I could want for my settings and my statistics. At a glance, I can see clients, devices, throughput, and even a quick speed test of my WAN connection. You’re probably saying to yourself right now “So what? This kind of info is tablestakes, right?” And you wouldn’t be wrong. But, the great thing about Ubiquiti is that its going to keep working after 366 days of installation without buying any additional licenses. It’s not going to start emailing me telling me it’s time to sink a few hundred dollars into keeping the lights on. That’s a big deal for me at home. Enterprises may be able to amortize license costs over the long haul but small businesses aren’t so lucky.

The Ubiquiti UniFi dashboard also has some other great things. Like a settings page.

Why is that such a huge deal? Well, Ubiquiti doesn’t remove functionality from the dashboard. They put it where you can find it. They make it easy to tweak settings without wishing on a star. They want you to use the wireless network the way you need to use it. If that means enabling or disabling features here and there to get things working, so be it. Those features aren’t locked away behind a support firewall that needs an act of Congress to access.

But the most ringing endorsement of Ubiquiti for me? Zero complaints in my house. Not once has anyone said anything about the wireless. It just “works”. With all the streaming and YouTube watching and online video game playing that goes on around here, it’s pretty easy to saturate a network. But the Ubiquiti APs have kept up with everything that’s been thrown at them and more.

I also keep forgetting that I even have them installed. That’s a good thing. Because I don’t spend all my time tinkering with them they tend to fade away into the background of the house. Even the upstairs in-wall AP is chugging right along and serving clients with no issues. Small enough to fit into a wall box, powerful enough to feed Netflix for a whole family.


Tom’s Take

I must say that I’m very impressed by Ubiquiti. My impressions about their suitability for SMB/SME were all wrong. Thanks to Darrell, I now know that Ubiquiti is capable of handling a lot of things that I considered “enterprise only” features. Even Lee Hutchinson at Ars Technica is a fan of Ubiquiti at home. I also noticed that the school my kids attend installed Ubiquiti APs over the summer. It looks like Ubiquiti is making in-roads into SMB/SME and education. And it’s a very workable solution for what you need from a wireless system. Add in the fact that the software doesn’t require yearly upkeep and it makes all the sense in the world for someone that’s not ready to commit to the treadmill of constant licensing.