
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Why Are These Slides Marked Confidential?


Imagine you’re sitting in a presentation. You’re hearing some great information from the presenter and you can’t wait to share it with your colleagues or with the wider community. You are just about to say something when you look in the corner of the slide and you see…

Confidential Information

You pause for a moment and ask the presenter if this slide is a secret or if you should consider it under NDA. They respond that this slide can be shared with no restrictions and the information is publicly available. Which raises the question: Why is a public slide marked “confidential”?

I Fought The Law

The laws that govern confidential information are legion. Confidential information is a bit different from copyrighted information or intellectual property that has been patented. In most cases, confidential information is treated as a trade secret. Divulging a trade secret can be harmful, since a trade secret can’t be patented.

A great example is the formula for Coca-Cola. If they tried to patent it they would have to write down all the ingredients. While that would protect the very specific formulation of their drink it would also allow their competitors to create something extremely similar with a few changes and create a viable competitor. Coca-Cola chooses to protect this information by ensuring it isn’t widely known. It ranks right up there with nuclear launch codes and Star Search results.

How does the concept of trade secrets and confidential information apply to slides? Well, one of the provisions of confidential information is that distribution must be controlled somehow. This means that you can’t just hand out information to anyone walking by on the street and hope that it stays confidential. You have to control distribution through confidentiality agreements and non-disclosure agreements (NDAs).

Most of the time when you see slides marked “confidential” you have implicitly agreed to some kind of confidentiality agreement. You are either covered by an NDA from your employer or from the event you are attending. Even if you didn’t sign an agreement, it can still be argued in a court of law that you were invited to the presentation, which means the presenter was selective about who could attend. That should meet the requirements of protecting distribution of the information.

I Shouldn’t Have Said That

The other reason you see slides prominently marked as confidential is that the law says they have to be marked that way to be protected. A company can’t release information without a confidential marking and then suddenly decide after the fact that said information should have been confidential. Could you imagine a world where companies routinely tried to remove sensitive information from public knowledge because it isn’t flattering? What if they could use an ex post facto declaration to restrict distribution?

Confidential information has to be treated and marked as such from the very beginning to qualify for protection. In order to make sure that there is no chance for a slip up most companies will mark anything remotely sensitive to ensure it won’t come back to bite them later.

But why put the confidential marking on slides that you’re going to show to the world? What if those slides get uploaded to the Internet and shared all over the world, as often happens? What purpose could it serve?

The reason to mark slides as confidential is to make sure you can restrict their use whenever you want. Rarely are slides uploaded by a company with a confidential marking. In order for something to be uploaded it has to be cleared through a legal department. So if there are slides out there that exist with a confidential marking it’s more likely someone uploaded them without explicit permission. Which isn’t a bad thing in general.

What if a competitor gets a copy of the slides and starts using the information? Or better yet, what if they use it in a marketing campaign against the company?

If the slide is marked “confidential”, the company can use legal means to remove the information or disallow its use. Rather than just complaining or fighting a marketing battle, heavier means can be used to take down anything embarrassing. A confidential marking is also a more lasting way to bar anyone from repeating anything listed on the slide.


Tom’s Take

I agree that the whole legal need to label everything short of your underwear as “confidential” is just plain stupid. This is the same legal system that says trademarks must be defended to be protected. But the rules are the rules. Which means that any company that wants to protect confidential information must mark it that way from the genesis of the concept. And having the ability to protect those assets also means dealing with misleading marks long after the information has entered the wild. Just make sure you ask the right questions before divulging anything that could be considered confidential.


SDN and the Trough Of Understanding


An article published this week referenced a recent Hype Cycle diagram (pictured above) from the oracle of IT – Gartner. While the lede talked a lot about the apparent “death” of Fibre Channel over Ethernet (FCoE), there was also a lot of time devoted to discussing SDN’s arrival at the Trough of Disillusionment. Quoting directly from the oracle:

Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.

As SDN approaches this dip in the Hype Cycle it would seem that the steam is finally being let out of the Software Defined Bubble. The Register article mentions how people are going to leave SDN by the wayside and jump on the next hype-filled networking idea, likely SD-WAN given the amount of discussion it has been getting recently. Do you know what this means for SDN? Nothing but good things.

Software Defined Hammers

Engineers have a chronic case of Software Defined Overload. SD-anything ranks right up there with Fat Free and New And Improved as the Most Overused Marketing Terms. Every solution released in the last two years has been software defined somehow. Why? Because that’s what marketing people think engineers want. Put Software Defined in the product and people will buy it hand over fist. Guess what Little Tommy Callahan has to say about that?

There isn’t any disillusionment in this little bump in the road. Quite the contrary. This is where the rubber meets the road, so to speak. This is where all the pretenders to the SDN crown find out that their solutions aren’t suited for mass production. Or that their much-vaunted hammer doesn’t have any nails to drive. Or that their hammer can’t drive a customer’s screws or rivets. And those pretenders will move on to the next hype bubble, leaving the real work to companies that have working solutions and real products that customers want.

This is no different than every other “hammer and nail” problem from the past few decades of networking. Whether it be ATM, MPLS, or any one of a dozen “game changing” technologies, the reality is that each of these solutions went from being the answer to every problem to being a specific solution for specific problems. Hopefully we’ve gotten SDN to this point before someone develops the software defined equivalent of LANE.

The Software Defined Road Ahead

Where does SD-technology go from here? Well, without marketing whipping everyone into a Software Defined Frenzy, the future is whatever developers want to make of it. Developers that come up with solutions. Developers that integrate SDN ideas into products and quietly sell them for specific needs. People that play the long game rather than hope that they can take over the world in a day.

Look at IPv6. It solves so many problems we have with today’s Internet. Not just IP exhaustion issues either. It solves issues with security, availability, and reachability. Yet we are just now starting to deploy it widely thanks to the panic of the IPocalypse. IPv6 did get a fair amount of hype twenty years ago when it was unveiled as the solution to every IP problem. After years of mediocrity and being derided as unnecessary, IPv6 is poised to finally assume its role.

SDN isn’t going to take nearly as long as IPv6 to come into play. What is going to happen is a transition away from Software Defined as the selling point. Even today we’re starting to see companies move away from SD labeling and instead use more specific terms to help customers understand what’s important about the solution and how it will help customers. That’s what is needed to clarify the confusion and reduce fatigue.


TECH.unplugged And Being Present


I wanted to let everyone know that I’m going to be taking part in an excellent event being put on by my friend Enrico Signoretti (@ESignoretti) this September. TECH.unplugged is a jam-packed day of presentations from people that cover storage, computing, and in my case networking. We’re getting together to share knowledge and discuss topics of great interest to the IT community. As excited as I am to be taking part, I also wanted to take a few moments to discuss why events like this are important to the technology community.

WORM Food

There’s no doubt that online events have become the standard in recent years. It’s much more likely to find an event that offers streaming video, virtual meeting rooms, and moderated discussions taking place in a web browser. The costs of travel and lodging are far higher than they were during the recession days of yore. Finding a meeting room that works with your schedule is even harder. It’s much easier to spin up a conference room in the cloud and have people dial in to hear what’s going on.

For factual information, such as teaching courses, this approach works rather well. That’s where the magic of pre-recording comes into play. Write once, read many. Delivering information like this cuts down on time spent with the logistics of organization and allows the viewer to watch on-demand. And questions that come up can be handled with FAQs or community discussion on a small scale. Again, this works best for the kinds of content that are not easily debated.

Present And Accounted For

What about content that isn’t as cut-and-dried? Hot topics that are going to have lots of questions or opinions? How do you handle an event where the bulk of the time is spent having a discussion with peers instead of delivering material?

Virtual solutions are great for multicasting. When everyone is watching one topic being presented and doing very little interacting everything works just fine. The system starts to break down when those people try to talk to one another. Do you use the general channel? Private messages? Have you been silenced by the organizer before you try to ask a question? What if you want to discuss a topic covered five minutes ago?

Nothing beats a face-to-face conversation for actual discussion. There’s a dynamic that can’t be matched when you get ten people in a room and give them a prompt to start talking about something. There is usually lively debate and sharing of viewpoints. Someone is going to share a personal experience or be the voice of reason. Still others will play the devil’s advocate or be a contrarian. Those are concepts that are hard to replicate when screen names take the place of a nametag.

Another important part of being present for events like this is meeting like-minded people and engaging them in real conversation. In the world of social media, we often form relationships with people in the industry without ever having met them. While that does make it easy to build a network of people in the community to talk to, it doesn’t let you hear someone speak or engage them in a meaningful conversation that runs longer than 100 characters at a time or a thread of nested comments.

There’s something magical about having in-person discussions. It is a very different thing to defend your opinion when looking someone in the eyes versus behind a keyboard. Without instant access to search engines you need to know the evidence to support your opinion rather than relying on someone else to do it for you. When you prove your point in a real life meetup people remember being there.


Tom’s Take

Virtual meetings are great for some specific things. But you can’t beat the importance of being around people and talking about something. Being present for an event makes it have much more of an impact. I’ve heard from countless people telling me how Cisco Live feels so much different when you’re there because of the people you are around. There’s a reason why Tech Field Day is an in-person event. Because you can’t beat the magic of being around other like-minded people to discuss things.

Be sure to check out TECH.unplugged and see the list of speakers for the September event. And if you just happen to be in Amsterdam be sure to sign up (it’s free)! We want you there!

The Score Is High. Who’s Holding On?


If you haven’t had the chance to read Jeff Fry’s treatise on why the CCIE written should be dropped, do it now. He raises some very valid points about relevancy and continuing education and how the written exam is approaching irrelevancy as a prerequisite for lab candidates. I’d like to approach another aspect of this whole puzzle, namely the growing need to get that extra edge to pass the cut score.

Cuts Like A Knife

Every standardized IT test has a cut score, or the minimum necessary score required to pass. There is a surprising amount of work that goes into calculating a cut score for a standardized test. Too low and you end up with unqualified candidates being certified. Too high and you have a certification level that no one can attain.
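One widely used way to set a cut score is the modified Angoff method: a panel of subject-matter experts estimates, question by question, the probability that a minimally qualified candidate answers correctly, and those estimates are averaged per question and summed. A rough sketch (the judges and ratings below are invented for illustration, not from any real exam):

```python
# Sketch of a modified Angoff cut-score calculation. Each judge
# estimates, per question, the probability that a minimally qualified
# candidate answers correctly. Estimates are averaged per question,
# then summed across questions to give the raw cut score.

def angoff_cut_score(ratings):
    """ratings: one list per judge, one probability per question."""
    num_items = len(ratings[0])
    cut = 0.0
    for item in range(num_items):
        item_estimates = [judge[item] for judge in ratings]
        cut += sum(item_estimates) / len(item_estimates)
    return cut

# Three judges rating a five-question exam (made-up probabilities).
judges = [
    [0.90, 0.70, 0.60, 0.80, 0.50],
    [0.80, 0.60, 0.70, 0.90, 0.40],
    [0.85, 0.65, 0.65, 0.85, 0.45],
]

raw_cut = angoff_cut_score(judges)   # expected questions correct: 3.45
print(round(raw_cut / 5 * 1000))     # scaled to a 1000-point exam: 690
```

The real process adds rounds of discussion and impact data on top of this arithmetic, but the core idea is that the cut score is built up from per-question judgments rather than picked out of thin air.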

The average cut score for a given exam level tends to rise as time goes on. This has a lot to do with the increasing depth of potential candidates as well as the growing average of scores from those candidates. Raising the score with each revision of the test guarantees you have the best possible group representing that certification. It’s like having your entire group be members of the honor roll.

A high cut score ensures that unqualified personnel are washed out of the program quickly. If you take a test with a cut score of 800 and you score a 500, you quickly know that you need to study quite a bit more before you’re ready to attempt the exam again. You might even say to yourself that you don’t know the material in any kind of depth to continue your quest for certification.

What happens if you’re just below the cut score? If you miss the mark by one question or less? How much more studying can you do? What else do you need to know? Sure, you can look at the exam and realize that there are many, many more questions you can answer correctly to hit the right score. But what if the passing score is absurdly high?

Horseshoes and Hand Grenades

I believe the largest consumer of purloined test questions is not the candidate that is completely clueless about a subject. Instead, the largest market of these types of services is the professional that has either missed the mark by a small margin or is afraid they will not pass even after hours of exhaustive study.

Rising cut scores lead to panic during exams. Why was a 790 good enough last time but now I need an 850 to pass? It’s easy to start worrying that your final score may fall into that gray area that will leave you lacking on the latest attempt. What happens if you miss the mark with all of the knowledge that you have obtained?

Those are the kinds of outcomes that drive people to invest in “test aids”. The lure is very compelling. Given the choice between failing an exam that costs $400 or spending a quarter of that to have a peek at what might be on the test, what is stopping the average test taker besides morality? What if your job depended on passing that exam? Now that multi-hundred dollar exam becomes a multi-thousand dollar decision.

Now we’re not talking about a particular candidate’s desire to fleece a potential employer or customer about knowledge. We’re talking about good old fashioned fear. Fear of failure. Fear of embarrassment. Fear of losing your livelihood because of a test. And that fear is what drives people to break the rules to ensure success.

Cut Us Some Slack

The solution to this issue is complicated. How can you ensure the integrity of a testing program? Worse yet, how can you stem the rising tide of improper behavior when it comes to testing?

The first thing is to realize what drives this behavior. Should a test like the CCIE written have higher and higher cut scores to eliminate illicit behavior? Is that really the issue here? Or is it more about the rising cut score itself causing a feedback loop that drives the behavior?

Companies need to take a hard look at their testing programs to understand what is going on with candidates. Are people missing the mark widely? Or are they coming very close without passing? Are the passing scores in the 99th percentile? Or barely above the mark? Adjustments in the cut score should happen both up and down.
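That kind of distribution check is simple to run. As a toy example (the scores below are invented), counting how many failures landed within ten points of the cut tells you whether the bar or the candidates are the problem:

```python
# Invented score distribution checked against an 800-point cut score.
# A high ratio of near misses to total failures suggests the cut
# score, not candidate knowledge, is what's washing people out.
scores = [650, 720, 795, 798, 810, 640, 792, 905, 788, 799]
cut = 800

failures = [s for s in scores if s < cut]
near_misses = [s for s in failures if s >= cut - 10]

print(f"{len(near_misses)} of {len(failures)} failures "
      f"were within 10 points of the cut")
```

With half the failures sitting a question or two below the line, lowering the cut slightly and watching the results is a defensible experiment.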

It’s easy to look at testing groups and say, “If you just studied a bit harder, you’d hit this impossibly high mark.” It’s also very easy to look at scores and say, “We see that many of you are missing the mark by less than ten points. Let’s lower that and see how things go from here.”

Certification programs are very worried about diluting the pool of certified candidates. But is having more members of the group with scores within a question or two of passing preferable to having a group with absurdly high passing scores thanks to outside help?


Tom’s Take

I’ve taken exams with a 100% cut score. It’s infuriating to think that even a single wrong answer could cost you an entire exam. It’s even worse if you are financing the cost of your exam on your own. Fear of missing that mark can drive people to do lots of crazy things.

I’m not going to say that companies like Cisco need to lower the cut scores of exams to unrealistically low levels. That would cheapen the certifications that people have already earned. What needs to happen is that Cisco and other certification bodies need to learn what they are trying to accomplish with their programs and adjust all the parameters of the tests to accomplish those goals.

Perhaps raising the cut scores to more than 900 points isn’t the answer. Maybe instead more complex questions or more hands-on simulations are required to better test the knowledge of the candidates. These are better solutions that take time and research. They aren’t the false panacea of raising the passing score. The rising tide can’t be fixed by making the buoys float just a little higher.


Objectivity Never Rests


Being an independent part of the IT community isn’t an easy thing. There is a lot of writing involved and an even greater amount of research. For every word you commit to paper there is at least an hour of taking phone calls and interviewing leaders in the industry about topics. The rewards can be legion. So can the pitfalls. Objectivity is key, yet that is something where entire communities appear to be dividing.

Us Or Them

Communities are complex organisms with their own flow and feel. What works well in one community doesn’t work well in another. Familiarity with one concept doesn’t immediately translate to another. However, one thing that is universal across all communities is the polarization between extremes.

For instance, in the networking community this polarization is best characterized by the concept of “ABC – Anything But Cisco”. Companies make millions selling Cisco equipment every year. Writers and speakers can make a very healthy career from covering Cisco technologies. And yet there are a large number of companies and people that choose to use other options. They write about Juniper or install Brocade. They spend time researching Cumulus Linux or Big Switch Networks.

Knowing a little about many things is a great thing. There is no way I could have done my old VAR job had I only known Cisco gear. But when that specialization is taken to an extreme, you get the mentality that anything or anyone involved in the opposite camp must be wrong on principle. It does happen that some choose to ignore all other things at their own peril. Still others are branded as “haters” not because they truly hate a position but because others have taken comments and pushed them beyond their meaning to an extreme to serve as a comparison point.

Think a bit about the following situations that have been mentioned to me in recent months and look at what the perception is in certain communities:

  • Cisco vs. Not Cisco
  • Cisco vs. VMware
  • Cisco vs. Whitebox
  • VMware vs. OpenStack
  • VMware vs. Docker
  • EMC vs. Not EMC

The list could go on for many more entries. The point is that people have drawn “battle lines” in the industry around companies and concepts to provide contrast for positions.

Objectivity In Motion

How does the independent influencer cope with all these challenges to objectivity? It’s not unlike navigating a carpet full of Lego bricks with no shoes on.

The first important step is to avoid the trap of being pigeonholed as a “hater”. That’s easier than it sounds. Simply covering one technology or vendor isn’t going to cause you to fall into that trap. If someone writes a lot about Juniper, I simply assume they spend the majority of their time with Juniper gear. The only time they cross the line into the territory of anti-Someone is through calculated commentary to that effect.

The other important step in reference to the above is to keep your commentary on point. Petty comments like “that’s a stupid idea” or “no one in their right mind would do it like that” aren’t constructive and lead to labels of “hater”. The key to criticism is to keep it constructive. Why is it a stupid idea? Why would someone choose to do it differently? These are ways to provide contrast without relying on generalizing to get your point across.

The third and most important way to avoid losing objectivity is to keep the discussion focused on things and ideas and not people. As soon as you start attacking people and criticizing them your objectivity will always be called into question. For example, a few years ago I wrote a review of a short book that Greg Ferro (@EtherealMind) wrote about blogging. I disagreed with many of his points based on my own experiences. In my post, I never attacked Greg or called his blogging ability into question. Instead, I addressed his points and provided my own perspective. Greg and I have had many beers since then without wanting to choke each other, so I think we’re still friends. But more importantly, we’re still objective about blogging even though we have different opinions.


Tom’s Take

Objectivity is hard to gain and easy to lose. It’s also easy to have it taken from you by people that feel you’ve lost it. It wouldn’t be a stretch to look at my last blog post about Meraki and assume that I “hate” them based on my comments. But if you read through what I wrote, I never say that I hate the company or the people. Instead, I disagree with a choice they have made with their software. I still feel my objectivity is intact. If Meraki decides tomorrow to implement some of my ideas or something similar, I will be more than happy to tell everyone about it.

You can never stop looking at your own objectivity. When you get complacent you have lost. You need to constantly ask yourself why you are writing or speaking about something and how objective you are. If you are the first person to question your own objectivity it will be much easier to answer those that question it later.

Meraki Will Never Be A Large Enterprise Solution


Thanks to a couple of recent conversations, I thought it was time to stir the wireless pot a little. First was my retweet of an excellent DNS workaround post from Justin Cohen (@CanTechIt). One of the responses I got from wireless luminary Andrew von Nagy (@RevolutionWifi):

http://twitter.com/revolutionwifi/status/618167906313076737

This echoed some of the comments that I heard from Sam Clements (@Samuel_Clements) and Blake Krone (@BlakeKrone) during this video from Cisco Live Milan in January:

During that video, you can hear Sam and Blake asking for a few features that aren’t really supported on Meraki just yet. And it all comes down to a simple issue.

Should It Just Work?

Meraki has had a very simple guiding philosophy since the very beginning. Things should be easy to configure and work without hassle for their customers. It’s something we see over and over again in technology. From Apple to Microsoft, the focus has shifted away from complexity and toward simplicity. Gone are the fields of radio buttons and obscure text fields. In their place we find simple binary choices. “Do You Want To Do This Thing? YES/NO”.

Meraki believes that the more complicated configuration items confuse users and lead to support issues down the road. And in many ways they are absolutely right. If you’ve ever seen someone freeze up in front of a Coke Freestyle machine, you know how easy it is to be overwhelmed by the power of choice.

In a small business or small enterprise environment, you just need things to work. A business without a dedicated IT department doesn’t need to spend hours figuring out how to disable 802.11b data rates to increase performance. That SMB/SME market has historically been the one that Meraki sells into better than anyone else. The times are changing though.

Exceptions Are Rules?

Meraki’s acquisition by Cisco has raised their profile and provided a huge new sales force to bring their hardware and software to the masses. The software in particular is a tipping point for a lot of medium and large enterprises. Meraki makes it easy to configure and manage large access point deployments. And nine times out of ten their user interface provides everything a person could need for configuration.

Notice that was “nine times out of ten”. In an SME, that one time out of ten that something more was needed could happen once or twice in the lifetime of a deployment. In a large enterprise, that one time out of ten could happen once a month or even once a week. With a huge number of clients accessing the system for long periods of time, the statistical probability that an advanced feature will need to be configured does approach certainty quickly.
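That intuition is easy to put numbers on. If each month of operation has a one-in-ten chance of needing an advanced knob (the one-in-ten figure comes from above; the time spans are mine), the chance of needing it at least once is 1 - (1 - p)^n:

```python
# If each deployment-month has a 1-in-10 chance of needing an advanced
# feature, the odds of hitting that need at least once grow quickly
# with scale: P(at least once) = 1 - (1 - p)**n.

p = 0.10  # chance a given month needs an advanced feature (from the post)

for months in (1, 12, 36):
    at_least_once = 1 - (1 - p) ** months
    print(f"{months:>2} months: {at_least_once:.0%}")
```

Over a year that "one time out of ten" becomes better than a two-in-three chance; over three years it is nearly a sure thing.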

Meraki doesn’t have a way to handle these exceptions currently. They have an excellent feature request system in their “Make A Wish” feedback system, but the tipping point required for a feature to be implemented in a new release doesn’t have a way to be weighted for impact. If two hundred people ask for a feature and the average number of access points in their networks is less than five, it reflects differently than if ten people ask for a feature with an average of one thousand access points per network. It is important to realize that enterprises can scale up rapidly and they should carry a heavier weight when feature requests come in.
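A hypothetical sketch of what AP-weighted voting could look like, as opposed to raw vote counting (the feature names and AP counts below are invented, and this is not how Meraki's actual system works):

```python
# Hypothetical weighting for a "Make A Wish"-style queue: instead of
# counting raw votes, weight each request by the number of APs in the
# requester's network, so large deployments register proportionally.

def weighted_votes(requests):
    """requests: list of (feature, ap_count) tuples."""
    totals = {}
    for feature, ap_count in requests:
        totals[feature] = totals.get(feature, 0) + ap_count
    return totals

wishes = [
    ("disable 11b rates", 1000),    # one large campus
    ("disable 11b rates", 1200),    # another large campus
    ("custom captive portal", 4),   # plus many small shops below
] + [("custom captive portal", 5)] * 20

print(weighted_votes(wishes))
```

Under raw counting the captive portal request wins 21 votes to 2; weighted by APs, the two large campuses outweigh it by a factor of twenty.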

That’s not to say that Meraki should go the same route as Cisco Unified Communications Manager (CUCM). Several years ago, I wrote about CSCsb42763 which is a bug ID that enables a feature by typing that code into an obscure text field. It does enable the feature, but you have no idea what or how or why. In fact, if it weren’t for Google or a random call to TAC, you’d never even know about the feature. This is most definitely not the way to enable advanced features.

Making It Work For Me

Okay, the criticism part is over. Now for the constructive part. Because complaining without offering a solution is just whining.

Meraki can fix their issues with large enterprises by offering a “super config mode” to users that have been trained. It’s actually not that far away from how they validate licenses today. If you are listed as an admin on the system and you have a Meraki Master ID under your profile then you get access to the extra config mode. This would benefit both enterprise admins as well as partners that have admin accounts on customer systems.

This would also be a boon for the Meraki training program. Sure, having another piece of paper is nice. But what if all that hard work actually paid off with better configuration access to the system? What if it meant less need to call support because you had slightly better access to the system yourself? If you can give people what they need to fix their problems without calling support, they will line up outside your door to get it.

If Meraki isn’t willing to take that giant leap just yet, another solution would be to weight the “Make A Wish” suggestions based on the number of APs covered by the user. They might even do this now. But it would be nice to know as a large enterprise end user that my feature requests are being taken under more critical advisement than a few people with less than a dozen APs. Scale matters.


Tom’s Take

Yes, the headline is a bit of clickbait. I don’t think it would have had quite the same impact if I’d titled it “How Meraki Can Fix Their Enterprise Problems”. You, the gentle reader, would have looked at the article either way. But the people that need to see this wouldn’t have cared unless it looked like the sky was falling. So I beg your forgiveness for an indulgence to get things fixed for everyone.

I use Meraki gear at home. It works. I haven’t configured even 10% of what it’s capable of doing. But there are times when I go looking for a feature that I’ve seen on other enterprise wireless systems that’s just not there. And I know that it’s not there on purpose. Meraki does a very good job reaching the customer base that they have targeted for years. But as Cisco starts pushing their solutions further up the stack and selling Meraki into bigger and more complex environments, Meraki needs to understand how important it is to give those large enterprise users more control over their systems. Or “It Just Works” will quickly become “It Doesn’t Work For Me”.

Cisco and OpenDNS – The Name Of The Game?


This morning, Cisco announced their intent to acquire OpenDNS, a security-as-a-service (SaaS) provider based around the idea of using Domain Naming Service (DNS) as a method for preventing the spread of malware and other exploits. I’ve used the OpenDNS free offering in the past as a way to offer basic web filtering to schools without funds as well as using OpenDNS at home for speedy name resolution when my local name servers have failed me miserably.

This acquisition is curious to me. It seems to be a line of business that is totally alien to Cisco at this time. There are a couple of interesting opportunities that have arisen from the discussions around it though.

Internet of Things With Names

The first and most obvious synergy with Cisco and OpenDNS is around the Internet of Things (IoT), or the Internet of Everything (IoE) as Cisco has branded their offering. IoT/IoE has gotten a huge amount of attention from Cisco in the past 18 months as more and more devices come online, from thermostats to appliances to light sockets. The number of formerly dumb devices that now have wireless radios and computers to send information is staggering.

All of those devices depend on certain services to work properly. One of those services is DNS. IoT/IoE devices aren’t going to use pure IP to communicate with cloud servers. That’s because IoT uses public cloud offerings to communicate with devices and dashboards. As I said last year, capacity and mobility can be ensured by using AWS, Google Cloud, or Azure to host the servers to which IoT/IoE devices communicate.

The easiest way to communicate with AWS instances is via DNS. This ensures that a service can be mobile and fault tolerant. That’s critical to ensure the service never goes down. Losing your laptop or your phone for a few minutes is annoying but survivable. Losing a thermostat or a smoke detector is a safety hazard. Services that need to be resilient need to use DNS.
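To make that resilience argument concrete, here’s a quick sketch of the failover behavior that resolving a name (instead of hard-coding one IP) enables. Everything in it is hypothetical: the addresses are documentation-range placeholders and the reachability check is stubbed out, since the point is the pattern, not any real OpenDNS or AWS endpoint.

```python
# Sketch: a device that hard-codes a single IP has one point of failure.
# A device that resolves a name can fail over across every address the
# name returns. Addresses below are made up (203.0.113.0/24 is the
# documentation range) and the health check is injected for clarity.

def first_reachable(addresses, is_reachable):
    """Return the first address that answers, or None if all fail."""
    for addr in addresses:
        if is_reachable(addr):
            return addr
    return None

# In real code the list would come from DNS, e.g.:
#   addresses = [ai[4][0] for ai in socket.getaddrinfo("api.example.com", 443)]
# Here we simulate a resolver answer where the first instance is down.
resolved = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
up = {"203.0.113.11", "203.0.113.12"}

print(first_reachable(resolved, lambda a: a in up))  # → 203.0.113.11
```

Swap the addresses behind the name and the thermostat never notices, which is exactly why a safety-critical service wants DNS in front of it.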

More than that, with control of OpenDNS Cisco now has a walled DNS garden that they can populate with Cisco service entries. Rather than allowing IoT/IoE devices to inherit local DNS resolution from a home ISP, they can hard code the DNS name servers in the device and ensure that the only resolution used will be controlled by Cisco. This means they can activate new offerings and services and ensure that they are reachable by the devices. It also allows them to police the entries in DNS and prevent people from creating “workarounds” to enable or disable features and functions. Walled-garden DNS is as important to IoT/IoE as the walled-garden app store is to mobile devices.

Predictive Protection

The other offering hinted at in the acquisition post from Cisco talks about the professional offerings from OpenDNS. The OpenDNS Umbrella security service helps enterprises protect themselves from malware and security breaches through control and visibility. There is also a significant amount of security intelligence available due to the amount of traffic OpenDNS processes every day. This gives them insight into the state of the Internet as well as sourcing infection vectors and identifying threats at their origin.

Cisco hopes to utilize this predictive intelligence in their security products to aid in fast identification and mitigation of threats. By combining OpenDNS with SourceFire and IronPort, the hope is that this giant software machine will be able to protect customers faster, before they get exposed, embarrassed, or even sued for negligence.

The part that worries me about that superior predictive intelligence is how it’s gathered. If the only source of that information comes from paying OpenDNS customers then everything should be fine. But I can almost guarantee that users of the free OpenDNS service (like me) are also information sources. It makes the most sense for them. Free users provide information for the paid service. Paid users are happy at the level of intelligence they get, and those users pay for the free users to be able to keep using those features at no cost. Win/win for everyone, right?

But what happens if Cisco decides to end the free offering from OpenDNS? Let’s think about that a little. If free users are locked out from OpenDNS or required to pay even a small nominal fee, that means their source of information is lost in the database. Losing that information reduces the visibility OpenDNS has into the Internet and slows their ability to identify and vector threats quickly. Paying users then lose effectiveness of the product and start leaving in droves. That loss accelerates the failure of that intelligence. Any products relying on this intelligence also reduce in effectiveness. A downward spiral of disaster.


Tom’s Take

The solution for Cisco is very easy. In order to keep the effectiveness of OpenDNS and their paid intelligence offerings, Cisco needs to keep the free offering and not lock users out of using their DNS name servers at no cost. Adding IoT/IoE into the equation helps somewhat, but Cisco has to have the information from small enterprises and schools that use OpenDNS. It benefits everyone for Cisco to let OpenDNS operate just as they have been for the past few years. Cisco gains significant intelligence for their security offerings. They also gain the OpenDNS customer base to sell new security devices to. And free users gain the staying power of a brand like Cisco.

Thanks to Greg Ferro (@EtherealMind), Brad Casemore (@BradCasemore) and many others for the discussion about this today.

The IPv6 Revolution Will Not Be Broadcast

IPv6Revolution

There are days when IPv6 proponents have to feel like Chicken Little. Ever since the final allocation of the last /8s to the RIRs over four years ago, we’ve been saying that the switch to IPv6 needs to happen soon before we run out of IPv4 addresses to allocate to end users.

As of yesterday, ARIN (@TeamARIN) has 0.07 /8s left to allocate to end users. What does that mean? Realistically, according to this ARIN page, that means there are 3 /21s left in the pool, and around 450 /24s. The availability of those addresses is even in doubt, as there are quite a few requests in the pipeline. I’m sure ARIN is worried that they have already received a request they can’t fulfill sitting in their queue.
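For anyone rusty on prefix math, the size of those remaining blocks falls out of simple powers of two: a /n IPv4 prefix holds 2^(32−n) addresses.

```python
# Address-count arithmetic behind the ARIN pool figures above.
# A /n IPv4 prefix contains 2**(32 - n) addresses.

def addresses_in(prefix_len):
    return 2 ** (32 - prefix_len)

print(addresses_in(8))    # a full /8: 16,777,216 addresses
print(addresses_in(21))   # one /21: 2,048 addresses
print(addresses_in(24))   # one /24: 256 addresses

# 0.07 of a /8-equivalent works out to roughly 1.17 million addresses --
# a rounding error against global demand.
print(round(0.07 * addresses_in(8)))  # → 1174405
```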

The sky has indeed fallen for IPv4 addresses. I’m not going to sit here and wax alarmist. My stance on IPv6 and the need to transition is well known. What I find very interesting is that the transition is not only well underway, but it may have found the driver needed to see it through to the end.

Mobility For The Masses

I’ve said before that the driver for IPv6 adoption is going to be an IPv6-only service that forces providers to adopt the standard because of customer feedback. Greed is one of the two most powerful motivators. However, fear is an equally powerful motivator. And fear of having millions of mobile devices roaming around with no address support is an equally unwanted scenario.

Mobile providers are starting to move to IPv6-only deployments for mobile devices. T-Mobile does it. So does Verizon. If a provider doesn’t already offer IPv6 connectivity for mobile devices, you can be assured it’s on their roadmap for adoption soon. The message is clear: IPv6 is important in the fastest growing segment of device adoption.

Making mobile devices the sword for IPv6 adoption is very smart. When we talk about the barriers to entry for IPv6 in the enterprise we always talk about outdated clients. There are a ton of devices that can’t or won’t run IPv6 because of an improperly built networking stack or software that was written before the dawn of DOS. Accounting for those systems, which are usually in critical production roles, often takes more time than the rest of the deployment.

Mobile devices are different. The culture around mobility has created a device refresh cycle that is measured in months, not years. Users crave the ability to upgrade to the latest device as soon as it is available for sale. Where mobile service providers used to make users wait 24 months for a device refresh, we now see them offering 12 month refreshes for a significantly increased device cost. Those plans are booming by all indications. Users want the latest and greatest devices.

With the desire of users to upgrade every year, the age of the device is no longer a barrier to IPv6 adoption. Since the average age of devices in the wild is almost certain to be less than three years, providers can also be sure that the capability is there for them to support IPv6. That makes it much easier to enable support for it across the entire install base of handsets.

The IPv6 Trojan Horse

Now that providers have a wide range of IPv6-enabled devices on their networks, the next phase of IPv6 adoption can sneak into existence. We have a lot of IPv6-capable devices in the world, but very little IPv6 driven content. Aside from some websites being reachable over IPv6 we don’t really have any services that depend on IPv6.

Thanks to mobile, we have a huge install base of devices that we now know are IPv6 capable. Since the software for these devices is largely determined by the user base through third-party app development, this is the vector for widespread adoption of IPv6. Rather than trumpeting the numbers, mobile providers and developers can quietly enable IPv6 without anyone even realizing it.

Most app resources must live in the cloud by design. Lots of them live in places like AWS. Service providers enable translation gateways at their edge to translate IPv6 requests into IPv4 requests. What would happen if the providers started offering native IPv6 connectivity to AWS? How would app developers react if there was a faster, native connectivity option to their resources? Given the huge focus on speed for mobile applications, do you think they would continue using a method that forces them to use slow translation devices? Or would they jump at the chance to speed up their devices?
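Those translation gateways generally work along the lines of NAT64/DNS64, where the provider’s resolver hands an IPv6-only device a synthesized address with the real IPv4 address embedded inside it. Here’s a rough sketch of that synthesis step, assuming the RFC 6052 well-known prefix 64:ff9b::/96 (the server address is a documentation placeholder, not a real endpoint):

```python
import ipaddress

# Sketch of the DNS64 synthesis step used by provider translation
# gateways: an IPv4-only destination is reached by embedding its
# address in the low 32 bits of the NAT64 well-known prefix
# 64:ff9b::/96 (RFC 6052).

NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

def synthesize(ipv4_literal):
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(NAT64_PREFIX) | int(v4))

# An IPv6-only handset asking for an IPv4-only server (203.0.113.7,
# a documentation address) gets back a synthesized AAAA answer:
print(synthesize("203.0.113.7"))  # → 64:ff9b::cb00:7107
```

Every packet to that synthesized address has to pass through a stateful translator in the provider’s network, which is exactly the extra hop and latency that native IPv6 connectivity would remove.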

And that’s the Trojan horse. The app itself spurs adoption of IPv6 without the user even knowing what’s happened. When’s the last time you needed to know your IP on a mobile device? Odds are very good it would take you a while to even find out where that information is stored. The app-driven focus of mobile devices has eliminated the need for visibility into things like IP addresses. As long as the app connects, who cares what addressing scheme it’s using? That makes shifting the underlying infrastructure from IPv4 to IPv6 fairly inconsequential.
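That addressing-scheme indifference isn’t just a figure of speech; it’s how idiomatic socket code is already written. A minimal sketch of the standard getaddrinfo connect loop (the demo server here is a local throwaway listener, not any real service):

```python
import socket

# Sketch of the standard address-family-agnostic connect idiom: the app
# asks getaddrinfo for whatever the name resolves to -- A or AAAA
# records alike -- and tries each result in order. The app never cares
# whether the winning address is IPv4 or IPv6.

def connect_by_name(host, port):
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(sockaddr)
            return s  # first address that answers wins
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses returned")

# Self-contained demo: listen on an ephemeral localhost port, then
# connect to it by name rather than by literal address.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

conn = connect_by_name("localhost", port)
print(conn.family in (socket.AF_INET, socket.AF_INET6))  # → True
conn.close()
listener.close()
```

If “localhost” resolves to ::1 first and that fails, the loop simply falls through to 127.0.0.1. Flip the infrastructure underneath from IPv4 to IPv6 and code written this way keeps working unchanged.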


Tom’s Take

IPv6 adoption is going to happen. We’ve reached the critical tipping point where the increased cost of acquiring IPv4 resources will outweigh the cost of creating IPv6 connectivity. Thanks to the focus on mobile technologies and third-party applications, the IPv6 revolution will happen quietly at night when IPv6 connectivity to cloud resources becomes a footnote in some minor point update release notes.

Once IPv6 connectivity is enabled and preferred in mobile applications, the adoption numbers will go up enough that CEOs focused on Gartner numbers and keeping up with the Joneses will finally get off their collective laurels and start pushing enterprise adoption. Only then will the analyst firms start broadcasting the revolution.

Thoughts on Cisco Live 2015

Cisco Live 2015 Twitter Pic

We’ve secretly replaced Tom with Mike Rowe. Let’s see if anyone notices…

Cisco Live 2015 is in the books. A great return to San Diego. A farewell from John Chambers. A greeting from Chuck Robbins (@ChuckRobbins). And a few other things.

The Community is Strong, But Concerned

The absolute best part of Cisco Live is the community that has grown from the social media attendees. More than once I heard during the week “I can’t believe this used to be 20-30 people!”. The social community continues to grow and change. Some people move on. Others return from absence. Still others are coming for the first time.

The Cisco Live social community is as inclusive as any I have seen. From the Sunday night Tweetup to the various interactions throughout the week, I’m proud to be a part of a community that strives to make everyone feel like they are part of a greater whole. I met so many new people this year and marveled at the way the Social Media Hub and Meetup Area were both packed at all hours of the day.

That being said, the community does have some concerns. Some of them are around institutionalized community. There was worry that bringing so many people into the Champions community threatened to marginalize the organic community that had grown up in the past six years. While some of that worry was quieted by the end of the show, I think the major concerns are still present and valid to a certain degree. I think a discussion about the direction of the Champion program and how it will interact with other organic communities is definitely in order sooner rather than later.

Gamification Continues, And I’m Not A Fan

Many of the activities at Cisco Live revolved around prizes and giveaways for interaction. As we’ve seen throughout the years, any time a prize is awarded for a game, there are going to be some people trying to work the system. I even mentioned it here:

I’m all for having fun. But the reward for a well-played game should be in the game itself. When things have to be modified and changed and curated to ensure no one is taking advantage, it stops being fun and starts being a competition. Competitions cause hurt feelings and bad blood. I think it’s time to look at what the result of this gamification is and whether it’s worth it.

Power Transitions And Telling The Story Right

As expected, John Chambers gave his farewell as CEO and introduced Chuck Robbins to the Cisco Live community. By all accounts, it was an orderly transfer of power and a great way to reassure the investors and press that things are going to proceed as usual. I was a bit interested in the talk from Chambers about how this transition plan has been in place for at least ten months. Given the discussion in the tech press (and more than a couple private comments), the succession wasn’t as smooth as John let on. Maybe it’s better that the general Cisco public not know how crazy the behind-the-scenes politics really were.

Chuck finds himself in a very precarious position. He’s the person that follows the legend. Love him or hate him, Chambers has been the face of Cisco forever. He is the legend in the networking community. How do you step into his shoes? It’s better that John stepped down on his own terms instead of being forced out by the board. Chuck has also done a great job of rolling out his executive team and making some smart moves to solidify his position at the top.

The key is going to be how Chuck decides to solidify the businesses inside of Cisco. Things that were critical even two years ago are shrinking in the face of market movement. John’s speech was very pointed: there is another transition coming that can’t be missed. Chuck has a hard road ahead trying to stabilize Cisco’s position in the market. A cheeky example:

Cisco has missed transitions, SDN being the most recent. They need to concentrate on what’s important and remove the barriers to agile movement. A start would be cutting back on the crazy number of business units (BUs) competing for face time with the CEO. You could easily consolidate 50% of the organizations inside Cisco and still have more than anyone else in networking. A racecar that goes 200 mph is still unstable if it isn’t streamlined. Chuck needs to cut Cisco down to fighting weight to make the story sound right.

Cisco Finally Understands Social, But They Don’t Quite Get It (Yet)

I applaud the people inside of Cisco and Cisco Live that have fought tooth and nail for the past few years to highlight the importance of social. Turning a ship the size of Cisco can’t be easy, but it’s finally starting to sink in how powerful social media can be. I can promise you that Cisco understands it better than companies like IBM or Oracle. That’s not to say that Cisco embraces social like it should.

Cisco is still in the uncomfortable mode of using social as a broadcast platform rather than an interaction tool. There are some inside of Cisco that realize the need to focus on the audience rather than the message. But those are exceptions to the general rule of being “on message”.

Social media is a powerful tool to build the visibility of personalities. The messenger is often more important than the message. Just ask Pheidippides. Allowing your people the freedom to develop a voice and be themselves will win you more converts than having a force of robots parroting the same platitudes on a schedule.

Cisco has some great people involved in the community. Folks like J Metz (@DrJMetz), Rob Novak (@Gallifreyan), and Lauren Friedman (@Lauren) show how dedicated people can make a name for themselves separate from their employer. Cisco would do well to follow the example of these folks (and many others) and let the messengers make the audience the key.


Tom’s Take

Thanks to Tech Field Day, I go to a lot of industry events now. But Cisco Live is still my favorite. The people make it wonderful. The atmosphere is as electric as any I’ve been a part of. This was my tenth Cisco Live. I can’t imagine not being a part of the event.

Yes, I have concerns about some of the things going on, but it’s the kind of concern that you have for a loved one or dear friend. I want people to understand the challenges of keeping Cisco Live relevant and important to attendees and find a way to fix the issues before they become problems. What I don’t want to see is a conference devoid of personality and wonderful people going through the motions. That would not only destroy the event, but the communities that have sprung from it as well.

Cisco Live 2016 will be intensely personal for me. It’s the first return to Las Vegas since 2011. It’s also the fifth anniversary of Tom’s Corner. I want to make the next Cisco Live as important as Cisco Live 2011 was for me. I hope you will all join me there and be a part of the community that has changed my life for the better.