
About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

There’s No Such Thing As Free Wireless


If you’ve watched any of the recent Wireless Field Day presentations, you know that free wireless is a big hot button issue. The delegates believe that wireless is something akin to a public utility that should be available without reservation. But can it ever really be free?

No Free Lunches

Let’s take a look at other “free” offerings you get in restaurants. If you eat at popular Mexican restaurants, you often get free tortilla chips and salsa, often called a “setup”. A large number of bars will have bowls of salty snacks waiting for patrons to enjoy between beers or other drinks. These appetizers are free so wireless should be free as well, right?

The funny thing about those “free” appetizers is that they aren’t really free. They serve as a means to an end. The salty snacks on the bar are there to make you thirsty and cause you to order more drinks to quench that thirst. The cost of offering those snacks is balanced by the amount of extra alcohol you consume. The “free” chips and salsa at the restaurant serve as much to control food costs as they do to whet your appetite. By offering cheap food up front, people are less likely to order larger food dishes that cost more to make. And if you don’t want to enjoy food from the menu, most restaurants will charge you a “nominal” fee to recoup their costs.

These “free” items serve to increase sales for the business. Businesses don’t mind giving things away as long as they can profit from them. The value proposition of a free service has to be balanced with some additional revenue source. In that sense, nothing is really and truly free from an altruistic point of view.

Anal(ytics) Retentive

The path to offering “free” wifi seems to be headed down the road of collecting information about users in order to offer services to recoup costs. Whether it be through loyalty programs or social wireless logins, companies are willing to give you access to wireless in exchange for some information about you.

The tradeoff is reasonable in the eyes of the business. They have to upgrade their infrastructure to support transient guest users. It’s one thing to offer guest wireless to employees who are on the payroll and being productive. It’s something else entirely to offer it to people who will potentially use it and not your services. You have to have a way to get that investment back.

For a large percentage of the population, giving away information is something they don’t care about. It’s something freely available on social media, right? If everyone can find out about it, you might as well give it to someone in exchange for free wireless, right?

Despite what people have said as of late, the real issues with social login and data analytics have nothing to do with offering the data. Storing the data somewhere is of little consequence in the long run. So long as a company doesn’t attempt to use that data against you in some way, data collection is benign.

Yes, storing that data could be problematic, thanks to the ever-shrinking timeline before large databases inside companies end up exposed. Data sitting around in a database is a siren call to companies to either do something with it or sell it to a third party in an effort to capitalize on the gold mine they are sitting on. But most people assume that won’t happen. That makes it tolerable to give away something meaningless in exchange for a necessary service.


Tom’s Take

The price of freedom is vigilance. The price of free wireless is a little less than that. Business owners need value to offer additional services. Cost with no return gives no value. Whether that value comes from increased insight into customer bases or reselling that data to someone that wants to provide analytics services to businesses is a moot point. Wireless will never truly be free so long as there is something that can be traded for its value.

Can Community Be Institutionalized?


As technology grows at a faster pace, companies are relying more and more on their users to help spread the word about what they are doing. Why pay exorbitant amounts for marketing when there is a group of folks that will do it for little to nothing? These communities of users develop around any product or company with significant traction in the market. But can they be organized, built, and managed in a traditional manner?

Little Pink Houses

Communities develop when users start talking to each other. They exist in numerous different forms. Whether it be forum posters or sanctioned user groups or even unofficial meetups, people want to get together to talk about things. These communities are built from the idea that knowledge should be shared. Anecdotes, guides, and cautionary tales abound when you put enough people into a room and get them talking about a product.

That’s not to say that all communities are positive ones. Some communities are even built around the idea of a negative reaction. Look at the groups that formed around simple ideas like getting their old Facebook page back or getting their old MySpace layout returned to them. Imagine the reaction you get when you have an enterprise product that makes changes that users don’t like. That’s how communities get started too.

Whether they are positive or negative, communities exist to give people a way to interact with other like-minded individuals. Community is a refuge that allows members to talk freely and develop the community to suit their needs.

Community Planning

What happens when the community needs more direction? Some communities are completely sanctioned and sponsored by their subjects, like the VMware User Group (VMUG). Others are independent but tend to track along with the parent, such as the Cisco User Groups that have developed over the years. These tend to be very well organized versus other more informal communities.

With the advent of social media, many ad hoc communities have formed quickly around the idea of sharing online. Social media makes meeting new members of the community quick and easy. But it’s also difficult to control social communities. They grow and change so rapidly that even monitoring is a challenge.

The wild and unpredictable nature of social communities has led to a new form of sponsored community – the influencer outreach program. These programs have different names depending on the company, but the idea is still roughly the same: reach out to influencers and social media users in the immediate community and invite them into a new community that offers incentives like insider information or activities outside of those regularly available to everyone.

Influencer outreach programs are like a recipe. You must have the right mix in the correct proportions to make everything work. If you have too much of something or not enough of another, the whole construct can fall apart. Too many members leads to a feeling of non-exclusiveness. Too few members-only briefings leads to a sense that the program doesn’t offer anything over and above “normal” community membership.

The Meringue Problem

One of the most important things that influencer outreach communities need to understand is something I call the “Meringue Problem”. If you’ve ever made meringue for a dessert, you know that you have to whip the egg whites and sugar until it forms soft peaks. That’s what makes meringue light and fluffy. It’s a lot of work but it pays off if done right. However, if you whip the mixture too hard or too long, those soft peaks fall apart into a mess that must be thrown out.

The Meringue Problem in influencer outreach communities comes when the program organizers and directors (chefs) get too involved in directing the community. They try to direct things too much or refocus the community toward an end that the members may not support wholeheartedly. That ends up creating animosity among the members and a feeling that things would be better if everyone “would just back off for a bit”. There are a hundred different reasons why this overinvolvement happens, but the results are always the same: a fractured community and a sense of disappointment.

The First Rule

If you want a textbook method for building a community, take a page from one of my favorite movies – Fight Club. Tyler and the Narrator start a community dedicated to working out aggression through physical expression. They don’t tell everyone in the bar to come outside and start fighting. They just do their thing on their own. When others want to be involved, they are welcomed with open arms (and closed fists).

Later, the whole idea of Fight Club takes on a life of its own. It becomes a living, breathing thing that no one person can really direct anymore. In the movie, it is mentioned that the leader moves among the crowd, with the only important thing being the people fighting in the ring. But it’s never exclusionary. They’ll let anyone join. Just ask Lou.

Tyler finally decides that he needs something more from Fight Club. So what does he do? Does he try to refocus the community to a new end? How can you control something like that? Instead, he creates a new community from a subset of the Fight Club members. Project Mayhem is still very much a part of Fight Club, as the space monkeys are still Fight Club members. But Project Mayhem is a different community with different goals. It’s not better or worse. Just…different.


Tom’s Take

I’m a proud member of several communities. Some of them are large and distinguished. Others are small and intimate. In some, I’m a quiet member in the back. In others I help organize and direct things. But no matter who I am there or what I’m doing, I remember the importance of letting the community develop. Communities will find their way if you let them. A guiding hand sometimes does help the community accomplish great things and transcend barriers. But that hand must guide. It should never force or meddle. When that line is crossed, the community ceases being a collection of great people and starts taking on attributes that make it more important than the members. And that kind of institutionalization isn’t a community at all.

Special thanks to Jeff Fry (@FryGuy_PA) and Stephen Foskett (@SFoskett) for helping me collect my thoughts for this post.

Just. Write.


Somewhere, someone is thinking about writing. They aren’t sure where to start. Maybe they think they can’t write well at all? Perhaps they even think they’ll run out of things to say? Guess what?

Just. Write.

Why A Blog?

Social media has taken over as the primary form of communication for a great majority of the population. Status updates, wall posts, and picture montages are the way we tell everyone what we’re up to. But this kind of communication is fast and ephemeral. Can you recall tweets you made seven months ago? Unless you can remember a keyword, Twitter and Google do a horrible job of searching for anything past a few days old.

Blogs represent something different. They are the long form record of what we know. They expand beyond a status or point-in-time posting. Blogs can exist for months or years past their original post date. They can be indexed and shared and amplified. Blogs are how we leave our mark on the world.

I’ve been fielding questions recently from a lot of people about how to get started in blogging. I’m a firm believer that everyone has at least one good blog post in them. One story about a network problem solved or a cautionary tale that they’ve run into and wish to save others from. Everyone knows one thing that few others do. Sharing that one thought is what sets you apart from others.

A lot of blogs start off as a collection of notes or repository of knowledge that is unique to the writer. This makes it easy to share that knowledge with others. As people find that knowledge, they want to share it with others. As they share it, you become more well known in the community. As people learn who you are, they want to share with you. That’s how a simple post can start an avalanche of opportunity to learn more and more.

How To Start

This is actually much easier than it appears. Almost everyone has a first “Starting A Blog” post. It’s a way to announce to the world that your site is more than a parking page. That post is easy to write. And it will hardly ever be read.

The next step is to tell the world your one thing. Create pictures if it helps. Craft a story. Lay out all the information. Make sure to break it up into sections. It doesn’t have to be fancy, just readable. Once you’ve gotten all the information out of your head and onto virtual paper, it’s time to tell the world about it.

Publicize your work through those social media channels. Link to your post on Facebook, Twitter, LinkedIn, and anywhere else you can. The more eyeballs you get on your post, the more feedback you will get. You will also get people that share your post with others. That’s the amplification effect that helps you reach even further.

Now, Keep Going

Okay, you’ve gotten that out of your system. Guess what? You need to keep going. Momentum is a wonderful thing. Now that you’ve learned how to craft a post, you can write down more thoughts. Other stories you want to tell. Maybe you had a hard time configuring a switch or learning what a command does? Those are great posts to write and share. The key is to keep going.

You don’t need a set schedule. Some write every week to keep on track. Others write once a month to sum up things they are working on. The key is to find a schedule that works for you. Maybe you only do something interesting every two weeks? Maybe your job is so exciting that you can fill a whole week’s worth of posts?

The worst thing in the world is to have a rhythm going and then stop. Real life does get in the way more than you think. Jobs run long. Deadlines come and go. Missing a post becomes two. Two becomes three, and before you know it you haven’t posted for six months or more.

The way to fix the momentum problem is to keep writing things down. It doesn’t have to be a formal post. It can easily be bullet points in a draft. Maybe it’s a developing issue that you’re documenting? Just jot down the important things and when you’ve wrapped up all the hard work, you have all the beginnings of a great blog post. Every support case has notes that make for great blog subjects.


Tom’s Take

I still consider blogging to be one of the most important ways to share with people in our community. It’s very easy to write down tweets and status updates. It’s also very hard to find them again. Blogs are like living resumes for us. Everything we’ve done, everything we know is contained in several thousand characters of text that can be searched, indexed, and shared.

If you are sitting in a chair reading this thinking that you can’t blog, stop. Open a text editor and just start writing. Write about screwing up a VLAN config and how you learned not to do it again. Write down an interesting support case. Maybe it was a late-night migration gone wrong. Just write it down. When you finish telling your story, you’ve got the best start to a blog you could possibly hope for. The key is to just write.

 

Open Choices In Networking


I had an interesting time at the spring meeting of the Open Networking User Group (@ONUG_) this past week. There were lots of discussions about networking, DevOps, and other assorted topics. One that caught me by surprise was some of the talk around openness. These tweets from Lisa Caywood (@RealLisaC) were especially telling:

After some discussion with other attendees, I think I’ve figured it out. People don’t want an open network. They want choice.

Flexible? Or Predictable?

Traditional networking marries software and hardware together. You want a Cisco switch? It runs IOS or NX-OS. Running Juniper? You can have any flavor of OS you want…as long as it’s Junos. That has been the accepted order of things for decades. Flexibility is traded for predictability. Traditional networking vendors give you many of the tools you need. If you need something different, you have to find the right mix of platform and software to get your goal accomplished. Mixing and matching is almost impossible.

This sounds an awful lot like the old IBM PC days. The same environment that gave rise to whitebox computers. We have a whitebox switching movement today as well for almost the same reasons – being able to run a different OS on cheaper hardware to the same end as the traditional integrated system. In return, you gain back that flexibility that you lost. There are some tradeoffs, however.

In theory, a whitebox switch is marginally harder to troubleshoot than a traditional platform. Which combination of OS and hardware are you running? How do those things interact to create bugs? Anyone that has ever tried to install USB device drivers on Windows knows that kind of pain. Getting everything to work right can be rough.

In practice, the support difference is negligible. Traditional vendors have a limited list of hardware, but the numerous versions of software (including engineering special code) interacting with those platforms can cause unforeseen consequences. Likewise, most third party switch OS vendors have a tight hardware compatibility list (HCL) to ensure that everything works well together.

People do like flexibility. Giving them options means they can build systems to their liking. But that’s only a part of the puzzle.

The Problem Is Choice

Many of the ONUG attendees I talked to liked the idea of whitebox switching. They weren’t entirely enamoured, however. When I pressed a bit deeper, a pattern started to emerge. It sounded an awful lot like this:

I don’t want to run Vendor X Linux on my switch. I want to run my Linux on a switch!

That comment highlighted the real issue. Open networking proponents don’t necessarily want systems built on open source networking that enhances the work of all. What they want is a flexible network that lets them run whatever they want on their hardware.

The people that attend conferences like ONUG don’t like rigid choice options. Telling them they can run IOS or Junos is like picking the lesser of two evils. These people want to have a custom OS with the bare minimum needed to support a role in the network. They are used to solving problems outside the normal support chain. They chafe at the idea of being forced into a binary decision.

That goes back to Lisa’s tweets. People don’t want a totally open network running Quagga and other open source solutions. They want an open architecture that lets them rip and replace solutions based on who is cheaper that week or who upset them at the last account team meeting. They want the freedom to use their network as leverage to get better deals.

It’s a whole lot easier to get a better discount when you can legitimately threaten to have the incumbent thrown out and replaced relatively easily. Even if you have no intentions of doing so. Likewise, new advances in whitebox switching give you leverage to replace sections of the network and have feature parity with traditional vendors in all but a few corner cases. It seems to be yet another redefinition of open.


Tom’s Take

Maybe I’m just a cynic. I support development of software that makes the whole world better. My idea of open involves people working together to make everything better. It’s not about using strategies to make just my life easier. Enterprises are big consumers of open technologies with very little reciprocity outside of a few forward thinkers.

Maybe the problem is that we’ve overloaded open to mean so many other things that we have cognitive dissonance when we try to marry the various open ideas together? Open network architecture is easy as long as you stick to OSPF and other “standard” protocols. Perhaps the problem of choice is being shortsighted enough to make the wrong one.

 

The Light On The Fiber Mountain


Fabric switching systems have been a popular solution for many companies in the past few years. Juniper has QFabric and Brocade has VCS. For those not invested in fabrics, the trend has been to collapse the traditional three tier network model down into a spine-leaf architecture to optimize east-west traffic flows. One must wonder how much more optimized that solution can be. As it turns out, there is a bit more that can be coaxed out of it.

Shine A Light On Me

During Interop, I had a chance to speak with the folks over at Fiber Mountain (@FiberMountain) about what they’ve been up to in their solution space. I had heard about their revolutionary SDN offering for fiber. At first, I was a bit doubtful. SDN gets thrown around a lot on new technology as a way to sell it to people that buy buzzwords. I wondered how a fiber networking solution could even take advantage of software.

My chat with M. H. Raza started out with a prop. He showed me one of the new Multifiber Push On (MPO) connectors that represent the new wave of high-density fiber. Each cable, which is roughly the size and shape of a SATA cable, contains 12 or 24 fiber connections. These are very small and pre-configured in a standardized connector. This connector can plug into a server network card and provide several light paths to a server. This connector and the fibers it terminates are the building block for Fiber Mountain’s solution.

With so many fibers running to a server, Fiber Mountain can use their software intelligence to start doing interesting things. They can begin to build dedicated traffic lanes for applications and other traffic by isolating that traffic onto fibers already terminated on a server. The connectivity already exists on the server. Fiber Mountain just takes advantage of it. It feels very similar to the way we add in additional gigabit network ports when we need to expand things like vKernel ports or dedicated traffic lanes for other data.

Quilting Circle

Where this solution starts looking more like a fabric is what happens when you put Fiber Mountain Optical Exchange devices in the middle. These switching devices act like aggregation ports in the “spine” of the network. They can aggregate fibers from top-of-rack switches or from individual servers. These exchanges tag each incoming fiber and add them to the Alpine Orchestration System (AOS), which keeps track of the connections just like the interconnections in a fabric.

Once AOS knows about all the connections in the system, you can use it to start building pathways between east-west traffic flows. You can ensure that traffic between a web server and backend database has dedicated connectivity. You can add additional resources between systems that are currently engaged in heavy processing. You can also dedicate traffic lanes for backup jobs. You can do quite a bit from the AOS console.
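To make the idea concrete, here is a minimal Python sketch of what asking an orchestrator like AOS for a dedicated light path might look like. The URL, endpoint, and payload fields are hypothetical placeholders invented for illustration; Fiber Mountain hasn’t published this API, so treat it purely as a conceptual example.

    # Hypothetical sketch only: the endpoint, payload fields, and port names
    # below are invented for illustration, not a published AOS API.
    import requests

    AOS_URL = "https://aos.example.com/api/v1/lightpaths"  # hypothetical address

    def build_light_path(src_port, dst_port, fibers=2):
        """Ask the orchestrator to dedicate fiber strands between two
        already-terminated MPO ports (conceptual example only)."""
        payload = {
            "source": src_port,       # e.g. "rack12-web01:mpo1/5"
            "destination": dst_port,  # e.g. "rack14-db01:mpo1/2"
            "fiber_count": fibers,    # strands dedicated to this traffic lane
        }
        resp = requests.post(AOS_URL, json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()            # the orchestrator's record of the path

    # Dedicate two strands between a web server and its backend database.
    build_light_path("rack12-web01:mpo1/5", "rack14-db01:mpo1/2")

The point is less the exact call and more that the provisioning is a software operation against the orchestrator, not a recabling job.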

Now you have a layer 1 switching fabric without any additional pieces in the middle. The exchanges function almost like a passthrough device. The brains of the system exist in AOS. Remember when Ivan Pepelnjak (@IOSHints) spent all his time pulling QFabric apart to find out what made it tick? The Fiber Mountain solution doesn’t use BGP or MPLS or any other magic protocol sauce. It runs at layer 1. The light paths are programmed by AOS and the packets are switched across the dense fiber connections. It’s almost elegant in its simplicity.

Future Illumination

The Fiber Mountain solution has some great promise. Today, most of the operations of the system require manual intervention. You must build out the light paths between servers based on educated guesses. You must manually add additional light paths when extra bandwidth is needed.

Where they can really improve their offering in the future is to add intelligence to AOS to automatically make those decisions based on thresholds and inputs that are predefined. If the system can detect bigger “elephant” traffic flows and automatically provision more bandwidth or isolate these high volume packet generators it will go a long way toward making things much easier on network admins. It would also be great to provide a way to interface that “top talker” data into other systems to alert network admins when traffic flows get high and need additional resources.


Tom’s Take

I like the Fiber Mountain solution. They’ve built a layer 1 fabric that performs similarly to the ones from Juniper and Brocade. They are taking full advantage of the resources provided by the MPO fiber connectors. By adding a new network card to a server, you can test this system without impacting other traffic flows. Fiber Mountain even told me that they are looking at trial installations for customers to bring their technology in at lower costs as a project to show the value to decision makers.

Fiber Mountain has a great start on building a low latency fiber fabric with intelligence. I’ll be keeping a close eye on where the technology goes in the future to see how it integrates into the entire network and brings SDN features we all need in our networks.

 

Could IPv6 Drown My Wireless Network?


By now, the transition to adopt IPv6 networks is in full swing. Registries are running out of prefixes and new users overseas are getting v6-only allocations for new circuits. Mobile providers are going v6-only and transition mechanisms are in place to ease the migration. You can hear about some of these topics in this recent roundtable recorded at Interop last week:

One of the conversations that I had with Ed Horley (@EHorley) during Interop opened my eyes to another problem that we will soon be facing with IPv6 and legacy technology. Only this time, it’s not because of a numbering scheme. It’s because of old hardware.

Rate Limited

Technology always marches on. Things that seemed magical to us just five years ago are now antiquated and slow. That’s the problem with the original 802.11 specification. It supported wireless data rates at a paltry 1 Mbps and 2 Mbps. When 802.11b was released, it raised the rates to 5.5 Mbps and 11 Mbps. Those faster data rates, combined with a larger coverage area, helped 802.11b become commercially successful.

Now, we have 802.11n with data rates in the hundreds of Mbps. We also have 802.11ac right around the corner with rates approaching 1 Gbps. It’s a very fast wireless world. But thanks to the need to be backwards compatible with existing technology, even those fast new 802.11n access points still support the old 1 & 2 Mbps data rates of 802.11. This is great if you happen to have a wireless device from the turn of the millennium. It’s not so great if you are a wireless engineer supporting such an installation.

Wireless LAN professionals have been talking for the past couple of years about how important it is to disable the 1, 2, and 5.5 Mbps data rates in your wireless networks. Modern equipment will only utilize those data rates when far away from the access point and modern design methodology ensures you won’t be far from an access point. Removing support for those devices forces the hardware to connect at a higher data rate and preserve the overall air quality. Even one 802.11b device connecting to your wireless network can cause the whole network to be dragged down to slow data rates. How important is it to disable these settings? Meraki’s dashboard allows you to do it with one click:

[Screenshot: Meraki dashboard data rate settings]

Flood Detected

How does this all apply to IPv6? Well, it turns out that multicast has an interesting behavior on wireless networks. It seeks out the lowest data rate to send traffic. This ensures that all receivers get the packet. I asked Matthew Gast (@MatthewSGast) of Aerohive about this recently. He said that it’s up to the controller manufacturer to decide how multicast is handled. When I gave him an inquisitive look, he admitted that many vendors leave it up to the lowest common denominator, which is usually the 1 Mbps or 2 Mbps data rate.

This isn’t generally a problem. IPv4 multicast tends to be sporadic and short-lived at best. Most controllers have mechanisms in place for dealing with this, either by converting those multicasts to unicasts or by turning off multicast completely. A bit of extra traffic on the low data rates isn’t noticeable.

IPv6 has a much higher usage of multicast, however. Router Advertisements (RAs) and Multicast Listener Discovery (MLD) are critical to the operation of IPv6. So critical, in fact, that turning off Global Multicast on a Cisco wireless controller doesn’t disable RAs and MLD from happening. You must have multicast running for IPv6.

What happens when all that multicast traffic from IPv6 hits a controller with the lower data rates enabled? Gridlock. Without vendor intervention the MLD and RA packets will hop down to the lowest data rate and start flooding the network. Listeners will respond on the same low data rate and drag the network down to an almost-unusable speed. You can’t turn off the multicast to fix it either.
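To see why this hurts, consider the airtime a single multicast frame eats at each data rate. The Python below is a deliberately simplified back-of-the-envelope calculation: it ignores PLCP preamble, interframe spacing, and contention, and the 300-byte frame size is just an assumed example, but the relative difference is the point.

    # Simplified airtime estimate: payload bits divided by PHY rate.
    # Ignores PLCP preamble, interframe spacing, and contention; the
    # 300-byte frame size is an assumption for illustration only.
    FRAME_BYTES = 300

    def airtime_us(rate_mbps, frame_bytes=FRAME_BYTES):
        """Rough time on air, in microseconds, for one frame."""
        return frame_bytes * 8 / rate_mbps  # bits / Mbps works out to microseconds

    for rate in (1, 2, 5.5, 11, 24, 54):
        print(f"{rate:>4} Mbps -> ~{airtime_us(rate):6.0f} us per frame")

At 1 Mbps that one frame holds the channel roughly 24 times longer than it would at 24 Mbps, and IPv6 guarantees a steady stream of them.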

The solution is to prevent this all in the first place. You need to turn off the 802.11b low data rates on your controller. 1 Mbps, 2 Mbps, and 5.5 Mbps should all be disabled, both as a way to prevent older, slower clients from connecting to your wireless network and to keep newer clients running IPv6 from swamping it with multicast traffic.

There may still be some older clients out there that absolutely require 802.11b data rates, like medical equipment, but the best way to deal with these problematic devices is isolation. These devices likely won’t be running IPv6 any time in the future. Isolating them onto a separate SSID running the 802.11b data rates is the best way to ensure they don’t impact your other traffic. Make sure you read up on how to safely disable data rates and do it during a testing window to ensure you don’t break everything in the world. But you’ll find your network much more healthy when you do.


Tom’s Take

Legacy technology support is critical for continued operation. We can’t just drop something because we don’t want to deal with it any more. Anyone who has ever called a technical support line feels that pain. However, when the new technology doesn’t feasibly support working with older tech, it’s time to pull the plug. Whether it be 802.11b data rates or something software related, like dropping PowerPC app support in OS X, we have to keep marching forward to make new devices run at peak performance.

IPv6 has already exposed limitations of older technologies like DHCP and NAT. Wireless thankfully has a much easier way to support transitions. If you’re still running 802.11b data rates, turn them off. You’ll find your IPv6 transition will be much less painful if you do. And you can spend more time working with tech and less time trying to tread water.

 

The Walls Are On Fire

There’s no denying the fact that firewalls are a necessary part of modern perimeter security. NAT isn’t a security construct. Attackers have the equivalent of megaton nuclear arsenals with access to so many DDoS networks. Security admins have to do everything they can to prevent these problems from happening. But one look at the firewall market tells you something is terribly wrong.

Who’s Protecting First?

Take a look at this recent magic polygon from everyone’s favorite analyst firm:

FW Magic Polygon. Thanks to @EtherealMind.

I won’t deny that Checkpoint is on top. That’s mostly due to the fact that they have the biggest install base in enterprises. But I disagree with the rest of this mystical tesseract. How is Palo Alto a leader in the firewall market? I thought their devices were mostly designed around mitigating internal threats? And how is everyone not named Cisco, Palo Alto, or Fortinet relegated to the Niche Players corral?

The issue comes down to purpose. Most firewalls today aren’t packet filters. They aren’t designed to keep the bad guys out of your networks. They are unified threat management systems. That’s a fancy way of saying they have a whole bunch of software built on top of the packet filter to monitor outbound connections as well.

Insider threats are a huge security issue. People on the inside of your network have access. They sometimes have motive. And their power goes largely unchecked. They do need to be monitored by something, whether it be an IDS/IPS system or a data loss prevention (DLP) system that keeps sensitive data from leaking out. But how did all of those devices get lumped together?

Deploying security monitoring tools is as much art as it is science. IPS sensors can be placed in strategic points of your network to monitor traffic flows and take action on them. If you build it correctly you can secure a huge enterprise with relatively few systems.

But more is better, right? If three IPS units make you secure, six would make you twice as secure, right? No. What you end up with is twice as many notifications. Those start getting ignored quickly. That means the real problems slip through the cracks because no one pays attention any more. So rather than deploying multiple smaller units throughout the network, the new mantra is to put an IPS in the firewall in the front of the whole network.

The firewall is the best place for those sensors, right? All the traffic in the network goes through there after all. Well, except for the user-to-server traffic. Or traffic that is internally routed without traversing the Internet firewall. Crafty insiders can wreak havoc without ever touching an edge IPS sensor.

And that doesn’t even begin to describe the processing burden placed on the edge device by loading it down with more and more CPU-intensive software. Consider the following conversation:

Me: What is the throughput on your firewall?

Them: It’s 1 Gbps!

Me: What’s the throughput with all the features turned on?

Them: Much less than 1 Gbps…

When a selling point of your UTM firewall is that the numbers are “real”, you’ve got a real problem.
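A rough way to see why the “everything on” number collapses: each inspection engine adds per-packet work, and the penalties compound. The multipliers in this Python sketch are made-up assumptions purely to illustrate the effect; real numbers vary wildly by product and traffic mix.

    # Illustrative only: the per-feature throughput multipliers are
    # assumptions, not measurements from any real firewall.
    FEATURE_COST = {
        "app_identification": 0.70,  # deep packet inspection
        "ips":                0.50,  # signature matching
        "av_scanning":        0.60,  # stream reassembly and scanning
        "ssl_inspection":     0.40,  # decrypt and re-encrypt
    }

    def effective_throughput(datasheet_gbps, enabled_features):
        """Apply each enabled feature's penalty to the datasheet number."""
        rate = datasheet_gbps
        for feature in enabled_features:
            rate *= FEATURE_COST[feature]
        return rate

    # A "1 Gbps" box with everything enabled lands around 0.08 Gbps here.
    print(effective_throughput(1.0, list(FEATURE_COST)))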

What’s Protecting Second?

There’s an answer out there to fix this issue: disaggregation. We now have the capability to break out the various parts of a UTM device and run them all in virtual software constructs thanks to Network Function Virtualization (NFV). And they will run faster and more efficiently. Add in the ability to use SDN service chaining to ensure packet delivery and you have a perfect solution. For almost everyone.
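As a toy illustration of what disaggregation with service chaining means: traffic is steered through an ordered list of independent virtual functions instead of one monolithic box, and each function can be scaled or replaced on its own. The function names and checks below are invented for the example and don’t correspond to any particular NFV framework.

    # Toy model of SDN service chaining: a packet traverses an ordered list
    # of independently deployed virtual functions, any of which may drop it.
    # Function names and checks are invented for illustration.

    def packet_filter(pkt):
        return pkt if pkt.get("dst_port") in (80, 443) else None

    def ids_inspect(pkt):
        return None if "exploit" in pkt.get("payload", "") else pkt

    def dlp_scan(pkt):
        return None if "ssn=" in pkt.get("payload", "") else pkt

    SERVICE_CHAIN = [packet_filter, ids_inspect, dlp_scan]

    def forward(pkt, chain=SERVICE_CHAIN):
        """Steer the packet through each function in the chain."""
        for vnf in chain:
            pkt = vnf(pkt)
            if pkt is None:
                return None   # dropped somewhere in the chain
        return pkt            # delivered out the far side

    forward({"dst_port": 443, "payload": "GET /index.html"})

Scaling the IPS in this model is a matter of spinning up more instances of that one function rather than buying a bigger box for the whole chain.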

Who’s not going to like it? The big UTM vendors. The people that love selling oversize boxes to customers to meet throughput goals. Vendors that emphasize that their solution is the best because there’s one dashboard to see every alert and issue, even if those alerts don’t have anything to do with each other.

UTM firewalls that can reliably scan traffic at 1 Gbps are rare. Firewalls that can scan 10 Gbps traffic streams are practically non-existent. And what is out there costs a not-so-small fortune. And if you want to protect your data center you’re going to need a few of them. That’s a mighty big check to write.


Tom’s Take

There’s a reason why we call it Network Function Virtualization. The days of trying to cram every possible feature you could think of onto a single piece of hardware are over. We don’t need complicated all-in-one boxes that have insanely large CPUs. We have software constructs that can take care of all of that now.

While the engineers will like this brave new world, there are those that won’t. Vendors of the single box solutions will still tell you that their solution runs better. Analyst firms with a significant interest in the status quo will tell you NFV solutions are too far out or don’t address all the necessary features. It’s up to you to sort out the smoke from the flame.

Betting On The Right Horse


The announcement of the merger of Alcatel-Lucent and Nokia was a pretty big discussion last week. One of the quotes that kept being brought up in several articles was from John Chambers of Cisco. Chambers has said the IT industry is in for a big round of “brutal consolidation” spurred by “missed market transitions”, which is a favorite term for Chambers. While I agree that consolidation is coming in the industry, I don’t think market transitions are the driver. Instead, it helps to think of it more like a day at the races.

Tricky Ponies

Startups in the networking industry have to find a hook to get traction with investors and customers. Since you can’t boil the ocean, you have to stand out. You need to find an application that gives you the capability to sell into a market. That is much easier to do with SDN than hardware-based innovation. The time-to-market for software is much lower than the barriers to ramp up production of actual devices.

Being a one-trick pony isn’t a bad thing when it comes to SDN startups. If you pour all your talent into one project, you get the best you can build. If that happens to be what your company is known for, you can hit a home run with your potential customers. You could be the overlay company. Or the policy company. Or the Docker networking layer company.

That rapid development time and relative ease of creation makes startups a tantalizing target for acquisition as well. Bigger companies looking to develop expertise often buy that expertise. Either acquiring the product or the team that built it gives the acquiring company a new horse in their stable.

If you can assemble an entire stable of ponies, you can build a networking company that addresses a lot of the needs of your customers. In fact, that’s how Cisco has managed to thrive to the point where they can gamble on those “market transitions”. The entity we call Cisco is really Crescendo, Insieme, Nuova, Andiamo, and hundreds of other single focus networking companies. There’s nothing wrong with that strategy if you have patience and good leadership.

Buy Your Own Stable

If you don’t have patience but have deep pockets, you will probably end up going down a different road. Rather than buying a startup here and there to add to a core strategy, you’ll be buying the whole strategy. That’s what Dell did when they bought Force10. If the rumors are true, that’s what EMC is looking to do soon.

Buying a company to provide your strategy has benefits. You can immediately compete. You don’t have to figure out synergies. Just sell those products and keep moving forward. You may not be the most agile company on the market but you will get the job done.

The issue with buying the strategy is most often “brain drain”. We see brain drain with a small startup going to a mid-sized company. Startup founders aren’t usually geared to stay in a corporate structure for long. They vest their interest and cash out. Losing a founder or key engineer on a product line is tough, but can be overcome with a good team.

What happens when the whole team walks out the door? If the larger acquiring company mistreats the acquired assets or influences creativity in a negative way, you can quickly find your best and brightest teams heading for greener pastures. You have to make sure those people are taken care of and have their needs met. Otherwise your new product strategy will crumble before you know it.


Tom’s Take

The Nokia/Alcatel deal isn’t the last time we’ll hear about mergers of networking companies. But I don’t think it’s because of missed market transitions or shifting strategies. It comes down to companies with one or two products wanting protection from external factors. There is strength in numbers. And those numbers will also allow development of new synergies, just like horses in a stable learning from the other horses. If you’re a rich company with an interest in racing, you aren’t going to assemble a stable piece by piece. You’ll buy your way into an established stable. In the end, all the horses end up in a stable owned by someone. Just make sure your horse is the right one to bet on.

Going Out With Style


Watching the HP public cloud discussion has been an interesting lesson in technology and how it is supported and marketed. HP isn’t the first company to publish a bold statement ending support for a specific technology or product line only to go back and rescind it a few days later. Some think that a problem like that shows that a company has some inner turmoil with regards to product strategy. More often than not, the real issue doesn’t lie with the company. It’s the customers’ fault.

No Lemonade From Lemons

It’s no secret that products have a lifespan. No matter how popular something might be with customers there is always a date when it must come to an end. This could be for a number of reasons. Technology marches on each and every day. Software may not run on newer hardware. Drivers may not be able to be written for new devices. CPUs grow more powerful and require new functions to unlock their potential.

Customers hate the idea of obsolescence. If you tell them the thing they just bought will be out-of-date in six years they will sneer at you. No matter how fresh the technology might be, the idea of it going away in the future unnerves customers. Sometimes it’s because the customers have been burned on technology purchases in the past. For every VHS and Blu-Ray player sold, someone was just as happy to buy a Betamax or HD-DVD unit that is now collecting dust.

That hatred of obsolescence sometimes keeps things running well past their expiration date. The most obvious example in recent history is Microsoft being forced to support Windows XP. Prior to Windows XP, Microsoft supported consumer releases of Windows for about five years. Windows 95 was released in 1995 and support ended in 2001. Windows 98 reached EOL around the same time. Windows 2000 enjoyed ten years of support thanks to a shared codebase with popular server operating systems. Windows XP should have reached end-of-life shortly after the release of Windows Vista. Instead, the low adoption rate of Vista pushed system OEMs to keep installing Windows XP on their offerings. Even Windows 7 failed to move the needle significantly for some consumers to get off of XP. It finally took Microsoft dropping the hammer and setting a final end of extended support date in 2014 to get customers to migrate away from Windows XP. Even then, some customers were asking for an extension to the thirteen-year support date.

Microsoft kept supporting an OS three generations old because customers didn’t want to feel like XP had finally given up the ghost. Even though drivers couldn’t be written and security holes couldn’t be patched, consumers still wanted to believe that they could run XP forever. Even if you bought one of the last available copies of Windows XP when you purchased your system, you still got as much support for your OS as Microsoft gave Windows 95/98. Never mind that the programmers had moved on to other projects or had squeezed every last ounce of capability from the software. Consumers just didn’t want to feel like they’d been stuck with a lemon more than a decade after it had been created.

The Lesson of the Lifecycle

How does this apply to situations today? Companies have to make customers understand why things are being replaced. A simple announcement (or worse, a hint of an unofficial announcement from a third party source) isn’t enough any more. Customers may not like hearing that their favorite firewall or cloud platform is going away, but if you tell them the reasons behind the decision they will be more accepting.

Telling your customers that you are moving away from a public cloud platform to embrace hybrid clouds or to partner with another company doing a better job or offering more options is the way to go. Burying the announcement in a conversation with a journalist and then backtracking later isn’t the right method. Customers want to know why. Vendors should have faith that customers are smart enough to understand strategy. Sure, there’s always the chance that customers will push back like they did with Windows XP. But there’s just as much chance they’ll embrace the new direction.


Tom’s Take

I’m one of those consumers that hates obsolescence. Considering that I’ve got a Cius and a Flip it should be apparent that I don’t bet on the right horse every time. But I also understand the reasons why those devices are no longer supported. I choose to use Windows 7 on my desktop for my own reasons. I know why it has been put out to pasture. I’m not going to demand Microsoft devote time and energy to a tired platform when Windows 10 needs to be finished.

In the enterprise technology arena, I want companies to be honest and direct when the time comes to retire products. Don’t hem and haw about shifting landscapes and concise technology roadmaps. Tell the world that things didn’t work out like you wanted and give us the way you’re going to fix it next time.

That’s Using Your Embrane


Cisco announced their intent to acquire Embrane last week. Since they did it on April 1st, there was an initial thought that it might be a prank. But given that Cisco doesn’t really do April Fools jokes, it was quickly determined to be the real deal. More importantly, the Embrane acquisition plugs a very important hole in ACI that I have been worried about for a while.

Everybody Play Nice

Application Centric Infrastructure (ACI) is a great idea that works on the principle that Cisco can get multiple disparate systems to work together to “program” the underlying network to rapidly deploy applications and create policies that allow systems to be provisioned and reconfigured with a minimum of effort.

That’s a great idea in theory. And if you’re only working with Cisco gear it’s an easy thing to pull off. Provided you can easily integrate the ASA operating system with IOS and NX-OS. That’s not an easy chore, and all those business units work for the same company. Can you imagine how hard it would be to integrate with an external third party? Even one that is friendly to Cisco? What about a company that only implements the bare minimum functionality to make ACI operational?

ACI is predicated on the idea that all the systems in the network are going to work together to accomplish the goal of policy programming. That starts falling apart when systems are difficult to integrate or refuse to be a part of ACI. Sure, you could program around them. It wouldn’t take much to do an end run around an unruly switch or router. But what about a firewall or load balancer?

Those devices are more important to security and scalability of an application. You can’t just cut them out. You may even have regulations that require you to include them inline with the application. That means headaches if you are forced to work with something that won’t completely integrate.

Bring Your Own Toys

Enter Embrane. Embrane’s helios platform gives Cisco a stable of software firewalls and load balancers that can be spun up and deployed as needed on-demand. That means that unruly hardware can be bypassed when necessary. If your firewall doesn’t like ACI or won’t implement the shims needed to make them play nice, all you need to do is spin up an Embrane firewall. Since Embrane was integrating with ACI even before the acquisition, you know that everything is going to work just fine.

You can also use the Embrane Elastic Services Manager (ESM) to help manage those devices and reclaim them as needed. That sounds like a no-brainer, but if you ever find yourself booting a virtual system on a cluster that has charge-back enabled, or worse booting it on a public cloud provider and forgetting about it, you’ll find that using a lifecycle manager to avoid hundreds or thousands of dollars in charges is a great idea. ESM can also help you figure out how utilized your devices are and gives you a roadmap to add capacity when it’s needed. That way you never have to answer a phone call complaining the new application is running “slow”.
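Here’s a minimal sketch of the lifecycle idea, using hypothetical names and a made-up reclaim policy since I don’t have the ESM data model in front of me: track when each on-demand appliance last saw traffic and tear down anything idle past a threshold so forgotten instances don’t keep billing.

    # Hypothetical sketch of lifecycle management for on-demand appliances.
    # The data model and the four-hour idle policy are assumptions.
    from datetime import datetime, timedelta

    IDLE_LIMIT = timedelta(hours=4)

    instances = [
        {"name": "edge-fw-01", "last_traffic": datetime.now() - timedelta(minutes=20)},
        {"name": "test-lb-07", "last_traffic": datetime.now() - timedelta(days=3)},
    ]

    def reclaim_idle(instances, now=None):
        """Return the instances that should be torn down and reclaimed."""
        now = now or datetime.now()
        return [i for i in instances if now - i["last_traffic"] > IDLE_LIMIT]

    for inst in reclaim_idle(instances):
        print(f"Reclaiming {inst['name']} - idle past the limit")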


Tom’s Take

Embrane’s acquisition makes all the sense in the world. Cisco had put up a stake in the company in their last funding round. That could be seen as an initial investment to keep Embrane working down the ACI path instead of moving off onto other ideas. Now, Cisco makes good on that investment by bringing the Embrane team back in house, for a while at least. Cisco gets a braintrust that knows how to make on-demand SDN work.

It’s no shock that Embrane is going to be rolled into the INSBU that houses Insieme. These two teams are going to be working together very closely in the coming months to push the Embrane technology into the core of ACI and provide it as an offering to get potential customers off the fence and into the solution. More options for configuring policy based networks is always a great carrot for customers. Overcoming objections about incompatible hardware makes selling the software of ACI a no brainer.