About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer who is now an organizer for Tech Field Day. Tom has been in the IT industry since 2002 and has been a nerd since he first drew breath.

Open Choices In Networking


I had an interesting time at the spring meeting of the Open Networking User Group (@ONUG_) this past week. There were lots of discussions about networking, DevOps, and other assorted topics. One that caught me by surprise was some of the talk around openness. These tweets from Lisa Caywood (@RealLisaC) were especially telling:

After some discussion with other attendees, I think I’ve figured it out. People don’t want an open network. They want choice.

Flexible? Or Predictable?

Traditional networking marries software and hardware together. You want a Cisco switch? It runs IOS or NX-OS. Running Juniper? You can have any flavor of OS you want…as long as it’s Junos. That has been the accepted order of things for decades. Flexibility is traded for predictability. Traditional networking vendors give you many of the tools you need. If you need something different, you have to find the right mix of platform and software to get your goal accomplished. Mixing and matching is almost impossible.

This sounds an awful lot like the old IBM PC days. The same environment that gave rise to whitebox computers. We have a whitebox switching movement today as well for almost the same reasons – being able to run a different OS on cheaper hardware to the same end as the traditional integrated system. In return, you gain back that flexibility that you lost. There are some tradeoffs, however.

In theory, a whitebox switch is marginally harder to troubleshoot than a traditional platform. Which combination of OS and hardware are you running? How do those things interact to create bugs? Anyone who has ever tried to install USB device drivers on Windows knows that kind of pain. Getting everything to work right can be rough.

In practice, the support difference is negligible. Traditional vendors have a limited list of hardware, but the numerous versions of software (including engineering special code) interacting with those platforms can cause unforeseen consequences. Likewise, most third party switch OS vendors have a tight hardware compatibility list (HCL) to ensure that everything works well together.

People do like flexibility. Giving them options means they can build systems to their liking. But that’s only a part of the puzzle.

The Problem Is Choice

Many of the ONUG attendees I talked to liked the idea of whitebox switching. They weren’t entirely enamoured, however. When I pressed a bit deeper, a pattern started to emerge. It sounded an awful lot like this:

I don’t want to run Vendor X Linux on my switch. I want to run my Linux on a switch!

That answer highlighted the real issue. Open networking proponents don’t want open source networking that enhances the work of all. What they want is a flexible network that lets them run what they want on their own hardware.
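To make that desire concrete, here is a minimal sketch of my own (not tied to any vendor) of what “running my Linux on a switch” looks like in practice. It assumes a switch OS that exposes front-panel ports as ordinary Linux interfaces (the swp1/swp2 names are placeholders) with iproute2 installed, and drives them with the same commands you would use on any server.

```python
#!/usr/bin/env python3
"""Sketch: managing a whitebox switch with ordinary Linux tooling.

Assumes a switch OS that exposes front-panel ports as standard Linux
interfaces (swp1 and swp2 here are placeholders) and has iproute2 installed.
"""
import subprocess

def run(cmd):
    """Run a shell command, raise on failure, and return its output."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# Build a VLAN-aware bridge and add two front-panel ports to VLAN 100,
# exactly the way you would on any Linux server.
run("ip link add br0 type bridge vlan_filtering 1")
run("ip link set swp1 master br0")
run("ip link set swp2 master br0")
run("bridge vlan add dev swp1 vid 100")
run("bridge vlan add dev swp2 vid 100")
run("ip link set br0 up")
run("ip link set swp1 up")
run("ip link set swp2 up")
print(run("bridge vlan show"))
```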

The people that attend conferences like ONUG don’t like rigid choice options. Telling them they can run IOS or Junos is like picking the lesser of two evils. These people want to have a custom OS with the bare minimum needed to support a role in the network. They are used to solving problems outside the normal support chain. They chafe at the idea of being forced into a binary decision.

That goes back to Lisa’s tweets. People don’t want a totally open network running Quagga and other open source solutions. They want an open architecture that lets them rip and replace solutions based on who is cheaper that week or who upset them at the last account team meeting. They want the freedom to use their network as leverage to get better deals.

It’s a whole lot easier to get a better discount when you can legitimately threaten to have the incumbent thrown out and replaced relatively easily. Even if you have no intentions of doing so. Likewise, new advances in whitebox switching give you leverage to replace sections of the network and have feature parity with traditional vendors in all but a few corner cases. It seems to be yet another redefinition of open.


Tom’s Take

Maybe I’m just a cynic. I support development of software that makes the whole world better. My idea of open involves people working together to make everything better. It’s not about using strategies to make just my life easier. Enterprises are big consumers of open technologies with very little reciprocity outside of a few forward thinkers.

Maybe the problem is that we’ve overloaded open to mean so many other things that we have cognitive dissonance when we try to marry the various open ideas together? Open network architecture is easy as long as you stick to OSPF and other “standard” protocols. Perhaps the problem of choice is being shortsighted enough to make the wrong one.

 

The Light On The Fiber Mountain


Fabric switching systems have been a popular solution for many companies in the past few years. Juniper has QFabric and Brocade has VCS. For those not invested in fabrics, the trend has been to collapse the traditional three tier network model down into a spine-leaf architecture to optimize east-west traffic flows. One must wonder how much more optimized that solution can be. As it turns out, there is a bit more that can be coaxed out of it.

Shine A Light On Me

During Interop, I had a chance to speak with the folks over at Fiber Mountain (@FiberMountain) about what they’ve been up to in their solution space. I had heard about their revolutionary SDN offering for fiber. At first, I was a bit doubtful. SDN gets thrown around a lot on new technology as a way to sell it to people that buy buzzwords. I wondered how a fiber networking solution could even take advantage of software.

My chat with M. H. Raza started out with a prop. He showed me one of the new Multifiber Push On (MPO) connectors that represent the new wave of high-density fiber. Each cable, which is roughly the size and shape of a SATA cable, contains 12 or 24 fiber connections. These are very small and pre-configured in a standardized connector. This connector can plug into a server network card and provide several light paths to a server. This connector and the fibers it terminates are the building block for Fiber Mountain’s solution.

With so many fibers running to a server, Fiber Mountain can use their software intelligence to start doing interesting things. They can begin to build dedicated traffic lanes for applications and other traffic by isolating that traffic onto fibers already terminated on a server. The connectivity already exists on the server. Fiber Mountain just takes advantage of it. It feels very similar to the way we add additional gigabit network ports when we need to expand things like VMkernel ports or dedicated traffic lanes for other data.

Quilting Circle

Where this solution starts looking more like a fabric is what happens when you put Fiber Mountain Optical Exchange devices in the middle. These switching devices act like aggregation ports in the “spine” of the network. They can aggregate fibers from top-of-rack switches or from individual servers. These exchanges tag each incoming fiber and add them to the Alpine Orchestration System (AOS), which keeps track of the connections just like the interconnections in a fabric.

Once AOS knows about all the connections in the system, you can use it to start building pathways between east-west traffic flows. You can ensure that traffic between a web server and backend database has dedicated connectivity. You can add additional resources between systems that are currently engaged in heavy processing. You can also dedicate traffic lanes to backup jobs. You can do quite a bit from the AOS console.
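To picture what that programming amounts to, here’s a toy model of my own (not Fiber Mountain’s actual API, and every name in it is hypothetical): the controller’s core job boils down to maintaining a cross-connect table that maps an ingress fiber strand to an egress strand, and building a dedicated lane is just adding an entry to that table.

```python
"""Toy model of layer 1 path programming -- not Fiber Mountain's actual API.

The idea: the controller keeps a map of which ingress fiber strand connects
to which egress strand, so "programming a light path" is adding an entry.
All class and port names here are hypothetical.
"""
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FiberPort:
    exchange: str   # which optical exchange the strand lands on
    strand: int     # strand number inside the MPO connector (1-24)

@dataclass
class CrossConnectTable:
    paths: dict = field(default_factory=dict)

    def connect(self, ingress: FiberPort, egress: FiberPort) -> None:
        """Create a dedicated lane: light entering 'ingress' exits 'egress'."""
        self.paths[ingress] = egress

    def release(self, ingress: FiberPort) -> None:
        """Reclaim the lane when the application no longer needs it."""
        self.paths.pop(ingress, None)

# Dedicate a lane between a web server strand and a database server strand.
table = CrossConnectTable()
table.connect(FiberPort("exchange-1", strand=3), FiberPort("exchange-1", strand=17))
print(table.paths)
```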

Now you have a layer 1 switching fabric without any additional pieces in the middle. The exchanges function almost like passthrough devices. The brains of the system exist in AOS. Remember when Ivan Pepelnjak (@IOSHints) spent all his time pulling QFabric apart to find out what made it tick? The Fiber Mountain solution doesn’t use BGP or MPLS or any other magic protocol sauce. It runs at layer 1. The light paths are programmed by AOS and the packets are switched across the dense fiber connections. It’s almost elegant in its simplicity.

Future Illumination

The Fiber Mountain solution has some great promise. Today, most of the operations of the system require manual intervention. You must build out the light paths between servers based on educated guesses. You must manually add additional light paths when extra bandwidth is needed.

Where they can really improve their offering in the future is by adding intelligence to AOS to make those decisions automatically based on predefined thresholds and inputs. If the system can detect bigger “elephant” traffic flows and automatically provision more bandwidth or isolate these high volume packet generators, it will go a long way toward making things much easier on network admins. It would also be great to provide a way to feed that “top talker” data into other systems to alert network admins when traffic flows get high and need additional resources.
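As a rough sketch of the kind of automation I have in mind (hypothetical on my part, not a feature of AOS today), the logic is simple: watch per-lane utilization and react when a lane crosses a threshold.

```python
"""Toy sketch of threshold-based automation -- hypothetical, not part of AOS."""

UTILIZATION_THRESHOLD = 0.8   # provision more capacity above 80% of a lane

def rebalance(lanes, provision_lane, notify):
    """lanes: dict of lane name -> utilization (0.0-1.0).
    provision_lane / notify: callbacks supplied by the orchestration system."""
    for name, util in lanes.items():
        if util >= UTILIZATION_THRESHOLD:
            provision_lane(name)                    # add a parallel light path
            notify(f"{name} at {util:.0%}; extra lane provisioned")

# Example run with stand-in callbacks:
rebalance(
    {"web-to-db": 0.92, "backup": 0.35},
    provision_lane=lambda lane: print(f"provisioning new lane alongside {lane}"),
    notify=print,
)
```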


Tom’s Take

I like the Fiber Mountain solution. They’ve built a layer 1 fabric that performs similarly to the ones from Juniper and Brocade. They are taking full advantage of the resources provided by the MPO fiber connectors. By adding a new network card to a server, you can test this system without impacting other traffic flows. Fiber Mountain even told me that they are looking at trial installations for customers to bring their technology in at lower costs as a project to show the value to decision makers.

Fiber Mountain has a great start on building a low latency fiber fabric with intelligence. I’ll be keeping a close eye on where the technology goes in the future to see how it integrates into the entire network and brings SDN features we all need in our networks.

 

Could IPv6 Drown My Wireless Network?


By now, the transition to adopt IPv6 networks is in full swing. Registries are running out of IPv4 prefixes and new users overseas are getting v6-only allocations for new circuits. Mobile providers are going v6-only and transition mechanisms are in place to ease the migration. You can hear about some of these topics in this recent roundtable recorded at Interop last week:

One of the conversations that I had with Ed Horley (@EHorley) during Interop opened my eyes to another problem that we will soon be facing with IPv6 and legacy technology. Only this time, it’s not because of a numbering scheme. It’s because of old hardware.

Rate Limited

Technology always marches on. Things that seemed magical to us just five years ago are now antiquated and slow. That’s the problem with the original 802.11 specification. It supported wireless data rates at a paltry 1 Mbps and 2 Mbps. When 802.11b was released, it raised the rates to 5.5 Mbps and 11 Mbps. Those faster data rates, combined with a larger coverage area, helped 802.11b become commercially successful.

Now, we have 802.11n with data rates in the hundreds of Mbps. We also have 802.11ac right around the corner with rates approaching 1 Gbps. It’s a very fast wireless world. But thanks to the need to be backwards compatible with existing technology, even those fast new 802.11n access points still support the old 1 & 2 Mbps data rates of 802.11. This is great if you happen to have a wireless device from the turn of the millennium. It’s not so great if you are a wireless engineer supporting such an installation.

Wireless LAN professionals have been talking for the past couple of years about how important it is to disable the 1, 2, and 5.5 Mbps data rates in your wireless networks. Modern equipment will only utilize those data rates when far away from the access point and modern design methodology ensures you won’t be far from an access point. Removing support for those devices forces the hardware to connect at a higher data rate and preserve the overall air quality. Even one 802.11b device connecting to your wireless network can cause the whole network to be dragged down to slow data rates. How important is it to disable these settings? Meraki’s dashboard allows you to do it with one click:

[Screenshot: minimum data rate setting in the Meraki dashboard]

Flood Detected

How does this all apply to IPv6? Well, it turns out that multicast has an interesting behavior on wireless networks. It seeks out the lowest data rate to send traffic. This ensures that all receivers get the packet. I asked Matthew Gast (@MatthewSGast) of Aerohive about this recently. He said that it’s up to the controller manufacturer to decide how multicast is handled. When I gave him an inquisitive look, he admitted that many vendors leave it up to the lowest common denominator, which is usually the 1 Mbps or 2 Mbps data rate.

This isn’t generally a problem. IPv4 multicast tends to be sporadic and short-lived at best. Most controllers have mechanisms in place for dealing with this, either by converting those multicasts to unicasts or by turning off multicast completely. A bit of extra traffic on the low data rates isn’t noticeable.

IPv6 has a much higher usage of multicast, however. Router Advertisements (RAs) and Multicast Listener Discovery (MLD) are critical to the operation of IPv6. So critical, in fact, that turning off Global Multicast on a Cisco wireless controller doesn’t disable RAs and MLD from happening. You must have multicast running for IPv6.
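To make that multicast dependence concrete, here’s a small sketch of my own using the scapy library (an assumption purely for illustration): a Router Advertisement is addressed to the all-nodes multicast group ff02::1, so every station on the SSID hears it.

```python
"""Why IPv6 leans so hard on multicast -- a sketch using scapy.

Router Advertisements go to the all-nodes multicast group ff02::1, and MLD
reports are multicast as well; none of this is optional unicast traffic you
can simply turn off. Building (not sending) the packet is enough to see the
addressing. Requires the scapy package; addresses below are documentation
prefixes used as examples.
"""
from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo

ra = (
    IPv6(src="fe80::1", dst="ff02::1")          # all-nodes multicast group
    / ICMPv6ND_RA(routerlifetime=1800)
    / ICMPv6NDOptPrefixInfo(prefix="2001:db8:1::", prefixlen=64)
)
ra.show()   # note dst=ff02::1 -- every station on the SSID hears this frame
```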

What happens when all that multicast traffic from IPv6 hits a controller with the lower data rates enabled? Gridlock. Without vendor intervention, the MLD and RA packets will hop down to the lowest data rate and start flooding the network. Listeners will respond on the same low data rate and drag the network down to an almost-unusable speed. You can’t turn off the multicast to fix it either.

The solution is to prevent this all in the first place. You need to turn off the 802.11b low data rates on your controller. 1 Mbps, 2 Mbps, and 5.5 Mbps should all be disabled, both as a way to prevent older, slower clients from connecting to your wireless network and to keep newer clients running IPv6 from swamping it with multicast traffic.
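As one hedged example of how that change might be scripted, here’s a sketch that assumes an AireOS-style controller reachable over SSH and the netmiko library’s cisco_wlc device type. The hostname and credentials are placeholders; verify the exact commands against your own platform and make the change in a maintenance window.

```python
"""Sketch: disabling the 802.11b data rates on an AireOS-style WLC.

Assumes SSH access to the controller and the netmiko library with its
cisco_wlc device type; the commands follow AireOS syntax, so check them
against your own controller and change window before running.
"""
from netmiko import ConnectHandler

wlc = ConnectHandler(
    device_type="cisco_wlc",
    host="wlc.example.com",      # placeholder hostname
    username="admin",            # placeholder credentials
    password="changeme",
)

# Changing rates requires the 802.11b/g network to be down momentarily.
commands = [
    "config 802.11b disable network",
    "config 802.11b rate disabled 1",
    "config 802.11b rate disabled 2",
    "config 802.11b rate disabled 5.5",
    "config 802.11b rate mandatory 12",   # pick your new lowest mandatory rate
    "config 802.11b enable network",
]
for cmd in commands:
    print(wlc.send_command(cmd))

wlc.disconnect()
```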

There may still be some older clients out there that absolutely require 802.11b data rates, like medical equipment, but the best way to deal with these problematic devices is isolation. These devices likely won’t be running IPv6 any time in the future. Isolating them onto a separate SSID running the 802.11b data rates is the best way to ensure they don’t impact your other traffic. Make sure you read up on how to safely disable data rates and do it during a testing window to ensure you don’t break everything in the world. But you’ll find your network much more healthy when you do.


Tom’s Take

Legacy technology support is critical for continued operation. We can’t just drop something because we don’t want to deal with it any more. Anyone who has ever called a technical support line feels that pain. However, when the new technology can’t feasibly support working with older tech, it’s time to pull the plug. Whether it be 802.11b data rates or something software related, like dropping PowerPC app support in OS X, we have to keep marching forward to make new devices run at peak performance.

IPv6 has already exposed limitations of older technologies like DHCP and NAT. Wireless thankfully has a much easier way to support transitions. If you’re still running 802.11b data rates, turn them off. You’ll find your IPv6 transition will be much less painful if you do. And you can spend more time working with tech and less time trying to tread water.

 

The Walls Are On Fire

There’s no denying the fact that firewalls are a necessary part of modern perimeter security. NAT isn’t a security construct. Attackers have the equivalent of megaton nuclear arsenals with access to so many DDoS networks. Security admins have to do everything they can to prevent these problems from happening. But one look at the firewall market tells you something is terribly wrong.

Who’s Protecting First?

Take a look at this recent magic polygon from everyone’s favorite analyst firm:


FW Magic Polygon. Thanks to @EtherealMind.

I won’t deny that Checkpoint is on top. That’s mostly due to the fact that they have the biggest install base in enterprises. But I disagree with the rest of this mystical tesseract. How is Palo Alto a leader in the firewall market? I thought their devices were mostly designed around mitigating internal threats? And how is everyone not named Cisco, Palo Alto, or Fortinet relegated to the Niche Players corral?

The issue comes down to purpose. Most firewalls today aren’t packet filters. They aren’t designed to keep the bad guys out of your networks. They are unified threat management systems. That’s a fancy way of saying they have a whole bunch of software built on top of the packet filter to monitor outbound connections as well.

Insider threats are a huge security issue. People on the inside of your network have access. They sometimes have motive. And their power goes largely unchecked. They do need to be monitored by something, whether it be an IDS/IPS system or a data loss prevention (DLP) system that keeps sensitive data from leaking out. But how did all of those devices get lumped together?

Deploying security monitoring tools is as much art as it is science. IPS sensors can be placed in strategic points of your network to monitor traffic flows and take action on them. If you build it correctly you can secure a huge enterprise with relatively few systems.

But more is better, right? If three IPS units make you secure, six would make you twice as secure, right? No. What you end up with is twice as many notifications. Those start getting ignored quickly. That means the real problems slip through the cracks because no one pays attention any more. So rather than deploying multiple smaller units throughout the network, the new mantra is to put an IPS in the firewall in the front of the whole network.

The firewall is the best place for those sensors, right? All the traffic in the network goes through there after all. Well, except for the user-to-server traffic. Or traffic that is internally routed without traversing the Internet firewall. Crafty insiders can wreak havoc without ever touching an edge IPS sensor.

And that doesn’t even begin to describe the processing burden placed on the edge device by loading it down with more and more CPU-intensive software. Consider the following conversation:

Me: What is the throughput on your firewall?

Them: It’s 1 Gbps!

Me: What’s the throughput with all the features turned on?

Them: Much less than 1 Gbps…

When a selling point of your UTM firewall is that the numbers are “real”, you’ve got a real problem.

What’s Protecting Second?

There’s an answer out there to fix this issue: disaggregation. We now have the capability to break out the various parts of a UTM device and run them all in virtual software constructs thanks to Network Function Virtualization (NFV). And they will run faster and more efficiently. Add in the ability to use SDN service chaining to ensure packet delivery and you have a perfect solution. For almost everyone.
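To illustrate the idea (a toy model of my own, not any vendor’s implementation), think of each disaggregated function as a small piece of software and the service chain as an ordered list that traffic is steered through. Scaling out then means running more copies of whichever function is the bottleneck instead of buying a bigger box.

```python
"""Toy model of NFV service chaining -- an illustration, not a vendor design.

Each network function is a callable that can inspect a packet and either
pass it along or drop it; the 'chain' is just an ordered list of functions.
Requires Python 3.9+ for the list[...] annotation.
"""
from typing import Callable, Optional

Packet = dict
NetworkFunction = Callable[[Packet], Optional[Packet]]

def firewall(pkt: Packet) -> Optional[Packet]:
    """Plain packet filter: drop anything not destined for an allowed port."""
    return pkt if pkt.get("dst_port") in {80, 443} else None

def ips(pkt: Packet) -> Optional[Packet]:
    """Stand-in inspection step: drop packets flagged with a known signature."""
    return None if pkt.get("payload") == "exploit" else pkt

def service_chain(pkt: Packet, chain: list[NetworkFunction]) -> Optional[Packet]:
    """Steer the packet through each virtual function in order."""
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:       # a function in the chain dropped it
            return None
    return pkt

# Only traffic that needs deep inspection has to pay for the IPS hop.
edge_chain = [firewall, ips]
print(service_chain({"dst_port": 443, "payload": "GET /"}, edge_chain))
print(service_chain({"dst_port": 443, "payload": "exploit"}, edge_chain))
```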

Who’s not going to like it? The big UTM vendors. The people that love selling oversize boxes to customers to meet throughput goals. Vendors that emphasize that their solution is the best because there’s one dashboard to see every alert and issue, even if those alerts don’t have anything to do with each other.

UTM firewalls that can reliably scan traffic at 1 Gbps are rare. Firewalls that can scan 10 Gbps traffic streams are practically non-existent. And what is out there costs a not-so-small fortune. And if you want to protect your data center you’re going to need a few of them. That’s a mighty big check to write.


Tom’s Take

There’s a reason why we call it Network Function Virtualization. The days when you tried to cram every possible feature you could think of onto a single piece of hardware are over. We don’t need complicated all-in-one boxes that have insanely large CPUs. We have software constructs that can take care of all of that now.

While the engineers will like this brave new world, there are those that won’t. Vendors of the single box solutions will still tell you that their solution runs better. Analyst firms with a significant interest in the status quo will tell you NFV solutions are too far out or don’t address all the necessary features. It’s up to you to sort out the smoke from the flame.

Betting On The Right Horse


The announcement of the merger of Alcatel-Lucent and Nokia was a big topic of discussion last week. One of the quotes that kept being brought up in several articles was from John Chambers of Cisco. Chambers has said the IT industry is in for a big round of “brutal consolidation” spurred by “missed market transitions”, which is a favorite term for Chambers. While I agree that consolidation is coming in the industry, I don’t think market transitions are the driver. Instead, it helps to think of it more like a day at the races.

Tricky Ponies

Startups in the networking industry have to find a hook to get traction with investors and customers. Since you can’t boil the ocean, you have to stand out. You need to find an application that gives you the capability to sell into a market. That is much easier to do with SDN than hardware-based innovation. The time-to-market for software is much lower than the barriers to ramp up production of actual devices.

Being a one-trick pony isn’t a bad thing when it comes to SDN startups. If you pour all your talent into one project, you get the best you can build. If that happens to be what your company is known for, you can hit a home run with your potential customers. You could be the overlay company. Or the policy company. Or the Docker networking layer company.

That rapid development time and relative ease of creation makes startups a tantalizing target for acquisition as well. Bigger companies looking to develop expertise often buy that expertise. Acquiring either the product or the team that built it gives the acquiring company a new horse in their stable.

If you can assemble an entire stable of ponies, you can build a networking company that addresses a lot of the needs of your customers. In fact, that’s how Cisco has managed to thrive to the point where they can gamble on those “market transitions”. The entity we call Cisco is really Crescendo, Insieme, Nuova, Andiamo, and hundreds of other single focus networking companies. There’s nothing wrong with that strategy if you have patience and good leadership.

Buy Your Own Stable

If you don’t have patience but have deep pockets, you will probably end up going down a different road. Rather than buying a startup here and there to add to a core strategy, you’ll be buying the whole strategy. That’s what Dell did when they bought Force10. If the rumors are true, that’s what EMC is looking to do soon.

Buying a company to provide your strategy has benefits. You can immediately compete. You don’t have to figure out synergies. Just sell those products and keep moving forward. You may not be the most agile company on the market but you will get the job done.

The issue with buying the strategy is most often “brain drain”. We see brain drain with a small startup going to a mid-sized company. Startup founders aren’t usually geared to stay in a corporate structure for long. They vest their interest and cash out. Losing a founder or key engineer on a product line is tough, but can be overcome with a good team.

What happens when the whole team walks out the door? If the larger acquiring company mistreats the acquired assets or influences creativity in a negative way, you can quickly find your best and brightest teams heading for greener pastures. You have to make sure those people are taken care of and have their needs met. Otherwise your new product strategy will crumble before you know it.


Tom’s Take

The Nokia/Alcatel deal isn’t the last time we’ll hear about mergers of networking companies. But I don’t think it’s because of missed market transitions or shifting strategies. It comes down to companies with one or two products wanting protection from external factors. There is strength in numbers. And those numbers will also allow development of new synergies, just like horses in a stable learning from the other horses. If you’re a rich company with an interest in racing, you aren’t going to assemble a stable piece by piece. You’ll buy your way into an established stable. In the end, all the horses end up in a stable owned by someone. Just make sure your horse is the right one to bet on.

Going Out With Style


Watching the HP public cloud discussion has been an interesting lesson in technology and how it is supported and marketed. HP isn’t the first company to publish a bold statement ending support for a specific technology or product line only to go back and rescind it a few days later. Some think that a problem like that shows that a company has some inner turmoil with regards to product strategy. More often than not, the real issue doesn’t lie with the company. It’s the customers’ fault.

No Lemonade From Lemons

It’s no secret that products have a lifespan. No matter how popular something might be with customers there is always a date when it must come to an end. This could be for a number of reasons. Technology marches on each and every day. Software may not run on newer hardware. Drivers may not be able to be written for new devices. CPUs grow more powerful and require new functions to unlock their potential.

Customers hate the idea of obsolescence. If you tell them the thing they just bought will be out-of-date in six years they will sneer at you. No matter how fresh the technology might be, the idea of it going away in the future unnerves customers. Sometimes it’s because the customers have been burned on technology purchases in the past. For every VHS and Blu-Ray player sold, someone was just as happy to buy a Betamax or HD-DVD unit that is now collecting dust.

That hatred of obsolescence sometimes keeps things running well past their expiration date. The most obvious example in recent history is Microsoft being forced to support Windows XP. Prior to Windows XP, Microsoft supported consumer releases of Windows for about five years. Windows 95 was released in 1995 and support ended in 2001. Windows 98 reached EOL around the same time. Windows 2000 enjoyed ten years of support thanks to a shared codebase with popular server operating systems. Windows XP should have reached end-of-life shortly after the release of Windows Vista. Instead, the low adoption rate of Vista pushed system OEMs to keep installing Windows XP on their offerings. Even Windows 7 failed to move the needle significantly for some consumers to get off of XP. It finally took Microsoft dropping the hammer and setting a final end of extended support date in 2014 to get customers to migrate away from Windows XP. Even then, some customers were asking for an extension to the thirteen-year support date.

Microsoft kept supporting an OS three generations old because customers didn’t want to feel like XP had finally given up the ghost. Even though drivers couldn’t be written and security holes couldn’t be patched, consumers still wanted to believe that they could run XP forever. Even if you bought one of the last available copies of Windows XP when you purchased your system, you still got as much support for your OS as Microsoft gave Windows 95/98. Never mind that the programmers had moved on to other projects or had squeezed every last ounce of capability from the software. Consumers just didn’t want to feel like they’d been stuck with a lemon more than a decade after it had been created.

The Lesson of the Lifecycle

How does this apply to situations today? Companies have to make customers understand why things are being replaced. A simple announcement (or worse, a hint of an unofficial announcement from a third party source) isn’t enough any more. Customers may not like hearing that their favorite firewall or cloud platform is going away, but if you tell them the reasons behind the decision they will be more accepting.

Telling your customers that you are moving away from a public cloud platform to embrace hybrid clouds or to partner with another company doing a better job or offering more options is the way to go. Burying the announcement in a conversation with a journalist and then backtracking later isn’t the right method. Customers want to know why. Vendors should have faith that customers are smart enough to understand strategy. Sure, there’s always the chance that customers will push back like they did with Windows XP. But there’s just as much chance they’ll embrace the new direction.


Tom’s Take

I’m one of those consumers that hates obsolescence. Considering that I’ve got a Cius and a Flip it should be apparent that I don’t bet on the right horse every time. But I also understand the reasons why those devices are no longer supported. I choose to use Windows 7 on my desktop for my own reasons. I know why it has been put out to pasture. I’m not going to demand Microsoft devote time and energy to a tired platform when Windows 10 needs to be finished.

In the enterprise technology arena, I want companies to be honest and direct when the time comes to retire products. Don’t hem and haw about shifting landscapes and concise technology roadmaps. Tell the world that things didn’t work out like you wanted and give us the way you’re going to fix it next time.

That’s Using Your Embrane


Cisco announced their intent to acquire Embrane last week. Since they did it on April 1st, there was an initial thought that it might be a prank. But given that Cisco doesn’t really do April Fools jokes, it was quickly determined to be the real deal. More importantly, the Embrane acquisition plugs a very important hole in ACI that I have been worried about for a while.

Everybody Play Nice

Application Centric Infrastructure (ACI) is a great idea that works on the principle that Cisco can get multiple disparate systems to work together to “program” the underlying network to rapidly deploy applications and create policies that allow systems to be provisioned and reconfigured with a minimum of effort.

That’s a great idea in theory. And if you’re only working with Cisco gear it’s an easy thing to pull off. Provided you can easily integrate the ASA operating system with IOS and NX-OS. That’s not an easy chore, and all those business units work for the same company. Can you imagine how hard it would be to integrate with an external third party? Even one that is friendly to Cisco? What about a company that only implements the bare minimum functionality to make ACI operational?

ACI is predicated on the idea that all the systems in the network are going to work together to accomplish the goal of policy programming. That starts falling apart when systems are difficult to integrate or refuse to be a part of ACI. Sure, you could program around them. It wouldn’t take much to do an end run around an unruly switch or router. But what about a firewall or load balancer?

Those devices are more important to security and scalability of an application. You can’t just cut them out. You may even have regulations that require you to include them inline with the application. That means headaches if you are forced to work with something that won’t completely integrate.

Bring Your Own Toys

Enter Embrane. Embrane’s heleos platform gives Cisco a stable of software firewalls and load balancers that can be spun up and deployed on demand. That means that unruly hardware can be bypassed when necessary. If your firewall doesn’t like ACI or won’t implement the shims needed to make it play nice, all you need to do is spin up an Embrane firewall. Since Embrane was integrating with ACI even before the acquisition, you know that everything is going to work just fine.

You can also use the Embrane Elastic Services Manager (ESM) to help manage those devices and reclaim them as needed. That sounds like a no-brainer, but if you have ever found yourself booting a virtual system on a cluster that has charge-back enabled, or worse, booting it on a public cloud provider and forgetting about it, you know that using a lifecycle manager to avoid hundreds or thousands of dollars in charges is a great idea. ESM can also help you figure out how utilized your devices are and gives you a roadmap to add capacity when it’s needed. That way you never have to answer a phone call complaining that the new application is running “slow”.
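As a toy illustration of that lifecycle idea (hypothetical on my part, not Embrane’s actual ESM API), the core of such a manager is just tracking when each on-demand instance was last used and sweeping up anything that has gone idle before the bill arrives.

```python
"""Toy lifecycle-manager sketch -- hypothetical, not Embrane's actual ESM API.

The idea it illustrates: every on-demand service gets tracked when it is
spun up, so idle instances can be found and reclaimed automatically.
"""
import time

class ServiceLifecycleManager:
    def __init__(self, max_idle_seconds: float):
        self.max_idle = max_idle_seconds
        self.instances = {}            # name -> last time the service saw use

    def deploy(self, name: str) -> None:
        """Spin up an on-demand firewall or load balancer and start tracking it."""
        self.instances[name] = time.time()
        print(f"deployed {name}")

    def touch(self, name: str) -> None:
        """Record that the instance is still doing useful work."""
        self.instances[name] = time.time()

    def reclaim_idle(self) -> None:
        """Tear down anything that has sat unused past the idle limit."""
        now = time.time()
        for name, last_used in list(self.instances.items()):
            if now - last_used > self.max_idle:
                del self.instances[name]
                print(f"reclaimed {name} (idle)")

# Usage: deploy a firewall for a project, then sweep for forgotten instances.
esm = ServiceLifecycleManager(max_idle_seconds=3600)
esm.deploy("project-x-fw")
esm.reclaim_idle()
```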


Tom’s Take

Embrane’s acquisition makes all the sense in the world. Cisco had put up a stake in the company in their last funding round. That could be seen as an initial investment to keep Embrane working down the ACI path instead of moving off onto other ideas. Now, Cisco makes good on that investment by bringing the Embrane team back in house, for a while at least. Cisco gets a braintrust that knows how to make on-demand SDN work.

It’s no shock that Embrane is going to be rolled into the INSBU that houses Insieme. These two teams are going to be working together very closely in the coming months to push the Embrane technology into the core of ACI and provide it as an offering to get potential customers off the fence and into the solution. More options for configuring policy based networks is always a great carrot for customers. Overcoming objections about incompatible hardware makes selling the software of ACI a no brainer.