Cumulus Networks Could Be The New Microsoft


When I was at HP Discover last December, I noticed a few people running around wearing Cumulus Networks shirts. That had me a bit curious, as Cumulus isn’t usually on the best of terms with traditional networking vendors unless they have a partnership. After some digging, I found out that HP would be announcing a “britebox” branded whitebox switch soon running Cumulus Linux. I wrote a post vaguely hinting about this in as much detail as I dared leak out.

No surprise that HP has formally announced their partnership with Cumulus. This is a great win for HP in the long run, as it gives customers the option to work with an up-and-coming network operating system (NOS) alongside HP support and hardware. Note that the article mentions a hardware manufacturing deal with Accton, but I wouldn’t be at all surprised to learn that Accton had already been making a large portion of HP’s switching line. Just a different sticker on the box.

Written Once, Runs Everywhere

The real winner here is Cumulus. They have partnered with Dell and HP to bring their NOS to some very popular traditional network vendor hardware. Given that they continue to push Cumulus Linux on traditional whitebox hardware, they are positioning themselves the same way that Microsoft did back in the 1980s when the IBM Clone PC market really started to take off.

Microsoft’s master stroke wasn’t building an empire around a GUI. It was creating software that ran on almost every hardware variation on the market. That common platform provided consistency for programmers the world over. You never had to worry about what OS was running on an IBM Clone. You could be almost certain it was MS-DOS. In fact, that commonality of platform is what enabled Microsoft to build their GUI interface on top. While DOS was eventually phased out of Windows in favor of WinNT kernels, the legacy of DOS still remains on the command line.

Hardware comes and goes every year, even with device vendors that are very tied to their hardware, like Apple. Look at the hardware differences between the first iPhone and the latest iPhone 6+. They are almost totally alien. Then look at the operating system running on each of them. They are remarkably similar, which is especially amazing given the eight-year gap between them. That consistency of experience has allowed app developers to be comfortable writing apps that will work for more than one generation of hardware.

Bash Brothers

Cumulus is positioning themselves to do something very similar. They are creating a universal NOS interface to switches. Rather than pinning their hopes on the aging Cisco IOS CLI (and avoiding a potential lawsuit in the process), Cumulus has decided to go with Bash. Bash is almost universal for those who work on Linux, and if you’re an old-school UNIX admin it doesn’t take long to adapt to Bash either. That common platform means that you have a legion of trained engineers and architects who know how to use your system.

More importantly, you have a legion of people that know how to write software to extend your system. You can create Bash scripts and programs to do many things. Cumulus even created ifupdown2 to simplify network interface administration for network admins. If you can extend the interface of a networking device with relative ease, you’ve found the key to unlimited expansion.
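To make that a bit more concrete, here’s a rough sketch of what that extensibility might look like on a Cumulus Linux box, assuming the stock ifupdown2 syntax. The interface name and address are just placeholders:

```bash
# Define a front-panel port the same way you'd define any Linux interface.
# ifupdown2 reads the familiar /etc/network/interfaces format.
cat >> /etc/network/interfaces <<'EOF'
auto swp1
iface swp1
    address 192.0.2.1/24
EOF

# ifreload ships with ifupdown2 and applies the change without
# bouncing unrelated interfaces.
ifreload -a
```

Nothing exotic, and that’s the point. Anyone comfortable in Bash can script this, wrap it in a cron job, or hand it off to an orchestration tool.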

Think about the number of appliances you use every day that you never know are running Linux. I said previously that Linux won the server war because it is everywhere now and yet you don’t know it’s Linux. In the same way, I can see Cumulus negotiating to get the software offered as an option for both whitebox and britebox switches in the future. Once that happens, you can start to see the implications. If developers are writing apps and programs to extend Cumulus Linux and not the traditional switch OS, consumers will choose the more extensible option if everything else is equal. That means more demand for Cumulus. Which pours more resources into development. Which is how MS-DOS took over the world and led to Windows domination, while OS/2 died a quiet, protracted death.


Tom’s Take

When I first tweeted my thoughts about Cumulus Networks and their potential rise like the folks in Redmond, there was a lot of pushback. People told me to think of them more like Red Hat instead of Microsoft. While their business model does indeed track more closely with Red Hat, I think much of this pushback comes from the negative connotations we have with Windows. Microsoft has essentially been the only game in town for x86 operating systems for a very long time. People forget what it was like to run BeOS or early versions of Slackware. Microsoft had almost total domination outside the hobby market.

Cumulus doesn’t have to unseat Cisco to win. They don’t even have to displace the second or third place vendor. By signing deals with as many people as possible to bring Cumulus Linux to the masses, they will win in the long run by being the foundation for where networking will be going in the future.

Editor Note:  A previous version of this article incorrectly stated that Cumulus created ifupdown, when in fact they created ifupdown2.  Thanks to Matt Stone (@BigMStone) and Todd Craw (@ToddMCraw) for pointing this out to me.

The Packet Flow Duality


Quantum physics is a funny thing. It seeks to solve all the problems in the physical world by breaking everything down into the most basic unit possible. That works for a lot of the observable universe. But when it comes to light, quantum physics has issues. Thanks to experiments and observations, most scientists understand that light isn’t just a wave and it’s not just a collection of particles either. It’s both. This concept is fundamental to understanding how light behaves. But can it also explain how data behaves?

Moving Things Around

We tend to think about data as a series of discrete data units being pushed along a path. While these units might be frames, packets, or datagrams depending on the layer of the OSI model that you are operating at, the result is still the same. A single unit is evaluated for transmission. A brilliant post from Greg Ferro (@EtherealMind) sums up the forwarding thusly:

  • Frames being forwarded by MAC address lookup occur at layer 2 (switching)
  • Packets being forwarded by IP address lookup occur at layer 3 (routing)
  • Data being forwarded at higher levels is a stream of packets (flow forwarding)

It’s simple when you think about it. But what makes it a much deeper idea is that lookups at layers 2 and 3 require a lot more processing. Each packet must be evaluated to be properly forwarded. The forwarding device doesn’t assume that the destination is the same for a group of similar packets. Each one must be evaluated to ensure it arrives at the proper location. By focusing on the discrete nature of the data, we are forced to expend a significant amount of energy to make sense of it. As anyone who has studied basic packet switching can tell you, several tricks were invented to speed up this process. Anyone remember store-and-forward versus cut-through switching?
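You can see that per-destination behavior on any Linux box with the standard iproute2 tools. This is just an illustrative sketch; the bridge name and address are placeholders:

```bash
# Layer 2: the bridge forwarding database maps each destination MAC to a port.
bridge fdb show br br0

# Layer 3: the kernel resolves each destination IP against the routing table.
ip route get 198.51.100.10
```

Every frame or packet that transits the box triggers one of those lookups, which is why so much silicon and software effort goes into making them fast.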

Flows behave differently. They contain state. They have information that helps devices make intelligent forwarding decisions. Those decisions don’t have to be limited by destination MAC or IP addresses. They can be labels or VLANs or other pieces of identifying information. They can be anything an application uses to talk to another device, like a DNS entry. It allows us to make a single forwarding decision per flow and implement it quickly and efficiently. Think about a stateful firewall. It works because the information for a given packet stream (or flow) can be programmed into the device. The firewall is no longer examining every individual packet, but instead evaluates the entire group of packets when making decisions.

Stateful firewalls also give us a peek at how flows are processed. Rather than a CAM table or an ARP table, we have a group of rules and policies. Those policies can say “given a group of packets in a flow matching these characteristics, execute the following actions”. That’s a far cry from trying to figure out where each individual packet goes.
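For a rough feel of the difference, look at how a Linux firewall uses connection tracking. Once a flow has been evaluated, one state-matching rule covers everything else in it (a sketch, not a complete ruleset):

```bash
# Packets belonging to an already-permitted flow match on connection state
# instead of being run through the full policy again.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Only the first packet of a new flow has to run the gauntlet.
iptables -A FORWARD -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -j DROP
```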

It’s All About Scale

A single drop of water is discrete. Just like a single data packet, it represents an atomic unit of water. Taken in this measurement, a single drop of water does little good. It’s only when those drops start to form together that their usefulness becomes apparent. Think of a river or a firehose. Those groups of droplets have a vector. They can be directed somewhere to accomplish something, like putting out a fire or cutting a channel across the land.

Flows should be the atomic unit that we base our networking decisions upon. Flows don’t require complex processing on a per-unit basis. Flows carry additional information above and beyond a 48-bit hex address or a binary address representing an IP entry. Flows can be manipulated and programmed. They can have policies applied. Flows can scale to great heights. Packets and frames are forever hampered by the behaviors necessary to deliver them to the proper locations.
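Open vSwitch and OpenFlow give a taste of what programming a flow instead of a packet looks like. This is only a sketch against a hypothetical bridge; the match fields and output port are placeholders:

```bash
# One flow entry: everything matching this description gets the same
# forwarding decision without being re-evaluated packet by packet.
ovs-ofctl add-flow br0 "priority=100,tcp,nw_dst=203.0.113.0/24,tp_dst=443,actions=output:2"

# Inspect the installed flows and their hit counters.
ovs-ofctl dump-flows br0
```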

Data is simultaneously a packet and a flow. We can’t separate the two. What we can do is change our frame of reference for operations. Just like experiments with light, we must choose one aspect of the duality to work with until the other aspect is needed. Light can be treated like a wave the majority of the time. It’s only when things like the photoelectric effect happen that our frame of reference must change. In the same way, data should be treated like a flow for the majority of cases. Only when the very basic needs of packet/frame/datagram forwarding arise should we abandon our flow focus and treat it as a group of discrete packets.


Tom’s Take

The idea of data flows isn’t new. And neither is treating flows as the primary form of forwarding. That’s what OpenFlow has been doing for quite a while now. What makes this exciting is when people with new networking ideas start using the flow as the atomic unit for decisions. When you remove the need to do packet-by-packet forwarding and instead focus on the flow, you gain a huge insight into the world around the packet. It’s not much of a stretch to think that the future of networking isn’t as concerned with the switching of frames or the routing of packets. Instead, it’s the forwarding of a flow of packets that will be exciting to watch. As long as you remember that data can be both packet and flow, you will have taken your first step into a larger world of understanding.

 

More Bang For Your Budget With Whitebox


As whitebox switching starts coming to the forefront of the next buying cycle for enterprises, decision makers are naturally wondering about the advantages of buying cheaper hardware. Is a whitebox switch going to provide more value for me than buying something from an established vendor? Where are the real savings? Is whitebox really for me? One of the answers to this puzzle comes not from the savings in whitebox purchases, but the capability inherent in rapid deployment.

Ten Thousand Spoons

When users are looking at the acquisition cost advantages of buying whitebox switches, they typically don’t see what they would like to see. Ridiculously cheap hardware isn’t the norm. Instead, you see a switch that can be bought for a decent discount. That does take into account that most vendors will give substantial one-time discounts to customers to entice them into more lucrative options like advanced support or professional services.

The purchasing advantage of whitebox doesn’t just come from reduced costs. It comes from additional unit purchases. Purchasing budgets don’t typically spell out that you are allowed to buy ten switches and three firewalls. They more often state that you are allowed to spend a certain dollar amount on devices of a specific type. Savvy shoppers will find deals or discounts to get more for their dollar. The real world of purchasing budgets means that every dollar will be spent, lest the available dollars get reduced next year.

With whitebox, that purchasing power translates into additional units for the same budget amount. If I could buy three switches from Vendor X or five switches from Whitebox Vendor Y, ceteris paribus I would buy the whitebox switches. If the purpose of the purchase was to connect 144 ports, three 48-port switches would do the job, which means I have two extra switches lying around. That does seem a bit wasteful.

However, the option of having spares on the shelf becomes very appealing. Networks are supposed to be built in a way that minimizes or eliminates downtime from failure. The network must continue to run if a switch dies. But what happens to the dead switch? In most cases today, the switch must be sent in for warranty replacement. Service contracts with large networking vendors give you the option of 4-hour, overnight, or next-business-day replacements. These vendors will even cross-ship you the part. But you are still down the dead switch. If the other half of the redundant pair goes down, you are going to be dead in the water.

With an extra whitebox switch on the shelf you can have a ready replacement. Just slip it into place and let your orchestration and provisioning software do the rest. While the replacement is shipping, you still have redundancy. It also saves you from needing to buy a hugely expensive (and wildly profitable) advanced support contract.

All You Need Is A Knife

Suppose for a moment that we do have these switches sitting around on a shelf doing nothing but waiting for the inevitable failure in the network. From a cost perspective, it’s neutral. I spent the same budget either way, so an unutilized switch is costing me nothing. However, what if I could do something with that switch?

The real advantage of whitebox in this scenario comes from the ability to use non-switching OSes on the hardware. Think for a moment about something like a network packet monitor. In the past, we’ve needed to download specialized software and slip a probing device into the network just for the purposes of packet collection. What if that could be done by a switch? What if the same hardware that is forwarding packets through the network could also be used to monitor them as well?

Imagine creating an operating system that runs on top of something like ONIE for the purpose of being a network tap. Now, instead of specialized hardware for that purpose, you grab one of the switches lying on the shelf and repurpose it into a sensor. And when it has served that purpose, you put it back on the shelf and wait until there is a failure before pushing it into production as a replacement. With Chef or Puppet, you could even have the switch boot into a sensor identity for a few days and then provision it back to being a data forwarding switch afterwards. No need for messy, complicated software images or clever hacks.
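Here’s a hand-wavy sketch of the idea, assuming your provisioning tool drops a role fact on the box at boot. The ROLE variable, port name, and paths are all hypothetical:

```bash
#!/bin/bash
# Hypothetical boot hook: Chef/Puppet/Ansible sets ROLE before this runs.
ROLE=${ROLE:-switch}

case "$ROLE" in
  sensor)
    # Act as a network tap: capture off a front-panel port for later analysis.
    mkdir -p /var/captures
    tcpdump -i swp1 -w "/var/captures/swp1-$(date +%F).pcap" &
    ;;
  switch)
    # Act as a normal switch: apply the stored interface configuration.
    ifreload -a
    ;;
esac
```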

Now, extend those ideas beyond sensors. Think about generic hardware that could be repurposed for any function. A switch could boot up as an inline firewall. That firewall could be repurposed into a load balancer for the end of the quarter. It could then become a passive IDS during an attack. All without moving. The only limitation is the imagination of the people writing code for the device. It may never top the performance of a device built purely for a given function, but the flexibility of having a device that can serve multiple functions without massive reconfiguration wins out in the long run for many applications. Flexibility matters more than overwhelming performance.


Tom’s Take

Whitebox is still finding a purpose in the enterprise. It’s been embraced by the webscale players, but the value to the enterprise isn’t found in massive capabilities like that. Instead, the additional purchasing power that comes from buying more units for the same dollar amount leads to reduced support contract costs, and even new functionality from existing hardware lying around that can be made to do so many other things. Who could have imagined that a simple switch could be made to do the job of many other purpose-built devices in the data center? Isn’t it ironic, don’t you think?

 

The Atomic Weight of Policy


The OpenDaylight project put out a new element this week with their Helium release.  The second release is usually the most important, as it shows that you have a real project on your hands and not just a bunch of people coding in the back room to no avail.  Not that something like that was going to happen to ODL.  The group of people involved in the project have the force of will to change the networking world.

Helium is already having an effect on the market.  Brocade announced their Vyatta Controller last week, which is based on Helium code.  Here’s a handy video as well.  The other thing that Helium has brought forth is the ongoing debate about network policy.  And I think that little gem is going to have more weight in the long run than anything else.

The Best Policy

Helium contains group-based policies for making groups of network objects talk to each other.  It’s a crucial step to bring ODL from an engineering hobby project to a full-fledged product that can be installed by someone who isn’t a code wizard.  That’s because most of the rest of the world, including IT people, don’t speak to devices in specific terms.  They have an idea of what needs to be done and they rely on the devices to implement that idea.

Think about a firewall.  When is the last time you wrote a firewall rule by hand? Unless you are a CLI masochist, you use the GUI to craft a policy that says something like “prevent DNS queries from any device that isn’t a DNS server (referenced by this DNS Server object list)”.  We create those kinds of policies because we can’t account for every new server appearing on the network that wants to make a DNS query.  We block the ones that don’t need to be doing it, and we modify the DNS Server object list to add new servers when needed.
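For a rough feel of how that object-list style translates into something a device can enforce, here’s a sketch using iptables and ipset on Linux. The set name and addresses are placeholders:

```bash
# The "DNS Server object list" becomes a set we can edit without touching the rule.
ipset create dns-servers hash:ip
ipset add dns-servers 192.0.2.53

# One policy: drop DNS queries headed anywhere that isn't in the set.
iptables -A FORWARD -p udp --dport 53 -m set ! --match-set dns-servers dst -j DROP

# New DNS server? Update the object list, not the policy.
ipset add dns-servers 192.0.2.54
```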

Yet, in networking we’re still playing with access lists and VLAN databases.  We must manually propagate this information throughout the network when updates occur.  Because no one relies on VTP to add and prune information in the network.  The tools we’ve been using to do this work are archaic and failure-prone at best.

Staying Neutral

The inclusion of policy components of Helium will go a long way to paving the road for more of the same in future releases.  Of course, there is already talk about Cisco’s OpFlex and how it didn’t make the cut for Helium despite being one of the most dense pieces of code proposed for ODL.  It’s good that Cisco and ODL both realized that putting out code that wasn’t ready was only going to hurt in the long run.  When you’ve had seven or eight major releases, you can lay an egg with a poorly implemented feature and it won’t be the end of the world.  But if you put out a stinker in the second release, you may not make it to the third.

But this isn’t about OpFlex.  Or Congress. Or any other policy language that might be proposed for ODL in the future.  It’s about making sure that ODL is a policy-driven controller infrastructure.  Markus Nispel of Extreme talked about SDN at Wireless Field Day 7.  In that presentation, he said that he thinks the industry will standardize on ODL as the northbound interface in the network.  For those not familiar, the northbound interface is the one that is user-facing.  We interact with the northbound controller interface while the southbound controller interface programs the devices.

ODL is making sure the southbound interface is OpenFlow.  What they need to do now is ensure the northbound interface can speak policy to the users configuring it.  We’ve all heard the rhetoric of “network engineers need to learn coding” or “non-programmers will be out of a job soon”.  But the harsher reality is that while network programmers are going to be very busy people on the backend, the day-to-day operations of the network will be handled by different teams.  Those teams don’t speak IOS, Junos, or OpenFlow.  They think in policy-based thoughts.

Ops teams don’t want to know how something is going to work when implemented.  They don’t want to spend hours troubleshooting why a VLAN didn’t populate only to find they typoed the number.  They want to plug information into a policy and let the controller do the rest.  That’s what Helium has started and what ODL represents: an interface into the network for mortal Ops teams.  A way to make that work for everyone, whether it be an OpFlex interface into Cisco APIC programming ACI or a Congress interface into an NSX layer.  If you are the standard controller, you need to talk to everyone no matter their language.

Tom’s Take

ODL is going to be the controller in SDN.  There is too much behind it to let it fail.  Vendors are going to adopt it as their base standard for SDN.  They may add bells and whistles, but it will still be ODL underneath.  That means ODL needs to set the standard for network interaction.  And that means policy.  Network engineers complain about fragility in networking and how hard it is to manage, and in the very same breath say they don’t want the CLI to go away.  What they need to say is that policy gives everyone the flexibility to create robust, fault-tolerant configurations with a minimum of effort while still allowing for other interface options, like CLI and API.  If you are the standard, you can’t play favorites.  Policy is going to be the future of networking interfaces.  If you don’t believe that, you’ll find yourself quickly crushed under the weight of reality.

I Can’t Drive 25G


The race to make things just a little bit faster in the networking world has heated up in recent weeks thanks to the formation of the 25Gig Ethernet Consortium.  Arista Networks, along with Mellanox, Google, Microsoft, and Broadcom, has decided that 40Gig Ethernet is too expensive for most data center applications.  Instead, they’re offering up an alternative in the 25Gig range.

This podcast with Greg Ferro (@EtherealMind) and Andrew Conry-Murray (@Interop_Andrew) does a great job of breaking down the technical details and the reasoning behind 25Gig Ethernet.  In short, the current 10Gig connection is made of four multiplexed 2.5Gig connections.  To get to 25Gig, all you need to do is overclock those connections a little.  That’s not unprecedented, as 40Gig Ethernet accomplishes this by overclocking them to 10Gig, albeit with different optics.  Aside from earning a technical merit badge, one has to ask, “Why?”

High Hopes

As always, money is the factor here.  The 25Gig Consortium is betting that you don’t like paying a lot of money for your 40Gig optics.  They want to offer an alternative that is faster than 10Gig but cheaper than the next standard step up.  By giving you a cheaper option for things like uplinks, you gain money to spend on things.  Probably on more switches, but that’s beside the point right now.

The other thing to keep in mind, as mentioned on the Coffee Break podcast, is that the cable runs for these 25Gig connectors will likely be much shorter.  Short term, that won’t mean much.  There aren’t as many long-haul connections inside of a data center as one might think.  A short hop to the top-of-rack (ToR) switch, then another hop to the end-of-row (EoR) or core switch.  That’s really about it.  One of the arguments against 40/100Gig is that it was designed for carriers for long-haul purposes.  25Gig can give you about 60% of the speed of that link at a much lower cost.  You aren’t paying for functionality you likely won’t use.

Heavy Metal

Is this a good move?  That depends.  There aren’t any 25Gig cards for servers right now, so the obvious use for these connectors will be uplinks.  Uplinks that can only be used by switches that share 25Gig (and later 50Gig) connections.  As of today, that means you’re using Arista, Dell, or Brocade.  And that’s when the optics and switches actually start shipping.  I assume that existing switching lines will be able to retrofit with firmware upgrades to support the links, but that’s anyone’s guess right now.

If Mellanox and Broadcom do eventually start shipping cards to upgrade existing server hardware to 25Gig then you’ll have to ask yourself if you want to pursue the upgrade costs to drive that little extra bit of speed out of the servers.  Are you pushing the 10Gig links in your servers today?  Are they the limiting factor in your data center?  And will upgrading your servers to support twice the bandwidth per network connection help alleviate your bottlenecks? Or will they just move to the uplinks on the switches?  It’s a quandary that you have to investigate.  And that takes time and effort.


 

Tom’s Take

The very first thing I ever tweeted (4 years ago):

We’ve come a long way from ratified standards to deployment of 40Gig and 100Gig.  Uplinks in crowded data centers are going to 40Gig.  I’ve seen a 100Gig optic in the wild running a research network.  It’s interesting to see that there is now a push to get to a marginally faster connection method with 25Gig.  It reminds me of all the competing 100Mbit standards back in the day.  Every standard was close but not quite the same.  I feel that 25Gig will get some adoption in the market.  So now we’ll have to choose from 10Gig, 40Gig, or something in between to connect servers and uplinks.  It will either get sent to the standards body for ratification or die on the vine with no adoption at all.  Time will tell.

 

CCIE Version 5: Out With The Old

Cisco announced this week that they are upgrading the venerable CCIE certification to version five.  It’s been about three years since Cisco last refreshed the exam and several thousand people have gotten their digits.  However, technology marches on.  Cisco talked to several subject matter experts (SMEs) and decided that some changes were in order.  Here are a few of the ones that I found the most interesting.

CCIEv5 Lab Schedule

Time Is On My Side

The v5 lab exam has two pacing changes that reflect reality a bit better.  The first is the ability to take some extra time on the troubleshooting section.  One of my biggest peeves about the TS section was the hard 2-hour time limit.  One of my failing attempts had me right on the verge of solving an issue when the time limit slammed shut on me.  If I only had five more minutes, I could have solved that problem.  Now, I can take those five minutes.

The TS section has an available 30-minute overflow window that can be used to extend your time.  Be aware that time has to come from somewhere, since the overall exam is still eight hours.  You’re borrowing time from the configuration section.  Be sure you aren’t doing yourself a disservice at the beginning.  In many cases, candidates know the lab config cold.  It’s the troubleshooting they need a little more time with.  This is a welcome change in my eyes.

Diagnostics

The biggest addition is the new 30-minute Diagnostic section.  Rather than focusing on problem solving, this section is more about problem determination.  There’s no CLI.  Only a set of artifacts from a system with a problem: emails, log files, etc.  The idea is that the CCIE candidate should be an expert at figuring out what is wrong, not just how to fix it.  This is more in line with the troubleshooting sections in the Voice and Security labs.  Parsing log files for errors is a much larger part of my time than implementing routing.  Teaching candidates what to look for should mean fewer newly minted CCIEs who can’t diagnose issues in front of customers.

Some are wondering if the Diagnostic section is going to be the new “weed out” addition, like the Open Ended Questions (OEQs) from v3 and early v4.  I see the Diagnostic section as an attempt to temper the CCIE with more real world needs.  While the exam has never been a test of ideal design, knowing how to fix a non-ideal design when problems occur is important.  Knowing how to find out what’s screwed up is the first step.  It’s high time people learned how to do that.

Be Careful What You Wish For

The CCIE v5 is seeing a lot of technology changes.  The written exam is getting a new section, Network Principles.  This serves to refocus candidates away from Cisco-specific solutions and more toward making sure they are experts in networking.  There’s a lot of opportunity to reinforce networking fundamentals here rather than idle trivia about config minimums and maximums.  Let’s hope this pays off.

The content of the written is also being updated.  Cisco is going to make sure candidates know the difference between IOS and IOS XE.  Cisco Express Forwarding is going to get a focus, as is ISIS (again).  Given that ISIS is important in TRILL, this could be an indication of where FabricPath development is headed.  The written is also getting more IPv6 topics.  I’ll cover IPv6 in just a bit.

The biggest change in content is the complete removal of Frame Relay.  It’s been banished to the same pile as ATM and ISDN.  No written, no lab.  In its place, we get Dynamic Multipoint VPN (DMVPN).  I’ve talked about why Frame Relay is on the lab before.  People still complained about it.  Now, you get your wish.  DMVPN with OSPF serves the same purpose as Frame Relay with OSPF.  It’s all about Stupid Router Tricks.  Using OSPF with DMVPN requires the use of mGRE, which is a Non-Broadcast Multi-Access (NBMA) network.  Just like Frame Relay.  The fact that almost every guide today recommends you use EIGRP with DMVPN should tell you how hard it is to do.  And now you’re forced to use OSPF to simulate NBMA instead of Frame Relay.  Hope all you candidates are happy now.

vCCIE

The lab is also 100% virtual now.  No physical equipment in either the TS or lab config sections.  This is a big change.  Cisco wants to reduce the amount of equipment that needs to be physically present to build a lab.  They also want to be able to offer the lab in more places than San Jose and RTP.  Now, with everything being software, they could offer the lab at any secured PearsonVUE testing center.  They’ve tried this in the past, but the access requirements caused problems.  Now, it’s all delivered in a browser window.  This will make remote labs possible.  I can see a huge expansion of the testing sites around the time of the launch.

This also means that hardware-specific questions are out, like layer 2 QoS on switches.  The last reason to have a physical switch (WRR and SRR queueing) is gone.  Now, all you are going to get quizzed on is software functionality.  Which probably means the loss of a few easy points.  With the removal of Frame Relay and L2 QoS, I bet the services section of the lab is going to be really fun now.

IPv6 Is Real

Now, for my favorite part.  The JNCIE has had a robust IPv6 section for years.  All routing protocols need to be configured for IPv4 and IPv6.  The CCIE has always had a separate IPv6 section.  Not anymore.  Going forward in version 5, all routing tasks will be configured for v4 and v6.  Given that RIPng has been retired to the written exam only (finally), it’s a safe bet that you’re going to love working with OSPFv3 and EIGRP for IPv6.

I think it’s great that Cisco has finally caught up to the reality of the world.  If CCIEs are well versed in IPv6, we should start seeing adoption numbers rise significantly.  Ensuring that engineers know to configure v4 and v6 simultaneously means dual stack is going to be the preferred transition method.  The only IPv6-related thing that worries me is the inclusion of an item on the written exam: IPv6 Network Address Translation.  You all know I’m a huge fan of NAT.  Especially NAT66, which is what I’ve been told will be the tested knowledge.

Um, why?!? 

You’ve relegated RIPng to the trivia section.  You collapsed multicast into the main routing portions.  You’re moving forward with IPv6 and making it a critical topic on the test.  And now you’re dredging up NAT?!? We don’t NAT IPv6.  Especially to another IPv6 address.  Unique Local Addresses (ULA) are about the only use case I could see for NAT66.  Ed Horley (@EHorley) thinks it’s a bad idea.  Ivan Pepelnjak (@IOSHints) doesn’t think fondly of it either, but admits it may have a use in SMBs.  And you want CCIEs and enterprise network engineers to understand it?  Why not use LISP instead?  Or maybe a better network design for enterprises that doesn’t need NAT66?  Next time you need an IPv6 SME to tell you how bad this idea is, call me.  I’ve got a list of people.


Tom’s Take

I’m glad to see the CCIE update.  Getting rid of Frame Relay and adding more IPv6 is a great thing.  I’m curious to see how the Diagnostic section will play out.  The flexible time for the TS section is way overdue.  The CCIE v5 looks to be pretty solid on paper.  People are going to start complaining about DMVPN.  Or the lack of SDN-related content.  Or the fact that EIGRP is still tested.  But overall, this update should carry the CCIE far enough into the future that we’ll see CCIE 60,000 before it’s refreshed again.

More CCIE v5 Coverage:

Bob McCouch (@BobMcCouch) – Some Thoughts on CCIE R&S v5

Anthony Burke (@Pandom_) – Cisco CCIE v5

Daniel Dib (@DanielDibSWE) – RS v5 – My Thoughts

INE – CCIE R&S Version 5 Updates Now Official

IPExpert – The CCIE Routing and Switching (R&S) 5.0 Lab Is FINALLY Here!

Will Dell Buy Aerohive?


One rumor I keep hearing about in the industry involves a certain buzzing wireless vendor and the world’s largest startup.  Acquisitions happen all the time.  Rumors of them are even more frequent.  But the more I thought about it, the more I realized this may be good for everyone.

Dell wants to own the stack from top to bottom.  In the past, they have had to partner with printer companies (Lexmark) and networking companies (Brocade and Juniper) to deliver parts of the infrastructure they couldn’t provide themselves.  In the case of printers, Dell found a way to build them on their own.  That reduced their reliance on Lexmark.  In the networking world, Dell shocked everyone by going outside their OEM relationship and buying Force10.  I’ve talked before about why the Force10 pickup was a better deal in the long run than Brocade.

Dell’s Desires

Dell needs specific pieces of the puzzle.  They don’t want to be encumbered with ancillary products that will need to be jettisoned later.  Buying Brocade would have required unwinding a huge Fibre Channel business.  In much the same way, I don’t think Dell will end up buying their current wireless OEM, Aruba Networks.  Aruba has decided to branch out past doing simple wireless and has moved into wired network switches as well as security and identity management products like ClearPass.  Dell doesn’t want any of that.  They already have an issue integrating the Force10 networking expertise into the PowerConnect line.  I’ve been told in the past that FTOS will eventually come to PowerConnect, but that has yet to happen.  Integrating purchased companies isn’t easy.  It becomes exponentially harder the more product lines you have to integrate.

Aruba is too expensive for Dell to buy outright.  Michael Dell spent a huge chunk of his cash to get his company back from the shareholders.  He’s going to put it on a diet pretty soon.  I would expect to see a few product lines slimmed down or outright dropped.  That makes it tough to justify buying so much from another company.  Dell needs a scalpel, not a sledgehammer.

Aerohive’s Aspirations

Aerohive is the best target for Dell.  They are clearly fighting for third place in the wireless market behind Cisco and Aruba.  Aerohive has never been shy about punching above their weight.  They have the mentality of a scrappy terrier that won’t go down without a fight.  But they are getting pressure to expand quickly across their product lines.  They took their time releasing an 802.11ac access point.  Their switching offering hasn’t caught on in the same way as that of Aruba or Meraki (now a division of Cisco).

Aerohive is on the verge of going public.  I’m sure the infusion of cash would allow them to pay off some early investors as well as fund more development for 802.11ac Phase 2 gear and maybe a firewall offering.  The risk comes when you look at what happened to Ruckus Wireless shortly after their IPO.  While they did recover, it didn’t look very good for a company that supposedly did have a unique claim: their antenna design.  Aerohive is a cloud management platform like many others in the market.  You have to wonder how investors would view them.  Scrappy doesn’t sell stock.

Aerohive is now fighting in the new Gartner “Wired and Wireless Access” magic quadrant, which is an absolute disaster for everyone.  An analyst firm thinks that wireless is just like wired, so naturally it makes sense for AP vendors to start making switches, right?  Except the people who are really brilliant when it comes to wireless, like Matthew Gast and Victor Shtrom, couldn’t care less about bits on copper.  They’ve spent the better part of their careers solving the RF problems of the world.  And now someone tells them that interference problems aren’t that much different from spanning tree?  I would have long since planted my head permanently onto my desk if I’d been told that in their position.

Aerohive gains a huge backer in the fight if Dell acquires them.  They get the name to go up against Cisco/Meraki.  They gain R&D from Dell with expertise around cloud management.  They can start developing integration between HiveManager and Dell’s extensive SMB product line.  Switch supply becomes a thing of the past.  Their entire software offering fits well with what Dell is trying to accomplish around device independence for customers.

Tom’s Take

I don’t put much stock in random rumors.  But I’ve heard this one come up enough to make me ask some tough questions.  There are people in both camps that think it will happen sometime in 2014.  Dell has to get the books sorted out and figure out who’s in charge of buying things.  Aerohive has to see if there’s enough juice left in the market to IPO and not look foolish.  Maybe Dell needs to run the numbers and find out what it would take to cash out Aerohive’s investors and add the company to the growing Empire of Round Rock.  A little buzz for the World’s Largest Startup couldn’t hurt.