The Light On The Fiber Mountain


Fabric switching systems have been a popular solution for many companies in the past few years. Juniper has QFabric and Brocade has VCS. For those not invested in fabrics, the trend has been to collapse the traditional three tier network model down into a spine-leaf architecture to optimize east-west traffic flows. One must wonder how much more optimized that solution can be. As it turns out, there is a bit more that can be coaxed out of it.

Shine A Light On Me

During Interop, I had a chance to speak with the folks over at Fiber Mountain (@FiberMountain) about what they’ve been up to in their solution space. I had heard about their revolutionary SDN offering for fiber. At first, I was a bit doubtful. SDN gets slapped onto a lot of new technology as a way to sell it to people who buy buzzwords. I wondered how a fiber networking solution could even take advantage of software.

My chat with M. H. Raza started out with a prop. He showed me one of the new Multi-fiber Push On (MPO) connectors that represent the new wave of high-density fiber. Each cable, roughly the size and shape of a SATA cable, carries 12 or 24 fibers, all pre-terminated in a small, standardized connector. That connector can plug into a server network card and provide several light paths to the host. This connector and the fibers it terminates are the building blocks of Fiber Mountain’s solution.

With so many fibers running to a server, Fiber Mountain can use their software intelligence to start doing interesting things. They can build dedicated traffic lanes for applications and other traffic by isolating that traffic onto fibers already terminated on the server. The connectivity already exists on the server; Fiber Mountain just takes advantage of it. It feels very similar to the way we add extra gigabit network ports when we need to expand things like VMkernel ports or dedicated traffic lanes for other data.

Quilting Circle

Where this solution starts looking more like a fabric is when you put Fiber Mountain Optical Exchange devices in the middle. These switching devices act as aggregation points in the “spine” of the network, aggregating fibers from top-of-rack switches or from individual servers. Each exchange tags every incoming fiber and adds it to the Alpine Orchestration System (AOS), which keeps track of the connections just like the interconnections in a fabric.

Once AOS knows about all the connections in the system, you can use it to start building pathways for east-west traffic flows. You can ensure that traffic between a web server and a backend database has dedicated connectivity. You can add resources between systems that are currently engaged in heavy processing. You can also dedicate traffic lanes to backup jobs. You can do quite a bit from the AOS console.
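To make that workflow concrete, here’s a rough sketch of what programming a light path through a controller like AOS could look like. Fiber Mountain didn’t show me their API, so the endpoint, field names, and port labels below are entirely my own assumptions; the point is simply that a path boils down to two tagged fiber terminations and a request to cross-connect them.

```python
# Hypothetical sketch only -- not Fiber Mountain's actual API.
# Assumes a REST-style AOS controller with made-up resource names.
import requests

AOS_URL = "https://aos.example.com/api/v1"     # assumed controller address
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme


def build_light_path(src_port, dst_port, lanes=1, label=None):
    """Ask the controller to cross-connect fibers between two tagged ports."""
    payload = {
        "source": src_port,        # e.g. "rack12-web01:mpo1:fiber2"
        "destination": dst_port,   # e.g. "rack04-db01:mpo1:fiber2"
        "lanes": lanes,            # more fibers = more dedicated bandwidth
        "label": label or "unnamed-path",
    }
    resp = requests.post(f"{AOS_URL}/light-paths", json=payload,
                         headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["path_id"]


# Example: a dedicated lane between a web tier and its database backend
path_id = build_light_path("rack12-web01:mpo1:fiber2",
                           "rack04-db01:mpo1:fiber2",
                           lanes=2, label="web-to-db")
print(f"Provisioned layer 1 path {path_id}")
```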

Now you have a layer 1 switching fabric without any additional pieces in the middle. The exchanges function almost like passthrough devices; the brains of the system live in AOS. Remember when Ivan Pepelnjak (@IOSHints) spent all his time pulling QFabric apart to find out what made it tick? The Fiber Mountain solution doesn’t use BGP or MPLS or any other magic protocol sauce. It runs at layer 1. The light paths are programmed by AOS and the packets are switched across the dense fiber connections. It’s almost elegant in its simplicity.

Future Illumination

The Fiber Mountain solution has some great promise. Today, most of the operations of the system require manual intervention. You must build out the light paths between servers based on educated guesses. You must manually add additional light paths when extra bandwidth is needed.

Where they can really improve the offering in the future is by adding intelligence to AOS so it can make those decisions automatically, based on predefined thresholds and inputs. If the system can detect big “elephant” traffic flows and automatically provision more bandwidth, or isolate those high-volume packet generators, it will go a long way toward making things much easier on network admins. It would also be great to interface that “top talker” data with other systems so network admins are alerted when traffic flows run hot and need additional resources.
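Purely as illustration, the control loop for that kind of automation could be as simple as the sketch below. It reuses the assumed build_light_path() helper from the earlier sketch; the thresholds, the flow record shape, and the idea of polling utilization from AOS are all invented for the example.

```python
# Hypothetical control loop -- thresholds and data shapes are invented.
# build_light_path() is the assumed helper from the earlier sketch.
from dataclasses import dataclass

UTILIZATION_THRESHOLD = 0.80     # act when a path stays this busy
ELEPHANT_BYTES = 10 * 1024**3    # flows past ~10 GB get their own lane


@dataclass
class Flow:
    src_port: str
    dst_port: str
    bytes_sent: int


def check_and_provision(paths, get_utilization, get_top_flows, alert):
    """Poll path utilization and isolate elephant flows onto new light paths."""
    for path in paths:
        if get_utilization(path) < UTILIZATION_THRESHOLD:
            continue
        for flow in get_top_flows(path):
            if flow.bytes_sent > ELEPHANT_BYTES:
                build_light_path(flow.src_port, flow.dst_port,
                                 lanes=1, label=f"elephant-{flow.src_port}")
                alert(path, flow)   # surface the "top talker" data to admins
```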


Tom’s Take

I like the Fiber Mountain solution. They’ve built a layer 1 fabric that performs similarly to the ones from Juniper and Brocade. They are taking full advantage of the resources provided by the MPO fiber connectors. By adding a new network card to a server, you can test this system without impacting other traffic flows. Fiber Mountain even told me they are looking at trial installations that bring their technology in at a lower cost, as a pilot project to show the value to decision makers.

Fiber Mountain has a great start on building a low latency fiber fabric with intelligence. I’ll be keeping a close eye on where the technology goes in the future to see how it integrates into the entire network and brings SDN features we all need in our networks.


That’s Using Your Embrane


Cisco announced their intent to acquire Embrane last week. Since they did it on April 1st, there was an initial thought that it might be a prank. But given that Cisco doesn’t really do April Fools jokes, it was quickly determined to be the real deal. More importantly, the Embrane acquisition plugs a very important hole in ACI that I have been worried about for a while.

Everybody Play Nice

Application Centric Infrastructure (ACI) is a great idea that works on the principle that Cisco can get multiple disparate systems to work together to “program” the underlying network to rapidly deploy applications and create policies that allow systems to be provisioned and reconfigured with a minimum of effort.

That’s a great idea in theory. And if you’re only working with Cisco gear, it’s an easy thing to pull off. Provided, that is, you can easily integrate the ASA operating system with IOS and NX-OS. That’s not an easy chore, and all of those business units work for the same company. Can you imagine how hard it would be to integrate with an external third party? Even one that is friendly to Cisco? What about a company that only implements the bare minimum functionality needed to make ACI operational?

ACI is predicated on the idea that all the systems in the network are going to work together to accomplish the goal of policy programming. That starts falling apart when systems are difficult to integrate or refuse to be a part of ACI. Sure, you could program around them. It wouldn’t take much to do an end run around an unruly switch or router. But what about a firewall or load balancer?

Those devices are more important to the security and scalability of an application. You can’t just cut them out. You may even have regulations that require them to sit inline with the application. That means headaches if you are forced to work with something that won’t completely integrate.

Bring Your Own Toys

Enter Embrane. Embrane’s heleos platform gives Cisco a stable of software firewalls and load balancers that can be spun up and deployed on demand. That means unruly hardware can be bypassed when necessary. If your firewall doesn’t like ACI or won’t implement the shims needed to make it play nice, all you need to do is spin up an Embrane firewall. Since Embrane was integrating with ACI even before the acquisition, you know that everything is going to work just fine.

You can also use the Embrane Elastic Services Manager (ESM) to help manage those devices and reclaim them as needed. That sounds like a no-brainer, but if you ever find yourself booting a virtual system on a cluster that has charge-back enabled, or worse, booting it on a public cloud provider and forgetting about it, you’ll find that a lifecycle manager that heads off hundreds or thousands of dollars in charges is a great idea. ESM can also help you figure out how utilized your devices are and gives you a roadmap for adding capacity when it’s needed. That way you never have to answer a phone call complaining that the new application is running “slow”.
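The lifecycle idea is easy to show in outline. I don’t have ESM’s API in front of me, so the record fields, the reclaim policy, and the decommission hook below are all assumptions; the sketch just illustrates the reclaim-what-sits-idle logic that saves you from surprise charge-back bills.

```python
# Illustrative sketch of the lifecycle idea behind a manager like ESM.
# None of this is Embrane's actual API; the instance record fields and the
# decommission callback are assumptions made to show the reclaim logic.
from datetime import datetime, timedelta

IDLE_TTL = timedelta(days=7)     # assumed policy: a week of idleness = reclaim


def reclaim_idle_services(instances, decommission, now=None):
    """Tear down idle service instances and return the names reclaimed."""
    now = now or datetime.utcnow()
    reclaimed = []
    for inst in instances:
        # inst is assumed to carry: name, last_active (datetime), in_use (bool)
        if not inst["in_use"] and now - inst["last_active"] > IDLE_TTL:
            decommission(inst["name"])   # give the capacity (and budget) back
            reclaimed.append(inst["name"])
    return reclaimed
```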


Tom’s Take

Embrane’s acquisition makes all the sense in the world. Cisco had put up a stake in the company in their last funding round. That could be seen as an initial investment to keep Embrane working down the ACI path instead of moving off onto other ideas. Now, Cisco makes good on that investment by bringing the Embrane team back in house, for a while at least. Cisco gets a braintrust that knows how to make on-demand SDN work.

It’s no shock that Embrane is going to be rolled into the INSBU that houses Insieme. These two teams are going to be working together very closely in the coming months to push the Embrane technology into the core of ACI and provide it as an offering to get potential customers off the fence and into the solution. More options for configuring policy-based networks are always a great carrot for customers. Overcoming objections about incompatible hardware makes selling the software side of ACI a no-brainer.

Does EMC Need A Network?


Network acquisitions are in the news once again. This time, the buyer is EMC. In a blog article from last week, EMC is reportedly mulling the purchase of either Brocade or Arista to add a networking component to its offerings. While Arista would be a good pickup for EMC to build a complete data center networking practice, one must ask: “Does EMC Really Need A Network?”

Hardware? For What?

The “smart money” says that EMC needs a networking piece to complete vBlock now that the EMC/Cisco divorce is in its final stages. EMC has already accelerated those plans on the server side by offering EVO:RAIL as an option for VSPEX. Yes, VSPEX isn’t a vBlock. But it’s a flexible architecture that will eventually supplant vBlock when the latter is finally put out to pasture once the relationship between Cisco and EMC is done.

EMC, as the majority partner in VCE, has every incentive to keep offering the package to customers and making truckloads of cash. But long term, it makes more sense for EMC to start offering alternatives to a Cisco-only network. There have been many, many assurances that vBlock will not be going away any time soon (almost to the level of “the lady doth protest too much, methinks“). But to me, that just means the successor to vBlock will be called something different, like nBlock or eBlock.

Regardless of what the next solution is called, it will still need networking components to facilitate communication between the pieces of the system. EMC has looked at networking companies in the past, especially Juniper (again with much protesting to the contrary). It’s obvious they want a hardware solution to offer alongside Cisco for future converged systems. But do they really need to?

How About A BriteBlock?

EMC needs a network component. NSX is a great control system that EMC already owns (and is already considering for vBlocks), but as Joe Onisick (@JOnisick) is fond of pointing out, NSX doesn’t actually forward packets. So we still need something to fling bits back and forth. But why does it have to be something EMC owns?

Whitebox switching is making huge strides toward being a data center solution. Cumulus, Pluribus, and Big Switch have created stable platforms that offer several advantages over more traditional offerings, not the least of which is cost. The ability to customize the OS to a degree is also attractive to people that want to integrate with other systems.

Could you imagine running a Cumulus switch in a vBlock and having the network forwarding totally integrated with the management platform? Or how about running Big Switch’s Big Cloud Fabric as the backplane for a vBlock? These solutions would work with minimal effort on the part of EMC and very little tuning required by the end user. Add in the lower acquisition cost of the network hardware and you end up with a slightly healthier profit margin for EMC.
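To show why that integration isn’t far-fetched: a whitebox NOS like Cumulus Linux is just Linux underneath, so a converged-stack manager could render ordinary interface configuration and push it with existing automation tooling. This is a generic sketch of that idea, not anything EMC, VCE, or Cumulus actually ships; the port names and VLAN are made up.

```python
# Generic sketch: an orchestration layer rendering switch config for a
# Linux-based whitebox NOS. Port names, the VLAN ID, and the idea that the
# converged-stack manager owns this file are assumptions for illustration.
def render_bridge_config(ports, vlan_id):
    """Render an ifupdown2-style VLAN-aware bridge stanza for a leaf switch."""
    member_lines = "\n".join(
        f"auto {p}\niface {p}\n    bridge-access {vlan_id}\n" for p in ports
    )
    bridge = (
        "auto bridge\n"
        "iface bridge\n"
        "    bridge-vlan-aware yes\n"
        f"    bridge-ports {' '.join(ports)}\n"
        f"    bridge-vids {vlan_id}\n"
    )
    return member_lines + "\n" + bridge


# A vBlock-style manager could push this over SSH or a config-management tool.
print(render_bridge_config(["swp1", "swp2"], vlan_id=100))
```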

Is The Answer A FaceBlock?

The other solution is to use Open Compute Project switches in a vBlock offering. OCP is gaining momentum, with Cumulus and Big Switch both making big contributions at the recent 2015 OCP Summit. Add in the buzz around the Wedge switch and the new Six Pack chassis and you have the potential for significant network performance at a relative pittance.

Wedge and Six Pack are not without their challenges. Even running Cumulus Linux or Open Network Linux from Big Switch, it’s going to take some time to integrate the network OS with the vBlock architecture. NSX can alleviate some of those challenges, but it’s more a matter of time than technology. EMC is actually very good at taking nascent technology from startups and integrating it with their product lines. Doing the same with OCP networking would not be much different from their current R&D style.

Another advantage of using OCP networking comes from the effect EMC would have on the project. By having a major vendor embrace OCP as the spine of its architecture, Facebook gains the advantages of reduced component costs and increased development. Even if EMC doesn’t release their developments back into the community, they will attract more developers to the project and magnify the work being done. This benefits EMC too, as every OCP addition flows back into their own offerings.


Tom’s Take

We’re running out of big companies to buy other companies. Through consolidation and positioning, the mid-tier has grown to the point where they can’t easily be bought by anyone other than Cisco. Thanks to Aruba, HP is going to be busy with that integration until well after the company split. EMC is the last company out there that has the resources to buy someone as big as Arista or Brocade.

The question that the people at EMC need to ask themselves is: do we really need hardware? Or can we make everything work without pulling out the checkbook? Cisco will always be an option for vBlock, just not necessarily the cheapest one. EMC can find solutions that increase their margins, but it’s going to take some elbow grease and a few thinking caps to integrate whitebox or OCP-style offerings.

EMC does need a network. It just may not need to be one they own.


HP Networking – Hitting The Right Notes

HP has quietly been making waves recently with their networking strategies.  They recently showed off their technology around software defined networking (SDN) applications at Interop New York.  Here’s a video:

It would seem that HP has been doing a lot of hard work on the back end with SDN.  So why haven’t we heard about it?

Trumpet and Bugle

HP Networking hasn’t been in the news as much as Cisco and VMware as of late.  When you consider that both of those companies are pushing agendas aimed at redefining the paradigm of networking around policy and virtualization, their trumpeting makes total sense.  But even members of the League of Non-Aligned Vendors like Brocade are talking a lot about their SDN strategy, with the Vyatta Controller and OpenStack integrations.  Vendors have layers and layers of plans for the “new” networking.  But HP has actually been doing it!  Why haven’t we known until now?

HP has been content to play the role of the bugler to the trumpeters of the bigger organizations.  Rather than talking over and over again about what they are planning on doing, HP waits until they’ve actually done it to talk about it.  It’s a sound strategy.  I love making everything work first and then discussing what you’ve done rather than spending week after week, month after month, talking about a plan that may or may not come to fruition.

The issue with HP is that they need to bugle a little more often to stay afloat in the space.  The occasional announcement won’t cut it.  The breakneck pace of innovation and adoption is disrupting the ability of laggards to stay afloat.  New technologies are being supplanted by upstarts.  Docker is old news.  Now we’re talking about SocketPlane and Rocket.  You’d be forgiven if you haven’t been keeping up as a blogger or engineer.  But if you’ve missed the boat as a vendor, you’re going to have a hard time treading water.

The Tijuana Brass

How can HP solve their problem?  Technically, they need to keep doing what they’ve been doing all along.  They are making good decisions and innovating around ideas like the HP SDN App Store.  What they need to do is tell more people about it.  Get the word out.  Start some discussions around what you’re doing.  Don’t be afraid to engage.  The more you talk to people about your solutions, the more your name will come up in conversation.  You need to be loud and on-key.  Herb Alpert and the Tijuana Brass weren’t popular right away.  It took years of recording and playing before the mainstream “discovered” them and popularized their music.

HP Networking has spent considerable time building SDN infrastructure.  The fact that there are OpenFlow images for a wide variety of their existing switch infrastructure is proof they are serious about making everything fit together.  Now it’s time to tell the story.  With the impending divestiture of HP’s enterprise businesses from the consumer line, it will be far too easy to get lost in the shuffle of reorganization.  The way to prevent that is to step out and make yourself known.  Write blogs, record podcasts, and interact with the community.  Don’t be afraid to toot your own horn a little.


Disclaimer

HP invited me to attend HP Discover Barcelona as their guest.  They provided travel and lodging expenses during my time in Europe.  They did not require any blog posts or consideration for this invitation, nor were any offered on my part.  The opinions and analysis expressed herein represent my thoughts alone.

I Can’t Drive 25G


The race to make things just a little bit faster in the networking world has heated up in recent weeks thanks to the formation of the 25Gig Ethernet Consortium.  Arista Networks, along with Mellanox, Google, Microsoft, and Broadcom, has decided that 40Gig Ethernet is too expensive for most data center applications.  Instead, they’re offering up an alternative in the 25Gig range.

This podcast with Greg Ferro (@EtherealMind) and Andrew Conry-Murray (@Interop_Andrew) does a great job of breaking down the technical details and the reasoning behind 25Gig Ethernet.  In short, the current 10Gig connection is made of four multiplexed 2.5Gig connections.  To get to 25Gig, all you need to do is overclock those connections a little.  That’s not unprecedented, as 40Gig Ethernet accomplishes this by overclocking them to 10Gig, albeit with different optics.  Aside from a technical merit badge, one has to ask “Why?”
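Before getting to the money question, it helps to lay out the lane math.  The lane counts below are the standard breakdowns; the optic prices are placeholders I made up purely to show how a cost-per-gigabit comparison would work.

```python
# Back-of-the-napkin lane math.  Lane counts are the standard breakdowns;
# the dollar figures are made-up placeholders, not real list prices.
links = {
    "25G":  {"lanes": 1, "lane_gbps": 25},   # one lane clocked up to 25G
    "40G":  {"lanes": 4, "lane_gbps": 10},   # four 10G lanes, pricier optics
    "100G": {"lanes": 4, "lane_gbps": 25},   # four of the same 25G lanes
}
assumed_optic_cost = {"25G": 200, "40G": 600, "100G": 2000}  # hypothetical

for name, link in links.items():
    total_gbps = link["lanes"] * link["lane_gbps"]
    per_gbps = assumed_optic_cost[name] / total_gbps
    print(f"{name}: {link['lanes']} x {link['lane_gbps']}G = {total_gbps}G, "
          f"~${per_gbps:.0f} per Gbps at the assumed optic price")
```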

High Hopes

As always, money is the real factor here.  The 25Gig Consortium is betting that you don’t like paying a lot of money for your 40Gig optics.  They want to offer an alternative that is faster than 10Gig but cheaper than the next standard step up.  By giving you a cheaper option for things like uplinks, you free up money to spend on other things.  Probably on more switches, but that’s beside the point right now.

The other thing to keep in mind, as mentioned on the Coffee Break podcast, is that the cable runs for these 25Gig connectors will likely be much shorter.  Short term, that won’t mean much.  There aren’t as many long-haul connections inside of a data center as one might think.  A short hop to the top-of-rack (ToR) switch, then another hop to the end-of-row (EoR) or core switch.  That’s really about it.  One of the arguments against 40/100Gig is that it was designed for carriers for long-haul purposes.  25G can give you 60% of the speed of that link at a much lower cost.  You aren’t paying for functionality you likely won’t use.

Heavy Metal

Is this a good move?  That depends.  There aren’t any 25Gig cards for servers right now, so the obvious use for these connectors will be uplinks.  Uplinks that can only be used by switches that share 25Gig (and later 50Gig) connections.  As of today, that means you’re using Arista, Dell, or Brocade.  And that’s when the optics and switches actually start shipping.  I assume that existing switching lines will be able to retrofit with firmware upgrades to support the links, but that’s anyone’s guess right now.

If Mellanox and Broadcom do eventually start shipping cards to upgrade existing server hardware to 25Gig then you’ll have to ask yourself if you want to pursue the upgrade costs to drive that little extra bit of speed out of the servers.  Are you pushing the 10Gig links in your servers today?  Are they the limiting factor in your data center?  And will upgrading your servers to support twice the bandwidth per network connection help alleviate your bottlenecks? Or will they just move to the uplinks on the switches?  It’s a quandary that you have to investigate.  And that takes time and effort.



Tom’s Take

The very first thing I ever tweeted (4 years ago):

We’ve come a long way from ratified standards to deployment of 40Gig and 100Gig.  Uplinks in crowded data centers are going to 40Gig.  I’ve seen a 100Gig optic in the wild running a research network.  It’s interesting to see that there is now a push to get to a marginally faster connection method with 25Gig.  It reminds me of all the competing 100Mbit standards back in the day.  Every standard was close but not quite the same.  I feel that 25Gig will get some adoption in the market.  So now we’ll have to choose from 10Gig, 40Gig, or something in between to connect servers and uplinks.  It will either get sent to the standards body for ratification or die on the vine with no adoption at all.  Time will tell.


CCNA Data Center on vBrownBag

Sometimes when I’m writing blog posts, I forget how important it is to start off on the right foot.  For a lot of networking people just starting out, discussions about advanced SDN topics and new theories can seem overwhelming when you’re trying to figure out things like subnetting or even what a switch really is.  While I don’t write about entry level topics often, I had the good fortune recently to talk about them on the vBrownBag podcast.

For those that may not be familiar, vBrownBag is a great series that goes into depth on a number of technology topics.  Historically, vBrownBag has focused on virtualization topics.  Now, with virtual networking becoming more integrated into virtualization, the vBrownBag organizers asked me if I’d be willing to jump on and talk about the CCNA Data Center.  Of course I took the opportunity to lend my voice to what will hopefully be the start of some promising data center networking careers.

These are the two videos I recorded.  The vBrownBag is usually a one-hour show.  I somehow managed to go an hour and a half on both.  I realized there is just so much knowledge that goes into these certifications that I couldn’t cover it all even if I had six hours.

Also, in the midst of my preparation, I found a few resources that I wanted to share with the community for them to get the most out of the experience.

Chris Wahl’s CCNA DC course from Pluralsight – This is worth the time and investment for sure.  It covers DCICN in good depth, and his work with NX-OS is very handy if you’ve never seen it before.

Todd Lammle’s NX-OS Simulator – If you can’t get rack time on a real Nexus, this is pretty close to the real thing.  You should check it out even if only to get familiar with the NX-OS CLI.

NX-OS and Nexus Switching, 2nd Edition – This is more for post-grad work.  Ron Fuller (@CCIE5851) helped write the definitive guide to NX-OS.  If you are going to work on Nexus gear, you need a copy of this handy. Be sure to use the code “NETNERD” to get it for 30% off!



Tom’s Take

Never forget where you started.  The advanced topics we discuss take a lot for granted in the basic knowledge department.  Always be sure to give a little back to the community in that regard.  The network engineer you help shepherd today may end up being the one that saves your job in the future.  Take the time to show people the ropes.  Otherwise you’ll end up hanging yourself.

Building A Lego Data Center Juniper Style


I’ve been intrigued by building with Lego sets for as far back as I can remember.  I had a plastic case full of bricks that I would use to build spaceships and castles day in and day out.  I think much of that building experience paid off when I walked into the real world and started building data centers.  Racks and rails are the network engineering version of the venerable Lego brick.  Little did I know what would happen later.

Ashton Bothman (@ABothman) is a social media rock star for Juniper Networks.  She emailed me and asked me if I would like to participate in a contest to build a data center from Lego bricks.  You could imagine my response:

YES!!!!!!!!!!!!!

I like the fact that Ashton sent me a bunch of good old fashioned Lego bricks.  One of the things that has bugged me a bit since the new licensed sets came out has been the reliance on specialized pieces.  Real Lego means using the same bricks for everything, not custom-molded pieces.  Ashton did it right by me.

Here’s a few of my favorite shots of my Juniper Lego data center:

My rack setup. I even labeled some of the devices!

Ladder racks for my Lego cables. I like things clean.

Can’t have a data center without a generator. Complete with flashing lights.

The Big Red Button. EPO is a siren call for troublemakers.

The Token Unix Guy. Complete with beard and old workstation.

Storage lockers and a fire extinguisher. I didn’t have enough bricks for a halon system.

The Obligatory Logo Shot. Just for Ashton.


Tom’s Take

This was fun.  It’s also for a great cause in the end.  My son has already been eyeing this set and he helped a bit in the placement of the pirate DC admin and the lights on the server racks.  He wanted to put some ninjas in the data center when I asked him what else was needed.  Maybe he’s got a future in IT after all.


Here are some more Lego data centers from other contest participants:

Ivan Pepelnjak’s Lego Data Center

Stephen Foskett’s Datacenter History: Through The Ages in Lego

Amy Arnold’s You Built a Data Center?  Out Of A DeLorean?