Linux and the Quest for Underlays


I’m at the OpenStack Summit this week and there’s a lot of talk about building stacks and offering everything needed to get your organization ready for a shift toward service provider models. It’s a far cry from the battles over software networking and hardware dominance that I’m so used to seeing in my space. But one thing came to mind that made me think a little harder about architecture and how foundations are important.

Brick By Brick

The foundation for the modern cloud doesn’t live in fancy orchestration software or data modeling. It doesn’t exist because a retailer built a self-service system or a search engine giant decided to build a cloud lab. The real reason we have a growing market for cloud providers today is Linux. Linux underpins so much technology today that it has become nothing short of ubiquitous. Servers are built on it. Mobile operating systems use it. But no one knows that’s what they’re using. It’s all just something running under the surface, enabling the applications on top.

Linux is the vodka of operating systems. It can run in a stripped-down manner on a variety of systems and leave very little trace behind. BSD is similar in this regard but doesn’t have the same driver support from manufacturers or the ability to be stripped down to the core kernel and a few modules. Linux gives vendors and operators the flexibility to create a software environment that boots and gets basic hardware working. The rest is up to the creativity of the people writing the applications on top.

Linux is the perfect underlay. It’s a foundation that is built upon without getting in the way of things running above it. It gives you predictable performance and a familiar environment. That’s one of the reasons why Cumulus Networks and Dell have embraced Linux as a way to create switch operating systems that get out of the way of packet processing and let you build on top of them as your needs grow and change.

Break The Walls Down

The key to building a good environment is a solid underlay, whether it be in systems or in networking. With reliable transport and operations taken care of, amazing things can be built. But that doesn’t mean you need to build a silo around your particular area of the organization.

The shift to clouds and stacks and “new” forms of IT management isn’t going to happen if someone has built up a massive blockade. It will happen when you build a system that has common parts and themes and allows tools to work easily on multiple parts of the infrastructure.

That’s what’s made Linux such a lightning rod. If your monitoring tools can monitor servers, SANs, and switches with little to no modification you can concentrate your time on building on those pieces instead of writing and rewriting software to get you back to where you started in the first place. That’s how systems can be extensible and handle changes quickly and efficiently. That’s how you build a platform for other things.
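That uniformity is concrete: every Linux-based device, whether a server or a Linux-powered switch, exposes the same interfaces, so one monitoring script can poll them all. A minimal sketch parsing `/proc/net/dev` (the interface names and counter values below are invented for illustration):

```python
# Parse /proc/net/dev, which has the same format on a Linux server,
# a Linux-based switch, or any other Linux device.

def parse_net_dev(text):
    """Return {interface: (rx_bytes, tx_bytes)} from /proc/net/dev text."""
    stats = {}
    for line in text.splitlines()[2:]:        # skip the two header lines
        iface, _, counters = line.partition(":")
        fields = counters.split()
        # Field 0 is rx_bytes, field 8 is tx_bytes per the kernel docs.
        stats[iface.strip()] = (int(fields[0]), int(fields[8]))
    return stats

# Sample data: a server NIC (eth0) and a switch port (swp1) look identical.
sample = """Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  eth0: 1000000  2000    0    0    0     0          0         0   500000    1500    0    0    0     0       0          0
  swp1: 42000000 9000    0    0    0     0          0         0  7000000    4000    0    0    0     0       0          0"""

print(parse_net_dev(sample))
```

The same handful of lines works against any target that speaks Linux, which is exactly the kind of reuse the paragraph above describes.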


Tom’s Take

I like building Lego sets. But I really like building them with the old-fashioned basic bricks, not the fancy new ones from licensed sets, because the old bricks were only limited by your creativity. You could move them around and put them anywhere because they were all the same. You could build amazing things with the right basic pieces.

Clouds and stacks aren’t all that dissimilar. We need to focus on building underlays of networking and compute systems with the same kinds of basic blocks if we ever hope to have something that we can build upon for the future. You may not be able to influence the design of systems at the most basic level when it comes to vendors and suppliers, but you can vote with your dollars to back the solutions that give you the flexibility to get your job done. I can promise you that when the revenue from proprietary, non-open underlay technologies goes down, the suppliers will start asking you the questions you need to answer for them.

The Tortoise and the Austin Hare


Dell announced today the release of their newest network operating system, OS10 (note the lack of an X). This is an OS slated to build on the success Dell has had selling third-party solutions from vendors like Cumulus Networks and Big Switch. OS10’s base will be built on an unmodified Debian distro, with a “premium” feature set that adds layer 2 and layer 3 functionality. The aim of having a fully open-source base OS in the networking space is lofty indeed, but the bigger question to me is “what happens to Cumulus?”

Storm Clouds

As of right this moment, before the release of Dell OS10, the only way to buy Linux on a Dell switch is to purchase it with Cumulus. In the coming months, Dell will begin to phase in OS10 as an option in parallel with Cumulus. This is especially attractive to large environments that are running custom-built networking today. If your enterprise is running Quagga or sFlow or some other software that has been tweaked to meet your unique needs, you don’t really need a ton of features wrapped in an OS with a CLI you will barely use.
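Shops like that typically template their own configs rather than drive a vendor CLI. A hypothetical sketch of the kind of homegrown tooling involved, rendering a minimal Quagga `bgpd.conf` (the ASNs, router ID, and neighbor address are all invented):

```python
# Render a minimal Quagga bgpd.conf from plain data -- the sort of
# custom tooling an enterprise running its own routing stack might use.
# All values here are invented for illustration.

def render_bgpd_conf(local_asn, router_id, neighbors):
    """neighbors: list of (peer_ip, peer_asn) tuples."""
    lines = [
        "hostname bgpd",
        f"router bgp {local_asn}",
        f" bgp router-id {router_id}",
    ]
    for peer_ip, peer_asn in neighbors:
        lines.append(f" neighbor {peer_ip} remote-as {peer_asn}")
    return "\n".join(lines) + "\n"

conf = render_bgpd_conf(64512, "10.0.0.1", [("10.0.0.2", 64513)])
print(conf)
```

On a bare Linux switch image, dropping a generated file like this into place and restarting the daemon is the whole workflow; no vendor CLI ever enters the picture.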

So why introduce an OS that directly competes with your partners? Why upset that apple cart? It comes down to licenses. Every time someone buys a Dell data center switch, they have to pick an OS to run on it. You can’t just order naked hardware and install your own custom kernel and apps. You have to order some kind of software. When you look at the drop-down menu today you can choose from FTOS, Cumulus, or Big Switch. For the kinds of environments that are going to erase and reload anyway the choice is pointless. It boils down to the cheapest option. But what about those customers that choose Cumulus because it’s Linux?

Customers want Linux because they can customize it to their heart’s content. They need access to the switching hardware and other system components. So long as the interface is somewhat familiar they don’t really care what engine is driving it. But every time a customer orders a switch today with Cumulus as the OS option, Dell has to pay Cumulus for that software license. It costs Dell to ship Linux on a switch that isn’t made by Dell.

OS10 erases that fee. By ordering a base image that can only boot and address hardware, Dell puts a clean box in the hands of developers that are going to be hacking the system anyway. When the new feature sets are released later in the year that increase the functionality of OS10, you will likely see more customers beginning to experiment with running Linux development environments. You’ll also see Dell beginning to embrace a model that loads features on a switch as software modules instead of purpose-built appliances.

Silver Lining

Dell’s future is in Linux. Rebuilding their OS from the ground up to utilize Linux only makes sense given industry trends. Junos, EOS, and other OSes from upstarts like Pluribus and Big Switch are all based on Linux or BSD. Reinventing the wheel makes very little sense there. But utilizing the Switch Abstraction Interface (SAI) developed for OpenCompute gives them an edge: they can focus on northbound feature development while leaving the gory details of addressing hardware to the abstraction layer below.

Dell isn’t going to cannibalize their Cumulus partnership immediately. There are still a large number of shops running Cumulus that are going to want support from their vendor of choice in the coming months. Also, there are a large number of Dell customers that aren’t ready to disaggregate hardware from software radically. Those customers will require some monitoring, as they are likely to buy the cheapest option as opposed to the best fit and wind up with a switch that will boot and do little else to solve network problems.

In the long term, Cumulus will continue to be a fit for Dell as long as OS10 isn’t ported to the Campus LAN. Once that occurs, you will likely see a distancing of these two partners as Dell embraces their own Linux OS options and Cumulus moves on to focus on using whitebox hardware instead of bundling themselves with existing vendors. Once the support contracts expire on the Cumulus systems supported by Dell, I would expect to see a professional services offering to help those users of Cumulus-on-Dell migrate to a “truly open and unmodified kernel”.


Tom’s Take

Dell is making strides in opening up their networking with Linux and open source components. Juniper has been doing it forever, and HP recently jumped into the fray with OpenSwitch. Making yourself open doesn’t solve your problems or conjure customers out of thin air. But it does give you a story to tell about your goals and your direction. Dell needs to keep their Cumulus partnerships going forward until they can achieve feature parity with the OS that currently runs on their data center switches. After that happens and the migration plans are in place, expect to see a bit of jousting between the two partners about which approach is best. Time will tell who wins that argument.

Linux Lost The Battle But Won The War

I can still remember my first experience with Linux.  I was an intern at IBM in 2001 and downloaded the IBM Linux Client for e-Business onto a 3.5″ floppy and set about installing it to a test machine in my cubicle.  It was based on Red Hat 6.1.  I had lots of fun recompiling kernels, testing broken applications (thanks Lotus Notes), and trying to get basic hardware working (thanks deCSS).  I couldn’t help but think at the time that there was great potential in the software.

I’ve played with Linux on and off for the last twelve years.  SuSE, Novell, Ubuntu, Gentoo, Slackware, and countless other distros too obscure to rank on Google.  Each of them met needs the others didn’t.  Each tried to unseat Microsoft Windows as the predominant desktop OS.  Despite a range of options and configurability, they never quite hit the mark.  I think every year since 2005 has been the “Year of Desktop Linux”.  Yet year after year I see more Windows laptops out there and very few being offered with Linux installed from the factory.  It seems as though Linux might not ever reach the point of taking over the desktop.  Then I saw a chart that forced me to look at the battle in a new perspective:

[Chart: Android dominance]

Consider that Android is based on kernel version 3.4 with some Google modifications.  That means it runs Linux under the hood, even if the interface doesn’t look anything like KDE or GNOME.  And it’s running on millions of devices out there.  Phones and tablets in the hands of consumers worldwide.  Linux doesn’t need to win the desktop battle any more.  It’s already ahead in the war for computing dominance.
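You can see this for yourself: running `adb shell uname -r` against an Android device returns an ordinary Linux kernel version string. A small sketch of parsing one (the sample string below is made up, not from a real device):

```python
import re

# Android kernel strings look like any other Linux kernel's `uname -r`
# output, e.g. "3.4.67-g8a3d4f2" (this sample value is invented).

def kernel_version(uname_r):
    """Extract (major, minor, patch) from a `uname -r` string."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", uname_r)
    if not m:
        raise ValueError(f"unrecognized kernel string: {uname_r!r}")
    return tuple(int(x) for x in m.groups())

print(kernel_version("3.4.67-g8a3d4f2"))   # -> (3, 4, 67)
```

The 3.4 major/minor pair is the same mainline kernel series that was shipping on servers at the time, which is the whole point: the phone in your pocket is a Linux box.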

It happened not because Linux was a clearly superior alternative to Windows-based computing.  It didn’t happen because users finally got fed up with horrible “every other version” nonsense from Redmond.  It happened because Linux offered something Windows has never been able to give developers – flexibility.

I’ve said more than once that the inherent flexibility of Linux could be considered a detriment to desktop dominance.  If you don’t like your window manager you can trade it out.  Swap GNOME for xfce or KDE if you prefer something different.  You can trade filesystems if you want.  You can pull out pieces of just about everything whenever you desire, even the kernel.  Without the mantra of forcing the user to accept what’s offered, people not only swap around at the drop of a hat but are also free to spin their own distro whenever they want.  As of this writing, Ubuntu has 72 distinct projects based on the core distro.  Is it any wonder that people have a hard time figuring out what to install?

Android, on the other hand, has minimal flexibility when it comes to the OS.  Google lets the carriers put their own UI customizations in place, and the hacker community has spun some very interesting builds of their own.  But the rank and file mobile device user isn’t going to go out and hack their way to OS nirvana.  They take what’s offered and use it in their daily computing lives.  Android’s development flexibility means it can be installed on a variety of hardware, from low end mobile phones to high end tablets.  Microsoft has much more stringent rules for hardware running their mobile OS.  Android’s licensing model is also a bit more friendly (it’s hard to beat free).

If the market is really driving toward a model of mobile devices replacing larger desktop computing, then Android may have given Linux the lead that it needs in the war for computing dominance.  Linux is already the choice for appliance computing.  Virtualization hypervisors other than Hyper-V are either Linux under the hood or owe much of their success to Linux.  Mobile devices are dominated by Linux.  Analysts were so focused on how Linux was a subpar performer when it came to workstation mindshare that they forgot to see that the other fronts in the battle were being quietly lost by Microsoft.


Tom’s Take

I’m not going to jump right out there and say that Linux is going to take over the desktop any time soon.  It doesn’t have to.  With the backing of Google and Android, it can quietly keep right on replacing desktop machines as they die off and mobile devices start replicating that functionality.  While I spend time on my old desktop PC now, it’s mostly for game playing.  The other functions that I use computers for, like email and web surfing, are slowly being replaced by mobile devices.  Whether or not you realize it, Linux and *BSD make up a large majority of the devices that people use in everyday computing.  The hearts and minds of the people were won by Linux without unseating the king of the desktop.  All that remains is to see how Microsoft chooses to act.  With a lead like the one Android has already in the mobile market, the war might be over before we know it.