Facebook Wedge 100 – The Future of the Data Center?



Facebook is back in the news again. This time, it’s because of the release of their new Wedge 100 switch into the Open Compute Project (OCP). Wedge was already making headlines when Facebook announced it two years ago. A fast, open-source 40Gig Top-of-Rack (ToR) switch was huge. Now, Facebook is letting everyone in on the fun of a faster Wedge that has been deployed into production in Facebook data centers and is also offered for sale through Edgecore Networks, a division of Accton. Accton has been leading the way in the whitebox switching market, and Wedge 100 may be one of the ways it climbs to the top.

Holy Hardware!

Wedge 100 is pretty impressive from the spec sheet. Facebook paid special attention to making sure the modules were expandable, especially for faster CPUs and special-purpose devices down the road. That’s possible because Wedge is already a highly specialized microserver. Rather than rearchitecting the guts of the whole thing, Facebook kept the CPU and the monitoring stack and just put newer, faster modules on it to ramp up to 32x100Gig connectivity.


As suspected, Facebook is using the Broadcom Tomahawk as the base connectivity in the switch, which isn’t surprising. Tomahawk is the roadmap for all vendors to get to 100Gig. It also means that the downlink connectivity for these switches could conceivably work in 25/50Gig increments. And given the enormous amount of east/west traffic that Facebook must generate, they have also created a server platform, called Yosemite, that has 100Gig links as well. Given the probable backplane there, you can imagine the data that’s getting thrown around those data centers.

That’s not all. Omar Baldonado has said that they are looking at going to 400Gig connectivity soon. That’s the kind of mind-blowing speed that you see in places like Google and Facebook. Remember that this hardware is built for a specific purpose. They don’t just have elephant flows. They have flows the size of an elephant herd. That’s why they fret about the operating temperature of optics or the rack design they want to use (standard versus Open Rack): every little change matters a thousandfold at that scale.

Software For The People

The other exciting announcement from Facebook was on the software front. Of course, FBOSS has been updated to work with Wedge 100. I found it very interesting in the press release that much of the programming in FBOSS went into interoperability with Wedge 40 and into fixing the hardware side of things. This makes some sense when you realize that Facebook didn’t need to spend a lot of time making Wedge 40 interoperate with anything, since it was a wholesale replacement. But Wedge 100 will need to coexist with Wedge 40 as the rollout happens, so making everything play nice is a huge point on the checklist.

The other software announcement that got the community talking was support for third-party operating systems running on Wedge 100. The first one up was Open Network Linux from Big Switch Networks. ONL ran on the original Wedge 40 and now runs on the Wedge 100. This means that if you’re familiar with running BSN OSes on your devices, you can drop a Wedge 100 into your spine or fabric and be ready to go.

The second exciting announcement about software comes from a new company, Apstra. Apstra announced their entry into OCP and their intent to get their Apstra Operating System (AOS) running on Wedge 100 by next year. That has a big potential impact for Apstra customers that want to deploy these switches down the road. I hope to hear more about this from Apstra during their presentation at Networking Field Day 13 next month.

Tom’s Take

Facebook is blazing a trail for fast ToR switches. They’ve got the technical chops to build what they need and release the designs to the rest of the world to be used for a variety of ideas. Granted, your data center looks nothing like Facebook’s. But the ideas they are pioneering are having an impact down the line. If Open Rack catches on, you may see different ideas in data center standardization. If the Six Pack catches on as a new chassis concept, it’s going to change spines as well.

If you want to get your hands dirty with Wedge, build a new 100Gig pod and buy one from Edgecore. The downlinks can break out into 10Gig and 25Gig links for servers, and knowing it can run ONL or Apstra AOS (eventually) gives you some familiar ground to start from. If it runs as fast as they say it does, it may be a better investment right now than waiting for Tomahawk II to come to your favorite vendor.
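If you’re sketching that pod on a whiteboard, the math scripts easily. Here’s a back-of-the-napkin Python sketch; the 24-downlink/8-uplink split is my own assumption for illustration, not a Facebook-published design:

```python
# Back-of-the-napkin math for a Wedge 100 pod. The 24/8 downlink-to-uplink
# split is an assumed design for illustration, not a published one.

TOTAL_PORTS = 32          # QSFP28 ports on the switch
UPLINK_PORTS = 8          # left at 100G toward the spine (assumed)
DOWNLINK_PORTS = TOTAL_PORTS - UPLINK_PORTS

LANES_PER_PORT = 4        # each QSFP28 port breaks out into four lanes
SERVER_GBPS = 25          # 4x25G breakout; use 10 for 4x10G cables

server_links = DOWNLINK_PORTS * LANES_PER_PORT
down_gbps = server_links * SERVER_GBPS
up_gbps = UPLINK_PORTS * 100

print(f"{server_links} server links at {SERVER_GBPS}G = {down_gbps}G down")
print(f"{UPLINK_PORTS} uplinks at 100G = {up_gbps}G up")
print(f"oversubscription: {down_gbps / up_gbps:.1f}:1")   # 3.0:1 here
```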



Tomahawk II – Performance Over Programmability


Broadcom announced a new addition to their growing family of merchant silicon today. The new Broadcom Tomahawk II is a monster. It doubles the speed of its first-generation predecessor. It has 6.4 Tbps of aggregate throughput, divided up into 256 25Gbps ports that can be combined into 128 50Gbps or even 64 100Gbps ports. That’s fast no matter how you slice it.
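Those configurations are all the same 25Gbps lanes carved up differently. A quick Python sanity check of the arithmetic:

```python
# Sanity-check the slicing: 256 lanes of 25G can be carved into 25G, 50G,
# or 100G ports, and the aggregate is 6.4 Tbps no matter how you cut it.

LANES = 256
LANE_GBPS = 25

for lanes_per_port in (1, 2, 4):
    ports = LANES // lanes_per_port
    speed = lanes_per_port * LANE_GBPS
    total_tbps = ports * speed / 1000
    print(f"{ports:3d} x {speed:3d}G ports = {total_tbps} Tbps")
# 256 x 25G, 128 x 50G, 64 x 100G -- all 6.4 Tbps
```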

Broadcom is aiming to push these switches into niches like High-Performance Computing (HPC) and massive data centers doing big data/analytics or video processing to start. The use cases for 25/50Gbps haven’t really changed. What Broadcom is delivering now is port density. I fully expect to see top-of-rack (ToR) switches running 25Gbps down to servers with new add-in cards, connected via 50Gbps uplinks to the massive new Tomahawk II switches running in a spine or end-of-row (EoR) configuration for east-west traffic distribution.

Another curious fact about the Tomahawk II is the complete lack of 40Gbps support. Granted, that support was only paid lip service in the Tomahawk I. The real focus was on shifting to 25/50Gbps instead of the weird 10/40/100Gbps split we had in Trident II. I talked about this a couple of years ago and wasn’t very high on it back then, but I didn’t know the level of apathy people had for 40Gbps uplinks. The push to 25/50Gbps has only been held up so far by the lack of availability of new NICs to enable faster speeds in servers. Now that those are starting to be produced in volume, expect 40Gbps uplinks to become a relic of the past.

A Foot In The Door

Not everyone is entirely happy about the new Broadcom Tomahawk II. I received an email today with a quote from Martin Izzard of Barefoot Networks, discussing their new Tofino platform. He said in part:

Barefoot led the way in June with the introduction of Tofino, the world’s first fully programmable switches, which also happen to be the fastest switches ever built.

It’s true that Tofino is very fast. It was the first 6.4 Tbps switch on the market, and I talked a bit about it a few months ago. But I think that Barefoot is a bit off in its assessment here and has an axe to grind.

Barefoot is pushing something special with Tofino. They are looking to create a super fast platform with programmability. P4 is not quite an FPGA and it’s not an ASIC. It’s a switch stripped to its core and rebuilt with a language all its own. That’s great if you’re a dev shop or a niche market that has to squeeze every ounce of performance out of a switch. In the world of cars, the best analogy would be to look at Tofino as a specialized sports car like a Koenigsegg Agera. It’s very fast and very stylish, but it’s purpose-built to do one thing: drive really fast on pavement and carry two passengers.

Broadcom doesn’t really care about development shops. They don’t worry about niche markets. Because those users are not their customers. Their customers are Arista, Cisco, Brocade, Juniper and others. Broadcom really is the Intel of the switching world. Their platforms power vendor offerings. Buying a basic Tomahawk II isn’t something you’re going to be able to do. Broadcom will only sell these in huge lots to companies that are building something with them. To keep the car analogy, Tomahawk II is more like the old F-body cars produced by Chevrolet that later went on to become Camaros, Firebirds, and Trans Ams. Each of these cars was distinctive and had its fans, but the chassis was the same underneath the skin.

Broadcom wants everyone to buy their silicon and use it to power the next generation of switches. Barefoot wants to sell a specialist kit that is faster than anything else on the market, provided you’re willing to put in the time to learn P4 and strip out all the bits they feel are unnecessary. Your use case determines your hardware. That hasn’t changed, nor is it likely to change any time soon.

Tom’s Take

The data center will be 25/50/100Gbps top to bottom when the next switch refresh happens. It could even be there sooner if you want to move to a pod-based architecture instead of more traditional designs. The odds are very good that you’re going to be running Tomahawk or Tomahawk II depending on which vendor you buy from. You’re probably only going to be running something special like Tofino or maybe even Cavium if you’ve got a specific workload or architecture that you need performance or programmability.

Don’t wait for the next round of hardware to come out before you have an upgrade plan. Write it now. Think about where you want to be in 4 years. Now double your requirements. Start investigating. Ask your vendor of choice what their plans are. If their plans stink, ask their competitor. Get quotes. Get ideas. Be ready for the meeting when it’s scheduled. Make sure you’re ready to work with your management to bury the hatchet, not end up with a hatchet job of a network.

DevOps and the Infrastructure Dumpster Fire


We had a rousing discussion about DevOps at Cloud Field Day this week. The delegates talked about how DevOps was totally a thing and it was the way to go. Being the infrastructure guy, I had to take a bit of umbrage at their conclusions and go on a bit of a crusade myself to defend infrastructure from the predations of developers.

Stable, Boy

DevOps folks want to talk about continuous integration and continuous deployment (CI/CD) all the time. They want the freedom to make changes as needed to increase bandwidth, provision ports, and rearrange things to fit development timelines and such. It’s great that they have their thoughts and feelings about how responsive the network should be to their whims, but the truth of infrastructure today is that it’s on the verge of collapse every day of the week.

Networking is often a “best effort” type of configuration. We monkey around with something until it works, then roll it into production and hope it holds. As we keep building patches on top of patches or try to implement new features that require something to be disabled or bypassed, we create a house of cards that collapses at the first stiff wind. It’s far too easy to cause a network to fall over because of a change in a routing table or a series of bad decisions that aren’t enough to cause chaos on their own but do so together.

Jason Nash (@TheJasonNash) said that DevOps is great because it means communication. Developers are no longer throwing things over the wall for Operations to deal with. The problem is that the boulder they historically threw over, in the form of monolithic patches that caused downtime, has been replaced by a storm of arrows blotting out the sun. Each individual change isn’t enough to cause disaster, but three hundred of them together can cause massive issues.


Networks are rarely stable. Yes, routing tables are mostly stable so long as no one starts withdrawing routes. Layer 2 networks are stable only up to a certain size. The more complexity you pile on networks, the more fragile they become. The network really is only one fat-fingered VLAN definition or VTP server-mode foul-up away from coming down around our ears. That’s not a system that can support massive automation and orchestration. Why?

The Utility of Stupid Networking

The network is a source of pain not because of finicky hardware, but because of applications and their developers. When software is written, we have to make it work. If that means reconfiguring the network to suit the application, so be it. Networking pros have been dealing with crap like this for decades. Want proof?

  1. Applications can’t talk to multiple gateways at a time on layer 2 networks. So let’s create a protocol to make two gateways operate as one, with a fake MAC address answering requests to ensure uptime. That’s how we got HSRP (sketched in code after this list).
  2. Applications can’t survive having the IP address of the server changed. Instead of using so many other good ideas, we created vMotion to keep a server on the same layer 2 network and change the MAC <-> IP binding. vMotion and the layer 2 DCI issues it has caused have kept networking in the dark for the last eight years.
  3. Applications shouldn’t need to be rewritten to work in the cloud. People want to port them as-is to save money. So cloud networking configurations are a nightmare, because we have to support protocols that shouldn’t even be in use any more for the sake of legacy application support.
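To make the first item concrete, here’s a toy Python model of the HSRP idea. The priorities and addresses are invented, and real HSRP does its election with multicast hello packets rather than a shared list:

```python
# Toy model of the HSRP idea: two real gateways, one virtual IP/MAC,
# and an election that decides who answers for it. Priorities and
# addresses are invented; real HSRP exchanges hellos over multicast
# (224.0.0.2, UDP port 1985) instead of sharing a Python list.

GROUP = 10
VIRTUAL_IP = "192.0.2.1"
VIRTUAL_MAC = f"00:00:0c:07:ac:{GROUP:02x}"   # well-known HSRPv1 MAC range

routers = [
    {"name": "gw1", "priority": 110, "alive": True},
    {"name": "gw2", "priority": 100, "alive": True},
]

def active_router(routers):
    """The highest-priority gateway still alive owns the virtual address."""
    live = [r for r in routers if r["alive"]]
    return max(live, key=lambda r: r["priority"]) if live else None

print(f"{active_router(routers)['name']} answers for {VIRTUAL_IP} ({VIRTUAL_MAC})")

routers[0]["alive"] = False   # gw1 dies and its hold timer expires
print(f"{active_router(routers)['name']} takes over; hosts never see a change")
```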

This list could go on, but all these examples point to one truth: application developers have relied upon the network to solve their problems for years. So the network is unstable because it’s being extended beyond its use case. Newer applications, like those at Netflix and Facebook, thrive in the cloud because they were written from the ground up to avoid layer 2 DCI and to operate at layer 2 only as much as strictly necessary. They solve tricky problems like multi-host communication and failover in the app instead of relying on protocols from the golden age of networking to fix things quietly behind the scenes.

The network needs to evolve past being a science project for protocols that aim to fix stupid application programming decisions. Instead, the network needs to evolve with an eye toward stability and reduced functionality to get there. Take away the ability to even try those stupid tricks, and what you’re left with is a utility that is a resource for your developers. They can use it for transport without worrying about it crashing every day from a bug in a protocol that no one has used in five years yet is still installed just in case someone accidentally turns on an old server.

Nowhere is this more apparent than in cloud networking stacks like AWS or Microsoft Azure. There, the networking is as simple as possible. The burden of advanced functionality isn’t pushed into a realm where network admins have to risk outages to fix a busted application. Instead, app developers use the networking resources in a basic way, which encourages them to think about failover and resiliency differently. It’s a brave new world!

Tom’s Take

I’ll admit that DevOps has potential. It gets the teams talking, helps analyze build processes, and creates more agile applications. But in order for DevOps to work the way it should, it’s going to need a stable platform to launch from. That means networking has to get its act together and remove the unnecessary things that can cause bad interactions. This situation was caused in part by application developers taking the easy road and pushing their problems onto the networking team of wizards. When those wizards push back and offer reduced capabilities in exchange for more uptime and fewer issues, you should start to see app developers coming around to work with the infrastructure teams to get things done. And that is the best way to avoid an embarrassing situation that involves fire.

Running Barefoot – Thoughts on Tofino and P4


The big announcement this week is that Barefoot Networks leaped out of stealth mode and announced that they’re working on a very, very fast datacenter switch. The Barefoot Tofino can do up to 6.5 Tbps of throughput. That’s a pretty significant number. But what sets the Tofino apart is that it also uses the open source P4 programming language to configure the device for everything, from forwarding packets to making routing decisions. Here’s why that may be bigger than another fast switch.

Feature Presentation

Barefoot admits in their announcement post that one of the ways they were able to drive the performance of the Tofino platform higher was to remove a lot of the accumulated cruft that has been added to switch software for the past twenty years. For Barefoot, this is mostly about pushing P4 as the software component of their switch platform and driving adoption of it in a wider market.

Let’s take a look at what this really means for you. Modern network operating systems typically fall into one of two categories. The first is the “kitchen sink” system. This OS has every possible feature you could ever want built in at runtime. Sure, you get all the packet forwarding and routing features you need. But you also carry the legacy of frame relay, private VLANs, Spanning Tree, and a host of other things that were good ideas at one time and now mean little to nothing to you.

Worse yet, kitchen sink OSes require you to upgrade in big leaps to get singular features that you need but carry a whole bunch of others you don’t want. Need routing between SVIs? That’s an Advanced Services license. Sure, you get BGP with that license too, but will you ever use that in a wiring closet? Probably not. Too bad though, because it’s built into the system image and can’t be removed. Even newer operating systems like NX-OS have the same kitchen sink inclusion mentality. The feature may not be present at boot time, but a simple command turns it on. The code is still baked into the kernel, it’s just loaded as a module instead.

On the opposite end of the scale, you have newer operating systems like OpenSwitch. The idea behind OpenSwitch is to have a purpose built system that does a few things really, really well. OpenSwitch can build a datacenter fabric very quickly and make it perform well. But if you’re looking for additional features outside of that narrow set, you’re going to be out of luck. Sure, that means you don’t need a whole bunch of useless features. But what about things like OSPF or Spanning Tree? If you decide later that you’d like to have them, you either need to put in a request to have it built into the system or hope that someone else did and that the software will soon be released to you.

We Can Rebuild It

Barefoot is taking a different tack with P4. Instead of delivering the entire OS to you in one binary image, they are allowing you to build the minimum number of pieces that you need to make it work for your applications. Unlike OpenSwitch, you don’t have to wait for other developers to build in a function that you need in order to deploy things. You drop into an IDE and write the code you need to forward packets in a specific way.
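To give a flavor of that model, here’s a rough Python sketch of the match-action pipeline P4 is built around. This is illustrative only: real P4 has its own syntax and compiles your tables down to the Tofino silicon, and none of this is Barefoot’s actual API:

```python
# Rough Python model of the match-action pipeline at the heart of P4.
# Illustrative only: real P4 has its own syntax, supports LPM/ternary
# matches, and compiles down to hardware. This is not Barefoot's API.

class Table:
    """A single match-action table keyed on one packet field."""
    def __init__(self, match_field):
        self.match_field = match_field
        self.entries = {}                  # match value -> action callable

    def add_entry(self, value, action):
        self.entries[value] = action

    def apply(self, packet):
        action = self.entries.get(packet.get(self.match_field))
        if action:
            action(packet)                 # mutate packet metadata
        return packet

def set_egress(port):
    def action(packet):
        packet["egress_port"] = port
    return action

# You define only the tables your application needs -- no legacy cruft.
ipv4_fwd = Table(match_field="dst_ip")     # exact match for simplicity
ipv4_fwd.add_entry("10.0.1.5", set_egress(7))
ipv4_fwd.add_entry("10.0.2.9", set_egress(12))

pkt = {"dst_ip": "10.0.1.5"}
print(ipv4_fwd.apply(pkt))   # {'dst_ip': '10.0.1.5', 'egress_port': 7}
```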

There are probably some people reading this post who are nodding their heads in agreement right now about this development process. That’s good for Barefoot. That means that their target audience wants functionality like this. But Barefoot isn’t for everyone. The small and medium enterprise isn’t going to jump at the chance to spend even more time programming forwarding engines in their switches. Sure, the performance profile is off the chart. But it’s also a bit like buying a pricey supercar to drive back and forth to the post office. Overkill for 98% of your needs.

Barefoot is going to do well in financial markets where speed is very important. They’re also going to sell into big development shops where the network team needs pared-down software and a forwarding chip that can blow the doors off the rest of the network for east-west traffic flow. Given that we haven’t seen a price tag on Tofino just yet, I would imagine that it’s priced well into those markets and beyond the reach of a shop that just needs two leaf nodes and a spine to connect them. But that’s exactly what needs to happen.

Tom’s Take

Barefoot isn’t going to appeal to shops that plug in a power cable and run a command to provision a switch. Barefoot will shine where people can write code that will push a switch to peak performance and do amazing things. Perhaps Barefoot will start offering code later on that gives you the ability to program basic packet forwarding into a switch or routing functions when needed without the requirement of taking hours of classes on P4. But for the initial release, keeping Tofino in the hands of dev shops is a great idea. If for no other reason than to cut down on support costs.

BGP: The Application Networking Dream


There was an interesting article last week from Fastly talking about using BGP to scale their network. This was but the latest in a long line of discussions around using BGP as a transport protocol between areas of the data center, even down to the Top-of-Rack (ToR) switch level. LinkedIn made a huge splash with it a few months ago with their Project Altair solution. Now it seems company after company is racing to implement BGP as the solution to their transport woes. And all because developers have finally pulled their heads out of the sand.

BGP Under Every Rock And Tree

BGP is a very scalable protocol. It’s used the world over to exchange routes and keep the Internet running smoothly. But it has other powers as well. It can be extended to operate in ways beyond the original specification. Unlike rigid protocols like RIP or OSPF, BGP was designed in part to be extended and expanded as needs change. IS-IS is a very similar protocol in that respect: it can be upgraded and adjusted to work with both old and new systems at the same time. Both can be extended without the need to change protocol versions midstream or introduce segmented systems that would run like ships in the night.

This isn’t the first time that someone has talked about running BGP to the ToR switch, either. Facebook mentioned it in this video almost three years ago. Back then, they were solving some interesting issues in their own data center. Now those changes from the hyperscale world are filtering into the real world. Networking teams are seeking to solve scaling issues without resorting to overlay networks or other workarounds. The desire to fix everything wrong with layer 2 has led to a revelation of sorts: the real reason BGP works so well as a replacement for layer 2 isn’t because we’ve solved some mystical networking conundrum. It’s because we finally figured out how to build applications that don’t break because of the network.

Apps As Far As The Eye Can See

The whole reason why layer 2 networks are the primary unit of data center measurement has absolutely nothing to do with VMware. VMware vMotion behaves the way that it does because legacy applications hate having their addresses changed during communications. Most networking professionals know that MAC addresses have only a tenuous association with IP addresses, which is what allows the gratuitous ARP after a vMotion to work so well. But when you try to move an application across a layer 3 boundary, it never ends well.
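That trick is simple enough to sketch on the wire. Here’s roughly what the announcement looks like, built with scapy; the addresses are invented, and this shows the classic gratuitous ARP move rather than VMware’s exact implementation:

```python
# Sketch of the gratuitous ARP announcement after a workload moves.
# Addresses are invented, sendp() needs root, and scapy must be
# installed (pip install scapy).
from scapy.all import ARP, Ether, sendp

VM_IP = "10.1.1.50"                   # the address that must not change
NEW_MAC = "00:50:56:ab:cd:ef"         # where the IP lives after the move

garp = Ether(src=NEW_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,                             # unsolicited ARP reply ("is-at")
    hwsrc=NEW_MAC, psrc=VM_IP,        # claim the IP at the new MAC
    hwdst="ff:ff:ff:ff:ff:ff", pdst=VM_IP,
)
sendp(garp, iface="eth0")             # listeners on the segment update their caches
```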

When web scale companies started building their application stacks, they quickly realized that being pinned to a particular IP address was a recipe for disaster. Even typical DNS-based load balancing only seeks to distribute requests across a series of IP addresses behind some kind of application delivery controller. With legacy apps, you can’t load balance once a particular host has resolved a DNS name to an IP address. Once the gateway of the data center resolves that IP address to a MAC address, you’re pinned to that device until something upsets the balance.

Web scale apps like those built by Netflix or Facebook don’t operate by these rules. They have been built to be resilient from inception. Web scale apps don’t wait for the Next Hop Resolution Protocol (NHRP) or kludgy load balancing mechanisms to fix their problems. They are built to do that themselves. When problems occur, the applications look around and find a way to reroute traffic. No crazy ARP tricks. No sly DNS. Just software taking care of itself.
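The difference shows up even in trivial client code. A minimal Python sketch of that mindset, with invented endpoints: the application carries its own replica list and routes around failure itself:

```python
# Minimal sketch of app-level failover: the client knows its replicas
# and routes around failure itself. Endpoints are invented.
import urllib.request

REPLICAS = [
    "http://10.0.1.10:8080/api/feed",
    "http://10.0.2.10:8080/api/feed",   # different rack, different subnet
    "http://10.0.3.10:8080/api/feed",
]

def fetch_with_failover(urls, timeout=2):
    """Try each replica in turn; no ARP tricks, no sly DNS, just retry."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:                 # connection refused, timeout, etc.
            continue
    raise RuntimeError("all replicas down")

data = fetch_with_failover(REPLICAS)
```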

The implications for network protocols are legion. If a web scale application can survive a layer 3 communications issue, then we are no longer required to keep the entire data center as a layer 2 construct. If things like anycast can be used to pin geolocations closer to content, we don’t need to worry about large failover domains. Just like Ivan Pepelnjak (@IOSHints) says in this post, you can build layer 3 failure domains that just work better.

BGP can work as your ToR strategy for route learning and path selection because you aren’t limited to forcing applications to communicate at layer 2. And other protocols that were created to fix limitations in layer 2, like TRILL or VXLAN, become an afterthought. Now, applications can talk to each other and fail back and forth as they need to without the need to worry about layer 2 doing anything other than what it was designed to do: link endpoints to devices designed to get traffic off the local network and into the wider world.
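What does that strategy look like on a ToR? Here’s a Python sketch of the per-rack numbering plan these designs use, in the spirit of what was later written up in RFC 7938. The ASNs, addresses, and FRR-style config text are all invented for illustration, not a vendor template:

```python
# Sketch of the per-rack ASN plan used in BGP-to-the-ToR designs (in the
# spirit of RFC 7938). ASNs, addresses, and the FRR-style config text
# are invented for illustration.

SPINE_ASN = 64600
SPINE_PEERS = ["172.16.0.1", "172.16.0.2"]

def tor_bgp_config(rack_id):
    asn = 65000 + rack_id                      # one private ASN per rack
    lines = [f"router bgp {asn}"]
    for peer in SPINE_PEERS:
        lines.append(f" neighbor {peer} remote-as {SPINE_ASN}")   # eBGP up
    lines.append(f" network 10.{rack_id}.0.0/24")  # advertise the rack subnet
    return "\n".join(lines)

for rack in (1, 2):
    print(tor_bgp_config(rack))
    print()
```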

Tom’s Take

One of the things that SDN has promised us is a better way to network. I believe that the promise of making things better and easier is a noble goal. But the part that has bothered me since the beginning is that we’re still trying to solve everyone’s problems with the network. We don’t rearrange the power grid every time someone builds a better electrical device. We don’t replumb the house every time we install a new sink. We find a way to make the new thing work with our old system.

That’s why the promise of using BGP as a ToR protocol is so exciting. It has very little to do with networking as we know it. Instead of trying to work miracles in the underlay, we build the best network we know how to build. And we let the developers and programmers do the rest.

The Death of TRILL


Networking has come a long way in the last few years. We’ve realized that hardware and ASICs aren’t the constant that we could rely on to make decisions in the next three to five years. We’ve thrown in with software and the quick development cycles that allow us to iterate and roll out new features weekly or even daily. But the hardware versus software battle has played out a little differently than we all expected. And the primary casualty of that battle was TRILL.

Symbiotic Relationship

Transparent Interconnection of Lots of Links (TRILL) was proposed as a solution to the complexity of spanning tree. Radia Perlman realized that her bridging loop solution wouldn’t scale in modern networks. So she worked with the IETF to solve the problem with TRILL. We also received Shortest Path Bridging (SPB) from the IEEE along the way as an alternative solution to the layer 2 issues with spanning tree. The motive was sound, but the industry has rejected the premise entirely.

Large layer 2 networks have all kinds of issues. ARP traffic, broadcast amplification, and numerous other problems plague layer 2 when it tries to scale to multiple hundreds or a few thousand nodes. The general rule of thumb is that layer 2 broadcast domains should never grow larger than 250-500 nodes, lest problems start occurring. In theory that works rather well. But in practice, we have issues at the software level.

Applications are inherently complicated. Software written in the pre-Netflix era of public cloud adoption doesn’t like it when the underlay changes. So things like IP addresses and ARP entries were assumed to be static. If those data points change you have chaos in the software. That’s why we have vMotion.

At the core, vMotion is a way for software to mitigate hardware instability. As I outlined previously, we’ve been fixing hardware with software for a while now. vMotion could ensure that applications behaved properly when they needed to be moved to a different server or even a different data center. But they also required the network to be flat to overcome limitations in things like ARP or IP. And so we went on a merry journey of making data centers as flat as possible.

The problem came when we realized that data centers could only be so flat before they collapsed in on themselves. ARP and spanning tree limited the amount of traffic in layer 2, and those limits were impossible to overcome. Loops had to be prevented, yet the simplest solution disabled the very bandwidth needed to make things run smoothly. That caused the IEEE and the IETF to come up with layer 2 solutions that used IS-IS, a CLNS-based protocol, to solve loops. And it was a great idea in theory.

The Joining

In reality, hardware can’t be spun that fast. TRILL was used as a reference platform for proprietary protocols like FabricPath and VCS. All the important things were there but they were locked into hardware that couldn’t be easily integrated into other solutions. We found ourselves solving problem after problem in hardware.

Users became fed up. They started exploring other options. They finally decided that hardware wasn’t the answer, and so they looked to software. And that’s where we started seeing the emergence of overlay networking. Protocols like VXLAN and NV-GRE emerged to tunnel layer 2 frames over layer 3 networks. As Ivan Pepelnjak is fond of saying, layer 3 transport solves all of the issues with scaling. And even the most unruly application behaves when it thinks everything is running on layer 2.

Protocols like VXLAN solved an immediate need. They removed limitations in hardware. Tunnels and fabrics used novel software approaches to solve insurmountable hardware problems. An elegant solution for a thorny problem. Now, instead of waiting for a new hardware spin to fix scaling issues, customers could deploy solutions to fix the issues inherent in hardware on their own schedule.
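It’s worth seeing how literal that tunneling is. Here’s a scapy sketch of a VXLAN packet, with invented addresses: the inner frame believes it’s on a flat segment, and the underlay only ever routes on the outer header:

```python
# A VXLAN packet built with scapy: the inner frame thinks it lives on a
# flat layer 2 segment, while the underlay only routes the outer IP
# header. Addresses and the VNI are invented.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

inner = (
    Ether(src="00:00:00:aa:00:01", dst="00:00:00:bb:00:02") /
    IP(src="192.168.10.5", dst="192.168.10.9")   # the "flat" L2 payload
)

outer = (
    Ether() /
    IP(src="10.1.1.1", dst="10.2.2.2") /   # VTEP to VTEP, plain routed IP
    UDP(sport=49152, dport=4789) /         # 4789 is the IANA VXLAN port
    VXLAN(vni=5001) /                      # 24-bit VNI: ~16M segments vs 4K VLANs
    inner
)
outer.show()                               # dump the full encapsulation
```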

This is the moment where software defined networking (SDN) took hold of the market. Not when words like automation and orchestration started being thrown about. No, SDN became a real thing when it enabled customers to solve problems without buying more physical devices.

Tom’s Take

Looking back, we realize now that building large layer 2 networks wasn’t the best idea. Layer 3 scales much better, and given the number of providers and end users running BGP to top-of-rack (ToR) switches, the industry seems to agree. It took us too long to figure out that the best solution to a problem sometimes takes a bit of thought to implement.

Virtualization is always going to be limited by the infrastructure it’s running on. Applications are only as smart as the programmer. But we’ve reached the point where developers aren’t counting on having access to layer 2 protocols that paper over stupid decision making. Instead, we have to understand that the most resilient way to fix problems is in the software, whether that’s VXLAN, NV-GRE, or a real dev team not relying on the network to solve bad design decisions.

Linux and the Quest for Underlays


I’m at the OpenStack Summit this week, and there’s a lot of talk about building stacks and offering everything needed to get your organization ready for a shift toward service provider models and such. It’s a far cry from the battles over software networking and hardware dominance that I’m so used to seeing in my space. But one thing came to mind that made me think a little harder about architecture and how foundations are important.

Brick By Brick

The foundation for the modern cloud doesn’t live in fancy orchestration software or data modeling. It’s not because a retailer built a self-service system or a search engine giant decided to build a cloud lab. The real reason we have a growing market for cloud providers today is because of Linux. Linux is the underpinning of so much technology today that it’s become nothing short of ubiquitous. Servers are built on it. Mobile operating systems use it. But no one knows that’s what they are using. It’s all just something running under the surface to enable applications to be processed on top.

Linux is the vodka of operating systems. It can run in a stripped-down manner on a variety of systems and leave very little trace behind. BSD is similar in this regard, but it doesn’t have the driver support from manufacturers or the ability to be stripped down to the core kernel with only a few modifications. Linux gives vendors and operators the flexibility to create a software environment that boots and gets basic hardware working. The rest is up to the creativity of the people writing the applications on top.

Linux is the perfect underlay. It’s a foundation that is built upon without getting in the way of things running above it. It gives you predictable performance and a familiar environment. That’s one of the reasons why Cumulus Networks and Dell have embraced Linux as a way to create switch operating systems that get out of the way of packet processing and let you build on top of them as your needs grow and change.

Break The Walls Down

The key to building a good environment is a solid underlay, whether it be in systems or in networking. With reliable transport and operations taken care of, amazing things can be built. But that doesn’t mean that you need to build a silo around your particular area of the organization.

The shift to clouds and stacks and “new” forms of IT management isn’t going to happen if someone has built up a massive blockade. It will happen when you build a system that has common parts and themes and allows tools to work easily on multiple parts of the infrastructure.

That’s what’s made Linux such a lightning rod. If your monitoring tools can watch servers, SANs, and switches with little to no modification, you can concentrate your time on building on those pieces instead of writing and rewriting software to get back to where you started in the first place. That’s how systems can be extensible and handle changes quickly and efficiently. That’s how you build a platform for other things.

Tom’s Take

I like building Lego sets. But I really like building them with the old-fashioned basic bricks, not the fancy new ones from licensed sets. The old bricks were limited only by your creativity. You could move them around and put them anywhere because they were all the same. You could build amazing things with the right basic pieces.

Clouds and stacks aren’t all that dissimilar. We need to focus on building underlays of networking and compute systems with the same kinds of basic blocks if we ever hope to have something that we can build upon for the future. You may not be able to influence the design of systems at the most basic level when it comes to vendors and suppliers, but you can vote with your dollars to back the solutions that give you the flexibility to get your job done. I can promise you that when the revenue from proprietary, non-open underlay technologies goes down, the suppliers will start asking you the questions you’ve been waiting to answer for them.