OpenFlow was on the menu for our second presentation at Network Field Day 3. We returned to the NEC offices to hear about all the new things that have come about since our first visit just six months ago. Don Clark started us off with an overview of NEC as a company. They have a pretty impressive balance sheet ($37.5 billion annually) for a company that gets very little press in the US. They make a large variety of products across all electronics lines, from monitors to storage switches and even things like projectors and digital television transmitters. But the key message to us at NFD3 revolved around the data center and OpenFlow.
According to NEC, the major problem today with data center design and operation is the silo effect. As Ethan Banks discussed recently, the various members of the modern data center (networking, storage, and servers) don't really talk to each other any longer. We exist in our own little umwelts, and the world outside doesn't exist. With the drive to converge data center operations to reduce both capital and operational expenditures, we can no longer afford to exist in solitude. NEC sees OpenFlow and programmable networking as a way to remove these silo walls and drive down costs by pushing networking intelligence into the application layer while also allowing for more centralized command and control of devices and packet flows. That's a very laudable goal indeed.
A few things stuck out to me during the presentation. First, in the video above, Ivan asks what kind of merchant silicon is powering the NEC solution. He specifically mentions the Broadcom Trident chipset that many vendors are beginning to use as their entry into merchant silicon, as seen in the Juniper QFX3500, Cisco Nexus 3000, HP 5900AF, and Arista 7050. Ivan says that the specs he's seeing on the PF5820 are very similar. Don's response of "it's merchant silicon" seems to lend credence to the use of a Trident chipset in this switch. I think this means that we're going to start seeing switches with very similar "speeds and feeds" coming from every vendor that decides to outsource their chipsets. The real power is going to come from the software and management layers that drive these switches to do things. That's what OpenFlow is really getting into. If all the switches have the same performance, it's a relatively trivial matter to drive them with a centralized controller. When you consider that most of them will end up running similar chipsets anyway, it's not a big leap to suggest that the first generation of OpenFlow/SDN-enabled switches are going to look identical to a controller at a hardware level.
The other takeaway from the first part of the session is the "recommended" limit of 25 switches per controller in the ProgrammableFlow architecture. This, in my mind, is the part that cements this solution firmly in the data center and not in the campus as we know it. Campus closets can be very interesting environments with multiple switches across disparate locations. I'm not sure if the PF-series switches need to have direct connections to a controller or if they can be daisy chained. But by setting a realistic limitation of 25 switches in this revision, you're creating a scaling limitation of 25 racks of equipment, since NEC considers the PF5820 to be a top-of-rack (ToR) switch for data center users. A 25-rack data center could be an acreage of servers for some or a drop in the bucket for others. The key will be seeing if NEC is going to support a larger install base per controller in future releases.
We got a great overview of using OpenFlow in network design from Samrat Ganguly. He mentioned a lot of interesting scenarios where OpenFlow and ProgrammableFlow could be used to provide functionality similar to what we do today with things like MPLS. We could force a traffic flow to transit from a firewall to an IDS and then on to its final destination, all by policy rather than clever cabling tricks. The case for using OpenFlow as opposed to MPLS focuses mostly on the idea of using a (relatively) simple central controller versus the more traditional method of setting up VRFs and BGP to connect paths across your core. This is another place where software defined networking (SDN) will help in the data center. I don't know what kind of inroads it will make against those organizations that are extensively using MPLS today, but it gives organizations just starting out a good option for easy traffic steering. We rounded out our time at NEC with a live demo of ProgrammableFlow:
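The service-chaining idea above can be sketched in a few lines of Python. Everything here is illustrative, not NEC's actual ProgrammableFlow API: the port numbers, the dict-based flow table, and the helper names are all assumptions standing in for the match/action rules a real OpenFlow controller would push down to a switch.

```python
# Hypothetical switch ports; in a real deployment these would be discovered
# by the controller, not hard-coded.
FIREWALL_PORT = 10   # port facing the firewall appliance
IDS_PORT = 11        # port facing the IDS
SERVER_PORT = 12     # port facing the destination server

def build_service_chain(flow_table, match, chain):
    """Install one rule per hop so matching traffic traverses the chain in order.

    Each rule is keyed on (tcp_dst, in_port), mimicking how an OpenFlow match
    on ingress port lets the switch distinguish pre- and post-appliance traffic.
    """
    in_port = match["in_port"]
    for out_port in chain:
        flow_table[(match["tcp_dst"], in_port)] = out_port
        in_port = out_port  # traffic re-enters the switch from the appliance

def forward(flow_table, tcp_dst, in_port):
    """Look up the output port for a packet, as a switch would on rule match."""
    return flow_table.get((tcp_dst, in_port))

flow_table = {}
# Steer inbound HTTP (tcp/80) arriving on port 1 through firewall -> IDS -> server.
build_service_chain(
    flow_table,
    match={"tcp_dst": 80, "in_port": 1},
    chain=[FIREWALL_PORT, IDS_PORT, SERVER_PORT],
)

# An HTTP packet entering on port 1 is sent to the firewall first...
print(forward(flow_table, 80, 1))              # 10
# ...then, returning from the firewall, on to the IDS, then the server.
print(forward(flow_table, 80, FIREWALL_PORT))  # 11
print(forward(flow_table, 80, IDS_PORT))       # 12
```

The point of the sketch is that the chain lives entirely in controller policy: re-ordering the appliances means rewriting three table entries, not re-cabling a rack or re-engineering VRFs and BGP across the core.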
It appears to me that NEC has doubled down on OpenFlow. That's not a bad thing in the least. However, I do believe that OpenFlow has a very well defined set of characteristics today that make it a good fit for data center networking and not for the campus LAN. The campus LAN is still the wild, wild west and won't benefit in the near term from the ability to push flows down into the access layer in a flash. The data center, on the other hand, is much less tolerant of delay and network reconfiguration. By allowing a ProgrammableFlow controller to direct traffic around your network, you can put resources where they are needed much more quickly than with some DC implementations on the market. The key takeaway from NEC this time around is that OpenFlow is still very much a 1.0 product release. There are a lot of things planned for the future of OpenFlow, even in the 1.1 and 1.2 specs. I think NEC has the right ideas about where they want to take things in OpenFlow 2.0. The key is going to be whether or not the industry changes fast enough to keep up.
Tech Field Day Disclaimer
NEC was a sponsor of Network Field Day 3. As such, they were responsible for covering a portion of my travel and lodging expenses while attending Network Field Day 3. In addition, they provided me a USB drive containing marketing collateral and copies of the presentation, and a very interesting erasable pen. They did not ask for, nor were they promised, any kind of consideration in the writing of this review/analysis. The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.