Open Networking Needs to Be Interchangeable


We’re coming up quickly on the fall meeting of the Open Networking User Group, which is a time for many members of the financial community to debate the needs of modern networking and provide a roadmap and use case set for networking vendors to follow in the coming months. ONUG provides what some technology desperately needs – a problem to which it can be applied.

Open Or Something Like It

We’ve already started to see the same kind of non-open solution building that plagued the early networking years creeping into some aspects of our new “open” systems. Rather than building on what we consider to be tried-and-true building blocks, we instead turn to proprietary solutions that promise “magic” when it comes to configuration and maintenance. Should your network provide the magic? Or is that your job?

Magical is what the network should look like to a user, not to the admins. Think about the networking in cloud providers like AWS and MS Azure. The networking there presents a very simple model that hides complexity. The average consumer of AWS services doesn’t need to know the specifics of configuration in the underlay of Amazon’s labyrinth of a cloud. All that matters is that traffic goes where it is supposed to go and arrives when it is supposed to arrive.
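To see just how simple that model is from the consumer side, here’s a minimal sketch using boto3, the AWS Python SDK. It assumes credentials are already configured; the point is what’s absent: no underlay, no protocols, no cabling.

```python
# The consumer's view of AWS networking: declare intent, ignore the underlay.
# Minimal boto3 sketch; assumes AWS credentials are already configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An isolated network and a subnet inside it.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Internet reachability: a gateway and a default route. That's the whole "WAN config."
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```

Everything below those API calls – the labyrinth – is Amazon’s problem, not yours.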

Let’s apply those same kinds of lessons to open networks in our environments. What we need isn’t a magic bullet that makes everything turn into a checkbox or button to do mysterious things behind a curtain. What we really need is an open system that lets us build the boxes and buttons ourselves. That requires a kind of interoperation that isn’t present in the first generation of driving networks through software.

This is also one of the concerns present in policy definitions and models like those found in Cisco ACI. In order for these higher-order systems to work efficiently, the majority of the focus needs to be on the definition of actions and the execution of those policies. What can’t occur is a large amount of time spent fixing the interoperation between pieces in the policy underlay.

Think about your current network. Do you spend most of your time focused on the packets flowing between applications? Or are you spending a higher percentage of your time fixing the pathways between those applications? Optimizing the underlay for those flows? Trying to figure out why something isn’t working over here versus why it is working over there?

Networking Needs Eli Whitney

Networking isn’t open the way that it needs to be. It’s as open as manufacturing was before the invention of interchangeable parts. Our systems are cobbled together contraptions of unique parts and systems that collapse when a single piece falls out of place. Instead of fixing the issue and restoring sanity, we are forced to exert extra effort molding the new pieces to function like the old.

Truly open networking isn’t just about the software riding on top of the underlay. It’s about making the interfaces said software interacts with seamless enough to swap parts and pieces and allow the system to continue to function without major disruption. We can’t spend our time tinkering with why the API isn’t accepting instructions or reconfiguring the markup language because the replacement part is a different model number.
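To make that concrete, here’s a rough Python sketch of what interchangeable parts look like in software. Every name in it is hypothetical; the point is that the orchestration code targets a contract, and vendor-specific quirks live in drivers you can swap like standardized bolts:

```python
# Interchangeable parts as a software contract. All names are hypothetical.
from abc import ABC, abstractmethod


class SwitchDriver(ABC):
    """The agreed-upon interface: the 'threaded bolt' every part must fit."""

    @abstractmethod
    def add_vlan(self, vlan_id: int, name: str) -> None: ...


class VendorADriver(SwitchDriver):
    def add_vlan(self, vlan_id: int, name: str) -> None:
        # Vendor A speaks a CLI-style dialect under the hood.
        print(f"vlan {vlan_id}\n name {name}")


class VendorBDriver(SwitchDriver):
    def add_vlan(self, vlan_id: int, name: str) -> None:
        # Vendor B speaks JSON-over-REST under the hood.
        print({"vlan-id": vlan_id, "description": name})


def provision(driver: SwitchDriver) -> None:
    # The orchestration code never changes when the hardware does.
    driver.add_vlan(100, "app-tier")


provision(VendorADriver())
provision(VendorBDriver())  # swap the part; the system keeps working
```

When the replacement part is a different model number, only the driver changes. The system keeps running.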

When networks are open enough that they work the way AWS and Azure work, without massive interference on our part, that will be a truly landmark day. That day will mark the moment when our networks become focused on service delivery instead of component integration. The openness in networking will lead us to stop worrying about it. Not because someone built a magic proprietary system that works now with three other devices and will probably be forgotten in another year. But because networking vendors finally discovered that solving problems is much more profitable than creating roadblocks.


Tom’s Take

I’ve been very proud to take part in ONUG for the past few years. The meetings have given me an entirely new perspective on how networking is viewed by users and consumers. It’s also a great way to get in touch with people who are doing networking in unique environments with exacting needs. ONUG has also helped forward the cause of opening networking by providing a nucleus for users to bring their requirements to the group that needs to hear them most of all.

ONUG can continue to drive networking forward by insisting that future networking developments are open and interoperable at a level that makes hardware inconsequential. No standards body can exert that influence. It comes from users voting with dollars, and ONUG represents some deep purse strings.

If you are in the New York area and would like to attend ONUG this November 4th and 5th, you can use the code TFD30 to get 30% off the conference registration cost. And if you tell them that Tom sent you, I might be able to arrange for a nice fruit basket as well.


SDN Myths Revisited


I had a great time at TECHunplugged a couple of weeks ago. I learned a lot about emerging topics in technology, including a great talk about the death of disk from Chris Mellor of The Register. All in all, it was a great event. Even with a presentation from the token (ring) networking guy.

I had a great time talking about SDN myths and truths and doing some investigation behind the scenes. What we see and hear about SDN is only a small part of what people think about it.

SDN Myths

Myths emerge because people can’t understand or won’t understand something. Myths perpetuate because they are larger than life. Lumberjacks and blue oxen clearing forests. Cowboys roping tornadoes. That kind of thing. With technology, those myths exist because people don’t want to believe reality.

SDN is going to take the jobs of people that can’t face the reality that technology changes rapidly. There is a segment of the tech worker populace that just moves from new job to new job doing the same old things. We leave technology behind all the time without a care in the world. But we worry when people can’t work on that technology.

I want you to put your hands on a floppy disk. Go on, I’ll wait. Not so easy, is it? Removable disk technology is on the way out the door. Not just magnetic disk either. I had a hard time finding a CD-ROM drive the other day to read an old disc with some pictures. I’ve taken to downloading digital copies of films because my kids don’t like operating a DVD player any longer. We don’t mourn the passing of disks, we celebrate it.

Look at COBOL. It’s a venerable programming language that still runs a large percentage of insurance agency computer systems. It’s safe to say that the amount of money it would cost to migrate away from COBOL to something relatively modern would be in the millions, if not billions, of dollars. Much easier to take a green programmer and teach them an all-but-dead language and pay them several thousand dollars to maintain this out-of-date system.

It’s like the old story of buggy whip manufacturers. There’s still a market for them out there. Not as big as it was before the introduction of the automobile. But it’s there. You probably can’t break into that market, and you had better be very good (or really cheap) at making them if you want to get a job doing it. The job that a new technology replaced is still available for those that need that technology to work. But most of the rest of society has moved on and the old technology fills a niche role.

SDN Truths

I wasn’t kidding when I said that Gartner not having an SDN quadrant was the smartest thing they ever did (aside from the shot at stretched layer 2 DCI). I say this because it will finally force customers to stop asking for a magic bullet SDN solution and it will force traditional networking vendors to stop packaging a bunch of crap and selling it as a magic bullet.

When SDN becomes a part of the entire solution and not some mystical hammer that fixes all the nails in your environment, then the real transformation can happen. Then people that are obstructing real change can be marginalized and removed. And the technology can be the driver for advancement instead of someone coming down the hall complaining about things not working.

We spend so much time reacting to problems that we forgot how to solve them for good. We’re not being malicious. We just can’t get past the triage. That’s the heart of the fire fighter problem. Ivan wrote a great response to my fire fighter post and his points were spot on. Especially the ones about people standing in the way, whether it be through outright obstruction or by taking away the power to effect real change. We can’t hold networking people responsible for the architecture and simultaneously keep them from solving the root issues. That’s the ham-handed kind of organizational roadblock that needs to change to move networking forward.


Tom’s Take

Talks like this don’t happen overnight. They take careful planning and thought, followed by panic when you realize your 45-minute talk is actually 20 minutes. So you cut out the boring stuff and get right to the meat of the issue. In this case, that meat is the continued misperception of SDN no matter how much education we throw at the networking community. We’re not going to end up jobless programmers being lied to by silver-tongued marketing wonks. But we are going to have to face the need for organizational change and process reevaluation on a scale that will take months, if not years, to implement correctly. And then do it all over again as technology evolves to fit the new mold we created when we broke the old one.

I would rather see the easy money flee to a new startup slot machine and all of the fair weather professionals move on to a new career in whatever is the hot new thing. That means those of us left behind in the newly-transformed traditional networking space will be grizzled veterans willing to learn and implement the changes we need to make to stop being blamed for the problems of IT and be a model for how it should be run. That’s a future to look forward to.


CCIE at 50k: Software Defined? Or Hardware Driven?


Congratulations to Ryan Booth (@That1Guy_15) on becoming CCIE #50117. It’s a huge accomplishment for him and the networking community. Ryan has put in a lot of study time so this is just the payoff for hard work and a job well done. Ryan has done something many dream of and few can achieve. But where is the CCIE program today? And where will it be in the future?

Who Wants To Be A CCIE?

A lot of virtual ink has been committed to opinions in the past couple of years about how the CCIE is becoming increasingly irrelevant in a world of software-defined, DevOps-focused, non-traditional networking teams. It has been said that the CCIE doesn’t teach modern networking concepts like programming or building networks in a world with no CLI access. While this is all true, I don’t think it diminishes the value of getting a CCIE.

The CCIE has never been about building a modern network. It has never been focused on creating anything other than a medium-sized enterprise network in the case of the routing and switching exam. It is not a test of best practices or of greenfield deployment scenarios. Instead, it has been a test of interoperability with an existing architecture. It tests the ability of the candidate to add devices and protocols to a stable existing network.

Other flavors of the CCIE test different protocols or technologies, but the idea is still the same. The only one that even comes close to requiring programming is the CCIE Collaboration, which tests the ability to customize Cisco Contact Center scripts. Otherwise, each test focuses on technology implementation and not architecture or operation.

Current logic dictates that people don’t want to take the CCIE because it doesn’t teach programming or API interaction. Yet candidates are showing up in droves. It’s almost as if the networks we have today are going to need to be maintained and built out over the coming years. These are the kinds of tasks that are well suited to a support-focused certification like the CCIE. The ideal CCIE candidate isn’t using Vagrant and Chef in a lab somewhere. They’re muddling through OSPF-to-RIP redistribution somewhere in the dark corners of a network that got welded on after an acquisition.
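For the curious, that day job looks something like this: a hedged sketch of pushing mutual OSPF-to-RIP redistribution to an IOS router with the netmiko library. The host, credentials, process IDs, and metric are all placeholders:

```python
# Pushing mutual OSPF-to-RIP redistribution to a Cisco IOS router.
# Sketch only: host and credentials are placeholders, and the process
# numbers and metric are illustrative, not recommendations.
from netmiko import ConnectHandler

legacy_router = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",       # placeholder (TEST-NET address)
    "username": "admin",
    "password": "changeme",
}

redistribution = [
    "router ospf 1",
    " redistribute rip subnets",        # pull RIP routes into OSPF
    "router rip",
    " redistribute ospf 1 metric 3",    # push OSPF routes back with a sane hop count
]

with ConnectHandler(**legacy_router) as conn:
    conn.send_config_set(redistribution)
    conn.save_config()
```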

Is Everyone A CCIE?

One thing I have noticed about the CCIE is that the climb in numbers seems to have leveled off. It’s not the rapid explosion of certifications that it has been in the past, nor is it the eventual cliff of increased difficulty. Things seem to be marching more toward steady growth. I don’t know how much of that can be attributed to factors like the Cisco official CCIE training program or the upgrade to version 5 almost two years ago.

Lots of CCIEs doesn’t necessarily mean that the test has lost meaning. Microsoft had several thousand MCSEs by the time the certification became a punchline to countless call center jokes. Novell had a virtual army of Certified NetWare Engineers (CNEs) before software changes locked many of them into CNE 5 or CNE 6. Having a lot of certified individuals doesn’t devalue the certification. It’s what people do with it that creates the reputation. Ask any Novell Certified Directory Engineer (CDE) about the reputation garnered by a test and they can give you a lesson in hard exams that breed bright engineers.

Does that mean that we should brace ourselves for even more CCIEs in the future? It likely won’t be as bad as has been imagined. The written exam for version 5 has pointed out to me that Cisco is going to start closing ranks around technologies in the near future. The written exam serves as a testing ground for potential new topics on the exam. MPLS was a written topic long before it became a potential lab exam topic. The current written exam is full of technologies that make me think Cisco is starting to put more emphasis on the Cisco and less on the Internetworking in CCIE.

Cisco wants to have a legion of certified individuals that think about Cisco technology benefits. That’s why we’re starting to see a shift toward things like DMVPN and GETVPN in testing. In place of industry standard protocols, we get the Cisco improved versions. This locks candidates into the Cisco method of thinking and ensures that their go-to solutions will include some form of proprietary technology.

If this shift in thinking is really the start of the new way of certification testing, I worry for the future of the CCIE. Not because there are 50,000 CCIEs, but because the new inductees into the CCIE group will be focused on creating islands of Cisco in the sea of interoperable data center networks. That’s good for Cisco’s bottom line, but bad for the reputation of the CCIE. Could you imagine what would happen if a CCIE walked in and told you they couldn’t fix your MPLS VPN configuration issues because “I only know how to work on DMVPN”?


Tom’s Take

Every time someone I know passes the CCIE it makes me happy that they’ve completed a rigorous exam testing process. It tells me this person knows how to follow the lab instructions to create an interoperable enterprise network based on constraints. It also tells me that this person knows how to study material and doesn’t give up. Those are the kinds of people I would want in my networking group.

CCIEs are the perfect people to learn more modern network techniques like programmability and SDN. Not because they learned how to do it on their test. But because they are the kinds of people that learn well and will apply everything they have to picking up a new concept. But it needs to be pointed out here that Cisco must foster that kind of interoperable learning experience with CCIEs. Focusing too heavily on proprietary solutions to help create an army of unknowing Cisco SEs in the field will only serve to hurt Cisco in the future when that group of certified individuals must learn to work in the world of networking post-SD.


This WAN Is Your WAN, This WAN Is My WAN

Straw Bales on Hill Landscape, Tuscany, Italy

Ideas coalesce all the time in every vertical. You don’t really notice it until you wake up one day and suddenly everything around you looks identical. Wireless becoming the new access layer. Flash storage taking hold of the high-end performance crown. And in networking we have the dominance of all things software defined. One recent development has come along much faster than anyone could have predicted: Software Defined Wide Area Networking (SD-WAN).

Automatic For The People

SD-WAN is a force in modern networking because people want simplicity. While Ivan does a great job of decoupling marketing from reality, people still believe that SD-WAN is the silver bullet that will fix all of their WAN woes. Even during the original discussions of SD-WAN technology at conferences like ONUG, the overriding idea wasn’t around tying sites together or driving down costs to the point of feasibility. It was all about making life easier.

How does SD-WAN manage to accomplish this? It’s all black box networking. Just like the fuel injector in your car. There’s no crying about interoperability or standards-based protocols. You just plug things in and it all works, even if you can’t exactly plug one vendor’s solution into a competitor’s. Lock-in wins again.

The ideas behind SD-WAN aren’t exactly new. Cisco talked about SD-WAN quite a bit at Networking Field Day 10. Here’s Jeff Reed on it:

The rest of the two-hour session details how Cisco is using their Intelligent WAN (IWAN) product to drive SD-WAN. The names of the components all sound very familiar to networkers: DMVPN, NBAR, PfR, and so on. That’s because SD-WAN uses a lot of tried-and-true techniques to tie the concept together. There’s nothing earth-shattering about SD-WAN under the hood. In fact, a fair number of people that work at the “pioneering” SD-WAN startups seem to have their roots in one or more traditional networking companies.

Fables of Reconstruction

Look at the other presenters at Networking Field Day 10. Two of them announced SD-WAN solutions even though they aren’t really known for expertise in SD-WAN. One of them wasn’t even known for branch office acceleration. So why the SD-WAN land rush all of a sudden? What’s behind the need to have a solution?

You probably wouldn’t be surprised to learn that a lot of investors are backing expansion into SD-WAN technologies. It’s a hot property. But why? As above, customers aren’t interested in the technical wizardry that goes into SD-WAN. They aren’t clamoring for it to supplant their current WAN solution and offer a Rosetta Stone of inter-vendor WAN cooperation. What’s behind the push?

It probably goes something like this:

  1. Technologist needs to implement WAN architecture. Is dismayed that things are so difficult.
  2. Technologist starts searching for solutions about WAN. They probably start asking friends about it.
  3. Analyst firm hears that technologists are asking about WAN solutions. Releases a questionnaire asking which technologies you’d like to learn more about.
  4. Responses to questionnaires are loaded into a graph or report that people buy because they don’t know who to talk to.
  5. Companies realize customers want WAN solutions. They break their necks to offer those solutions to keep up with demand.
  6. Investors see companies beginning to offer WAN solutions and think there’s a huge untapped market. They start funding anyone that mentions WAN in a meeting.

By the way, you can replace “WAN” with any technology above and it still works.

Thanks to customers needing a solution for something they can’t configure easily, they are going to be inundated with SD-WAN options by the time they turn around. And the biggest concern is no longer “Who has the easiest solution?” but instead “Who is still going to be here in six months?”

Collapse Into Now

The reckoning is coming in the SD-WAN market. If a company doesn’t already have an SD-WAN solution in development or if their solution won’t see daylight for another nine months, they are going to exercise the second “B” of innovation and buy it. And they have a lot of prime targets to choose from.

Investors get cagey without an exit strategy. How are they going to win at this game? They either have to get paid with an IPO, with a later round of funding, or by having someone buy out the investment. If an investor thinks they can get their money back (plus a bit of interest) by having this little startup bought by a traditional networking vendor you can better believe they will be advising the startup to sell.

The customers are the real losers in the case of a buyout, or worse, a bankruptcy. Those highly proprietary solutions become dead weight if there isn’t any support for them any longer. Black box networking falls apart when the little magical creatures inside the box go away. Which means customers will be skittish about supporting a solution that is likely to go away any time soon.

Who will you support? An established vendor slow to roll out a solution? Or an up-and-coming company with new ideas but at risk of being snapped up by a big bank account?


Tom’s Take

I loved seeing all the SD-WAN discussion at Networking Field Day 10. SD-WAN is no longer magic sauce that aggregates DSL and MPLS circuits with encryption. Nuage Networks showed off deploying Docker apps to remote sites. Riverbed talked about using their WAN optimization experience to deploy SaaS solutions through SD-WAN.

We’ve heard from SD-WAN companies in the past at Networking Field Day. It’s interesting to hear the comparisons between the upstarts and the old geezers. It’s clear there is a ton of money that is being invested in SD-WAN. The trick is to find out your needs and pick the best solution for you. Otherwise you may find yourself losing your SD-WAN religion.


SDN and the Trough Of Understanding

Gartner Networking Hype Cycle, 2015

An article published this week referenced a recent Hype Cycle diagram (pictured above) from the oracle of IT – Gartner. While the lede talked a lot about the apparent “death” of Fibre Channel over Ethernet (FCoE), there was also a lot of time devoted to discussing SDN’s arrival at the Trough of Disillusionment. Quoting directly from the oracle:

Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.

As SDN approaches this dip in the Hype Cycle it would seem that the steam is finally being let out of the Software Defined Bubble. The Register article mentions how people are going to leave SDN by the wayside and jump on the next hype-filled networking idea, likely SD-WAN given the amount of discussion it has been getting recently. Do you know what this means for SDN? Nothing but good things.

Software Defined Hammers

Engineers have a chronic case of Software Defined Overload. SD-anything ranks right up there with Fat Free and New And Improved as the Most Overused Marketing Terms. Every solution release in the last two years has been software defined somehow. Why? Because that’s what marketing people think engineers want. Put Software Defined in the product and people will buy it hand over fist. Guess what Little Tommy Callahan has to say about that?

There isn’t any disillusionment in this little bump in the road. Quite the contrary. This is where the rubber meets the road, so to speak. This is where all the pretenders to the SDN crown find out that their solutions aren’t suited for mass production. Or that their much-vaunted hammer doesn’t have any nails to drive. Or that their hammer can’t drive a customer’s screws or rivets. And those pretenders will move on to the next hype bubble, leaving the real work to companies that have working solutions and real products that customers want.

This is no different than every other “hammer and nail” problem from the past few decades of networking. Whether it be ATM, MPLS, or any one of a dozen “game changing” technologies, the reality is that each of these solutions went from being the answer to every problem to being a specific solution for specific problems. Hopefully we’ve gotten SDN to this point before someone develops the software defined equivalent of LANE.

The Software Defined Road Ahead

Where does SD-technology go from here? Well, without marketing whipping everyone into a Software Defined Frenzy, the future is whatever developers want to make of it. Developers that come up with solutions. Developers that integrate SDN ideas into products and quietly sell them for specific needs. People that play the long game rather than hope that they can take over the world in a day.

Look at IPv6. It solves so many problems we have with today’s Internet. Not just IP exhaustion issues either. It solves issues with security, availability, and reachability. Yet we are just now starting to deploy it widely thanks to the panic of the IPocalypse. IPv6 did get a fair amount of hype twenty years ago when it was unveiled as the solution to every IP problem. After years of mediocrity and being derided as unnecessary, IPv6 is poised to finally assume its role.

SDN isn’t going to take nearly as long as IPv6 to come into play. What is going to happen is a transition away from Software Defined as the selling point. Even today we’re starting to see companies move away from SD labeling and instead use more specific terms that help customers understand what’s important about the solution and how it will help them. That’s what is needed to clarify the confusion and reduce fatigue.


Open Choices In Networking


I had an interesting time at the spring meeting of the Open Networking User Group (@ONUG_) this past week. There were lots of discussions about networking, DevOps, and other assorted topics. One that caught me by surprise was some of the talk around openness. These tweets from Lisa Caywood (@RealLisaC) were especially telling:

After some discussion with other attendees, I think I’ve figured it out. People don’t want an open network. They want choice.

Flexible? Or Predictable?

Traditional networking marries software and hardware together. You want a Cisco switch? It runs IOS or NX-OS. Running Juniper? You can have any flavor of OS you want…as long as it’s Junos. That has been the accepted order of things for decades. Flexibility is traded for predictability. Traditional networking vendors give you many of the tools you need. If you need something different, you have to find the right mix of platform and software to get your goal accomplished. Mixing and matching is almost impossible.

This sounds an awful lot like the old IBM PC days. The same environment that gave rise to whitebox computers. We have a whitebox switching movement today as well for almost the same reasons – being able to run a different OS on cheaper hardware to the same end as the traditional integrated system. In return, you gain back that flexibility that you lost. There are some tradeoffs, however.

In theory, a whitebox switch is marginally harder to troubleshoot than a traditional platform. Which combination of OS and hardware are you running? How do those things interact to create bugs? Anyone that has ever tried to install USB device drivers on Windows knows that kind of pain. Getting everything to work right can be rough.

In practice, the support difference is negligible. Traditional vendors have a limited list of hardware, but the numerous versions of software (including engineering special code) interacting with those platforms can cause unforeseen consequences. Likewise, most third party switch OS vendors have a tight hardware compatibility list (HCL) to ensure that everything works well together.

People do like flexibility. Giving them options means they can build systems to their liking. But that’s only a part of the puzzle.

The Problem Is Choice

Many of the ONUG attendees I talked to liked the idea of whitebox switching. They weren’t entirely enamoured, however. When I pressed a bit deeper, a pattern started to emerge. It sounded an awful lot like this:

I don’t want to run Vendor X Linux on my switch. I want to run my Linux on a switch!

That issue highlighted the real problem. Open networking proponents don’t want open source networking systems that enhance the work of all. What they want is a flexible network that is capable of letting them run what they want on things.

The people that attend conferences like ONUG don’t like rigid choice options. Telling them they can run IOS or Junos is like picking the lesser of two evils. These people want to have a custom OS with the bare minimum needed to support a role in the network. They are used to solving problems outside the normal support chain. They chafe at the idea of being forced into a binary decision.

That goes back to Lisa’s tweets. People don’t want a totally open network running Quagga and other open source solutions. They want an open architecture that lets them rip and replace solutions based on who is cheaper that week or who upset them at the last account team meeting. They want the freedom to use their network as leverage to get better deals.

It’s a whole lot easier to get a better discount when you can legitimately threaten to have the incumbent thrown out and replaced relatively easily. Even if you have no intentions of doing so. Likewise, new advances in whitebox switching give you leverage to replace sections of the network and have feature parity with traditional vendors in all but a few corner cases. It seems to be yet another redefinition of open.


Tom’s Take

Maybe I’m just a cynic. I support development of software that makes the whole world better. My idea of open involves people working together to make everything better. It’s not about using strategies to make just my life easier. Enterprises are big consumers of open technologies with very little reciprocity outside of a few forward thinkers.

Maybe the problem is that we’ve overloaded open to mean so many other things that we have cognitive dissonance when we try to marry the various open ideas together? Open network architecture is easy as long as you stick to OSPF and other “standard” protocols. Perhaps the problem of choice is being shortsighted enough to make the wrong one.


The Light On The Fiber Mountain


Fabric switching systems have been a popular solution for many companies in the past few years. Juniper has QFabric and Brocade has VCS. For those not invested in fabrics, the trend has been to collapse the traditional three tier network model down into a spine-leaf architecture to optimize east-west traffic flows. One must wonder how much more optimized that solution can be. As it turns out, there is a bit more that can be coaxed out of it.

Shine A Light On Me

During Interop, I had a chance to speak with the folks over at Fiber Mountain (@FiberMountain) about what they’ve been up to in their solution space. I had heard about their revolutionary SDN offering for fiber. At first, I was a bit doubtful. SDN gets thrown around a lot on new technology as a way to sell it to people that buy buzzwords. I wondered how a fiber networking solution could even take advantage of software.

My chat with M. H. Raza started out with a prop. He showed me one of the new Multifiber Push On (MPO) connectors that represent the new wave of high-density fiber. Each cable, which is roughly the size and shape of a SATA cable, contains 12 or 24 fiber connections. These are very small and pre-configured in a standardized connector. This connector can plug into a server network card and provide several light paths to a server. This connector and the fibers it terminates are the building block for Fiber Mountain’s solution.

With so many fibers running to a server, Fiber Mountain can use their software intelligence to start doing interesting things. They can begin to build dedicated traffic lanes for applications and other traffic by isolating that traffic onto fibers already terminated on a server. The connectivity already exists on the server. Fiber Mountain just takes advantage of it. It feels very similar to the way we add additional gigabit network ports when we need to expand things like VMkernel ports or dedicated traffic lanes for other data.

Quilting Circle

Where this solution starts looking more like a fabric is what happens when you put Fiber Mountain Optical Exchange devices in the middle. These switching devices act like aggregation ports in the “spine” of the network. They can aggregate fibers from top-of-rack switches or from individual servers. These exchanges tag each incoming fiber and add them to the Alpine Orchestration System (AOS), which keeps track of the connections just like the interconnections in a fabric.

Once AOS knows about all the connections in the system, you can use it to start building pathways between east-west traffic flows. You can ensure that traffic between a web server and backend database has dedicated connectivity. You can add additional resources between systems that are currently engaged in heavy processing. You can also dedicate traffic lanes for backup jobs. You can do quite a bit from the AOS console.
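Fiber Mountain hasn’t published a public API reference that I’ve seen, so treat this as a purely hypothetical sketch of what programming a light path from AOS might reduce to. The endpoint, payload fields, and port names are all invented for illustration:

```python
# Hypothetical sketch of programming a layer 1 light path via a controller API.
# The endpoint, payload fields, and port names are invented for illustration;
# this is NOT a documented Fiber Mountain AOS interface.
import requests

AOS = "https://aos.example.com/api/v1"

path = {
    "name": "web-to-db",
    "a_end": "exchange-1/port-12/fiber-3",   # web server's terminated fiber
    "z_end": "exchange-2/port-4/fiber-7",    # database server's terminated fiber
    "lanes": 2,                              # dedicate two fibers to this flow
}

resp = requests.post(f"{AOS}/lightpaths", json=path, timeout=10)
resp.raise_for_status()
print("provisioned path:", resp.json())
```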

Now you have a layer 1 switching fabric without any additional pieces in the middle. The exchanges function almost like a passthrough device. The brains of the system exist in AOS. Remember when Ivan Pepelnjak (@IOSHints) spent all his time pulling QFabric apart to find out what made it tick? The Fiber Mountain solution doesn’t use BGP or MPLS or any other magic protocol sauce. It runs at layer 1. The light paths are programmed by AOS and the packets are switched across the dense fiber connections. It’s almost elegant in its simplicity.

Future Illumination

The Fiber Mountain solution has some great promise. Today, most of the operations of the system require manual intervention. You must build out the light paths between servers based on educated guesses. You must manually add additional light paths when extra bandwidth is needed.

Where they can really improve their offering in the future is to add intelligence to AOS that automatically makes those decisions based on predefined thresholds and inputs. If the system can detect bigger “elephant” traffic flows and automatically provision more bandwidth, or isolate these high-volume packet generators, it will go a long way toward making things much easier on network admins. It would also be great to provide a way to interface that “top talker” data into other systems to alert network admins when traffic flows get high and need additional resources.
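The automation they’re missing is conceptually a small control loop. Here’s a hypothetical Python sketch; get_flow_stats(), add_lane(), and alert() are stand-ins for whatever telemetry and provisioning hooks AOS would actually expose:

```python
# Hypothetical control loop: widen a light path when an elephant flow appears.
# get_flow_stats(), add_lane(), and alert() stand in for whatever telemetry
# and provisioning calls a controller like AOS would actually expose.
import time

ELEPHANT_GBPS = 8.0   # threshold that marks a flow as a top talker

def autoscale_paths(get_flow_stats, add_lane, alert):
    while True:
        for flow in get_flow_stats():            # e.g. [{"path": ..., "gbps": ...}]
            if flow["gbps"] > ELEPHANT_GBPS:
                add_lane(flow["path"])           # provision another fiber lane
                alert(f"{flow['path']} exceeded {ELEPHANT_GBPS} Gbps; lane added")
        time.sleep(30)                           # poll interval
```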


Tom’s Take

I like the Fiber Mountain solution. They’ve built a layer 1 fabric that performs similarly to the ones from Juniper and Brocade. They are taking full advantage of the resources provided by the MPO fiber connectors. By adding a new network card to a server, you can test this system without impacting other traffic flows. Fiber Mountain even told me that they are looking at trial installations for customers to bring their technology in at lower costs as a project to show the value to decision makers.

Fiber Mountain has a great start on building a low latency fiber fabric with intelligence. I’ll be keeping a close eye on where the technology goes in the future to see how it integrates into the entire network and brings SDN features we all need in our networks.


The Walls Are On Fire

There’s no denying the fact that firewalls are a necessary part of modern perimeter security. NAT isn’t a security construct. Attackers have the equivalent of megaton nuclear arsenals with access to so many DDoS networks. Security admins have to do everything they can to prevent these problems from happening. But one look at the firewall market tells you something is terribly wrong.

Who’s Protecting First?

Take a look at this recent magic polygon from everyone’s favorite analyst firm:

FW Magic Polygon. Thanks to @EtherealMind.

I won’t deny that Checkpoint is on top. That’s mostly due to the fact that they have the biggest install base in enterprises. But I disagree with the rest of this mystical tesseract. How is Palo Alto a leader in the firewall market? I thought their devices were mostly designed around mitigating internal threats? And how is everyone not named Cisco, Palo Alto, or Fortinet relegated to the Niche Players corral?

The issue comes down to purpose. Most firewalls today aren’t packet filters. They aren’t designed to keep the bad guys out of your networks. They are unified threat management systems. That’s a fancy way of saying they have a whole bunch of software built on top of the packet filter to monitor outbound connections as well.

Insider threats are a huge security issue. People on the inside of your network have access. They sometimes have motive. And their power goes largely unchecked. They do need to be monitored by something, whether it be an IDS/IPS system or a data loss prevention (DLP) system that keeps sensitive data from leaking out. But how did all of those devices get lumped together?

Deploying security monitoring tools is as much art as it is science. IPS sensors can be placed in strategic points of your network to monitor traffic flows and take action on them. If you build it correctly you can secure a huge enterprise with relatively few systems.

But more is better, right? If three IPS units make you secure, six would make you twice as secure, right? No. What you end up with is twice as many notifications. Those start getting ignored quickly. That means the real problems slip through the cracks because no one pays attention any more. So rather than deploying multiple smaller units throughout the network, the new mantra is to put an IPS in the firewall in the front of the whole network.

The firewall is the best place for those sensors, right? All the traffic in the network goes through there after all. Well, except for the user-to-server traffic. Or traffic that is internally routed without traversing the Internet firewall. Crafty insiders can wreak havoc without ever touching an edge IPS sensor.

And that doesn’t even begin to describe the processing burden placed on the edge device by loading it down with more and more CPU-intensive software. Consider the following conversation:

Me: What is the throughput on your firewall?

Them: It’s 1 Gbps!

Me: What’s the throughput with all the features turned on?

Them: Much less than 1 Gbps…

When a selling point of your UTM firewall is that the numbers are “real”, you’ve got a real problem.

What’s Protecting Second?

There’s an answer out there to fix this issue: disaggregation. We now have the capability to break out the various parts of a UTM device and run them all in virtual software constructs thanks to Network Function Virtualization (NFV). And they will run faster and more efficiently. Add in the ability to use SDN service chaining to ensure packet delivery and you have a perfect solution. For almost everyone.
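The core idea is simple enough to sketch in a few lines of Python: each UTM feature becomes its own function, and the service chain is just an ordered list you can scale, rearrange, or bypass per tenant. The functions here are toys, not any vendor’s NFV API:

```python
# Toy sketch of NFV service chaining: each UTM feature becomes its own
# virtual function, and the chain is just an ordered list. Function names
# and match logic are stand-ins, not any vendor's API.

def packet_filter(pkt):
    return pkt if pkt.get("dst_port") in {80, 443} else None

def ips_inspect(pkt):
    return None if pkt and b"attack" in pkt.get("payload", b"") else pkt

def dlp_scan(pkt):
    return None if pkt and b"SSN:" in pkt.get("payload", b"") else pkt

# Scale or rearrange each function independently; no monolithic box required.
chain = [packet_filter, ips_inspect, dlp_scan]

def process(pkt):
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:          # any function in the chain may drop the packet
            return None
    return pkt

print(process({"dst_port": 443, "payload": b"hello"}))   # delivered
print(process({"dst_port": 443, "payload": b"attack"}))  # dropped by IPS
```

Because each function runs as its own construct, you can give the IPS more CPU without buying a bigger firewall for everything else.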

Who’s not going to like it? The big UTM vendors. The people that love selling oversize boxes to customers to meet throughput goals. Vendors that emphasize that their solution is the best because there’s one dashboard to see every alert and issue, even if those alerts don’t have anything to do with each other.

UTM firewalls that can reliably scan traffic at 1 Gbps are rare. Firewalls that can scan 10 Gbps traffic streams are practically non-existent. And what is out there costs a not-so-small fortune. And if you want to protect your data center you’re going to need a few of them. That’s a mighty big check to write.


Tom’s Take

There’s a reason why we call it Network Function Virtualization. The days of trying to cram every possible feature you could think of onto a single piece of hardware are over. We don’t need complicated all-in-one boxes that have insanely large CPUs. We have software constructs that can take care of all of that now.

While the engineers will like this brave new world, there are those that won’t. Vendors of the single box solutions will still tell you that their solution runs better. Analyst firms with a significant interest in the status quo will tell you NFV solutions are too far out or don’t address all the necessary features. It’s up to you to sort out the smoke from the flame.

Betting On The Right Horse

HobbyHorse

The announcement of the merger of Alcatel-Lucent and Nokia was a pretty big discussion last week. One of the quotes that kept being brought up in several articles was from John Chambers of Cisco. Chambers has said the IT industry is in for a big round of “brutal consolidation” spurred by “missed market transitions”, which is a favorite term of his. While I agree that consolidation is coming in the industry, I don’t think market transitions are the driver. Instead, it helps to think of it more like a day at the races.

Tricky Ponies

Startups in the networking industry have to find a hook to get traction with investors and customers. Since you can’t boil the ocean, you have to stand out. You need to find an application that gives you the capability to sell into a market. That is much easier to do with SDN than hardware-based innovation. The time-to-market for software is much lower than the barriers to ramp up production of actual devices.

Being a one-trick pony isn’t a bad thing when it comes to SDN startups. If you pour all your talent into one project, you get the best you can build. If that happens to be what your company is known for, you can hit a home run with your potential customers. You could be the overlay company. Or the policy company. Or the Docker networking layer company.

That rapid development time and relative ease of creation makes startups a tantalizing target for acquisition as well. Bigger companies looking to develop expertise often buy that expertise. Either acquiring the product or the team that built it gives the acquiring company a new horse in their stable.

If you can assemble an entire stable of ponies, you can build a networking company that addresses a lot of the needs of your customers. In fact, that’s how Cisco has managed to thrive to the point where they can gamble on those “market transitions”. The entity we call Cisco is really Crescendo, Insieme, Nuova, Andiamo, and hundreds of other single focus networking companies. There’s nothing wrong with that strategy if you have patience and good leadership.

Buy Your Own Stable

If you don’t have patience but have deep pockets, you will probably end up going down a different road. Rather than buying a startup here and there to add to a core strategy, you’ll be buying the whole strategy. That’s what Dell did when they bought Force10. If the rumors are true, that’s what EMC is looking to do soon.

Buying a company to provide your strategy has benefits. You can immediately compete. You don’t have to figure out synergies. Just sell those products and keep moving forward. You may not be the most agile company on the market but you will get the job done.

The issue with buying the strategy is most often “brain drain”. We see brain drain with a small startup going to a mid-sized company. Startup founders aren’t usually geared to stay in a corporate structure for long. They vest their interest and cash out. Losing a founder or key engineer on a product line is tough, but can be overcome with a good team.

What happens when the whole team walks out the door? If the larger acquiring company mistreats the acquired assets or influences creativity in a negative way, you can quickly find your best and brightest teams heading for green pastures. You have to make sure those people are taken care of and have their needs met. Otherwise your new product strategy will crumble before you know it.


Tom’s Take

The Nokia/Alcatel deal isn’t the last time we’ll hear about mergers of networking companies. But I don’t think it’s because of missed market transitions or shifting strategies. It comes down to companies with one or two products wanting protection from external factors. There is strength in numbers. And those numbers will also allow development of new synergies, just like horses in a stable learning from the other horses. If you’re a rich company with an interest in racing, you aren’t going to assemble a stable piece by piece. You’ll buy your way into an established stable. In the end, all the horses end up in a stable owned by someone. Just make sure your horse is the right one to bet on.

That’s Using Your Embrane


Cisco announced their intent to acquire Embrane last week. Since they did it on April 1st, there was an initial thought that it might be a prank. But given that Cisco doesn’t really do April Fools jokes, it was quickly determined to be the real deal. More importantly, the Embrane acquisition plugs a very important hole in ACI that I have been worried about for a while.

Everybody Play Nice

Application Centric Infrastructure (ACI) is a great idea that works on the principle that Cisco can get multiple disparate systems to work together to “program” the underlying network to rapidly deploy applications and create policies that allow systems to be provisioned and reconfigured with a minimum of effort.

That’s a great idea in theory. And if you’re only working with Cisco gear it’s an easy thing to pull off. Provided you can easily integrate the ASA operating system with IOS and NX-OS. That’s not an easy chore, and all those business units work for the same company. Can you imagine how hard it would be to integrate with an external third party? Even one that is friendly to Cisco? What about a company that only implements the bare minimum functionality to make ACI operational?

ACI is predicated on the idea that all the systems in the network are going to work together to accomplish the goal of policy programming. That starts falling apart when systems are difficult to integrate or refuse to be a part of ACI. Sure, you could program around them. It wouldn’t take much to do an end run around an unruly switch or router. But what about a firewall or load balancer?

Those devices are more important to security and scalability of an application. You can’t just cut them out. You may even have regulations that require you to include them inline with the application. That means headaches if you are forced to work with something that won’t completely integrate.

Bring Your Own Toys

Enter Embrane. Embrane’s heleos platform gives Cisco a stable of software firewalls and load balancers that can be spun up and deployed on demand. That means that unruly hardware can be bypassed when necessary. If your firewall doesn’t like ACI or won’t implement the shims needed to make them play nice, all you need to do is spin up an Embrane firewall. Since Embrane was integrating with ACI even before the acquisition, you know that everything is going to work just fine.

You can also use the Embrane Elastic Services Manager (ESM) to help manage those devices and reclaim them as needed. That sounds like a no-brainer, but if you ever find yourself booting a virtual system on a cluster that has charge-back enabled, or worse, booting it on a public cloud provider and forgetting about it, you’ll find that using a lifecycle manager to avoid hundreds or thousands of dollars in charges is a great idea. ESM can also help you figure out how utilized your devices are and gives you a roadmap to add capacity when it’s needed. That way you never have to answer a phone call complaining the new application is running “slow”.


Tom’s Take

Embrane’s acquisition makes all the sense in the world. Cisco had put up a stake in the company in their last funding round. That could be seen as an initial investment to keep Embrane working down the ACI path instead of moving off onto other ideas. Now, Cisco makes good on that investment by bringing the Embrane team back in house, for a while at least. Cisco gets a braintrust that knows how to make on-demand SDN work.

It’s no shock that Embrane is going to be rolled into the INSBU that houses Insieme. These two teams are going to be working together very closely in the coming months to push the Embrane technology into the core of ACI and provide it as an offering to get potential customers off the fence and into the solution. More options for configuring policy-based networks is always a great carrot for customers. Overcoming objections about incompatible hardware makes selling the software of ACI a no-brainer.