Pushing Everyone’s Buttons In IT


We have officially reached the point in our long and storied IT careers where we, as old fogies, have earned the right to complain about the next generation of users and professionals. Just as the graybeards before us complained about the way we did things, so too is it now our turn to moan about the state of affairs. Today, I’d like to point out how reducing IT to the pushing of simple buttons is destroying the way we do things.

Easy Buttons

The fact that IT work can now be distilled into a series of simple button-pushing exercises is thrilling. We’ve spent a lot of time and effort building devices and frameworks that take the hard part out of building devices and frameworks. We no longer have to invent languages to build things or hardware to do things. Instead, we can refine our programming capabilities or use general purpose hardware in new combinations to provide environments for our users.

That’s one of the things that is driving people to the cloud. Cloud isn’t just about exciting hardware or keeping your data in other places. It is just as much about predictable, repeatable frameworks and workflows that let people accomplish tasks without much thought. Just like the assembly line of the eighties, we are taking boring, repetitive tasks away from people and giving them to machines that don’t get bored and love repetition. Doing that drives costs down and makes people more productive.

But what happens when those processes break down? What happens when the magic smoke is let out of the machine? That’s when the real issues start cropping up. Perhaps the framework developers are great at figuring out what exactly went wrong in their solution. However, they’re likely clueless about all of the things that their solution is built upon. Imagine if a mechanic couldn’t diagnose the engine or the various subsystems of your car. You’d have a fit, right?

The troubleshooting ability of modern framework engineers extends only to the edge of their solution. Once you get into a problem with a service they’ve leveraged, such as AWS, then the buck is passed and the troubleshooting must move to a new team. The problem with having your teams build next-generation IT services on top of existing things is that they lose visibility into the things they didn’t build. It becomes a vicious cycle when those first-level services aren’t reliable enough to keep your solution running.

Push A Button, Push A Button!

As bad as things are for IT departments in a push-button world, it’s even crazier for users in the same boat. That’s because users understand two extremes of service delivery. Either the thing is working or it isn’t. There is no in between. In the old days of non-instant IT, users would just keep asking until a thing was done.

Today, things are much different. Automation and orchestration have allowed frameworks and platforms to instantly create services. That means users have been able to create their own platform for building things. So anything that isn’t instant is broken. Take, for example, a recent discussion of bandwidth from the spring ONUG meeting. An IT professional was frustrated that it took the better part of a day to transfer 1 petabyte of information across the country. He wasn’t upset that it failed, mind you. He was mad because it didn’t happen in five minutes. Non-instant things are broken.
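For a sense of just how unreasonable that expectation is, a little back-of-the-envelope math helps. The sketch below (assuming a decimal petabyte of 10^15 bytes and perfectly sustained throughput, ignoring protocol overhead) works out the line rate each window would require:

```python
# Back-of-the-envelope math only: sustained throughput needed to move 1 PB
# in a given window. Assumes a decimal petabyte (10^15 bytes) and ignores
# protocol overhead, so real-world numbers would be even worse.

def required_gbps(petabytes: float, hours: float) -> float:
    """Sustained throughput in gigabits per second."""
    bits = petabytes * 1e15 * 8
    seconds = hours * 3600
    return bits / seconds / 1e9

print(f"1 PB in 24 hours:  {required_gbps(1, 24):>10,.0f} Gbps")      # ~93 Gbps
print(f"1 PB in 5 minutes: {required_gbps(1, 5 / 60):>10,.0f} Gbps")  # ~26,667 Gbps
```

Moving a petabyte in a day already implies a sustained link in the neighborhood of 90-100 Gbps. Doing it in five minutes would take tens of terabits per second. The complaint wasn’t about a broken transfer. It was about physics.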

The story repeats itself over and over again. Networking resources that can’t be automatically provisioned are broken. Cloud services that need more than a few minutes to spin up aren’t working correctly. Mobile apps and sites that don’t instantly pop up on your phone must be working incorrectly. The patience level of a user isn’t even a tenth of that of the average IT professional. IT pros know why something is running slow. Users fall back on slow things being broken.


Tom’s Take

The problem is investment. Given an infinite amount of funding, everything can be fast. But people are trying to find ways to not pay for things in today’s IT world. Maybe they want to pay for AWS because it works. But OpenStack gives the promise of AWS for free. Except things like OpenStack and Ansible aren’t free. The currency you use to pay for them is time.

Time is just as precious a commodity as money. We don’t get refunds on time. We can’t ask for discounts on time. We can only invest and hope that there is a payoff down the road. The real outcome of this time investment looks very similar to the environment we have today. Things should just work and allow us to build new things on top of them. But that missing time investment is the key to the whole enterprise. If we don’t spend the time building and extending things, then we won’t have the expertise we need to fix things when they break. That time investment also helps everyone appreciate just how hard it is to build things in the first place. And that’s something no button will ever duplicate.

Open Networking Needs to Be Interchangeable


We’re coming up quickly on the fall meeting of the Open Networking User Group, which is a time for many of the members of the financial community to debate the needs of modern networking and provide a roadmap and use case set for networking vendors to follow in the coming months. ONUG provides what some technology desperately needs – a problem to which it can be applied.

Open Or Something Like It

We’ve already started to see the same kind of non-open solution building that plagued the early networking years creeping into some aspects of our new “open” systems. Rather than building on what we consider to be tried-and-true building blocks, we instead turn to proprietary solutions that promise “magic” when it comes to configuration and maintenance. Should your network provide the magic? Or is that your job?

Magical is what the network should look like to a user, not to the admins. Think about the networking in cloud providers like AWS and MS Azure. The networking there is a very simple model that hides complexity. The average consumer of AWS services doesn’t need to know the specifics of configuration in the underlay of Amazon’s labyrinth of the cloud. All that matters is traffic goes where it is supposed to go and arrives when it is supposed to be there.

Let’s apply those same kinds of lessons to open networks in our environments. What we need isn’t a magic bullet that makes everything turn into a checkbox or button that does mysterious things behind a curtain. What we really need is an open system that allows us to build a system that can be reduced to boxes and buttons. That requires a kind of interoperation that isn’t present in the first generation of software-driven networking.

This is also one of the concerns present in policy definitions and models like those found in Cisco ACI. In order for these higher-order systems to work efficiently, the majority of the focus needs to be on the definition of actions and the execution of those policies. What can’t occur is a large amount of time spent fixing the interoperation between pieces in the policy underlay.

Think about your current network. Do you spend most of your time focused on the packets flowing between applications? Or are you spending a higher percentage of your time fixing the pathways between those applications? Optimizing the underlay for those flows? Trying to figure out why something isn’t working over here versus why it is working over there?

Networking Needs Eli Whitney

Networking isn’t open the way that it needs to be. It’s as open as manufacturing was before the invention of interchangeable parts. Our systems are cobbled together contraptions of unique parts and systems that collapse when a single piece falls out of place. Instead of fixing the issue and restoring sanity, we are forced to exert extra effort molding the new pieces to function like the old.

Truly open networking isn’t just about the software riding on top of the underlay. It’s about making the interfaces said software interacts with seamless enough to swap parts and pieces and allow the system to continue to function without major disruption. We can’t spend our time tinkering with why the API isn’t accepting instructions or reconfiguring the markup language because the replacement part is a different model number.
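To put the interchangeable-parts idea in concrete terms, here is a minimal sketch of the abstraction it implies: the orchestration layer codes against one common interface, and vendor-specific drivers can be swapped underneath it without touching anything above. The interface and driver names here are hypothetical illustrations, not any vendor’s actual API.

```python
# Hypothetical vendor-agnostic switch interface. The orchestration layer only
# ever talks to SwitchDriver; swapping Vendor A for Vendor B should not
# require rewriting anything that calls it.
from abc import ABC, abstractmethod

class SwitchDriver(ABC):
    @abstractmethod
    def create_vlan(self, vlan_id: int, name: str) -> None: ...

    @abstractmethod
    def add_port_to_vlan(self, port: str, vlan_id: int) -> None: ...

class VendorADriver(SwitchDriver):
    # In a real system these methods would translate into Vendor A's API or CLI.
    def create_vlan(self, vlan_id: int, name: str) -> None:
        print(f"vendor-a: vlan {vlan_id} name {name}")

    def add_port_to_vlan(self, port: str, vlan_id: int) -> None:
        print(f"vendor-a: interface {port} switchport access vlan {vlan_id}")

def provision_access_port(driver: SwitchDriver, port: str, vlan_id: int) -> None:
    # The caller never knows or cares which vendor is underneath.
    driver.create_vlan(vlan_id, f"vlan{vlan_id}")
    driver.add_port_to_vlan(port, vlan_id)

provision_access_port(VendorADriver(), "Ethernet1/1", 100)
```

The day a replacement switch means writing a new driver instead of re-plumbing the whole system is the day the parts are truly interchangeable.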

When networks are open enough that they work the way AWS and Azure work, without massive interference on our part, that will be a truly landmark day. That day will mark the moment when our networks become focused on service delivery instead of component integration. The openness in networking will lead us to stop worrying about it. Not because someone built a magic proprietary system that works now with three other devices and will probably be forgotten in another year. But instead because networking vendors finally discovered that solving problems is much more profitable than creating roadblocks.


Tom’s Take

I’ve been very proud to take part in ONUG for the past few years. The meetings have given me an entirely new perspective on how networking is viewed by users and consumers. It’s also a great way to get in touch with people who are doing networking in unique environments with exacting needs. ONUG has also helped forward the cause of open networking by providing a nucleus for users to bring their requirements to the group that needs to hear them most of all.

ONUG can continue to drive networking forward by insisting that future networking developments are open and interoperable at a level that makes hardware inconsequential. No standards body can exert that influence. It comes from users voting with dollars, and ONUG represents some deep purse strings.

If you are in the New York area and would like to attend ONUG this November 4th and 5th, you can use the code TFD30 to get 30% off the conference registration cost. And if you tell them that Tom sent you, I might be able to arrange for a nice fruit basket as well.

 

Open Choices In Networking


I had an interesting time at the spring meeting of the Open Networking User Group (@ONUG_) this past week. There were lots of discussions about networking, DevOps, and other assorted topics. One that caught me by surprise was some of the talk around openness. These tweets from Lisa Caywood (@RealLisaC) were especially telling:

After some discussion with other attendees, I think I’ve figured it out. People don’t want an open network. They want choice.

Flexible? Or Predictable?

Traditional networking marries software and hardware together. You want a Cisco switch? It runs IOS or NX-OS. Running Juniper? You can have any flavor of OS you want…as long as it’s Junos. That has been the accepted order of things for decades. Flexibility is traded for predictability. Traditional networking vendors give you many of the tools you need. If you need something different, you have to find the right mix of platform and software to get your goal accomplished. Mixing and matching is almost impossible.

This sounds an awful lot like the old IBM PC days. The same environment that gave rise to whitebox computers. We have a whitebox switching movement today as well for almost the same reasons – being able to run a different OS on cheaper hardware to the same end as the traditional integrated system. In return, you gain back that flexibility that you lost. There are some tradeoffs, however.

In theory, a whitebox switch is marginally harder to troubleshoot than a traditional platform. Which combination of OS and hardware are you running? How do those things interact to create bugs? Anyone that has ever tried to install USB device drivers on Windows knows that kind of pain. Getting everything to work right can be rough.

In practice, the support difference is negligible. Traditional vendors have a limited list of hardware, but the numerous versions of software (including engineering special code) interacting with those platforms can cause unforeseen consequences. Likewise, most third-party switch OS vendors keep a tight hardware compatibility list (HCL) to ensure that everything works well together.

People do like flexibility. Giving them options means they can build systems to their liking. But that’s only a part of the puzzle.

The Problem Is Choice

Many of the ONUG attendees I talked to liked the idea of whitebox switching. They weren’t entirely enamoured, however. When I pressed a bit deeper, a pattern started to emerge. It sounded an awful lot like this:

I don’t want to run Vendor X Linux on my switch. I want to run my Linux on a switch!

That sentiment highlighted the real issue. Open networking proponents don’t want systems built on open source networking that enhance the work of all. What they want is a flexible network that lets them run what they want on the hardware they choose.

The people that attend conferences like ONUG don’t like rigid choice options. Telling them they can run IOS or Junos is like picking the lesser of two evils. These people want to have a custom OS with the bare minimum needed to support a role in the network. They are used to solving problems outside the normal support chain. They chafe at the idea of being forced into a binary decision.

That goes back to Lisa’s tweets. People don’t want a totally open network running Quagga and other open source solutions. They want an open architecture that lets them rip and replace solutions based on who is cheaper that week or who upset them at the last account team meeting. They want the freedom to use their network as leverage to get better deals.

It’s a whole lot easier to get a better discount when you can legitimately threaten to have the incumbent thrown out and replaced relatively easily. Even if you have no intentions of doing so. Likewise, new advances in whitebox switching give you leverage to replace sections of the network and have feature parity with traditional vendors in all but a few corner cases. It seems to be yet another redefinition of open.


Tom’s Take

Maybe I’m just a cynic. I support development of software that makes the whole world better. My idea of open involves people working together to make everything better. It’s not about using strategies to make just my life easier. Enterprises are big consumers of open technologies with very little reciprocity outside of a few forward thinkers.

Maybe the problem is that we’ve overloaded open to mean so many other things that we have cognitive dissonance when we try to marry the various open ideas together? Open network architecture is easy as long as you stick to OSPF and other “standard” protocols. Perhaps the problem of choice is being shortsighted enough to make the wrong one.

 

Riding the SD-WAN Wave


Software Defined Networking has changed the way that organizations think about their network infrastructure.  Companies are looking at increasing automation of mundane tasks, orchestration of policy, and even using whitebox switches running operating systems unbundled from the hardware.  A new class of technologies coming to market hopes to reduce complexity and cost for the Achilles’ heel of many enterprises: the Wide Area Network (WAN).

Do You WANt To Build A Snowman?

The WAN has always been a sore spot for enterprise networks.  It’s necessary to connect your organization to the world.  If you have remote sites or branch locations, it is critical for daily operations.  If you have an e-commerce footprint, your WAN connection needs to be able to handle the generated traffic.  But good WAN connectivity costs money.  Lots of money.

WAN protocols are constantly being refined to come up with the fastest possible transmission and the highest possible uptime.  Frame Relay, Asynchronous Transfer Mode (ATM) and Multi-Protocol Label Switching (MPLS) are a succession of technologies that have shaped enterprise WAN connectivity for over a decade.  They have their strengths and weaknesses.  But it is difficult to build an enterprise WAN without one.

Some customers can’t get MPLS connectivity.  Or even Frame Relay, for that matter.  Their locations are too remote or the cost of having the connection installed is far above the return on investment.  These customers are often forced to resort to consumer-class connections, like cable modems, Digital Subscriber Line (DSL), or even 4G/LTE modem uplinks.  While cheaper and easier to install, these solutions are often not as robust as their business-grade counterparts.  And when it comes to support on a down circuit…

Redefining the WAN

How does Software Defined WAN (SD-WAN) help?  SD-WAN technologies from companies like Silver Peak, CloudGenix, and Viptela function like overlay networks for the WAN.  They take the various inputs that you have, such as MPLS, cable, and 4G/LTE networks.  These inputs are then arranged in such a way as to allow you to intelligently program how traffic will behave on the links.  If you want only critical business traffic on the MPLS circuit during business hours you can do that.  If you want to ensure the 4G/LTE uplink is only used in the event of an emergency outage, you can do that too.  You can even program various costs and metrics into the system to help you make decisions about when a given link would be a better economic decision given the time of day or amount of transferred data.
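To make that idea a bit more concrete, here is a minimal sketch of the kind of path-selection logic an SD-WAN policy expresses.  Everything in it – link names, traffic classes, selection rules – is hypothetical and purely illustrative; real products expose this as policy you configure, not code you write:

```python
# Illustrative sketch of SD-WAN path selection: pick an uplink for a flow
# based on traffic class, business hours, and link health. Entirely
# hypothetical; not any vendor's actual policy model.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    kind: str          # "mpls", "cable", or "lte"
    up: bool
    cost_per_gb: float

def choose_link(links: list[Link], traffic_class: str, business_hours: bool) -> Link:
    healthy = [l for l in links if l.up]
    if not healthy:
        raise RuntimeError("no usable uplinks")
    # Critical traffic rides MPLS during business hours if it is available.
    if traffic_class == "critical" and business_hours:
        mpls = [l for l in healthy if l.kind == "mpls"]
        if mpls:
            return mpls[0]
    # LTE stays reserved as a last resort; otherwise pick the cheapest healthy link.
    non_lte = [l for l in healthy if l.kind != "lte"]
    candidates = non_lte or healthy
    return min(candidates, key=lambda l: l.cost_per_gb)

links = [Link("mpls-1", "mpls", True, 0.90),
         Link("cable-1", "cable", True, 0.05),
         Link("lte-1", "lte", True, 4.00)]
print(choose_link(links, "critical", business_hours=True).name)   # mpls-1
print(choose_link(links, "bulk", business_hours=False).name)      # cable-1
```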

You’re probably saying to yourself, “But I can do all of that today.” And you would be right. But all of this has to happen manually, or at the very least requires a lot of programming.  If you’ve ever tried to configure OER/PfR on a Cisco router you know what I’m talking about.  And that’s just one vendor’s equipment.  What if there are multiple devices in play?  How do you configure the edge routers for fifty sites?  What happens when a circuit goes down at 3 a.m.?  Having a simple interface for making decisions or even the ability to script actions based on inputs makes the system much more flexible and responsive.
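As a rough illustration of what scripting those actions might look like at scale, here is a hedged sketch that pushes a backup-link policy to every branch through a hypothetical SD-WAN controller REST API.  The endpoint, payload, and site model are invented for the example; no actual vendor’s API is implied:

```python
# Hypothetical example: push a "LTE as backup only" policy to every branch
# site through an imaginary SD-WAN controller REST API.
import requests

CONTROLLER = "https://sdwan-controller.example.com/api/v1"  # hypothetical endpoint
POLICY = {
    "name": "lte-backup-only",
    "match": {"link_type": "lte"},
    "action": {"use": "backup", "activate_when": "primary_down"},
}

def apply_policy_to_all_sites(session: requests.Session) -> None:
    sites = session.get(f"{CONTROLLER}/sites", timeout=10).json()
    for site in sites:
        resp = session.post(f"{CONTROLLER}/sites/{site['id']}/policies",
                            json=POLICY, timeout=10)
        resp.raise_for_status()
        print(f"Applied policy to {site['name']}")

if __name__ == "__main__":
    with requests.Session() as s:
        s.headers.update({"Authorization": "Bearer <token>"})  # placeholder credential
        apply_policy_to_all_sites(s)
```

Whether it happens in a script like this or behind a checkbox in a controller UI, the point is the same: the decision gets made once and applied everywhere, even at 3 a.m.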

It all comes down to a simple number for all parties involved.  For engineering, the amount of time spent configuring and maintaining complex WAN connectivity will be reduced.  Engineers love not needing to spend time on things.  For the decision makers (and bean counters), it all comes down to money.  SD-WAN technologies reduce costs by better utilizing existing infrastructure.  Eventually, their analysis can allow you to reduce or remove unnecessary connectivity.  That means more money in the pockets of the people that want the money.


Tom’s Take

I’ve referred to WAN applications as the “hello world” for SDN.  That’s because I saw so many people demoing them when SDN was first being talked about.  Cisco did this at Cisco Live 2012 in San Diego.  SD-WAN didn’t really become a concrete thing in my mind until it was the topic of discussion at the spring ONUG meeting.  Those are the people with the money.  And they are looking at the cost savings and optimization from SD-WAN technologies.  You’d better believe that the first wave of SD-WAN that you’ve seen in the last couple of months is just the precursor to a wider look at connectivity in general.  Better get ready to surf.

SDN 101 at ONUG Academy


Software defined networking is king of the hill these days in the greater networking world.  Vendors are contemplating strategies.  Users are demanding functionality.  And engineers are trying to figure out what it all means.  What’s needed is a way for vendor-neutral parties to get together and talk about what SDN represents and how best to implement it.  Most of the talk so far has been at vendor-specific conferences like Cisco Live or at other conferences like Interop.  I think a third option has just presented itself.

Nick Lippis (@NickLippis) has put together a group of SDN-focused people to address concerns about implementation and usage.  The Open Networking User Group (ONUG) was assembled to allow large companies using SDN to have a semi-annual meeting to discuss strategy and results.  It allows Facebook to talk to JP Morgan about what they are doing to simplify networking through use of things like OpenFlow.

This year, ONUG is taking it a step further by putting on the ONUG Academy, a day-long look at SDN through the eyes of those that implement it.  They have assembled a group of amazing people, including the founder of Cumulus Networks and Tech Field Day’s own Brent Salisbury (@NetworkStatic).  There will be classes about optimizing networks for SDN as well as writing SDN applications for the most popular controllers on the market.  Nick shares more details about the ONUG Academy here:

If you’re interested in attending ONUG either for the academy or for the customer-focused meetings, you need to register today.  As a special bonus, if you use the code TFD10 when you sign up, you can take 10% off the cost of registration.  Use that extra cash to go out and buy a cannoli or two.

I’ll be at ONUG with Tech Field Day interviewing customers and attendees about their SDN strategies as well as where they think the state of the industry is headed.  If you’re there, stop by and say hello.  And be sure to bring me one of those cannolis.