The Myth of The Greenest Field

A fun anecdote: I had to upgrade my home landline (I know, I know) from circuit switched to packet switched last week. I called the number I was told to call and I followed the upgrade procedure. I told them exactly what I wanted – the bare minimum necessary to move the phone circuit. No more. No less.

When the technician arrived to do the upgrade, he didn’t seem to know what was going on. Instead of giving me the replacement modem I asked for, he tried to give me their “upgraded, Cadillac model” home media gateway router. I told him that I didn’t need it. I had a perfectly good router/firewall. I had wireless in my house. I didn’t need this huge monstrosity. Yet, he persisted. No amount of explanation from me could make him understand I neither wanted nor needed what he was trying to install.

Finally, I gave in. I let him finish his appointment and move on. Once he was gone, I disassembled the router and took it to the nearest cable company store. I walked in and explained exactly what I wanted and what I needed. It took the techs there less than five minutes to find my new device without the wireless or media gateway functionality. They provisioned it through their system and gave it to me. I plugged it in at home and everything worked just the way that it had before. No fuss.

The Field Isn’t Always Greener

Why is this story important? Because it should inform us as networking and systems professionals that there is no such thing as a truly greenfield deployment. Even the greenest field is just a covering for the brown dirt below. If you’re thinking to yourself, “But what about those really green installs with new sites or new organizations?” you obviously aren’t thinking about the total impact of what makes an install non-greenfield.

We tend to think of technology as the limiting factor in a deployment. How do we make the New Awesome System work with the Old Busted Stuff? How can we integrate the automated, orchestrated, programmable thingy with the old punchcard system that still requires people to manually touch things to work? When we try to decide how to make it all work together, our challenge is totally focused on the old tech.

But what if the tech isn’t the limiting factor? What if there are a whole host of other things that make this field of green a little harder to work with? What if it’s a completely new site but still requires a modem connection to process credit cards? What if the branch office computers have a mandate to run an anti-virus program and the branch bought all OS X or Linux machines? What if the completely new company wants you to setup a new office but only has $1000 budgeted for networking gear?

What Can Brown Do For You?

The key to a proper installation at a so-called “brownfield” site is to do your homework. You have to know before you ever walk in the door what is waiting for you. You have to have someone do a site evaluation. You need to know what tech is waiting for you and how it all works together. If the organization that you are doing the installation for can’t produce that information, you either need to sell them a service to do it for them or be prepared to walk away.

At the same time, you also need to account for all the other non-technical things that can crop up. You need to get buy-in from management and IT for the project you’re doing. If not, you’re going to find that management and IT are going to use you as a scapegoat for every problem that has ever cropped up in the network. You’ll be a pariah before you ever flip a switch or plug in a cable. Ensuring that you have buy-in at all levels is the key to ensuring that everyone is on the same page.

You also need to make sure that you’re setting proper expectations. What are people expecting from this installation? Are their assumptions about the outcome reasonable? Are you going to get stuck holding the bag when the New Awesome Thing fails to live up to the lofty goals it was sold on? Note that this isn’t always the “us vs. them” mentality of sales and engineering either. Sometimes a customer has it in their head that something is going to do a thing or act a certain way and there’s nothing you can do to dissuade the customer. You need to know what they are expecting before you ever try to install or configure a device.


Tom’s Take

Greenfield sites are the unicorn that is both mythical and alluring to us all. We want to believe that we will one day not have to work under constraints and be free to set things up in a way that we want. Sadly, much like that unicorn, we all know that true greenfield sites will never truly exist. You’re always going to be working under some kind of constraint. Some kind of restriction. It’s not always technical in nature either. But proper planning and communication will prevent your brown grass deployment from becoming a quagmire of quicksand.

Don’t Be My Guest

I’m interrupting my regularly scheduled musing about technology and networking to talk today about something that I’m increasingly seeing come across my communications channels. The growing market for people to “guest post” on blogs. Rather than continually point folks to my policies on this, I thought it might be good to break down why I choose to do what I do.

The Archive Of Tom

First and foremost, let me reiterate for the record: I do not accept guest posts on my site.

Note that this has nothing to do with your skills as a writer, your ability to create “compelling, fresh, and exciting content”, or your particular celebrity status as the CTO/CIO/COMGWTFBBQO of some hot, fresh, exciting new company. I’m sure if Kurt Vonnegut’s ghost or J.K. Rowling wanted to make a guest post on my blog, the answer would still be the same.

Why? Because this site is the archive of my thoughts. Because I want this to be an archive of my viewpoints on technology. I want people to know how I’ve grown and changed and come to love things like SDN over the years. What I don’t want is for people to need to look at a byline to figure out why the writer suddenly loves keynotes or suddenly decides that NAT is the best protocol ever. If the only person that ever writes here is me, all the things here are my voice and my views.

That’s not to say that the idea of guest posts or multiple writers of content is a bad thing. Take a look at Packet Pushers for instance. Greg, Ethan, and Drew do an awesome job of providing a community platform for people that want to write. If you’re not willing to set up your own blog, Packet Pushers is the next best option for you. They are the SaaS version of blogging – just type in the words and let the magic happen behind the screen.

However, Packet Pushers is a collection of many different viewpoints and can be confusing sometimes. The editorial staff does a great job of keeping their hands off the content outside of the general rules about posts. But that does mean that you could have two totally different viewpoints on a topic from two different writers that are posted at the same time. If you’re not normally known as a community content hub, the whiplash between these articles could be difficult to take.

The Dark Side Of Blogging

If the entire point of guest posting was to increase community engagement, I would very likely be looking at my policy and trying to find a way to do some kind of guest posting policy. The issue isn’t the writers, it’s what the people doing the “selling” are really looking for. Every time I get a pitch for a guest post, I immediately become suspicious of the motives behind it. I’ve done some of my own investigation and I firmly believe that there is more to this than meets the eye.


Pitch: Our CEO (Name Dropper) can offer your blog an increase in traffic with his thoughts on the following articles: (List of Crazy Titles)

Response: Okay, so why does he need to post on this blog? What advantage could he have for posting here and not on the corporate blog? Are you really trying to give me more traffic out of the goodness of your own heart? Or are you trying to game the system by using my blog as a lever to increase his name recognition with Google? He gains a lot more from me than I ever will from him, especially given that your suggested blog post titles are nowhere close to the content I write about.


Pitch: We want to provide an article for you to post under your own name to generate more visibility. All we ask is for a link back to our site in your article.

Response: More gaming the system. Google keeps track of the links back to your site and where they come from, so the more you get your name out there the higher your results. But as Google shuts down the more nefarious avenues, companies have to find places that Google actually likes to put up the links. Also, why does this link come wrapped in some kind of link shortener? Could it be because there are tons of tracking links and referral jumps in it? I would love to push back and tell them that I’m going to include my own link with no switches or extra parts of the URL and see how quickly the proposal is withdrawn when your tracking systems fail to work the way you intend. That’s not to say that all referral links are bad, but you’d better believe that if there’s a referral link, I put it there.


Pitch: We want to pay you to put our content on your site

Response: I know what people pay to put content on major news sites. You’re hoping to game the system again by getting your content up somewhere for little to nothing compared to what a major content hub would cost. Why pay for major exposure when you can get 60% of that number of hits for a third of the cost? Besides, there’s no such thing as only taking money once for a post. Pretty soon everyone will be paying and the only content that will go up will be the kind of content that I don’t want on my blog.


Tom’s Take

If you really want to make a guest post on a site, I have some great suggestions. Packet Pushers or the site I help run for work, GestaltIT.com, are great community content areas. But this blog is not the place for that. I’m glad that you enjoy reading it as much as I enjoy writing it. But for now and for the foreseeable future, this is going to be my own little corner of the world.


Editor Note:

The original version of this article made reference to Network Computing in an unfair light. My description of their publishing model was completely incorrect, and the error was totally mine, due to a failure to do proper research. I have removed the incorrect information from this article after a conversation with Sue Fogarty.

Network Computing has a strict editorial policy about accepting content, including sponsored content. Throughout my relationship with them, I have found them to be completely fair and balanced. The error contained in this blog post was unforgivable and I apologize for it.

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer them in terms of their network. Things like network slicing were the first big pieces to be put up before things like orchestration, programmability, and APIs were really brought to the fore. People were trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.
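To make that concrete, here’s a minimal sketch of the controller model in Python. The controller URL, endpoint path, and payload shape are hypothetical stand-ins rather than any vendor’s actual API, but the idea is the same: define the SSID once and let the controller push it to every AP it manages.

```python
# Hypothetical controller API sketch: one call provisions an SSID that the
# controller then broadcasts from every AP it manages. Endpoint and payload
# are illustrative assumptions, not a real vendor API.
import requests

CONTROLLER = "https://wlc.example.com/api/v1"  # hypothetical controller
TOKEN = "REPLACE_ME"                           # auth token obtained out of band

def broadcast_new_ssid(name: str, vlan: int, psk: str) -> None:
    """Create an SSID profile once; the controller pushes it to all joined APs."""
    payload = {
        "ssid": name,
        "vlan": vlan,
        "security": {"type": "wpa2-psk", "psk": psk},
    }
    resp = requests.post(
        f"{CONTROLLER}/wlans",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(f"SSID '{name}' queued for broadcast on all managed APs")

if __name__ == "__main__":
    broadcast_new_ssid("guest-wifi", vlan=50, psk="ChangeMe123")
```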

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.
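As a rough illustration of what “classify close to the point of origin” means, here’s a toy Python sketch of an edge device stamping DSCP values based on an application policy. The application names, match rules, and markings are assumptions for the sake of the example; real systems use deep packet inspection or application signatures on the AP or edge device.

```python
# Toy classifier-and-marker: the edge inspects a flow, matches it against an
# application policy, and stamps a DSCP value so upstream devices don't have
# to guess. Ports, hostnames, and markings here are illustrative assumptions.

DSCP_EF = 46    # expedited forwarding, typically voice
DSCP_AF21 = 18  # assured forwarding, transactional enterprise apps
DSCP_BE = 0     # best effort, e.g. general web browsing

APP_POLICY = {
    "voice": DSCP_EF,
    "enterprise-app": DSCP_AF21,
    "web-browsing": DSCP_BE,
}

def classify(flow: dict) -> str:
    """Very rough application match; real edges use DPI or app signatures."""
    if flow.get("dst_port") == 5060 or flow.get("protocol") == "rtp":
        return "voice"
    if flow.get("dst_host", "").endswith(".corp.example.com"):
        return "enterprise-app"
    return "web-browsing"

def mark(flow: dict) -> dict:
    """Attach the application label and DSCP value to the flow record."""
    app = classify(flow)
    return {**flow, "app": app, "dscp": APP_POLICY[app]}

print(mark({"dst_port": 5060, "dst_host": "pbx.corp.example.com"}))
# {'dst_port': 5060, 'dst_host': 'pbx.corp.example.com', 'app': 'voice', 'dscp': 46}
```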

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying the applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean making sure web browsing traffic is sent through local internet links while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.
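A hedged sketch of that path decision, building on the classification example above: once a flow carries an application label, the SD-WAN edge can break ordinary web traffic out locally and keep main-site traffic on an encrypted overlay as long as the tunnel meets its SLA. The thresholds and path names are assumptions, not any product’s defaults.

```python
# Hypothetical SD-WAN path selection: local breakout for web browsing,
# encrypted overlay (with an SLA check) for traffic headed to main sites.
# SLA thresholds and path names are illustrative assumptions.

SLA_MAX_LATENCY_MS = 150
SLA_MAX_LOSS_PCT = 1.0

def pick_path(flow: dict, tunnel_stats: dict) -> str:
    """Return the forwarding path for an application-labeled flow."""
    if flow["app"] == "web-browsing":
        return "local-internet-breakout"
    # Communications and enterprise apps prefer the encrypted overlay,
    # but only while the primary tunnel still meets its SLA.
    within_sla = (tunnel_stats["latency_ms"] <= SLA_MAX_LATENCY_MS
                  and tunnel_stats["loss_pct"] <= SLA_MAX_LOSS_PCT)
    return "encrypted-tunnel-primary" if within_sla else "encrypted-tunnel-backup"

print(pick_path({"app": "voice"}, {"latency_ms": 42, "loss_pct": 0.1}))
# encrypted-tunnel-primary
```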

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of technology by increasing the capabilities of these devices while not requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. Instead, you can enjoy the benefits of using SDN and SD-WAN to accelerate your user’s applications.


Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?

Changing The Baby With The Bathwater In IT

If you’re sitting in a presentation about the “new IT”, there’s bound to be a guest speaker talking about their digital transformation or service provider shift in their organization. You can see this coming. It’s a polished speaker, usually a CIO or VP. They talk about how, with the help of the vendor on stage with them, they were able to rapidly transform their infrastructure into something modern while at the same time changing processes to deliver faster IT response, more productive workers, and increased revenue, or to transform IT from a cost center into a profit center. The key components are simple:

  1. Buy new infrastructure from $vendor
  2. Transform all processes to be more agile, productive, and better.

Why do those things always happen in concert?

Spring Cleaning

Infrastructure grows old. That’s a fact of life. Outside of some very specialized hardware, no one is using the same desktop they had ten years ago. No enterprise is still running Windows 2000 server on an IBM NetFinity server. No one is still using 10Mbps Ethernet over Thinnet to connect their offices. Hardware marches on. So when we buy new things, we as technology professionals need to find a way to integrate them into our existing technology stack.

Processes, on the other hand, are very slow to change. I can remember dealing with process issues when I was an intern for IBM many, many years ago. The process we had for deploying a new workstation had many, many reboots involved. The deployment team worked out a new strategy to streamline deployments and make things run faster. We brought our plan to the head of deployments. From there, we had to:

  • Run tests to prove that it was faster
  • Verify that the process wasn’t compromised in any way
  • Type up new procedures in formal language to match the existing docs
  • Then submit them for ISO approval

And when all those conditions were met, we could finally start using our process. All in all, with aggressive testing, it still took two months.

Processes are things that are thought to be carved in stone, never to be modified or changed in any way for the rest of time. Unless the stones break or something major causes a process change. Usually, that major change is a whole truckload of new equipment showing up on the back dock attached to a consultant telling IT there is a better way (TM) to do things.

Ceteris Paribus

Ceteris Paribus is a Latin term that means “all other things being equal”. We use it when we talk about having multiple variables in an equation and the need to hold all but one of them constant to be able to measure changes appropriately.

The funny thing about all these transformations is that it’s hard to track what actually made improvements when you’re changing so many things at once. If the new hardware is three or four times faster than your old equipment, would it show that much improvement if you just used your old software and processes on it? How much faster could your workloads execute with new CPUs and memory management techniques? How about collapsing your virtual infrastructure onto fewer and fewer physical servers because of advances there? Running old processes on new hardware can give you a very good idea of how good the hardware is. Does it meet the criteria for selection that you wanted when it was purchased? Or, better still, does it seem like you’re not getting the performance you paid for?
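If you want to keep yourself honest, the experiment matrix is simple enough to sketch out. The snippet below is only a placeholder harness with a stand-in workload, but it makes the point: change one variable per run, measure the same job each time, and attribute the delta to the one thing you changed.

```python
# Ceteris paribus as a benchmark plan: one variable changes per experiment,
# the workload stays the same. The workload below is a placeholder; in real
# life each row would be run on the environment it describes.
import time

def run_workload() -> float:
    """Stand-in for the real job (deployment, batch run, report, etc.)."""
    start = time.perf_counter()
    sum(i * i for i in range(1_000_000))  # placeholder work
    return time.perf_counter() - start

EXPERIMENTS = [
    {"hardware": "old", "process": "old"},  # baseline
    {"hardware": "new", "process": "old"},  # isolates the hardware change
    {"hardware": "old", "process": "new"},  # isolates the process change
    {"hardware": "new", "process": "new"},  # the full "transformation"
]

for config in EXPERIMENTS:
    elapsed = run_workload()
    print(f"{config}: {elapsed:.3f}s")
```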

Likewise, how are you able to know for sure that the organization and process changes you implemented actually did anything? If you’re implementing them on new hardware how can you capture the impact? There’s no rule that says that new processes can only be implemented on new shiny hardware. Take a look at what Walmart is doing with OpenStack. They most certainly aren’t rushing out to buy tons and tons of new servers just for OpenStack integration. Instead, they are taking streamlined processes and implementing them on existing infrastructure to see the benefits. Then it’s easy to measure and say how much hardware you need to expand instead of overbuying for the process changes you make.


Tom’s Take

So, why do these two changes always seem to track with each other? The optimist in me wants to believe that it’s people deciding to make positive changes all at once to pull their organization into the future. Since any installation is disruptive, it’s better to take the huge disruption and retrain for the massive benefits down the road. It’s a rosy picture indeed.

The pessimist in me wonders if all these massive changes aren’t somehow tied to the fact that they always come with massive new hardware purchases from vendors. I would hope there isn’t someone behind the scenes with the ear of the CIO pushing massive changes in organization and processes for the sake of numbers. I would also sincerely hope that the idea isn’t to make huge organizational disruptions for the sake of “reducing overhead” or “helping tell the world your story” or, worse yet, “making our product look good because you did such a great job with all these changes”.

The optimist in me is hoping for the best. But the pessimist in me wonders if reality is a bit less rosy.

Short Take – The Present Future of the Net

A few random thoughts from ONS and Networking Field Day 15 this week:

  • Intel is really, really, really pushing their fifth-generation (5G) wireless network. Note this is not Gen 5 Fibre Channel or 5GHz 802.11 networking. This is the successor to LTE and capable of pushing a ridiculous amount of data to a very small handset. This is one of those “sure thing” technologies that is going to have a huge impact on our networks. Carriers and service providers are already trying to cope with the client rates we have now. What happens when they are two or three times faster?
  • PNDA has some huge potential for networking and data analytics. Their presentation had some of the most technical discussion during the event. They’re also the basis for a lot of other projects that are in the pipeline. Make sure you check them out. The project organizers suggest that you get started with the documentation and perhaps even help contribute some writing to get more people on board.
  • VMware hosted a dinner for us that had some pretty luminary speakers like Bruce Davie and James Watters. They talked about the journey from traditional networking to a new paradigm filled with microservices and intelligence in the application layer. While I think this is the gold standard that everyone is looking toward for the future, I also think there is still quite a bit of technical debt to unpack before we can get there.
  • Another fun thought kicking around: When we look at these new agile, paradigm shifting deployments, why are they always on new hardware? Would you see the similar improvement of existing processes on new hardware? What would these new processes look like on existing things? I think this one is worth investigating.

Do Network Professionals Need To Be Programmers?

With the advent of software defined networking (SDN) and the move to incorporate automation, orchestration, and extensive programmability into modern network design, it could easily be argued that programming is a must-have skill. Many networking professionals are asking themselves if it’s time to pick up Python, Ruby or some other language to create programs in the network. But is it a necessity?

Interfaces In Your Faces

The move toward using API interfaces is one of the more striking aspects of SDN that has been picked up quickly. Instead of forcing information to be input via CLI or collected from the network by scraping that same CLI, APIs have unlocked more power than we ever imagined. RESTful APIs have given nascent programmers the ability to query devices and push configurations without the need to learn cumbersome syntax. The ability to grab this information and feed it to a network management system and analytics platform has extended the capabilities of the systems that support these architectures.
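As a small example of the difference, here’s a sketch of reading interface data and pushing a change over a RESTCONF-style interface with Python’s requests library. The hostname, credentials, and interface name are assumptions, and the exact data paths will vary by platform, so treat this as the shape of the workflow rather than a drop-in script.

```python
# Sketch of API-driven read/write: structured JSON instead of scraping CLI
# output. Device, credentials, and interface names are hypothetical.
import requests

DEVICE = "https://router1.example.com"   # hypothetical device
AUTH = ("admin", "REPLACE_ME")
HEADERS = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

# Read: the interface inventory comes back as structured data
resp = requests.get(
    f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",
    auth=AUTH, headers=HEADERS, timeout=10,
)
resp.raise_for_status()
for intf in resp.json()["ietf-interfaces:interfaces"]["interface"]:
    print(intf["name"], intf.get("description", ""))

# Write: merge a description change as data, not as CLI keystrokes
payload = {"ietf-interfaces:interface": {
    "name": "GigabitEthernet1",
    "description": "uplink-to-core",
}}
requests.patch(
    f"{DEVICE}/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1",
    json=payload, auth=AUTH, headers=HEADERS, timeout=10,
).raise_for_status()
```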

The syntaxes that power these new APIs aren’t the copyrighted CLIs that networking professionals spend their waking hours memorizing in excruciating detail. JUNOS and Cisco’s “standard” CLI are as much relics of the past as CatOS. At least, that’s the refrain that comes from one side of the discussion. The traditional networking professionals hold tight to the access methods they have experience with and can tune like a fine instrument. More progressive networkers argue that standardizing around programming languages is the way to go. Why learn a proprietary access method when Python can do it for you?

Who is right here? Is there a middle ground? Is the issue really about programming? Is the prattle from programming proponents posturing about potential pitfalls in the perfect positioning of professional progress? Or are anti-programmers arguing against attacks, aghast at an area absent archetypical architecture?

Who You Gonna Call?

One clue in this discussion comes from the world of the smartphone. The very first devices that could be called “smartphones” were really very dumb. They were computing devices with strict user interfaces designed to mimic phone functions. Only when the device potential was recognized did phone manufacturers start to realize that things other than address books and phone dialers could be created. Even the initial plans for application development weren’t straightforward. It took time for smartphone developers to understand how to create smartphone apps.

Today, it’s difficult to imagine using a phone without social media, augmented reality, and other important applications. But do you need to be a programmer to use a phone with all these functions? There is a huge market for smartphone apps and a ton of courses that can teach someone how to write apps in very little time. People can create simple apps in their spare time or dedicate themselves to make something truly spectacular. However, users of these phones don’t need to have any specific programming knowledge. Operators can just use their devices and install applications as needed without the requirement to learn Swift or Java or Objective C.

That doesn’t mean that programming isn’t important to the mobile device community. It does mean that programming isn’t a requirement for all mobile device users. Programming is something that can be used to extend the device and provide additional functionality. But no one in an AT&T or Verizon store is going to give an average user a programming test before they sell them the phone.

This, to me, is the argument for network programmability in a nutshell. Network operators aren’t going to learn programming. They don’t need to. Programmers can create software that gathers information and provides interfaces to make configuration changes. But the rank-and-file administrator isn’t going to need to pull out a Java manual to do their job. Instead, they can leverage the experience and intelligence of people that do know how to program in order to extend their network functionality.


Tom’s Take

It seems like this should be a fairly open-and-shut case, but there is a bit of debate yet left to have on the subject. I’m going to be moderating a discussion between Truman Boyes of Bloomberg and Vijay Gill of Salesforce around this topic on April 25th. Will they agree that networking professionals don’t need to be programmers? Will we find a middle ground? Or is there some aspect to this discussion that will surprise us all? I’ll make sure to keep you updated!

Building Reliability

Systems are inherently reliable. Until they aren’t. On a long enough timeline, even the most reliable system will eventually fail. How you manage that failure says a lot about the way you build your system or application. So, why is it then that we’re so focused on never failing?

Ten Feet Tall And Bulletproof

No system is infallible. Networks go down. Cloud services get knocked offline. Even Facebook, which represents “the Internet” for a large number of people, has days when it’s unreachable. When we examine these outages, we often find issues at the core of the system that cause services to be unreachable. In the most recent case of Amazon’s cloud system, it was a typo in a script that executed faster than it could be stopped.

It could also be a failure of the system to anticipate increased loads when minor failures happen. If systems aren’t built to take on additional load when the worst happens, you’re going to see bigger outages. That is a particular thorn in the side of large cloud providers like Amazon and Google. It’s also something that network architects need to be aware of when building redundant pathways to handle problems.

Take, for example, a recent demo during Aruba Atmosphere 2017. During the Day 2 keynote, CTO Partha Narasimhan wowed the crowd in the room when he disclosed that they had been doing a controller upgrade during the morning talk. Users had been tweeting, surfing, and using the Internet without much notice from anyone aside from the most technical wireless minds in the room. Even they could only see some strange AP roaming behavior as an indicator of the controllers upgrading the APs.

Aruba showed that they built a resilient network that could survive a simulated major outage caused by a rolling upgrade. They’ve done everything they can to ensure uptime no matter what happens. But the bigger question for architects and engineers is “why are we solving the problem for others?”

Why Dodge Bullets When You Don’t Have To?

As amazing as it is to build a system that can survive production upgrades with no impact on users, what are we really building when we create these networks? Are we encouraging our users to respect our technology advantage in the network or other systems? Are we telling our application developers that they can count on us to keep the lights on when anything goes wrong? Or are we instead sending the message that we will keep scrambling to prevent issues in applications from being noticeable?

Building a resilient network is easy. Making something reliable isn’t rocket science. But creating a network that is going to stay up for a long, long time without any outages is very expensive and process intensive. Engineering something to never be down requires layers of exception handling and backup systems that are as reliable as their primary counterparts.

A favorite story from the storage world involves recovery. When you initially ask a customer what their recovery point objective (RPO) is in a system, the answer is almost always “zero” or “as low as you can make it”. When the numbers are put together to include redundant or dual-active systems with replication and data assurance, the price tag of the solution is usually enough to start a new round of discussion along the lines of “how reliable can you make it for this budget?”

In the networking and systems world, we don’t have the luxury of sticker shock when it comes to creating reliability. Storage systems can have longer RPOs because lost data is gone forever. Taking the time to ensure it is properly recovered is important. But data in transmission can be retransmitted. That’s at the heart of TCP. So it’s expected that networks have near-instantaneous RPOs at no extra cost. If you don’t believe that, ask yourself what happens when you tell someone the network went down because there’s only one router or switch connecting devices together.

Instead of making systems ultra-reliable and absolving users and developers from thought, networks and systems should be as reliable as they can be made without huge additional costs. That reliability should be stated emphatically without wiggle room. These constraints should inform developers writing code so that exception handling can be built in to prevent issues when the inevitable outage occurs. Knowing your limitations is the first step to creating an atmosphere to overcome them.
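Here’s a small sketch of what that looks like from the application side, assuming a hypothetical service endpoint: the code treats a network blip as a normal event and retries with backoff instead of expecting the infrastructure to guarantee zero downtime.

```python
# The application owns its own resilience: transient network failures are
# retried with exponential backoff and jitter, then surfaced to the caller
# once the retry budget is spent. URL and budget are illustrative assumptions.
import random
import time
import requests

def call_service(url: str, attempts: int = 4, base_delay: float = 0.5):
    """Fetch JSON from a service, tolerating brief network outages."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=3)
            resp.raise_for_status()  # HTTP errors surface immediately in this sketch
            return resp.json()
        except (requests.ConnectionError, requests.Timeout):
            if attempt == attempts:
                raise  # out of budget: let the caller handle the outage
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.2)
            time.sleep(delay)

# data = call_service("https://api.example.com/orders")  # hypothetical endpoint
```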

A lesson comes from the programmers of old. When you have a limited amount of RAM, storage, or compute cycles, you can write very tight code. DOS programs didn’t need access to a cloud worth of compute. Mainframes could execute programs written on punch cards. The limitations were simple and could be overcome with proper problem solving. As compute and memory resources have exploded, so too have code bases. Rather than giving developers the limitless capabilities of the cloud without restraint, perhaps creating some limits is the proper way to ensure that reliability stays in the app instead of being bolted on to the network.


Tom’s Take

We had a lot of fun recording this roundtable. We talked about Aruba’s controller upgrade and building reliable wireless networks. But I think we also need to make sure we’re aware that continually creating protocols and other constructs in the underlay won’t solve application programming problems. Things like vMotion set networking and application development back a decade. Giving developers a magic solution to avoid building proper exception handling doesn’t make better developers. Instead, it puts the burden of uptime back on the networking team. And we would rather build the best network we can instead of building something that can solve every problem that could ever possibly be created.