It’s Not The Size of Your Conference Community

Where do you get the most enjoyment from your conference attendance? Do you like going to sessions and learning about new things? Do you enjoy more of the social aspect of meeting friends and networking with your peers? Maybe it’s something else entirely?

It’s The Big Show

When you look at shows like Cisco Live, VMworld, or Interop ITX, there’s a lot going on. There are diverse education tracks attended by thousands of people. You could go to Interop and bounce from a big data session into a security session, followed by a cloud panel. You could attend Cisco Live and never talk about networking. You could go to VMworld and only talk about networking. There are lots of opportunities to talk about a variety of things.

But these conferences are huge. Cisco and VMware both take up the entire Mandalay Bay Convention Center in Las Vegas. When in San Francisco, both of these events dwarf the Moscone Center and have to spread out into the surrounding hotels. That means it’s easy to get lost or be overlooked. I’ve been to Cisco Live and never once bumped into people from my own area who said they were going, even when we were at the same party. There are tens of thousands of people roaming the halls.

That means these conferences only work well if you can carve out your own community. Cisco Live has certainly done that over the years. There’s a community of a few hundred folks who are active on social media and have really changed the way Cisco engages with its community. VMworld has its various user groups, as well as VMUnderground, constantly pushing the envelope and creating more organic community engagement.

You Think You Know Me

The flip side is the smaller boutique conferences that have sprung up in recent years. These take a single aspect of a technology and build an event around it. You get a laser-focused event with a smaller set of attendees who share similar interests. It’s a great way to instantly get massive community involvement around an idea. Maybe it’s Monitorama. Or perhaps it’s OSCon. Or even GopherCon. You can see how these smaller communities are united around a singular subject and have great buy-in.

However, the critical mass needed to make a boutique conference happen rests much more heavily on each individual attendee. Cisco Live and VMworld are going to happen every year. There are no fewer than 10,000 – 15,000 people who would come to either no matter what. Even if 50% of last year’s attendees decided to stay home this year, the conference would still happen.

On the flip side, if 50% of DockerCon or OpenStack Summit attendees stayed home next year, you’d see mass panic in the community. People would start questioning why you’re putting on a show for 2,500 – 3,000 users. It’s one thing to do that when you’re small and just getting started. But drawing those numbers after years of growth would force a hard conversation about whether and how the event continues.

Cisco Live and VMworld are fun because of their communities. But boutique conferences exist because of their communities. It’s important to realize that drastic changes in a smaller conference community send huge ripples through the conference itself. Two hundred Twitter users don’t have much impact on the message at Cisco Live. But two hundred angry users at DockerCon can make massive changes happen. The smaller the conference, the more each member of the community is amplified.


Tom’s Take

Anyone who knows me knows that I love the community. I love seeing communities grow and change and develop their own voice. It’s why I work for Tech Field Day. It’s why I go to Cisco Live every year. It’s why I’m happy to speak at VMUnderground events. But I also realize how important the community can be to smaller events, and how quickly things can fall apart when the community is fractured or divided. It’s critical for boutique conferences to harness the power of their communities to get off the ground. But you also have to recognize how important those communities are to you in the long run. You need to cultivate them and keep them focused on making everything better for everyone.

What Happened To The CCDE?

Studying for a big exam takes time and effort. I spent the better part of three years trying to get my CCIE, with constant study and many, many attempts. And I was lucky that the CCIE Routing and Switching exam is offered five days a week at multiple sites around the world. But what happens when the rug gets pulled out from under your feet?

Not Appearing In This Testing Center

The Cisco Certified Design Expert (CCDE) is a very difficult exam. It takes all of the technical knowledge of the CCIE and bends it in a new direction. There are fun new twists like requirements determination, staged word problems, and whole new ways to make a practical design exam. Russ White made a monster of a thing all those years ago, and the team that continues to build on the exam has set a pretty high bar for quality. So high, in fact, that gaining the coveted CCDE number with its unique styling is a huge deal for the majority of people I know who have it, even those with multiple CCIEs.

The CCDE is also only offered 3-4 times a year. The testing centers are specialized Pearson centers that can offer enhanced security. You don’t have to fly to RTP or San Jose for your exam, but you can’t exactly take it at the local community college either. It’s the kind of thing that you work toward and get ready for, not the kind of spur-of-the-moment sales test that you have to take today to certify on a new product.

So, what happens if the test gets canceled? Because that’s exactly what happened two weeks ago. All those signed up to take the May 17, 2017 edition of the exam were given one week’s notice that it had been canceled. Confusion reigned everywhere. Why had Cisco done this? What was going to happen? Would it be rescheduled, or would these candidates be forced to sit the August exam dates instead? What was going on? This was followed by lots of rumors and suggestions of impropriety.

According to people who should know, the February 2017 exam had a “statistically significant pass rate increase”. For those who don’t know, Cisco tracks the passing scores of all their exams. They have people trained in the art of building tests and setting a specific passing rate. And they keep an eye on it. When too many people start passing the exam, it’s a sign that the difficulty needs to be increased. When too few people are able to hit the mark, a decision needs to be made about reducing difficulty or examining why the pass rate is so low. Those are the kinds of decisions that are made every day when the data comes back about tests.

For larger exams like the CCDE and CCIE, with far fewer test-taking opportunities, the passing rate is a huge deal. If 50% of the people who take the CCIE pass, it doesn’t say much for the “expert” status of the exam. Likewise, if only 0.5% of test takers pass the CCDE on a given attempt, it’s an indicator that the test is too hard or overly complex, less about skill demonstration and more about being “really, really tough”. The passing rate is a big deal to Cisco.

According to those reports, there was a huge spike in the passing rate for the February CCDE. Big enough to make Cisco shut down the May attempts to find out why. A few percentage points of variance in passing or failing rates now and then isn’t a huge deal. But if you have a huge increase in the number of certified individuals for one of the most challenging exams ever created, you need to find out why. That’s why Cisco went into damage control mode for May and maybe even for August.
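As a back-of-the-napkin illustration of what “statistically significant” means here, a psychometric team might run something like a two-proportion z-test comparing one sitting against the historical baseline. This is a hypothetical sketch with invented numbers, not Cisco’s actual process:

```python
from math import sqrt

def pass_rate_spike(hist_passes, hist_takers, new_passes, new_takers, z_crit=2.576):
    """Two-proportion z-test: did this sitting's pass rate jump
    significantly above the historical baseline (~99% confidence)?"""
    p_hist = hist_passes / hist_takers   # baseline pass rate
    p_new = new_passes / new_takers      # latest sitting's pass rate
    pooled = (hist_passes + new_passes) / (hist_takers + new_takers)
    se = sqrt(pooled * (1 - pooled) * (1 / hist_takers + 1 / new_takers))
    z = (p_new - p_hist) / se
    return z, z > z_crit

# Invented numbers: a ~10% historical rate, then a sitting where 30% pass.
z, flagged = pass_rate_spike(hist_passes=120, hist_takers=1200,
                             new_passes=30, new_takers=100)
print(f"z = {z:.2f}, investigate: {flagged}")   # z = 6.01, investigate: True
```

A handful of extra passes wouldn’t trip a threshold like this. Tripling the pass rate absolutely would, which is roughly the situation the reports describe.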

The Jenga Problem

So, if you’re still with me at this point, you’ve probably figured out that there’s a good possibility that someone got their hands on a copy of the CCDE exam. That’s why Cisco had to stop the most recent date: to ensure that whatever is out in the wild isn’t going to skew the exam passing rates going forward. It makes sense from a test giver’s perspective to plug the leak and ensure the integrity of the exam.

Yes, it does suck for the people that are taking the exam. That’s a lot of time and effort wasted. One of the things I kept hearing from people was, “Why doesn’t Cisco just change the test?” Sadly, changes to the CCDE are almost impossible at this point in the game.

Even written exams with hundreds of test bank questions are difficult to edit and refresh. I know because I’ve been involved in the process for many of them in the past. Getting new things added and old things removed takes time, effort, and cooperation among many people on a team. Imagine the nightmare of trying to get that done for a test where every question is hand-crafted and consists of multiple parts. Where something that is involved in Question 2 of a scenario has an impact on Question 8.
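To see why a single edit snowballs, picture each scenario as a small dependency graph: touch one question and every downstream question has to be re-reviewed. A toy sketch, with an invented question structure:

```python
from collections import deque

# Invented structure: q5 builds on q2's answer, q8 builds on both, etc.
DEPENDS_ON = {
    "q5": ["q2"],
    "q8": ["q2", "q5"],
    "q9": ["q8"],
}

def blast_radius(changed):
    """Every question that needs re-review if `changed` is edited."""
    consumers = {}              # invert edges: who consumes this question?
    for q, deps in DEPENDS_ON.items():
        for d in deps:
            consumers.setdefault(d, []).append(q)
    seen, queue = set(), deque([changed])
    while queue:
        for nxt in consumers.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(blast_radius("q2"))   # {'q5', 'q8', 'q9'} (order may vary)
```

Scale that up to four interlocking scenarios and the “just change the test” suggestion falls apart quickly.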

Remember, exams like the CCIE and CCDE are infamous for word choices mattering a great deal in a candidate’s decision making. Changing any of the words is a huge deal. Now, think about having to change a question or two. Or a scenario or two out of four. Or revise a whole exam. Now, do it all in two weeks. You can see what the CCDE team was faced with and why they had to make the decision they did. Paying customers are angry, but protecting the integrity of a complicated exam is worth more to Cisco in the long run.


Tom’s Take

What’s going to happen? It’s difficult to say. Cisco has to find out what happened and stop it immediately. Then, they have to assess what can be done to salvage the exam as it exists today. Scenarios must be created to replace known bad versions. Evaluation of those new scenarios could take a while. And in the interim, Cisco can’t just shut down the testing and certification process. It would be the end of the CCDE and cause a huge backlash in the elite community that exists to spread the good word of Cisco throughout networking circles. Cisco isn’t acting from a position of confusion, but instead from a position of caution. The wrong step at the wrong time could be disastrous.

The Myth of The Greenest Field

A fun anecdote: I had to upgrade my home landline (I know, I know) from circuit-switched to packet-switched last week. I called the number I was told to call and followed the upgrade procedure. I told them exactly what I wanted: the bare minimum necessary to move the phone circuit. No more. No less.

When the technician arrived to do the upgrade, he didn’t seem to know what was going on. Instead of giving me the replacement modem I asked for, he tried to give me their “upgraded, Cadillac model” home media gateway router. I told him that I didn’t need it. I had a perfectly good router/firewall. I had wireless in my house. I didn’t need this huge monstrosity. Yet he persisted. No amount of explanation could make him understand that I neither wanted nor needed what he was trying to install.

Finally, I gave in. I let him finish his appointment and move on. Once he was gone, I disconnected the router and took it to the nearest cable company store. I walked in and explained exactly what I wanted and what I needed. It took the techs there less than five minutes to find my new device without the wireless or media gateway functionality. They provisioned it through their system and gave it to me. I plugged it in at home and everything worked just the way it had before. No fuss.

The Field Isn’t Always Greener

Why is this story important? Because it should remind us, as networking and systems professionals, that there is no such thing as a truly greenfield deployment. Even the greenest field is just a covering for the brown dirt below. If you’re thinking to yourself, “But what about those really green installs with new sites or new organizations?” you aren’t considering the total picture of what makes an install non-greenfield.

We tend to think of technology as the limiting factor in a deployment. How do we make the New Awesome System work with the Old Busted Stuff? How can we integrate the automated, orchestrated, programmable thingy with the old punchcard system that still requires people to manually touch things to work? When we try to decide how to make it all work together, our challenge is totally focused on the old tech.

But what if the tech isn’t the limiting factor? What if there are a whole host of other things that make this field of green a little harder to work with? What if it’s a completely new site that still requires a modem connection to process credit cards? What if the branch office computers have a mandate to run an anti-virus program and the branch bought all OS X or Linux machines? What if the completely new company wants you to set up a new office but has only $1,000 budgeted for networking gear?

What Can Brown Do For You?

The key to a proper installation at a so-called “brownfield” site is to do your homework. You have to know before you ever walk in the door what is waiting for you. You have to have someone do a site evaluation. You need to know what tech is waiting for you and how it all works together. If the organization that you are doing the installation for can’t produce that information, you either need to sell them a service to do it for them or be prepared to walk away.

At the same time, you also need to account for all the other non-technical things that can crop up. You need to get buy-in from management and IT for the project you’re doing. If not, you’re going to find that management and IT will use you as a scapegoat for every problem that has ever cropped up in the network. You’ll be a pariah before you ever flip a switch or plug in a cable. Ensuring that you have buy-in at all levels is the key to ensuring that everyone is on the same page.

You also need to make sure that you’re setting proper expectations. What are people expecting from this installation? Are their assumptions about the outcome reasonable? Are you going to get stuck holding the bag when the New Awesome Thing fails to live up to the lofty goals it was sold on? Note that this isn’t always the “us vs. them” mentality of sales and engineering either. Sometimes a customer has it in their head that something is going to do a thing or act a certain way and there’s nothing you can do to dissuade the customer. You need to know what they are expecting before you ever try to install or configure a device.


Tom’s Take

Greenfield sites are the unicorn: mythical and alluring to us all. We want to believe that we will one day not have to work under constraints and will be free to set things up the way we want. Sadly, much like that unicorn, we all know that true greenfield sites will never truly exist. You’re always going to be working under some kind of constraint. Some kind of restriction. And it’s not always technical in nature. But proper planning and communication will prevent your brown-grass deployment from becoming a quagmire of quicksand.

Cisco and Viptela – The Price of Development Debt

Cisco finally pulled themselves into the SD-WAN market by acquiring Viptela on Monday. Viptela was considered one of the leading SD-WAN vendors in the market, if not the leader. That Cisco decided to pick them as an acquisition target isn’t completely surprising. But one might wonder why.

IWANna New Debt

Cisco’s premier strategy for SD-WAN up until last week was IWAN. This is their catch-all solution designed to take the various component pieces being offered by SD-WAN solutions and replicate them on Cisco hardware. IWAN has served as a vehicle for Cisco to push things like the APIC-EM solution, Cisco ONE licensing, and a variety of other enhanced technologies like NBAR and PfR.

Cisco has packaged these technologies together because they have spent a couple of decades building these protocols up to be the best at what they do in the industry. NBAR was the key to application QoS years ago. PfR and OER were the genesis of Cisco having the ability to intelligently route packets to destinations. These protocols have formed the cornerstone of their platform for many, many years.

So why is IWAN such a mess? If you have best-of-breed technology built into a router that makes packets fly across the Internet at lightning speed, how is it that companies like Viptela were eating Cisco’s lunch in the SD-WAN space? It’s because those same best-of-breed protocols are to blame for the jigsaw puzzle that is IWAN.

If you are the product manager for a protocol like NBAR or PfR, you want it to be adopted by as many people as possible. Wide adoption guarantees you’re going to have a job tomorrow or even next year. The people working on EIGRP and OSPF are safe. But if you get left behind technologically, you’re in for rough seas. Just ask the folks who managed LANE. But if you can attach yourself to a movement that’s got some steam, you’re in the driver’s seat.

At the same time, you want your protocol or product to be the best at what it does. And sometimes being the best means you don’t compromise. That’s great when you are the only thing running on the system. But when you’re trying to get protocols to work together to create something bigger, you often find that compromises are not just a good idea, they’re necessary. But how do you handle it when the product manager for NBAR and the product manager for IP SLA get into a screaming match over who is going to blink first?

Using existing protocols and products is a great idea because it means you don’t have to reinvent the wheel every time you design something. But with that wheel comes the technical debt of development. Given the opportunity to reuse something that thousands, if not millions, of dollars of R&D have gone into, companies like Cisco will jump at the chance to get more longevity out of a protocol.

Not Pokey, But Gumby

Now, let’s look at a scrappy startup like Viptela. They have to build their protocols from the ground up. Maybe they can leverage some open source projects or some basic protocol implementations to get off the ground. Even so, they are starting from essentially square one. It also means they are starting off with very little technical and development debt.

When Viptela builds their application monitoring stack or their IPSec VPN stack, they aren’t trying to build the best protocol for every possible situation that could ever be encountered by a wide variety of customers. They are just trying to build a protocol that works. And not just a protocol that works on its own. They want a protocol that works with everything else they are building.

When you’re forced to do everything from scratch, you find that you avoid making some of the same choices that you were forced to make years ago. The lack of technical and development debt also means you can take a new direction with things. Don’t want to support pre-shared key IPSec VPNs? Don’t build it into the protocol. Don’t care to have some of the quirks of PfR? Build something different that meets your needs. You have complete control.

Flexibility is why SD-WAN vendors were able to dominate the market for the past two years. They were able to adapt and change quickly because they didn’t need to keep making systems integrate on top of the tech and dev debt incurred during the product lifecycle. That let them concentrate on features that customers want, not on integrating features that management decreed must be included because a product manager was convincing in the last QBR.


Tom’s Take

In the end, the acquisition of Viptela by Cisco was as much about reducing technical and development debt in their SD-WAN offering as it was about getting ahead in the game. They needed something that could be used as-is, without relying on internal development processes. I alluded to this during our Network Collective Off-The-Cuff show. Without the spin-in model available any longer, Cisco is going to have to start making tough decisions to get things like this done. Either those decisions are made via reduction of business units without integration, or through larger dollar signs to acquire solutions that provide the cohesion they need.

Don’t Be My Guest

I’m interrupting my regularly scheduled musing about technology and networking to talk today about something that I’m increasingly seeing come across my communications channels. The growing market for people to “guest post” on blogs. Rather than continually point folks to my policies on this, I thought it might be good to break down why I choose to do what I do.

The Archive Of Tom

First and foremost, let me reiterate for the record: I do not accept guest posts on my site.

Note that this has nothing to do with your skills as a writer, your ability to create “compelling, fresh, and exciting content”, or your particular celebrity status as the CTO/CIO/COMGWTFBBQO of some hot, fresh, exciting new company. I’m sure if Kurt Vonnegut’s ghost or J.K. Rowling wanted to make a guest post on my blog, the answer would still be the same.

Why? Because this site is the archive of my thoughts. I want it to be a record of my viewpoints on technology. I want people to see how I’ve grown and changed and come to love things like SDN over the years. What I don’t want is for people to have to check a byline to figure out why the writer suddenly loves keynotes or suddenly decides that NAT is the best protocol ever. If the only person who ever writes here is me, then everything here is my voice and my views.

That’s not to say that the idea of guest posts or multiple writers of content is a bad thing. Take a look at Packet Pushers, for instance. Greg, Ethan, and Drew do an awesome job of providing a community platform for people who want to write. If you’re not willing to set up your own blog, Packet Pushers is the next best option for you. They are the SaaS version of blogging: just type in the words and let the magic happen behind the screen.

However, Packet Pushers is a collection of many different viewpoints, and that can be confusing sometimes. The editorial staff does a great job of keeping their hands off the content outside of the general rules about posts. But that does mean you could have two totally different takes on a topic from two different writers posted at the same time. If you’re not normally known as a community content hub, the whiplash between these articles could be difficult to take.

The Dark Side Of Blogging

If the entire point of guest posting were to increase community engagement, I would very likely be revisiting my policy and trying to find a way to accept some kind of guest post. The issue isn’t the writers; it’s what the people doing the “selling” are really looking for. Every time I get a pitch for a guest post, I immediately become suspicious of the motives behind it. I’ve done some of my own investigation, and I firmly believe that there is more to this than meets the eye.


Pitch: Our CEO (Name Dropper) can offer your blog an increase in traffic with his thoughts on the following articles: (List of Crazy Titles)

Response: Okay, so why does he need to post on this blog? What advantage could he have for posting here and not on the corporate blog? Are you really trying to give me more traffic out of the goodness of your own heart? Or are you trying to game the system by using my blog as a lever to increase his name recognition with Google? He gains a lot more from me than I ever will from him, especially given that your suggested blog post titles are nowhere close to the content I write about.


Pitch: We want to provide an article for you to post under your own name to generate more visibility. All we ask is for a link back to our site in your article.

Response: More gaming the system. Google keeps track of the links back to your site and where they come from, so the more you get your name out there, the higher your results. But as Google shuts down the more nefarious avenues, companies have to find places Google actually likes to put up their links. Also, why does this link come wrapped in some kind of link shortener? Could it be because there are tons of tracking links and referral jumps in it? I would love to push back and say that I’m going to include my own link, with no switches or extra parts in the URL, and see how quickly the proposal is withdrawn when the tracking systems fail to work as intended. That’s not to say that all referral links are bad, but you can better believe that if there’s a referral link here, I put it there.
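For the curious, here’s roughly what those wrapped links look like once unwrapped, and how trivially the campaign tracking can be stripped back out. The URL and values are hypothetical, though the utm_* parameter convention is real:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Common campaign/referral parameter prefixes (not an exhaustive list).
TRACKING_PREFIXES = ("utm_", "fbclid", "gclid", "ref")

def strip_tracking(url):
    """Drop campaign/referral query parameters, keep everything else."""
    parts = urlsplit(url)
    clean = [(k, v) for k, v in parse_qsl(parts.query)
             if not k.lower().startswith(TRACKING_PREFIXES)]
    return urlunsplit(parts._replace(query=urlencode(clean)))

dirty = ("https://vendor.example.com/whitepaper"
         "?id=42&utm_source=guestpost&utm_campaign=seo&ref=blogger123")
print(strip_tracking(dirty))
# https://vendor.example.com/whitepaper?id=42
```

Which is exactly why the proposals tend to dry up when you offer to supply the link yourself.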


Pitch: We want to pay you to put our content on your site

Response: I know what people pay to put content on major news sites. You’re hoping to game the system again by getting your content up somewhere for little to nothing compared to what a major content hub would cost. Why pay for major exposure when you can get 60% of that number of hits for a third of the cost? Besides, there’s no such thing as only taking money once for a post. Pretty soon everyone will be paying and the only content that will go up will be the kind of content that I don’t want on my blog.


Tom’s Take

If you really want to make a guest post on a site, I have some great suggestions. Packet Pushers, or the site I help run for work, GestaltIT.com, are great community content areas. But this blog is not the place for that. I’m glad that you enjoy reading it as much as I enjoy writing it. But for now and for the foreseeable future, this is going to be my own little corner of the world.


Editor Note:

The original version of this article made reference to Network Computing in an unfair light. My characterization of their publishing model was completely incorrect, and the error was entirely mine, due to a failure to do proper research. I have removed the incorrect information from this article after a conversation with Sue Fogarty.

Network Computing has a strict editorial policy about accepting content, including sponsored content. Throughout my relationship with them, I have found them to be completely fair and balanced. The error contained in this blog post was unforgivable and I apologize for it.

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer their networks. Network slicing was the first big piece to be put up, before orchestration, programmability, and APIs were really brought to the fore. Everyone was trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.
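If you’ve never touched a wireless controller, the declarative model looks something like this sketch: declare a WLAN once and the controller fans it out to every registered AP. No real vendor API is implied here; it’s just the shape of the idea:

```python
# Hypothetical controller model: central declarations, fan-out to APs.
class Controller:
    def __init__(self):
        self.aps = []      # registered access points

    def register(self, ap_name):
        self.aps.append(ap_name)

    def declare_ssid(self, ssid):
        """Declare the WLAN once; the controller pushes it everywhere."""
        for ap in self.aps:
            print(f"push SSID {ssid!r} -> {ap}")

ctl = Controller()
for i in range(3):                 # imagine thousands of APs here
    ctl.register(f"ap-{i:04d}")
ctl.declare_ssid("CorpWiFi")
```

Wireless pros have had this separation of control and data planes for over a decade, which is why the SDN pitch landed flat with them.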

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.
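A hedged sketch of what that abstraction might look like: classified applications map to DSCP markings in one place, instead of hand-built per-device QoS policies. The class names and mappings below are illustrative, not any vendor’s actual schema:

```python
# Illustrative application-class-to-DSCP map.
# (46 = EF for voice, 34 = AF41 for video, 18 = AF21, 0 = best effort)
DSCP_BY_APP_CLASS = {
    "voice": 46,
    "video": 34,
    "enterprise-app": 18,
    "web-browsing": 0,
}

def mark_flow(app_class):
    """Return the DSCP value a classified flow should carry."""
    return DSCP_BY_APP_CLASS.get(app_class, 0)   # default: best effort

print(mark_flow("voice"))        # 46
print(mark_flow("cat-videos"))   # 0 (unclassified -> best effort)
```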

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean sending web browsing traffic through local internet links while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.
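And the path-selection half might look like this sketch, with invented tunnel names and SLA thresholds:

```python
from dataclasses import dataclass

@dataclass
class Tunnel:
    name: str
    latency_ms: float
    loss_pct: float

def select_path(app_class, tunnel):
    """Corporate traffic rides the tunnel while it meets its SLA;
    everything else breaks out to the local internet link."""
    if app_class in ("voice", "enterprise-app"):
        if tunnel.latency_ms < 150 and tunnel.loss_pct < 1.0:
            return tunnel.name
        return "backup-tunnel"     # SLA breached: fail over
    return "local-internet"        # direct breakout for browsing

dc = Tunnel("dc-tunnel", latency_ms=42.0, loss_pct=0.1)
print(select_path("enterprise-app", dc))   # dc-tunnel
print(select_path("web-browsing", dc))     # local-internet
```

The point is that the decision logic lives in software that can see the application, not in static route maps on the branch router.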

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of the technology, increasing the capabilities of the devices without requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. In the meantime, you can enjoy the benefits of using SDN and SD-WAN to accelerate your users’ applications.


Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?

Changing The Baby With The Bathwater In IT

If you’re sitting in a presentation about the “new IT”, there’s bound to be a guest speaker talking about the digital transformation or service provider shift in their organization. You can see this coming. It’s a polished speaker, usually a CIO or VP. They talk about how, with the help of the vendor on stage with them, they rapidly transformed their infrastructure into something modern while also changing processes to deliver faster IT response, more productive workers, and increased revenue, or to transform IT from a cost center into a profit center. The key components are simple:

  1. Buy new infrastructure from $vendor
  2. Transform all processes to be more agile, productive, and better.

Why do those things always happen in concert?

Spring Cleaning

Infrastructure grows old. That’s a fact of life. Outside of some very specialized hardware, no one is using the same desktop they had ten years ago. No enterprise is still running Windows 2000 server on an IBM NetFinity server. No one is still using 10Mbps Ethernet over Thinnet to connect their offices. Hardware marches on. So when we buy new things, we as technology professionals need to find a way to integrate them into our existing technology stack.

Processes, on the other hand, are very slow to change. I can remember dealing with process issues when I was an intern for IBM many, many years ago. The process we had for deploying a new workstation had many, many reboots involved. The deployment team worked out a new strategy to streamline deployments and make things run faster. We brought our plan to the head of deployments. From there, we had to:

  • Run tests to prove that it was faster
  • Verify that the process wasn’t compromised in any way
  • Type up new procedures in formal language to match the existing docs
  • Then submit them for ISO approval

And when all those conditions were met, we could finally start using our process. All in all, with aggressive testing, it still took two months.

Processes are things that are thought to be carved in stone, never to be modified or changed in any way for the rest of time. Unless the stones break or something major causes a process change. Usually, that major change is a whole truckload of new equipment showing up on the back dock attached to a consultant telling IT there is a better way (TM) to do things.

Ceteris Paribus

Ceteris paribus is a Latin term that means “all else unchanged”. We use it when we talk about having multiple variables in an equation and the need to hold all but one constant to be able to measure changes appropriately.

The funny thing about all these transformations is that it’s hard to track what actually made the improvements when you’re changing so many things at once. If the new hardware is three or four times faster than your old equipment, would it show that much improvement if you just ran your old software and processes on it? How much faster could your workloads execute with new CPUs and memory management techniques? How about collapsing your virtual infrastructure onto fewer physical servers because of advances there? Running old processes on new hardware can give you a very good idea of how good the hardware is. Does it meet the criteria you wanted when it was purchased? Or, conversely, does it seem like you’re not getting the performance you paid for?

Likewise, how can you know for sure that the organization and process changes you implemented actually did anything? If you’re implementing them on new hardware, how can you capture their impact? There’s no rule that says new processes can only be implemented on shiny new hardware. Take a look at what Walmart is doing with OpenStack. They most certainly aren’t rushing out to buy tons and tons of new servers just for OpenStack integration. Instead, they are taking streamlined processes and implementing them on existing infrastructure to see the benefits. Then it’s easy to measure the results and say how much hardware you need to expand, instead of overbuying for the process changes you make.
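If you can run the combinations separately, attributing the gains becomes simple arithmetic. A toy example with invented batch-job runtimes:

```python
# Hypothetical runtimes (minutes) for each hardware/process combination.
timings = {
    ("old_hw", "old_process"): 120,
    ("new_hw", "old_process"):  45,   # hardware upgrade alone
    ("new_hw", "new_process"):  30,   # hardware plus process changes
}

hw_gain = timings[("old_hw", "old_process")] / timings[("new_hw", "old_process")]
proc_gain = timings[("new_hw", "old_process")] / timings[("new_hw", "new_process")]

print(f"Hardware alone:   {hw_gain:.2f}x")              # 2.67x
print(f"Process changes:  {proc_gain:.2f}x")            # 1.50x
print(f"Combined:         {hw_gain * proc_gain:.2f}x")  # 4.00x
```

In this made-up case the hardware did most of the work, which is exactly the kind of breakdown a vendor’s glossy case study never separates out.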


Tom’s Take

So, why do these two changes always seem to track with each other? The optimist in me wants to believe that it’s people deciding to make positive changes all at once to pull their organization into the future. Since any installation is disruptive, it’s better to take the huge disruption and retrain for the massive benefits down the road. It’s a rosy picture indeed.

The pessimist in me wonders if all these massive changes aren’t somehow tied to the fact that they always come with massive new hardware purchases from vendors. I would hope there isn’t someone behind the scenes with the ear of the CIO pushing massive changes in organization and processes for the sake of numbers. I would also sincerely hope that the idea isn’t to make huge organizational disruptions for the sake of “reducing overhead” or “helping tell the world your story” or, worse yet, “making our product look good because you did such a great job with all these changes”.

The optimist in me is hoping for the best. But the pessimist in me wonders if reality is a bit less rosy.