Legacy IT Is Not A Monument

During Networking Field Day 17, there was a lot of talk about legacy IT constructs, especially as they relate to the cloud. Cloud workloads work best when they’re built new, with new applications and new processes. Existing legacy workloads are much harder to move to the cloud, especially if they require a specific Java version or special hardware to work properly.

We talk a lot about how painful legacy IT is. So why do we turn it into a monument that stands the test of time?

Keeping Things Around

Most monuments that we have from ancient times are things that we never really intended to keep. Aside from the things that were supposed to be saved from the beginning, most iconic structures were never built to last. Even buildings like the Parthenon and the Eiffel Tower were envisioned as things that would be torn down sooner or later.

Today, we can’t imagine a world without those monuments. We can’t conceive of a time without them. And depending on how much time elapsed between a thing’s creation and the decision to preserve it, there could be irreparable damage along the way. Yet that just adds to the charm.

Now, apply those factors to legacy IT. We have software that is outdated. We have applications that need old Java versions to run correctly. Or outdated DLLs. Or some other dependency that can complicate or damage our systems. Yet we can’t bear to part with our old, familiar IT. Maybe it was a UI change. Maybe it never really ran correctly on a new operating system. Or maybe the new version cost so much to upgrade that it was just cheaper to keep hacking the old thing to work slightly better each time.

Rather than examining how we could replicate the workload or find a better, quicker way to do things, we find ourselves turning legacy IT into a preserved monument. We freeze the software or hardware and never update it. We build crazier and more complicated solutions to keep running something that is well past its retirement date.

View Only

The other complication of legacy IT is that we use the programs but never really do more than look at them. We don’t focus on the data or the process. Instead, we just plug things into a system and run what we need to run. I remember working for IBM and having to enter my weekly timesheet twice. Why? Because the new timesheet system was a Java app that ran on Windows 2000, but the system that generated paychecks for hourly employees (like me) ran from an AS/400 terminal window. So, after I spent half an hour entering my time for the week, I had to spend another half hour entering those exact same time entries into a console.

Would it have been easier to replicate the functions of the terminal program in Java and make time entry a single task? Sure. Would it have been easier for everything to integrate and save employees time? Sure. But the people who had been using the console program for the last decade had it down. They could enter their time in a few minutes. In fact, even though the Java program supported more precise time increments, most employees hated it. They’d rather use an imprecise, outdated program because it was faster and more familiar, even though they had to manually edit their timesheets after the fact because the newer reason codes weren’t loaded in the old system.

Read-only legacy IT makes everyone’s life miserable. All kinds of crazy patches and hacks are necessary to make it run correctly with new functions. And no matter what the replacement solution is, people will hate it, simply because it’s not the old system.

Hands Off

This, for me, is the hardest part of legacy IT monuments. Once it works, NEVER TOUCH IT AGAIN. You can’t migrate, upgrade, or move a machine. You can’t get newer hardware that’s under support. The number of times I’ve had to buy parts from eBay to fix broken legacy systems is much, much too high.

Now we have to worry about what happens when the system goes down and never comes back. Or when we push some legacy app past its breaking point. Look at the shift from 32-bit applications to 64-bit. We’re still in the middle of that transition, and there are people with some old application or hardware device that can’t run on newer software. Once we force a cutoff, we have to find a way to build band-aids that people can use to keep a decade-old thing working.

This hands-off mentality is also part of the reason why cloud migration projects fail. Even if you can get 90% of the software in your environment to work the way you want, the odds are that the remaining 10% is composed of legacy applications that haven’t been retired because they are mission critical and very finicky. They won’t migrate. They might even be running in VMs that emulate old OSes because they can’t ever be upgraded. And those kinds of old familiar pets are the ones that take up too much of your time.


Tom’s Take

Monuments are old buildings. We keep them around because they remind us of how things used to be. They might be falling apart. They may take more effort to keep up, but people love to see them and enjoy them for the nostalgia. Legacy IT is not that. It’s a headache. It’s a pain to have read-only apps that we can never change because we don’t want to, or can’t, get them running on something new. Rather than building them into static monuments, we need to retire them and find a way to build something new. Because no matter how beautiful it may be, no legacy IT project will ever stand the test of time like the Parthenon.

Linux and the Quest for Underlays

I’m at the OpenStack Summit this week, and there’s a lot of talk about building stacks and offering everything needed to get your organization ready for a shift toward service provider models. It’s a far cry from the battles over software networking and hardware dominance that I’m so used to seeing in my space. But one thing came to mind that made me think a little harder about architecture and why foundations are so important.

Brick By Brick

The foundation of the modern cloud doesn’t live in fancy orchestration software or data modeling. It isn’t there because a retailer built a self-service system or a search engine giant decided to build a cloud lab. The real reason we have a growing market of cloud providers today is Linux. Linux underpins so much technology today that it has become nothing short of ubiquitous. Servers are built on it. Mobile operating systems use it. But no one knows that’s what they’re using. It’s all just something running under the surface that enables the applications on top.

Linux is the vodka of operating systems. It can run stripped down on a variety of systems and leave very little trace behind. BSD is similar in this regard, but it doesn’t have the same driver support from manufacturers or the ability to be pared down to the core kernel and a few modules. Linux gives vendors and operators the flexibility to create a software environment that boots and gets basic hardware working. The rest is up to the creativity of the people writing the applications on top.

Linux is the perfect underlay. It’s a foundation you can build on without it getting in the way of what runs above it. It gives you predictable performance and a familiar environment. That’s one of the reasons Cumulus Networks and Dell have embraced Linux as a way to create switch operating systems that get out of the way of packet processing and let you build on top of them as your needs grow and change.

Break The Walls Down

The key to building a good environment is a solid underlay, whether in systems or in networking. With reliable transport and operations taken care of, amazing things can be built. But that doesn’t mean you need to build a silo around your particular area of the organization.

The shift to clouds and stacks and “new” forms of IT management isn’t going to happen if someone has built up a massive blockade. It works when you build a system that has common parts and themes and allows tools to work easily on multiple parts of the infrastructure.

That’s what’s made Linux such a lightning rod. If your monitoring tools can monitor servers, SANs, and switches with little to no modification, you can concentrate your time on building on those pieces instead of writing and rewriting software just to get back to where you started in the first place. That’s how systems become extensible and handle changes quickly and efficiently. That’s how you build a platform for other things.
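
To make that concrete, here’s a minimal sketch of my own (an illustration, not any vendor’s actual tooling) of why that uniformity matters. The same few lines of Python read interface counters from /proc/net/dev, and because a Linux-based switch OS exposes the same standard kernel interface as any Linux server, one collector can cover both.

```python
# Minimal sketch: read interface byte counters from /proc/net/dev, the
# standard Linux kernel interface. A Linux-based switch OS exposes the
# same file, so one collector works on servers and switches alike.

def read_interface_counters(path="/proc/net/dev"):
    """Return {interface: (rx_bytes, tx_bytes)} from a Linux host."""
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:  # first two lines are headers
            name, stats = line.split(":", 1)
            fields = stats.split()
            # Per the /proc/net/dev column layout, field 0 is RX bytes
            # and field 8 is TX bytes.
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

if __name__ == "__main__":
    for iface, (rx, tx) in read_interface_counters().items():
        print(f"{iface}: rx={rx} tx={tx}")
```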


Tom’s Take

I like building Lego sets. But I really like building with the old-fashioned basic bricks, not the fancy specialized pieces from licensed sets. The old bricks were limited only by your creativity. You could move them around and put them anywhere because they were all the same. You could build amazing things with the right basic pieces.

Clouds and stacks aren’t all that dissimilar. We need to focus on building underlays of networking and compute systems from the same kinds of basic blocks if we ever hope to have something that we can build upon for the future. You may not be able to influence the design of systems at the most basic level when it comes to vendors and suppliers, but you can vote with your dollars to back the solutions that give you the flexibility to get your job done. I can promise you that when the revenue from proprietary, non-open underlay technologies goes down, the suppliers will start asking you the questions you’ve been waiting to answer for them.

The Myth of Chargeback

Cash register by the National Cash Register Co., Dayton, Ohio, United States, 1915.

Imagine a world where every aspect of a project gets charged correctly. Where the massive amount of compute time for a given project gets attributed to the proper department and billed accordingly. Where resources can be allocated and associated with the projects that need them. It’s an exciting prospect, isn’t it? I’m sure at least one person out there said “chargeback” when I started mentioning all these lofty ideas. I would have agreed with you before, but I don’t think chargeback actually exists in today’s IT environment.

Taking Charge

The idea of chargeback is very alluring. It’s been on slide decks for the last few years as a huge benefit of the analytics capabilities in modern converged stacks. By collecting information about the usage of an application or project, you can charge the department using that resource. It’s a bold plan to change IT departments from cost centers into revenue generators.

IT is the red-headed stepchild of the organization. IT is necessary for business continuity and function. Nothing today runs without computers, networking, or phones. However, we aren’t a visible part of the business. Much like the plumbers and landscapers around the organization, IT’s job is to make things happen without being seen. The only time users acknowledge IT is when something goes wrong.

That’s where chargeback comes into play. By charging each department for its usage, IT can seek to ferret out extraneous costs and reduce usage. Perhaps the goal is to end up as a footnote in the weekly management meeting, where Brian is given recognition for closing a $500,000 deal and IT gets a shout-out for figuring out that Marketing was using 45% more Exchange server space than the rest of the organization. Sounds exciting, doesn’t it?

In theory, chargeback is a wonderful way to keep departments honest. In practice, no one uses it. I’ve talked to several IT professionals about chargeback. About half of them chuckled when I mentioned it. Their collective experience can best be summarized as “They keep talking about doing that around here but no one’s actually figured it out yet.”

The rest have varying levels of implementation. The most advanced ones that I’ve spoken to use chargeback only for physical assets in a project. If Sales needs a new server and five new laptops for Project Hunter, then those assets are charged back correctly to the department. This keeps Sales from asking for more assets than they need and hoping that the costs can be buried in IT somewhere.

No one I’ve spoken to is using chargeback for the applications and software in an organization. We can slice the pie as finely as we want when allocating assets you can touch, but when it comes to making Operations pay their fair share of the bill for the new CRM application, we’re stuck. We can pull analytics all day long, but we can’t seem to match them to the right usage.
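
The arithmetic itself is the easy part. Here’s a hedged sketch, with entirely hypothetical departments and numbers, of usage-weighted allocation; the formula is a one-liner, and producing usage figures that everyone trusts is where we actually get stuck.

```python
# A sketch of the allocation problem: split a shared application's
# monthly cost across departments in proportion to measured usage.
# The departments, costs, and transaction counts are hypothetical.

def allocate_cost(total_cost, usage_by_dept):
    """Return each department's share of total_cost, weighted by usage."""
    total_usage = sum(usage_by_dept.values())
    return {dept: total_cost * usage / total_usage
            for dept, usage in usage_by_dept.items()}

# Hypothetical monthly CRM cost and per-department transaction counts.
crm_monthly_cost = 12000.00
transactions = {"Sales": 5400, "Operations": 2700, "Marketing": 900}

for dept, share in allocate_cost(crm_monthly_cost, transactions).items():
    print(f"{dept}: ${share:,.2f}")  # Sales: $7,200.00, and so on
```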

Worse yet, politics plays a big role in chargeback. If a department head disagrees with the way their group is being characterized for IT usage, they can go to their superiors and talk about how critical their operation is to the business and how they need to be able to work without the restrictions of being billed for their usage. A memo goes out the next day and suddenly the department vanishes from the records with an admonishment to “let them do their jobs”.

Cloud Charges

The next thing that always comes up is public cloud. Chargeback proponents are waiting for widespread adoption of public cloud because the billing model for cloud is completely democratic. Everyone pays the price, no matter what. If an AWS instance is running, someone needs to pay for it. If those systems can be isolated to a specific application or department, then the chargeback takes care of itself. Everyone is happy in the end. IT avoids blame for not producing, and the other departments get their resources.

Of course, the real problem comes when the bills start piling up. Cloud isn’t cheap. It exposes the dirty little secret that sunk-cost hardware has a purpose. When you bill by the CPU hour, you’ll find that a lot of systems sit idle. Management will come unglued trying to figure out why cloud costs so much. The commercials and sales pitches said we would save money!
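
A quick back-of-the-envelope sketch, using purely hypothetical placeholder prices, shows why the bills sting: per-hour billing punishes the always-on habits we learned from sunk-cost hardware, and the math only flips when idle systems actually get shut off.

```python
# Back-of-the-envelope sketch of the idle-system problem. Every price
# here is a hypothetical placeholder, not a real cloud or hardware quote.

hours_per_month = 730
instance_rate = 0.50          # $/hour for a cloud instance
server_monthly_cost = 250.00  # amortized cost of an owned, sunk-cost server

# Leave the instance running around the clock and cloud looks expensive:
always_on = instance_rate * hours_per_month
print(f"Cloud, always on:        ${always_on:.2f}/month")  # $365.00
print(f"Owned server:            ${server_monthly_cost:.2f}/month")

# Per-hour billing only pays off when idle systems are actually shut down:
at_quarter_duty = always_on * 0.25
print(f"Cloud at 25% duty cycle: ${at_quarter_duty:.2f}/month")  # $91.25
```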

Then the politics start all over again. IT gets blamed because cloud was implemented wrong. No amount of protesting will fix that. Then come the rapid cost-cutting measures. Systems not in use get shut off. Databases lose data capture during down periods. People can’t access systems in off hours. Work falls off, and the cloud project gets scrapped in favor of the old, cheaper way.

Cloud is the model that chargeback should follow, but we need to remember that those numbers must be correctly attributed. Just pushing a set of usage statistics down without context will lead to finger pointing and scrambling for explanations. Instead, we need to provide context from the outset. Maybe Marketing used an abnormally high amount of IT resources last week. But did it have anything to do with the end of the quarter? Can we track that usage back to higher profits from sales? That context is critical to figuring out how usage statistics affect things overall.


Tom’s Take

Chargeback is the stick we use to threaten organizations into shaping up and flying right. We make plans to implement a process to track all the evil things hidden in a department, and by the time the project is ready to kick off, we find that costs are down and productivity is up. That becomes the new baseline, and we go on about our day thinking about how chargeback would have let us catch it all before it became a problem.

In reality, chargeback is a solution that takes time to implement and costs money and effort to get right. We need data context and allocation. We need actionable information and the ability to coordinate across departments. We need to know where the charges are coming from and why, not just complain about bills. And there can be no exceptions. That’s the only way to put chargeback in charge.

My Buzzword Security Blanket

If you’ve been following the networking world for a while, you’re probably getting sick of hearing the words cloud and fabric.  The former is something of a nebulous term used to describe all manner of strange things.  Hosted e-mail, hosted websites, hosted storage, infrastructure as a service (IaaS), software as a service (SaaS), virtual machine hosting, and so on.  Every major networking and server player has some sort of cloud-based strategy.  Yet, when I think of clouds, I think of the little white fluffy things I put on network diagrams to denote a section outside my control that I don’t really care about, like a WAN frame relay segment or the Internet.  So when I hear providers telling me to move “to the cloud”, I laugh.  I think about the hosted Hotmail account I’ve had for 13 years.  Or the services like Dropbox that I’m starting to use for many things now.  But I don’t think of them as cloud services, per se.  Just software that is useful.

Fabric is another overused term, especially in the datacenter.  Fabric describes connecting the nodes of a network together in a meshed environment; like the threads of a rug or a shirt, the resulting output is termed a fabric.  This term used to be very popular with the storage people back in the day.  Now that the storage network has been unified with the server network, the term seems to be leaking into our little world.

With all this in mind, I tweeted a little joke a week or so ago:

And then people came out of the woodwork.  Someone suggested I make it borderless to be compliant with Cisco’s Borderless Networks initiative.  A couple of people told me that I should send them one.  Greg Ferro even thought it was a good idea.  So, after a little shopping with my wonderful wife this past weekend, we came up with this:

Pretty, isn’t it?  I thought the bears added a little something.  Also, no stitching on the edges so it really is “borderless”.  This is my Buzzword Security Blanket.  I’m going to carry it with me everywhere I go.  Anytime someone talks to me about “Cloud this” or “Fabric that”, I’m going to curl up with my blanket and wait until all the mean people leave me alone.  I think of my nice secure data centers where my packets can cozy up with their Buzzword Security Blankets at night, safe and sound and right where I want them to be, protected from the evil in the cloud.  And when someone carries on about the new exciting fabric options in their strategies, I’ll nuzzle my Buzzword Security Blanket against my cheek and remind myself that it’s all the fabric I’m ever going to need.

Who knows?  If this takes off, I could do a whole line of baby-themed networking buzzword items.  Let me know what you think.