In my last few blog posts, I’ve been looking back at some of the ideas that were presented at Future:Net at VMworld this year. While I’ve discussed resource contention, hardware longevity, and even open source usage, I’ve avoided one topic that I think dictates more of the way our networks are built and operated today. It has very little to do with software, merchant hardware, or even development. It’s about legacy.
They Don’t Make Them Like They Used To
Every system in production today is running some form of legacy equipment. It doesn’t have to be an old switch in a faraway branch office closet. It doesn’t have to be an old Internet router. Often, it’s a critical piece of equipment that can’t be changed or upgraded without massive complications. These legacy pieces of the organization do more to dictate IT policy than any future technology ever could.
In my own career, I’ve seen this numerous times. It could be the inability to upgrade workstation operating systems because users relied on WordPerfect for document creation and legacy document storage, and the new workstations wouldn’t run WordPerfect. Or perhaps it simply cost too much to upgrade. Here, legacy software is the roadblock.
Perhaps it’s legacy hardware causing issues. Most wireless professionals agree that disabling the older 802.11b data rates will help with network capacity issues and make things run more smoothly. But those low rates can only be disabled if your wireless clients are modern enough to live without them. What if you’re still running 802.11b wireless inventory scanners? Or what if your old Cisco 7921 phones won’t run correctly without low data rates enabled? Legacy hardware dictates the configuration here.
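To make that concrete, here is roughly what pruning the old 802.11b rates looks like on an AireOS-based Cisco Wireless LAN Controller. Treat it as a sketch rather than a recipe: the exact syntax varies by platform and software release, and the rate choices below are only an example that assumes every client on the network can operate at 12 Mbps and above.

    config 802.11b disable network
    config 802.11b rate disabled 1
    config 802.11b rate disabled 2
    config 802.11b rate disabled 5.5
    config 802.11b rate disabled 11
    config 802.11b rate mandatory 12
    config 802.11b enable network

The catch is exactly the one above: push a change like this with 802.11b-only scanners or old handsets still on the floor, and those devices simply fall off the network.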
In other cases, legacy software development is the limiting factor. I’ve run into a number of situations where legacy applications are dictating IT decisions. Not workstations and their productivity software, but enterprise applications like school grade book systems, time tracking applications, or even accounting programs. How about an accounting database that refuses to load if IPX/SPX isn’t enabled on a Novell server? Or an old AS/400 grade book that can’t be migrated to any computer system that runs software built in this century? Here, application development is what blocks newer systems from being installed and operated.
Brownfields Forever
We’ve reached the point in IT where it’s safe to say that there are no more greenfield opportunities. The myth that there is an untapped area where no computer or network resources exist is ludicrous. Every organization that is going to be computerized is so right now. No opportunities exist to walk into a completely blank slate and do as you will.
Legacy is what makes a brownfield deployment difficult. Maybe it’s an old IP scheme. Or a server running RIP routing. Or maybe it’s a specialized dongle connected to a legacy server for licensing purposes that can’t be virtualized. These legacy systems and configurations have to be taken into account when planning for new deployments and new systems. You can’t just ignore a legacy system because you don’t like the requirements for operation.
This is part of the reason for the rise of modular pod deployments like those from Big Switch Networks. Big Switch realized early on that no one was going to scrap an entire networking setup just to deploy a new network based on BSN technology. By developing a pod system to help rapidly insert BSN systems into existing operational models, Big Switch proved that you can bring new network areas online non-disruptively. This approach has proven itself time and time again in the cloud deployment models favored by many today, including many of the Future:Net speakers.
Brownfields full of legacy equipment require careful planning and attention to detail. They also require that development and operations teams both understand the impact of the technical debt carried by an organization. By accounting for specialized configurations and needs, you can help bring portions of the network or systems architecture into the modern world while maintaining necessary services for legacy devices. Yes, it does sound a bit patronizing and contrived for most IT professionals. But given the talk of “burn it all to the ground and build fresh”, one must wonder if anyone truly does understand the impact of legacy technology investment.
Tom’s Take
You can’t walk away from debt. Whether it’s a credit card, a home loan, or the finance server that contains those records on an old DB2 database connected by a 10/100 FastEthernet switch. Part of my current frustration with the world of forward-looking proponents is that they often forget that you must support every part of the infrastructure. You can’t leave off systems in the same way you can’t leave off users. You can’t pretend that the AS/400 is gone when you scrap your entire network for new whitebox switches running open source OSes. You can’t hope that the old Oracle applications won’t have screwed up licensing the moment you migrate them all to AWS. You have to embrace legacy and work through it. You can plan for it to be upgraded and replaced at some point. But you can’t ignore it no matter how much it may suck.