Chipping Away At Technical Debt

We’re surrounded by technical debt every day. We have a mountain of it sitting in distribution closets and a yard full of it out behind the data center. We make compromises for budget reasons, for technology reasons, and for political reasons. We tell ourselves every time that this is the last time we’re giving in and the next time it’s going to be different. Yet we find ourselves staring at the landscape of technical debt time and time again. But how can we start chipping away at it?

Time Is On Your Side

You may think you don’t have any time to work on the technical debt problem. That’s especially true if the time you do have is spent fixing problems caused by that very debt. The hours get longer, and the effort to get simple things done goes up exponentially. But it doesn’t have to be that way.

Every minute you spend trying to figure out where a link goes or how a server is connected to the rest of the pod is a minute that should have been spent documenting it somewhere. In a text document, in a picture, or even on the back of a napkin stuck to the faceplate of said server!

I once watched an engineer get paid an obscene amount of money to diagram a network, simply because it had never been done. He didn’t modify anything. He didn’t change a setting. All he did was figure out what went where and what the interface addresses were. He put it in Visio and showed it to the network administrators. They were so thrilled they had it printed in poster size and framed! He didn’t do anything above and beyond showing them what they already had.

It’s easy to say that you’re going to document something. It’s hard to remember to do it. It’s even harder when you forget to do it and have to spend an hour remembering why you did it in the first place. So it’s better to take a minute to write an interface description, or a comment about why you configured something the way you did. Every one of those minutes chips away a little more technical debt.
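If you want a concrete place to start, even a tiny script can show you where the debt is hiding. Below is a rough Python sketch that scans a saved IOS-style config for interfaces that never got a description. The config contents, interface names, and output are invented for illustration, not pulled from any real network.

```python
# A saved IOS-style running config (contents invented for illustration).
config = """
interface GigabitEthernet0/1
 description Uplink to CORE-SW-01 Gi1/0/24
 ip address 10.10.1.1 255.255.255.252
!
interface GigabitEthernet0/2
 ip address 10.10.2.1 255.255.255.252
!
"""

def interfaces_missing_descriptions(config_text):
    """Return the names of interfaces that have no description line."""
    missing = []
    current = None
    described = False
    for line in config_text.splitlines():
        if line.startswith("interface "):
            if current and not described:
                missing.append(current)
            current = line.split(None, 1)[1]
            described = False
        elif current and line.strip().startswith("description"):
            described = True
        elif line.strip() == "!":
            if current and not described:
                missing.append(current)
            current = None
    if current and not described:
        missing.append(current)
    return missing

# One minute spent adding that description now saves an hour of tracing later.
print(interfaces_missing_descriptions(config))   # ['GigabitEthernet0/2']
```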

Ask yourself this question next time: if I spent less time solving problems that a minute of documentation would have prevented, what could I do to reduce debt overall?

Mad As Hell And Not Going To Design It Anymore

The other sources of technical debt are money and politics. And those usually come into play in the design phase. People jockey for position or try to save money for something else. I don’t mind honest budget-saving measures. What I mind is cutting the IT budget to pay for things like standing desks and quarterly bonuses. IT is still perceived as a budget sink with no real purpose in supporting the business. And then the email server goes down. That’s when IT’s real value comes out.

When you design a network, you need to take into account many factors. What you shouldn’t take into account is someone dictating how things are going to be just because they like the way that something looks or sounds. When I worked at a VAR there were always discussions like this. Who should we use for the switching hardware? Which company has a great rebate this quarter? I have a buddy that works at Vendor X so we should throw him a bone this time.

As the networking professional, you need to stand up for your decisions. If you put the hard work and effort into making something happen you shouldn’t sit down and let it get destroyed by someone else’s bad idea. That’s how technical debt starts. And if you let it start this way you’ll never be able to get rid of it. Both because it will be outside of your control and because you’ll always identify someone else as the source of the problem.

So, how do you get mad and fix it before it starts? Know your project cold. Know every bolt and screw. Justify every decision. Point out the mistakes when they happen in front of people that don’t like to be embarrassed. In short, be a jerk. The kind of self-important, overly cerebral jerk that is smug because he knows he’s right. And when people know you’re right they’ll either take your word for things or only challenge you when they know they’re right.

That’s not to say that you should be smug and dismissive all the time. You’re going to be wrong. We all are. Well, everyone except my wife. But for the rest of us, you need to know that when you’re wrong you need to accept it and discuss the situation. No one is 100% infallible, but someone that can’t recognize when they are wrong can create a whole mountain range of technical debt before they’re stopped.


Tom’s Take

Technical debt is just a convenient term for something we all face. We don’t like it, but it’s part of the job. Technical debt compounds faster than any financial instrument, and it’s stickier than student loans. But we can chip away at it slowly with some care. Take a little time to stop the little problems before they happen. Take a minute to save an hour. And when you get into that next big meeting about a project and you see a huge tidal wave of technical debt headed your way, stand up and let it be known. Don’t be afraid to speak out. Just make sure you’re right and you won’t drown.

2018 Is The Year Of Writing Everything

Welcome back to a year divisible by 2! 2018 is going to be a good year through the power of positive thinking. It’s going to be a fun year for everyone. And I’m going to do my best to have fun in 2018 as well.

Per my tradition, today is a day to look at what is going to be coming in 2018. I don’t make predictions, even if I take some shots at people that do. I also try not to look back too heavily on the things I’ve done over the past year. Google and blog searches are your friend there. Likely as not, you’ve read what I wrote this year and found one or two things useful, insightful, or amusing. What I want to do is set up what the next 52 weeks are going to look like for everyone that comes to this blog to find content.

Wearing Out The Keyboard

The past couple of years have shown me that the written word is starting to lose a bit of luster for content consumers. There’s been a big push to video. Friends like Keith Townsend, Robb Boardman, and Rowell Dionicio have started making more video content to capture people’s eyes and ears. Even I have noticed that I spend more time listening to 10-minute video recaps of things as opposed to reading 500-600 words on the subject.

I believe that my strengths lie in my writing. I’m going to continue to produce content around that here for the foreseeable future. But that’s not to say that I can’t dabble. With the help of my media-minded co-worker Rich Stroffolino, we’ve created a weekly video discussion called the Gestalt IT Rundown. It’s mostly what you would expect from a weekly video series. Rich and I find 2-3 tech news articles and make fun of them while imparting some knowledge and perspective. We’ve been having a blast recording them and we’re going to be bringing you more of them every Wednesday in 2018.

That’s not to say that my only forays on GestaltIT.com are going to be video-focused. I’m going to be writing more there as well. I’ve had some articles go up there this year that talked about some of the briefings that I’ve been taking from networking companies. That is going to continue through 2018. I’ve also had some great long-form writing, including my favorite article from last year about vendors and VARs. I’m going to continue to post articles on Gestalt IT in 2018 as well as posting them here. I’m going to need to figure out the right mix of what to put up for my day job and what to put up here for my Batman job. If there’s a kind of article you prefer to see here, please let me know so I can find a way to make more of them.

Lastly, in 2018 I’m going to try some new things with regard to my writing. I’m going to write more coverage of the events I attend. Things like Cisco Live, WLPC, and other industry events will get discussions of what goes on there for people that can’t go or are looking for the kind of information that you can’t get from an event website. I’m also going to try to find a way to combine some of my more detailed pieces into reports for consumption. I’m not sure how this is going to happen, but I figured if I put it down here as a goal then I would be required to make it happen this year.


Tom’s Take

I like writing. I like teaching and informing and telling the occasional bad joke about BGP. It’s because I have regular readers that want to hear what I have to say that I continue to do what I do. Rather than worry about the lack of content and coverage that I’m seeing in the community, I’ve decided to do my part to put out more. It is a challenging exercise to come up with new ideas every week and put them out for the world to see. But as long as folks like you keep reading what I have to say, I’ll keep writing it all down.

Programming Unbound

I’m doing some research on Facebook’s Open/R routing platform for a future blog post. I’m starting to understand the nuances a bit compared to OSPF or IS-IS, but during my reading I got stopped cold by one particular passage:

Many traditional routing protocols were designed in the past, with a strong focus on optimizing for hardware-limited embedded systems such as CPUs and RAM. In addition, protocols were designed as purpose-built solutions to solve the particular problem of routing for connectivity, rather than as a flexible software platform to build new applications in the network.

Uh oh. I’ve seen language like this before related to other software projects. And quite frankly, it worries me to death. Because it means that people aren’t learning their lessons.

New and Improved

Any time I see an article about how a project was rewritten from the ground up to “take advantage of new changes in protocols and resources”, it usually signals to me that some grad student decided to rewrite the whole thing in Java because they didn’t understand C. It sounds a bit cynical, but it’s not often wrong.

Want proof? Check out Linus Torvalds and his opinion about rewriting the Linux kernel in C++. Spoiler alert – “C++ is a horrible language.” And it gets more colorful from there. Linus has some very valid points about C++ that have been debated by lots of communities for the past ten years. But the fact remains that he has decided that completely rewriting the entire kernel in C++ is an exercise in futility.

In today’s world, we’re faced with a multitude of programming languages fighting for our attention. We’ve evolved past FORTRAN, COBOL, C, and C++. We now live in a world of Python, C#, J#, R, Ruby, and dozens more. And that doesn’t even count the scripting languages. Every one of these languages was designed to solve a particular problem. And every one of them is begging to be used.

But it’s not enough that we have ten ways to write a function today. What’s more troublesome is that we’ve forgotten why certain languages were preferred over others in the past. We’ve forgotten that things used to be done the way they were done because we had no other alternatives. I can remember studying for my Novell CNE and taking 50-649, wherein Novell kept referring to OSPF as an “expensive” protocol to use on a NetWare server. At the time that test was created, OSPF was expensive from a CPU-cycle standpoint. If the server was doing other things besides running a routing protocol, you might see an impact when the CPU got busy. And having OSPF calculations interrupted because someone was doing an FTP transfer could be expensive indeed.
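To put a little substance behind “expensive”: every time the topology changes, a link-state protocol has to rerun a full shortest-path-first calculation over its link-state database. Here’s a toy Dijkstra run in Python over a made-up four-router topology. It’s trivial work for modern silicon, but it’s exactly the kind of calculation a mid-90s NetWare box had to squeeze in between serving files.

```python
import heapq

# Hypothetical link-state database: router -> {neighbor: link cost}
lsdb = {
    "R1": {"R2": 10, "R3": 100},
    "R2": {"R1": 10, "R3": 10, "R4": 50},
    "R3": {"R1": 100, "R2": 10, "R4": 10},
    "R4": {"R2": 50, "R3": 10},
}

def spf(root):
    """Dijkstra's shortest-path-first calculation from the root router."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, link_cost in lsdb[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Every flapping link means the whole tree gets recomputed from scratch.
print(spf("R1"))   # {'R1': 0, 'R2': 10, 'R3': 20, 'R4': 30}
```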

No Wasted Space

More to the point, when people are faced with a limitation they have to be creative and concise. And nowhere is that more apparent than in E.T. the Extra-Terrestrial for the Atari 2600. Infamously, Howard Scott Warshaw had just one month to write a video game that would go on to be blasted critically, considered one of the worst of all time, and be blamed for the Video Game Crash of 1983. Yet, as one fan discovered years later when he set out to “fix” the game’s code, as bad as it may have been, it was very well coded. From the article:

…it’s unlikely that Howard Scott Warshaw (the developer) included some useless code for us to replace…

So, a programmer for an outdated video game system had a month to code a complex game and managed to do it in such a way as to leave very little empty space to insert code patches? And yet my Facebook app on my iPhone requires how much space?!?

All joking aside, the issue with E.T. wasn’t code quality. The problems with the game have been documented over the years, but almost no one blames Warshaw’s coding capabilities. That’s because Warshaw was working within his limitations. He couldn’t magically make the Atari 2600 cartridge bigger. He couldn’t increase the CPU power of the system. He worked within his limitations and made the best game that he could make.

Now, let’s look at an article about Open/R and see some of Facebook’s criticisms of OSPF and IS-IS:

We didn’t want to get bogged down in discussions over the lower-level protocol details, such as frame formatting and handshakes…

While it might sound heavyweight compared with OSPF and ISIS, which use their own “lightweight” transports, we haven’t found this to be an issue in modern networking hardware…

Whether or not they were intended to be taken as such, these are some pretty interesting knocks against OSPF and IS-IS. What Facebook is essentially saying is that they didn’t want to worry about building the low-level parts of the messaging system, so they picked something off the shelf. They also built it to be more resource-intensive than necessary because they didn’t need to compromise when running it on Six Pack and Wedge.

So long as your routers have ample CPU cycles and memory, Open/R will run just fine. But how many people out there are running a data center server board in their edge router? How many routers out there have to take a reduced BGP table because they don’t have enough memory to fit the entire global IPv4 routing table, let alone IPv6? If resources are infinite and time is irrelevant, then building your protocols the way you want is of no consequence. But as soon as you add constraints to the equation, like support for older hardware or limited memory, you have to start making compromises to make things work.
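Some rough back-of-the-envelope arithmetic shows why that constraint bites. The per-prefix overhead below is purely an assumption for illustration; real numbers vary widely by platform, table structure, and the number of paths carried per prefix.

```python
# Hypothetical numbers -- actual per-prefix memory use varies widely by
# platform, RIB/FIB design, and how many paths you carry per prefix.
ipv4_prefixes = 700_000      # rough size of the full IPv4 table circa 2017
bytes_per_prefix = 250       # assumed overhead per prefix, per path

table_mb = ipv4_prefixes * bytes_per_prefix / (1024 ** 2)
print(f"~{table_mb:.0f} MB for one full IPv4 feed")           # ~167 MB

# Take several full feeds, add IPv6, and an older edge router with 512 MB of
# route memory that was perfectly fine a decade ago starts dropping routes.
feeds = 3
print(f"~{table_mb * feeds:.0f} MB with {feeds} full feeds")  # ~501 MB
```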


Tom’s Take

I’m not saying that Open/R is a bad routing protocol. I’m going to save that analysis for a later time. But I do take a bit of umbrage with Facebook’s idea that OSPF and IS-IS are a bit outdated simply because they were programmed for a different era. If they were really that inept they would have been replaced or expanded by now. The fact that twenty-somethings got a bug to rewrite a routing protocol because they could and threw all caution to the wind with regard to resource usage should be a cautionary tale to any programmer out there. Never assume that you have more space than you need. Train yourself to do more with less. And be ready to compromise in case the worst case scenario becomes reality.

Data Is Not The New Oil, It’s Nuclear Power

Big Data. I believe that one phrase could get millions in venture capital funding. I don’t even have to put a product with it. Just say it. And make no mistake about it: the rest of the world thinks so too. Data is “the new oil”. At least, according to some pundits. It’s a great headline-making analogy that describes how data is driving business and how controlling it can lead to an empire. But data isn’t really oil. It’s nuclear power.

Black Gold, Texas Tea

Crude oil is a popular resource. Prized for a variety of uses, it is traded and sold as a commodity and refined into plastics, gasoline, and other essential items of modern convenience. Oil creates empires and causes global commerce to hinge on every turn of the market. Living in a state that is a big oil producer, I see the impact of oil exploration and refining firsthand.

However, when compared to Big Data, oil isn’t the right metaphor. Much like oil, data needs to be refined before use. But oil can be refined into many distinct things, while data can only be turned into information. Oil burns up when consumed. Aside from some smoke and a small amount of residue, oil all but disappears after it expends the energy trapped within. Data doesn’t disappear after being turned into information.

In fact, the biggest issue that I have with the entire “Data as Oil” argument is that oil doesn’t stick around. We don’t see massive pools of oil on the side of the road from spills. We don’t hear about massive issues with oil disposal or about securing our spent oil to prevent theft. People that treat data like oil are only looking at the refined product as the final form. They tend to forget that the raw form of data sticks around after the transformation. Most people will tell you that’s a good thing, because you can run analytics and machine learning against static datasets and continue to derive value from them. But that doesn’t make me think of oil at all.

Welcome To The Nuclear Age

In fact, Big Data reminds me most of a nuclear power plant. Much like oil, the initial form of radioactive material isn’t very useful. It radiates and creates a small amount of heat but not enough to run the steam generators in a power plant. Instead, you must bombard the uranium 235 pellets with neutrons to start a fission reaction. Once you have a sustained controllable reaction the amount of generated heat rises and creates the resource you need to power the rest of your machinery.

Much like data, nuclear fission reactions don’t do much without the proper infrastructure to harness them. Even after you transform your data into information, you still need to parse, categorize, and analyze it. The byproduct of the transformation is the critical part of the whole process.

Much like nuclear fuel rods, data stays in place for years. It continues to produce the resource after being modified and transformed. It sits around in the hopes that it can be useful until the day that it no longer serves its purpose. Data that is past its useful shelf life goes into a data warehouse, where it will eventually be forgotten. Spent nuclear fuel rods are also eventually removed and placed somewhere where they can’t affect other things. Maybe they’re buried deep underground. Or shot into space. Or placed under enough concrete that they’ll never be found again.

The danger in data and in nuclear power is not what happens when everything goes right. Instead, it’s what happens when everything goes wrong. With nuclear power, wrong is a chain reaction meltdown. Or wrong could be improper disposal of waste. It could be a disaster at the plant or even a theft of fissile nuclear material from the plant. The fuel rods themselves are simultaneously our source of power and the source of our potential disaster.

Likewise, data that is just sitting around and stored improperly can lead to huge disasters. We’re only four years removed from Target’s huge data breach. And how many more are waiting out there to happen? It seems the time between data leaks is shrinking as more and more bad actors find ways to steal, manipulate, and appropriate data for their own ends. And, much like nuclear fuel rods, the options for protecting data are few compared to other resources.

Data isn’t something that can easily be hidden or compacted. It needs to be readable to be useful. It needs to be fast to be useful. All the things that make it easy to use also make it easy to exploit. And once we’re done with it, the way that it is stored in perpetuity only increases the likelihood of it being used improperly. Unless we’re willing to bury it under metaphorical concrete, we’re in for a bad future if we forget how to handle spent data.


Tom’s Take

Data as Oil is a stupid metaphor. It’s meant to impress upon finance CEOs and Wall Street wonks how important it is for data to be taken seriously. Data as Oil is something a data scientist would say to get a job. By drawing a bad comparison, you make data seem like a commodity to be traded and used as collateral for empire building. It’s not. It’s a ticking bomb of disastrous proportions when not handled correctly. Rather than coming up with a pithy metaphor for cable news consumption and page views, let’s treat data with the respect it deserves and make sure we plan for how we’re going to deal with something that won’t burn up into smoke whenever it’s convenient.

Should We Build A Better BGP?

One story that seems to have flown under the radar this week, with the Net Neutrality discussion being so dominant, was the little hiccup with BGP on Wednesday. According to reports, sources inside AS39523 were able to redirect traffic from some major sites like Facebook, Google, and Microsoft through their network. Since the ISP in question is located inside Russia, there’s been quite a lot of conversation about the purpose of this misconfiguration. Is it simply an accident? Or is it a nefarious plot? Regardless of the intent, the fact that it’s 2017 and massive portions of Internet traffic can still be rerouted this easily has many people worried.

Routing by Suggestion

BGP is the foundation of the modern Internet. It’s how routes are exchanged between every autonomous system (AS) and how traffic destined for your favorite cloud service or cat picture hosting provider gets to where it’s supposed to be going. BGP is the glue that makes the Internet work.

But BGP, for all of the greatness that it provides, is still very fallible. It’s prone to misconfiguration. Look no further than the Level 3 outage last month. Or the outage that Google caused in Japan in August. And those are just the top searches from Google. There have been a myriad of problems over the course of the past couple of decades. Some are benign. Some are more malicious. And in almost every case they were preventable.

BGP runs on the idea that people configuring it know what they’re doing. Much like RIP, the suggestion of a better route is enough to make BGP change the way that traffic flows between systems. You don’t have to be an evil mad genius to see this in action. Anyone that’s ever made a typo in their BGP border router configuration will tell you that if you make your system look like an attractive candidate for being a transit network, BGP is more than happy to pump a tidal wave of traffic through your network without regard for the consequences.

But why does it do that? Why does BGP act so stupid sometimes in comparison to OSPF and EIGRP? Well, take a look at the BGP path selection mechanism. CCIEs can probably recite this by heart. Things like Local Preference, Weight, and AS_PATH govern how BGP will install routes and change transit paths. Notice that these are all set by the user. There are no automatic conditions outside of the route’s origin. Unlike OSPF and EIGRP, there is no consideration for bandwidth or link delay. Why?
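To make that concrete, here’s a drastically simplified sketch of the comparison in Python. It covers only the first few steps of the real decision process, and every attribute value is invented, but it makes the point: the attributes that matter early in the decision are things an operator typed in, not anything the protocol measured about the link.

```python
from dataclasses import dataclass, field

@dataclass
class BgpPath:
    prefix: str
    weight: int = 0           # Cisco-local knob, set by the operator
    local_pref: int = 100     # set by policy, carried inside the AS
    as_path: list = field(default_factory=list)
    origin: int = 0           # 0 = IGP, 1 = EGP, 2 = incomplete

def better_path(a, b):
    """Pick the preferred of two paths using the first few BGP decision steps.

    Note what is missing: nothing here measures bandwidth, delay, or loss.
    """
    if a.weight != b.weight:
        return a if a.weight > b.weight else b
    if a.local_pref != b.local_pref:
        return a if a.local_pref > b.local_pref else b
    if len(a.as_path) != len(b.as_path):
        return a if len(a.as_path) < len(b.as_path) else b
    if a.origin != b.origin:
        return a if a.origin < b.origin else b
    return a  # later tie-breakers (MED, eBGP over iBGP, router ID...) omitted

# A path over a slow, congested link still wins if someone configured a higher
# local preference on it.
fast = BgpPath("203.0.113.0/24", local_pref=100, as_path=[64500])
slow = BgpPath("203.0.113.0/24", local_pref=200, as_path=[64501, 64502, 64503])
print(better_path(fast, slow).as_path)   # [64501, 64502, 64503]
```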

Well, the old Internet wasn’t incredibly reliable from the WAN side. You couldn’t guarantee that the path to the next AS was the “best” path. It may be an old serial link. It could have a lot of delay in the transit path. It could also be the only method of getting your traffic to the Internet. Rather than letting the routing protocol make arbitrary decisions about link quality the designers of BGP left it up to the person making the configuration. You can configure BGP to do whatever you want. And it will do what you tell it to do. And if you’ve ever taken the CCIE lab you know that you can make BGP do some very interesting things when you’re faced with a challenge.

BGP assumes a minimum level of competency to use correctly. The protocol doesn’t have any built-in checks to avoid doing stupid things outside of the basics of not installing incorrect routes in the routing table. If you suddenly start announcing someone else’s prefixes with better attributes, the global BGP network is going to think you’re the better path to that AS and swing traffic your way. That may not be what you want. Given that most BGP outages or misconfigurations of this type only last a couple of hours until the mistake is discovered, it’s safe to say that fat fingers cause big BGP problems.

Buttoning Down BGP

How do we fix this? Well, aside from making sure that anyone touching BGP knows exactly what they’re doing? Not much. Some Regional Internet Registries (RIRs) require you to preconfigure new prefixes with them before they can be brought online. As mentioned in this Reddit thread, RIPE is pretty good about that. But some ISPs, especially ones in the US that work with ARIN, are less strict about that. And in some cases, they don’t even bring the pre-loaded prefixes online at the correct time. That can cause headaches when you’re trying to figure out why your networks aren’t being announced even though your config is right.

Another person pointed out the Mutually Agreed Norms for Routing Security (MANRS). These look like some very good common-sense measures that we need to be adopting to ensure that routing protocols are secure from hijacks and other issues. But MANRS is still a manual effort that relies on the people implementing it to know what they’re doing.

Lastly, another option would be the Resource Public Key Infrastructure (RPKI) service that’s offered by ARIN. This service allows people that own IP address space to specify which autonomous systems can originate their prefixes. In theory, this is an awesome idea that gives a lot of weight to trusting that only specific ASes are allowed to announce prefixes. In practice, it requires the use of PKI cryptographic infrastructure on your edge routers. And anyone that’s ever configured PKI on even simple devices knows how big of a pain that can be. Mixing PKI and BGP may be enough to drive people back to sniffing glue.
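The origin-validation logic itself is the easy part once the certificate plumbing is out of the way: a Route Origin Authorization (ROA) says which AS may originate a prefix, up to a maximum prefix length. A rough Python sketch of that check, with invented ROA data, looks something like this:

```python
from ipaddress import ip_network

# Hypothetical ROAs: (authorized prefix, max length, authorized origin AS)
roas = [
    (ip_network("198.51.100.0/22"), 24, 64500),
]

def rpki_validate(prefix, origin_as):
    """Return 'valid', 'invalid', or 'not-found' for a BGP announcement."""
    net = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if net.subnet_of(roa_prefix):
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rpki_validate("198.51.100.0/24", 64500))   # valid
print(rpki_validate("198.51.100.0/24", 64666))   # invalid -- wrong origin AS
print(rpki_validate("198.51.100.0/25", 64500))   # invalid -- too specific
print(rpki_validate("192.0.2.0/24", 64500))      # not-found -- no covering ROA
```

The hard part, as noted above, isn’t this comparison. It’s getting the signed objects and the validation infrastructure onto your edge routers in the first place.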


Tom’s Take

BGP works. It’s relatively simple and gets the job done. But it is far too trusting. It assumes that the people running the Internet are nerdy pioneers embarking on a journey of discovery and knowledge sharing. It doesn’t consider for one minute that bad people could be trying to hijack traffic. Or, worse still, that some operator fresh from getting his CCNP might reroute Facebook traffic through a Cisco 2524 router in Iowa. BGP needs to get better. Or we need to make some changes to ensure that even if BGP still believes the Internet is a utopia, someone is behind it to make sure those rose-colored glasses don’t cause it to walk into a bus.

Network Visibility with Barefoot Deep Insight

As you may have heard this week, Barefoot Networks is back in the news with the release of their newest product, Barefoot Deep Insight. Choosing to go down the road of naming a thing after what it actually does, Barefoot has created a solution to finding out why network packets are behaving the way they are.

Observer Problem

It’s no secret that modern network monitoring is coming out of the Dark Ages. ping, traceroute, and SNMP aren’t exactly the best tools for getting any kind of real information about what’s going on. They were designed for a different time with much less packet flow. Even NetFlow can’t keep up with modern networks running at multi-gigabit speeds. And even if it could, it’s still missing in-flight data about network paths and packet delays.

Imagine standing outside of the Holland Tunnel. You know that a car entered at a specific time. And you see the car exit. But you don’t know what happened to the car in between. If the car takes 5 minutes to traverse the tunnel you have no way of knowing if that’s normal or not. Likewise, if a car is delayed and takes 7-8 minutes to exit you can’t tell what caused the delay. Without being able to see the car at various points along the journey you are essentially guessing about the state of the transit network at any given time.

Trying to solve this problem in a network can be difficult. That’s because the OS running on the devices doesn’t generally lend itself to easy monitoring. The old days of SNMP proved that time and time again. Today’s networks are getting a bit better with regard to APIs and the like. You could even go all the way up the food chain and buy something like Cisco Tetration if you absolutely needed that much visibility.

Embedding Reporting

Barefoot solves this problem by using their P4 language in concert with the Tofino chipset to provide visibility into packets as they traverse the network. P4 gives Tofino the flexibility to build onto the data plane processing of a packet. Rather than bolting the monitoring on after the fact, you can now put it right alongside the packet flow and collect information as it happens.

The other key is that the real work is done by the Deep Insight Analytics software running outside of the switch. The analytics platform takes the data collected from the Tofino switches and starts processing it. It creates baselines of traffic patterns and starts looking for anomalies in the data. This is how Deep Insight claims to be able to detect microbursts: the monitoring platform can analyze the data being fed to it and provide the operator with insights.
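As a very rough sketch of the idea, and not Barefoot’s actual implementation: if every switch stamps sampled packets with its own timing data, the collector can compute per-hop latency, keep a baseline per hop, and flag the hops that suddenly deviate. The latency values and threshold below are invented for illustration.

```python
from statistics import mean, stdev

# Invented per-hop latency samples (microseconds) for one path, as a collector
# might assemble them from in-band telemetry stamps on sampled packets.
history = {
    "leaf1":  [4.0, 4.2, 3.9, 4.1, 4.0, 4.3, 4.1],
    "spine2": [6.1, 5.9, 6.0, 6.2, 6.1, 6.0, 5.8],
}

latest = {"leaf1": 4.2, "spine2": 48.5}   # spine2 just saw a queueing spike

def flag_anomalies(history, latest, sigmas=3.0):
    """Flag hops whose newest latency sample sits far outside the baseline."""
    alerts = []
    for hop, samples in history.items():
        baseline, spread = mean(samples), stdev(samples)
        if latest[hop] > baseline + sigmas * spread:
            alerts.append((hop, latest[hop], round(baseline, 1)))
    return alerts

print(flag_anomalies(history, latest))
# [('spine2', 48.5, 6.0)] -- something on spine2 (a microburst, a sick cable,
# an unbalanced ECMP hash) deserves a closer look.
```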

It’s important to note that this is information only. The insights gathered from Deep Insight are for informational purposes. This is where the skill of the network professional comes into play. The software gives you perspective into what could be causing issues like microbursts; your skills are what fix them. Perhaps it’s a misconfigured ECMP pair. Maybe it’s a dead or dying cable in a link. Armed with the data from the platform, you can work your networking magic to make it right.

Barefoot says that Deep Insight builds on itself via machine learning. While machine learning seems to be one of the buzzwords du jour, one can hope that a platform that can analyze the state of packets can start to build an idea of what’s causing them to behave in certain ways. While not mentioned in the press release, it could also be inferred that there are ways to upload the data from your system to a larger set of servers. Then you can have more analytics applied to the datasets and more insights extracted.


Tom’s Take

The Deep Insight platform is what I was hoping to see from Barefoot after I saw them earlier this year at Networking Field Day 14. They are taking the flexibility of the Tofino chip and the extensibility of P4 and combining them to build new and exciting things that run right alongside the data plane on the switches. This means that they can provide the kinds of tools that companies are willing to pay quite a bit for and do it in a way that is 100% capable of being audited and extended by brilliant programmers. I hope that Deep Insight takes off and sees wide adoption among Barefoot customers. That will be the biggest endorsement of what they’re doing and give them a long runway to build more in the future.

Does Juniper Need To Be Purchased?

You probably saw the news this week that Nokia was looking to purchase Juniper Networks. You also saw pretty quickly that the news was denied, emphatically. It was a curious few hours when the network world was buzzing about the potential to see Juniper snapped up into a somewhat larger organization. There was also talk of product overlap and other kinds of less exciting but very necessary discussions during mergers like this. Which leads me to a great thought exercise: Does Juniper Need To Be Purchased?

Sins of The Father

More than any other networking company I know of, Juniper has paid the price for trying to break out of their mold. When you think Juniper, most networking professionals will tell you about their core routing capabilities. They’ll tell you how Juniper has a great line of carrier and enterprise switches. And, if by some chance, you find yourself talking to a security person, you’ll probably hear a lot about the SRX Firewall line. Forward thinking people may even tell you about their automation ideas and their charge into the world of software defined things.

Would you hear about their groundbreaking work with Puppet from 2013? How about their wireless portfolio from 2012? Would anyone even say anything about Junosphere and their modeling environments from years past? Odds are good you wouldn’t. The Puppet work is probably bundled in somewhere, but the person driving it in that video is on to greener pastures at this point. The wireless story is no longer a story, but a footnote. And the list could go on longer than that.

When Cisco makes a misstep, we see it buried, written off, and eventually turned into the butt of inside jokes between groups of engineers that worked with the product during the short life it had on this planet. Sometimes it’s a hardware mistake. Other times it’s a software architecture misstep. But in almost every case, those problems are anecdotes you tell as you watch the 800lb gorilla of networking squash their competitors.

With Juniper, it feels different. Every failed opportunity is just short of disaster. Every misstep feels like it lands on a land mine. Every advance not expanded upon is the “one that got away”. Yet we see it time and time again. If a company like Cisco pushed the envelope the way we see Juniper pushing it we would laud them with praise and tell the world that they are on the verge of greatness all over again.

Crimes Of The Family

Why then does Juniper look like a juicy acquisition target? Why are they slowly being supplanted by Arista as the favored challenger to the Cisco Empire? How is it that we find Juniper in the crosshairs of everyone, fighting to stay alive?

As it turns out, wars are expensive. And when you’re gearing up to fight Cisco, you need all the capital you can get. That forces you to make alliances that may not be the best for you in the long run. And in the case of Juniper, it brought in some of the people that thought they could get in on the ground floor of a company that was ready to take on the 800lb gorilla and win.

Sadly, those “friends” tend to be the kind that desert you when you need them the most. When Juniper was fighting tooth and nail to build their offerings up to compete against Cisco, the investors were looking for easy gains and ways to make money. And when those investors realized that toppling empires takes more than two quarters, they got antsy. Some bailed. Those needed to go. But the ones that stayed caused more harm than good.

I’ve written before about Juniper’s issues with Elliott Capital Management, but it bears repeating here. Elliott is an activist investor in the same vein as Carl Icahn. They take a substantial position in a company and then immediately start demanding changes to raise the stock price. If they don’t get their way, they release paper after paper decrying the situation to the market until the stock price is depressed enough to get the company to listen to Elliott. Once Elliott’s demands are met, they exit their position. They get a small profit and move on to do it all over again, leaving behind a shell of a company wondering what happened.

Elliott has done this to Juniper over and over. Pulse VPN. Trapeze. They’ve demanded executive changes and forced Juniper to abandon good projects with long-term payoffs because they won’t bounce the stock price higher this quarter. And worse yet, if you look back over the last five years, you can find story after story in the financial press about Juniper being up for sale or being a potential acquisition target. Five. Years. When’s the last time you heard about Cisco being a potential target for buyout? Hell, even Arista doesn’t get shopped as much as Juniper.


Tom’s Take

I think these symptoms all share the same root issue. Juniper is a great technology company that does some exciting and innovative things. But, much like a beautiful potted plant in my house, they have reached the maximum size they can grow to without making a move. Like a plant, they can only grow as big as their container. If you leave them in a small one, they’ll only ever be small. You can transfer them to something larger, but you risk harm or death. And yet they’ll never grow if they don’t change. Juniper has the minds and the capability to grow. And maybe, with the eyes of the Wall Street buzzards looking elsewhere for a while, they can build a practice that gives them the capability to challenge in the areas they are good at, not just being the answer for everything Cisco is doing.