An article came out this week that really made me sigh. The title was “Six Aging Protocols That Could Cripple The Internet”. I dove right in, expecting to see how things like Finger were old and needed to be disabled and removed. Imagine my surprise when I saw things like BGP4 and SMTP on the list. I really tried not to smack my own forehead as I flipped through the slideshow of how the foundation of the Internet is old and is at risk of meltdown.
If It Ain’t Broke
Engineers love the old adage “If it ain’t broke, don’t fix it!”. We spend our careers planning and implementing. We also spend a lot of time afterwards not touching things in order to keep them from collapsing in a big heap. Once something is put in place, it tends to stay that way until something necessitates a change.
BGP is a perfect example. The basics of BGP remain largely the same as when it was first implemented years ago. BGP4 has been in use since 1994 even though RFC 4271 didn’t officially formalize it until 2006. It remains a critical part of how the Internet operates. According to the article, BGP is fundamentally flawed because it’s insecure and trust based. BGP hijacking has been occurring with increasing frequency, even as resources to combat it are being hotly debated. Is BGP to blame for the issue? Or is it something more deeply rooted?
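The “trust based” part is the key: a router believes whatever origin a peer announces. The fix the community has settled on is route origin validation against RPKI data. As a rough sketch of that idea only, here is a toy validator in Python that checks an announced prefix and origin AS against a hard-coded ROA-style table. The table contents, AS numbers, and function name are all illustrative; real deployments pull this data from RPKI validator software, not a dict.

```python
import ipaddress

# Hypothetical ROA-style table: prefix -> (authorized origin AS, max prefix length).
# In real deployments this data comes from an RPKI validator, not a hard-coded dict.
ROAS = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
    ipaddress.ip_network("198.51.100.0/22"): (64501, 24),
}

def validate_origin(prefix: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'unknown' for an announced route."""
    announced = ipaddress.ip_network(prefix)
    covering = [
        (asn, maxlen)
        for roa, (asn, maxlen) in ROAS.items()
        if announced.subnet_of(roa)
    ]
    if not covering:
        return "unknown"          # no ROA covers this prefix at all
    for asn, maxlen in covering:
        if asn == origin_as and announced.prefixlen <= maxlen:
            return "valid"
    return "invalid"              # covered, but wrong AS or too-specific prefix

print(validate_origin("203.0.113.0/24", 64500))  # valid
print(validate_origin("203.0.113.0/24", 64999))  # invalid (hijack-style announcement)
print(validate_origin("192.0.2.0/24", 64500))    # unknown
```

Note how a hijack shows up as “invalid” only when a ROA exists; an uncovered prefix is merely “unknown”, which is exactly why partial RPKI deployment still leaves gaps.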
Don’t Fix It
The issues with BGP and other protocols mentioned in the article, including IPv6, aren’t due to the way the protocols were constructed. They’re due in large part to the humans who implement those protocols. BGP is still in use in its current insecure form because it works. And because no one has proposed a simple replacement that accomplishes the goal of fixing all the problems.
Look at IPv6. It solves the address exhaustion issue. It solves hierarchical addressing issues. It restores end-to-end connectivity on the Internet. And yet adoption numbers still languish in the single digits. Why? Is it because IPv6 isn’t technically superior? Or because people don’t want to spend the time to implement it? It’s expensive. It’s difficult to learn. Reconfiguring infrastructures to support new protocols takes time and effort. Time and effort that are better spent on answering user problems or taking on additional tasks as directed by management that doesn’t care about BGP insecurity until the Internet goes down.
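Part of what softens the transition cost is that dual-stack hosts don’t need separate application logic for each protocol. As a small illustration only (the helper name is mine, not from any standard library beyond `socket`), resolving a name with `getaddrinfo` hands back both IPv6 and IPv4 candidates, and “Happy Eyeballs” style clients simply try IPv6 first and fall back:

```python
import socket

def resolve_dual_stack(host: str, port: int):
    """List candidate (family label, address) pairs for a host, IPv6 and IPv4."""
    results = []
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        results.append((label, sockaddr[0]))
    return results

# Clients that implement Happy Eyeballs (RFC 8305) attempt the IPv6
# candidates first and fall back to IPv4, so upgraded servers get used
# automatically wherever they exist.
for label, addr in resolve_dual_stack("localhost", 80):
    print(label, addr)
```

The application code stays the same either way; the pain of the migration lives almost entirely in the network and addressing plan underneath it.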
It Hurts When I Do This
Instead of complaining about how protocols are insecure, the solution to the problem should be twofold: First, we need to start building security into protocols and expiring their older, insecure versions. POODLE exploited SSLv3, an older version that served as a fallback to TLS. While some old browsers still used SSLv3, the simplest solution was to disable SSL and force people to upgrade to TLS-capable clients. In much the same way, protocols like NTP and BGP can be modified to use stronger security. Instead of merely suggesting that people use the secure versions, architects and engineers need to implement those versions and discourage use of the old insecure protocols by disabling them. It’s not going to be easy at first. But as the movement gains momentum, the solution will work.
The next step in the process is to build easy-to-configure replacements. Bolting security onto a protocol after the fact does stop the bleeding. But to fix the underlying disease, the security needs to be baked into the protocol from the beginning. But doing this with an entirely new protocol that has no backwards compatibility will be the death of that new protocol. Just look at how horrible the transition to IPv6 has been. Lack of an easy transition coupled with no monetary incentive and lack of an imminent problem caused the migration to drag out until the eleventh hour. And even then there is significant pushback against an issue that can no longer be ignored.
Building the next generation of secure Internet protocols is going to take time and transparent effort. People need to see what’s going into something to understand why it’s important. The next generation of engineers needs to understand why things are being built the way they are. We’re lucky in that many of the people responsible for building the modern Internet are still around. When asked about limitations in protocols the answer remains remarkably the same – “We never thought it would be around this long.”
The longevity of quick fixes seems to be the real issue. When the next generation of Internet protocols is built there needs to be a built-in expiration date. A point of no return beyond which the protocol will cease to function. And there should be no method for extending the shelf life of a protocol to forestall its demise. In order to ensure that security can’t be compromised we have to resign ourselves to the fact that old things need to be put out to pasture. And the best way to ensure that new things are put in place to supplant them is to make sure the old things go away on time.
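What would a built-in expiration date look like in practice? Here is one possible shape, nothing more than a sketch: each protocol version carries a published sunset date, and the negotiation step refuses any version past its date. The protocol names, dates, and function are all hypothetical.

```python
from datetime import date
from typing import Optional

# Hypothetical sunset table: each protocol version ships with a hard
# expiration date, after which peers must have migrated to a successor.
PROTOCOL_SUNSET = {
    "examplewire/1": date(2020, 1, 1),   # expired
    "examplewire/2": date(2030, 1, 1),   # still in service
}

def negotiate(version: str, today: Optional[date] = None) -> bool:
    """Accept a version only if its sunset date has not passed."""
    today = today or date.today()
    sunset = PROTOCOL_SUNSET.get(version)
    if sunset is None:
        raise ValueError(f"unknown protocol version: {version}")
    return today < sunset  # past the sunset date, the handshake fails

print(negotiate("examplewire/1", date(2021, 6, 1)))  # False: refused
print(negotiate("examplewire/2", date(2021, 6, 1)))  # True: accepted
```

The point of the design is that there is no override flag: once the date passes, the only path forward is the replacement protocol, which is exactly the forcing function the paragraph above calls for.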
The Internet isn’t fundamentally broken. It’s a collection of things that work well in their roles but have perhaps been kept in service a little longer than necessary. The probability of an exploit being created for something rises with every passing day it is still in use. We can solve the issues of the current Internet with some security engineering. But to make sure the problem never comes back again, we have to make the hard choice to expire protocols on a regular basis. It will mean work. It will create strife. And in the end we’ll all be better for it.