I’ve always been a science nerd. Especially when it comes to astronomy. I’ve always been fascinated by stellar objects. The most fascinating has to be the black hole. A region of space with intense gravity formed from the collapse of a stellar body. I’ve read all about the peculiar properties of classical black holes. Of particular interest to the networking field is the idea of the event horizon.
An event horizon is a boundary beyond which events no longer affect observers. In layman’s terms, it is the point of no return for things falling into a black hole. Anything that falls below the event horizon disappears from the perspective of the observer. From the point of view of someone falling into the black hole, they never reach the event horizon yet are unable to contact the outside world. The event horizon marks the point at which information disappears in a system.
How does this apply to the networking world? Well, every system has a visibility boundary. We tend to summarize information heading in both directions. To a network engineer, everything above the transport layer of the OSI model doesn’t really matter. Those are application decisions made by programmers, and they don’t affect the system other than to be a drain on resources. To the programmers and admins, anything below the session layer of the OSI model is of little importance. As long as a call can be made to utilize network resources, who cares what’s going on down there?
Software Defined Networking (SDN) vendors are enforcing these event horizons. VMware NSX and the Microsoft Hyper-V virtual networking solution both function in a similar manner. They both create overlay networks that virtualize resources below the level of the host. Tunnels are created between devices or systems that ride on top of the physical network beneath. This means that the overlay can function no matter the state of the underlay, provided a path between the two hosts exists. However, it also means that the overlay obscures the details of the physical network.
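As a concrete sketch of why the overlay obscures the underlay, consider VXLAN-style encapsulation (one common overlay mechanism; NSX and Hyper-V use similar approaches, though the exact formats differ). The underlay only ever sees the outer headers; the inner frame is opaque payload, which is exactly the event horizon in action:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap an inner Ethernet frame in an 8-byte VXLAN header.

    The physical network forwards on the outer IP/UDP headers alone;
    everything inside this payload is invisible to it.
    """
    flags = 0x08  # "I" flag set: the VNI field is valid
    # Header layout: 1 byte flags, 3 reserved bytes, 24-bit VNI, 1 reserved byte
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + inner_frame


def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Recover the VNI and the inner frame at the far tunnel endpoint."""
    flags, word = struct.unpack("!B3xI", packet[:8])
    assert flags & 0x08, "VNI flag not set"
    return word >> 8, packet[8:]
```

The segmentation ID (VNI) is the only overlay detail the tunnel endpoints share; the underlay's routing decisions happen entirely on the outer packet.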
There are many who would call this abstraction. Why should the hosts care about the network state? All they really want is a path to reach a destination. It’s better to give them what they want and leave the gory details up to routing protocols and layer 2 loop avoidance mechanisms. But that abstraction becomes an event horizon when the overlay is unwilling (or unable) to process information from the underlay network.
Applications and hosts should be aware enough to listen to network conditions. Overlays should not rely on OSPF or BGP to make tunnel endpoint rerouting decisions. Putting undue strain on network processing is part of what has led to the situation we have now, where network operating systems need to be complex and intensive to calculate solutions to problems that could be better solved at a higher level.
If the network reports a traffic condition, like a failed link or a congested WAN circuit, that information should be able to flow back up to the overlay and act as a data point to trigger an alternate solution or path. Breaking the event horizon for information flowing back up toward the overlay is crucial to allow the complex network constructs we’ve created, such as fabrics, to utilize the best possible solutions for application traffic.
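To make that feedback loop concrete, here is a minimal hypothetical sketch (the class and its event interface are my own invention, not any vendor's API) of an overlay controller that subscribes to underlay events and steers tunnels onto alternate paths when a link fails or congests:

```python
class OverlayController:
    """Hypothetical overlay controller that listens to underlay events
    instead of treating the physical network as a black box."""

    def __init__(self, tunnels):
        # tunnels: tunnel endpoint -> candidate underlay paths, in preference
        # order; each path is a list of link names it traverses
        self.tunnels = tunnels
        # Start every tunnel on its most-preferred path
        self.active = {ep: paths[0] for ep, paths in tunnels.items()}

    def on_underlay_event(self, bad_link):
        """Called when the underlay reports a failed or congested link."""
        for ep, paths in self.tunnels.items():
            if bad_link in self.active[ep]:
                # Steer the tunnel onto the first candidate path that
                # avoids the troubled link, if one exists
                for path in paths:
                    if bad_link not in path:
                        self.active[ep] = path
                        break
```

The point isn't the rerouting logic, which is deliberately naive here; it's that the underlay has a channel to push information up, so the overlay can act on it instead of waiting for tunnel keepalives to time out.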
That’s not to say the event horizon doesn’t exist in the other direction as well. The network has historically been ignorant of the needs of applications at a higher layer. Network engineers have spent thousands of hours creating things like Quality of Service in an attempt to meet the unique needs of higher level programs. Sometimes this works in a vacuum with no problems, provided we’ve guessed accurately enough to predict traffic patterns. Other times, it fails spectacularly when the information changes too quickly.
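One small example of information flowing down instead of being guessed at: an application can mark its own traffic with a DSCP value, and the network can prioritize on that marking rather than relying on the engineer's predicted traffic patterns. A minimal sketch using the standard socket API (DSCP 46 is Expedited Forwarding, the usual marking for latency-sensitive traffic):

```python
import socket

# Expedited Forwarding: the DSCP value typically used for real-time traffic
EF_DSCP = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the 6-bit DSCP value in its upper six bits,
# so the DSCP is shifted left by two before being set
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
```

Every packet this socket sends now carries the application's own statement of its needs, which is far more reliable than a classifier on the network side trying to guess them.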
The underlay network needs to destroy the event horizon that prevents information at higher layers from flowing down into the network. Companies that have historically concentrated on networking alone have started to see how important this intelligence can be. By allowing the network to respond quickly to the needs of applications, developers can provide enough information to ensure their programs are treated fairly under changing network conditions, without having to monitor those conditions themselves. In this way, the application people can no longer claim the network is a “black hole”.
Even as I was writing this, a number of news stories came out about a paper by Professor Stephen Hawking stating that the classical event horizon doesn’t exist. The short, short version is that the conditions close to a quantum singularity preclude a well-defined boundary that prevents the escape of all information, including light. Pretty heady stuff.
In networking, we do have the luxury of a well-defined boundary between underlay networks and overlay networks. We’ve seen the damage caused by the apparent event horizon for years. Critical information wasn’t flowing back and forth as needed to help each side provide the best experience for users and engineers. We need to ensure that this barrier is removed going forward. The networking people can’t exist in a vacuum pretending that applications don’t have needs. The overlay admins need to understand that the underlay is a storehouse of critical information and shouldn’t be ignored simply because tunnels are awesome. Knowing about the event horizon is the first step to finding a way to blast through it.