You wouldn’t think that AWS re:Invent would be a big week for networking, would you? Most of the announcements focus on everything related to the data center, and teasing out the networking-specific pieces isn’t easy. That’s why I found the mention of a new-ish protocol in an unrelated article to be fascinating.
In this Register piece about CPUs there’s a mention of the Nitro DPU. More importantly there’s also a reference to something that Amazon has apparently been working on for the last couple of years. It turns out that the world’s largest online bookstore and data center company is looking to get rid of TCP.
The new protocol was developed in 2020. Referred to as Scalable Reliable Datagram (SRD), it was built to solve specific performance challenges Amazon was seeing in their cloud. Amazon decided that TCP had bigger issues for their workloads that needed to be addressed.
The first was that dropped packets required retransmission. In an environment like the Internet that makes sense. You want to get the data you lost. However, when TCP was developed fifty years ago the amount of data lost in transit was tiny compared to the flows of today. Likewise, TCP doesn’t really know how to take advantage of multipath flows. That’s because any packet that arrives out of order requires the whole stream to be reassembled before it can be read by the operating system. That makes total sense when you think about the relative power of a computer back in the 70s and 80s. If the CPU is already super busy trying to do other things you don’t want it to have to deal with a messy stream of packets too. Halting the flow until it’s reassembled makes sense.
Today, Amazon would rather have the packets arrive over multiple different paths and be reassembled higher up the stack. Instead of forcing the system to stop transmitting the entire flow until the lost or out-of-order pieces are fixed, they’d rather get the whole thing delivered and start putting the puzzle together while the I/O interface is processing the next flow. That’s because the size of modern flows is creating communications issues between systems. They would rather accept slightly higher latency on individual packets in exchange for better overall throughput.
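The idea of spraying packets across multiple paths and reassembling them higher up the stack can be sketched with a toy model. This is purely illustrative Python assuming a simple sequence-number scheme — SRD’s actual wire format and reassembly logic aren’t public:

```python
# Toy model of out-of-order reassembly "up the stack": packets arrive
# over multiple paths in any order, get buffered, and the stream is
# released as soon as each contiguous prefix is complete. The sender
# never has to stall and retransmit the whole flow.

def reassemble(packets):
    """packets: iterable of (sequence_number, payload) in arrival order.
    Yields payloads in sequence order as gaps fill in."""
    buffer = {}
    next_seq = 0
    for seq, payload in packets:
        buffer[seq] = payload            # accept out-of-order arrivals
        while next_seq in buffer:        # release any contiguous prefix
            yield buffer.pop(next_seq)
            next_seq += 1

# Packets 0-4 sprayed across different paths arrive out of order:
arrivals = [(2, "c"), (0, "a"), (3, "d"), (1, "b"), (4, "e")]
print("".join(reassemble(arrivals)))  # -> abcde
```

In strict-ordering TCP terms, the arrival of packet 2 before 0 and 1 would stall delivery of everything behind it; here it just sits in the buffer until the gap fills.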
DPUs Or Bust
If this is so obvious, why has no one rewritten TCP before? Well, we hinted at it when we discussed the fact that TCP stops the flow to sort out the mess. With something like SRD, the CPU would be fully tasked with reassembling flows because networking communications are constant. If you’re hoping that whatever TCP’s successor is will just magically sort things out, you’re going to find a system that is quite busy with things that shouldn’t be taking so much time. The perceived gains in performance are going to evaporate.
The key to how SRD actually works lies not in the protocol but in the way it’s implemented in hardware. SRD only works if you’re using an AWS Nitro DPU. When Amazon says they want the packets to be reassembled “up the stack” they’re really saying they want the DPU to do the work of putting the pieces back together before presenting the reassembled data to the system. The system itself doesn’t know the packets came in out of order. The system doesn’t even know how the packets arrived. It just knows it sent data somewhere else and that it arrived with no errors.
The magic here is the DPU. TCP works for the purpose it was designed to do. If you’re transmitting packets over a wide area in serial fashion and there’s packet loss on the link TCP is the way to go. Amazon SRD only works with Nitro-configured systems in AWS. It’s a given that many servers are now in AWS and more than a few are going to have this additional piece of hardware installed and configured. The value is that having this enabled is going to increase performance. But there’s a cost.
I think back to configuring jumbo Ethernet frames for a storage array back in 2011. Enabling 9000-byte frames between the switch and the array really increased performance. However, if you plugged the array in anywhere else, or plugged anything else into that port on the switch, it broke. Why? Because 9000-byte Ethernet isn’t the standard. It’s a special configuration that only works when explicitly enabled.
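The performance gain from jumbo frames comes mostly from amortizing fixed per-frame overhead across a larger payload. A back-of-the-envelope calculation, using the standard overhead figures for Ethernet, IPv4, and TCP:

```python
# Wire efficiency for standard vs. jumbo Ethernet frames carrying TCP.
# Overhead figures are the usual ones: 18 bytes of Ethernet header+FCS,
# 20 bytes of preamble and inter-frame gap on the wire, and 20 bytes
# each for the IPv4 and TCP headers inside the MTU. Illustrative
# arithmetic only.

ETH_OVERHEAD = 18 + 20   # header+FCS plus preamble/inter-frame gap
IP_TCP = 20 + 20         # IPv4 + TCP headers inside the MTU

def efficiency(mtu):
    payload = mtu - IP_TCP
    return payload / (mtu + ETH_OVERHEAD)

print(f"1500-byte MTU: {efficiency(1500):.1%}")  # ~94.9%
print(f"9000-byte MTU: {efficiency(9000):.1%}")  # ~99.1%
```

A few percent of wire efficiency, plus roughly six times fewer frames (and interrupts) per gigabyte, is where the storage-array speedup came from — and both ends plus every switch in between have to agree on the MTU, which is exactly why it broke everywhere else.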
Likewise, SRD works well within Amazon. You need to specifically enable it on your servers, and those performance gains aren’t going to happen if you need to talk to something that isn’t SRD-enabled or isn’t configured with a Nitro DPU. Because the hard work is being done by the DPU, you don’t have to worry about a custom hardware configuration in your application ruining your ability to migrate. However, if you’re relying on a specific performance level in the app to make things happen, like database lookups or writing files to a SAN, you’re not going to be able to move that app to another cloud without incurring a performance penalty.
I get that companies like Amazon are heavily invested in Ethernet but want higher performance. I also get that DPUs are going to enable us to do things with systems that we’ve never even really considered before, like getting rid of TCP and optimizing how packets are sent through the network. It’s a grand idea, but it breaks compatibility and creates a gap in performance expectations as well. We already see this in the wireless space. People want gigabit file transfers with Wi-Fi 6 and are disappointed because their WAN connection isn’t that fast. If Amazon is promising super fast communications between systems, users are going to be disappointed when they don’t get that in other clouds or between non-DPU systems. It creates lock-in and sets expectations that can’t be met. The future of networking is going to involve DPUs. But in order to really change the way we do things we need to work with the standards we have and make sure everyone can benefit.