Reaction: Should routing react to the data plane?
Over at Packet Pushers, there’s an interesting post asking why we don’t use actual user traffic to detect network failures, and hence to drive routing protocol convergence; or, put another way, asking why routing doesn’t react to the data plane.
This is, indeed, an interesting question, and one that’s highly relevant in our current software defined/driven world. So why not? Let me give you two lines of thinking that might be used to answer this question.
First, let’s consider the larger problem of fast convergence. Anyone who’s spent time in any of my books, or sat through any of my presentations, should know the four steps to convergence—but just in case, let’s cover them again, using a slide from my forthcoming LiveLesson on IS-IS:
There are four steps—detect, report, calculate, and install. The primary point the original article makes is that we might be able to detect a failure faster by seeing traffic flows stop than we can through some other form of detection in the control plane. But is this true? Let’s try to build such a system in our “imagination space” (think of your brain as just another VM maybe) and see what we can figure out.
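To make this concrete, here’s a quick back-of-the-envelope sketch in Python; the timings are entirely made up for illustration (real numbers vary wildly by platform), but they show why detection is only one term in the total convergence time:

```python
# Back-of-the-envelope convergence math. The timings below are invented
# purely for illustration; real numbers vary wildly by platform.
STEPS_MS = {
    "detect":    50,  # notice the failure (the step the article targets)
    "report":    20,  # flood/advertise the change through the control plane
    "calculate": 30,  # run SPF (or equivalent) over the new topology
    "install":   40,  # push the resulting routes into the forwarding table
}

total = sum(STEPS_MS.values())
print(f"total convergence: {total} ms")
for step, ms in STEPS_MS.items():
    print(f"  {step:<9} {ms:3d} ms ({ms / total:.0%})")
# Even if faster detection cut the first term in half, the other three
# steps would still dominate the total.
```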
Since event-driven mechanisms are (almost) always faster than polling-driven mechanisms, let’s construct this system in an event-driven way. Let’s say we build a router that keeps track of flows and, when it sees the set of flows to a particular host or destination stop, takes the route to that destination out of its local table, and then notifies any local routing processes that the destination is down. This is similar to the way a router treats an interface, only at a flow level.
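Here’s a minimal sketch of what such a router might look like; every name in it (FlowMonitor, withdraw_route, and so on) is hypothetical, invented for illustration, since no real router exposes this interface:

```python
# A minimal sketch of the event-driven flow monitor described above.
# FlowMonitor and withdraw_route are hypothetical names invented for
# illustration; no real router exposes this interface.
from collections import defaultdict

class FlowMonitor:
    """Track active flows per destination; when the last flow to a
    destination dies, notify the control plane, much the way an
    interface-down event would."""

    def __init__(self, on_destination_silent):
        self.flows = defaultdict(set)  # destination -> set of flow IDs
        self.on_destination_silent = on_destination_silent

    def flow_started(self, dest, flow_id):
        self.flows[dest].add(flow_id)

    def flow_stopped(self, dest, flow_id):
        self.flows[dest].discard(flow_id)
        if not self.flows[dest]:
            # Every flow to this destination has gone quiet; treat it
            # like a link failure and tell the routing process.
            self.on_destination_silent(dest)

def withdraw_route(dest):
    # Stand-in for "remove the route and notify local routing processes."
    print(f"withdrawing route to {dest}")

monitor = FlowMonitor(withdraw_route)
monitor.flow_started("2001:db8:0:1::1", flow_id=1)
monitor.flow_started("2001:db8:0:1::1", flow_id=2)
monitor.flow_stopped("2001:db8:0:1::1", flow_id=1)
monitor.flow_stopped("2001:db8:0:1::1", flow_id=2)  # last flow: withdraw fires
```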
But this idea creates two more questions.
First, how do I know all the flows to this device aren’t supposed to be stopping for some reason? It might seem suspicious to a router that every flow being transmitted to a single host would stop at the same time, but it might mean something as simple as the host’s processes finishing all their jobs. Or it could mean all the traffic going to this host has suddenly switched to another path, so the route is still valid; it’s just no longer used.
How can I tell the difference between these situations? Let’s say I start monitoring the state of each flow, rather than just its existence, so I can see all the TCP FINs and say, “oh, all these flows are ending gracefully, so the host really isn’t going offline; it’s just done working for the moment.” But now, rather than just monitoring flows, I’m actually monitoring the state of those flows. And even with this solution, I still have more problems to address. For instance, what if all the TCP sessions end just as the host actually crashes? This might seem unlikely, but in a large enough network, all sorts of odd things are going to happen. It’s better to consider the corner cases before they happen, rather than at 2am when you’re trying to resolve a problem caused by one.
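Here’s what that escalation looks like in sketch form; again, all the names are hypothetical, and the point is only to show that the monitor now has to parse TCP flags rather than just count flows:

```python
# The same monitor, now forced to track TCP state. All names are still
# hypothetical, invented for illustration. A flow that ends without a
# FIN marks the destination as having gone silent ungracefully.
from collections import defaultdict

GRACEFUL, SILENT = "graceful", "silent"

class StatefulFlowMonitor:
    def __init__(self):
        self.flows = defaultdict(dict)       # dest -> {flow_id: saw_fin}
        self.ungraceful = defaultdict(bool)  # dest -> any flow died w/o FIN

    def flow_started(self, dest, flow_id):
        self.flows[dest][flow_id] = False

    def fin_seen(self, dest, flow_id):
        # The monitor must now inspect TCP flags, not just count flows.
        if flow_id in self.flows[dest]:
            self.flows[dest][flow_id] = True

    def flow_stopped(self, dest, flow_id):
        saw_fin = self.flows[dest].pop(flow_id, False)
        if not saw_fin:
            self.ungraceful[dest] = True
        if not self.flows[dest]:
            # Verdict once all flows to this destination are gone. A host
            # that FINs every session and *then* crashes still looks
            # GRACEFUL here: exactly the corner case described above.
            return SILENT if self.ungraceful.pop(dest, False) else GRACEFUL
        return None

monitor = StatefulFlowMonitor()
monitor.flow_started("2001:db8:0:1::1", flow_id=1)
monitor.fin_seen("2001:db8:0:1::1", flow_id=1)
print(monitor.flow_stopped("2001:db8:0:1::1", flow_id=1))  # graceful
```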
In terms of the complexity model, the control plane and the data plane are two different systems, and there is an interaction surface between these two systems. In this proposal, we’re deepening the interaction surface, which means we’re increasing complexity. The tradeoff might be (though it will rarely be) faster convergence, but at the cost of systems that must interact more deeply, and hence become more like one system than two.
Second, how do I know which route to remove? IP networks hide information in order to scale; there’s almost no way to scale to something like the Internet without aggregating information someplace. To put this in other terms, the Internet already doesn’t converge. How well would it converge if we were keeping track of the state of each host, rather than the state of each subnet? Probably not very well, I’m thinking. I can’t tell the network to stop sending traffic to 2001:db8:0:1::1 if the only route I have in the local table is to 2001:db8:0:1::/64; I’d probably cause more problems than I’m solving.
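You can see the problem with nothing more than Python’s standard ipaddress module, using the documentation prefixes from above:

```python
# Illustrating the aggregation problem with Python's standard ipaddress
# module. The local table holds only the aggregate, so there is nothing
# host-specific to withdraw.
import ipaddress

table = [ipaddress.ip_network("2001:db8:0:1::/64")]  # only the aggregate
dead_host = ipaddress.ip_address("2001:db8:0:1::1")

covering = [net for net in table if dead_host in net]
print(covering)  # [IPv6Network('2001:db8:0:1::/64')]

# Pulling the covering /64 blackholes every other host in the subnet;
# installing a 2001:db8:0:1::1/128 instead means carrying per-host
# state, which is exactly what aggregation exists to avoid.
host_route = ipaddress.ip_network("2001:db8:0:1::1/128")
print(host_route.subnet_of(table[0]))  # True: the /64 hides this host
```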
In terms of the complexity model, adding per-host state into the network would actually be adding complexity to the state side of the triangle. I’m not only adding to the amount of state (there are more hosts than subnets), but also to the speed at which that state changes, as hosts change state more often than subnets do.
Using the complexity model here helps me to see where I’m adding complexity, which is why you should care about understanding complexity as a network designer. In fact, this is why I chose to write a reply to my friends over at Packet Pushers—because this is such a good example of how understanding the complexity tradeoffs can help you analyze a particular question and come to a solid conclusion.
In the end, then, I’d judge this “not the best idea in the world.” I can see where it might be useful in a small range of cases, but it probably isn’t generalizable to the larger networking world.