Knowing Where to Look
If you haven’t found the tradeoffs, you haven’t looked hard enough. It’s something I say rather often; as Eyvonne would say, a “Russism.” Fair enough, but it’s easy to say “if you haven’t found the tradeoffs, you haven’t looked hard enough.” What does it mean, exactly? How do you apply it to the everyday world of designing, deploying, operating, and troubleshooting networks?
Humans tend toward extremes in their thinking. In many cases, we end up treating everything as a zero-sum game, where any gain on someone else’s part means an immediate and opposite loss on our part. In others, we end up thinking we are going to get a free lunch. The reality is there is no such thing as a free lunch, and while some situations really are zero-sum, not all of them are. What we need is a way to “cut the middle”: a way to realistically appraise each situation and decide what the tradeoffs might be.
This is where the state/optimization/surface (SOS) model comes into play. You’ll find this model described in several of my books alongside some thoughts on complexity theory (see the second chapter here, for instance, or here), but I don’t spend a lot of time discussing how to apply this concept. The answer lies in the intersection between looking for tradeoffs and the SOS model.
TL;DR version: the SOS model tells you where you should look for tradeoffs.
Take the time-worn example of route aggregation, which improves the operation of a network by reducing the “blast radius” of changes in reachability. Combining aggregation with summarization (as is almost always the intent) reduces the “blast radius” of changes in the network topology as well. The way aggregation and summarization reduce the “blast radius” is simple: if you define a failure domain as the set of devices that must somehow react to a change in the network (the correct way to define a failure domain, by the way), then aggregation and summarization reduce the failure domain by hiding changes in one part of the network from devices in another part of the network.
Note: the depth of the failure domain is relevant, as well, but not often discussed; this is related to the depth of an interaction surface, but since this is merely a blog post . . .
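To make this concrete, here is a minimal, protocol-agnostic Python sketch. The prefixes, function names, and the simple “pod behind a border router” setup are all hypothetical; the point is only to show which case forces remote routers to react when a more-specific prefix inside the pod fails.

```python
# Sketch (hypothetical, protocol-agnostic): a border router advertises either
# every more-specific prefix or a single aggregate; remote routers only have
# to react when the set of advertised routes changes.

import ipaddress

def advertise(prefixes, aggregate=None):
    """Return the set of routes the border router announces to the rest of the network."""
    if aggregate is None:
        return set(prefixes)          # every more-specific prefix leaks out
    return {aggregate}                # the summary hides the more-specifics

def failure_domain_reaction(before, after):
    """Remote devices are in the failure domain only if the advertised set changes."""
    return "remote routers must react" if before != after else "change stays local"

pod_prefixes = ["10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24"]
aggregate = "10.1.0.0/16"

# One more-specific prefix fails inside the pod.
remaining = pod_prefixes[:-1]

# Sanity check: the aggregate still covers everything that is left.
assert all(ipaddress.ip_network(p).subnet_of(ipaddress.ip_network(aggregate))
           for p in remaining)

# Without aggregation, the advertisement changes, so every router carrying
# these routes ends up inside the failure domain.
print(failure_domain_reaction(advertise(pod_prefixes), advertise(remaining)))

# With aggregation, the advertisement does not change; the failure domain
# stops at the aggregating router.
print(failure_domain_reaction(advertise(pod_prefixes, aggregate),
                              advertise(remaining, aggregate)))
```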
According to SOS, route aggregation (and topology summarization) is a form of abstraction, which means it is a way of controlling state. If we control state, we should see a corresponding tradeoff in interaction surfaces and a corresponding tradeoff in some form of optimization. Given these two pointers, we know where to search for the tradeoffs. Let’s start with interaction surfaces.
Observe that aggregation is normally manually configured; this is an interaction surface. The human-to-device interaction surface now needs to account for the additional work of designing, configuring, maintaining, and troubleshooting around aggregation; these things add complexity to the network. Further, the routing protocol itself must be designed to support aggregation and summarization, so the design of the protocol must also be more complex. This added complexity often comes in the form of . . . additional interaction surfaces, such as the conversion of not-so-stubby area (NSSA) externals to standard externals in OSPF, or something similar.
Now let’s consider optimization. Controlling failure domains allows you to build larger, more stable networks; this is an increase in optimization. At the same time, aggregation removes information from the control plane, which can cause some traffic to take a suboptimal path (if you want examples of this, look at the books referenced above). Traffic taking a suboptimal path is a decrease in optimization. Finally, building a larger network means you are also building a more complex network, so we can see the increase in complexity here, as well.
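As a toy illustration of the suboptimal-path tradeoff, consider a remote router choosing between two exits. This is a sketch only: the selection logic is just longest-prefix match followed by lowest cost, not any particular protocol’s full decision process, and the prefixes, costs, and router names are made up.

```python
# Sketch: with only the aggregate in its table, a remote router cannot see
# that a particular destination actually sits behind the farther exit.

import ipaddress

def best_exit(table, destination):
    """table: list of (prefix, exit, cost). Longest prefix wins, then lowest cost."""
    dst = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(p), exit_, cost)
               for p, exit_, cost in table
               if dst in ipaddress.ip_network(p)]
    matches.sort(key=lambda m: (-m[0].prefixlen, m[2]))
    return matches[0][1] if matches else None

# Full (unaggregated) table: the more-specific points at the optimal exit R2.
full_table = [
    ("10.1.0.0/16", "R1", 10),
    ("10.1.0.0/16", "R2", 20),
    ("10.1.2.0/24", "R2", 20),   # this destination actually lives behind R2
]

# Aggregated table: only the summary survives, so cost alone decides.
aggregated_table = [
    ("10.1.0.0/16", "R1", 10),
    ("10.1.0.0/16", "R2", 20),
]

print(best_exit(full_table, "10.1.2.1"))        # -> R2 (optimal path)
print(best_exit(aggregated_table, "10.1.2.1"))  # -> R1 (suboptimal: traffic
                                                #    enters through R1 and must
                                                #    cross the pod to reach the
                                                #    destination behind R2)
```

Less state in the control plane, slightly worse paths in the data plane: exactly the kind of tradeoff the SOS model says to expect.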
Experience is often useful in giving you more specific places to look for these sorts of things, of course. If you understand the underlying problems and solutions (hint, hint), you will know where to look more quickly. If you understand common implementations and their weak points, you will be able to quickly pinpoint where a particular implementation is likely to give way. History might not repeat itself, but it certainly rhymes.
I have spent many years building networks, protocols, and software. I have never found a situation where the SOS model, combined with a solid knowledge of the underlying problems and solutions (or perhaps the technologies and implementations used to solve those problems), has led me astray; it has always helped me quickly find the tradeoffs so I could see, and then analyze, them.