Simon Weckert recently hacked Google Maps into guiding drivers around a street through a rather simple mechanism: he placed 95 cellphones, all connected to Google Maps, in a little wagon and walked down the street with the wagon in tow. Maps saw this group of cell phones as a very congested street—95 cars could not even physically fit on the street he was walking down—and guided other drivers around the area. The idea is novel, and the result rather funny, but it also illustrates a weakness in our “modern scientific mindset” that often bleeds over into network engineering.

The basic problem is this: we assume users will use things the way we intend them to. This never works out in the real world, because users are going to use wrenches as hammers, cell phones as if they were high-end cameras, and many other things in ways they were never intended. To make matters worse, users often “infer” the way something works, and adapt their actions to get what they want based on their inference. For instance, everyone who drives “reverse-engineers” the road in their head, thinking about what the maximum safe speed might be, etc. Social media users do the same thing when posting or reading through their timelines, developing novel and interesting ideas about how these things work that have no bearing on reality.

As folks who work in the world of networks, we often “reverse-engineer” a vendor product in much the same way drivers “reverse-engineer” roads and social media users “reverse-engineer” the news feed—we observe how it works in some circumstances, we read some of the documentation, we infer how it must work based on the information we have, and then we design around how we think it works. Sometimes this is a result of abstraction—the vendor has saved us from learning all the “technical details” to make our lives easier. And sometimes abstraction does make our lives easier—but sometimes abstraction makes our lives harder.

I’m reminded of a time I was working with a cable team to bring a wind speed/direction system back up. The system in question relied on several miles of 12c12 cable across which a low voltage signal was driven off a generator attached to an impeller. The folks working on the cable could “see” power flowing on the meter after their repair, so why wouldn’t it work?

In some cases, then, our belief about how these things work is completely wrong, and we end up designing precisely the wrong thing, or doing precisely the wrong thing to bring a failed network back on-line.

Folks involved in networks face this on the other side of the equation, as well—we supply application developers and business users with a set of abstractions they don’t really need to understand. In using them, however, they develop “folk theories” about how a network works, coming to conclusions that are often counter-productive to what they are trying to get done. The person in the airline lounge who tells you to reboot your system to see if the WiFi will work doesn’t really understand what the problem is, they just know “this worked once before, so maybe it will work now.”

There is nothing wrong per se with this kind of “reverse-engineering”—we’re going to encounter it every time we abstract things, and abstracting things is necessary to scale. On the other hand, we’re supposed to be the “engineer in the middle”—the person who knows how to relate to the vendor and the user, bridging the gap between product and service. That’s how we add value.

There are some places, like with vendor-supplied gear, where we are dealing with an abstraction we simply cannot rip the lid off. There are many times when we cannot learn the “innards” because there are only 24 hours in a day—you cannot learn everything that needs to be learned in the available timeframe—and there are times, as a human, when you need to back off and “do something else.” But… there are times when you really need to know what “lies beneath the abstraction”—how things really work.

I suspect the times when understanding “how it really works” would be helpful are very common—and that we would all live in a world with a little less vendor hype during the day, and a lot less panic during the night, if we put a little more priority on learning how networks work.

What is the best way to build a large-scale network—in two words? Ask ten networking folks (engineers, designers, or whatever else), and you’re likely to get the same answer from at least nine: clean abstractions. They might not say the word abstraction, of course; instead, they might say words like build things in modules, using summarization and aggregation to divide the modules up. Or they might say make certain to reduce the failure domain to the smallest you possibly can everywhere you can. Or they might say use hierarchical design. These answers are, however, variants of the single word: abstraction.

This response came to mind when I was reading an article on clean code this last week (it’s amazing how often software architecture overlaps with network architecture):

Once we learn how to create abstractions, it is tempting to get high on that ability, and pull abstractions out of thin air whenever we see repetitive code. After a few years of coding, we see repetition everywhere — and abstracting is our new superpower. If someone tells us that abstraction is a virtue, we’ll eat it. And we’ll start judging other people for not worshipping “cleanliness”.

I have been teaching network design for many, many years. I co-authored my first book on network design, Advanced IP Network Design, with Don Slice and Alvaro Retana; it was published in 1999, and it typically takes about a year to write a book, so we probably started working on it in the middle of 1998. The entire object of that book was to teach hierarchical network design, which relies on modularization through aggregation and summarization to separate complexity from complexity (though I didn’t really use this wording until many years later) in order to break up failure domains.

It has been twenty-two years since Don, Alvaro, and I wrote that book—and hierarchical network design is still as relevant today as it was then. But in the last 22 years, I think I’ve learned just a little more about network design.

Among the things I’ve picked up in those 22 years is this one: if you haven’t found the tradeoffs, you haven’t looked hard enough. Or perhaps: there is no such thing as a free lunch. Abstraction is a superpower, and it can make your network a lot cleaner, even when you’re using it correctly (not using it to paper over complexity). But building the perfectly clean network can mean reducing the agility of the design to the point of fragility. For instance, in the article linked above, Dan Abramov notes changing requirements made his “clean revision” of the code much more complex—a classic sign of fragility.

Perhaps an example would be helpful here. If you think of RIP as a link state protocol with summarization (abstraction of topology) at every hop, given you understand how link state and distance-vector protocols work, you can probably quickly grasp what you have gained by summarizing at every hop—and what you have lost.
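
To make the information loss in that analogy concrete, here is a minimal sketch in Python contrasting what a link-state database carries with what a RIP-style, hop-by-hop update carries. The three-router chain and the prefix are made up for illustration; this is not any real protocol implementation.

```python
# A made-up three-router chain: A - B - C.
# In a link-state protocol, every router learns the full topology
# (who connects to whom) and can compute any path itself.
link_state_db = {
    "A": [("B", 1)],
    "B": [("A", 1), ("C", 1)],
    "C": [("B", 1)],
}

def rip_update(neighbor_routes: dict, link_cost: int) -> dict:
    """What a RIP-style router passes along: just prefix and cost.

    The topology behind each route is summarized away at every hop."""
    return {prefix: cost + link_cost for prefix, cost in neighbor_routes.items()}

# C advertises its connected prefix; by the time A hears about it,
# only a number remains -- the topology has been abstracted out.
c_routes = {"10.3.0.0/24": 0}
b_routes = rip_update(c_routes, 1)   # B hears: 10.3.0.0/24 at cost 1
a_routes = rip_update(b_routes, 1)   # A hears: 10.3.0.0/24 at cost 2
```

What A gains is a tiny routing table and no topology database to maintain; what A loses is any ability to reason about the path behind the cost—exactly the tradeoff summarization at every hop implies.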

You should still use abstraction to break up failure domains. You should still use abstraction to separate complexity from complexity. But you should use abstraction like you would any other tool. You should decide the best places and times to use abstraction after understanding the whole system.

For instance—a lot of people really insist on aggregating routing information in their data center fabric, especially in the underlay control plane. Why? The underlay is a constrained routing domain with known properties. Aggregation in this environment can cause routing black holes and unpredictable traffic flow behavior—both of which require added complexity to “work around.” If there is another solution available, it might be best to use it.
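
To see the black hole concretely, here is a small illustrative sketch using Python’s standard ipaddress module. All the prefixes are invented: a border router keeps advertising an aggregate while one more-specific inside it is down, so traffic for the failed prefix is still attracted to the aggregate and then dropped.

```python
import ipaddress

# The border router advertises one aggregate for the whole pod...
advertised_aggregate = ipaddress.ip_network("10.1.0.0/22")

# ...but one of the /24s behind it (10.1.2.0/24) is currently down.
reachable_inside_pod = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

def forwarded_or_blackholed(dest: str) -> str:
    """Classify what happens to traffic arriving at the border router."""
    addr = ipaddress.ip_address(dest)
    if addr not in advertised_aggregate:
        return "not-attracted"   # traffic never comes here at all
    if any(addr in p for p in reachable_inside_pod):
        return "forwarded"       # a more-specific route still exists
    return "blackholed"          # drawn in by the aggregate, then dropped
```

For example, `forwarded_or_blackholed("10.1.2.5")` lands in the aggregate but matches no reachable more-specific—the black hole the paragraph above warns about.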

At the same time, I see a lot of people insisting BGP is the only option for data center underlays, or that it is the simplest option because you can use a single protocol for the underlay and overlay. This, in my opinion, is wrong, as well—because it does not properly separate two different parts of the network, each with their own purpose, into separate failure domains.

Rather than looking at a network and saying, “we can abstract here, so we should abstract here,” you should look at a network and say, “what are the modules here, and what purposes do they serve?” Once you know that, you can start thinking about when and where abstraction makes sense.

To paraphrase Dan, don’t be a clean network design zealot. Clean network design is not a goal. It’s a good guide when you don’t understand the network; such guides are often useful, but they are guides rather than rules.

One of my pet peeves about the network “engineering” world is this: we do too little engineering and too much administration. What brought this to mind this week is an article about Margaret Hamilton and the time she spent working on software development for the Apollo space program, and the lessons she learned about software development there. To wit—

Engineering—back in 1969 as well as here in 2020—carries a whole set of associated values with it, and one of the most important is the necessity of proofing for disaster before human usage. You don’t “fail fast” when building a bridge: You ensure the bridge works first.

Sounds simple in theory—but it is not in practice.

Let’s take, as an example, replacing some of the capacity in your data center designed on a rather traditional two-layer hierarchy of aggregation and core. If you’ve built your network with a decent modular design, you buy enough new routers (or switches—but let’s use routers here) to build out a new aggregation module, the additional firewalls and other middleboxes you need, and the additional line cards to scale the core up. You unit test everything you can in the lab, understanding that you will not be able to fully test in the production network until you arrange a maintenance window. If you’re automating things, you build (and potentially test) the scripts—if you are smart, you will test these scripts in a virtual environment before using them.

You arrange the maintenance window, install the hardware, and … run the scripts. If it works, you go to bed, take a long nap, and get back to work doing “normal maintenance stuff” the next day. Of course, it rarely works, so you preposition some energy bars, make certain you have daycare plans, and put the vendor’s tech support number on speed dial.

What’s wrong with this picture? Well, many things, but primarily: this is not engineering. Was there any thought put into how to test beyond the individual unit level? Is there any way to test realistic traffic flows while connecting the new module to the network without impacting the rest of the network’s operation? Is there any real rollback plan in case things go wrong? Can there be?

In “modern” network design, none of these things tend to exist because they cannot exist. They cannot exist because we have not truly learned to do design life-cycles or truly modular designs. In the software world, if you don’t do modular design, it’s either because you didn’t think it through, or because you thought it through and decided the trade-off just wasn’t worth it. In the networking world, we play around the edges of resilient, modular designs, but networking folks don’t tend to know the underlying technologies—and how they work—well enough to understand how to correctly divide a problem into modules, or how to define the interfaces between those modules.

Let’s consider the same example, but with some engineering principles applied. Instead of a traditional two-layer hierarchy, you have a single-SKU spine and leaf fabric with clearly defined separation between the fabric and pods, clearly defined underlay and overlay protocols, etc. Now you can build a pod and test it against a “fake fabric” before attaching it to the production fabric, including any required automation. Then you can connect the pod to the production fabric and bring up just the underlay protocol, testing the entire underlay before pushing the overlay out to the edge. Then you can push the overlay to the edge and test that before putting any workload on the new pod. Then you can test fake load on the new pod before pushing production traffic onto the pod…

Each of these tests, other than the initial test against a lab environment, can take place on the production network with little or no risk to the entire system. You’re not physically modifying current hardware (except plugging in new cables!), so it’s easy to roll changes back. You know the lower layer parts work before putting the higher layer parts in place. Because the testing happens on the real network, these are canaries rather than traditional “certification” style tests. Because you have real modularization, you can fail fast without causing major harm to any system. Because you are doing things in stages, you can build tests that determine clean and correct operation before moving to the next stage.
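
The staged, fail-fast progression described above can be sketched as a simple pipeline. The stage names and checks here are purely illustrative assumptions—not any particular product’s workflow—but they capture the point: each stage must pass before the next one runs, so a failure stops early, before production traffic is at risk.

```python
def bring_up_pod(checks):
    """Run staged checks in order; stop at the first failure.

    `checks` is a list of (stage_name, check_fn) pairs, where check_fn
    returns True on success. Returns the stages that passed and the
    stage that failed (or None if everything passed)."""
    passed = []
    for name, check in checks:
        if not check():
            return passed, name   # fail fast: never reach later stages
        passed.append(name)
    return passed, None

# Illustrative stages, mirroring the progression in the text; the
# lambdas stand in for real tests, with one pretend failure.
stages = [
    ("lab-test",        lambda: True),
    ("underlay-up",     lambda: True),
    ("overlay-at-edge", lambda: False),  # pretend the overlay check fails
    ("fake-load",       lambda: True),
    ("production",      lambda: True),
]
```

Here the bring-up stops at the overlay check, so fake load and production traffic are never touched—failing fast without failing foolishly.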

This is an engineered solution—thought has been put into proper modules, how those modules connect, what information is carried across those modules, etc. Doing this sort of work requires knowing more than how to configure—or automate—a set of protocols based on what a vendor tells you to do. Doing this sort of work requires understanding what failure looks like at each point in the cycle and deciding whether to fail out or fix it.

It may not meet the “formal” process mathematicians might prefer, but neither is it the “move fast and break stuff” attitude many see in “the Valley.” It is fail fast, but not fail foolishly. And it’s where we need to move to retain the title of “engineer” and not lose the confidence of the businesses who pay us to build networks that work.

How many 9’s is your network? How about your service provider’s? Now, to ask the not-so-obvious question—why do you care? Does the number of 9’s actually describe the reliability of the network? According to Jeffrey Mogul and John Wilkes, nines are not enough. The question is—while this paper was written for commercial relationships and cloud providers, is it something you can apply to running your own network? Let’s dive into the meat of the paper and find out.

While 5 9’s is normally given as a form of Service Level Agreement (SLA), there are two other measures of reliability a network operator needs to consider—the Service Level Objective (SLO), and the Service Level Indicator (SLI). The SLO defines a set of expectations about the level of service; internal SLOs define “trigger points” where actions should be taken to prevent an external SLO from failing. For instance, if the external SLO says no more than 2% of the traffic will be dropped on this link, the internal SLO might say if more than 1% of the traffic on this link is dropped, you need to act. The SLA, on the other hand, says if more than 2% of the traffic on this link is dropped, the operator will rebate (some amount) to the customer. The SLI says this is how I am going to measure the percentage of packets dropped on this link.
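
As a rough sketch of how these three fit together, here is a toy SLI (measured drop rate) checked against the internal and external thresholds from the example above. The function names, return values, and thresholds are my own invention for illustration, not anything from the paper.

```python
def drop_rate(packets_sent: int, packets_dropped: int) -> float:
    """SLI: the measured fraction of packets dropped on a link."""
    if packets_sent == 0:
        return 0.0
    return packets_dropped / packets_sent

INTERNAL_SLO = 0.01  # act if more than 1% of traffic is dropped
EXTERNAL_SLO = 0.02  # the SLA owes a rebate past 2% dropped

def evaluate_link(sent: int, dropped: int) -> str:
    """Compare the SLI against the two SLO thresholds."""
    rate = drop_rate(sent, dropped)
    if rate > EXTERNAL_SLO:
        return "sla-violation"   # the customer-facing promise failed
    if rate > INTERNAL_SLO:
        return "act-now"         # internal trigger: fix it before the SLA fails
    return "ok"
```

Even this toy version surfaces the hard questions in the next paragraph: over what interval do you count `sent` and `dropped`, and can your telemetry actually produce those numbers reliably?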

Splitting these three concepts apart helps reveal what is wrong with the entire 5 9’s way of thinking, because it enables you to ask questions like—can my telemetry system measure and report on the amount of traffic dropped on this link? Across what interval should this SLI apply? If I combine all the SLIs across my entire network, what does the monitoring system need to look like? Can I support the false positives likely to occur with such a monitoring system?

These questions might be obvious, of course, but there are more non-obvious ones, as well. For instance—how do my internal and external SLOs correlate to my SLIs? Measuring the amount of traffic dropped on a link is pretty simple (in theory). Measuring something like “this application will not perform at less than 50% capacity because of network traffic” is going to be much, much harder.

The point Mogul and Wilkes make in this paper is that we need to rethink the way we write SLOs and their resulting SLAs to be more realistic—in particular, we need to think about whether or not the SLIs we can actually measure and act on can cash the SLO and SLA checks we’re writing. This means we probably need to expose more, rather than less, of the complexity of the network itself—even though this cuts against the grain of the current move towards abstracting the network down to “ports and packets.” To some degree, the consumer of networking services is going to need to be more informed if we are to build realistic SLAs that can be written and kept.

How does this apply to the “average enterprise network engineer?” At first glance, it might seem like this paper is strongly oriented towards service providers, since there are definite contracts, products, etc., in play. If you squint your eyes, though, you can see how this would apply to the rest of the world. The implicit promise you make to an application developer or owner that their application will, in fact, run on the network with little or no performance degradation is, after all, an SLO. Your yearly review examining how well the network has met the needs of the organization is an SLA of sorts.

The kind of thinking represented here, if applied within an organization, could turn the conversation about whether to out- or in-source on its head. Rather than talking about the 5 9’s some cloud provider is going to offer, it opens up discussions about how and what to measure, even within the cloud service, to understand the performance being offered, and how more specific and nuanced results can be measured against a fuller picture of value added.

This is a short paper—but well worth reading and considering.

If you haven’t found the tradeoffs, you haven’t looked hard enough. Something I say rather often—as Eyvonne would say, a “Russism.” Fair enough, and it’s easy enough to say “if you haven’t found the tradeoffs, you haven’t looked hard enough,” but what does it mean, exactly? How do you apply this to the everyday world of designing, deploying, operating, and troubleshooting networks?

Humans tend to extremes in their thoughts. In many cases, we end up considering everything a zero-sum game, where any gain on the part of someone else means an immediate and opposite loss on my part. In others, we end up thinking we are going to get a free lunch. The reality is there is no such thing as a free lunch, and while there are situations that are a zero-sum game, not all situations are. What we need is a way to “cut the middle” to realistically appraise each situation and realistically decide what the tradeoffs might be.

This is where the state/optimization/surface (SOS) model comes into play. You’ll find this model described in several of my books alongside some thoughts on complexity theory (see the second chapter here, for instance, or here), but I don’t spend a lot of time discussing how to apply this concept. The answer lies in the intersection between looking for tradeoffs and the SOS model.

TL;DR version: the SOS model tells you where you should look for tradeoffs.

Take the time-worn example of route aggregation, which improves the operation of a network by reducing the “blast radius” of changes in reachability. When aggregation is combined with summarization (as is almost always the intent), it reduces the “blast radius” for changes in the network topology as well. The way aggregation and summarization reduce the “blast radius” is simple: if you define a failure domain as the set of devices which must somehow react to a change in the network (the correct way to define a failure domain, by the way), then aggregation and summarization reduce the failure domain by hiding changes in one part of the network from devices in some other part of the network.
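
A small sketch using Python’s standard ipaddress module may help; the prefixes are made up, but it shows the mechanism: a configured aggregate hides a more-specific change from every device beyond the aggregation point, which is exactly how the failure domain shrinks.

```python
import ipaddress

# Four contiguous /24s inside one module of the network.
module_prefixes = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# What the module advertises outward: the four /24s collapse to one /22.
aggregate = list(ipaddress.collapse_addresses(module_prefixes))

# Now 10.1.2.0/24 flaps inside the module...
remaining = [p for p in module_prefixes if str(p) != "10.1.2.0/24"]

# ...but every surviving prefix is still covered by the same aggregate,
# so devices outside the module see no change at all: they are no longer
# part of the failure domain for this event.
still_covered = all(
    any(p.subnet_of(a) for a in aggregate) for p in remaining
)
```

The same mechanism is also where the tradeoff hides: because the outside world only ever sees the /22, it cannot react to anything that happens to the /24s behind it, for better or worse.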

Note: the depth of the failure domain is relevant, as well, but not often discussed; this is related to the depth of an interaction surface, but since this is merely a blog post . . .

According to SOS, route aggregation (and topology summarization) is a form of abstraction, which means it is a way of controlling state. If we control state, we should see a corresponding tradeoff in interaction surfaces, and a corresponding tradeoff in some form of optimization. Given these two pointers, we can search for the tradeoffs. Let’s start with interaction surfaces.

Observe aggregation is normally manually configured; this is an interaction surface. The human-to-device interaction surface now needs to account for the additional work of designing, configuring, maintaining, and troubleshooting around aggregation—these things add complexity to the network. Further, the routing protocol must also be designed to support aggregation and summarization, so the design of the protocol must also be more complex. This added complexity is often going to come in the form of . . . additional interaction surfaces, such as the not-so-stubby external to standard external conversion in OSPF, or something similar.

Now let’s consider optimization. Controlling failure domains allows you to build larger, more stable networks—this is an increase in optimization. At the same time, aggregation removes information from the control plane, which can cause some traffic to take a suboptimal path (if you want examples of this, look at the books referenced above). Traffic taking a suboptimal path is a decrease in optimization. Finally, building larger networks means you are also building a more complex network—so we can see the increase in complexity here, as well.

Experience is often useful in helping you have more specific places to look for these sorts of things, of course. If you understand the underlying problems and solutions (hint, hint), you will know where to look more quickly. If you understand common implementations and the weak points of each of those implementations, you will be able to quickly pinpoint an implementation’s weak points. History might not repeat itself, but it certainly rhymes.

I have spent many years building networks, protocols, and software. I have never found a situation where the SOS model, combined with a solid knowledge of the underlying problems and solutions (or perhaps the technologies and implementations used to solve these problems), has led me astray in being able to quickly find the tradeoffs so I could see, and then analyze, them.

Raise your hand if you think moving to platform as a service or infrastructure as a service is all about saving money. Raise it if you think moving to “the cloud” is all about increasing business agility and flexibility.

Put your hand down. You’re wrong.

Before going any further, let me clarify things a bit. You’ll notice I did not say software as a service above—for good reason. Move email to the cloud? Why not? Word processing? Sure, word processing is (relatively) a commodity service (though I’m always amazed at the number of people who say “word processor x stinks,” opting to learn complex command sets to “solve the problem,” without first consulting a user manual to see if they can customize “word processor x” to meet their needs).

What about supporting business-specific, or business-critical, applications? You know, the ones you’ve hired in-house developers to create and curate?

Will you save money by moving these applications to a platform as a service? There is, of course, some efficiency to be gained. It is cheaper for a large-scale manufacturer of potato chips to make a bag of chips than for you to cook them in your own home. They have access to specialized slicers, fryers, chemists, and even special potatoes (with more starch than the ones you can buy in a grocery store). Does this necessarily mean that buying potato chips in a bag is always cheaper? In other words, does the manufacturer pass all these savings on to you, the consumer? To ask the question is to know the answer.

And once you’ve turned making all your potato chips over to the professionals, getting rid of the equipment needed to make them, and letting the skill of making good potato chips atrophy, what is going to happen to the price? Yep, thought so.

This is not to say cost is not a factor. Rather, the cost of supporting customized applications on the cloud or local infrastructure needs to be evaluated on a case-by-case basis—either might be cheaper than the other, and the cost of both will change over time.

Does using the cloud afford you more business flexibility? Sometimes, yes. And sometimes, no. Again, the flexibility benefit normally comes from “business agnostic” kinds of flexibility. The kind of flexibility you need to run your business efficiently may, or may not, be the same as that of the majority of other businesses. Moving your business to another cloud provider is not always as simple as it initially seems.

The cost and flexibility benefit come from relatively customer-agnostic parts of the business models. To that extent, you rely more on them than they rely on you. Yes, you can vote with your feet if the mickey is taken, but if we’re honest, this kind of supply is almost as inelastic as your old IT service deal. There are few realistic options for supply at scale, and the act of reversing out of a big contract, selecting a new supplier, and making the operational switch can bleed any foreseeable benefits out of a change—something all parties in the procurement process know too well.

So… saving money is sometimes a real reason to outsource things. In some situations, flexibility or agility is going to be a factor. But… there is a third factor I have not mentioned yet—probably the most important, but almost never discussed. Risk aversion.

Let’s be honest. For the last twenty years we network engineers have specialized in building extremely complex systems and formulating the excuses required when things don’t go right. We’ve specialized in saying “yes” to every requirement (or even wish) because we think that by saying “yes” we will become indispensable. Rather than building platforms on which the business can operate, we’ve built artisanal, complex pets that must be handled carefully lest they turn into beasts that devour time and money. You know, like the person who tries to replicate store-bought chips by purchasing expensive fryers and potatoes, and ends up just making a mess out of the kitchen?

If you want to fully understand your infrastructure, and the real risk of complexity, you need to ask about risk, money, and flexibility—all three. When designing a network, or modifying things to deploy a new service onto an existing network, you need to think about risk as well as cost and flexibility.

How do you manage risk? Sarah Clarke, in the article I quoted above, gives us a few places to start (which I’ve modified to fit the network engineering world). First, ask the question about risk. Don’t just ask “how much money is this going to cost or save,” ask “what risk is being averted or managed here?” You can’t ever think the problem through if you don’t ever ask the question. Second, ask how you are going to assess the solution against risk, money, and flexibility. How will you know if moving in a particular direction worked? Third, build out clear demarcation points. This is about both the modules within the system and the responsibilities around them.

Finally, have an escalation plan. Know what you are going to do when things go wrong, and when you are going to do it. Think about how you can back out of a situation entirely. What are the alternatives? What does it take to get there? You can’t really “unmake” decisions, but you can come to a point where you realize you need to make a different decision. Know what that point is, and at least have the information on hand to know what decision you should make when you get there.

But first, ask the question. Risk aversion drives many more decisions than you might think.

If you are still looking for a good resolution for 2020 (I know, it’s a bit late), you can’t go wrong with this one: this year, I will focus on making the networks and products I work on truly simpler. Now, before you pull Tom’s take out on me—

There are those that would say that what we’re doing is just hiding the complexity behind another layer of abstraction, which is a favorite saying of Russ White. I’d argue that we’re not hiding the complexity as much as we’re putting it back where it belongs – out of sight. We don’t need the added complexity for most operations.

Three things: First, complex solutions are always required for hard problems. If you’ve ever listened to me talk about complexity, you’ve probably seen this quote on a slide someplace—

[C]omplexity is most succinctly discussed in terms of functionality and its robustness. Specifically, we argue that complexity in highly organized systems arises primarily from design strategies intended to create robustness to uncertainty in their environments and component parts.

You cannot solve hard problems—complex problems—without complex solutions. In fact, a lot of the complexity we run into in our everyday lives is a result of saying “this is too complex, I’m going to build something simpler.” (Here I’m thinking of a blog post I read last year that said “when we were building containers, we looked at routing and realized how complex it was… so we invented something simpler… which, of course, turned out to be more complex than dynamic routing!”)

Second, abstraction can be used the right way to manage complexity, and it can be used the wrong way to obfuscate or mask complexity. The second great source of complexity and system failure in our world is that we don’t abstract complexity so much as we obfuscate it.

Third, abstraction is not a zero-sum game. If you haven’t found the tradeoffs, you haven’t looked hard enough. This is something expressed through the state/optimization/surface triangle, which you should know at this point.

Returning to the top of this post, the point is this: Using abstraction to manage complexity is fine. Obfuscation of complexity is not. Papering over complexity “just because I can” never solves the problem, any more than sweeping dirt under the rug, or papering over the old paint without bothering to fix the wall first.

We need to go beyond just figuring out how to make the user interface simpler, more “intent-driven,” automated, or whatever it is. We need to think of the network as a system, rather than as a collection of bits and bobs that we’ve thrown together across the years. We need to think about the modules horizontally and vertically, think about how they interact, understand how each piece works, understand how each abstraction leaks, and be able to ask hard questions.

For each module, we need to understand how things work well enough to ask is this the right place to divide these two modules? We should be willing to rethink our abstraction lines, the placement of modules, and how things fit together. Sometimes moving an abstraction point around can greatly simplify a design while increasing optimal behavior. Other times it’s worth it to reduce optimization to build a simpler mousetrap. But you cannot know the answer to this question until you ask it. If you’re sweeping complexity under the rug because… well, that’s where it belongs… then you are doing yourself and the organization you work for a disservice, plain and simple. Whatever you sweep under the rug of obfuscation will grow and multiply. You don’t want to be around when it crawls back out from under that rug.

For each module, we need to learn how to ask is this the right level and kind of abstraction? We need to learn to ask does the set of functions this module is doing really “hang together,” or is this just a bunch of cruft no-one could figure out what to do with, so they shoved it all in a black box and called it done?

Above all, we need to learn to look at the network as a system. I’ve been harping on this for so long, and yet I still don’t think people understand what I am saying a lot of times. So I guess I’ll just have to keep saying it. 😊

The problems networks are designed to solve are hard—therefore, networks are going to be complex. You cannot eliminate complexity, but you can learn to minimize and control it. Abstraction within a well-thought-out system is a valid and useful way to control complexity, and so is understanding how and where to create the modules at the edges of which abstraction can take place.

Don’t obfuscate. Think systemically, think about the tradeoffs, and abstract wisely.