Over at IPSpace this last week, Ivan pointed to a paper by Dijkstra (and if you don’t know who that is, you need to learn a thing or two about the history of routing protocols—because history makes culture, and culture matters—or, as the tagline on this blog says, culture eats technology for breakfast). In this paper, Dijkstra points out some rather important things about computer science and programming that can be directly applied to the network engineering world. For instance, Ivan says—
I was so fascinated by Ivan's take on this paper (particularly because complexity is an area I find interesting and very useful in my everyday life as a designer) that I went and read the original article. You should, too.
I think Ivan’s observations are spot on, but I think it’s worthwhile to broaden them. From where I sit, after 25 years building and breaking networks, I agree that complexity sells, and it sells for two particular reasons. The first, as Dijkstra said all those years ago (in computer terms), is this:
Since the Romans have taught us “Simplex Veri Sigillum” —that is: simplicity is the hallmark of truth— we should know better, but complexity continues to have a morbid attraction. When you give for an academic audience a lecture that is crystal clear from alpha to omega, your audience feels cheated and leaves the lecture hall commenting to each other: “That was rather trivial, wasn’t it?” The sore truth is that complexity sells better.
This bit is the fault of engineers. A vendor brings us a product they’ve thrown together from the remaining bits of the last ten projects, promises it will do “great things,” and we love it. The more knobs to turn, the more buttons to push, the more layers of protocols on top of protocols, the more discarded loose ends sitting around to study and understand, the better. To put it more frankly, the more it makes us feel like some sort of Gnostic priest who has achieved an inner knowledge no one else could possibly attain, the happier we are. This isn’t an indictment of engineers, by the way; it is an indictment of humanity at large, from religion to politics to social “science” to, yes, engineering. Some folks have even told me that “the point of the style of an RFC isn’t to make it possible to read the thing, but rather to restrict the ‘real knowledge of how it works’ to a select small group of folks.”
But this is only half the problem. If this were all there were to it, network engineers would still be forming clubs and parking their cars in their driveways (their garages being filled with half-completed neat things that could never be practically used). The other half is that people buy this stuff: wholesale, retail, and in every other tail you can imagine. Businesses truly believe that the next thing, the next vendor product, the next networking technology, is really going to solve “everything.”
There is, of course, some reality to the situation. The next “big thing” really could be “just around the corner.” Of course, once you have turned several thousand corners, you begin to suspect it is not, but, well, it could be, right? Just so.
How do we solve this?
Do we start treating our networks like cattle, and stop treating them like snowflakes? I’m not certain this is the whole answer. My reasoning is simple: business processes and information technology are intrinsically linked. How many times have you been told, “I can’t do that, because the computer won’t let me?” More times than I’d like to count, right? If networks are to be built to support businesses, and businesses must have at least some “snowflakiness” to survive (if no business is a special snowflake, then there’s no particular reason to buy anything from anyone, right?), then what is the implication of this line of thinking?
But does this mean we need to just give up and give in to the complexity of “all networks are special snowflakes?” No, down this path lies madness.
What we need to learn to do is separate the snowflakiness from the non-snowflakiness, standardize the bits that aren’t special, and layer the parts that are special on top (somehow). What we need to do is go back to basics: build protocols that do simple things, and build complex and unique things on top of, rather than into, them.
What we cannot do is forget that complexity is real, and we need to learn to manage it. What we must not do is continue to think we can play in the land of dragons forever, and not get burnt.
So, what’s the ultimate point of this long-winded rant? That it’s time we went back to basics, stopped thinking “software defined” is going to save us from the dragons, started thinking about how to build networks that do the right things in the right places, and really thought hard about how to separate the snowflakes that really do exist from the more common, and plainer, chunks of ice falling among them.
We’re not going to solve this one by just tossing it over the cubicle wall to the coder sitting next door, folks.