Reaction: Complexity Sells

Over at IPSpace this last week, Ivan pointed to a paper by Dijkstra (and if you don’t know who that is, you need to learn a thing or two about the history of routing protocols—because history makes culture, and culture matters—or, as the tagline on this blog says, culture eats technology for breakfast). In this paper, Dijkstra points out some rather important things about computer science and programming that can be directly applied to the network engineering world. For instance, Ivan says—

People tend to forget that “doing away with the programmer” was COBOL’s major original objective. —Replace “programmer” with “networking engineer” and COBOL with SDN 😉

I was so fascinated with Ivan’s take on this paper—particularly because complexity is an area I find both interesting and useful in my everyday life as a designer—that I went and read the original article. You should, too.

I think Ivan’s observations are spot on, but I think it’s worthwhile to broaden them. From where I sit, after 25 years building and breaking networks, I agree that complexity sells—but it sells for two particular reasons. The first, as Dijkstra said all those years ago (in computer terms), is this:

Since the Romans have taught us “Simplex Veri Sigillum” —that is: simplicity is the hallmark of truth— we should know better, but complexity continues to have a morbid attraction. When you give for an academic audience a lecture that is crystal clear from alpha to omega, your audience feels cheated and leaves the lecture hall commenting to each other: “That was rather trivial, wasn’t it?” The sore truth is that complexity sells better.

This bit is the fault of engineers. A vendor brings us a product they’ve thrown together from the remaining bits of the last ten projects, and promises it will do “great things,” and we love it. The more knobs to turn, the more buttons to push, the more layers of protocols on top of protocols, the more discarded loose ends sitting around to study and understand, the better. To put it in more frank terms, the more it makes us feel like some sort of Gnostic priest who has achieved some inner knowledge no one else could possibly attain, the happier we are. This isn’t an indictment of engineers, by the way; it is an indictment of humanity at large, from religion to politics to social “science” to, yes, engineering. Some folks have even told me that “the point of the style of an RFC isn’t to make it possible to read the thing, but rather to restrict the ‘real knowledge of how it works’ to a select small group of folks.”

But this is only half the problem. If this were all there were, network engineers would still be forming clubs and parking their cars in their driveways (their garages being filled with half-completed neat things that could never be practically used). The other half is that people buy this stuff—wholesale, retail, and in every other tail you can imagine. Businesses truly believe that the next thing, the next vendor product, the next networking technology, is really going to solve “everything.”

There is, of course, some reality to the situation. The next “big thing” really could be “just around the corner.” Of course, once you have turned several thousand corners, you begin to suspect it is not, but, well, it could be, right? Just so.

How do we solve this?

Do we start treating our networks like cattle, and stop treating them like snowflakes? I’m not certain this is the whole answer. My reasoning is simple: business processes and information technology are intrinsically linked. How many times have you been told, “I can’t do that, because the computer won’t let me?” More times than I’d like to know, right? If networks are to be built to support businesses, and businesses must have at least some “snowflakiness” to survive (if no business is a special snowflake, then there’s no particular reason to buy anything from anyone, right?), then… what is the implication of this line of thinking?

But does this mean we need to just give up and give in to the complexity of “all networks are special snowflakes?” No, down this path lies madness.

What we need to learn to do is to separate the snowflakiness from the non-snowflakiness, standardize the bits that aren’t special, and layer the parts that are special on top (somehow). What we need to do is go back to basics, build protocols that do simple things, and build complex and unique things on top of, rather than into, them.
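As a loose illustration of this layering principle (every name here is hypothetical, invented for this sketch, and not from any real protocol or product): keep the delivery primitive dumb and standardized, and express the “snowflake” behavior as separate, composable policies stacked on top, rather than baked into the primitive itself.

```python
# Illustrative sketch only: a generic, "boring" delivery primitive,
# with business-specific behavior layered on top of it.
from typing import Callable, List

def deliver(payload: str, destination: str) -> str:
    """The standardized, non-special part: it just moves bytes."""
    return f"delivered {payload!r} to {destination}"

# The "snowflake" lives on top, as composable policy functions,
# not inside deliver() itself.
Policy = Callable[[str], str]

def with_policies(policies: List[Policy], payload: str, destination: str) -> str:
    """Apply each business-specific policy, then use the plain primitive."""
    for policy in policies:
        payload = policy(payload)
    return deliver(payload, destination)

# Two hypothetical business-specific policies:
def compress(p: str) -> str:
    return p.replace("  ", " ")

def tag_for_audit(p: str) -> str:
    return f"[audit]{p}"

result = with_policies([compress, tag_for_audit], "hello  world", "10.0.0.1")
print(result)  # delivered '[audit]hello world' to 10.0.0.1
```

The point of the sketch is only this: `deliver()` never changes, no matter how many snowflake policies a business adds or removes, so the complexity stays contained in the layer above it.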

What we cannot do is forget that complexity is real, and we need to learn to manage it. What we must not do is continue to think we can play in the land of dragons forever, and not get burnt.

So, what’s the ultimate point of this long-winded rant? It’s time we went back to basics, stopped thinking software defined networking is going to save us from the dragons, started thinking about how to build networks that do the right things in the right places, and really thought hard about how to separate the snowflakes that really do exist from the more common, and plainer, chunks of ice falling among them.

We’re not going to solve this one by just tossing it over the cubicle wall to the coder sitting next door, folks.

4 Comments

  1. Mark 27 June 2016 at 10:36 pm

    So my view is, evolution follows a path and that path is curved.

    For example, let’s take the relatively new term of underlay networks. I suspect most readers of this blog are familiar.

    The plumbing is still complex, often springs leaks (global scale) and costs an arm and a leg to get it fixed/deployed.

    Enter the overlay network, which uses the existing plumbing efficiently, avoids leaks, delivers your packets with QoS-like characteristics, at a fraction of the price, and across a multitude of fixed and wireless media.

    As network engineers we should try to always reduce complexity and melt the snowflakes in our designs.

    Today’s customers need simplicity they can use. CSPs need a business they can build beyond the complexity of old; give them a simple OTT they can use.

    The more a CFO can understand the more he’s likely to spend…

    • Russ 28 June 2016 at 11:11 am

      Mark — thanks for stopping by and commenting!

      Enter the overlay network, which uses the existing plumbing efficiently, avoids leaks, delivers your packets with QoS-like characteristics, at a fraction of the price, and across a multitude of fixed and wireless media.

      My biggest response is going to be to this—how do “overlay networks” do all these wonderful things? Has the physical network actually gone away entirely? Can we live without it? Wasn’t the original point of IP to provide end-to-end service over any sort of media type—and are you saying IP has failed, so the only solution is a new overlay on top of… IP? Overlay networks don’t “spring leaks,” and they resolve all the leaks “sprung” by the underlying networks?

      I don’t buy any of this… Overlays are a form of complexity. By covering up the old with “new paint on top,” we’re really not solving any problems. The rust will still out, in the end.

      As network engineers we should try to always reduce complexity and melt the snowflakes in our designs.

      My take on this is — we don’t need to get rid of snowflakes, just control their scope and impact. If you have no snowflakes, then you (ultimately) have no business model that’s unique from anyone else in the world. Rather, I would argue snowflakes are fine, so long as they are clearly contained, and less than some percentage of the entire network (either from a layering perspective, or from a topological perspective, or…). For instance, it’s okay to have a special/homegrown network management process and tools, if that’s what gives you competitive advantage. What’s not okay is to have that set of tools built on one vendor platform so the “snowflake” nature of things goes “all the way down.”

      🙂

      Russ

  2. Rob Claxton 30 June 2016 at 2:52 pm

    I suppose the holy grail of networking is a homogeneous network where only a few people need to configure a “few” settings and everything just starts to work… Whatever planet this is on, I still prefer the one we are on. Last time I checked, a network still needs routers, switches, servers and WIRES to connect them all. Maybe Google’s new TransAtlantic pipeline to Japan could have been wireless?… no. The point is this: only a good network engineer with years of experience on “hardware” and a firm grasp of the many complexities of routing protocols can truly make ‘underlays with overlays’ sing.

    Is anyone else seeing parallels with the medical industry as network engineering matures? Where’s our Dr. designation?

    Rob

    • Russ 30 June 2016 at 3:36 pm

      Rob — thanks for stopping by and commenting!

      IMHO, where we need to go is—complexity where it adds value, and “homogenize the rest.” The problem is, as always, that where complexity adds value is different for every business, business model, etc. The original point of Ivan’s article is that complexity is best for vendors at every point in the network, because they can sell all the stuff that goes with complexity, and because they can sell broad-based products to a wide range of customers that act like a knife with every possible blade sticking out from every corner of the thing. Simple knives tend to serve fewer purposes—it might be more complex to carry ten knives to be an effective cook, but the one knife to replace them all might just not work, and might be really hard to sharpen, etc. Someplace in here we’ve forgotten these sorts of principles… Snowflakes aren’t bad, and they’re really pretty when they’re about two feet deep.

      Well, so long as you can sit inside and drink a nice hot beverage, and not worry about shoveling the stuff.

      🙂

      Russ
