Rule 11

Reaction: Issue a press release

Ladies and gentlemen, start your crystal balls—it is close to the end of the year, that favorite time of prognosticators and analysts everywhere to tell us what is going to be “hot” and “not” next year. But before you drop out of a good conversation with your family, or skip sitting around the dinner table eating one more piece of pie, let me ask—have you ever checked on last year’s predictions?

Here is a favorite of mine: “Books will soon be obsolete in schools.” So up to the minute, right? So in touch with the reality of today. Only it’s not. This is Thomas Edison in 1913. While I wasn’t alive back then to read the papers, I can assure you I’ve heard many other folks make the same prediction in the intervening years. The way these sorts of predictions normally work is this:

  • Choose a technology that seems directly related to an existing way of doing things. The current way of doing things, or the current technology, needs to be widespread, recognizable, and somehow seen as “fundamental.” In the modern networking world, routers would be an equivalent.
  • Choose a date that is just far enough ahead to seem plausible: not so far away that it reduces the impact of the prediction, yet far enough away that the prediction will be forgotten when the time actually comes.
  • Make the prediction in an off-handed way, burying it in some story about the technology, or about culture, so the focus of the story is the predicted result rather than the prediction itself.

With these elements in place, you can predict away. Of course there are some predictions, and some movements, that I think are important. I happen to think, for instance, that Geoff Huston is right about the future of transit providers—given the current state of the market, and assuming nothing else happens along the way to shift current trends. I happen to think the networking market is going to look different in ten years—and that the shift is going to be toward a clear division among hyperconverged, hyperscaled, and disaggregated networks.

But when it comes to technologies, there are two things I’m pretty certain about.

First, that there’s no point in trying to predict which technologies are going to “win” and “lose” in the next year or two. We’ve had forecasts of technology uptake for many years—and for many years all these forecasts have been wrong. Using an example given over at Hack Education—

In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” —Hack Education

Second, Rule 11 is still true (as is Ecclesiastes 1, in fact). No matter what happens in this world, genuinely new ideas that shift everything we know overnight come along only rarely.

Where does this leave us? Just about everything you hear about the next big thing in technology is probably wrong. There are exceptions, of course, but networking flows through cycles just as certainly as the four seasons. There will be centralization, and there will be decentralization. There will be complexity piled on to solve a hard problem, and then there will be simplification, when the solutions are decomposed and rethought.

Today we’re in a simplification phase. Segment routing is simpler than most other forms of traffic engineering. Bit Index Explicit Replication (BIER) is simpler than most other forms of multicast. The time for piling protocol on protocol, layer on layer, is over for the moment. It’s time to rethink the base in simpler terms.
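
To make the claim about simplicity a little more concrete, here is a minimal Python sketch of the difference in state models, assuming hypothetical node names and segment identifiers rather than anything from a real deployment. With RSVP-TE-style traffic engineering, every node along an engineered path has to hold per-path state; with segment routing, the ingress node pushes the path into the packet as a list of segments, and transit nodes only need to recognize their own segment identifiers.

```python
# A minimal sketch (not any vendor's implementation) contrasting the state
# model of RSVP-TE-style traffic engineering with segment routing. The node
# names, labels, and segment identifiers below are hypothetical.

# RSVP-TE style: every node along the engineered path holds per-path state.
rsvp_te_state = {
    "A": {"lsp-1": "push label 100, forward to B"},
    "B": {"lsp-1": "swap 100 -> 200, forward to C"},
    "C": {"lsp-1": "pop 200, deliver"},
}

# Segment routing style: the path lives in the packet as a segment list pushed
# at the ingress; transit nodes need only their own node SIDs, no per-path state.
packet = {"segments": [16002, 16003], "payload": "data"}  # hypothetical node SIDs

def sr_forward(packet, my_sid):
    """Consume the active segment if it identifies this node, then forward on."""
    if packet["segments"] and packet["segments"][0] == my_sid:
        packet["segments"].pop(0)
    return packet

# The packet steers itself: nodes B (SID 16002) and C (SID 16003) each consume
# their own segment in turn, with no per-path state configured on either node.
for sid in (16002, 16003):
    packet = sr_forward(packet, sid)

print(packet)  # {'segments': [], 'payload': 'data'}
```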

But there’s another side to this. Vendors will be vendors, of course, and vendors want to sell products. There will be grand claims, and demos, and large trade shows, and white papers written, and… Of the making of product announcements there is no end. To return to the post that inspired this reaction—

The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

In other words, Rule 11 slices through most of these product announcements the way a hot knife cuts through warm butter. But most engineers will treat each new announcement with great wonder and excitement. And in treating them thus, we are making the future of our world. If the definition of insanity is doing the same thing over and over again, each time expecting different results, what does that say about the world of network engineering?

Deep, heavy thoughts, I know, for a networking blog. But thoughts we should all ponder nonetheless—especially amid the flurry of press releases, analysis about what’s hot and what’s not, new certifications and skills promising us instant riches and fame, and the constant stream of new technologies—the firehose that awaits us every day of every week of every month of every year.

Layer 2 Routing—Haven’t we been here before?

We often think the entire Internet, as we know it, just popped out of “thin air,” somehow complete and whole, with all the pieces in place. In reality, there have been many side roads taken along the way, and many attempts to solve the problem of pushing the maximum amount of data across a wire. One of these side roads came to mind this last week, when I ran across this story—

Also on the virtualisation list is the customer premises equipment (CPE), and this deserves a little explanation. Obviously every premises needs some sort of physical connection to the network providing the services. What CPE virtualisation refers to is making the CPE as generic as possible. —The Register

Pulling layer 2 into the network to centralize the edge—where have I heard this before? Maybe it was all those years ago, when I was in TAC, and we used to support the Cisco 1001, or LEX—the LAN Extender. The promise then was the same as the promise now: a lightweight, easy-to-manage device that would relocate all the intelligence from the network edge into the access layer of the “mother ship,” where it could be properly managed.

Remember Rule 11? If you read this blog on a regular basis, you should.

But this time we have all the problems solved, right? We have SDNs to manage the policies, so we don’t need to carry all the broadcast traffic to the network core, and we have EVPNs and other neat technologies that allow us to actually forward layer 2 packets hop by hop. It’s not really routing, of course, because the layer 2 header isn’t being rewritten at every hop (the mark of true routing, no matter whether it’s done in software or hardware), but layer 2 routing is the closest I can come to the idea of a hybrid between routing and switching simpliciter.
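
Since this distinction carries the argument, here is a minimal Python sketch of it, using made-up MAC addresses and no real EVPN machinery: a routed hop writes a fresh layer 2 header (and decrements the TTL), while a switched hop carries the original header through untouched.

```python
# A minimal sketch of the distinction drawn above; the MAC addresses are
# hypothetical, and the "frame" is just a dictionary carrying both its layer 2
# header fields and its IP TTL for illustration.

def route_hop(frame, router_mac, next_hop_mac):
    """Layer 3 forwarding: a new layer 2 header on every hop, TTL decremented."""
    frame = dict(frame)  # copy, so the original is untouched
    frame["src_mac"] = router_mac
    frame["dst_mac"] = next_hop_mac
    frame["ttl"] -= 1
    return frame

def switch_hop(frame):
    """Layer 2 forwarding: the frame header is carried through unchanged."""
    return frame

frame = {"src_mac": "aa:aa", "dst_mac": "bb:bb", "ttl": 64, "payload": "data"}

routed = route_hop(frame, router_mac="cc:cc", next_hop_mac="dd:dd")
switched = switch_hop(frame)

print(routed["src_mac"], routed["dst_mac"], routed["ttl"])        # cc:cc dd:dd 63
print(switched["src_mac"], switched["dst_mac"], switched["ttl"])  # aa:aa bb:bb 64
```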

So why do we do this? Because somehow we’ve come to equate routing with complexity, particularly on the application side. Look again at the announcement I point to above—it promises the ability to add services without changing hardware, to chain services, and so on. What the article doesn’t say is why any of this requires layer 2 forwarding rather than layer 3 forwarding. Can we not add services without changing hardware with a plain IPv6 box? Can we not chain services?

Of course we can—but we’ve tied our applications to layer 2 forwarding, and now we must make our networks cooperate. I suspect we’re going to find, at some point, that we really haven’t solved all the problems. The CAP theorem is hard to beat in real life; it is just another manifestation of the complexity triangle that seems to be built into the fabric of reality.

But the thought of layer 2 routing—this seems somehow familiar as well, doesn’t it? I reached way back in my memory, and discovered that yes, we have indeed done this once before. It’s called CLNS, the Connectionless Network Service. In fact IS-IS, the most venerable of routing protocols in actual current use today, is grounded in the concept of host addresses on the front of packets that aren’t swapped out at every hop (see my recent IS-IS LiveLesson for more). IS-IS, within a flooding domain, is actually (in a real sense) layer 2 routing.
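
For the curious, here is a minimal sketch of what CLNS-style addressing looks like, using a commonly cited example NET rather than anything tied to a particular network. The point is that the address names the system itself, with the area in front and a selector byte on the end, and it rides the packet unchanged from hop to hop within the flooding domain.

```python
# A minimal sketch of CLNS-style addressing, using a commonly cited example
# NET rather than anything from the original post. The address identifies the
# system itself (the system ID), not an interface on a particular wire.

def parse_net(net):
    """Split an IS-IS NET into area, system ID, and NSEL.

    The last byte is the NSEL (00 on a router), the preceding six bytes are
    the system ID, and everything in front of that is the area address.
    """
    digits = net.replace(".", "")
    nsel = digits[-2:]
    system_id = digits[-14:-2]
    area = digits[:-14]
    return area, system_id, nsel

area, system_id, nsel = parse_net("49.0001.1921.6800.1001.00")
print(area)       # 490001
print(system_id)  # 192168001001
print(nsel)       # 00
```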

The more things change, the more they stay the same. Rule 11, indeed.

The question now is—or rather, the question we should be asking is—what did we know then that we no longer know?

Hence the power of thinking in the context of Rule 11.