Reaction: Do we really need a new Internet?

The other day several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out of the windows as we discussed various technical matters. All around us there were new buildings under construction, each with a tall tower crane anchored to the building in several places. We wondered how those cranes were erected, and considered how precise the building process seemed compared to the complete mess that building a network seems to be.

And then, this week, I ran across a couple of articles arguing that we need a new Internet. For instance—

What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn’t designed with healthcare, transport or energy grids in mind, to the extent it was ‘designed’ at all. Every “circle of death” watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future. —CircleID

So the Internet is broken. Completely. We need a new one.


First, I’d like to point out that much of what people complain about in terms of the Internet, such as the lack of security or the lack of privacy, is actually a matter of tradeoffs. You could choose a different set of tradeoffs, of course, but then you would get a different “Internet”—one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer. But the entire concept of a “new Internet” that supports everything we want it to support, in a way that has none of the flaws of the current one and no new flaws we have not thought about before—this is simply impossible.

So let’s leave that idea aside, and think about some of the other complaints.

The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users like to throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might be best solved, application developers just build applications without security at all, and say “let the network fix it.” At the same time, network engineers say either: “sure, I can give you perfect security, let me just install this firewall,” or “I don’t have anything to do with security, fix that in the application.” On the other end, users choose really horrible passwords, and blame the network for losing their credit card number, or say “just let me use my thumbprint,” without ever wondering where they are going to get a new one when their thumbprint has been compromised. Is this “fixable”? Sure, for some strong measure of security—but a “new Internet” isn’t going to fare any better than the current one unless people start talking to one another.

The Internet cannot scale. Well, that all depends on what you mean by “scale.” It seems pretty large to me, and it seems to be getting larger. The problem is that it is often harder to design scaling in than you might think; you often do not know what problems you are going to encounter until you actually encounter them. To think that we can just “apply some math” and make the problem go away shows a complete lack of historical understanding. What you need to do is build in the flexibility that allows you to overcome scaling issues as they arise, rather than assuming you can “fix” the problem once at the start and never worry about it again. The “foundation” analogy does not really work here; when you are building a structure, you have a good idea of what it will be used for, and how large it will be. You do not build a building today and then say, “hey, let’s add a library on the 40th floor with a million books, and then three large swimming pools and a new eatery on those four new floors we decided to stick on the top.” The foundation limits scaling as much as it ensures it; sometimes the foundation needs to be flexible, rather than fixed.

There have been too many protocol mistakes. Witness IPv6. Well, yes, there have been many protocol mistakes. For instance, IPv6. But the problem with IPv6 is not that we didn’t need it, not that there was no problem to solve, nor even that every decision made along the way was bad. Rather, the problem with IPv6 is that the technical community became fixated on Network Address Translators, effectively designing an entire protocol around eliminating a single problem. Narrow fixations always result in bad engineering solutions—it’s just a fact of life. What IPv6 did get right includes a larger address space, eliminating in-network fragmentation, and a few other things.

That IPv6 exists at all, and is even being deployed, shows the problem with the entire “the Internet is broken” line of thinking. It shows that the foundations of the Internet are flexible enough to take on a new protocol, and to fix problems up in the higher layers. The original design worked, in fact—parts and pieces can be replaced if we get something wrong. This is more valuable than all the ironclad promises of a perfect future Internet anyone can ever make.

We are missing a layer. This is grounded in the RINA model, which I like, and which I actually use in teaching networking far more than any other model. In fact, I consider the OSI model a historical curiosity, a byway that was probably useful for a time, but no longer is. But the RINA model implies a fixed number of layers, in even numbers. The argument, boiled down to its essential point, is that since we have seven, we must be wrong.

The problem with this argument is twofold. First, sometimes six layers is right, and at other times eight might be. Second, we do have another layer in the Internet model; it is just generally buried in the applications themselves. The network does not end with TCP, or even HTTP; it ends with the application. Applications often have their own flow control and error management embedded, if they need them. Some do not need them, so exposing all those layers, and forcing every application to use them all, would actually be a waste of resources.
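To make that concrete, here is a rough sketch (my own illustration, not anything drawn from RINA, OSI, or any particular protocol) of flow control embedded directly in an application: the sender bounds how much work it has in flight, and no lower layer needs to know about it. The window size, the queue standing in for the network, and the message names are all invented for the example.

```python
# A sketch only: flow control living inside the application itself. The sender
# keeps at most WINDOW messages outstanding and waits for the consumer to drain
# them before sending more. Window size and message names are invented.
import queue
import threading
import time

WINDOW = 4                                 # "credits" the receiving side grants
channel = queue.Queue(maxsize=WINDOW)      # a bounded queue stands in for the network

def producer(messages):
    for msg in messages:
        channel.put(msg)                   # blocks once all credits are used
        print("sent", msg)

def consumer(count):
    for _ in range(count):
        time.sleep(0.1)                    # a deliberately slow receiver
        print("processed", channel.get())

if __name__ == "__main__":
    msgs = [f"chunk-{i}" for i in range(10)]
    worker = threading.Thread(target=consumer, args=(len(msgs),))
    worker.start()
    producer(msgs)
    worker.join()
```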

The Internet assumes a flawed model of end-to-end connectivity. Specifically, that the network will never drop packets. Well, TCP does assume this, but TCP isn’t the only transport protocol on the network. There is also something called “UDP,” and there are others out there as well (at least the last time I looked). It’s not that the network doesn’t provide lossier services; it’s that most application developers have availed themselves of the one reliable service, whether or not their specific application needs it.
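And here is the other side of that choice, again as a rough sketch of my own rather than anyone’s real protocol: an application that deliberately uses UDP, the lossy service, and handles loss itself with a stop-and-wait retransmission loop. The server address, timeout, and retry count are made up for illustration; run against nothing, it will simply time out after a few retries. An application with tight timing requirements, such as real-time voice, would skip the retransmission entirely and just tolerate the loss; the point is that the choice belongs to the application.

```python
# A sketch only: an application that picks the lossy service (UDP) and does its
# own error management with a stop-and-wait retransmission loop. The address,
# timeout, and retry count below are invented for illustration.
import socket

SERVER = ("198.51.100.10", 9000)   # hypothetical server (TEST-NET-2 address)
TIMEOUT = 0.5                      # seconds to wait before retransmitting
MAX_TRIES = 5

def send_reliably(payload: bytes) -> bytes:
    """Send one datagram and wait for a reply, retransmitting on timeout.
    Here the application, not the network, decides how to handle loss."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    try:
        for _ in range(MAX_TRIES):
            sock.sendto(payload, SERVER)
            try:
                reply, _addr = sock.recvfrom(2048)
                return reply               # got an answer, so we are done
            except socket.timeout:
                continue                   # request (or reply) was lost; try again
        raise TimeoutError(f"no reply after {MAX_TRIES} attempts")
    finally:
        sock.close()

if __name__ == "__main__":
    print(send_reliably(b"hello"))
```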

The bottom line.

When I left San Francisco to fly home, 2nd Street was closed. Why? Because a piece of concrete had come loose on one of the buildings nearby, and seemed just about ready to fall to the street. On the way to the airport, the driver told me stories of several other buildings in the area that were problematic, some of which might need to be taken down and rebuilt. The image of the industrial building process, almost perfect every time, is an illusion. You can’t just “build a solid foundation” and then “build as high as you like.”

Sure, the Internet is broken. But anything we invent will, ultimately, be broken in some way or another. Sure, the IETF is broken, and so is open source, and so is… whatever we might invent next. We don’t need a new Internet; we need a little less ego, a lot less mudslinging, and a lot more communication. We don’t need the perfect fix; we need people who will seriously think about where the layers and lines are today, why they are there, and why and how we should change them. We don’t need grand designs; we need serious people who are seriously interested in working on fixing what we have, and people who are interested in being engineers, rather than console jockeys or system administrators.


  1. Dirk Schroetter on 20 February 2017 at 7:42 pm

    Hello Russ,

    yep, that pretty much sums it up. Just one more thing that might help us get out of this mess: “Systematic thinking” – the appreciation that “fixing” the Internet requires cooperative efforts across multiple layers and camps.
    Maybe a deeper appreciation of RFC 1925 (especially rules 4-6) might help. 😉

  2. Eduard Grasa on 21 February 2017 at 3:53 am

    “But the RINA model implies a fixed number of layers, in even numbers. The argument, boiled down to its essential point, is that since we have seven, we must be wrong.”

    This sentence is not correct. In the RINA model there is a single type of layer, which repeats as many times as needed by the network designers. The number of layers in any part of the network is decided by the network designers; no static number of layers should be imposed by the network architecture.

    “.. we need .. a lot more communication.” Agree 🙂

    • Russ on 23 February 2017 at 7:09 pm

      Actually, there are four functions in RINA. Each pair falls into a single layer, so there are always pairs of layers if you want to have a fully functional stack. Hence RINA always assumes there is an even number of layers — if you have 3, 5, 7, or 9, there is missing functionality someplace or another.



  3. John Day on 22 February 2017 at 11:24 pm

    Mr. Grasa’s comment is correct, and to go further, IPv6 was not necessary and neither were NATs. Actually, IP itself is a mistake. The Internet lacks a complete addressing architecture. loc/id is a false distinction and is merely another way to generate complexity. With a complete addressing architecture, the multihoming problem is solved for free. Router table size is reduced by a factor of 3–4, if not more, and mobility requires nothing special: no home routers, foreign routers, or tunnels. The Internet lost the Internet Layer around 1980 and fell back into the ITU model. The worst problem of all is the congestion control scheme, which is about the worst design choice one could make. The effectiveness of any congestion control strategy declines with increasing network diameter, and TCP maximizes that. By putting it in the Transport Layer (of all places), it thwarts a solution to QoS. And implicit notification makes it predatory. It is likely there is no way to back it out without great pain. At 7 or 8 major decision points, with the right way and the wrong way well known, the Internet has been consistent in always choosing the wrong way.

    You are a bit behind the times. In 1983, OSI merged the upper 3 layers. In fact, the OSI clueless test was: if someone implemented the upper 3 layers as separate layers, they were clueless. 😉 Actually, OSI didn’t lose a layer (see ISO 8648), so it wouldn’t have the addressing problems we see today; TP4 was a major advance over TCP, and they also laid the groundwork for application architecture. There were some problems forced on them by the phone companies* that needed to be fixed, but they could be fixed, unlike the problems the Internet has. It seems your understanding of both OSI and RINA needs a little work. I think you have a lot to learn.

    * That is another common misconception. OSI was not a phone company effort; it was started by US computer companies. Given the regulatory environment in 1980, the Europeans preferred to work with the phone companies rather than against them. That was a mistake, and the work suffered greatly for it.