Engineering Lessons, IPv6 Edition

Yes, we really are going to reach a point where the RIRs will run out of IPv4 addresses. As this chart from Geoff’s blog shows —

[Chart: IPv4 address exhaustion, from Geoff Huston's blog]

Why am I thinking about this? Because I ran across a really good article by Geoff Huston over at potaroo about the state of the IPv4 address pool at APNIC. The article is a must read, so stop right here, right click on this link, open it in a new tab, read it, and then come back. I promise this blog isn’t going anyplace while you’re over on Geoff’s site. But my point isn’t to ring the alarm bells on the IPv4 situation. Rather, I’m more interested in how we got here in the first place. Specifically, why has it taken so long for the networking industry to adopt IPv6?

Inertia is a tempting answer, but I’m not certain I buy it as the sole reason for the lack of deployment. IPv6 was developed some fifteen years ago; since then we’ve deployed tons of new protocols, tons of new networking gear, and lots of other things. Remember what a cell phone looked like fifteen years ago? In fact, if we had started fifteen years ago with simple dual mode devices, we could easily be fully deployed in IPv6 today. As it is, we’re really just starting now.
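For anyone who hasn’t worked with a dual mode (dual-stack) host, the idea is simply a device that speaks both protocols at once: it tries IPv6 where it’s available and quietly falls back to IPv4 where it isn’t. Here is a minimal sketch in Python of what that looks like on the client side; the hostname in the usage comment is just a placeholder, not a real endpoint.

    import socket

    def connect_dual_stack(host, port, timeout=5.0):
        """Open a TCP connection, preferring IPv6 but falling back to IPv4."""
        # Ask the resolver for every address family the host supports.
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        # Sort IPv6 results ahead of IPv4 so v6 gets the first attempt.
        infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)

        last_error = None
        for family, socktype, proto, _canon, sockaddr in infos:
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                sock.connect(sockaddr)
                return sock            # first address that works wins
            except OSError as err:
                last_error = err       # remember the failure, try the next one
        raise last_error or OSError("no usable addresses for %s" % host)

    # Hypothetical usage; "example.com" is a placeholder host.
    # conn = connect_dual_stack("example.com", 80)

Real clients do something smarter, of course (Happy Eyeballs races the two address families in parallel rather than trying them in order), but even this crude version shows how little a host has to change to be “dual mode.”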

We didn’t see a need? Perhaps, but that’s difficult to maintain, as well. When IPv6 was originally developed (remember — fifteen years ago), we all knew there was an addressing problem. I suspect there’s another reason.

I suspect that IPv6, in its original form, tried to boil the ocean, and the result might have been too much change, too fast, for the networking community to handle in such a fundamental area of the stack. What engineering lessons might we draw from the long time scales around IPv6 deployment?

For those who weren’t in the industry those many years ago, there were several drivers behind IPv6 beyond just the need for more address space. For instance, the entire world exploded with “no more NATs.” In fact, many engineers, to this day, still dislike NATs, and see IPv6 as a “solution” to the NAT “problem.” Mailing lists roiled with long discussions about NAT, security by obscurity (still waiting for someone who strongly believes that obscurity is useless to step onto a modern battlefield with a state of the art armor system painted bright orange), and a thousand other topics. You see, ARP really isn’t all that efficient, so let’s do something a little different and create an entirely new neighbor discovery system. And then there’s that whole fragmentation issue we’ve been dealing with for IPv4 for all these years. And…

Part of the reason it’s taken so long to deploy IPv6, I think, is because it’s not just about expanding the address space. IPv6, for various reasons, has tried to address every potential failing ever found in IPv4.

Don’t miss my point here. The design and engineering decisions made for IPv6 are generally solid. But all of us — and I include myself here — tend to focus too much on building that practically perfect protocol, rather than on building something that is “good enough,” with stretchy spots where obvious changes can be made in the future.

In this specific case, we might have passed over one question too easily — how easy will this be to deploy in the real world? I’m not saying there weren’t discussions around this very topic, but the general answer was, “we have fifteen years to deploy this stuff.” And, yet… Here we are fifteen years later, and we’re still trying to convince people to deploy it. A bit of honest reflection might be useful just about now.

I’m not saying we shouldn’t deploy IPv6. Rather, I’m saying we should try to take a lesson from this — a lesson in engineering process. We needed, and need, IPv6. We probably didn’t need the NAT wars. We needed, and need, IPv6. But we probably didn’t need the wars over fragmentation.

What we, as engineers, tend to do is build solutions that are complete, total, self contained, and practically perfect. What we, as engineers, should do is build platforms that are flexible, usable, and able to support a lot of different needs. Being a perfectionist isn’t just something you say during the interview in answer to that one dumb question about your greatest weakness. Sometimes you — we, really — do need to learn to stop what we’re doing, take a look around, and ask — why are we doing this?

A Neutral ‘Net?

This week I’m going to step off the beaten path for a moment and talk about ‘net neutrality. It appears we are about to enter a new phase in the life of the Internet — at least in the United States — as the FCC is out and about implying we should expect a ruling on Title II regulation of the ‘net within the United States in the near future. What the FCC’s chairman has said is —

  • The Internet would be reclassified as a Title II communication service, which means the portions within the United States would fall under the same regulations as telephone service.
  • “comma, but…” The ‘net infrastructure in the United States won’t be subject to all the rules of Title II regulation.
  • Specifically mentioned is the last mile: “there will be no rate regulation, no tariffs, no last-mile unbundling.”

A lot of digital ink has been spilled over how the proposed regulations will impact investment — for instance, AT&T has made a somewhat veiled threat that if the regulations don’t go the way they’d like to see them go, there will be no further investment in last mile broadband throughout the US (as if there were a lot of investment today — most of the US is a “bandwidth desert” of single vendor territories, and the vendors often treat you poorly). But while these are concerns of mine, I have a deeper concern, one that’s not really being voiced in the wide world of technology.

Here’s the problem I see — the “comma, but…” part of this game. The Internet developed in a world that was encouraged by the government, through direct investment in the technologies involved, by buying the first few truly large scale networks, by encouraging and/or tolerating monopolies in the last mile, by unbundling companies that seemed to be “too big,” and many other things. Internet infrastructure, in other words, hasn’t ever really been a “free market,” in the Adam Smith sense of the term. Content and connectivity have, however, largely been a “free market,” to the point that IP is in danger of becoming a dial tone to the various “over the top” options that are available (we can have a separate discussion about the end to end principle, and the intentionality of being dial tone).

We might not have understood the rules, but at least there were rules. What we seem to be going into now is a world where there are no rules, except the rules made up by a small group of “experts,” who decide, well, whatever they decide, however they decide it. The process will be “transparent,” in much the same way the IETF process is “transparent” — if you can spend almost all your time paying attention to the high volume of proposals, ideas, and trial balloons. In other words, the process will be transparent for those who have the time, and can put the effort into “paying attention.”

And who will that be, precisely? Well, as always, it will be companies big enough to carry the load of paying people to pay attention. Which means, in the end, that we may well just be seeing yet another instance of rent seeking, of setting things as they exist “in stone,” to benefit the current players against anyone who might come along and want to challenge the status quo. The wishy-washiness of the statements on the part of those speaking for the FCC lends credence to this view of things — “we’ll implement the parts of the regulations we think fit, a determination that might happen to change over time.”

And here we reach a point made by Ayn Rand (no, I’m not a Rand-head, but I still agree with many points Ms Rand made over the course of her work). There is no difference between having an overly broad set of selectively enforced regulations in place and simply allowing a small group of people to do what they like on a day-to-day basis. There is, in fact, a word for governments that don’t live by the rule of law — you might think it’s harsh, but that word is tyranny.

So what bothers me about this isn’t so much the regulation itself — though the regulations outlined thus far indicate a clear preference for the status quo big players over real innovation by smaller players. It’s the way the regulations are being approached: “We’ll know the right regulations when we see them.”

Down this path lies regulation of content because “it’s offensive,” and gaming the system towards those who make the largest contributions, a vicious brew of political correctness and rent seeking on a grand scale.

And, in the end, this is one of the quickest ways to effect the obsolescence of the Internet as we know it. On the other hand, maybe that’s the point.