Hedge 99: Centralization and Widespread Outages

Two things have been top of mind for those who watch the ‘net and global Internet policy—the increasing number of widespread outages, and the logical and physical centralization of the ‘net. How do these things relate to one another? Alban Kwan joins us to discuss the relationship between centralization and widespread outages. You can read Alban’s article on the topic here.


The Hedge 34: Andrew Alston and the IETF

Complaining about how slow the IETF is, or how single vendors dominate the standards process, is almost a by-game in the world of network engineering going back to the very beginning. It is one thing to complain; it is another to understand the structure of the problem and make practical suggestions about how to fix it. Join us at the Hedge as Andrew Alston, Tom Ammon, and Russ White reveal some of the issues, and brainstorm how to fix them.


Research: Legal Barriers to RPKI Deployment

Like most other problems in technology, securing the reachability (routing) information in the Internet core is as much or more of a people problem than it is a technology problem. While BGP security can never be perfect (in an imperfect world, the quest for perfection is often the cause of a good solution’s failure), there are several solutions which could be used to provide the information network operators need to determine whether they can trust a particular piece of routing information. For instance, graph overlays for path validation, or the RPKI system for origin validation. Solving the technical problem, however, only carries us a small way towards “solving the problem.”

One of the many ramifications of deploying a new system—one we do not often think about from a purely technology perspective—is the legal ramifications. Assume, for a moment, that some authority were to publicly validate that some address, such as 2001:db8:3e8:1210::/64, belongs to a particular entity, say bigbank, and that the AS number of this same entity is 65000. On receiving an update from a BGP peer, if you note the route to x:1210::/64 ends in AS 65000, you might think you are safe in using this path to reach destinations located in bigbank’s network.
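To make that check concrete, here is a minimal sketch of route-origin validation in Python, loosely following the valid/invalid/not-found outcomes used in RPKI-based origin validation. The ROA table and the helper function are illustrative only; the prefix and AS 65000 come from the example above, AS 64512 is an arbitrary stand-in for a hijacker, and a real validator works from signed RPKI data served by a validating cache rather than a hard-coded list.

```python
# Minimal sketch of RPKI-style route-origin validation (hypothetical data).
from ipaddress import ip_network

# Hypothetical ROAs: (prefix, maximum accepted prefix length, authorized origin AS)
roas = [
    (ip_network("2001:db8:3e8:1210::/64"), 64, 65000),
]

def validate_origin(prefix_str, origin_as):
    """Return 'valid', 'invalid', or 'not-found' for a received route."""
    prefix = ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if prefix.subnet_of(roa_prefix):          # a ROA covers this prefix
            covered = True
            if prefix.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    # Covered by a ROA but origin or length mismatched: invalid (possible hijack).
    # Not covered by any ROA at all: not-found, and the operator must decide what to trust.
    return "invalid" if covered else "not-found"

print(validate_origin("2001:db8:3e8:1210::/64", 65000))  # valid
print(validate_origin("2001:db8:3e8:1210::/64", 64512))  # invalid
```

Of course, a check like this is only as good as the data behind it.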

What if the route has been hijacked? What if the validator is wrong, and has misidentified—or been fooled into misidentifying—the connection between AS65000 and the x:1210::/64 route? What if, based on this information, critical financial information is transmitted to an end point which ultimately turns out to be an attacker, and this attacker uses this falsified routing information to steal millions (or billions) of dollars?

Yoo, Christopher S., and David A. Wishnick. 2019. “Lowering Legal Barriers to RPKI Adoption.” SSRN Scholarly Paper ID 3308619. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3309813.

Who is responsible? This legal question ultimately plays into the way numbering authorities allow the certificates they issue to be used. Numbering authorities—specifically ARIN, which is responsible for numbering throughout North America—do not want the RPKI data misused in a way that can leave them legally responsible for the results. Some background is helpful.

The RPKI data, in each region, is stored in a database; each RPKI object (essentially and loosely) contains an origin AS/IP address pair. These are signed using a private key and can be validated using the matching public key. Somehow the public key itself must be validated; ultimately, there is a chain, or hierarchy, of trust, leading to some sort of root. The trust anchor is described in a file called the Trust Anchor Locator, or TAL. ARIN wraps access to their TAL in a strong indemnification clause to protect themselves from the sort of situation described above (and others). Many companies, particularly in the United States, will not accept the legal contract involved without a thorough investigation of their own culpability in any given situation involving misrouted traffic. The result is that many companies simply do not use the data, and RPKI is not deployed.
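As a rough illustration of the hierarchy just described, the sketch below walks a chain of certificates from a ROA up to the trust anchor named in a TAL. The structures, names, and the signature check are placeholders rather than actual RPKI object formats or any particular validator’s API; the point is simply that every validation outcome ultimately hangs off the trust anchor, which is why the terms attached to ARIN’s TAL matter so much.

```python
# Rough sketch of walking an RPKI-style chain of trust.
# Placeholder structures only, not real RPKI object formats or a real crypto library.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cert:
    name: str
    public_key: str                     # placeholder for actual key material
    signed_by: Optional["Cert"] = None  # parent certificate; None for the trust anchor

def signature_checks_out(obj: Cert, parent_key: str) -> bool:
    # Placeholder: a real implementation cryptographically verifies obj's
    # signature using the parent's public key.
    return True

def validate_chain(roa_cert: Cert, trust_anchor: Cert) -> bool:
    """Walk from the ROA's certificate up to the configured trust anchor."""
    current = roa_cert
    while current.signed_by is not None:
        if not signature_checks_out(current, current.signed_by.public_key):
            return False
        current = current.signed_by
    # The chain is only as trustworthy as the anchor it ends at, which is
    # exactly where the TAL (and the legal terms around it) enters the picture.
    return current is trust_anchor

# Hypothetical hierarchy: trust anchor -> regional CA -> ROA certificate
anchor = Cert("trust-anchor", "key-ta")
regional = Cert("regional-ca", "key-ca", signed_by=anchor)
roa = Cert("roa-2001:db8:3e8:1210::/64-AS65000", "key-roa", signed_by=regional)
print(validate_chain(roa, anchor))  # True
```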

The essential point the paper makes is: is this clause really necessary? The authors make several arguments for removing the strict legal requirements around the use of the data in the TAL provided by ARIN. First, they argue the bounds of potential liability are uncertain, and will shift as the RPKI is more widely deployed. Second, they argue the situations in which harm can come from use of the RPKI data need to be more carefully framed and understood, along with how these kinds of legal issues have been handled in the past. To this end, the authors argue strict liability is not likely to be raised, and negligence liability can probably be mitigated. They offer an alternative mechanism, grounded in straight contract law, to limit ARIN’s liability in situations where the RPKI data is misused or incorrect.

Whether this paper will cause ARIN to rethink its legal position remains to be seen. At the same time, while these kinds of discussions often leave network engineers flat-out bored, the implications for the Internet are important. This is an excellent example of the intersection between technology and policy, a realm network operators and engineers need to pay more attention to.

Ossification and Fragmentation: The Once and Future ‘net

Mostafa Ammar, out of Georgia Tech (not my alma mater, but many of my engineering family are alumni there), recently posted an interesting paper titled The Service-Infrastructure Cycle, Ossification, and the Fragmentation of the Internet. I have argued elsewhere that we are seeing the fragmentation of the global Internet into multiple smaller pieces, primarily based on the centralization of content hosting combined with the rational economic decisions of the large-scale hosting services. The paper in hand takes a slightly different path to reach the same conclusion.

cross posted at CircleID

TL;DR

  • Networks are built based on a cycle of infrastructure modifications to support services
  • When new services are added, pressure builds to redesign the network to support these new services
  • Networks can ossify over time so they cannot be easily modified to support new services
  • This causes pressure, and eventually a more radical change, such as the fracturing of the network

The author begins by noting networks are designed to provide a set of services. Each design paradigm not only supports the services it was designed for, but also allows for some headroom, which allows users to deploy new, unanticipated services. Over time, as newer services are deployed, the requirements on the network change enough that the network must be redesigned.

This cycle, the service-infrastructure cycle, relies on a well-known process of deploying something that is “good enough,” which allows early feedback on what does and does not work, followed by quick refinement until the protocols and general design can support the services placed on the network. As an example, the author cites the deployment of unicast routing protocols. He marks the beginning of this process at 1962, when Prosser was first deployed, and its end at 1995, when BGPv4 was deployed. Across this time routing protocols were invented, deployed, and revised rapidly. Since around 1995, however—a period of over 20 years at this point—routing has not changed all that much. So there were some 33 years of rapid development, followed by what is now over 20 years of stability in the routing realm.

Ossification, for those not familiar with the term, is a form of hardening. Petrified wood is an ossified form of wood. An interesting property of petrified wood is that it is fragile; if you pound a piece of “natural” wood with a hammer, it dents, but does not shatter. Petrified, or ossified, wood shatters, like glass.

Multicast routing is held up as an opposite example. Based on experience with unicast routing, the designers of multicast attempted to “anticipate” the use cases, such that early iterations were clumsy, and failed to attain the kinds of deployment required to get the cycle of infrastructure and services started. Hence multicast routing has largely failed. In other words, multicast ossified too soon; the cycle of experience and experiment was cut short by the designers trying to anticipate use cases, rather than allowing them to grow over time.

Some further examples might be:

  • IETF drafts and RFCs were once short, and used few technical terms, in the sense of a term defined explicitly within the context of the RFC or system. Today RFCs are veritable books, and require a small dictionary to read.
  • BGP security, which is mentioned by the author as a victim of ossification, is actually another example of early ossification destroying the experiment/enhancement cycle. Early on, a group of researchers devised the “perfect” BGP security system (which is actually by no means perfect—it causes as many security problems as it resolves), and refused to budge once “perfection” had been reached. For the last twenty years, BGP security has not notably improved; the cycle of trying and changing things has been stopped this entire time.

There are weaknesses in this argument as well. It can be argued that the reason for the failure of widespread multicast is that the content just wasn’t there when multicast was first considered; in fact, multicast content still is not what people really want. The first “killer app” for multicast was replacing broadcast television over the Internet. What has developed instead is video on demand; multicast is just not compelling when everyone is watching something different whenever they want to.

The solution to this problem is novel: break the Internet up. Or rather, allow it to break up. The creation of a single network from many networks was a major milestone in the world of networking, allowing the open creation of new applications. If the Internet were not ossified through business relationships and the impossibility of making major changes in the protocols and infrastructure, it would be possible to undertake radical changes to support new challenges.

The new challenges offered include IoT, the need for content providers to have greater control over the quality of data transmission, and the unique service demands of new applications, particularly gaming. The result has been the flattening of the Internet, followed by the emergence of bypass networks—ultimately leading to the fragmentation of the Internet into many different networks.

Is the author correct? It seems the Internet is, in fact, becoming a group of networks loosely connected through IXPs and some transit providers. What will the impact be on network engineers? One likely result is deeper specialization in sets of technologies—the “enterprise/provider” divide that had almost disappeared in the last ten years may well show up as a divide between different kinds of providers. For operators who run a network that indirectly supports some other business goal (what we might call “enterprise”), the result will be a wide array of different ways of thinking about networks, and an expansion of technologies.

But one lesson engineers can certainly take away is this: the concept of agile must reach beyond the coding realm, and into the networking realm. There must be room “built in” to experiment, deploy, and enhance technologies over time. This means accepting and managing risk rather than avoiding it, and having a deeper understanding of how networks work and why they work that way, rather than the blind focus on configuration and deployment we currently teach.

Should We Stop Encryption? Can We?

  • It’s not like they’re asking for a back door for every device.
  • If the world goes dark through encryption, we’ll be back to the wild west!
  • After all, if it were your daughter who had been killed in a terrorist attack, you’d want the government to get to that information, too.

While sitting on a panel this last week, I heard all three reactions to the Apple versus FBI case. But none of these reactions ring true to me.

Let’s take the first one: no, they’re not asking for a back door for every device. Under the time-tested balance between privacy and government power, the specific point is that people have a reasonable expectation of privacy until they come under suspicion of wrongdoing. However, it’s very difficult to trust, in the current environment, that such power, once granted, won’t be broadened to every case, all the time. The division between privacy and justice before the law was supposed to be at the point of suspicion. That wall, however, has already been breached, so the argument now moves to “what information should the government be able to trawl through in order to find crimes?” They are asking for the power to break one phone in one situation, but that quickly becomes the power to break every phone all the time on the slimmest of suspicions (or no suspicion at all).

Essentially, hard cases make bad law (which is precisely why specific hard cases are chosen as a battering ram against specific laws).

The second one? Let’s reconsider exactly why the laws protect personal action from government snooping without reason. No-one is perfect. Hence, if you dig hard enough, especially in a world where the size of the code of law is measured in the hundreds of thousands of pages, and the Federal tax code alone is over 70,000 pages long, you will find something someone has done wrong at some point within the last few years.

Putting insane amounts of law together with insane amounts of power to investigate means that anyone can be prosecuted at any time for any reason someone with a uniform might like. Keeping your nose clean, in this situation, doesn’t mean not committing any crimes, as everyone does. Keeping your nose clean, in this situation, means not sticking your neck too far out politically, or making someone with the power to prosecute too angry. We do want to prevent a situation where criminals can run wild, but we don’t want to hand the government—any government—the power to prosecute anyone they like, as that’s just another form of the “wild west” we all say we want to prevent.

By the way, who is going to force every cryptographer in the world to hand over their back doors?

Even if the U.S. government prevails in its quest to compel Apple and other U.S. companies to give the authorities access to encrypted devices or messaging services when they have a warrant, such technology would still be widely available to terrorists and criminals, security analysts say. That’s because so many encrypted products are made by developers working in foreign countries or as part of open source projects, putting them outside the federal government’s reach. For example, instant messaging service Telegram — which offers users encrypted “secret chats” — is headquartered in Germany while encrypted voice call and text-messaging service Silent Phone is based out of Switzerland. And Signal, a popular app for encrypted voice calls and text messaging, is open source. -via the Washington Post

If we’re going to play another round of “the law abiding can be snagged for crimes real criminals can’t be snagged for,” count me out of the game.

The third one? I never trust an argument I can turn around so easily. Let me ask this—would you want breakable encryption on your daughter’s phone if she were being stalked by someone who happens to have a uniform? Oh, but no-one in uniform would do such a thing, because they’d be caught, and held accountable, and…

We tend to forget, all too easily, the reality of being human. As Solzhenitsyn says—

Gradually it was disclosed to me that the line separating good and evil passes not through states, nor between classes, nor between political parties either—but right through every human heart—and through all human hearts. This line shifts. Inside us, it oscillates with the years. And even within hearts overwhelmed by evil, one small bridgehead of good is retained. And even in the best of all hearts, there remains … an unuprooted small corner of evil. -The Gulag Archipelago.

Strong encryption is too important to play games with. As Tom says—

Weakening encryption to enable it to be easily overcome by brute force is asking for a huge Pandora’s box to be opened. Perhaps in the early nineties it was unthinkable for someone to be able to command enough compute resources to overcome large number theory. Today it’s not unheard of to have control over resources vast enough to reverse engineer simple problems in a matter of hours or days instead of weeks or years. Every time a new vulnerability comes out that uses vast computing power to break theory it weakens us all. -via Networking Nerd
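To put rough numbers on the brute-force point, here is a small back-of-the-envelope sketch. The guess rate is an arbitrary assumption chosen only to show how quickly key length dominates the arithmetic, not a claim about any real attacker’s capability.

```python
# Back-of-the-envelope brute-force times for different key lengths.
# The guess rate below is an arbitrary assumption for illustration only.
GUESSES_PER_SECOND = 1e12          # hypothetical attacker: a trillion keys per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for key_bits in (40, 56, 128, 256):
    keyspace = 2 ** key_bits
    # On average an attacker searches about half the keyspace before hitting the key.
    years = (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{key_bits}-bit key: ~{years:.2e} years at {GUESSES_PER_SECOND:.0e} guesses/sec")
```

Even allowing for enormous growth in available compute, the gap between a deliberately weakened key and a full-strength one is astronomical, which is the point of the quote: once the weakened version exists, everyone is exposed to it.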

A Neutral ‘Net?

This week I’m going to step off the beaten path for a moment and talk about ‘net neutrality. It appears we are about to enter a new phase in the life of the Internet — at least in the United States — as the FCC is out and about implying we should expect a ruling on Title II regulation of the ‘net within the United States in the near future. What the FCC’s chairman has said is —

  • The Internet would be reclassified as a Title II communication service, which means the portions within the United States would fall under the same regulations as telephone and television service.
  • “comma, but…” The ‘net infrastructure in the United States won’t be subject to all the rules of Title II regulation.
  • Specifically mentioned is the last mile, “there will be no rate regulation, no tariffs, no last-mile unbundling.”

A lot of digital ink has been spilled over how the proposed regulations will impact investment — for instance, AT&T has made a somewhat veiled threat that if the regulations don’t go the way they’d like to see them go, there will be no further investment in last mile broadband throughout the US (as if there were a lot of investment today — most of the US is a “bandwidth desert” of single vendor territories, and the vendors often treat you poorly). But while these are concerns of mine, I have a deeper concern, one that’s not really being voiced in the wide world of technology.

Here’s the problem I see — the “comma, but…” part of this game. The Internet developed in a world that was encouraged by the government, through direct investment in the technologies involved, by buying the first few truly large scale networks, by encouraging and/or tolerating monopolies in the last mile, by unbundling companies that seemed to be “too big,” and many other things. Internet infrastructure, in other words, hasn’t ever really been a “free market,” in the Adam Smith sense of the term. Content and connectivity have, however, largely been a “free market,” to the point that IP is in danger of becoming a dial tone to the various “over the top” options that are available (we can have a separate discussion about the end to end principle, and the intentionality of being dial tone).

We might not have understood the rules, but at least there were rules. What we seem to be going into now is a world where there are no rules, except the rules made up by a small group of “experts,” who decide, well, whatever they decide, however they decide it. The process will be “transparent,” in much the same way the IETF process is “transparent” — if you can spend almost all your time paying attention to the high volume of proposals, ideas, and trial balloons. In other words, the process will be transparent for those who have the time, and can put the effort into “paying attention.”

And who will that be, precisely? Well, as always, it will be companies big enough to carry the load of paying people to pay attention. Which means, in the end, that we may well just be seeing yet another instance of rent seeking, of setting things as they exist “in stone,” to benefit the current players against anyone who might come along and want to challenge the status quo. The wishy-washiness of the statements on the part of those speaking for the FCC lends credence to this view of things — “we’ll implement the parts of the regulations we think fit, a determination that might happen to change over time.”

And here we reach a point made by Ayn Rand (no, I’m not a Rand-head, but I still agree with many points Ms Rand made over the course of her work). There is no difference between having an overly broad set of selectively enforced regulations in place and simply allowing a small group of people to do what they like on a day-to-day basis. There is, in fact, a word for governments that don’t live by the rule of law — you might think it’s harsh, but that word is tyranny.

So what bothers me about this isn’t so much the regulation itself — though the regulations outlined thus far indicate a clear preference for the status quo big players over real innovation by smaller players. It’s the way the regulations are being approached. “We’ll know the right regulations when we see them.”

Down this path lies regulation of content because “it’s offensive,” and gaming the system towards those who make the largest contributions, a vicious brew of political correctness and rent seeking on a grand scale.

And, in the end, this is one of the quickest ways to effect the obsolescence of the Internet as we know it. On the other hand, maybe that’s the point.