IPv6, DHCP, and Unintended Consequences

I ran into an interesting paper over at ERNW on the wide variety of options for assigning addresses, and providing DNS information, in IPv6. As always, this sort of thing started me thinking about the power of unintended consequences, particularly in the world of standardization. The authors of this paper noticed just how many different options are available for assigning addresses and providing DNS information in IPv6.

Alongside these various options, there are a number of flags that are supposed to tell the host which of these options should, and which shouldn’t, be used, prioritized, and so on. The problem, of course, is that many of these flags, and many of the options, are, well, optional, which means they may or may not be implemented across different versions of code and vendor products. Hence, combining various flags with various bits of information can have a seemingly random impact on the IPv6 addresses and DNS information different hosts actually use. Perhaps the most illustrative chart is this one—

Each operating system tested seems to act somewhat differently when presented with all possible flags and all possible sources of information. As the paper notes, this can open major security holes. For instance, if an attacker simply brings up a DHCPv6 server on your network, and you’re not already using DHCPv6, the attacker can position itself as a “man in the middle” for most DNS queries.

What lessons can we, as engineers and network operators, take away from this sort of thing?

First, standards bodies aren’t perfect. Standards bodies are, after all, made up of people (and far fewer people than you might imagine), and people are not perfect. Not only are people imperfect, they are often under pressures of various sorts that can lead to “less than optimal” decisions in many situations, particularly in the case of systems designed by a lot of different people, over a long stretch of time, with different pieces and parts built to solve particular problems (corner or edge cases), all while the designers are holding down day jobs.

Second, this means you need to be an engineer, even if you are relying on standards. In other words, don’t fall back on “but the RFC says…” as an excuse. Do the work of researching why “the RFC says,” find out what implementations actually do, and consider what the alternatives might be. Ultimately, if you call yourself an engineer, be one.

Third, always know what is going on on your network, and always try to account for negative possibilities, rather than just positive ones. I wonder how many times I have said, “but I didn’t deploy x, so I don’t need to think about how x interacts with my environment.” We never stop to ask whether not deploying x leaves us open to security holes or failure modes we have not even considered.

Unintended consequences are, after all, unintended, and hence “Out of sight, out of mind.” But out of sight, and even out of mind, definitely does not mean out of danger.

snaproute Go BGP Code Dive (14): First Steps in Processing an Update

In the last post on this topic, we found the tail of the update chain. The actual event appears to be processed here—

case BGPEventUpdateMsg:
  bgpMsg := data.(*packet.BGPMessage)

—which is found around line 734 of fsm.go. The second line of code in this snippet is interesting; it’s a little difficult to understand what it’s actually doing. There are three crucial elements to figuring out what is going on here—

:=, in Go, declares a new variable and assigns it a value in a single step. But what, precisely, is being assigned to bgpMsg from data?

The * (asterisk) is used to work with pointers. We’ve not talked about pointers before, so it’s worth spending just a moment with them. The illustration below will help a bit.

Each letter in the string “this is a string” is stored in a single memory location (this isn’t necessarily true, but let’s assume it is for this example). Further, each memory location has a location identifier, some number that says, “this is memory location x.” Because this memory locator is itself just a number, it can be assigned to a variable, which can then be treated as an object separate from the string itself.

This memory locator is called a pointer.

It makes sense that the locator is called a pointer, because it points to the string. The question that should pop up in your head right now is—”but wait, if each letter is stored in a different memory location, then which memory location does the pointer actually point to?” If you’re trying to describe the entire string, the pointer normally points to the first character in the string. You can, of course, also describe just part of the string by pointing to a memory location somewhere in the middle. For instance, you could point to just the part of the string “is a string” by finding the memory location of the second “i” and storing that location instead.
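A small sketch in Go makes this concrete. Go strings are immutable and don’t allow taking the address of an individual character, so this example uses a byte slice as a stand-in for the string in the illustration:

```go
package main

import "fmt"

func main() {
	// Store "this is a string" as a byte slice, so we can take the
	// address of individual characters.
	s := []byte("this is a string")

	// A pointer to the first byte describes the whole sequence.
	first := &s[0]
	fmt.Println(string(*first)) // the character at the start: "t"

	// A pointer into the middle describes just a suffix; index 5 is
	// the second "i", the start of "is a string".
	middle := &s[5]
	fmt.Println(string(*middle)) // "i"
	fmt.Println(string(s[5:]))   // "is a string"
}
```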

How can you find the location of a string, or some other data structure? You place an & (ampersand) in front of it. So, if you do this—

my-pointer = &a-string

—then my-pointer holds the location of a-string. Now I have the pointer, but how do I get back to the value from the pointer? Like this—

a-string-copy = *my-pointer
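In real Go syntax, these two moves look like this (the variable names here mirror the pseudocode above, just rewritten as valid Go identifiers):

```go
package main

import "fmt"

func main() {
	// Take the address of a variable with &...
	aString := "this is a string"
	myPointer := &aString

	// ...and follow the pointer back to the value with *.
	aStringCopy := *myPointer
	fmt.Println(aStringCopy) // prints "this is a string"

	// The pointer and the value are separate objects: writing
	// through the pointer changes what the variable holds.
	*myPointer = "a new string"
	fmt.Println(aString) // prints "a new string"
}
```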

So the * follows the pointer and pulls out the data it points at, for assignment to another variable. There is one more piece of syntax to account for, though: the .() in this line is Go’s type assertion. The FSM passes event data around as a generic interface value, so before the code can treat that data as a BGP message, it must assert what type the data actually holds. In this case, then, this line of code—

bgpMsg := data.(*packet.BGPMessage)

—is doing two things:

  • asserting that data holds a pointer to a packet.BGPMessage
  • assigning that pointer to bgpMsg

In other words, this line isn’t copying the packet contents at all; it’s recovering a pointer to the message structure the BGP FSM built when the packet was received, so the message can be processed as an update. We need to look elsewhere for the code that actually processes messages from this structure—we will most likely find it when we start looking through ProcessUpdateMessage, which is where we will start next time.
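The type assertion pattern is worth seeing on its own. This is a hedged sketch—BGPMessage here is a stand-in type, not the real struct from snaproute’s packet package—showing both the single-result form used in fsm.go and the safer two-result form:

```go
package main

import "fmt"

// BGPMessage stands in for snaproute's packet.BGPMessage; the real
// struct carries header and body fields.
type BGPMessage struct {
	Type uint8
}

func main() {
	// The FSM hands event handlers a generic payload, typed as
	// interface{}.
	var data interface{} = &BGPMessage{Type: 2} // 2 = UPDATE

	// data.(*BGPMessage) says: "I claim data holds a *BGPMessage;
	// give me that pointer." No packet bytes are copied; bgpMsg
	// points at the same structure.
	bgpMsg := data.(*BGPMessage)
	fmt.Println(bgpMsg.Type) // prints 2

	// The single-result form panics if the claim is wrong; the
	// two-result form lets you check instead of crashing.
	if msg, ok := data.(*BGPMessage); ok {
		fmt.Println(msg.Type) // prints 2
	}
}
```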

Traffic Pattern Attacks: A Real Threat

Assume, for a moment, that you have a configuration something like this—


Some host, A, is sending queries to, and receiving responses from, a database at C. An observer, B, has access to the packets on the wire, but not to either the host or the server. All the information between the host and the server is encrypted. What can the observer, B, learn about the information being carried between the client and the server? Given the traffic is encrypted, you might think… “not very much.”

A recent research paper published at CCS ’16 in Vienna argues the observer could learn a lot more. In fact, based on just the patterns of traffic between the server and the client, given the database uses atomic operations and encrypts each record separately, it’s possible to infer the key used to query the database (not the cryptographic key). The paper can be found here. Specifically:

We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about N^4 queries, where N is the domain size.

What is interesting about this is that the attack infers a potentially useful piece of information from passive observation of encrypted data passing between a client and a server. To put this in more real-world terms, assume for a moment that A is your computer, and C is a web server that stores information about you, keyed to a nonce carried in a cookie on your local hard drive. The web site uses only encrypted sockets (TLS, let’s say). Some outside observer has access to your wifi connection, but not to your computer.

The nonce, in effect, acts like a key into the database. If the site wants to know the last five purchases you’ve made without you signing in, or the last five sites you’ve visited, it pulls the cookie off your computer and queries a database server that translates the cookie into the information required. If this all happens on your computer (a likely scenario, as most such information-gathering systems rely on locally executed code), then over enough observations—even if those observations are of encrypted traffic—the observer can infer the value of your cookie. Using the information gathered over these observations, the attacker could then retrieve the information stored about you in the database…
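The kind of leakage involved can be illustrated with a toy sketch. This is not the paper’s reconstruction algorithm, and the database contents here are invented for the example; it only shows the raw signal the attack exploits—different queries produce different traffic volumes even when every record is encrypted:

```go
package main

import "fmt"

func main() {
	// Records per value in a tiny database, indexed by the secret
	// attribute (values 1 through 4). These counts are made up.
	counts := map[int]int{1: 3, 2: 1, 3: 5, 4: 2}
	const recordSize = 64 // bytes per encrypted record (assumed constant)

	// The server answers range queries [lo, hi]; the passive
	// observer sees only the total bytes on the wire.
	observe := func(lo, hi int) int {
		total := 0
		for v := lo; v <= hi; v++ {
			total += counts[v] * recordSize
		}
		return total
	}

	// Two different queries, identical encryption, different volumes:
	fmt.Println(observe(1, 2)) // 4 records -> 256 bytes
	fmt.Println(observe(3, 4)) // 7 records -> 448 bytes
}
```

Over many queries, these volume differences are exactly what lets the paper’s adversary reconstruct the secret attribute of every record.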

The likelihood of such an attack actually working is mitigated by several factors. First, there need to be a lot of observations. Second, the key must not change across those transactions. Third, the transactions must be atomic (meaning each transaction stands alone, rather than a single transaction being spread across a number of packet exchanges).

At the same time, this type of research shows what is possible just by observing traffic passing over the wire between two devices—which means it is something to think about when considering cloud-based storage and database solutions, particularly for sensitive data.