Securing BGP: A Case Study (6)

In my last post on securing BGP, I said—

Here I’m going to discuss the problem of a centralized versus distributed database to carry the information needed to secure BGP. There are, again, two elements to this problem: a set of purely technical issues, and a set of more business-related problems. The technical problems revolve around the CAP theorem, which deserves a treatment of its own; I’ll write a post on CAP next week and link it back to this series.

The CAP theorem post referenced above is here.

Before I dive into the technical issues, I want to return to the business issues for a moment. In a call this week on the topic of BGP security, someone pointed out that there is no difference between an advertisement in BGP asserting some piece of information (reachability or connectivity, take your pick), and an advertisement outside BGP asserting the same bit of information. The point of the question is this: if I can’t trust you to advertise the right thing in one setting, then why should I trust you to advertise the right thing in another? More specifically, if you’re using an automated system to build both advertisements, then both are likely to fail at the same time and in the same way.

First, this is an instance of how automation can create a system that is robust yet fragile—which leads directly back to complexity as it applies to computer networks. Remember—if you don’t see the tradeoff, then you’re not looking hard enough.

Second, this is an instance of separating trust in the originator from trust in the transit. Let’s put this in an example that might be more useful. When a credit card company sends you a new card, they send it in a sealed envelope. The sealed envelope doesn’t really do much in the way of security, as it can be opened by anyone along the way. What it does do, however (so long as the tamper resistance of the envelope is good enough), is inform the receiver whether or not the information received is the same as the information sent. The seal on the envelope cannot make any assertions about the truthfulness of the information contained in the envelope. If the credit card company is a scam, then the credit card in the envelope is still a scam even if it’s well sealed.
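To make the envelope analogy concrete, here is a minimal sketch in Python using an HMAC as the “seal.” The shared key, message text, and AS number are all hypothetical illustrations, not anything drawn from a real BGP security mechanism; the point is simply that verification proves the message arrived unchanged, while saying nothing about whether its contents are true.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for the envelope's seal.
KEY = b"shared-secret"

def seal(message: bytes) -> bytes:
    """Attach a tamper-evident 'seal' (an HMAC tag) to the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check only that the message was not altered in transit."""
    return hmac.compare_digest(seal(message), tag)

# A made-up assertion: the seal cannot tell us whether it is *true*.
msg = b"prefix 203.0.113.0/24 originates from AS 64500"
tag = seal(msg)

print(verify(msg, tag))                 # unaltered in transit
print(verify(b"tampered " + msg, tag))  # altered in transit
```

If AS 64500 is lying about originating that prefix, `verify` still returns true for the unmodified message; integrity protection and truthfulness are separate problems, which is exactly the distinction between trusting the transit and trusting the originator.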

There is only one way to ensure what’s in the envelope is true and useful information: having a trustworthy outsider observe the information, the sender, and the receiver, to make certain they’re all honest. But now we dive into philosophy pretty heavily, and this isn’t a philosophy blog (though I’ve often thought about changing my title to “network philosopher,” just because…).

But it’s worth considering two things about this problem, because the idea of having a third party watch to make certain everyone is honest is, itself, full of problems.

First, who watches the watchers? Pretty Good Privacy has always used the concept of a web of trust—this is actually a pretty interesting idea in the realm of BGP security, but one I don’t want to dive into right this moment (maybe in a later post).
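The web-of-trust idea can be sketched very simply: rather than one central watcher, each participant signs the keys of those it knows, and trust is established by finding a chain of signatures. The names and signature graph below are made up purely for illustration, and a real web of trust would add depth limits, revocation, and marginal versus full trust; this sketch only shows the core reachability idea.

```python
from collections import deque

# Hypothetical signature graph: an edge means "X has signed Y's key."
signatures = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
    "dave": set(),
}

def trusts(root: str, target: str) -> bool:
    """Trust the target if some chain of signatures leads from
    root to target (a breadth-first search over the graph)."""
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in signatures.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(trusts("alice", "carol"))  # chain exists: alice -> bob -> carol
print(trusts("alice", "dave"))   # no chain of signatures
```

Note that this only distributes the “who watches the watchers” problem; it doesn’t eliminate it, since every signer in the chain is itself a watcher someone must trust.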

Second, having some form of verification will necessarily cause any proposed system to rely on third parties to “make it go.” This seems to counter the ideal state of allowing an operator to run their business on locally available information as much as possible. From a business perspective, there needs to be a balance between obtaining information that can be trusted, and obtaining information in a way that allows the business to operate—particularly in its core realm—without relying on others.

Next time, I’ll get back to the interaction of the CAP theorem and BGP security, with a post around the problem of convergence speed.