The Hedge 82: Jared Smith and Route Poisoning

Intentionally poisoning BGP routes in the Default-Free Zone (DFZ) would always be a bad thing, right? Actually, this is a fairly common method to steer traffic flows away from and through specific autonomous systems. How does this work, how common is it, and who does this? Jared Smith joins us on this episode of the Hedge to discuss the technique, and his research into how frequently it is used.
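To make the technique concrete, here is a tiny illustrative sketch in Python (the ASNs are made-up private/documentation numbers, and this is not any real BGP implementation): BGP's loop-prevention rule forces an AS to discard any route whose AS_PATH already contains its own number, so an originator that deliberately inserts a third party's ASN into the path guarantees that AS will never carry the route.

    # Minimal sketch of why AS-path poisoning steers traffic: BGP loop
    # prevention makes an AS discard any route whose AS_PATH already
    # contains its own ASN. Purely illustrative, made-up ASNs.

    def accepts_route(local_asn: int, as_path: list[int]) -> bool:
        """Standard BGP loop prevention: reject a route that already lists us."""
        return local_asn not in as_path

    # AS 64500 originates a prefix with a normal path...
    normal_path = [64500]
    # ...and a "poisoned" path that deliberately includes AS 64666,
    # the AS the originator wants traffic to avoid.
    poisoned_path = [64500, 64666, 64500]

    for asn in (64501, 64666):
        print(asn, "accepts normal path:", accepts_route(asn, normal_path))
        print(asn, "accepts poisoned path:", accepts_route(asn, poisoned_path))
    # AS 64666 drops the poisoned route, so it never carries traffic toward
    # the originator's prefix, while other ASes still accept the (longer) path.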


If you haven’t found the tradeoffs …

One of the big movements in the networking world is disaggregation—splitting the control plane and other applications that make the network “go” from the hardware and the network operating system. This is, in fact, one of the movements I’ve been arguing in favor of for many years—and I’m not about to change my perspective on the topic. The arguments for splitting hardware from software and componentizing the software are strong enough that much of the 5G transition also involves the open RAN, a disaggregated stack for edge radio networks.

If you’ve been following my work for any amount of time, you know what comes next: If you haven’t found the tradeoffs, you haven’t looked hard enough.

This article on hardening Linux (you should go read it, I’ll wait ’til you get back) exposes some of the complexities and tradeoffs involved in disaggregation in the area of security. Some further thoughts on hardening Linux here, as well. Two points.

First, disaggregation has serious advantages, but disaggregation is also hard work. With a commercial implementation you wouldn’t necessarily think about these kinds of supply chain issues. This is an example of the state/optimization/surfaces tradeoff. You can optimize your network more fully using disaggregation techniques, but there are going to be more interaction surfaces, and there’s going to be more state to deal with (for instance, the security state on individual devices).

There are several items on this list that also illustrate the state/optimization/surfaces tradeoff. For instance, eBPF is on the list of things to disable … but eBPF is probably going to be crucial to many future network-facing implementations. Anything that’s useful is going to inherently create attack surfaces you need to deal with. Get over it.
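As a concrete example of the kind of state you now own in a disaggregated stack, here is a rough sketch, assuming a Linux host with the standard /proc/sys layout, of checking one of the knobs hardening guides typically point at: whether unprivileged processes may load eBPF programs.

    # Rough sketch: report whether unprivileged eBPF is disabled on this host.
    # Assumes a Linux kernel exposing the standard sysctl under /proc/sys.
    from pathlib import Path

    SYSCTL = Path("/proc/sys/kernel/unprivileged_bpf_disabled")

    def unprivileged_bpf_state() -> str:
        if not SYSCTL.exists():
            return "sysctl not present (older kernel, or BPF not built in)"
        value = SYSCTL.read_text().strip()
        meanings = {
            "0": "unprivileged eBPF is allowed",
            "1": "unprivileged eBPF is disabled (locked until reboot)",
            "2": "unprivileged eBPF is disabled (an admin can re-enable it)",
        }
        return meanings.get(value, f"unexpected value: {value}")

    if __name__ == "__main__":
        print(unprivileged_bpf_state())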

Second, just because you don’t think about these issues with a commercial implementation does not mean you don’t need to think about them—it just means they are opaque to you. Rather than trying to do the “right thing” yourself, you are outsourcing this work to a vendor. This is often a rational decision, and it may even be the right one, but it is still a decision. We often “bury” these kinds of decisions in our thinking, not realizing we are making tradeoffs.

Loose Lips

When I was in the military we were constantly drilled about the problem of Essential Elements of Friendly Information, or EEFIs. What are EEFIs? They are the small, seemingly harmless pieces of information that, taken together, reveal exactly what you do not want an adversary to know. If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about the operation can be guessed.

Given enough broad information, an adversary can often guess at details that you really do not want them to know.

What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to craft spear phishing attacks—

Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.

Going back further in time, during World War II, the same concern drove the famous “Loose lips sink ships” posters: a reminder that even casual talk can hand an adversary the clues needed to piece together a larger picture.

What does all of this mean for the average network engineer concerned about security? Probably nothing different from being slightly paranoid about your personal security in the first place (way too much modern security is driven by an anti-paranoid mindset, a topic for a future post). Things like—

  • Don’t let people know, either through your job description or anything else, that you hold the master passwords for your company, or that your account holds administrator rights.
  • Don’t always go to the same watering holes, and don’t talk about work while there to people you’ve just met, or even people you see there all the time.
  • Don’t talk about when and where you’re going on vacation. You can talk about it, and share pictures, once you’re back.

If an attacker knows you are going to be on vacation, it’s a lot easier to create a fake “emergency,” tempting you to give out information about accounts, people, and passwords you shouldn’t. Phishing is primarily a matter of social engineering rather than technical acumen. Countering social engineering is also a social skill, rather than a technical one. You can start by learning to just say less about what you are doing, when you are doing it, and who holds the keys to the kingdom.

The Insecurity of Ambiguous Standards

Why are networks so insecure?

One reason is we don’t take network security seriously. We just don’t think of the network as a serious target of attack. Or we think of security as a problem “over there,” something that exists in the application realm and needs to be solved by application developers. Or we think of the consequences of a network security breach as “well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we’re already taking care of our security issues.” Or we put our trust in the firewall, which sits there like some magic box solving all our problems.

The problem is that none of this is true. In any system where overall security is important, defense-in-depth is the key to building a secure system. No single part of the system bears the “primary responsibility” for “security.” The network is certainly a part of any defense-in-depth scheme that is going to work.

Which means network protocols need to be secure, at least in some sense, as well. I don’t mean “secure” in the sense of privacy—routes are not (generally) personally identifiable information (there are always exceptions, however). Rather, I mean “secure” in the sense that they cannot be easily attacked. On-the-wire encryption should prevent anyone from reading the contents of a packet or stream. Network devices like routers and switches should be difficult to break into, which means the protocols they run must be “secure” in the fuzzing sense—there should be no unexpected behavior when they receive unexpected input.
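To make “secure in the fuzzing sense” concrete, here is a minimal sketch in Python; the TLV parser and its format are invented purely for illustration. The point is that arbitrary input should only ever produce a parsed result or a well-defined rejection, never anything else.

    # A protocol parser should fail cleanly (or reject input) no matter what
    # bytes arrive. The TLV decoder below is made up for the example.
    import os

    class ParseError(Exception):
        """The one, expected, failure mode."""

    def parse_tlv(data: bytes) -> list[tuple[int, bytes]]:
        fields, i = [], 0
        while i < len(data):
            if i + 2 > len(data):
                raise ParseError("truncated header")
            t, length = data[i], data[i + 1]
            value = data[i + 2 : i + 2 + length]
            if len(value) != length:
                raise ParseError("truncated value")
            fields.append((t, value))
            i += 2 + length
        return fields

    # Crude fuzz loop: random input must produce either a result or a
    # ParseError, never a crash, hang, or some other surprise.
    for _ in range(10_000):
        blob = os.urandom(os.urandom(1)[0])
        try:
            parse_tlv(blob)
        except ParseError:
            pass  # expected, well-defined rejection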

I definitely do not mean path security of any sort. Making certain a packet (or update or whatever else) has followed a specified path is a chimera in packet-switched networks. It’s like trying to nail your choice of multicolored gelatin dessert to the wall. Packet-switched networks are designed to adapt to changes in the network by rerouting traffic. Get over it.

So why are protocols and network devices so insecure? I recently ran into an interesting piece of research that provides some of the answer. To wit—

Our research saw that ambiguous keywords SHOULD and MAY had the second highest number of occurrences across all RFCs. We’ve also seen that their intended meaning is only to be interpreted as such when written in uppercase (whereas often they are written in lowercase). In addition, around 40% of RFCs made no use of uppercase requirements level keywords. These observations point to inconsistency in use of these keywords, and possibly misunderstanding about their importance in a security context. We saw that RFCs relating to Session Initiation Protocol (SIP) made most use of ambiguous keywords, and had the most number of implementation flaws as seen across SIP-based CVEs. While not conclusive, this suggests that there may be some correlation between the level of ambiguity in RFCs and subsequent implementation security flaws.

In other words, ambiguous language leads to ambiguous implementations, which in turn lead to security flaws in protocols.
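If you want a feel for the kind of analysis the researchers describe, a rough sketch along these lines is easy to put together; the RFC chosen and the crude word counting here are illustrative assumptions, not the study’s actual methodology.

    # Count RFC 2119 requirement words in an RFC, in both uppercase
    # (normative) and lowercase (ambiguous) forms. RFC 3261 (SIP) is
    # used purely as an example.
    import re
    import urllib.request
    from collections import Counter

    KEYWORDS = ["MUST", "SHALL", "SHOULD", "MAY",
                "REQUIRED", "RECOMMENDED", "OPTIONAL"]

    def keyword_counts(text: str) -> Counter:
        counts = Counter()
        for kw in KEYWORDS:
            counts[kw] = len(re.findall(rf"\b{kw}\b", text))
            # Lowercase hits include ordinary English usage ("may",
            # "optional"), which is exactly the ambiguity the study points at.
            counts[kw.lower()] = len(re.findall(rf"\b{kw.lower()}\b", text))
        return counts

    if __name__ == "__main__":
        url = "https://www.rfc-editor.org/rfc/rfc3261.txt"
        text = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        for kw, n in keyword_counts(text).most_common():
            print(f"{kw:12} {n:5}")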

The solution for this situation might seem obvious—specify protocols more rigorously. But simple solutions rarely admit reality within their scope. It’s easy to say we should build more precise specifications—so why aren’t our specifications more precise?

In a word: politics.

For every RFC I’ve been involved in drafting, reviewing, or otherwise getting through the IETF, there are two reasons behind each MAY or SHOULD therein. The first is that someone has thought of a use case where an implementor or operator might want to do something that would otherwise not be allowed by a MUST. In these cases, everyone looks at the proposed MAY or SHOULD, thinks about how that flexibility might be useful, and then thinks … “this isn’t so bad, the available functionality is a good thing, and there’s no real problem I can see with making this a MAY or SHOULD.” In other words, we can think of possible worlds where someone might want to do something, so we allow them to do it. Call this the “freedom principle.”

The second reason is that multiple vendors have multiple customers who want to do things in different ways. When these vendors clash in the realm of standards, the result is often a set of interlocking MAYs and SHOULDs that allow the implementors to build solutions that are interoperable in the main, but not along the edges, and that satisfy each of their existing customers’ requirements. Call this the “big check principle.”

The problem in both situations is that the specification ends up with an open-ended set of MAYs and SHOULDs that can interlock in unforeseen ways, resulting in unanticipated variations between implementations that ultimately show up as security holes.

Okay—now that I’ve described the problem, what can you do about it? One thing is to simplify. Stop putting everything into a small set of protocols. The more functionality you pour into a protocol or system, the harder it is to secure. Complexity is the enemy of security (and privacy!).

As for the political problems, these are human-scale, which means they are larger than any network you can ever build—but I’ll ponder this more and get back to you if I come up with any answers.

The Hedge 66: Tyler McDaniel and BGP Peer Locking

Tyler McDaniel joins Eyvonne, Tom, and Russ to discuss a study of Peerlock, a BGP filtering technique designed to prevent route leaks in the global Internet. From the study abstract:

BGP route leaks frequently precipitate serious disruptions to interdomain routing. These incidents have plagued the Internet for decades while deployment and usability issues cripple efforts to mitigate the problem. Peerlock, introduced in 2016, addresses route leaks with a new approach. Peerlock enables filtering agreements between transit providers to protect their own networks without the need for broad cooperation or a trust infrastructure.
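The core idea is simple enough to sketch. The snippet below is a made-up illustration of the filtering rule, not code or data from the study: routes carrying a protected peer’s ASN are only accepted from that peer, or from neighbors it has explicitly allowed.

    # Illustrative Peerlock-style filter with invented (documentation) ASNs.
    PROTECTED = {
        64496: {64496},          # routes through 64496 may only arrive from 64496 itself
        64497: {64497, 64511},   # 64497 also allows its sibling 64511 to carry its routes
    }

    def peerlock_accept(neighbor_asn: int, as_path: list[int]) -> bool:
        for protected_asn, allowed_neighbors in PROTECTED.items():
            if protected_asn in as_path and neighbor_asn not in allowed_neighbors:
                return False  # looks like a leak: a protected AS reached us sideways
        return True

    # A route through protected AS 64496 arriving from some customer AS 64502
    # is rejected; the same path learned directly from 64496 is accepted.
    print(peerlock_accept(64502, [64502, 64496, 64500]))  # False
    print(peerlock_accept(64496, [64496, 64500]))         # True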


Current Work in BGP Security

I’ve been chasing BGP security since before the publication of the soBGP drafts, way back in the early 2000s (that’s almost 20 years for those who are math challenged). The most recent news largely centers on the RPKI, which is used to ensure the AS originating an advertisement is authorized to do so (or rather “owns” the resource or prefix). If you are not “up” on what the RPKI does, or how it works, you might find this old blog post useful—it’s actually the tenth post in a ten-part series on the topic of BGP security.
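For a rough mental model of what that origin check does, here is a heavily simplified sketch; the VRPs and ASNs are invented for illustration, and real validators implement the full RFC 6811 machinery.

    # Simplified RPKI route-origin validation. The VRPs below are made-up
    # examples using documentation prefixes and ASNs, not real ROA data.
    from ipaddress import ip_network

    # Verified ROA Payloads: (prefix, max_length, authorized origin ASN)
    VRPS = [
        (ip_network("192.0.2.0/24"), 24, 64500),
        (ip_network("198.51.100.0/24"), 24, 64501),
    ]

    def origin_validation(prefix: str, origin_asn: int) -> str:
        route = ip_network(prefix)
        covered = False
        for vrp_prefix, max_len, vrp_asn in VRPS:
            if route.subnet_of(vrp_prefix):
                covered = True
                if route.prefixlen <= max_len and origin_asn == vrp_asn:
                    return "valid"
        return "invalid" if covered else "not-found"

    print(origin_validation("192.0.2.0/24", 64500))    # valid
    print(origin_validation("192.0.2.0/24", 64666))    # invalid: wrong origin
    print(origin_validation("203.0.113.0/24", 64500))  # not-found: no covering VRP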

Recent news in this space largely centers around the ongoing deployment of the RPKI. According to Wired, Google and Facebook have both recently adopted MANRS and are adopting the RPKI. It might not seem like autonomous systems along the edge adopting BGP security best practices and the RPKI can make much of a difference, but the “heavy hitters” among the content providers can play a pivotal role here by refusing to accept routes that appear to be hijacked. This not only helps these providers and their customers directly—a point the Wired article makes—it also helps the ’net in a larger way by blocking attackers’ access to at least some of the “big fish” in terms of traffic.

Leslie Daigle, over at the Global Cyber Alliance—an organization I’d never heard of until I saw this—has a post up explaining exactly how deploying the RPKI in an edge AS can make a big difference in the service level from a customer’s perspective. Leslie is looking for operators who will fill out a survey on the routing security measures they deploy. If you operate a network that has any sort of BGP presence in the default-free zone (DFZ), it’s worth taking a look and filling the survey out.

One of the various problems with routing security is just being able to see what’s in the RPKI. If you have a problem with your route in the global table, you can always go look at a route view server or looking glass (a topic I will cover in some detail in an upcoming live webinar over on Safari Books Online—I think it’s scheduled for February right now). But what about the RPKI? RIPE NCC has released a new tool called the JDR:

Just like RP software, JDR interprets certificates and signed objects in the RPKI, but instead of producing a set of Verified ROA Payloads (VRPs) to be fed to a router, it annotates everything that could somehow cause trouble. It will go out of its way to try to decode and parse objects: even if a file is clearly violating the standards and should be rejected by RP software, JDR will try to process it and present as much troubleshooting information to the end-user afterwards.

You can find the JDR here.

Finally, the folks at APNIC, working with NLnet Labs, have taken a page from the BGP playbook and proposed an opaque object for the RPKI, extending it beyond “just prefixes.” They’ve created a new object, the Resource Tagged Attestation (RTA), which can carry “any arbitrary file.” They have a post up explaining the rationale and the work here.