When I was in the military we were constantly drilled about the problem of Essential Elements of Friendly Information, or EEFIs. What are EEFIs? If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be fairly certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about the action being taken can be guessed.

Given enough broad information, an adversary can often guess at details that you really do not want them to know.

What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to craft spear phishing attacks—

Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.

This is not a new problem, either; during World War II, the famous "loose lips sink ships" campaign warned that even casual conversation could hand exactly this kind of information to the enemy.

What does all of this mean for the average network engineer concerned about security? Probably nothing more than being just slightly paranoid about your personal security in the first place (way too much modern security is driven by an anti-paranoid mindset, but that's a topic for a future post). Things like—

  • Don’t let people know, either through your job description or anything else, that you hold the master passwords for your company, or that your account holds administrator rights.
  • Don’t always go to the same watering holes, and don’t talk about work while there to people you’ve just met, or even people you see there all the time.
  • Don’t talk about when and where you’re going on vacation. You can talk about it, and share pictures, once you’re back.

If an attacker knows you are going to be on vacation, it’s a lot easier to create a fake “emergency,” tempting you to give out information about accounts, people, and passwords you shouldn’t. Phishing is primarily a matter of social engineering rather than technical acumen. Countering social engineering is also a social skill, rather than a technical one. You can start by learning to just say less about what you are doing, when you are doing it, and who holds the keys to the kingdom.

Why are networks so insecure?

One reason is we don’t take network security seriously. We just don’t think of the network as a serious target of attack. Or we think of security as a problem “over there,” something that exists in the application realm and needs to be solved by application developers. Or we think of the consequences of a network security breach as “well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we’re already taking care of our security issues.” Or we put our trust in the firewall, which sits there like some magic box solving all our problems.

The problem is–none of this is true. In any system where overall security is important, defense-in-depth is the key to building a secure system. No single part of the system bears the “primary responsibility” for “security.” The network is certainly a part of any defense-in-depth scheme that is going to work.

Which means network protocols need to be secure, at least in some sense, as well. I don’t mean “secure” in the sense of privacy—routes are not (generally) personally identifiable information (there are always exceptions, however). Rather, I mean “secure” in the sense that they cannot be easily attacked. On-the-wire encryption should prevent anyone from reading the contents of a packet or stream. Network devices like routers and switches should be difficult to break into, which means the protocols they run must be “secure” in the fuzzing sense—there should be no unexpected outputs because you’ve received an unexpected input.
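
To make "secure in the fuzzing sense" concrete, here is a minimal sketch in Python of the discipline I mean; the TLV wire format, names, and limits are all invented for illustration. The point is that the parser has exactly two possible outcomes, a valid parse or one well-defined error, no matter what bytes arrive.

```python
import struct

class ProtocolError(Exception):
    """Raised for any malformed input: the one "expected output" a
    parser should produce when handed unexpected bytes."""

MAX_VALUE_LEN = 4096  # refuse absurd lengths rather than allocating blindly

def parse_tlvs(data: bytes) -> list[tuple[int, bytes]]:
    """Parse a (hypothetical) stream of TLVs: a 1-byte type, a 2-byte
    big-endian length, then that many bytes of value. Any inconsistency
    is rejected explicitly instead of corrupting later state."""
    tlvs = []
    offset = 0
    while offset < len(data):
        if len(data) - offset < 3:
            raise ProtocolError("truncated TLV header")
        t, length = struct.unpack_from("!BH", data, offset)
        offset += 3
        if length > MAX_VALUE_LEN:
            raise ProtocolError(f"length {length} exceeds maximum")
        if len(data) - offset < length:
            raise ProtocolError("value runs past the end of the message")
        tlvs.append((t, data[offset:offset + length]))
        offset += length
    return tlvs
```

A fuzzer throwing random bytes at parse_tlvs should only ever see a ProtocolError or a clean parse; a hang, a crash, or a nonsense result is exactly the kind of unexpected output that eventually shows up as a CVE.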

I definitely do not mean path security of any sort. Making certain a packet (or update or whatever else) has followed a specified path is a chimera in packet switched networks. It’s like trying to nail your choice of multicolored gelatin dessert to the wall. Packet switched networks are designed to adapt to changes in the network by rerouting traffic. Get over it.

So why are protocols and network devices so insecure? I recently ran into an interesting piece of research that provides some of the answer. To wit—

Our research saw that ambiguous keywords SHOULD and MAY had the second highest number of occurrences across all RFCs. We’ve also seen that their intended meaning is only to be interpreted as such when written in uppercase (whereas often they are written in lowercase). In addition, around 40% of RFCs made no use of uppercase requirements level keywords. These observations point to inconsistency in use of these keywords, and possibly misunderstanding about their importance in a security context. We saw that RFCs relating to Session Initiation Protocol (SIP) made most use of ambiguous keywords, and had the most number of implementation flaws as seen across SIP-based CVEs. While not conclusive, this suggests that there may be some correlation between the level of ambiguity in RFCs and subsequent implementation security flaws.

In other words, ambiguous language leads to ambiguous implementations which leads to security flaws in protocols.
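
As a toy version of what the researchers measured, here is a sketch that counts uppercase (normative) against lowercase (ambiguous) RFC 2119 keywords in a locally saved RFC; the filename is just an assumption for illustration, and this simple version doesn't separate "MUST" from "MUST NOT."

```python
import re
from collections import Counter

# RFC 2119 requirement keywords; only the uppercase forms carry
# normative force, which is exactly the ambiguity the research flags.
KEYWORDS = ["MUST", "SHALL", "SHOULD", "MAY",
            "REQUIRED", "RECOMMENDED", "OPTIONAL"]

def keyword_counts(text: str) -> Counter:
    counts = Counter()
    for kw in KEYWORDS:
        # Word-bounded, case-sensitive matches: uppercase is normative,
        # the same word in lowercase is the ambiguous usage.
        counts[kw] += len(re.findall(rf"\b{kw}\b", text))
        counts[kw.lower()] += len(re.findall(rf"\b{kw.lower()}\b", text))
    return counts

# rfc3261.txt is assumed to be a locally saved copy of the SIP spec,
# the worst offender in the study cited above.
with open("rfc3261.txt") as f:
    for word, count in keyword_counts(f.read()).most_common():
        print(f"{count:6d}  {word}")
```

Run against any RFC, the interesting number is not the total but the ratio: every lowercase "should" is a place where two implementors can read the same sentence and ship different behavior.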

The solution for this situation might seem obvious—specify protocols more rigorously. But simple solutions rarely account for reality. If building more precise specifications were easy, our specifications would already be more precise. So why aren’t they?

In a word: politics.

For every RFC I’ve been involved in drafting, reviewing, or otherwise getting through the IETF, there are two reasons for each MAY or SHOULD therein. The first is that someone has thought of a use case where an implementor or operator might want to do something that would otherwise be disallowed by a MUST. In these cases, everyone looks at the proposed MAY or SHOULD, thinks about how not doing it might be useful, and then thinks … “this isn’t so bad, the available functionality is a good thing, and there’s no real problem I can see with making this a MAY or SHOULD.” In other words, we can imagine possible worlds where someone might want to do something, so we allow them to do it. Call this the “freedom principle.”

The second reason is that multiple vendors have multiple customers who want to do things in different ways. When two vendors clash in the realm of standards, the result is often a set of interlocking MAYs and SHOULDs that allow the two implementors to build solutions that are interoperable in the main, but not along the edges, each satisfying its own existing customers’ requirements. Call this the “big check principle.”

The problem with these situations is that the specification ends up with an indeterminate set of MAYs and SHOULDs that can interlock in unforeseen ways, resulting in unanticipated variances between implementations that ultimately show up as security holes.
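
Here is a contrived sketch of the failure mode. Assume a hypothetical spec that says a sender SHOULD set a reserved byte to zero, and a receiver MAY reject messages where it is nonzero. Both implementations below conform to that spec, both pass their own tests, and the edge where they meet is where the surprises live. All names and the wire format are invented.

```python
def vendor_a_send(payload: bytes) -> bytes:
    # Vendor A follows the SHOULD: the reserved byte is zeroed.
    return b"\x00" + payload

def vendor_b_send(payload: bytes) -> bytes:
    # Vendor B skips the SHOULD (it is not a MUST) and reuses the
    # reserved byte for a proprietary flag a big customer asked for.
    return b"\x80" + payload

def vendor_a_recv(msg: bytes) -> bytes:
    # Vendor A exercises the MAY: nonzero reserved bytes are rejected.
    if msg[0] != 0:
        raise ValueError("nonzero reserved byte")
    return msg[1:]

def vendor_b_recv(msg: bytes) -> bytes:
    # Vendor B never looks at the reserved byte at all.
    return msg[1:]

print(vendor_b_recv(vendor_a_send(b"hello")))  # A -> B interoperates
try:
    vendor_a_recv(vendor_b_send(b"hello"))     # B -> A fails at the edge
except ValueError as err:
    print(f"B -> A rejected: {err}")
```

Neither vendor is wrong according to the text, which is precisely the problem: the behavior at the boundary is decided by two independent readings of SHOULD and MAY rather than by the specification.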

Okay—now that I’ve described the problem, what can you do about it? One thing is to simplify. Stop putting everything into a small set of protocols. The more functionality you pour into a protocol or system, the harder it is to secure. Complexity is the enemy of security (and privacy!).

As for the political problems, these are human-scale, which means they are larger than any network you can ever build—but I’ll ponder this more and get back to you if I come up with any answers.

Tyler McDaniel joins Eyvonne, Tom, and Russ to discuss a study of Peerlock, a BGP filtering technique designed to prevent route leaks in the global Internet. From the study abstract:

BGP route leaks frequently precipitate serious disruptions to interdomain routing. These incidents have plagued the Internet for decades while deployment and usability issues cripple efforts to mitigate the problem. Peerlock, introduced in 2016, addresses route leaks with a new approach. Peerlock enables filtering agreements between transit providers to protect their own networks without the need for broad cooperation or a trust infrastructure.


I’ve been chasing BGP security since before the publication of the soBGP drafts, way back in the early 2000s (that’s almost 20 years for those who are math challenged). The most recent news largely centers on the RPKI, which is used to ensure the AS originating an advertisement is authorized to do so (or rather “owns” the resource or prefix). If you are not “up” on what the RPKI does, or how it works, you might find this old blog post useful—it’s actually the tenth post in a ten-part series on the topic of BGP security.
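
If you want the mechanics rather than the acronyms: a ROA binds a prefix and a maximum prefix length to an authorized origin AS, and a relying party classifies each received route as valid, invalid, or not-found against the full set of ROAs. Here is a minimal sketch of that RFC 6811-style comparison; the ROA table is made up for illustration, and real validators work from the signed objects themselves.

```python
from ipaddress import ip_network

# Hypothetical ROAs: (covered prefix, max length, authorized origin AS).
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_as: int) -> str:
    """Classify a route using RFC 6811-style origin validation."""
    net = ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.subnet_of(roa_net):  # some ROA covers this prefix
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but matching none: invalid (a possible hijack or
    # misconfiguration). Covered by no ROA at all: the RPKI is silent.
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))    # valid
print(validate("192.0.2.0/24", 64666))    # invalid: wrong origin AS
print(validate("203.0.113.0/24", 64500))  # not-found: no covering ROA
```

The "maximum length" matters more than it looks: a /25 carved out of 192.0.2.0/24 above would be covered but invalid, which is how the RPKI blunts more-specific-prefix hijacks.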

Recent news in this space largely centers around the ongoing deployment of the RPKI. According to Wired, Google and Facebook have both recently adopted MANRS, and are adopting the RPKI. While it might not seem like autonomous systems along the edge adopting BGP security best practices and the RPKI can make much of a difference, the “heavy hitters” among the content providers can play a pivotal role here by refusing to accept routes that appear to be hijacked. This not only helps these providers and their customers directly—a point the Wired article makes—it also helps the ‘net in a larger way by blocking attackers’ access to at least some of the “big fish” in terms of traffic.

Leslie Daigle, over at the Global Cyber Alliance—an organization I’d never heard of until I saw this—has a post up explaining exactly how deploying the RPKI in an edge AS can make a big difference in the service level from a customer’s perspective. Leslie is looking for operators who will fill out a survey on the routing security measures they deploy. If you operate a network that has any sort of BGP presence in the default-free zone (DFZ), it’s worth taking a look and filling the survey out.

One of the various problems with routing security is just being able to see what’s in the RPKI. If you have a problem with your route in the global table, you can always go look at a route view server or looking glass (a topic I will cover in some detail in an upcoming live webinar over on Safari Books Online—I think it’s scheduled for February right now). But what about the RPKI? NLnet Labs has released a new tool, announced over on RIPE Labs, called JDR:

Just like RP software, JDR interprets certificates and signed objects in the RPKI, but instead of producing a set of Verified ROA Payloads (VRPs) to be fed to a router, it annotates everything that could somehow cause trouble. It will go out of its way to try to decode and parse objects: even if a file is clearly violating the standards and should be rejected by RP software, JDR will try to process it and present as much troubleshooting information to the end-user afterwards.

You can find the JDR here.

Finally, the folks at APNIC, working with NLnet Labs, have taken a page from the BGP playbook and proposed an opaque object for the RPKI, extending it beyond “just prefixes.” They’ve created a new object type, the Resource Tagged Attestation (RTA), which can carry “any arbitrary file.” They have a post up explaining the rationale and the work here.

Let’s play the analogy game. The Internet of Things (IoT) is probably going to end up being like … a box of chocolates, because you never do know what you are going to get? A big bowl of spaghetti with a serious lack of meatballs? Whatever it is, the IoT should have network folks worried about security. There is, of course, the problem of IoT devices being attached to random places on the network, exfiltrating personal data back to a cloud server you don’t know anything about. Some of these devices might be rogue, such as a Raspberry Pi attached to some random place in the network. Others might be more conventional, such as those new exercise machines the company just brought into the gym, which are sending personal information in the clear to an outside service.

While there is research into how to tell the difference between IoT and “larger” devices, the reality is spoofing and blurred lines will likely make such classification difficult. What do you do with a virtual machine that looks like a Raspberry Pi running on a corporate laptop for completely legitimate reasons? Or what about the Raspberry Pi-like device that can run a fully operational Windows stack, including “background noise” applications that make it look like a normal compute platform? These problems are, unfortunately, not easy to solve.

To make matters worse, there are no standards by which to judge the security of an IoT device. Even if the device manufacturer–think about the new gym equipment here–has the best intentions towards security, there is almost no way to determine whether a particular device is designed and built with good security. The result is that IoT devices are often infected and used as part of a botnet for DDoS or other attacks.

What are our options here from a network perspective? The most common answer to this is segmentation–and segmentation is, in fact, a good start on solving the problem of IoT. But we are going to need a lot more than segmentation to avert certain disaster in our networks. Once these devices are segmented off, what do we do with the traffic? Do we just allow it all (“hey, that’s an IoT device, so let it send whatever it wants to… after all, it’s been segmented off the main network anyway”)? Do we try to manage and control what information is being exfiltrated from our networks? Is machine learning going to step in to solve these problems? Can it, really?
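
Segmentation only helps if you then decide what the segmented traffic is allowed to do. Here is one possible shape of an answer, sketched in Python purely for illustration; in a real network this policy lives in a firewall or switch ACL, and every address and device class below is invented. The design choice worth noticing is default-deny: a flow out of the IoT segment is dropped unless something explicitly allows it.

```python
from ipaddress import ip_address, ip_network

IOT_SEGMENT = ip_network("10.20.0.0/16")  # hypothetical IoT VLAN

# Per-device-class egress allow-list: (destination network, port).
EGRESS_ALLOW = {
    "gym-equipment": [(ip_network("203.0.113.0/24"), 443)],  # vendor cloud, TLS only
    "hvac-sensor":   [(ip_network("10.30.0.0/24"), 8883)],   # internal MQTT broker
}

def egress_permitted(device_class: str, src: str, dst: str, port: int) -> bool:
    """Return True only for explicitly allowed egress from the IoT segment."""
    if ip_address(src) not in IOT_SEGMENT:
        return True  # not our segment; some other policy applies
    for allowed_net, allowed_port in EGRESS_ALLOW.get(device_class, []):
        if ip_address(dst) in allowed_net and port == allowed_port:
            return True
    return False  # default deny: unknown flows are dropped, then investigated

print(egress_permitted("gym-equipment", "10.20.1.7", "203.0.113.9", 443))  # True
print(egress_permitted("gym-equipment", "10.20.1.7", "198.51.100.5", 80))  # False
```

This does not answer the harder questions above; it just turns "allow it all" into a deliberate, reviewable decision per device class, which is about the best segmentation alone can offer.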

To put it another way–the attack surface we’re facing here is huge, and the smallest mistake can have very bad ramifications in individual lives. Take, for instance, the problem of data and IoT devices in abusive relationships. Relationships are dynamic; how is your company going to know when an employee is in an abusive relationship, and thus when certain kinds of access should be shut off? There is so much information here it seems almost impossible to manage it all.

It looks, to me, like the future is going to be a bit rough and tumble as we learn to navigate this new realm. Vendors will have lots of good ideas (look at Mist’s capabilities in tracking down the location of rogue devices, for instance), but in the end it’s the operational front line that will have to figure out how to manage and deploy networks where there is a broad blend of ultimately untrustworthy IoT devices and more traditional devices.

Now would be the time to start learning about security, privacy, and IoT if you haven’t started already.