Complexity Reduction?

Back in January, I ran into an interesting article called The many lies about reducing complexity:

Reducing complexity sells. Especially managers in IT are sensitive to it as complexity generally is their biggest headache. Hence, in IT, people are in a perennial fight to make the complexity bearable.

Gerben then discusses two ways we often try to reduce complexity. First, we try to simply reduce the number of applications we’re using. We see this all the time in the networking world—if we could only get to a single pane of glass, or reduce the number of management packages we use, or reduce the number of control planes (generally to one), or reduce the number of transport protocols … but reducing the number of protocols doesn’t necessarily reduce complexity. Instead, we can just end up with one very complex protocol. Would it really be simpler to push DNS and HTTP functionality into BGP so we can use a single protocol to do everything?

Second, we try to reduce complexity by hiding it. While this is sometimes effective, it can also lead to unacceptable tradeoffs in performance (we run into the state, optimization, surfaces triad here). It can also make the system more complex if we need to go back and leak information to regain optimal behavior. Think of the OSPF type 4 LSA, which reinjects information lost in building an area summary, or the complexity of the type 7 to type 5 translation required to support not-so-stubby areas.

It would seem, then, that you really can’t get rid of complexity. You can move it around, and sometimes you can effectively hide it, but you cannot get rid of it.

This is, to some extent, true. Complexity is a reaction to difficult environments, and networks are difficult environments.

Even so, there are ways to actually reduce complexity. The solution is not just hiding information because it’s messy, or munging things together because it requires fewer applications or protocols. You cannot eliminate complexity, but if you think about how information flows through a system you might be able to reduce the amount of complexity, and even create boundaries where state (hence complexity) can be more effectively hidden.

For instance, I have argued elsewhere that building a DC fabric with distinct overlay and underlay protocols can actually create a simpler overall design than using a single protocol. Another instance might be thinking carefully about where route aggregation takes place: is it really needed at all? Why? Is this the right place to aggregate routes? Is there any way I can change the network design to reduce the state leaking through the abstraction?
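The aggregation tradeoff can be sketched in a few lines. This is a toy model I've invented for illustration (the function, prefixes, and "leak" parameter are not any router's actual behavior): summarization hides state from the rest of the network, and regaining optimal routing toward one destination means leaking a specific back through the abstraction, adding complexity back in.

```python
def advertise(prefixes, aggregate=None, leak=frozenset()):
    """Return the set of routes a border router advertises: either all
    specifics, or the aggregate plus any deliberately leaked specifics."""
    if aggregate is None:
        return set(prefixes)
    return {aggregate} | (set(prefixes) & set(leak))

specifics = {"10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"}

# Without aggregation, every change in a specific prefix is visible
# (and must be processed) everywhere in the network.
print(sorted(advertise(specifics)))

# With aggregation, the rest of the network sees one stable route...
print(sorted(advertise(specifics, aggregate="10.1.0.0/16")))

# ...but optimal routing toward one destination requires leaking a
# specific back through the summary.
print(sorted(advertise(specifics, aggregate="10.1.0.0/16", leak={"10.1.2.0/24"})))
```

The point of the sketch is the last line: the leaked specific is exactly the state the aggregate was supposed to hide, which is why "where should I aggregate, and what must leak?" is a design question, not a checkbox.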

The problem is there are no clear-cut rules for thinking about complexity in this way. There is no rule of thumb; there are no best practices. You just have to think through each individual situation and consider how, where, and why state flows, and then think through the state/optimization/surface tradeoffs for each possible way of reducing the complexity of the system. You also have to take into account that local reductions in complexity can make the overall system much more complex, and eventually make the system brittle.

There is no “pat” way to reduce complexity. The claim that there is may be one of the biggest lies about complexity in the networking world.

The Hedge 79: Brooks Westbrook and the Data Driven Lens

Many networks are designed and operationally driven by the configuration and management of features supporting applications and use cases. For network engineering to catch up to the rest of the operational world, it needs to move rapidly toward data-driven management based on a solid understanding of the underlying protocols and systems. Brooks Westbrook joins Tom Ammon and Russ White to discuss the data-driven lens in this episode of the Hedge.

download

Loose Lips

When I was in the military we were constantly drilled on Essential Elements of Friendly Information, or EEFIs. What are EEFIs? They are the small, seemingly harmless pieces of information that, taken together, reveal your plans. If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be fairly certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about the action being taken can be guessed.

Given enough broad information, an adversary can often guess at details that you really do not want them to know.

What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to build spear phishing attacks—

Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.

Going back further in time, during World War II, posters warned that “loose lips sink ships.”

What does all of this mean for the average network engineer concerned about security? Probably nothing different than being just slightly paranoid about your personal security in the first place (way too much modern security is driven by an anti-paranoid mindset, a topic for a future post). Things like—

  • Don’t let people know, either through your job description or anything else, that you hold the master passwords for your company, or that your account holds administrator rights.
  • Don’t always go to the same watering holes, and don’t talk about work while there to people you’ve just met, or even people you see there all the time.
  • Don’t talk about when and where you’re going on vacation. You can talk about it, and share pictures, once you’re back.

If an attacker knows you are going to be on vacation, it’s a lot easier to create a fake “emergency,” tempting you to give out information about accounts, people, and passwords you shouldn’t. Phishing is primarily a matter of social engineering rather than technical acumen. Countering social engineering is also a social skill, rather than a technical one. You can start by learning to just say less about what you are doing, when you are doing it, and who holds the keys to the kingdom.

Time and Mind Savers: RSS Feeds

I began writing this post just to remind readers this blog does have a number of RSS feeds—but then I thought … well, I probably need to explain why that piece of information is important.

The amount of writing, video, and audio being thrown at the average person today is astounding—so much so that, according to a lot of research, most people in the digital world have come to rely on social media as their primary source of news. Why? I’m pretty convinced this is largely a matter of “it saves time.” The resulting feed might not be “perfect,” but it’s “close enough,” and no one wants to spend time seeking out a wide variety of news sources to be better informed.

The problem, in this case, is that “close enough” is really a bad idea. We all tend to live in information bubbles of one form or another (although I’m fully convinced it’s much easier to live in a liberal/progressive bubble, being completely insulated from any news that doesn’t support your worldview, than it is to live in a conservative/traditional one). If you think about the role of social media and the news feed on social media services, this makes some kind of sense. The social media service tries to guess at what will keep you interested (engaged, and therefore coming back to the service), but at the same time each social media service also has a worldview they want to promote. The service largely attempts to both cater to what keeps you there and to pull you towards what the service, itself, believes.

The solution is to stop getting your news from social media. Period, full stop, end of sentence (although I’ve seen a recent paper indicating people find periods and other punctuation marks offensive in some way; when you find a period offensive, maybe it’s time to grow a little thicker skin).

So how should you get information instead? There are a lot of ways, from email-based newsletters to watching television (please don’t; television turns everything into entertainment, including things that are not meant to entertain). My suggestion, however, is RSS feeds. Grab an account on Feedly or some other service, find the RSS feeds for the sites you find informative, and subscribe. Some services have a learning mechanism that tries to accomplish the same thing as social media feeds—building intelligent filters to emphasize things you find important. I don’t use these; I have learned to glance at the headline and first paragraph and make a quick decision about whether a post is worth reading.
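Under the hood, an RSS feed is just an XML document listing recent posts, which is part of why it is such a durable, service-independent format. Here is a minimal sketch of what a reader does, using only the Python standard library and an invented sample feed; real feeds (RSS 2.0 vs. Atom, namespaced extensions, enclosures for podcasts) vary quite a bit, which is exactly the drudgery a service like Feedly handles for you.

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (title, link) pairs for each item in a simple RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

# A tiny hand-built example feed, not a real one.
sample = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First Post</title><link>https://example.com/1</link></item>
  <item><title>Second Post</title><link>https://example.com/2</link></item>
</channel></rss>"""

for title, link in parse_rss(sample):
    print(title, link)
```

In practice you would fetch the feed document over HTTP on a schedule and diff against what you have already seen, which is all a feed reader really is.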

Following RSS feeds can help you stop binging, jumping from place to place on a single site—essentially wasting time. It works against the mechanisms designers use to “increase engagement,” which often just means to consume more of your attention and time than you intended to give away. Following RSS feeds can also help you gain a broader view of the world if you intentionally subscribe to feeds from sites and people you don’t always agree with. It’s healthy to regularly read “the other side.” Following strong, well-written arguments from “the other side” will do much more for your mind than seeing just the facile, emotionally charged, straw-man arguments often presented (and allowed through the filters) on social media.

Further, services like Feedly also allow you to follow lots of other things, including Twitter accounts, YouTube channels, and podcasts. I follow almost all podcasts through Feedly, downloading the individual episodes I want to listen to, storing them in a cloud directory, and deleting the files when I’m done. This gives me one list of things to listen to, rather than a huge playlist full of seemingly never-ending content.

All this said, this blog has a lot of different RSS feeds available. I don’t have a complete list, but these are a good place to start—

I keep these same links on a page of RSS feeds you can find under the about menu. If you’re interested in the RSS feeds I follow, please reach out to me directly, as Feedly no longer has any way to share your feeds other than exporting an OPML file (at least not that I can find).

The Hedge 77: The Internet is for End Users

When the interests of the end user, the operator, and the vendor come into conflict, who should protocol developers favor? According to RFC 8890, the needs and desires of the end user. As the RFC puts it:

As the Internet increasingly mediates essential functions in societies, it has unavoidably become profoundly political; it has helped people overthrow governments, revolutionize social orders, swing elections, control populations, collect data about individuals, and reveal secrets. It has created wealth for some individuals and companies while destroying that of others. All of this raises the question: For whom do we go through the pain of gathering rough consensus and writing running code?

Mark Nottingham joins Alvaro Retana and Russ White on this episode of the Hedge to discuss why the Internet is for end users.

download

The Insecurity of Ambiguous Standards

Why are networks so insecure?

One reason is we don’t take network security seriously. We just don’t think of the network as a serious target of attack. Or we think of security as a problem “over there,” something that exists in the application realm and needs to be solved by application developers. Or we think of the consequences of a network security breach as “well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we’re already taking care of our security issues.” Or we put our trust in the firewall, which sits there like some magic box solving all our problems.

The problem is that none of this is true. In any system where overall security is important, defense-in-depth is the key to building a secure system. No single part of the system bears the “primary responsibility” for security. The network is certainly a part of any defense-in-depth scheme that is going to work.

Which means network protocols need to be secure, at least in some sense, as well. I don’t mean “secure” in the sense of privacy—routes are not (generally) personally identifiable information (there are always exceptions, however). I mean “secure” in the sense that they cannot be easily attacked. On-the-wire encryption should prevent anyone from reading the contents of the packet or stream. Network devices like routers and switches should be difficult to break into, which means the protocols they run must be “secure” in the fuzzing sense—there should be no unexpected outputs because you’ve received an unexpected input.
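What “secure in the fuzzing sense” looks like in code: every claim the input makes about itself is checked against the bytes actually present before being trusted. Here is a toy TLV (type-length-value) parser I've invented to illustrate the idea; it is not any real protocol’s wire format. Malformed input produces a clean, explicit error instead of an out-of-range read or a silently wrong parse.

```python
def parse_tlvs(data: bytes):
    """Parse a sequence of one-byte-type, one-byte-length TLVs,
    validating every length field against the remaining buffer."""
    tlvs, i = [], 0
    while i < len(data):
        if len(data) - i < 2:
            raise ValueError("truncated TLV header at offset %d" % i)
        t, length = data[i], data[i + 1]
        if len(data) - i - 2 < length:
            raise ValueError("TLV length %d overruns buffer" % length)
        tlvs.append((t, data[i + 2:i + 2 + length]))
        i += 2 + length
    return tlvs

# Well-formed input parses normally...
print(parse_tlvs(bytes([1, 3, 0x61, 0x62, 0x63, 2, 0])))

# ...while a fuzzer's malformed input is rejected, not mis-parsed.
try:
    parse_tlvs(bytes([1, 200, 0x61]))
except ValueError as e:
    print("rejected:", e)
```

The unexpected-input-unexpected-output bugs fuzzers find are almost always a missing check of exactly this kind: a length, count, or offset field trusted before it is validated.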

I definitely do not mean path security of any sort. Making certain a packet (or update, or anything else) has followed a specified path is a chimera in packet-switched networks; it’s like trying to nail your choice of multicolored gelatin dessert to the wall. Packet-switched networks are designed to adapt to changes in the network by rerouting traffic. Get over it.

So why are protocols and network devices so insecure? I recently ran into an interesting piece of research that provides some of the answer. To wit—

Our research saw that ambiguous keywords SHOULD and MAY had the second highest number of occurrences across all RFCs. We’ve also seen that their intended meaning is only to be interpreted as such when written in uppercase (whereas often they are written in lowercase). In addition, around 40% of RFCs made no use of uppercase requirements level keywords. These observations point to inconsistency in use of these keywords, and possibly misunderstanding about their importance in a security context. We saw that RFCs relating to Session Initiation Protocol (SIP) made most use of ambiguous keywords, and had the most number of implementation flaws as seen across SIP-based CVEs. While not conclusive, this suggests that there may be some correlation between the level of ambiguity in RFCs and subsequent implementation security flaws.

In other words, ambiguous language leads to ambiguous implementations, which lead to security flaws in protocols.
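The kind of measurement the researchers describe can be sketched roughly. This is my own simplified illustration, not their actual methodology: count RFC 2119 requirement keywords in a specification’s text, separating binding uppercase use from ambiguous lowercase use (I limit it to single-word keywords to sidestep double-counting compound forms like “MUST NOT”).

```python
import re

KEYWORDS = ["MUST", "SHOULD", "MAY"]

def keyword_counts(text):
    """Count uppercase (binding) vs. lowercase (ambiguous) occurrences
    of RFC 2119 requirement keywords in a specification's text."""
    counts = {}
    for kw in KEYWORDS:
        counts[kw] = {
            "uppercase": len(re.findall(r"\b%s\b" % kw, text)),
            "lowercase": len(re.findall(r"\b%s\b" % kw.lower(), text)),
        }
    return counts

# An invented scrap of specification text for illustration.
sample = ("The server MUST validate every field. Clients should retry "
          "on failure, and a peer MAY drop malformed messages.")
print(keyword_counts(sample))
```

Here the lowercase “should” is exactly the problem the research points at: is it a binding requirement the author forgot to capitalize, or a casual aside? Two implementors can reasonably disagree, and their implementations will differ.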

The solution for this situation might seem simple: specify protocols more rigorously. But simple solutions rarely admit reality within their scope. If building more precise specifications is easy, why aren’t our specifications more precise?

In a word: politics.

For every RFC I’ve been involved in drafting, reviewing, or otherwise getting through the IETF, each MAY or SHOULD therein exists for one of two reasons. The first is that someone has thought of a use case where an implementor or operator might want to do something that would otherwise not be allowed by a MUST. In these cases, everyone looks at the proposed MAY or SHOULD, thinks about how not doing it might be useful, and then thinks: “this isn’t so bad, the available functionality is a good thing, and there’s no real problem I can see with making this a MAY or SHOULD.” In other words, we can think of possible worlds where someone might want to do something, so we allow them to do it. Call this the “freedom principle.”

The second reason is that multiple vendors have multiple customers who want to do things in different ways. When two vendors clash in the realm of standards, the result is often a set of interlocking MAYs and SHOULDs that allows both implementors to build solutions that are interoperable in the main, but not along the edges, satisfying both of their existing customers’ requirements. Call this the “big check principle.”

The problem with both situations is that the specification ends up with an undetermined set of MAYs and SHOULDs that can interlock in unforeseen ways, resulting in unanticipated variances in implementations that ultimately show up as security holes.

Okay—now that I’ve described the problem, what can you do about it? One thing is to simplify. Stop putting everything into a small set of protocols. The more functionality you pour into a protocol or system, the harder it is to secure. Complexity is the enemy of security (and privacy!).

As for the political problems, these are human-scale, which means they are larger than any network you can ever build—but I’ll ponder this more and get back to you if I come up with any answers.