An Interesting Take on Mapping an Attack Surface

Security often lives in one of two states. It’s either something “I” take care of, because my organization is so small there isn’t anyone else to take care of it, or it’s something those folks sitting over there in the corner take care of, because the organization is, in fact, large enough to have a separate security team. In both cases, however, security is something done to networks, or something thought about off on its own, in relation to networks rather than as part of them.

I’ve been trying to think of ways to challenge this way of thinking for many years—a long time ago, in a universe far away, I created and gave a presentation on network security at Cisco Live (raise your hand if you’re old enough to have seen this presentation!).

Reading through my paper pile this week, I ran into a viewpoint in the Communications of the ACM that revived my older thinking about network security and gave me a new way to think about the problem. The author’s expression of the problem of supply chain security can be used more broadly. The illustration below is replicated from the one in the original article; I will use this as a starting point.

This is a nice way to visualize your attack surface. The columns represent applications or systems, the rows represent vulnerabilities, and the colors represent risk, as explained across the bottom of the chart. One simple way to use this would be to list all the things in your network along the top as columns and all the things that can go wrong as rows, then use the chart in the same way. This would be a cut-down, or more specific, version of the same concept.

Another way to use this sort of map—and this is just a nub of an idea, so you’ll need to think about how to apply it to your situation a little more deeply—is to create two groups of columns: one group with a column for each application that relies on network services, and a second group with a column for each network infrastructure device or service you rely on. Rows would be broken into three classes, from top to bottom—protection, services, and systems. In the protection group you would have things the network does to protect data and applications, like segmentation, preventing data exfiltration, etc. In the services group, you would mostly have various forms of denial of service and configuration. In the systems group, you would have the individual hardware devices, protocols, software packages used to make the network “go,” etc. Maybe something like the illustration below.
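Since an illustration can only show the layout, here is a minimal sketch of the same structure as data, in Python. Every application, service, and system name in it is a hypothetical placeholder, not a real inventory:

# A minimal sketch of the attack surface chart as nested data.
# All names are hypothetical placeholders.

columns = {
    "applications": ["sales management", "payroll", "inventory"],
    "infrastructure": ["edge routers", "BGP", "overlay to cloud"],
}

rows = {
    "protection": ["segmentation", "data exfiltration prevention"],
    "services": ["denial of service", "configuration"],
    "systems": ["hardware devices", "protocols", "software packages"],
}

# The chart is one risk score per (row, column) cell; 0 means
# "not yet assessed," and higher numbers mean more severe.
chart = {
    (row, col): 0
    for row_group in rows.values() for row in row_group
    for col_group in columns.values() for col in col_group
}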

If you place the most important applications towards the left and the protections towards the top, the most severe vulnerabilities will land in the upper left corner of the chart, with less severe areas falling to the right and (potentially) towards the bottom. You would fill this chart out starting in the upper left, figuring out what kinds of “protection” the network, as a service, can offer each application. These should, in turn, roll down to the services the network offers and their corresponding configurations. These should, in turn, roll across to the devices and software used to create these services, and then roll back down to the vulnerabilities of those services and devices. For instance, sales management might rely on application access control, application access control on proper filtering, and filtering on BGP and some sort of overlay virtual link to a cloud service… You start to get an idea of where different kinds of services rely on underlying capabilities, and then how those are related to suppliers, hardware, etc.
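Here is a toy version of that roll-down in code, using the hypothetical chain from the paragraph above; walking the map shows everything an application ultimately relies on:

# Toy dependency map following the example above: an application
# relies on a protection, the protection on a service, and the
# service on underlying systems. All entries are illustrative.
depends_on = {
    "sales management": ["application access control"],
    "application access control": ["filtering"],
    "filtering": ["BGP", "overlay virtual link to cloud"],
}

def transitive_dependencies(item, seen=None):
    """Return everything item ultimately relies on."""
    seen = set() if seen is None else seen
    for dep in depends_on.get(item, []):
        if dep not in seen:
            seen.add(dep)
            transitive_dependencies(dep, seen)
    return seen

print(sorted(transitive_dependencies("sales management")))
# ['BGP', 'application access control', 'filtering',
#  'overlay virtual link to cloud']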

You can color the squares in different ways—the way the original article does, perhaps, or by your reliance on an outside vendor to solve each problem, etc. Once the basic chart is in place, you can apply multiple color schemes to get different views of the attack surface, using the chart as a sort of heat map.
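As a rough sketch of what those multiple views might look like, assuming matplotlib is available and using made-up scores (the real values would come from your own assessment):

# Two views of the same chart rendered as heat maps; the labels
# and scores below are made up purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

cols = ["sales mgmt", "payroll", "BGP", "cloud overlay"]
rows = ["segmentation", "exfiltration", "DoS", "hardware"]

severity = np.array([[3, 2, 1, 2],
                     [2, 3, 0, 1],
                     [1, 1, 3, 3],
                     [0, 1, 2, 2]])
vendor_reliance = np.array([[1, 0, 3, 3],
                            [0, 0, 2, 3],
                            [1, 1, 3, 2],
                            [2, 2, 3, 3]])

for name, view in [("severity", severity),
                   ("vendor reliance", vendor_reliance)]:
    plt.figure()
    plt.imshow(view, cmap="YlOrRd")  # each scheme is just another view
    plt.xticks(range(len(cols)), cols, rotation=45)
    plt.yticks(range(len(rows)), rows)
    plt.title("attack surface: " + name)
    plt.colorbar()
plt.show()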

Again, this is something of a nub of an idea, but it is a potentially interesting way to get a single view of the entire network ecosystem from a security standpoint, to know where things are weak (and hence need work), and to understand where cascading failures might happen.

The Hedge 27: New directions in network and computing systems

On this episode of the Hedge, Micah Beck joins us to discuss a paper he wrote recently considering a new model of compute, storage, and networking. Micah Beck is an Associate Professor of computer science at the University of Tennessee, Knoxville, where he researches and publishes in the area of networking technologies, including the hourglass model and the end-to-end principle.

If you are interested in the paper we are discussing on this episode, or Micah’s other work, you can find it at his personal site.


Enterprise and Service Provider—Once more into the Windmill

There is no enterprise, there is no service provider—there are problems, and there are solutions. I’m certain everyone reading this blog, or listening to my podcasts, or listening to a presentation I’ve given, or following along in some live training or book I’ve created, has heard me say this. I’m also certain almost everyone has heard the objections to my argument—that the hyperscalers’ problems are not your problems, and that the technologies and solutions providers use are fundamentally different from what enterprises require.

Let me try to recap some of the arguments I’ve heard used against my assertion.

The theory that enterprise and service provider networks require completely different technologies and implementations is often grounded in scale. Service provider networks are so large that they simply must use different solutions—solutions that you cannot apply to any network running at a smaller scale.

The problem with this line of thinking is it throws the baby out with the bathwater. Google is using automation to run their network? Well, then… you shouldn’t use automation because Google’s problems are not your problems. Microsoft is deploying 100G Ethernet over fiber? Then clearly enterprise networks should be using Token Ring or ARCnet because… Microsoft’s problems are not your problems.

The usual answer is—“I’m not saying we shouldn’t take good ideas when we see them, but we shouldn’t design networks the way someone else does just because.” I don’t see how this clarifies the solution, though—when is it a good idea or a bad one? What is our criterion to decide what to adopt and what not to adopt? Simply saying “X’s problems aren’t your problems” doesn’t really give me any actionable information—or at least I’m not getting it if it’s buried in there someplace.

Instead—maybe—just maybe—we are looking at this all wrong. Maybe there is some other way to classify networks that will help us see the problem set better.

I don’t think networks are undifferentiated—I think the enterprise/service provider/hyperscaler divide is not helpful in understanding how different networks are … different, and how to correctly identify an environment and build to it. Reading a classic paper in software design this week—Programs, Life Cycles, and Laws of Software Evolution—brought all this to mind. In writing this paper, Meir Lehman was facing many of the same classification problems, just in software development rather than in building networks.

Rather than saying “enterprise software is different than service provider software”—an assertion absolutely no one makes—or even “commercial software is different than private software, and developers working in these two areas cannot use the same tools and techniques,” Lehman posits there are three kinds of software systems. He calls these S-Programs, in which the problem and solution can be fully specified; P-Programs, in which the problem can be fully specified, but the program can only be partially specified because of complexity and scale; and E-Programs, where the program itself becomes part of the world it models. Lehman thinks most software will move towards S-Program status as time moves on—something that hasn’t happened (the reasons are out of scope for this already-too-long blog post).
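As a toy contrast between the first two categories (my own illustration in Python, not anything from Lehman’s paper): an S-Program can be specified and checked completely, while a P-Program answers a fully specified problem with a heuristic:

import math
from itertools import permutations

# S-Program flavor: the spec is total, so behavior can be checked
# exhaustively; every input maps to a known output.
def sort_three(a, b, c):
    return tuple(sorted((a, b, c)))

assert all(sort_three(*p) == (1, 2, 3) for p in permutations((1, 2, 3)))

# P-Program flavor: the problem (find a short tour through every
# point) is fully specified, but the solution space is too large to
# search, so we settle for a heuristic, here nearest-neighbor.
def nearest_neighbor_tour(points):
    tour, remaining = [points[0]], set(points[1:])
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(tour[-1], p))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

print(nearest_neighbor_tour([(0, 0), (5, 1), (1, 1), (2, 4)]))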

But the classification is useful. For S-Programs, the inputs and outputs can be fully specified, full-on testing can take place before the software is deployed, and lifecycle management is largely about making the software conform more fully to its original conditions. Maybe there are S-Networks, too? Single-purpose networks aimed at fulfilling one well-defined thing, and only that thing. Lehman talks about learning how to break larger problems into smaller ones so the S-Problems can be dealt with separately—is this any different than separating out the basic problem of providing IP connectivity in a DC fabric underlay, or even providing basic IP connectivity in a transit or campus network, treating it as a separate module with fairly well-defined goals and measurements?

Lehman talks about P-Programs, where the problem is largely definable, but the solutions end up being more heuristic. Isn’t this similar to a traffic engineering overlay, where we largely know what the goals are, but we don’t necessarily know what specific solution is going to be needed at any moment, and the complete set of solutions is just too large to calculate up front? What about E-Programs, where the software becomes part of the world it models? Isn’t this like the intent-based stuff we’ve been talking about in networking for going on 30 years now?

Looking at it another way, isn’t it possible that some networks are largely just S-Networks? And others are largely E-Networks? And that these classifications have nothing to do with whether the network is being built by what we call an “enterprise” or a “service provider?” Isn’t it possible that S-Networks should probably all use the same basic sort of structure and largely be classified as a commodity, while E-Networks will all be snowflakes, and largely classified as having high business importance?

Just like I don’t think the OSI model is particularly helpful in teaching and understanding networks any longer, I don’t find the enterprise/service provider/hyperscaler model very useful in building and operating networks. The enterprise/service provider divide tends to artificially limit the transfer of ideas that ought to be transferred, and to artificially “hype up” some networks while degrading others—largely based on perceptions of scale.

Scale != complexity. It’s not about service providers and enterprises. It doesn’t matter if Google’s problems are not your problems; borrowing from the hyperscalers is not a “bad thing.” It’s just a “thing.” Think clearly about the problem set, understand the problem set, and borrow liberally. There is no such thing as a “service provider technology,” nor is there any such thing as an “enterprise technology.” There are problems, and there are solutions. To be an engineer is to connect the two.

The Hedge 26: Jason Gooley and CHINOG

CHINOG is a regional network operators group that meets in Chicago once a year. For this episode of the Hedge, Jason Gooley joins us to talk about the origins of CHINOG, the challenges involved in running a small conference, some tips for those who would like to start a conference of this kind, and thoughts on the importance of community in the network engineering world.


The Art and Necessity of Refocusing

Over at his blog, The Forwarding Plane, Nick Buraglio posted about embracing change and how technology is mostly unimportant. In the technology-driven world networking folks live in, how can technology be mostly inconsequential? One answer is that people drive technology, rather than the other way around—but this misses the real-world consequences of technological adoption for culture. To paraphrase Andy Crouch, technology makes some things possible that were once impossible, some things easy that were once hard, some things hard that were once easy, and some things impossible that were once possible.

There is another answer to this question, though—the real versus the perceived rate of change. When I was a kid, I would ride around with my uncle in his Jeep, a 1968 CJ5 with a soft top and soft doors. He would take the doors off when he took the top down, and—these older Jeeps being much smaller than current models—you could look just to your right and see the road passing by just there under your feet. What always amazed me was I could make myself think I was moving at different speeds just by changing my focus. If I looked across a field at a telephone pole in the distance, it didn’t seem like I was moving all that quickly. If I stared down at the white line on the side of the road, it looked like I was moving very fast indeed. By shifting my focus from here to there, I could adjust my perceived speed.

Here is where the focus on details becomes critical in networking. We do tend to focus on the details. To make matters worse, the average network operator tends to be something of a generalist. Being a generalist focused on details can be a frightening experience.

If you live entirely in the world of Ethernet, then you see past and future changes in the context of the history of Ethernet. This is something like looking at an object a few hundred feet off the road, perhaps. Things are moving quickly, but they aren’t insanely fast, blurry, up close and personal. If you live wholly in the world of routing protocols, you are going to have a different picture, but the apparent speed is going to be similar, or perhaps even slower.

If you’re a generalist who focuses on detail, though, you’re going to be staring at the white line—at all the features, physical form factors, and products created by a combination of the changes made in routing and Ethernet. If there are two changes in Ethernet, and two in routing, product marketing will create at least four, and probably eight, new features out of the combination of these two, across twenty or thirty product lines. Each of these features will likely be called something different and sold to solve completely different problems.

Staring at the white line is fun at first, then mesmerizing, then frightening… then finally it is just plain dull. But let’s talk about the terrifying bit, because it’s the scary stage that makes us all reject change out of fear for the future. And, trust me, a kid sitting in a Jeep with no doors, staring down at the white line while his uncle drives 60 miles per hour, is going to be frightened from time to time.

The point Nick is making is that we should back off the details and embrace the change. This is great advice—but how? It can feel like you’re going to run off the road if you don’t keep staring at the white line. The answer lies in putting your eyes someplace else—on the posts way out in the field. Ethernet still solves the same problems Bob Metcalfe designed it to solve, and it still solves those problems using a smallish set of solutions. Routing still solves the same problems it did when Dijkstra was mulling over toy problems to show off the processing power of a new computer some 60-odd years ago, and it still solves those problems using a small set of tools.

If you stop looking at the white line and start looking at the poles out in the distance, you’ll not only save your sanity, you’ll also permit yourself to start looking at the sociological and business impacts of new technology, including what matters and what doesn’t.

Two hundred years ago, if you wanted to get from Memphis to, say, Lake Providence, Louisiana, you could take a boat directly between the two. Today you would take a car, and the only paths between the two are pretty roundabout, “small country road” sorts of affairs. On the other hand, getting from Memphis, Tennessee, to Atlanta, Georgia, is now easy, while a couple of hundred years ago it would have been a big deal indeed. The sociological changes wrought by moving from rivers to roads are almost impossible to fathom. But you wouldn’t know that if you just stared at the white lines.

The Hedge 25: Building the Next Generation of Network Engineer

If there is one thing I notice when I look around at the IETF—and many other places where I meet a lot of network operations and engineering folks—it’s that we all seem to be getting a bit older. This should lead us to an obvious question—what are we doing about bringing up a new generation of network engineers? David Huberman joins Tom Ammon and me to discuss this interesting question. David is involved in a number of community-based efforts to train the next generation of network engineers, some of which he discusses in his excellent article at the APNIC blog.