BGP

BGP Hijacks: Two more papers consider the problem

The security of the global Default Free Zone (DFZ) has been a topic of much debate and concern for the last twenty years (or more). Two recent papers have brought this issue to the surface once again—it is worth looking at what these two papers add to the mix of what is known, and what solutions might be available. The first of these—

Demchak, Chris, and Yuval Shavitt. 2018. “China’s Maxim – Leave No Access Point Unexploited: The Hidden Story of China Telecom’s BGP Hijacking.” Military Cyber Affairs 3 (1). https://doi.org/10.5038/2378-0789.3.1.1050.

—traces the impact of Chinese “state actor” effects on BGP routing in recent years.


Whether these are actual attacks or mistakes rooted in human error generally cannot be known, but the potential, at least, for serious damage to companies and institutions relying on the DFZ is hard to overestimate. This paper lays out the basic problem, then works through a number of BGP hijacks in recent years, showing how they misdirected traffic in ways that could have facilitated attacks, whether by mistake or intentionally. For instance, quoting from the paper—

  • Starting from February 2016 and for about 6 months, routes from Canada to Korean government sites were hijacked by China Telecom and routed through China.
  • On October 2016, traffic from several locations in the USA to a large Anglo-American bank headquarters in Milan, Italy was hijacked by China Telecom to China.
  • Traffic from Sweden and Norway to the Japanese network of a large American news organization was hijacked to China for about 6 weeks in April/May 2017.

What impact could such a traffic redirection have? If you can control the path of traffic while a TLS or SSL session is being set up, you can place your server in the middle as an observer. This can, in many situations, be avoided if DNSSEC is deployed to ensure the certificates used in setting up the TLS session are valid, but DNSSEC is not widely deployed, either. Another option is to simply gather encrypted traffic and either attempt to break the key, or use data analytics to understand what the flow is doing (a side channel attack).

What can be done about these kinds of problems? The “simplest”—and most naïve—answer is “let’s just secure BGP.” There are many, many problems with this solution. Some of them are highlighted in the second paper under review—

Bonaventure, Olivier. n.d. “A Survey among Network Operators on BGP Prefix Hijacking – Computer Communication Review.” Accessed November 3, 2018. https://ccronline.sigcomm.org/2018/ccr-january-2018/a-survey-among-network-operators-on-bgp-prefix-hijacking/.

—which illustrates the objections providers have to the many forms of BGP security that have been proposed to this point. The first is, of course, that it is expensive. The ROI of the systems proposed thus far is very low; the cost is high, and the benefit to the individual provider is rather low. There is both a race to perfection problem here and a tragedy of the commons problem. The race to perfection problem is this: we will not design, nor push for the deployment of, any system which does not “solve the problem entirely.” This has been the mantra behind BGPSEC, for instance. But not only is BGPSEC expensive—I would say to the point of being impossible to deploy—it is also not perfect.

The second problem in the ROI space is the tragedy of the commons. I cannot do much to prevent other people from misusing my routes. All I can really do is stop myself and my neighbors from misusing other people’s routes. What incentive do I have to try to make the routing in my neighborhood better? The hope that everyone else will do the same. Thus, the only way to maintain the commons of the DFZ is for everyone to work together for the common good. This is difficult. Worse than herding cats.

A second point—not well understood in the security world—is this: a core point of DFZ routing is that when you hand your reachability information to someone else, you lose control over that reachability information. There have been a number of proposals to “solve” this problem, but it is a basic fact that if you cannot control the path traffic takes through your network, then you have no control over the profitability of your network. This tension can be seen in the results of the survey above. People want security, but they do not want to release the information needed to make security happen. Both realities are perfectly rational!

Part of the problem with the “more strict,” and hence (considered) “more perfect” security mechanisms proposed is simply this: they are not quiet enough. They expose far too much information. Even systems designed to prevent information leakage ultimately leak too much.

So… what do real solutions on the ground look like?

One option is for everyone to encrypt all traffic, all the time. This is a point of debate, however, as it also damages the ability of providers to optimize their networks. One point where the plumbing allegory for networking breaks down is this: all bits of water are the same. Not all bits on the wire are the same.

Another option is to rely less on the DFZ. We already seem to be heading in this direction, if Geoff Huston and other researchers are right. Is this a good thing, or a bad one? It is hard to tell from this angle, but a lot of people think it is a bad thing.

Perhaps we should revisit some of the proposed BGP security solutions, reshaping some of them into something that is more realistic and deployable? Perhaps—but the community is going to need to let go of the “but it’s not perfect” line of thinking, and start developing some practical, deployable solutions that don’t leak so much information.

Finally, there is a solution Leslie Daigle and I have been tilting at for a couple of years now: finding a way to build a set of open source tools that will allow any operator or provider to quickly and cheaply build an internal system to check the routing information available in their neighborhood on the ‘net, and to mix local policy with that information to do some bare bones work to make their neighborhood a little cleaner. This is a lot harder than “just build some software” for various reasons; the work is often difficult—as Leslie says, it is largely a matter of herding cats, rather than inventing new things.

BGP and Suboptimal Route Reflection

One of the crucial points in understanding the operation of BGP is its reliance on the AS path to ensure all routes are loop-free. Within a single AS, however, there is no AS path. How, then, can you ensure the path through an AS is loop-free? The original plan was to fully mesh all the BGP speakers in the AS (a full mesh of iBGP speakers)—but building and maintaining a full mesh of iBGP speakers is difficult, so other solutions were quickly designed. The first of these was the BGP confederation, which allows a set of autonomous systems to look like a single AS from the outside. This solution, however, is also cumbersome, so… the route reflector (RR) was invented.

TL;DR

  • BGP RRs abstract information in a way that can cause suboptimal routing
  • To resolve this suboptimal routing, RRs advertise additional paths to their route reflector clients (RRCs)

 

The basic operation of an RR is fairly simple; a new attribute, the cluster list, is added to a route as it is reflected. The cluster list contains a list of the clusters the route has passed through, identified by the identifier of the route reflector that “heads” the cluster. If a route is advertised to an iBGP speaker with a cluster ID already on the cluster list, the route is ignored. In a sense, each cluster acts as a “sub-AS,” much like in a confederation, and the cluster ID acts as the sub-AS number. Loop-freeness is guaranteed by making certain the route is not advertised as reachable through the same RR cluster twice, much like eBGP loop-freeness is guaranteed by not advertising a route as reachable through the same AS twice.
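
This check is small enough to sketch directly. Here is a minimal illustration in Python—the names and route structure are invented for this example, not taken from any real BGP implementation—

```python
# Sketch of the cluster list loop check described above. The route is a
# plain dictionary; names and structure are illustrative only.

def reflect_route(route, local_cluster_id):
    """Reflect a route, appending the local cluster ID; return None if
    the route has already passed through this cluster (a loop)."""
    cluster_list = route.get("cluster_list", [])
    if local_cluster_id in cluster_list:
        # The route has already been through this cluster: ignore it,
        # just as eBGP ignores a route carrying the local AS in its path.
        return None
    reflected = dict(route)
    reflected["cluster_list"] = cluster_list + [local_cluster_id]
    return reflected

# A route that has already traversed cluster "10.0.0.1" is dropped there,
# but reflected (with the cluster list extended) anywhere else.
route = {"prefix": "2001:db8:3e8:100::/64", "cluster_list": ["10.0.0.1"]}
assert reflect_route(route, "10.0.0.1") is None
assert reflect_route(route, "10.0.0.2")["cluster_list"] == ["10.0.0.1", "10.0.0.2"]
```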

Route reflectors emulate the eBGP construction in one other way, as well—only the best path from each cluster’s perspective is advertised. This improves scaling dramatically; with a full mesh of iBGP speakers, every BGP speaker will learn about every path available to reach a destination. If there are ten eBGP peers that can reach 2001:db8:3e8:100::/64, for instance, then every BGP speaker in the AS will know about all ten paths. If route reflectors are used instead of a full mesh of iBGP speakers, the RR server will choose the best exit point from the local AS, and advertise only that one route to each of its clients. An important point to note here: the best path is chosen from the perspective of the RR.

For instance, in this diagram, the best path to 2001:db8:3e8:100::/64 for B is through A. Since B is the route reflector, it will advertise only the path through A to its two clients, which are C and E. Because of this, E will not know about the path through C, even though this is a valid path—and a better path, from E’s perspective.

This is a classic case of aggregation and information hiding; B is removing some available routes towards its route reflector clients, which allows the control plane to scale. According to the state/optimization/surface triad, a reduction in the amount of state should have some effect on the optimization of traffic flow through the network. In this case, E has a suboptimal route.

How can this problem be solved? The most obvious way is to add more information back into the control plane. The solutions offered for this problem vary in which information is added back, and how that information is carried.

The first proposal for solving the suboptimal routing problem was for the RR to send all of its routes to each of its clients. Normally, if a BGP speaker sends a route for a destination, and then follows with another update about this same destination, the receiving speaker will discard the first advertisement. This is called an implicit withdraw. RFC 7911 specifies a mechanism for a BGP speaker to carry more than one route for the same destination to a peer by including a path identifier with each route. The function of the path identifier is the same as the route distinguisher in a BGP VPN deployment—to distinguish two different routes that happen to share the same destination prefix. By setting a different path identifier on each route, the reflector can advertise all of its routes to its clients. The problem with this solution is that it “undoes” the scaling advantages conferred by the RR; every RRC receives every route. In large networks, the scaling and convergence speed differences can be dramatic.
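
To make the mechanism concrete, here is a rough sketch, assuming a plain dictionary stands in for the receiver’s Adj-RIB-In; everything here is illustrative rather than drawn from a real implementation—

```python
# Sketch: without ADD-PATH the table is keyed on prefix alone, so a
# second advertisement for the same prefix implicitly withdraws the
# first. With ADD-PATH (RFC 7911) the key is (prefix, path_id), so
# multiple paths for one prefix can coexist.

adj_rib_in = {}

def receive_update(prefix, path_id, attrs, addpath_enabled):
    key = (prefix, path_id) if addpath_enabled else prefix
    adj_rib_in[key] = attrs  # same key overwrites: the implicit withdraw

# Without ADD-PATH, the second update replaces the first...
receive_update("2001:db8:3e8:100::/64", None, {"nexthop": "A"}, False)
receive_update("2001:db8:3e8:100::/64", None, {"nexthop": "C"}, False)
assert len(adj_rib_in) == 1

# ...with ADD-PATH, both paths are retained.
adj_rib_in.clear()
receive_update("2001:db8:3e8:100::/64", 1, {"nexthop": "A"}, True)
receive_update("2001:db8:3e8:100::/64", 2, {"nexthop": "C"}, True)
assert len(adj_rib_in) == 2
```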

It seems better to send only part of the routes, but which part? An alternative proposal is for the RR to compute the BGP bestpath from the perspective of each of its RRCs, and to send only the best route from that RRC’s perspective to each RRC. There are several tradeoffs here, as well. First, the RR must calculate the IGP cost from the perspective of each of its RRCs in some way; this normally means running SPF from the perspective of each RRC. The RR must then run bestpath for each RRC and send the correct route to each one. Finally, the RR must keep track of which route it has sent to each RRC, so it can determine when it needs to resend a new route based on changes in the network topology—both for changes in BGP and IGP reachability. This adds a lot of complexity to the BGP RR implementation, and creates a rather deep interaction surface between the IGP and BGP. It also just happens to assume there is an underlying link-state IGP in operation—which may not always be true.
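
A toy sketch of this per-client computation may help; the routers and IGP costs here are invented to match the example above, where E’s best exit is through C even though the RR’s own best exit is through A—

```python
# Sketch of per-client bestpath selection. A real RR would need each
# client's IGP view (normally by running SPF per client); here the
# per-client IGP costs to each exit are given as plain data.

igp_cost = {
    # (client, exit router) -> IGP metric from that client to that exit
    ("E", "A"): 30, ("E", "C"): 10,
    ("C", "A"): 20, ("C", "C"): 0,
}

candidate_paths = [
    {"prefix": "2001:db8:3e8:100::/64", "exit": "A"},
    {"prefix": "2001:db8:3e8:100::/64", "exit": "C"},
]

def best_for_client(client, paths):
    # Assume the earlier tie-breakers (local preference, AS path
    # length, and so on) are equal, leaving the IGP metric to the
    # exit point as the deciding step.
    return min(paths, key=lambda p: igp_cost[(client, p["exit"])])

# E's best exit is C, even though the RR itself might prefer A.
assert best_for_client("E", candidate_paths)["exit"] == "C"
```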

Another option is for the RR to send the bestpath and the second-best path to each RRC. The assumption is that one of these two paths will be optimal, or close to optimal, for each RRC. This is the solution most implementations have opted for, as it seems to strike a good balance between adding more control plane information, implementation complexity, and solving the suboptimal routing problem. Most implementations will allow the operator to send as many “second-best” paths as they desire, which allows the operator to fine-tune optimization against state differently in diverse parts of their network.

BGP Security: A Gentle Reminder that Networking is Business

At NANOG on the Road (NotR) in September of 2018, I participated in a panel on BGP security—specifically the deployment of Route Origin Authentication (ROA), with some hints and overtones of path validation by carrying signatures in BGP updates (BGPsec). This is an area I have been working in for… 20 years? … at this point, so I have seen the argument develop across these years many times, and in many ways. What always strikes me about this discussion, whenever and wherever it is aired, is the clash between business realities and the desire for “someone to do something about routing security in the DFZ, already!” What also strikes me about these conversations is the number of times very fundamental concepts end up being explained to folks who are “new to the problem.”

TL;DR

  • BGP security is a business problem first, and a technology problem second
  • Signed information is only useful insofar as it is maintained
  • The cost of deployment must be lower than the return on that cost
  • Local policy will always override global policy—as it should
  • The fear of losing business is a stronger motivator than gaining new business

 

Part of the problem here is solutions considered “definitive and final” have been offered, the operator community has rejected them for many years, and yet these same solutions are put on the table year after year—like the perennial fruit cake made by someone’s great great aunt in the mists of Christmastime history that has been regifted so many times no-one really remembers where it came from, nor what sorts of fruit it actually contains.


The business reality, in terms of BGP security, is simple. To deploy some sort of check on the global routing table, a point must be reached where it costs more to not deploy it than to deploy it. This simple business reality is something network designers and architects beat their heads against every day. The solution can be the neatest solution in the world. It might even shop for the ingredients, mix the cookie dough in perfect proportions, bake the cookies, and then transport them to the proper location for the perfect amount of enjoyment from just the right people (insert Goldilocks here, perhaps). But none of this will ever matter if there is no financial upside, or if the financial risks are greater than the financial gains.

So, some hopefully helpful business realities.

Signed information is no more useful than unsigned information if it is not kept up to date. It is great to get everyone out in full force to build a cryptographically secured database of who owns what prefixes and AS numbers. It is wonderful to find a way to distribute that information throughout the ‘net so people can use it as another tool to determine whether or not to accept a route, or what weight to place on that route. People might even spend a bit of time building this database, just because they believe it is good for the community.
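
As an illustration of how such a database would be consulted, here is a minimal sketch of RFC 6811-style route origin validation; the ROA contents and AS numbers below are invented for the example—

```python
# Sketch of origin validation against a set of ROAs (RFC 6811). A route
# is valid if a covering ROA authorizes its origin AS at its prefix
# length, invalid if covered but not authorized, and not-found if no
# ROA covers it at all.
import ipaddress

roas = [
    # (prefix, max length, authorized origin AS) -- invented data
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def origin_state(prefix_str, origin_as):
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if prefix.version == roa_prefix.version and prefix.subnet_of(roa_prefix):
            covered = True
            if prefix.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

assert origin_state("192.0.2.0/24", 64500) == "valid"
assert origin_state("192.0.2.0/24", 64501) == "invalid"    # wrong origin
assert origin_state("198.51.100.0/24", 64500) == "not-found"
```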

The problem is not day one. It is day two, and then day two thousand. What motivation is there to keep the information in this database up-to-date? Unless there is some—and here I mean financial motivation—the database will lose its effectiveness over time. At some point, when the error rate reaches some number (around 30% seems to be about right), people will simply stop trusting it. When the error rate gets high enough, the tools will stop being used, and the ‘net will revert to its old self.

The cost of deployment and operation must at least be close to the gains from deployment. At this point, there is no financial gain any one company can see from deploying anything in the realm of BGP security, so the cost of deployment and maintenance must be close to zero. While there are folks (including me) trying to reduce the cost as close to zero as possible, we are not there yet, and I do not know if we will ever be there.

Local policy will always override global policy. The literature of BGP security is replete with statements like: “if the route meets these criteria, the BGP speaker MUST drop it.” Good luck. The Internet is a confederation of independent companies, each of which runs their network in a way that they believe will make the most money at the lowest cost possible. One of the ways this happens is that people tune their local policies to charge their adjacent autonomous systems as much money as possible, while reducing their OPEX and CAPEX as much as possible. There will always be money to be made in the grey space around local tuning of policies for optimal traffic flow. Hence local policy will always win over what any database anyplace might say.

To give a specific example: assume you run a network, and you have peered with another operator in multiple places for many years. You know the other operator’s routes well, as they have not changed for many years. One day, you receive a route from this operator in which everything looks correct, but the route is not contained in this outside database of “correct routes.”

Noting this is a route you have received from this very same operator for many years, are you going to drop it because it is not in some database, or are you going to use it given your past standing and relationship?

The fear of losing business is always the strongest motivator. Which leads to the next issue. It does not matter how wonderful your network is if you have a high customer churn rate. The most certain way to have a high churn rate is to place your customers’ experience in the hands of someone else. Such as a communally managed database, perhaps. This is another reason why local policy will always win over remote policy—the local provider is handed checks by customers, not by the community at large. They have more incentive to keep their customers happy than to keep the community happy.

You cannot secure things you do not tell anyone about. This final one is probably not as obvious as the others, but it is just as important as any other item on this list. There are many backdoor arrangements and sealed contracts in the provider world. People transit traffic without telling anyone else that traffic is being transited. Some people are customers of others only in the event of a massive failure someplace else in their network, but do not want anyone to know about this.

All of these arrangements are perfectly legitimate and legal in their respective jurisdictions. But you cannot secure something that no-one knows about. The more information that is hidden in a system, the harder it is to validate the information that exists is correct.

The bottom line is this: BGP security, like most networking problems, is not a technology problem. BGP security is, at its heart, a business problem. The lesson here is not just for security, but for network engineering in general. Business is the bottom line, not technology.

Is BGP Good Enough?

In a recent podcast, Ivan and Dinesh ask why there is a lot of interest in running link state protocols on data center fabrics. They begin with this point: if you have less than a few hundred switches, it really doesn’t matter what routing protocol you run on your data center fabric. Beyond this, there do not seem to be any problems to be solved that BGP cannot solve, so… why bother with a link state protocol? After all, BGP is much simpler than any link state protocol, and we should always solve all our problems with the simplest protocol possible.

TL;DR

  • BGP is both simple and complex, depending on your perspective
  • BGP is sometimes too much, and sometimes too little for data center fabrics
  • We are in danger of treating every problem as a nail, because we have decided BGP is the ultimate hammer

 
Will these contentions stand up to a rigorous challenge?

I will begin with the last contention first—BGP is simpler than any link state protocol. Consider the core protocol semantics of BGP and a link state protocol. In a link state protocol, every network device must have a synchronized copy of the Link State Database (LSDB). This is more challenging than BGP’s requirement, which is very distance-vector like; in BGP you only care whether any pair of speakers has enough information to form loop-free paths through the network. Topology information is (largely) stripped out, metrics are simple, and shared information is minimized. It certainly seems, on this score, like BGP is simpler.
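
The loop prevention rule itself is almost trivially small—a sketch, with illustrative AS numbers—

```python
# Sketch of BGP's path-vector loop prevention: a speaker rejects any
# route whose AS path already contains its own AS number. No database
# synchronization is required, unlike a link state protocol's LSDB.

def accept_ebgp_route(as_path, local_as):
    return local_as not in as_path

assert accept_ebgp_route([64500, 64496], 64501)      # loop-free: accept
assert not accept_ebgp_route([64501, 64500], 64501)  # our AS is in the path
```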

Before declaring a winner, however, this simplification needs to be considered in light of the State/Optimization/Surface triad.

When you remove state, you are always also reducing optimization in some way. What do you lose when comparing BGP to a link state protocol? You lose your view of the entire topology—there is no LSDB. Perhaps you do not think an LSDB in a data center fabric is all that important; the topology is somewhat fixed, and you probably are not going to need traffic engineering if the network is wired with enough bandwidth to solve all problems. Building a network with tons of bandwidth, however, is not always economically feasible. The more likely reality is there is a balance between various forms of quality of service, including traffic engineering, and throwing bandwidth at the problem. Where that balance is will probably vary, but to always assume you can throw bandwidth at the problem is naive.

There is another cost to this simplification, as well. Complexity is inserted into a network to solve hard problems. The most common hard problem that complexity is used to solve is guarding against environmental instability. Again, a data center fabric should be stable; the topology should never change, reachability should never change, etc. We all know this is simply not true, however, or we would be running static routes in all of our data center fabrics. So why aren’t we?

Because data center fabrics, like any other network, do change. And when they do change, you want them to converge somewhat quickly. Is this not what all those ECMP parallel paths are for? In some situations, yes. In others, those ECMP paths actually harm BGP convergence speed. A specific instance: move an IP address from one ToR on your fabric to another, or from one virtual machine to another. In this situation, those ECMP paths are not working for you, they are working against you—this is, in fact, one of the worst BGP convergence scenarios you can face. IS-IS, specifically, will converge much faster than BGP in the case of detaching a leaf node from the graph and reattaching it someplace else.

Complexity can be seen from another perspective, as well. When considering BGP in the data center, we are considering one small slice of the capabilities of the protocol.

In the center of the illustration above there is a small grey circle representing the core features of BGP. The sections of the ten-sided figure around it represent the feature sets that have been added to BGP over the years to support the many places it is used. When we look at BGP for one specific use case, we see the one “slice,” the core functionality, and what we are building on top. The reality of BGP, from a code base and complexity perspective, is the total sum of all the different features added across the years to support every conceivable use case.

Essentially, BGP has become not only a nail, but every kind of nail, including framing nails, brads, finish nails, roofing nails, and all the other kinds. It is worse than this, though. BGP has also become the universal glue, the universal screw, the universal hook-and-loop fastener, the universal building block, etc.

BGP is not just the hammer with which we turn every problem into a nail, it is a universal hammer/driver/glue gun that is also the universal nail/screw/glue.

When you run BGP on your data center fabric, you are not just running the part you want to run. You are running all of it. The L3VPN part. The eVPN part. The intra-AS parts. The inter-AS parts. All of it. The apparent complexity may appear to be low, because you are only looking at one small slice of the protocol. But the real complexity, under the covers, where attack and interaction surfaces live, is very complex. In fact, by any reasonable measure, BGP might have the simplest set of core functions, but it is the most complicated routing protocol in existence.

In other words, complexity is sometimes a matter of perspective. In this perspective, IS-IS is much simpler. Note—don’t confuse our understanding of a thing with its complexity. Many people consider link state protocols more complex simply because they don’t understand them as well as BGP.

Let me give you an example of the problems you run into when you think about the complexity of BGP—problems you do not hear about, but exist in the real world. BGP uses TCP for transport. So do many applications. When multiple TCP streams interact, complex problems can result, such as the global synchronization of TCP streams. Of course we can solve this with some cool QoS, including WRED. But why do you want your application and control plane traffic interacting in this way in the first place? Maybe it is simpler just to separate the two?
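
As an aside, the RED/WRED drop curve is simple enough to sketch; the thresholds and maximum drop probability here are illustrative values, not recommendations—

```python
# Sketch of the RED/WRED drop curve: below the minimum threshold nothing
# is dropped; between the thresholds the drop probability ramps linearly
# up to a maximum; above the maximum threshold everything is dropped.
# Random early drops desynchronize TCP flows, avoiding the global
# synchronization tail drop can cause.

def red_drop_probability(avg_queue_depth, min_th=20, max_th=40, max_p=0.1):
    if avg_queue_depth < min_th:
        return 0.0
    if avg_queue_depth >= max_th:
        return 1.0
    return max_p * (avg_queue_depth - min_th) / (max_th - min_th)

assert red_drop_probability(10) == 0.0
assert red_drop_probability(30) == 0.05
assert red_drop_probability(45) == 1.0
```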

Is BGP really simpler? From one perspective, it is simpler. From another, however, it is more complex.

Is BGP “good enough?” For some applications, it is. For others, however, it might not be.

You should decide what to run on your network based on application and business drivers, rather than “because it is good enough.” Which leads me back to where I often end up: if you haven’t found the trade-offs, you haven’t looked hard enough.

Research: Facebook’s Edge Fabric

The Internet has changed dramatically over the last ten years; more than 70% of the traffic over the Internet is now served by ten Autonomous Systems (ASes), causing the physical topology of the Internet to be reshaped into more of a hub-and-spoke design, rather than the more familiar scale-free design (I discussed this in a post over at CircleID in the recent past, and others have discussed this as well). While this reshaping might be seen as a success in delivering video content to most Internet users by shortening the delivery route between the server and the user, the authors of the paper in review today argue this is not enough.

Brandon Schlinker, Hyojeong Kim, Timothy Cui, Ethan Katz-Bassett, Harsha V. Madhyastha, Italo Cunha, James Quinn, Saif Hasan, Petr Lapukhov, and Hongyi Zeng. 2017. Engineering Egress with Edge Fabric: Steering Oceans of Content to the World. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication (SIGCOMM ’17). ACM, New York, NY, USA, 418-431. DOI: https://doi.org/10.1145/3098822.3098853

Why is this not enough? The authors point to two problems in the routing protocol tying the Internet together: BGP. First, they state that BGP is not capacity-aware. It is important to remember that BGP is focused on policy, rather than capacity; the authors of this paper state they have found many instances where the preferred path, based on BGP policy, is not able to support the capacity required to deliver video services. Second, they state that BGP is not performance-aware. The selection criteria used by BGP, such as MED and Local Pref, do not correlate with performance.

Based on these points, the authors argue traffic needs to be routed more dynamically, in response to capacity and performance, to optimize efficiency. The paper presents the system Facebook uses to perform this dynamic routing, which they call Edge Fabric. As I am more interested in what this study reveals about the operation of the Internet than the solution Facebook has proposed to the problem, I will focus on the problem side of this paper. Readers are invited to examine the entire paper at the link above, or here, to see how Facebook is going about solving this problem.

The paper begins by examining the Facebook edge; as edges go, Facebook’s is fairly standard for a hyperscale provider. Facebook deploys Points of Presence, which are essentially private Content Delivery Network (CDN) compute and caching pushed to the edge, and hence as close to users as possible. To provide connectivity between these CDN nodes and their primary data center fabrics, Facebook uses transit provided through peering across the public ‘net. The problem Facebook is trying to solve is not last mile connectivity, but rather the connectivity between these CDN nodes and their data center fabrics.

The authors begin with the observation that, if left to its own decision process, BGP will evenly distribute traffic across all available peers, even though each peer actually has a different level of congestion. This is not a surprising observation. In fact, there was at least one last mile provider that chose among its upstreams based on congestion in near real time. This capability was similar to the concept behind Performance Based Routing (PfR), developed by Cisco, which was then folded into DMVPN, and thus became part of the value play of most Software Defined Wide Area Network (SD-WAN) solutions.

The authors then note that BGP relies on rough proxies to indicate better performing paths. For instance, the shortest AS path should, in theory, be the shortest physical or logical path as well, and hence the path with the lowest end-to-end delay. In the same way, local preference is normally set to prefer peer connections over upstream or transit connections. This should mean traffic will take a shorter path through a peer connected to the destination network, rather than a path up through a transit provider, then back down to the connected network. This should result in traffic passing through more lightly loaded last mile provider networks, rather than more heavily used transit provider networks. The authors present research showing these policies can often harm performance, rather than enhancing it; sometimes it is better to push traffic to a transit peer, rather than to a directly connected peer.
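
A toy sketch of these tie-breakers makes the point; every attribute and measurement value here is invented—

```python
# Sketch of the first BGP tie-breakers: higher local preference wins,
# then the shorter AS path. Neither attribute measures delay or loss,
# which is the authors' point.

paths = [
    # A peer path with high local-pref that is, in this example, congested.
    {"via": "peer",    "local_pref": 200, "as_path_len": 1, "measured_ms": 80},
    # A transit path BGP ranks lower, though it currently performs better.
    {"via": "transit", "local_pref": 100, "as_path_len": 2, "measured_ms": 20},
]

def bgp_best(paths):
    # Performance (measured_ms) never enters the decision.
    return max(paths, key=lambda p: (p["local_pref"], -p["as_path_len"]))

assert bgp_best(paths)["via"] == "peer"  # BGP picks the slower path here
```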

How often are destination prefixes constrained by BGP into a lower performing path? The authors provide this illustration—

The percentage of impacted destination prefixes is, by Facebook’s measure, high. But what kind of solution might be used to solve this problem?

Note that no solution that uses static metrics for routing traffic will be able to solve these problems. What is required, if you want to solve these problems, is to measure the performance of specific paths to given destinations in near real time, and somehow adjust routing to take advantage of higher performance paths regardless of what the routing protocol metrics indicate. In other words, the routing protocol needs to find the set of possible loop-free paths, and some other system must choose which path among this set should be used to forward traffic. This is a classic example of the argument for layered control planes (such as this one).

Facebook’s solution to this problem is to overlay an SDN’ish solution on top of BGP. Their solution does not involve tunneling, like many SD-WAN solutions. Rather, they adjust the BGP metrics in near real time based on current measured congestion and performance. The paper goes on to describe their system, which only uses standard BGP metrics to steer traffic onto higher performance paths through the ‘net.
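
A minimal sketch of this kind of override, loosely in the spirit of what the paper describes—the names, measurements, and local preference value here are all invented—

```python
# Sketch of a layered control plane: BGP supplies the loop-free
# candidate next hops; a measurement layer picks the best performer and
# injects an override route with a local preference high enough to win
# the normal decision process, with no tunneling required.

measured_rtt_ms = {"peer-A": 35.0, "transit-B": 12.0}  # next hop -> RTT

def build_override(prefix, candidates, override_local_pref=1000):
    best = min(candidates, key=candidates.get)
    return {"prefix": prefix, "nexthop": best, "local_pref": override_local_pref}

print(build_override("203.0.113.0/24", measured_rtt_ms))
# {'prefix': '203.0.113.0/24', 'nexthop': 'transit-B', 'local_pref': 1000}
```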

A few items of note from this research.

First, note that many of the policies set up by providers are not purely shorthand for performance; they actually represent a price/performance tradeoff. For instance, the use of local preference to send traffic to peers, rather than transits, is most often an economic decision. Providers, particularly edge providers, normally configure settlement-free peering with peers, and pay for traffic sent to an upstream transit provider. Directing more traffic at an upstream, rather than a peer, can have a significant financial impact. Hyperscalers, like Facebook, don’t often see these financial impacts, as they are purchasing connectivity from the provider. Over time, forcing providers to use more expensive links for performance reasons could increase cost, but in this situation the costs are not immediately felt, so the cost/performance feedback loop is somewhat muted.

Second, there is a fair amount of additional complexity in pulling this bit of performance out of the network. While it is sometimes worth adding complexity to increase performance, this is not always true. It likely is for many hyperscalers, whose business relies largely on engagement. Given there is a directly provable link between engagement and speed, every bit of performance makes a large difference. But this is simply not true of all networks.

Third, you can replicate this kind of performance-based routing in your network by creating a measurement system. You can then use the communities providers allow their customers to set to shape the direction of traffic flows and optimize traffic performance. This might not work in all cases, but it might give you a fair start on a similar system—if this kind of wrestling match with performance is valuable in your environment.

Another option might be to use an SD-WAN solution, which should have the measurement and traffic shaping capabilities “built in.”

Fourth, there is a real possibility of building a system that fails in the face of positive feedback loops, or that reduces performance in the face of negative feedback loops. Hysteresis, the tendency to cause a performance problem in the process of reacting to a performance problem, must be carefully considered when designing such a system, as well.

The Bottom Line

Statically defined metrics in dynamic control planes cannot provide optimal performance in near real time. Building a system that can involves a good bit of additional complexity—complexity that is often best handled in a layered control plane.

Are these kinds of tools suitable for a network other than Facebook? In the right situation, the answer is clearly yes. But heed the tradeoffs. If you haven’t found the tradeoff, you haven’t looked hard enough.