Latency is a big deal for many modern applications, particularly in the realm of machine learning applied to problems like determining if someone standing at your door is a delivery person or a … robber out to grab all your smart toasters and big-screen television. The problem is that networks, particularly in the last mile, don’t deal with latency very well. In fact, most of the network speeds and feeds available anywhere outside urban areas kind of stink. The example given by Bagchi et al. is this—

A fixed video sensor may generate 6Mbps of video 24/7, thus producing nearly 2TB of data per month—an amount unsustainable according to business practices for consumer connections, for example, Comcast’s data cap is at 1TB/month and Verizon Wireless throttles traffic over 26GB/month. For example, with DOCSIS 3.0, a widely deployed cable Internet technology, most U.S.-based cable systems deployed today support a maximum of 81Mbps aggregated over 500 homes—just 0.16Mbps per home.

Bagchi, Saurabh, Muhammad-Bilal Siddiqui, Paul Wood, and Heng Zhang. “Dependability in Edge Computing.” Communications of the ACM 63, no. 1 (December 2019): 58–66. https://doi.org/10.1145/3362068.
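The arithmetic in the quote is easy to sanity-check. Here is a quick back-of-the-envelope calculation in Python, using only the figures given above (no new measurements):

```python
# Back-of-the-envelope check on the numbers quoted above.
SECONDS_PER_MONTH = 30 * 24 * 3600            # roughly 2.59 million seconds

video_mbps = 6                                 # fixed video sensor, running 24/7
monthly_tb = video_mbps * SECONDS_PER_MONTH / 8 / 1_000_000   # megabits -> terabytes
print(f"6 Mbps around the clock: about {monthly_tb:.2f} TB/month")   # ~1.94 TB

docsis_aggregate_mbps = 81                     # shared across a 500-home service group
homes = 500
print(f"Per-home share: {docsis_aggregate_mbps / homes:.2f} Mbps")   # ~0.16 Mbps
```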

The authors claim a lot of the problem here is just that edge networks have not been built out, but there is a reason these edge networks aren’t built out large enough to support pulling this kind of data load into a centrally located data center: the network isn’t free.

This is something so obvious to network engineers that it almost slips under our line of thinking unnoticed—except, of course, for the constant drive to make the network cost less money. For application developers, however, the network is just a virtual circuit data rides over… All the complexity of pulling fiber out to buildings or curbs, all the work of physically connecting things to the fiber, all the work of figuring out how to make routing scale, it’s all just abstracted away in a single QUIC or TCP session.

If you can’t bring the data to the compute, which is typically contained in some large-scale data center, then you have to bring the computing power to the data. The complexity of bringing the computing power to the data is that applications, especially modern microservices-based applications optimized for large-scale, low-latency data center fabrics, just aren’t written to be broken into components and spread all over the world.

Let’s consider the case of the smart toaster—the case used in the paper at hand. Imagine a toaster with little cameras to sense the toastiness of the bread, electronically controlled heating elements, an electronically controlled toast lifter, and some sort of really nice “bread storage and moving” system that can pull bread out of a reservoir, load it into the toaster, and make it all work. Imagine being able to get up in the morning to a fresh cup of coffee and a nice bagel, fresh and hot, just as you hit the kitchen…

But now let’s look at the complexity required to do such a thing. We must have local processing power and storage, along with some communication protocol that periodically uploads and downloads data to improve the toasting process. You have to have some sort of handling system that can learn about new kinds of bread and adapt to them automatically—this is going to require data, as well. You have to have a bread reservoir that will keep the bread fresh for a few days so you don’t have to refill it constantly.

Will you save maybe five minutes every morning? Maybe.

Will you spend a lot of time getting this whole thing up and running? Definitely.

What will the MTBF be, precisely? What about the MTTR?

All to save five minutes in the morning? Of course the authors chose a trivial—perhaps even silly—example to use, just to illustrate the kinds of problems IoT devices combined with edge computing are going to encounter. But still … in five years you’re going to see advertisements for this smart toaster out there. There are toasters that already have a few of these features, and refrigerators that go far beyond this.

Sometimes we have to remember the cost of the network is telling us something—just because we can do a thing doesn’t mean we should. If the cost of the network forces us to consider the tradeoffs, that’s a good thing.

And remember that if your toaster makes your bread at the same time every morning, you have to adjust to the machine’s schedule, rather than the machine adjusting to yours…

I’s fnny, bt yu cn prbbly rd ths evn thgh evry wrd s mssng t lst ne lttr. This is because every effective language—or rather every communication system—carries enough information to reconstruct the original meaning even when bits are dropped. Over-the-wire protocols, like TCP, are no different—the protocol must carry enough information about the conversation (flow data) and the data being carried (metadata) to understand when something is wrong and error out or ask for a retransmission. These things, however, are a form of data exhaust; much like you can infer the tone, direction, and sometimes even the content of a conversation just by watching the expressions, actions, and occasional words spoken by one of the participants, you can sometimes infer a lot about a conversation between two applications by looking at the amount and timing of data crossing the wire.

The paper under review today, Off-Path TCP Exploit, uses cleverly designed streams of packets and observations about the timing of packets in a TCP stream to construct an off-path TCP injection attack on wireless networks. Understanding the attack requires understanding the interaction between the collision avoidance used in wireless systems and TCP’s reaction to packets with a sequence number outside the current window.

Beginning with the TCP end of things—if a TCP segment is received with a sequence number falling outside the current window, the receiver will send a duplicate of the last ACK it sent back to the transmitter. From the wireless network side of things, only one talker can use the channel at a time. If a device begins transmitting a packet, and then hears another packet inbound, it should stop transmitting and wait some random amount of time before trying to transmit again. These two things can be combined to guess at the boundaries of the current window.

Assume an attacker sends a packet to a victim which must be answered, such as a probe. Before the victim can answer, the attacker then sends a TCP segment which includes a sequence number the attacker thinks might be within the victim’s receive window, sourcing the packet from the IP address of some existing TCP session. Unless the IP address of an existing session is used in this step, the victim will not answer the TCP segment. Because the attacker is using a spoofed source address, it will not receive any ACK sent in response to this segment, so it must find some other way to infer whether the victim sent one.

How can the attacker infer this? After sending this TCP segment, the attacker sends another probe of some kind to the victim which must be answered. If the TCP segment’s sequence number is outside the current window, the victim will attempt to send a copy of its previous ACK. If the attacker times things correctly, the victim will attempt to send this duplicate ACK while the attacker is transmitting the second probe packet; the two packets will collide, causing the victim to back off, slowing its answer to the probe down a bit from the attacker’s perspective.

If the answer to the second probe is slower than the answer to the first probe, the attacker can infer the sequence number of the spoofed TCP segment is outside the current window. If the two probes are answered in close to the same time, the attacker can infer the sequence number of the spoofed TCP segment is within the current window.
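To make the timing logic concrete, here is a small toy model in Python. It is not the researchers’ attack code, and the timing constants are invented; it only illustrates why a collision-induced backoff lets the attacker tell an in-window guess from an out-of-window one:

```python
import random

# Toy model of the timing side channel described above. The constants are
# invented; this only illustrates the inference, it is not attack tooling.
RTT = 0.020       # nominal time for the victim to answer a probe (seconds)
BACKOFF = 0.008   # extra delay when the victim's duplicate ACK collides
JITTER = 0.002    # ordinary variation in reply timing

def reply_time(guess_in_window: bool) -> float:
    """Time for the victim's answer to the second probe to arrive."""
    base = RTT + random.uniform(0, JITTER)
    # An out-of-window spoofed segment triggers a duplicate ACK, which collides
    # with the attacker's second probe and forces a wireless backoff.
    return base if guess_in_window else base + BACKOFF

def infer_window(guess_in_window: bool) -> str:
    baseline = RTT + random.uniform(0, JITTER)   # probe 1: no spoofed segment in flight
    timed = reply_time(guess_in_window)          # probe 2: sent right after the spoofed segment
    return "in window" if timed - baseline < BACKOFF / 2 else "outside window"

print(infer_window(True), infer_window(False))   # usually: "in window outside window"
```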

Combining this information with several other well-known aspects of widely deployed TCP stacks, the researchers found they could reliably inject data into a TCP stream from off path. While these injections would still need to be shaped in some way to impact the operation of the application sending data over the TCP stream, the ability to inject TCP segments in this way is “halfway there” for the attacker.

There probably never will be a truly secure communication channel invented that does not involve encryption—the data required to support flow control and manage errors will always provide enough information to an attacker to find some clever way to break into the channel.

This last week I was a guest on the TechSequences podcast with Leslie and Alexa discussing the centralization of the routed infrastructure in the ‘net. When that episode posts, I’ll cross post it here (but, of course, you should really just subscribe to their podcast, as they always have interesting guests—I’ll have Leslie and Alexa on the Hedge at some point, as well). The topic is related to this post on CircleID about the death of transit, which was a reaction to Geoff Huston’s article on the death of transit some time before.

All that to say… while reading through some research papers this week, I ran into a recent (2018) paper where Carisimo et al. try out different ways of measuring which autonomous systems belong to the “core” of the ‘net. They went about this by taking a set of AS’ “everyone” acknowledges to be “part of the core,” and then trying to find some measurement that successfully describes something all of them have in common.

The result is the k-metric, which measures the connectivity of an AS’ peers. If an AS has peers that are just as connected as it is, the k-metric is high; otherwise, the k-metric is low. It does make sense this measure would be able to pick out “core” AS’, because it selects the set of most highly interconnected AS’ in the ‘net.
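If you want to experiment with the idea, the classic k-core index (a close cousin of the metric described here) is easy to compute on an AS-level graph with networkx. The adjacencies below are made up for illustration, not real AS relationships:

```python
import networkx as nx

# Toy AS-level graph; the adjacencies are invented, not real AS relationships.
g = nx.Graph()
g.add_edges_from([
    ("AS1", "AS2"), ("AS1", "AS3"), ("AS2", "AS3"),   # densely meshed "core"
    ("AS1", "AS4"), ("AS2", "AS4"), ("AS3", "AS4"),
    ("AS4", "AS5"), ("AS5", "AS6"),                   # sparsely attached edge
])

# core_number() assigns each node its k-core index: the largest k such that the
# node survives in a subgraph where every remaining node has at least k neighbors.
for asn, k in sorted(nx.core_number(g).items(), key=lambda kv: -kv[1]):
    print(asn, k)   # AS1-AS4 score 3; AS5 and AS6 score 1
```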

Once they established the k-metric is a good way to determine which AS’ are in the core of the ‘net, they calculated the membership of the core over time. Their graph is below.

The way the chart is laid out is a little difficult to see, but the green is transit providers and the blue is content providers. Sure enough, the percentage of content providers in the core of the ‘net, in terms of sheer connectivity, has increased over time. These same content providers now account for some 80% (or more?) of the traffic on the ‘net. All this means is that the centralization of content is visible in objective measurements, so it’s a real thing. Content providers are currently “only” 20% of the core, but given their traffic levels this is a much bigger deal than it seems. There are many parts of the world where the population or access density is not high enough for large content providers to justify building out so they touch the last mile. If communities build out last-mile optical networks, however, it’s likely these large content providers will consume ever-larger percentages of the “core” AS’.

QUIC is a relatively new data transport protocol developed by Google, and currently in line to become the default transport for the upcoming HTTP/3 standard. Because of this, it behooves every network engineer to understand a little about this protocol, how it operates, and what impact it will have on the network. We did record a History of Networking episode on QUIC, if you want some background.

In a recent Communications of the ACM article, a group of researchers (Kakhki et al.) used a modified implementation of QUIC to measure its performance under different network conditions, directly comparing it to TCP’s performance under the same conditions. Since the current implementations of QUIC use the same congestion control as TCP—Cubic—the only differences in performance should be code tuning in estimating the round-trip time (RTT) for congestion control, QUIC’s ability to form a session in a single RTT, and QUIC’s ability to carry multiple streams in a single connection. The researchers asked two questions in this paper: how does QUIC interact with TCP flows on the same network, and does QUIC perform better than TCP in all situations, or only some?
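To put the single-RTT session setup in perspective, here is a rough time-to-first-byte comparison on an assumed 100 ms path (TLS 1.3 over TCP, ignoring processing time; this is an illustration, not a number from the paper):

```python
# Rough time-to-first-request-byte on a long path; illustration only.
rtt = 0.100  # assumed path round-trip time, seconds

tcp_tls13 = 2 * rtt   # one RTT for the TCP handshake, one for the TLS 1.3 handshake
quic_1rtt = 1 * rtt   # QUIC combines transport and crypto setup in a single RTT
quic_0rtt = 0 * rtt   # a resumed QUIC session can send application data immediately

print(f"TCP + TLS 1.3:        {tcp_tls13 * 1000:.0f} ms")
print(f"QUIC (1-RTT setup):   {quic_1rtt * 1000:.0f} ms")
print(f"QUIC (0-RTT resume):  {quic_0rtt * 1000:.0f} ms")
```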

To answer the first question, the authors tried running QUIC and TCP over the same network in different configurations, including single QUIC and TCP sessions, a single QUIC session with multiple TCP sessions, etc. In each case, they discovered that QUIC consumed about 50% of the bandwidth; if there were multiple TCP sessions, they would be starved for bandwidth when running in parallel with the QUIC session. For network folk, this means an application implemented using QUIC could well cause performance issues for other applications on the network—something to be aware of. It might be best, if possible, to push QUIC-based applications onto a separate virtual or physical topology with strict bandwidth controls if they cause other applications to perform poorly.
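To see what that kind of unfairness means in practice, here is a quick calculation applying the roughly 50% figure to a hypothetical 100 Mbps bottleneck shared with a varying number of TCP flows (the link speed is invented; the ratio is the one reported in the paper):

```python
# Hypothetical bottleneck shared by one QUIC session and several TCP flows,
# using the ~50% share reported in the paper.
link_mbps = 100
quic_mbps = 0.5 * link_mbps

for tcp_flows in (1, 2, 4, 8):
    per_tcp = (link_mbps - quic_mbps) / tcp_flows
    print(f"{tcp_flows} TCP flow(s) alongside QUIC: about {per_tcp:.1f} Mbps each")
```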

Does QUIC’s ability to consume more bandwidth mean applications developed on top of it will perform better? According to the research in this paper, the answer is how many balloons fit in a bag? In other words, it all depends. QUIC does perform better when its multi-stream capability comes into play and the network is stable—for instance, when transferring variably sized objects (files) across a network with stable jitter and delay. In situations with high jitter or delay, however, TCP consistently outperforms QUIC.

TCP outperforming QUIC is a bit of a surprise in any situation; how is this possible? The researchers used information from their additional instrumentation to discover QUIC does not tolerate out-of-order packet delivery very well because of its fast packet retransmission implementation. Presumably, it should be possible to modify these parameters somewhat to make QUIC perform better.

This would still leave the second problem the researchers found with QUIC’s performance—a large difference between its performance on desktop and mobile platforms. The difference between these two comes down to where QUIC is implemented. Desktop devices (and/or servers) often have smart NICs which implement TCP in the ASIC to speed packet processing up. QUIC, because it runs in user space, only runs on the main processor (it seems hard to see how a user space application could run on a NIC—it would probably require a specialized card of some type, but I’ll have to think about this more). The result is that QUIC’s performance depends heavily on the speed of the processor. Since mobile devices have much slower processors, QUIC performs much more slowly on mobile devices.

QUIC is an interesting new transport protocol—one everyone involved in designing or operating networks is eventually going to encounter. This paper gives good insight into the “soul” of this new protocol.

When you are building a data center fabric, should you run a control plane all the way to the host? This is a question I encounter more often as operators deploy eVPN-based spine-and-leaf fabrics in their data centers (for those who are actually deploying scale-out spine-and-leaf—I see a lot of people deploying hybrid sorts of networks designed as “mini-hierarchical” designs and just calling them spine-and-leaf fabrics, but this is probably a topic for another day). Three reasons are generally given for deploying the control plane all the way to the hosts attached to the fabric: faster down detection, load sharing, and traffic engineering. Let’s consider each of these in turn.

Faster Down Detection. There’s no simple way for ToR switches to determine when the connection to a host has failed, whether the host is single or dual-homed. Somehow the set of routes reachable through the host must be related to the interface state, or some underlying fast hello state (such as BFD), so that if a link fails the ToR knows to pull the correct set of routes from the routing table. It’s simpler to just let the host itself advertise the correct reachability information; when the link fails, the routing session will fail, and the correct routes will automatically be withdrawn.

Load Sharing. While this only applies to hosts with two connections into the fabric (dual-homed hosts), this is still an important use case. If a dual-homed host only has two default routes to work from, the host is blind to network conditions and can only load share equally across the available paths. Equal load sharing, however, may not be ideal in all situations. If the host is running routing, it is possible to inject more intelligence into the load sharing between the upstream links (there is a short sketch of what this might look like just after the third item below).

Traffic Engineering. Or traffic shaping, steering, etc. In some cases, traffic engineering requires injecting a label or outer header onto the packet as it enters the fabric. In others, more specific routes might be sent along one path and not another to draw specific kinds of traffic through a more optimal route in the fabric. This kind of traffic engineering is only possible if the control plane is running on the host.
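Here is the load-sharing sketch promised above: a hedged illustration in Python of what a dual-homed host running routing could do with unequal-cost information. The weights and flow tuple are invented for the example; a real implementation would take them from the routing protocol and the actual packet headers:

```python
import hashlib

# Hedged sketch of host-based, unequal-cost load sharing for a dual-homed host.
# The weights are invented; a real host would derive them from routing or
# telemetry information learned over its two uplinks.
uplinks = {"tor-a": 0.7, "tor-b": 0.3}   # e.g. tor-b is congested or degraded

def pick_uplink(flow_tuple: tuple) -> str:
    """Hash the flow so all its packets stay on one path, then map the hash
    into weighted ranges so traffic splits roughly 70/30 rather than 50/50."""
    h = int(hashlib.sha256(repr(flow_tuple).encode()).hexdigest(), 16)
    point = (h % 1000) / 1000.0
    cumulative = 0.0
    for link, weight in uplinks.items():
        cumulative += weight
        if point < cumulative:
            return link
    return link   # fall through on floating-point rounding

flow = ("10.1.1.10", "192.0.2.7", 6, 49152, 443)   # src, dst, proto, sport, dport
print(pick_uplink(flow))
```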

All these reasons are well and good, but they all assume something that should be of great interest to the network designer: which control plane are we talking about?

Most DC fabric designs I see today assume there is a single control plane running on the fabric—generally this single control plane is BGP, and it’s being used both to provide basic IP connectivity through the fabric (the infrastructure underlay control plane) and to provide tunneled overlay reachability (the infrastructure overlay control plane—generally eVPN).

This entangling of the infrastructure underlay and overlay has always seemed, to me, to be less than ideal. When I worked on large-scale transit provider networks in my more youthful days, we intentionally designed networks that separated customer routes from infrastructure routes. This created two separate failure and security domains in the network, as well as dividing the telemetry data in ways that allowed faster troubleshooting of common problems.

The same principles should apply in a DC fabric—after all, the workloads are essentially customers of the fabric, while the basic underlay connectivity counts as infrastructure. The simplest way to adopt this sort of division of labor is the same way large-scale transit providers did (and do)—use two different routing protocols for the underlay and overlay. For instance, IS-IS or RIFT for the underlay and eVPN using BGP for the overlay.

If you move to two layers of control plane, the question above becomes a bit more nuanced—should the overlay control plane run on the hosts? Should the underlay control plane run on the hosts?

For faster down detection—for those hosts that need it, BFD tied to IGP neighbor state can remove the failed next hop from the local routing table at the ToR, causing the destinations reachable through the host to be withdrawn. Alternatively, the host can run an instance of the overlay control plane, which allows it to advertise and withdraw “customer routes” directly. In neither case is the underlay control plane required to run on the host.
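As a concrete (if greatly simplified) picture of the first option, here is a sketch of the kind of event handling a ToR could perform: tie a BFD session’s state to the set of routes whose next hop is the attached host, and pull them when the session drops. The data structures and callback are invented for illustration and do not reflect any particular vendor’s API:

```python
# Simplified illustration of "BFD tied to neighbor state" on a ToR: when the
# BFD session to a host goes down, withdraw every route using that host as a
# next hop. The structures here are invented for illustration only.
routing_table = {
    "10.2.0.0/24": "host-42",
    "10.2.1.0/24": "host-42",
    "10.3.0.0/24": "host-77",
}

def on_bfd_state_change(peer: str, state: str) -> list:
    """Withdraw all routes whose next hop is `peer` when its BFD session fails."""
    if state != "down":
        return []
    withdrawn = [prefix for prefix, nexthop in routing_table.items() if nexthop == peer]
    for prefix in withdrawn:
        del routing_table[prefix]
        # A real implementation would also trigger the IGP/BGP withdraw here.
    return withdrawn

print(on_bfd_state_change("host-42", "down"))   # ['10.2.0.0/24', '10.2.1.0/24']
print(routing_table)                            # only host-77's route remains
```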

For load sharing and traffic engineering—if something like SRm6, or even a more traditional form of traffic engineering, is in use, the information needed will be carried in the overlay rather than the underlay—so the underlay routing protocol does not need to run on the host.

On the other side of the coin, not running the underlay protocol on the host can help the overall network security posture. Assume a public-facing host connected to the fabric is somehow pwned… If the host is running the underlay protocol, it’s pretty simple to DoS the entire fabric to take it down, or to inject incorrect routing information. If the overlay is configured correctly, however, only the virtual topology the host has access to can be impacted by an attack—and if microsegmentation is deployed, that damage can be minimized as well.

From a complexity perspective, running the underlay control plane on the host dramatically increases the amount of state the host must maintain; there is no effective filter you can run to reduce state on the host without destroying some of the advantages gained by running the underlay control plane there. On the other hand, the ToR can be configured to filter routing information the host receives, controlling the amount of state the host needs to manage.

Control plane on the host or not? This is one of those questions where properly modularized and layered network design can make a big difference in what the right answer should be.

Many years ago I attended a presentation by Dave Meyer on network complexity—which set off an entire line of thinking about how we build networks that are just too complex. While it might be interesting to dive into our motivations for building networks that are just too complex, I started thinking about how to classify and understand the complexity I was seeing in all the networks I touched. Of course, my primary interest is in how to build networks that are less complex, rather than just understanding complexity…

This led me to do a lot of reading, write some drafts, and then write a book. During this process, I ended up coining what I call the complexity triad—State, Optimization, and Surface. If you read the book on complexity, you can see my views on what the triad consisted of changed through the writing—I started out with volume (of state), speed (of state), and optimization. Somehow, though, interaction surfaces needed to play a role in the complexity puzzle.

First, you create an interaction surface when you modularize anything—and you modularize to control state (the scope to set apart failure domains, the speed and volume to enable scaling). Second, adding interaction surfaces adds complexity by creating places where information must be exchanged—which requires protocols and other machinery. Finally, reducing state through abstraction at an interaction surface is the primary cause of many forms of suboptimal behavior in a control plane, and causes unintended consequences. Since interaction surfaces are so closely tied to state and optimization, I added surfaces to the triad and merged the two kinds of state into one: just state.

I have been thinking through the triad again in the last several weeks for various reasons, and I’m still not certain it’s quite right, because I’m not convinced surfaces are really a tradeoff against state and optimization. It seems more accurate to say that state and optimization trade off through interaction surfaces. This does not make it any less of a triad, but it might mean I need to find a slightly different way to draw it. One way to illustrate it is as a system of moving parts, such as the illustration below.

If you think of the interaction surface between modules 1 and 2—two topological parts, or a virtual topology on top of a physical one—then the abstraction is the amount of information allowed to pass between the two modules. In the case of aggregation, for instance, the abstraction is the length of the aggregated prefixes, the metrics carried on the aggregates, and so on.

When you “turn the crank,” so to speak, you adjust the volume, speed (velocity), breadth, or depth of information being passed between the modules—either more or less information, faster or slower, in more places or fewer, or a change in the reaction of the module receiving the state. Every time you turn the crank, however, there is not one reaction but many. Notice optimization 1 will turn in the opposite direction from optimization 2 in the diagram—so turning the crank for 1 to be more optimal will always result in 2 becoming less optimal. There are tens or hundreds of such interactions in any system, and it is impossible for any person to know or understand all of them.

For instance, if you aggregate hundreds of /64’s to tens of /60’s, you reduce the state and optimize by reducing the scope of the failure domain. On the other hand, because you have less specific routing information, traffic is (most likely) going to flow along less-than-optimal paths. If you “turn the crank” by aggregating those same hundreds of /64’s to a 0::0, you will have more “airtight” failure domains or modules, but less optimal traffic flow. Hence …

If you haven’t found the tradeoffs, you haven’t looked hard enough.
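The aggregation example above is easy to make concrete with Python’s ipaddress module. The prefixes below are toy values, but the effect is the same: fewer routes cross the interaction surface, and the path detail they carried disappears with them:

```python
import ipaddress

# A pile of /64s drawn from two covering /60s (toy prefixes for illustration).
prefixes = [ipaddress.ip_network(f"2001:db8:0:{i:x}::/64") for i in range(32)]

# Announce the covering /60s instead of the individual /64s.
aggregates = {p.supernet(new_prefix=60) for p in prefixes}

print(len(prefixes), "routes before aggregation")    # 32
print(len(aggregates), "routes after aggregation")   # 2
# Neighbors beyond the interaction surface now see two routes instead of
# thirty-two: less state and a tighter failure domain, but also no way to
# choose a better path toward any individual /64.
```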

What understanding the SOS triad gives you, combined with a fundamental knowledge of how these things work, is knowing where to look for the tradeoffs. Maybe it would be better to illustrate the SOS triad with surfaces at the bottom all the time, acting as a sort of fulcrum or balance point between state and optimization… Or maybe a completely different illustration would be better. This is something for me to think about more and figure out.

Complexity interacts with these interaction surfaces as well, of course—the more complex a system becomes, the more complex the interaction surfaces within the system become, or the more of them you have. A key point in design of any kind is balancing the number of interaction surfaces with their complexity, depth, and breadth—in other words, where should you modularize, what should each module contain, what sort of state is passed between the modules, where does state pass between the modules, etc. Somehow, mentally, you have to factor in the unintended consequences of hiding information (the first corollary to Keith’s Law, in effect), and the law of leaky abstractions (all nontrivial abstractions leak).

This is a far different way of looking at networks and their design than what you learned in any random certification, and it’s probably not even something you will find in a college textbook. It is quite difficult to apply when you’re down in the configuration of individual devices. But it’s also the key to understanding networks as a system and beginning the process of thinking about where and how to modularize to create the simplest system to solve a given hard problem.

Going back to the beginning, then—one of the reasons we build such complex networks is that we do not really think about how the modules fit together. Instead, we use rules of thumb and folk wisdom while we mumble about failure domains and “this won’t scale” under our breath. We are so focused on the individual gears becoming commodities that we fail to see the system and all its moving parts—or we somehow think “this is all so easy” and build very inefficient systems with brute-force resolutions, often resulting in mass failures that are hard to understand and resolve.

Sorry, there’s no clear point or lesson here… This is just what happens when I’ve been buried in dissertation work all day and suddenly realize I have not written a blog post for this week… But it should give you something to think about.

Post-mortem reviews seem to be quite common in the software engineering and application development sides of the IT world—but I do not recall a lot of post-mortems in network engineering across my 30 years. This puzzling observation sprang to mind while I was reading a post over at the ACM this last week about how to effectively learn from the post-mortem exercise.

The common pattern seems to be setting aside a one hour meeting, inviting a lot of people, trying to shift blame while not actually saying you are shifting blame (because we are all supposed to live in a blame-free environment now—fix the problem, not the blame!), and then … a list is created on a whiteboard, pictures are taken, and everyone walks away with a rock-solid plan to never do that again.

In a few months’ time, the same team will be in the same room, draw the same drawings, and say the same things all over again. At least that is the way it seems to me. If there is an effective post-mortem process in use by a company someplace, I do not think I have seen it.

From the article—

Are we missing anything in this prevalent rinse-and-repeat cycle of how the industry generally addresses incidents that could be helpful? Put another way: As we experience incidents, work through them, and deal with their aftermath, if we set aside incident-specific, and therefore fundamentally static, remediation items, both in technology and process, are we learning anything else that would be useful in addressing and responding to incidents? Can we describe that knowledge? And if so, how would we then make use of it to leverage past pain and improve future chances at success?

I tend to think, from the few times I have seen network post-mortems performed, that the reason they do not work well is that we slip into the same appliance/configuration frame of mind so quickly. We want to understand what configuration was entered incorrectly, or what defect should be reported back to the vendor, rather than thinking about organizational and process changes. The smaller the detail, the safer the conclusions, after all—aim small, miss big, is what we say in the shooting world.

We focus so much on mean time to innocence, and how to create a practically perfect process that will never fail, that we fail to do the one thing we should be doing: learning.

Okay, so enough whining—what can be done about this situation? A few practical suggestions come to mind. These are not, of course, well-thought-out solutions, but rather, perhaps, “part of the solution.”

Rather than trying to figure out the root cause, spend that precious hour of post-mortem time mapping out three distinct workflows. The first should be the process that set up the failure. What drove the installation of this piece of hardware or software? What drove the deployment of this protocol? How did we get to the place where this failure had that effect? Once this is mapped out, see if there is anything in that process, or even in the political drivers and commitments made during that process, that could or should be modified to really change the way technology is deployed in your network.

The second process you should map out is the steps taken to detect the problem. Dwell time is a huge problem in modern networks—the time between a failure occurring and being detected. You should constantly focus on bringing dwell time down while paying close attention to the collateral damage of false positives. Mapping out how this failure was detected, and where it should have been caught sooner, can help improve telemetry systems, ultimately decreasing MTTR.

The third, and final, workflow you map out should be the troubleshooting process itself. People rarely map out their troubleshooting process for later reference, but this little trick, which I learned way back in my tube-type electronics days, used to save me hours of time in the field. As you troubleshoot, make a flow chart. Record what you checked, why you checked it, how you checked it, and what you learned from the check. This flowchart, or workflow, is precious material in the post-mortem process. What can you instrument, or make easier to find, to reduce troubleshooting time in the next go-round? How can you traverse the network and find the root cause faster next time? These are crucial questions you can only answer with the use of a troubleshooting workflow.
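One way to keep this lightweight is to record each step as a small structured entry as you go. The fields below simply mirror the questions in the paragraph above (what, why, how, what was learned); the format and the sample entry are only a suggestion, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime

# A minimal structure for recording troubleshooting steps as you go; the
# fields mirror the questions above and the format is only a suggestion.
@dataclass
class TroubleshootingStep:
    what_checked: str
    why_checked: str
    how_checked: str
    what_learned: str
    timestamp: datetime = field(default_factory=datetime.now)

steps = [
    TroubleshootingStep(
        what_checked="BGP session state on the border leaf",
        why_checked="external prefixes missing from the fabric",
        how_checked="show command / telemetry query on the leaf",
        what_learned="session up, so the failure is further upstream",
    ),
]

for step in steps:
    print(f"[{step.timestamp:%H:%M}] {step.what_checked} -> {step.what_learned}")
```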

I don’t know if you already do post-mortems or not, or how valuable you think they are—but I would suggest they can be, and are, quite useful. So long as you get out of the narrows and focus on systems and workflows. Aim small, miss big—but aim big and you’ll either hit the target or, at worst, miss small.