

IPv6 Backscatter and Address Space Scanning

Backscatter is often used to detect various kinds of attacks, but how does it work? The paper under review today, Who Knocks at the IPv6 Door, explains how backscatter is used in IPv4, and examines how effectively the same technique might be used to detect scanning of the IPv6 address space. The best place to begin is with an explanation of backscatter itself.

Assume A is scanning the IPv4 address space for some reason—for instance, to find an open port on some host, or as part of a DDoS attack. When A sends an unsolicited packet to C, a firewall (or some similar edge filtering device), C will attempt to discover the source of this packet. It could be there is some local policy set up allowing packets from A, or perhaps A is part of some domain none of the devices behind C should be connecting to. In order to discover more, the firewall will perform a reverse lookup. To do this, C takes advantage of the PTR DNS record, looking up the IP address to see if there is an associated domain name (this is explained in more detail in my How the Internet Really Works webinar, which I give every six months or so). This reverse lookup generates what is called backscatter—these backscatter events can be used to find hosts scanning the IP address space. Sometimes these scans are innocent, such as a web spider searching for web servers; other times, they could be a prelude to some sort of attack.
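If you want to see what one of these reverse lookups looks like, here is a minimal sketch in TypeScript running under Node.js; the address is just an example:

```typescript
import { promises as dns } from "node:dns";

// A reverse (PTR) lookup: map an IP address back to its domain name,
// the same query a firewall might issue on seeing an unsolicited
// packet from an unknown source.
async function reverseLookup(ip: string): Promise<void> {
  try {
    const names = await dns.reverse(ip); // queries the PTR record
    console.log(`${ip} -> ${names.join(", ")}`);
  } catch {
    console.log(`${ip} has no PTR record`);
  }
}

reverseLookup("8.8.8.8"); // for example: dns.google
```

Each such query leaves a trace in the DNS infrastructure; those traces, observed in aggregate, are the backscatter the researchers measure.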

Kensuke Fukuda and John Heidemann. 2018. Who Knocks at the IPv6 Door?: Detecting IPv6 Scanning. In Proceedings of the Internet Measurement Conference 2018 (IMC ’18). ACM, New York, NY, USA, 231-237. DOI: https://doi.org/10.1145/3278532.3278553

Scanning the IPv6 address space is much more difficult because there are 2^128 addresses rather than 2^32. The paper under review here is one of the first attempts to understand backscatter in the IPv6 address space, which can lead to a better understanding of the ways in which IPv6 scanners are optimizing their search through the larger address space, and also to a better understanding of how backscatter can be used in IPv6 for many of the same purposes as it is in IPv4.
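A quick back-of-the-envelope calculation shows just how large the gap is; the probe rate here is an assumption chosen only for illustration:

```typescript
// Brute-force scan time at an assumed one million probes per second.
const probesPerSecond = 1e6;
const secondsPerYear = 365 * 24 * 3600;

const ipv4Minutes = 2 ** 32 / probesPerSecond / 60;
console.log(`IPv4: about ${ipv4Minutes.toFixed(0)} minutes`); // ~72 minutes

const ipv6Years = 2 ** 128 / probesPerSecond / secondsPerYear;
console.log(`IPv6: about ${ipv6Years.toExponential(2)} years`); // ~1.08e+25 years
```

Exhaustive scanning is simply off the table in IPv6, which is why scanners must optimize their searches in the first place.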

The researchers begin by setting up a backscatter testbed across a subset of hosts for which IPv4 backscatter information is well-known. They developed a set of heuristics for identifying the kind of service or host performing the reverse DNS lookup, classifying them into major services, content delivery networks, mail servers, etc. They then examined the number of reverse DNS lookups requested versus the number of IP packets each received.

It turns out that about ten times as many backscatter incidents are reported for IPv4 as for IPv6, which either indicates that IPv6 hosts perform reverse lookup requests about ten times less often than IPv4 hosts, or that IPv6 hosts are ten times less likely to be monitored for backscatter events. Either way, this result is not promising—on the surface, it appears IPv6 hosts are less likely to cause backscatter events, or IPv6 backscatter events are less likely to be reported. This could indicate that widespread deployment of IPv6 will make it harder to detect various kinds of attacks across the DFZ. A second result from this research is that, using backscatter, the researchers determined IPv6 scanning is increasing over time; while the IPv6 space is not currently a prime target for attacks, it might become more of one over time, if the scanning rate is any indicator.

The bottom line is—IPv6 hosts need to be monitored as closely as, or more closely than, IPv4 hosts for scanning events. The techniques used for scanning the IPv6 address space are not well understood at this time, either.

 

The Floating Point Fix

Floating point is not something many network engineers think about. In fact, when I first started digging into routing protocol implementations in the mid-1990s, I discovered one of the tricks you needed to remember when trying to replicate a router's metric calculation was to always round down. EIGRP, like most of the rest of Cisco's IOS at the time, was written for processors that did not support floating point operations. The silicon and processing time costs were just too high.
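As a concrete illustration of why rounding down matters, here is a sketch of the classic EIGRP composite metric (default K values; bandwidth in Kbps, delay in microseconds) in TypeScript:

```typescript
// EIGRP's metric is computed entirely in integer arithmetic, so every
// division truncates; rounding up (or to nearest) anywhere in the
// chain produces a metric that never matches the router's.
function eigrpMetric(minBandwidthKbps: number, totalDelayUsec: number): number {
  const bw = Math.floor(10_000_000 / minBandwidthKbps); // truncates, never rounds up
  const delay = Math.floor(totalDelayUsec / 10);        // tens of microseconds
  return 256 * (bw + delay);
}

// A T1 link: 1544 Kbps of bandwidth, 20000 usec of delay.
// 10^7 / 1544 = 6476.68..., which truncates to 6476.
console.log(eigrpMetric(1544, 20000)); // 2169856, the well-known T1 metric
```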

What brings all this to mind is a recent article on the problems with floating point performance over at The Next Platform by Michael Feldman. According to the article:

While most programmers use floating point indiscriminately anytime they want to do math with real numbers, because of certain limitations in how these numbers are represented, performance and accuracy often leave something to be desired.

For those who have not spent a lot of time in the coding world, a floating point number is one that has some number of digits after the decimal point. While integers are fairly easy to represent and calculate over in binary, floating point numbers are much more difficult, because most decimal fractions cannot be represented exactly in binary. The number of bits you have available to represent the number makes a very large difference in accuracy. For instance, if you try to store the number 101.1 in a float, you will find the number stored is actually 101.099998. To store 101.1 with more precision, you need a double, which is twice as long as a float.
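To see this in action, here is a quick sketch in TypeScript, where numbers are 64-bit doubles by default and Math.fround rounds to 32-bit float precision:

```typescript
// 101.1 has no exact binary representation; how close you get depends
// on how many bits you spend.
const asFloat = Math.fround(101.1); // nearest 32-bit float value
console.log(asFloat.toFixed(6));    // "101.099998"

const asDouble = 101.1;             // JavaScript numbers are 64-bit doubles
console.log(asDouble.toPrecision(20)); // "101.09999999999999432"
```

Note that even the double is not exact; it is just wrong much further out past the decimal point.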

Okay—this all might be fascinating, but who cares? Scientists, mathematicians, and … network engineers do, as a matter of fact. First, carrying around double floats to store numbers with higher precision means a lot more network traffic. Second, when you start looking at timestamps and large amounts of telemetry data, the efficiency and accuracy of number storage becomes a rather big deal.

Okay, so the current floating point storage format, called IEEE 754, is inaccurate and rather inefficient. What should be done about this? According to the article, John Gustafson, a computer scientist, has been pushing for the adoption of a replacement called posits. Quoting the article once again:

It does this by using a denser representation of real numbers. So instead of the fixed-sized exponent and fixed-sized fraction used in IEEE floating point numbers, posits encode the exponent with a variable number of bits (a combination of regime bits and the exponent bits), such that fewer of them are needed, in most cases. That leaves more bits for the fraction component, thus more precision.

Did you catch why this is more efficient? Because it uses a variable length field. In other words, posits replace a fixed field structure (like what was originally used in OSPFv2) with a variable length field (like what is used in IS-IS). While you must eat some space in the format to encode the length, the amount of "unused space" in current formats overwhelms the space spent, resulting in an improvement in accuracy. Further, many numbers that require a double today can be carried in the size of a float. Not only does using a TLV-like format increase accuracy, it also increases efficiency.
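To make the variable-length idea concrete, here is a sketch of decoding an 8-bit posit in TypeScript. This follows my reading of Gustafson's encoding with es = 0; a real implementation would also handle rounding and the full range of sizes:

```typescript
// Decode an 8-bit posit: a sign bit, then a variable-length run of
// regime bits, then (optionally) exponent bits, then fraction bits.
function decodePosit8(bits: number, es = 0): number {
  const nbits = 8;
  const mask = (1 << nbits) - 1;
  if (bits === 0) return 0;
  if (bits === 1 << (nbits - 1)) return NaN; // NaR, "not a real"

  const sign = (bits >> (nbits - 1)) & 1;
  if (sign) bits = -bits & mask; // two's complement negation

  // The regime: a run of identical bits after the sign bit. Its length
  // is what makes the "exponent field" variable-sized.
  const first = (bits >> (nbits - 2)) & 1;
  let i = nbits - 2;
  let run = 0;
  while (i >= 0 && ((bits >> i) & 1) === first) { run++; i--; }
  const k = first ? run - 1 : -run;
  i--; // skip the bit that terminated the regime run

  // Exponent: up to es bits (none at all when es = 0).
  let exp = 0;
  for (let j = 0; j < es; j++) exp = (exp << 1) | (i >= 0 ? (bits >> i--) & 1 : 0);

  // Fraction: whatever bits remain, with a hidden leading 1.
  let frac = 1.0, scale = 0.5;
  while (i >= 0) { frac += scale * ((bits >> i--) & 1); scale /= 2; }

  const useed = 2 ** (2 ** es);
  const value = useed ** k * 2 ** exp * frac;
  return sign ? -value : value;
}

console.log(decodePosit8(0b01000000)); // 1  (regime "10", no fraction bits set)
console.log(decodePosit8(0b01101000)); // 3  (regime "110", fraction .1)
```

The longer the regime run, the larger (or smaller) the scale it can express; the shorter the run, the more bits are left over for the fraction, and thus the more precision near 1.0.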

From the perspective of the State/Optimization/Surface (SOS) tradeoff, there should be some increase in complexity somewhere in the overall system—if you have not found the tradeoffs, you have not looked hard enough. Indeed, what we find is there is an increase in the amount of state being carried in the data channel itself; there is additional state, and additional code that knows how to deal with this new way of representing numbers.

It's always interesting to find situations in other information technology fields where discussions parallel to discussions in the networking world are taking place. Many times, you can see people encountering the same design tradeoffs we see in network engineering and protocol design.

Design Intelligence from the Hourglass Model

Over at the Communications of the ACM, Micah Beck has an article up about the hourglass model. While the math is quite interesting, I want to focus on transferring the observations from the realm of protocol and software systems development to network design, beginning with the concept and terminology, which is very useful.

The first key point made in the paper is this—

The thin waist of the hourglass is a narrow straw through which applications can draw upon the resources that are available in the less restricted lower layers of the stack.

A somewhat obvious point to be made here is that applications can only use services available in the spanning layer, and the spanning layer can only build those services out of the capabilities of the supporting layers. If fewer applications need to be supported, or the applications deployed do not require a lot of “fancy services,” a weaker spanning layer can be deployed. Based on this, the paper observes—

The balance between more applications and more supports is achieved by first choosing the set of necessary applications N and then seeking a spanning layer sufficient for N that is as weak as possible. This scenario makes the choice of necessary applications N the most directly consequential element in the process of defining a spanning layer that meets the goals of the hourglass model.

Beck calls the weakest possible spanning layer that supports a given set of applications the minimally sufficient spanning layer (MSSL). There is one thing that seems off about this definition, however—the correlation between the number of applications supported and the strength of the spanning layer. There are many cases where a network supports thousands of applications, and yet the network itself is quite simple. There are many other cases where a network supports just a few applications, and yet the network is very complex. It is not the number of applications that matters; it is the set of services the applications demand from the spanning layer.

Based on this, we can change the definition slightly: an MSSL is the weakest spanning layer that can provide the set of services required by the applications it supports. This might seem intuitive or obvious, but it is often useful to work these kinds of intuitive things out, so they can be expressed more precisely when needed.
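For instance, the revised definition can be stated compactly (this is my notation, not Beck's):

```latex
% req(a): the set of services application a demands
% svc(S): the set of services spanning layer S provides
% The MSSL for an application set N is the weakest sufficient layer:
\mathrm{MSSL}(N) = \min_{\preceq}
  \left\{\, S \;\middle|\; \bigcup_{a \in N} \mathrm{req}(a) \subseteq \mathrm{svc}(S) \,\right\}
```

where ⪯ orders spanning layers from weaker to stronger. Note that the number of applications in N never appears; only the union of their service requirements does.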

First lesson: the primary driver in network complexity is application requirements. To make the network simpler, you must reduce the requirements applications place on the network.

There are, however, several counterintuitive cases here. For instance, TCP is designed to emulate (or abstract) a circuit between two hosts—it creates what appears to be a flow-controlled, error-free channel with no drops on top of IP, which has no flow control and does drop packets. In this case, the spanning layer (IP), or the wasp waist, does not support the services the upper layer (the application) requires.

In order to make this work, TCP must add a lot of complexity that would normally be handled by one of the supporting layers—in fact, TCP might, in some cases, recreate capabilities available in one of the supporting layers, but hidden by the spanning layer. There are, as you might have guessed, tradeoffs in this neighborhood. Not only are the mechanisms TCP must use more complex than the ones some supporting layer might have used, TCP represents a leaky abstraction—the underlying connectionless service cannot be completely hidden.

Take another instance more directly related to network design. Suppose you aggregate routing information at every point where you possibly can, or perhaps you use BGP route reflectors to manage configuration complexity and route counts. In most cases, this will mean traffic flows through the network suboptimally. You can re-optimize the network, but not without introducing a lot of complexity. Further, you will probably always have some form of leaky abstraction to deal with when abstracting information out of the network.

Second lesson: be careful when stripping information out of the spanning layer in order to simplify the network. There will be tradeoffs, and sometimes you end up with more complexity than what you started with.

A second counterintuitive case is that of adding complexity to the supporting layers in order to ultimately simplify the spanning layer. Based on the model presented in the paper, it seems that adding more services to the spanning layer will always end up adding more complexity to the entire system. MPLS and Segment Routing (SR), however, show this is not always true. If you need traffic steering, for instance, it is easier to implement MPLS or SR in the supporting layer than to try to emulate their services at the application level.

Third lesson: sometimes adding complexity in a lower layer can simplify the entire system—although this might seem to be counter-intuitive from just examining the model.

The bottom line: complexity is driven by applications (top down), but understanding the full stack, and where interactions take place, can open up opportunities for simplifying the overall system. The key is thinking through all parts of the system carefully, using effective mental models to understand how they interact (interaction surfaces), and then considering the optimization tradeoffs you make by shifting state to different places.

DORA, DevOps, and Lessons for Network Engineers

DevOps Research and Assessment (DORA) released their 2018 Accelerate report on the state of DevOps at the end of 2018; I’m a little behind in my reading, so I just got around to reading it, and trying to figure out how to apply their findings to the infrastructure (networking) side of the world.

DORA found organizations that outsource entire functions, such as building an entire module or service, tend to perform more poorly than organizations that outsource by integrating individual developers into existing internal teams (page 43). It is surprising companies still think outsourcing entire functions is a good idea, given the many years of experience the IT world has with the failures of this model. Outsourced components, it seems, too often become a bottleneck in the system, especially as contracts constrain your ability to react to real-world changes. Beyond this, outsourcing an entire function not only moves the work to an outside organization, but also the expertise. Once you have lost critical mass in an area, and any opportunity for employees to learn about that area, you lose control over that aspect of your system.

DORA also found a correlation between faster delivery of software and reduced Mean Time To Repair (MTTR) (page 19). On the surface, this makes sense. Shops that deliver software continuously are bound to have faster, more regularly exercised processes in place for developing, testing, and rolling out a change. Repairing a fault or failure requires change; anything that improves the speed of rolling out a change is going to drive MTTR down.

Organizations that emphasize monitoring and observability tended to perform better than others (page 55). This has major implications for network engineering, where telemetry and management are often “bolted on” as an afterthought, much like security. This is clearly not optimal, however—telemetry and network management need to be designed and operated like any other application. Data sources, stores, presentation, and analysis need to be segmented into separate services, so new services can be tried out on top of existing data, and new sources can feed into existing services. Network designers need to think about how telemetry will flow through the management system, including where and how it will originate, and what it will be used for.

These observations about faster delivery and observability should drive a new way of thinking about failure domains; while failure domains are often primarily thought of as reducing the “blast radius” when a router or link fails, they serve two much larger roles. First, failure domain boundaries are good places to gather telemetry because this is where information flows through some form of interaction surface between two modules. Information gathered at a failure domain boundary will not tend to change as often, and it will often represent the operational status of the entire module.

Second, well-placed failure domain boundaries can be used to stake out areas where “new things” can be put in operation with some degree of confidence. If a network has well-designed failure domain boundaries, it is much easier to deploy new software, hardware, and functionality in a controlled way. This enables a more agile view of network operations, including the ability to roll out changes incrementally through a canary process, and to use processes like chaos monkey to understand and correct unexpected failure modes.

Another interesting observation is the j-curve of adoption (page 3).

This j-curve shows the “tax” of building the underlying structures needed to move from a less automated state to a more automated one. Keith’s Law:

In a complex system, the cumulative effect of a large number of small optimizations is externally indistinguishable from a radical leap.

…operates in part because of this j-curve. Do not be discouraged if it seems to take a lot of work to make small amounts of progress in many stages of system development—the results will come later.

The bottom line: it might seem like a report about software development is too far outside the realm of network engineering to be useful—but the reality is network engineers can learn a lot about how to design, build, and operate a network from software engineers.

Why You Should Block Notifications and Close Your Browser

Every so often, while browsing the web, you run into a web page that asks if you would like to allow the site to push notifications to your browser. Apparently, according to the paper under review, about 12% of the people who receive this request allow notifications. What, precisely, does allowing notifications do, and what are the side effects?

Papadopoulos, Panagiotis, Panagiotis Ilia, Michalis Polychronakis, Evangelos P. Markatos, Sotiris Ioannidis, and Giorgos Vasiliadis. “Master of Web Puppets: Abusing Web Browsers for Persistent and Stealthy Computation.” In Proceedings 2019 Network and Distributed System Security Symposium. San Diego, CA: Internet Society, 2019. https://doi.org/10.14722/ndss.2019.23070.

Allowing notifications permits the server to kick off a particular kind of process on the local computer: a service worker. There are, in fact, two kinds of workers that can run “behind” a web site in HTML5, the web worker and the service worker. The web worker is designed to calculate or locally render some object that will appear on the site, such as decrypting a downloaded audio file for local playback. This moves the processing load (including the power and cooling costs!) from the server to the client, saving money for the hosting provider, and (potentially) rendering the object in question more quickly.

A service worker, on the other hand, is designed to support notifications. For instance, say you keep a news web site open all day in your browser. You do not necessarily want to reload the page every few minutes; instead, you would rather the site send you a notification through the browser when some new story has been posted. Since the service worker is designed to cause an action in the browser on receiving a notification from the server, it has direct access to the network side of the host, and it can run even when the tab showing the web site is not visible.

In fact, because service workers are sometimes used to coordinate the information on multiple tabs, a service worker can communicate between tabs within the same browser and stay running in the browser’s context even after the tab that started it is closed. To make certain other tabs do not block while the service worker is running, service workers run in a separate thread; they can consume resources from a different core in your processor, so you are not aware (from a performance perspective) they are running. To sweeten the pot, a service worker can be restarted after your browser has restarted by a special push notification from the server.
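For a sense of how little it takes, here is a minimal sketch in TypeScript of the registration and push-subscription flow; the file name and key are placeholders, not anything from the paper:

```typescript
// Register a service worker and subscribe to push notifications.
// "/sw.js" is an assumed worker script served from the site's origin.
async function enablePush(): Promise<void> {
  // The worker lives in the browser's context, not the tab's; it can
  // keep running after this page is closed.
  const registration = await navigator.serviceWorker.register("/sw.js");

  // This is exactly the permission prompt discussed above.
  if ((await Notification.requestPermission()) !== "granted") return;

  // Once subscribed, the server can wake the worker with a push
  // message, even when no tab from this site is open.
  await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: "<the server's public VAPID key>",
  });
}
```

Everything here is a legitimate, widely used API; the paper's contribution is showing how the same machinery can be abused.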

If a service worker sounds like a perfect setup for running code that can mine bitcoins or launch DDoS attacks from your web browser, then you might have a future in computer security. This is, in fact, what MarioNet, the proof-of-concept system described in this paper, does—it uses a service worker to consume resources on as many hosts as it can install itself on, and to do just about anything with those resources, including launching a DDoS attack.

Given the above, it should be simple enough to understand how the attack works. When the user lands on a web page, ask for permission to push notifications. A lot of web sites that do not seem to need such permission ask now, particularly ecommerce sites, so the question does not seem out of place almost anywhere any longer. Install a service worker, using the worker’s direct connection to the host’s network to communicate with a controller. The controller can then install code into the service worker and direct the execution of that code. If the user closes their browser, randomly push notifications back to the browser, in case the user opens it again, thus recreating the service worker.

Since the service worker runs in a separate thread, the user will not notice any impact on web browsing performance from the use of their resources—in fact, MarioNet’s designers use fine-grained tracking of resources to ensure they do not consume enough to be noticed. Since the service worker runs between the browser and the host operating system, no defenses built into the browser can detect the network traffic to raise a flag. Since the service worker is running in the context of the browser, most anti-virus software packages will give the traffic and processing a pass.

Lessons learned?

First, making something powerful from a compute perspective will always open holes like this. There will never be a system that allows the transfer of computation from one host to another without also opening some hole that can be exploited.

Second, abstraction hides complexity, even the complexity of an attack or security breach, nicely. Abstraction is like anything else in engineering: if you haven’t found the tradeoffs, you haven’t looked hard enough.

Third, close your browser when you are done. The browser is, in many ways, an open door to the outside world through which all sorts of people can make it into your computer. I have often wanted to create a VM or container in which I can run a browser from a server on the ‘net. When I’m done browsing, I can shut the entire thing down and restore the state to “clean.” No cookies, no java stuff, no nothing. A nice fresh install each time I browse the web. I’ve never gotten around to building this, but I should really put it on my list of things to do.

Fourth, don’t accept inbound connection requests without really understanding what you are doing. A notification push is, after all, just another inbound connection request. It’s like putting a hole in your firewall for that one FTP server that you can’t control. Only it’s probably worse.

The Network Sized Holes in Serverless

Until about 2017, the cloud was going to replace all on-premises data centers. As it turns out, however, the cloud has not replaced all on-premises data centers. Why not? Based on the paper under review, one potential answer is that containers in the cloud are still too much like “serverfull” computing. Developers must still create and manage what appear to be virtual machines, including:

  • Machine level redundancy, including georedundancy
  • Load balancing and request routing
  • Scaling up and down based on load
  • Monitoring and logging
  • System upgrades and security
  • Migration to new instances

Serverless solves these problems by placing applications directly onto the cloud, or rather a set of libraries within the cloud.

Jonas, Eric, Johann Schleier-Smith, Vikram Sreekanti, Chia-Che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, et al. “Cloud Programming Simplified: A Berkeley View on Serverless Computing.” ArXiv:1902.03383 [Cs], February 9, 2019. http://arxiv.org/abs/1902.03383.

The authors define serverless by contrasting it with serverfull computing. In a serverless environment, software runs in response to an event; in a serverfull environment, software runs until stopped. While an application has no maximum run time in a serverfull environment, the provider sets some maximum in a serverless environment. The server instance, operating system, and libraries are all chosen by the user in a serverfull environment, but they are chosen by the provider in a serverless environment. The serverless environment is a higher-level abstraction of compute and storage resources than a cloud instance (or an on-premises solution, even a private cloud).

These differences add up to faster application development in a serverless environment; application developers are completely freed from system administration tasks to focus entirely on developing and deploying useful software. This should, in theory, free application developers to focus on solving business problems, rather than worrying about the infrastructure. Two key points the authors make about the serverless realm are the complex software techniques used to bring serverless processes up quickly (such as preloading and holding the VM instances that back services), and the security isolation provided through VM level separation.

The authors provide a section on challenges in serverless environments and the workarounds to these challenges. For instance, one problem with real-time video compression is that the object store used to communicate between processes running on a serverless infrastructure is too slow to support fine-grained communication, while the functions are too coarse-grained to support some of the required tasks. To solve this problem, they propose using function-to-function communication, which moves the object store out of the process. This provides dramatic processing speedups, as well as reducing the cost of the serverless solution to a fraction of a cloud instance.

One of the challenges discussed here is the problem of communication patterns, including broadcast, aggregation, and shuffle. Each of these, of course, relies on the underlying network to transport data between the compute nodes on which serverless functions are running. Since the serverless user cannot determine where a particular function will run, the performance of the underlying transport is—of course—quite variable. The authors say: “Since the application cannot control the location of the cloud functions, a serverless computing application may need to send two and four orders of magnitude more data than an equivalent VM-based solution.”

And this is where the network sized hole in serverless comes into play. It is common fare today to say the network is “just a commodity.” Speeds and feeds are so high, and so easy to build, that we do not need to worry about building software that knows how to use a network efficiently, or even understands the network at all. Matching the network to software requirements, the thinking goes, is a thing of the past—bandwidth is all a commodity now.

The law of leaky abstractions, however, will always have its say—a corollary here is that higher level abstractions will always have larger and more consequential leaks. The solutions offered to each of the challenges listed in the paper are all, in fact, resolved by introducing layering violations, which allow the developer to “work around” an inefficiency at some lower layer in the abstraction. Ultimately, such workarounds will compound into massive technical debt, and some “next new thing” will come along to “solve the problems.”

Moving data ultimately still takes time, still takes energy; the network still (often) needs to be tuned to the data being moved. Serverless is a great technology for some solutions—but there is ultimately no way to abstract out the hard work of building an entire system tuned to do a particular task and do it well. When you face abstraction, you should always ask: what is gained, and what is lost?

Research: Service Fabric

Microservices architectures probably will not “take over the world,” in the sense of being the right solution for every application you can throw at them, but they are becoming more widespread. Microservices and related “staged” design patterns are ideal for edge facing applications, where the edge facing services, in particular, need to scale quickly across broad geographical regions. Supporting microservices using a standard overlay model can be challenging; somehow the network control plane, container placement/spinup/cleanup, and service discovery must be coordinated. While most networks would treat each of these as a separate problem, service fabrics are designed to either interact with, or even replace, each of the systems involved, with a single, unified overlay construct.

Kakivaya, Gopal, Lu Xun, Richard Hasha, Shegufta Bakht Ahsan, Todd Pfleiger, Rishi Sinha, Anurag Gupta, et al. “Service Fabric: A Distributed Platform for Building Microservices in the Cloud.” In Proceedings of the Thirteenth EuroSys Conference, 33:1–33:15. EuroSys ’18. New York, NY, USA: ACM, 2018. https://doi.org/10.1145/3190508.3190546.

Kakivaya, et al., begin by considering the five major design principles of a service fabric: modular and layered design; self-* properties; decentralized operation; strong consistency; and support for stateful services. They then introduce Microsoft’s Service Fabric (SF) service, which they say has taken over sixteen years and the work of more than a hundred core software engineers. After considering some of the components of SF at a high level, they discuss a single use case; if you do not understand the design and application of the microservices design pattern, this section is a great tutorial to start from. The authors then dive into several interesting (for network engineers) components of SF in more detail.

The first of these is the federation subsystem; this allows groups of nodes to be organized into a single federation. Nodes in a federation form themselves into a virtual ring topology regardless of the underlying topology. From a networking perspective, rings have several interesting characteristics.

First, routing through a ring converges more slowly than other topologies; the larger the ring, the slower the convergence. Second, ring topologies tend to form microloops while converging, as well. Third, the addition of a new node does not increase the number of neighbors on any node (each node in a ring has two neighbors regardless of how large the ring is), but the stretch, or the total length of the longest path through the network, increases with each additional node.

Since the rings in SF are primarily used for control plane functions, rather than routing—more on this in a minute—the convergence properties of ring topologies in this application really only apply to the speed at which nodes can be inserted into and removed from the ring, rather than to the speed of routing through the ring. Federated rings use a strongly consistent membership model, which means that although a single node might be polled for liveness by multiple other nodes in the mesh, only one needs to declare the node down in order to remove it from the ring. Down detection in SF is symmetric; every node is responsible both for monitoring some other set of nodes and for reporting on its own liveness to the nodes monitoring it.

How can these federated rings avoid the downsides of routing through a ring topology? Because routed paths do not follow the ring. If a node needs to communicate with another node, it first uses service discovery to determine the IP address of the remote service, then sends traffic directly to that IP address. The traffic between nodes is, then, IP routed. Routing tables are built and maintained through a Distributed Hash Table (DHT). What is a DHT?

Consider a network of five nodes, where each node has one or two labeled links attached. While a service mesh would use node or service identifiers instead of links, the principle is the same. Assume two of the nodes in this network are given routing responsibilities; A is to handle routing for all even numbered addresses, while D is to handle all odd numbered addresses. This even/odd split is a very primitive form of a hash, which is simply used to split a larger number space into smaller buckets. Smaller buckets are easier to search, and splitting the buckets up across multiple systems allows each to process and manage a smaller set of table entries.

Hashes are considered in more detail in Computer Networking Problems and Solutions.

If node E wants to reach link (or service) 6, it runs the hashing algorithm used by all the devices (divide by two in this case), then consults a local table to determine which node it should query about information to reach 6. It will discover the correct node to query, in this simple case, is A. Given the hashing is set up correctly, this is an efficient way to find and route to individual nodes fairly quickly.
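A sketch of this even/odd scheme in TypeScript may help; the node names follow the description above, while the table contents are invented for illustration:

```typescript
// Two nodes split the key space: A owns the even bucket, D the odd one.
type Owner = "A" | "D";
const bucketOwner: Record<number, Owner> = { 0: "A", 1: "D" };

// The "hash" is just the divide-by-two test described above.
const hash = (key: number): number => key % 2;

// Each owner maintains routing entries only for its own bucket,
// keeping every individual table small and fast to search.
const tables: Record<Owner, Map<number, string>> = {
  A: new Map([[6, "reachable via B"]]), // invented entry
  D: new Map([[3, "reachable via C"]]), // invented entry
};

function lookup(key: number): string | undefined {
  const owner = bucketOwner[hash(key)]; // which node to query
  return tables[owner].get(key);        // ask that node for the answer
}

console.log(lookup(6)); // E hashes 6, queries A: "reachable via B"
```

Real DHTs use cryptographic hashes over much larger key spaces, but the division of responsibility works the same way.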

Note this kind of system would suffer from the normal ills of a distributed routing protocol, including the limitations of the CAP theorem. In fact, the authors note that routing in SF is eventually consistent, which means nodes querying for a particular destination can receive stale information, just like in BGP, OSPF, IS-IS, etc.

This paper is a terrific introduction to the world of service mesh systems; it is well worth reading if you are interested in this new and emerging kind of overlay.
