The Floating Point Fix
Floating point is not something many network engineers think about. In fact, when I first started digging into routing protocol implementations in the mid-1990s, I discovered one of the tricks you needed to remember when trying to replicate a router's metric calculation was to always round down. EIGRP, like most of the rest of Cisco's IOS at the time, was originally written for processors that did not perform floating point operations. The silicon and processing time costs were just too high.
What brings all this to mind is a recent article on the problems with floating point performance over at The Next Platform by Michael Feldman. According to the article:
For those who have not spent a lot of time in the coding world, a floating point number is one that has some number of digits after the decimal point. While integers are fairly easy to represent and calculate with in the binary formats processors use, floating point numbers are much more difficult, because they are very difficult to represent in binary. The number of bits you have available to represent the number makes a very large difference in accuracy. For instance, if you try to store the number 101.1 in a float, you will find the number stored is actually 101.099998. To store 101.1 more accurately, you need a double, which is twice as long as a float.
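You can see this loss of precision for yourself by round-tripping a value through a 32-bit float. This little sketch uses Python's struct module; Python's native floats are 64-bit doubles, which makes the comparison easy:

```python
import struct

# Round-trip 101.1 through a 32-bit float; Python's own floats are
# 64-bit doubles, so the difference between the two is easy to see.
as_float = struct.unpack("f", struct.pack("f", 101.1))[0]
as_double = 101.1

print(f"{as_float:.6f}")    # 101.099998 -- the float cannot hold 101.1
print(f"{as_double:.6f}")   # 101.100000 -- the double gets much closer
print(f"{as_double:.20f}")  # ...but even the double is not exact
```

Note the last line: even the double is an approximation, because 101.1 has no finite binary representation at all.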
Okay—this might all be fascinating, but who cares? Scientists, mathematicians, and … network engineers do, as a matter of fact. First, carrying around doubles to store numbers with higher precision means a lot more network traffic. Second, when you start looking at timestamps and large amounts of telemetry data, the efficiency and accuracy of number storage becomes a rather big deal.
Okay, so the current floating point storage format, defined in IEEE 754, is inaccurate and rather inefficient. What should be done about this? According to the article, John Gustafson, a computer scientist, has been pushing for the adoption of a replacement called posits. Quoting the article once again:
It does this by using a denser representation of real numbers. So instead of the fixed-sized exponent and fixed-sized fraction used in IEEE floating point numbers, posits encode the exponent with a variable number of bits (a combination of regime bits and the exponent bits), such that fewer of them are needed, in most cases. That leaves more bits for the fraction component, thus more precision.
Did you catch why this is more efficient? Because it uses a variable length field. In other words, the posit format replaces a fixed field structure (like what was originally used in OSPFv2) with a variable length field (like what is used in IS-IS). While you must eat some space in the format to carry the length, the amount of "unused space" in current formats overwhelms the space spent this way, resulting in an improvement in accuracy. Further, many numbers that require a double today can be carried in the size of a float. Not only does using a TLV format increase accuracy, it also increases efficiency.
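To make the variable-length regime idea concrete, here is a small illustrative decoder for an 8-bit posit with no separate exponent bits (es = 0). This is a sketch written for this post, not code from any posit library; the point is simply that the regime spends a variable number of bits, and whatever is left over becomes fraction:

```python
def decode_posit8(bits: int) -> float:
    """Decode an 8-bit posit with es = 0 (a minimal illustrative "posit8").

    Layout after the sign bit: a variable-length regime (a run of identical
    bits ended by the opposite bit), no exponent bits (es = 0), and whatever
    bits remain form the fraction -- so small regimes leave more precision.
    """
    if bits == 0x00:
        return 0.0
    if bits == 0x80:
        return float("nan")          # NaR ("not a real") in posit terms

    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:                  # negative posits are two's-complement encoded
        bits = (-bits) & 0xFF

    # Walk the regime: count the run of identical bits after the sign.
    rest = [(bits >> i) & 1 for i in range(6, -1, -1)]
    first = rest[0]
    run = 1
    while run < len(rest) and rest[run] == first:
        run += 1
    k = (run - 1) if first == 1 else -run

    # Skip the regime and its terminating bit; everything left is fraction.
    frac_bits = rest[run + 1:]
    frac = sum(b << (len(frac_bits) - 1 - i) for i, b in enumerate(frac_bits))
    frac_value = frac / (1 << len(frac_bits)) if frac_bits else 0.0

    return sign * (2.0 ** k) * (1.0 + frac_value)

print(decode_posit8(0b01000000))  # 1.0  (short regime, five fraction bits)
print(decode_posit8(0b01001000))  # 1.25
print(decode_posit8(0b01110000))  # 4.0  (longer regime, fewer fraction bits)
```

Numbers near 1.0 get the shortest regimes, and therefore the most fraction bits; very large and very small numbers trade fraction bits for regime bits. That is exactly the "denser representation" the quote describes.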
From the perspective of the State/Optimization/Surface (SOS) tradeoff, there should be some increase in complexity somewhere in the overall system—if you have not found the tradeoffs, you have not looked hard enough. Indeed, what we find is an increase in the amount of state carried in the data channel itself, along with additional code that knows how to deal with this new way of representing numbers.
It's always interesting to find situations in other information technology fields where discussions parallel to discussions in the networking world are taking place. Many times, you can see people encountering the same design tradeoffs we see in network engineering and protocol design.
Design Intelligence from the Hourglass Model
Over at the Communications of the ACM, Micah Beck has an article up about the hourglass model. While the math is quite interesting, I want to focus on transferring the observations from the realm of protocol and software systems development to network design. Specifically, let's start with the concepts and terminology, which are very useful. Take a typical design, such as this—

The first key point made in the paper is this—
The thin waist of the hourglass is a narrow straw through which applications can draw upon the resources that are available in the less restricted lower layers of the stack.
A somewhat obvious point to be made here is applications can only use services available in the spanning layer, and the spanning layer can only build those services out of the capabilities of the supporting layers. If fewer applications need to be supported, or the applications deployed do not require a lot of “fancy services,” a weaker spanning layer can be deployed. Based on this, the paper observes—
The balance between more applications and more supports is achieved by first choosing the set of necessary applications N and then seeking a spanning layer sufficient for N that is as weak as possible. This scenario makes the choice of necessary applications N the most directly consequential element in the process of defining a spanning layer that meets the goals of the hourglass model.
Beck calls the weakest possible spanning layer to support a given set of applications the minimally sufficient spanning layer (MSSL). There is one thing that seems off about this definition, however—the correlation between the number of applications supported and the strength of the spanning layer. There are many cases where a network supports thousands of applications, and yet the network itself is quite simple. There are many other cases where a network supports just a few applications, and yet the network is very complex. It is not the number of applications that matter, it is the set of services the applications demand from the spanning layer.
Based on this, we can change the definition slightly: an MSSL is the weakest spanning layer that can provide the set of services required by the applications it supports. This might seem intuitive or obvious, but it is often useful to work these kinds of intuitive things out, so they can be expressed more precisely when needed.
First lesson: the primary driver in network complexity is application requirements. To make the network simpler, you must reduce the requirements applications place on the network.
There are, however, several counter-intuitive cases here. For instance, TCP is designed to emulate (or abstract) a circuit between two hosts—it creates what appears to be a flow-controlled, error-free channel with no drops on top of IP, which has no flow control and drops packets. In this case, the spanning layer (IP), or the wasp waist, does not support the services the upper layer (the application) requires.
In order to make this work, TCP must add a lot of complexity that would normally be handled by one of the supporting layers—in fact, TCP might, in some cases, recreate capabilities available in one of the supporting layers, but hidden by the spanning layer. There are, as you might have guessed, tradeoffs in this neighborhood. Not only are the mechanisms TCP must use more complex than the ones some supporting layer might have used, TCP represents a leaky abstraction—the underlying connectionless service cannot be completely hidden.
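A toy illustration of the kind of machinery this requires—not TCP itself, just a minimal stop-and-wait sketch over a simulated lossy layer, with made-up names and loss rates:

```python
import random

def unreliable_send(packet, loss_rate=0.3):
    """The 'IP layer' of the sketch: delivers a packet, or silently drops it."""
    return packet if random.random() > loss_rate else None

def reliable_send(data, max_tries=50):
    """A stop-and-wait sketch: keep retransmitting each segment until it
    gets through, hiding loss from the caller -- at the cost of sequence
    numbers, retry state, and timeout logic the lower layer never needed."""
    delivered = []
    for seq, segment in enumerate(data):
        for _ in range(max_tries):
            received = unreliable_send((seq, segment))
            if received is not None:      # receiver got it; assume an ACK returns
                delivered.append(received[1])
                break
        else:
            raise TimeoutError("peer unreachable")   # the abstraction leaks here
    return delivered

random.seed(1)
print(reliable_send(["a", "b", "c"]))  # ['a', 'b', 'c'] despite ~30% loss
```

Even in this tiny sketch the abstraction leaks: the caller still sees variable latency, and a long enough loss streak surfaces as an error. The circuit illusion is never complete.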
Take another instance more directly related to network design. Suppose you aggregate routing information at every point where you possibly can. Or perhaps you are using BGP route reflectors to manage configuration complexity and route counts. In most cases, this will mean information is flowing through the network suboptimally. You can re-optimize the network, but not without introducing a lot of complexity. Further, you will probably always have some form of leaky abstraction to deal with when abstracting information out of the network.
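A tiny illustration of how aggregation discards information, using Python's ipaddress module (the prefixes and router names are made up for the example):

```python
import ipaddress

# Two more-specific routes, each best reached through a different exit.
specifics = {
    ipaddress.ip_network("10.1.0.0/24"): "router-A",
    ipaddress.ip_network("10.1.1.0/24"): "router-B",
}

# Aggregating them produces a single covering prefix...
aggregate = list(ipaddress.collapse_addresses(specifics))[0]
print(aggregate)  # 10.1.0.0/23

# ...but the aggregate can only point at one exit, so traffic for one of
# the /24s now takes a suboptimal path. The per-prefix exit data is gone.
```

The aggregate is smaller and simpler, but the per-destination exit information cannot be recovered from it; that is the leaky abstraction in routing terms.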
Second lesson: be careful when stripping information out of the spanning layer in order to simplify the network. There will be tradeoffs, and sometimes you end up with more complexity than what you started with.
A second counter-intuitive case is that of adding complexity to the supporting layers in order to ultimately simplify the spanning layer. It seems, on the model presented in the paper, that adding more services to the spanning layer will always end up adding more complexity to the entire system. MPLS and Segment Routing (SR), however, show this is not always true. If you need traffic steering, for instance, it is easier to implement MPLS or SR in the supporting layer than to try to emulate their services at the application level.
Third lesson: sometimes adding complexity in a lower layer can simplify the entire system—although this might seem to be counter-intuitive from just examining the model.
The bottom line: complexity is driven by applications (top down), but understanding the full stack, and where interactions take place, can open up opportunities for simplifying the overall system. The key is thinking through all parts of the system carefully, using effective mental models to understand how they interact (interaction surfaces), and considering the optimization tradeoffs you will make by shifting state to different places.
DORA, DevOps, and Lessons for Network Engineers
DevOps Research and Assessment (DORA) released their 2018 Accelerate report on the state of DevOps at the end of 2018; I’m a little behind in my reading, so I just got around to reading it, and trying to figure out how to apply their findings to the infrastructure (networking) side of the world.
DORA found organizations that outsource entire functions, such as building an entire module or service, tend to perform more poorly than organizations that outsource by integrating individual developers into existing internal teams (page 43). It is surprising companies still think outsourcing entire functions is a good idea, given the many years of experience the IT world has with the failures of this model. Outsourced components, it seems, too often become a bottleneck in the system, especially as contracts constrain your ability to react to real-world changes. Beyond this, outsourcing an entire function not only moves the work to an outside organization, but also the expertise. Once you have lost critical mass in an area, and any opportunity for employees to learn about that area, you lose control over that aspect of your system.
DORA also found a correlation between faster delivery of software and reduced Mean Time To Repair (MTTR) (page 19). On the surface, this makes sense. Shops that deliver software continuously are bound to have faster, more regularly exercised processes in place for developing, testing, and rolling out a change. Repairing a fault or failure requires change; anything that improves the speed of rolling out a change is going to drive MTTR down.
Organizations that emphasize monitoring and observability tended to perform better than others (page 55). This has major implications for network engineering, where telemetry and management are often “bolted on” as an afterthought, much like security. This is clearly not optimal, however—telemetry and network management need to be designed and operated like any other application. Data sources, stores, presentation, and analysis need to be segmented into separate services, so new services can be tried out on top of existing data, and new sources can feed into existing services. Network designers need to think about how telemetry will flow through the management system, including where and how it will originate, and what it will be used for.
These observations about faster delivery and observability should drive a new way of thinking about failure domains; while failure domains are often primarily thought of as reducing the “blast radius” when a router or link fails, they serve two much larger roles. First, failure domain boundaries are good places to gather telemetry because this is where information flows through some form of interaction surface between two modules. Information gathered at a failure domain boundary will not tend to change as often, and it will often represent the operational status of the entire module.
Second, well-placed failure domain boundaries can be used to stake out areas where "new things" can be put into operation with some degree of confidence. If a network has well-designed failure domain boundaries, it is much easier to deploy new software, hardware, and functionality in a controlled way. This enables a more agile view of network operations, including the ability to roll out changes incrementally through a canary process, and to use processes like chaos monkey to understand and correct unexpected failure modes.
Another interesting observation is the j-curve of adoption (page 3):

This j-curve shows the "tax" of building the underlying structures needed to move from a less automated state to a more automated one. Keith's Law operates in part because of this j-curve. Do not be discouraged if it seems to take a lot of work to make small amounts of progress in many stages of system development—the results will come later.
The bottom line: it might seem like a report about software development is too far outside the realm of network engineering to be useful—but the reality is network engineers can learn a lot about how to design, build, and operate a network from software engineers.
Cloudy with a chance of lost context
One thing we often forget in our drive to "more" is this—context matters. Ivan hit on this just a few days ago with a post about the impact of controller failures in software-defined networks.
What is the context in terms of a network controller? To broaden the question—what is the context for anything? The context, in philosophical terms, is the telos, the final end, or the purpose of the system you are building. Is the point of the network to provide neat gee-whiz capabilities, or is it to move data in a way that creates value for the business?
Ivan’s article reminded me of another article I read recently about the larger world of cloud, IoT, and context problems by Bob Frankston, who says:
We are currently building networks with the presupposition that cloud based services are more secure, more agile, and more reliable than we can build locally. Somehow we think “the cloud” will never—can never—fail.
But of course, cloud services can fail. We know this because they have. Office 365 has failed a number of times. Azure failed at least once in 2014 due to operator error. Dyn was taken down by a DDoS attack not so long ago. Google cloud failed not too long ago, as well. Back in 2016, Salesforce suffered a major outage.
The point is not that we should not use cloud services. The point is that systems should be designed so any form of failure in a centralized component should not cause local services, at least at some minimal level, to fail. And yet, how many networks today rely on cloud-based services to operate at all, much less correctly? Would your company’s email still be delivered if your cloud based anti-spam software service failed? Would your company’s building locks work if the cloud based security service you use goes down?
If they would not, do you have a contingency plan in place for when these services do fail? Can you open the doors manually, do you have a way to turn off the spam filtering, do you have a way to manually configure the network to "get by" until service can be restored? Do you know which data flows must be supported in the case of an outage of this sort, or what the impact on the entire system will be? Do you have a plan to restore the network and its services to their optimal state once the external services on which optimal operation relies are available again after an outage?
A larger point can be made here: when pulling services into the network itself, we often rely on unexamined presuppositions, and we often leave the system with unexamined and little-understood risks.
Context matters—what is it you are trying to do? Do you really need to do it this way? What happens if the centralized or cloudy system you rely on fails? It’s not that we should not use or deploy cloud-based systems—but we should try to expose and understand the failure modes and risks we are building into our networks.
It’s not a CLOS, it’s a Clos
Way back in the day, when telephone lines were first being installed, running the physical infrastructure was quite expensive. The first attempt to maximize the infrastructure was the party line. In modern terms, the party line is just an Ethernet segment for the telephone. Anyone can pick up and talk to anyone else who happens to be listening. In order to schedule things, a user could contact an operator, who could then “ring” the appropriate phone to signal another user to “pick up.” CSMA/CA, in essence, with a human scheduler.
This proved to be somewhat unacceptable to everyone other than various intelligence agencies, so the operator’s position was “upgraded.” A line was run to each structure (house or business) and terminated at a switchboard. Each line terminated into a jack, and patch cables were supplied to the operator, who could then connect two telephone lines by inserting a jumper cable between the appropriate jacks.
An important concept: this kind of operator-driven system is nonblocking. If Joe calls Susan, then Joe and Susan cannot also talk to someone other than one another for the duration of their call. If Joe's line is tied up when someone tries to call him, they will receive a busy signal. The network is not blocking in this case, the edge is—because the node the caller is trying to reach is already using 100% of its available bandwidth for an existing call. This is called admission control; all non-blocking networks must either be provisioned so the sum total of the transmitters cannot exceed the cross-sectional bandwidth available in the network, or they must have effective admission control.
Blocking networks did exist in the form of trunk connections, or connections between these switch panels. Trunk connections not only consumed ports on the switchboard, they were expensive to build and required a lot of power to run. Hence, making a "long distance call" cost money because it consumed a blocking resource. It is only when we get to packet switched digital networks that the cost of a "long distance call" drops to the rough equivalent of a "normal" call, and we see "long distance" charges fade into memory (many of my younger readers have never been charged for "long distance calls," in fact, and may not even know what I'm talking about).
The human operators were eventually replaced by relays.

The old “crank to ring” phones were replaced with “dial phones,” which break the circuit between the phone and the relay to signal a digit being dialed. The process begins when you lift the handset, which causes voltage to run across the line. Spinning the dial breaks the circuit once for each digit on the dial. Each time the circuit breaks, the armature on this stack of circular relays is moved. The first digit dialed causes the relay to move up (or down, depending on the relay) the stack. When you pause for a second before dialing the second number, the motors reset so the arm will now move around the circle, starting from the zero position. When the arm reaches the dialed position, it connects your physical line to another relay, which then repeats the process, ultimately reaching the line of the person you want to call and creating a direct electrical connection between the caller and the receiver. We used to have huge banks of these relays, accompanied by 66-style punch-down blocks, and sometimes (in newer systems) spin-down connections for permanent circuits. We mostly “troubleshot” them with WD-40 and electrical contact solution from spray cans (yes, I have personal experience).
These huge relay banks became unwieldy to support, of course, so they were eventually replaced with fabrics—crossbar fabrics.

In a crossbar fabric, each device is “attached twice,” once on the send side, and once on the receive side. When you place a call, your “send” is connected to the receiver's “receive” by making an electrical connection where your “send” line intersects with the receiver's “receive” line. All of this hardware, however, has the property of scaling up rather than out. In order to create a larger telephone switch, you simply have to buy a larger fabric of some kind. You cannot “add to” a fabric once it is built, which means when a region outgrows its fabric, you must either install a new one and connect the two fabrics with trunk lines, or rip the entire fabric out and replace it.
This is the problem Edson Erwin, and later Charles Clos, were trying to solve—how do you build a large-scale switching fabric out of simple components, and yet scale to virtually any size? While Erwin invented the concept in 1938, the concept was formalized by Clos in 1952. Charles Clos, by the way, was French, so the proper pronunciation is “klo,” with a long “o.”
The solution was to use four port switches (or relays, in the case of circuits), each of which are connected into a familiar looking fabric.

This was designed to be a unidirectional fabric, with each edge node (leaf) having a send and a receive side, much like a crossbar. It is fairly obvious the fabric is nonblocking so long as admission control is enforced; if a1 connects to b3, then a1’s entire bandwidth is consumed in that connection, so a1 cannot accept any other connections. If d1 also tries to connect to b3, it will receive the traditional “busy signal.” The network does not block, it just refuses to accept the new connection at the edge.
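For circuit-switched fabrics, Clos's work gives precise conditions for when a three-stage fabric of this kind is nonblocking, in terms of the number of input ports per ingress switch and the number of middle-stage switches. A small sketch of those classic conditions (the function and labels are mine, written for illustration):

```python
def clos_nonblocking(n: int, m: int) -> str:
    """Classify a three-stage Clos fabric using the classic conditions.

    n -- input ports on each ingress-stage switch
    m -- number of middle-stage switches
    """
    if m >= 2 * n - 1:
        return "strictly non-blocking"       # any free in/out pair is connectable
    if m >= n:
        return "rearrangeably non-blocking"  # may need to move existing calls
    return "blocking"

print(clos_nonblocking(4, 7))  # strictly non-blocking
print(clos_nonblocking(4, 4))  # rearrangeably non-blocking
print(clos_nonblocking(4, 3))  # blocking
```

The intuition: with 2n−1 middle switches, even in the worst case where a new call's ingress and egress switches have each filled n−1 middle switches with other calls, one middle switch must remain free for the new call.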
What if you wanted to make this fabric bidirectional? To allow traffic to flow from a1 to b3 while also allowing traffic to flow from a3 to, say, d1? You would “fold” the fabric. This does not involve changing the physical layout at all; folded Clos fabrics are not drawn any differently than non-folded Clos fabrics. They look identical on paper (stop drawing them as if putting a1 and a3 on the same side of the fabric makes them into a “two-stage folded fabric”—this is not what “folded” means). The primary change here is in the “control plane,” or the scheduler, which must be modified to allow traffic to flow in either direction. Packet switching is almost always bidirectional, so all Clos fabrics used in packet switched networks are “folded” in this way.
Another interesting point—you cannot really perform admission control on a packet-switched network, so there is no way to make the fabric “nonblocking.” In a circuit-based network, the number of edge ports can be tied to the amount of cross-sectional bandwidth available in the fabric. In a packet-switched network, you can provision the fabric so the edge ports cannot over-subscribe the cross-sectional bandwidth, of course, which makes the fabric non-contending, rather than non-blocking. Because reliable admission control simply is not possible in a packet-switched network, packet-switched Clos fabrics are always, at best, non-contending rather than non-blocking.
So the next time you put CLOS in a document, or on a white board, don’t. It’s a Clos fabric, not a CLOS fabric. It’s not non-blocking, and you do not “fold” it by drawing it with two stages. Clos fabrics always have an odd number of stages. There is a mathematical reason for this, but this post is already long enough.
Another problem with our common usage of these terms is we call every kind of spine-and-leaf (or leaf-and-spine) fabric a Clos, and we have started calling all kinds of networks, including overlay networks, “fabrics.” This post is already long (as above), so I will leave these for the future.
If you liked this short article, and would like to understand more about fabrics and fabric design, please join me for my upcoming Data Center Design live webinar on Safari Books. I will cover this history there, but I will also cover a lot of other things involved in designing data center fabrics.
Reaction: Overly Attached
The longer you work on one system or application, the deeper the attachment. For years you have been investing in it—adding new features, updating functionality, fixing bugs and corner cases, polishing, and refactoring. If the product serves a need, you likely reap satisfaction for a job well done (and maybe you even received some raises or promotions as a result of your great work).
Attachment is a two-edged sword—without some form of attachment, it seems there is no way to have pride in your work. On the other hand, attachment leads to poorly designed solutions. For instance, we all know the hyper-certified person who knows every in and out of a particular vendor’s solution, and hence solves every problem in terms of that vendor’s products. Or the person who knows a particular network automation system and, as a result, solves every problem through automation.
The most pernicious forms of attachment in the network engineering world are to a single technology or vendor. One of the cycles I have seen play out many times across the last 30 years is: a new idea is invented; this new idea is applied to every possible problem anyone has ever faced in designing or operating a network; the resulting solution becomes overburdened and complicated; people complain about the complexity of the solution and rush to… the next new idea. I could name tens or hundreds of technologies that have been through this cycle over time.
Another related cycle: a team adopts a new technology in order to solve a problem.
Kate points out some very helpful ways to solve over-attachment at an organizational level. For instance, aligning on goals and purpose, and asking everyone to be open to ideas and alternatives. But these organizational level solutions are more difficult to apply at an individual level. How can this be applied to the individual—to your life?
Perhaps the most important piece of advice Kate gives here is ask for stories, not solutions. In telling stories you are not eliminating attachment but refocusing it. Rather than becoming attached to a solution or technology, you are becoming attached to a goal or a narrative. This accepts that you will always be attached to something—in fact, that it is ultimately healthy to be attached to something outside yourself in a fundamental way. The life that is attached to nothing is ugly and self-centered, ultimately failing to accomplish anything.
Even here, however, there are tradeoffs. You can attach yourself to the story of a company, dedicating yourself to that single brand. To expand this a little, then, you should focus on stories about solving problems for people rather than stories about a product or technology. This might mean telling people they are wrong, by the way—sometimes the best thing is not what someone thinks they want.
Stories are ultimately about people. This is something not many people in engineering fields like to hear; most of us are in these kinds of fields because we are introverted, or because we struggle to relate to people in some other way.
To expand this a bit more, you should be willing to tell multiple stories, rather than just one. These stories might overlap or intersect, of course—I have been invested in a story about reducing complexity, disaggregation, and understanding why rather than how for the last ten or fifteen years. These three stories are, in many ways, the same story, just told from different perspectives. You need to allow the story to be shaped, and the path to tell that story to change, over time.
Realize your work is neither as bad as you think it is, nor as good as you think it is. Do not take criticism personally. This is a lesson I had to learn the hard way, from receiving many manuscripts back covered in red marks, either physical or virtual. Failure is not an option; it is a requirement. The more you fail, the more you will actively seek out the tradeoffs, and approach problems and people with humility.
Finally, you need to internalize modularity. Do not try to solve all the problems with a single solution, no matter how neat or new. Part of this is going to go back to understanding why things work the way they do and the limits of people (including yourself!). Solve problems incrementally and set limits on what you will try to do with any single technology.
Ultimately, refusing to become overly attached is a matter of attitude. It is something that is learned through hard work, a skill developed across time.
When should you use IPv6 PA space?
I was reading RFC8475 this week, which describes some IPv6 multihoming ‘net connection solutions. This set me to thinking about when you should use IPv6 PA space. To begin, it’s useful to review the concepts of IPv6 PI and PA space.
PI, or provider independent, space is assigned by a regional Internet registry to network operators who can show they need an address space that is not tied to a service provider. These requirements generally involve having a specific number of hosts, showing growth in the number of IPv6 addresses used over time, and other factors which depend on the regional registry providing the address space. PA, or provider assigned, IPv6 addresses are assigned by a provider, from its PI pool, to an operator to which it is providing connectivity service.
There are two main differences between these two kinds of addresses. PI space is portable, which means the operator can take the address space with them when they change providers. PI space is also fixed; it is (generally) safe to use PI space as you might use private or other stable IP address spaces; you can assign these addresses to individual subnets, hosts, etc., and count on them remaining the same over time. If everyone obtained PI space, however, the IPv6 routing table in the default free zone (DFZ) could explode. PI space cannot be aggregated by the operator's upstream provider precisely because it is portable in this way.
PA space, on the other hand, can be aggregated by your upstream provider because it is assigned by the provider. On the other hand, the provider can change the address block assigned to its customer at any time. The general idea is the renumbering capabilities built into IPv6 make it possible to “not care” about the addresses assigned to individual hosts on your network.
How does this work out in real life? Consider the following network—

Assume AS65000 assigns 2001:db8:1:1::/64 to the operator, and AS65001 assigns 2001:db8:2:2::/64. IPv6 provides mechanisms for A, B, and D to obtain addresses from within these two ranges, so each device has two IP addresses. Now assume A wants to send a packet to some site connected to the public Internet. If it sources the packet from its address in the 1:1::/64 range, A should send the packet to E; if it sources the packet from its address in the 2:2::/64 range, it should send the packet to F. This behavior is described in RFC6724, rule 5.5:
Prefer addresses in a prefix advertised by the next-hop. If SA or SA’s prefix is assigned by the selected next-hop that will be used to send to D and SB or SB’s prefix is assigned by a different next-hop, then prefer SA. Similarly, if SB or SB’s prefix is assigned by the next-hop that will be used to send to D and SA or SA’s prefix is assigned by a different next-hop, then prefer SB.
If the host uses this solution, it needs to remember which sources it has used in the past for which destinations—at least for the length of a single session, at any rate. RFC6724, however, notes supporting rule 5.5 is a SHOULD, rather than a MUST, which means some hosts may not have this capability. What should the network operator do in this case?
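The core of rule 5.5 can be sketched in a few lines. This is a simplified illustration of the idea, not an implementation of the full RFC 6724 algorithm; the helper name, addresses, and prefixes are invented for the example:

```python
import ipaddress

def prefer_source(candidates, next_hop_prefixes):
    """Sketch of RFC 6724 rule 5.5: prefer the candidate source address
    that falls inside a prefix advertised by the selected next-hop."""
    for src in candidates:
        addr = ipaddress.ip_address(src)
        if any(addr in ipaddress.ip_network(p) for p in next_hop_prefixes):
            return src
    return candidates[0]  # no match: fall back to the first candidate

sources = ["2001:db8:1:1::a", "2001:db8:2:2::a"]
# If the selected next-hop (router F in the diagram) advertised 2:2::/64...
print(prefer_source(sources, ["2001:db8:2:2::/64"]))  # 2001:db8:2:2::a
```

When the host picks its source this way, packets naturally leave through the provider that assigned the address, and the routers need no special policy at all.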
RFC8475 suggests using a set of policy based routing or filter based forwarding policies at the routers in the network to compensate. If E, for instance, receives a packet with a source in the range 2:2::/64 range, it should route the packet to F for forwarding. This will generally mean forwarding the packet out the interface on which E received the packet. Likewise, F should have a local policy that forwards packets it receives with a source address in the 1:1::/64 range to E. RFC8475 provides several examples of policies which would work for a number of different situations (for active/standby, active/active, one border router or two).
There are, however, two failure modes here. For the first one, assume AS65000 decides to assign the operator another IPv6 address range. IPv6’s renumbering capabilities will take care of getting the correct addresses onto A, B, and D—but the policies at E and F must be manually updated for the new address space to work correctly. It should be possible to automate the management of these filters in some way, of course, but the complexity injected into the network is larger than you might initially think.
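One way to contain that complexity is to generate the routers' source-based policies from a single record of the currently assigned PA prefixes, so a renumbering event means changing one variable rather than editing each router. A hypothetical sketch (the routers, prefixes, and policy syntax are all made up for illustration):

```python
# The single source of truth: which PA prefix each border router's
# provider has currently assigned.
ASSIGNED = {"E": "2001:db8:1:1::/64", "F": "2001:db8:2:2::/64"}

def policies_for(router: str) -> list[str]:
    """Each border router forwards traffic sourced from the *other*
    provider's prefix to the router attached to that provider."""
    return [
        f"match source {prefix} -> forward to {peer}"
        for peer, prefix in ASSIGNED.items()
        if peer != router
    ]

print(policies_for("E"))  # ['match source 2001:db8:2:2::/64 -> forward to F']
```

Even with this automation in place, something must still detect the renumbering event and push the regenerated policies to the routers, which is exactly the hidden complexity described above.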
The second failure relates to a deeper problem. What if B is not allowed, by policy, to talk to D? If the addressing in the network were consistent, the operator could set up a filter at C to prevent traffic from flowing between these two devices. When the network is renumbered, any such filters must be reconfigured, of course. A second instance of this same kind of failure: what if D is the internal DNS server for the network? While the DNS server's address can be pushed out through the IPv6 renumbering capabilities (through RAs, specifically), some manual or automated configuration must be adjusted to get the new address into the IPv6 advertisements so it can be distributed.
The short answer to the question above—when should you use PA space for a network?—comes down to this: when you have a very small, simple network behind a set of routers connecting to the ‘net, where the hosts attached to the network support RFC6724 rule 5.5, and intra-site communication is very simple (or there is no intra-site communication at all). Essentially, readdressing hosts is not the only problem you must solve when renumbering a network.
