It Always Takes Too Long

It always takes longer to find a problem in a network than it should. Moving through the troubleshooting process often feels like swimming in molasses—you’re pulling hard, and progress is being made, but never fast enough or far enough to get the application back up and running before that crucial deadline. The “swimming in molasses” effect doesn’t end when the problem is found, either—repairing the problem requires juggling a thousand variables, most of which are unknown, combined with the wit and sagacity of a soothsayer to work with vendors, code releases, and unintended consequences.

It’s enough to make a network engineer want to find a mountain top and assume an all-knowing pose—even if they don’t know anything at all.

The problem of taking longer, though, applies in every area of computer networking. It takes too long for the packet to get there, it takes too long for the routing protocol to converge, and it takes too long to support a new application or server. It takes so long to create and validate a network design change that the hardware, software, and processes created are obsolete before they are used.

Why does it always take too long? A short story often told to me by my grandfather—a farmer—might help.

One morning a farmer got up early, determined to throw some hay down to the horses in the stable. While getting dressed, he noticed one of the buttons on his shirt was loose. “No time for that now,” he thought, “I’ll deal with it later.” Getting out to the barn, he climbed up the ladder to the loft and picked up a pitchfork. When he drove the fork into the hay, the handle broke.

He sighed, took the broken pieces down the ladder, and headed over to his shed to replace the handle—but when he got there, he realized he didn’t have a new handle that would fit. Sighing again, he took the broken pieces to his old trusty truck and headed into town—arriving before the hardware store opened. “Well, I’m already here, might as well get some coffee,” he thought, so he headed to the diner. After a bit, he headed to the store to buy a handle—but just as he walked out the door, the loose button caught on the handle and popped off.

It took a few minutes to search for the lost button, but he found it and headed over to the cleaners to have it sewn back on “real fast.” Well, he couldn’t wander around town in his undershirt, so he just stepped next door to the barber’s, where there were a few friendly games of checkers already in progress. He played a couple of games, then the barber came out to remind him that he needed a haircut (a thing barbers tend to do all the time for some reason), so he decided to have it done. “Might as well not waste the time in town now I’m here,” he thought.

The haircut finished, he went back to get his shirt, and realized it was just about lunch. Back to the diner again. Once he was done, he jumped in his truck and headed back to the farm. And then he realized—the horses were hungry, the hay hadn’t been pitched, and … his pitchfork was broken.

And this is why it always takes longer than it should to get anything done with a network. You take the call and listen to the customer talk about what the application is doing, which takes half an hour. You then think about what might be wrong, perhaps kicking a few routers “just for good measure” before you start troubleshooting in earnest. You look for a piece of information you need to understand the problem, only to find the telemetry system doesn’t collect that data “yet”—so you either open a ticket (a process that takes half an hour), or you “fix it now” (which takes several hours). Once you have that information, you form a theory, so you telnet into a network device to check on a few things… only to discover the device you’re looking at has the wrong version of code… This requires a maintenance window to fix, so you put in a request…

Once you even figure out what the problem is, you encounter a series of hurdles lined up in front of you. The code needs to be upgraded, but you have to contact the vendor to make certain the new code supports all the “stuff” you need. The configuration has to be changed, but you have to determine if the change will impact all the other applications on the network. You have to juggle a seemingly infinite number of unintended consequences in a complex maze of software and hardware and people and processes.

And you wonder, the entire time, why you just didn’t learn to code and become a backend developer, or perhaps a mountain-top guru.

So the next time you think it’s taking too long to fix the problem, or design a new addition to the network, or for the vendor to create that perfect bit of code, remember the farmer, and the button that left the horses hungry.

Whatever It Is, You Need More (RFC1925, Rule 9)

There is never enough. Whatever you name in the world of networking, there is simply not enough. There are not enough ports. There is not enough speed. There is not enough bandwidth. Many times, the problem of “not enough” manifests itself as “too much”—there is too much buffering and there are too many packets being dropped. Not so long ago, the Internet community decided there were not enough IP addresses and expanded the address space from 32 bits in IPv4 to 128 bits in IPv6. The IPv6 address space is almost unimaginably huge—2 to the 128th power is about 340 trillion, trillion, trillion addresses. That is enough to provide addresses to stacks of 10 billion computers blanketing the entire Earth. Even a single subnet of this space is enough to provide addresses for a full data center where hundreds of virtual machines are being created every minute; each /64 (the default subnet allocation size in IPv6) contains the equivalent of 4 billion complete IPv4 address spaces.
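
The arithmetic behind these numbers is easy to check. The short Python sketch below simply works through the powers of two quoted above; nothing here is measured or estimated.

```python
# Back-of-the-envelope check of the IPv6 address-space numbers quoted above.

ipv4_space = 2 ** 32          # total IPv4 addresses (~4.3 billion)
ipv6_space = 2 ** 128         # total IPv6 addresses (~3.4 x 10^38)
subnet_64 = 2 ** 64           # addresses in a single /64 subnet

print(f"IPv4 addresses:        {ipv4_space:,}")
print(f"IPv6 addresses:        {ipv6_space:,}")
print(f"Addresses in one /64:  {subnet_64:,}")
print(f"IPv4-sized spaces that fit in one /64: {subnet_64 // ipv4_space:,}")   # ~4.3 billion
print(f"/64 subnets available in IPv6:         {ipv6_space // subnet_64:,}")   # ~1.8 x 10^19
```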

But… what if the current IPv6 address space simply is not enough? Engineers working in the IETF have created two different solutions over the years for just this eventuality. In 1994 RFC1606 provided a “letter from the future” describing the eventual deployment of IPv9, which was (in this eventual future) coming to the end of its useful lifetime because the Internet was running out of numbers. RFC1606 notes that IPv9’s 49 levels of hierarchy had proven popular, but not all the levels had found a use. The highest level in use seemed to be level 39, which was being used to address individual subatomic particles. Part of the pressure on the IPv9 address space described in RFC1606 came from the default allocation of about 1 billion addresses to each household; as the number of homes built increased globally, the address space came under increasing strain. The allocation of groups of addresses to recyclable items was not helpful, either, regardless of the ability to multicast to “all cardboard items” in a recycling bin.

An alternate proposal, written many years later, is RFC8135, which considers complex addressing in IPv6. RFC8135 begins by describing the different ways in which a set of numbers, such as the 128-bit space in the IPv6 address, can be represented, including integer, prime, and composite numbers. Each of these is considered in some detail, but eventually rejected for various reasons. For instance, integer (or fixed point) addresses are rejected because the location of a host is not fixed, so fixed point addresses are a poor representation of the host. Prime addresses are likewise rejected because they take too long to compute, and composite addresses are rejected because they are too difficult to differentiate from prime addresses.

RFC8135 proposes a completely different way of looking at the 128 bits available in the IPv6 address space. Rather than treating IPv6’s address space as a simple integer, this specification advocates treating it as a floating point number. This allows for a much larger space, particularly as aggregation can be indicated through scientific notation. The main problem the authors note with this proposal is users may believe that when they assign a floating address to their device, the device itself thereby becomes waterproof and floats. The authors advise users to rely on a waterproofing app, available in most app stores, for this function, rather than counting on the floating address. The authors also note duct tape can be used to permanently attach a floating address to a fixed device, if needed.

The danger, of course, is that in the quest for “more,” network designers, network operators, and protocol designers could end up embracing the ridiculous. It all brings to mind the point Andrew Tanenbaum made in his standard text on networking, Computer Networks. Tanenbaum calculates the bandwidth of a station wagon full of magnetic tape (specifically VHS format) backups. After considering the amount of time it would take to drive the station wagon across the continental United States, he concludes the vehicle has more bandwidth than any link available at that time. A similar calculation could be made with a mid-sized shipping box available from any overnight package carrier, filled with SSD drives (or similar). The conclusion, according to Dr. Tanenbaum, is that networks are a sop to human impatience.
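
For the curious, here is a rough version of the modern calculation in Python. The inputs (a box of one hundred 4 TB SSDs and a 48-hour transit time) are assumptions picked purely for illustration, not a claim about any particular carrier or drive.

```python
# Rough "sneakernet" bandwidth estimate for a box of SSDs shipped overnight.
# All inputs are illustrative assumptions, not measured values.

drives_per_box = 100                   # assumed number of SSDs in the box
bytes_per_drive = 4 * 10**12           # assumed 4 TB per drive
transit_seconds = 48 * 3600            # assumed 48-hour door-to-door transit

total_bits = drives_per_box * bytes_per_drive * 8
bandwidth_bps = total_bits / transit_seconds

print(f"Payload: {drives_per_box * bytes_per_drive / 10**12:.0f} TB")
print(f"Effective bandwidth: {bandwidth_bps / 10**9:.1f} Gbps")
# ~18.5 Gbps of throughput -- respectable bandwidth, terrible latency.
```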

As there is no bound to human impatience, no matter how much you have, as RFC1925 says, you will always need more.

It’s More Complicated than You Think (RFC1925, Rule 8)

It’s not unusual in the life of a network engineer to go entire weeks, perhaps even months, without “getting anything done.” This might seem odd to those who do not work in and around the strange combination of layer 1, layer 3, layer 7, and layer 9 problems network engineers must span and understand, but it’s normal for those in the field. For instance, a simple request to support a new application might require the implementation of some feature, which in turn requires upgrading several thousand devices, leading to the discovery that some number of these devices simply do not support the new software version, requiring a purchase order and change management plan to be put in place to replace those devices, which results in … The chain of dominoes, once it begins, never seems to end.

Or, as those who have dealt with these problems many times might say, it is more complicated than you think. This is such a useful phrase, in fact, it has been codified as a standard rule of networking in RFC1925 (rule 8, to be precise).

Take, for instance, the problem of sending documents through electronic mail—in the real world, there are various mechanisms available to group documents, so the recipient understands which documents go together as a set and which ones are separate: staples, paperclips, binders, folders, etc. In the virtual world, however, documents are just a big blob of bits. How does anyone know which documents go with which in this situation? The obvious solution is to create electronic versions of staples and paperclips, as described in RFC1927. This only seems simple, however; it is more complicated than you think.

For instance, how do you know someone along the document transmission path has not altered the staples and/or paper clips? To prevent staple tampering, electronic staples must be cryptographically signed in some way. In the real world, paper clips (in particular) are removed from documents and re-used to save money and resources. Likewise, there must be some process to discover unused digital document sets so the paper clips may be removed and placed in some form of storage for reuse. Some people like to use differently colored staples or paperclips; how should these be represented in the digital world? RFC1927 describes MIME labels to resolve most of these problems, but there is one final problem that brings the complexity of grouping electronic documents to an entirely new level: metadata creep. What happens when the amount of data required to describe the staple or paperclip becomes larger than the documents being grouped?

Something as simple as representing characters in a language can often be more complex than it might initially seem. RFC5242 attempts to resolve the complexity of the many available encoding schemes with a single coding scheme. Rather than assigning each symbol within a language to a single number within a number space, as ASCII and Unicode do, RFC5242 suggests creating a set of codes which describe how a character looks, rather than what it stands for. This allows the authors to use four principles—if it looks alike, it is alike; if it is the same thing, it is the same thing; sans-serif is preferred; combine characters rather than creating new ones where possible—to create a simplified way to describe any possible character in virtually any “Latin” language. The result requires a bit more space to store in some cases, and is more difficult to process, but it is simpler, at least from some perspective.

RFC5242 reminds me of a protocol custom-developed for an application I once had to troubleshoot—the entire protocol was sent in actual ASCII text. At least it was simpler to read on the network packet capture tool. There are, of course, many other examples of things being more complex than initially thought in the networking world—which is probably a good thing, because it means those many reports of the demise of the network engineer are probably greatly exaggerated.

It is Always Something (RFC1925, Rule 7)

While those working in the network engineering world are quite familiar with the expression “it is always something!,” defining this (often exasperated) declaration is a little trickier. The wise folks in the IETF, however, have provided a definition in RFC1925. Rule 7, “it is always something,” is quickly followed with a corollary, rule 7a, which says: “Good, Fast, Cheap: Pick any two (you can’t have all three).”

You can quickly build a network which works well, but it will be expensive; or you can take your time and build a network that is cheap and works well; or you can quickly build a cheap network that simply does not work well, or… Well, you get the idea. There are many other instances of these sorts of three-way tradeoffs in the real world, such as the (in)famous CAP theorem, which states a distributed database can provide at most two of three guarantees: consistency, availability, and tolerance of network partitions. Eventual consistency, and the problems that flow from it—from microloops to surprise package deliveries (when you thought you ordered one thing, but another was placed in your cart because of a database inconsistency)—are one result. Another form of this three-way tradeoff is the much less famous, but equally real, tradeoff between state, optimization, and surface in network design.

It is possible, however, to build a system which fails at all three measures—a system which is expensive, takes a long time to build, and does not perform well. The fine folks at the IETF have provided examples of such systems.

For instance, RFC1149 describes a system of transporting IPv4 packets over avian carriers, or pigeons. This is particularly useful in areas where electricity and network cabling are not commonly found. To quote the relevant part of the RFC:

The IP datagram is printed, on a small scroll of paper, in hexadecimal, with each octet separated by whitestuff and blackstuff. The scroll of paper is wrapped around one leg of the avian carrier. A band of duct tape is used to secure the datagram’s edges. The bandwidth is limited to the leg length.

The specification of duct tape is quite odd, however; perhaps its water-resistant properties are required for this mode of transport. This mode of transport has been adapted for use with the more modern (in relative terms only!) IPv6 transport in RFC6214. For situations where quality of service is critical, RFC2549 describes quality of service extensions to the transport mechanism.
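
To put some (entirely assumed) numbers on just how slow this transport is, the sketch below estimates the throughput of a single carrier; the scroll size, distance, and flight speed are illustrative guesses, not anything specified in RFC1149.

```python
# Illustrative throughput estimate for a single RFC1149 avian carrier.
# All parameters are assumptions for the sake of the joke, not measurements.

datagram_bytes = 256        # assumed size of the datagram printed on the scroll
flight_km = 30              # assumed distance between the two "routers"
speed_kmh = 60              # assumed cruising speed of the carrier

latency_s = flight_km / speed_kmh * 3600
throughput_bps = datagram_bytes * 8 / latency_s

print(f"One-way latency: {latency_s / 60:.0f} minutes")
print(f"Throughput:      {throughput_bps:.2f} bits per second")
# Roughly 1 bps with a 30-minute latency -- packet loss not included.
```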

To further prove it is possible to build a network which is slow, expensive, and does not work well, RFC4824 describes the transmission of packets through semaphore flag signaling, or SFSS. Commonly used to signal between two ships when no other form of communication is available, SFSS consists of a sender who positions (waves) flags of particular shapes and colors in specific positions to signal individual characters. Rather than transmitting letters or other characters, RFC4824 describes using these flags to transmit 0’s, 1’s, and the framing elements required to carry an IPv4 datagram over long distances. The advantage of such a system is, much like the avian carrier, that it will operate where there is no electricity. The disadvantage is the cost of encoding and decoding the packet’s contents, which may be many orders of magnitude greater than simply using the existing SFSS alphabet to signal the message itself.
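
A rough sense of the resulting data rate can be had from the little sketch below; the signaling rate of one flag position per second is an assumption for illustration, not something RFC4824 specifies.

```python
# How long does one 1500-byte packet take over semaphore flags?
# Assumes one flag position per second, each carrying a single bit,
# and ignores framing overhead -- purely illustrative numbers.

packet_bytes = 1500
signals_per_second = 1      # assumed signaling rate
bits_per_signal = 1         # RFC4824 signals 0s and 1s rather than characters

seconds = packet_bytes * 8 / (signals_per_second * bits_per_signal)
print(f"Transmission time: {seconds / 3600:.1f} hours per 1500-byte packet")
# About 3.3 hours -- and that is before any retransmissions.
```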

Finally, RFC7511 provides a way to consume the most resources on any network while delivering the slowest possible performance, by allowing senders to include a scenic routing option in IPv6 packets. This option notifies routers and other networking devices that the packet would like to take the scenic route to its destination, which will cause paths including avian carriers (as described in RFC6214) to be chosen over any other available path.

You Can Always Add Another Layer of Indirection (RFC1925, Rule 6a)

Many within the network engineering community have heard of the OSI seven-layer model, and some may have heard of the Recursive Internet Architecture (RINA) model. The truth is, however, that while protocol designers may talk about these things and network designers study them, very few networks today are built using any of these models. What is often used instead is what might be called the Infinitely Layered Functional Indirection (ILFI) model of network engineering. In this model, nothing is solved at a particular layer of the network if it can be moved to another layer, whether successfully or not.

For instance, Ethernet is the physical and data link layer of choice over almost all types of physical medium, including optical and copper. No new type of physical transport layer (other than wireless) can succeed unless it can be described as “Ethernet” in some regard or another, much like almost no new networking software can succeed unless it has a Command Line Interface (CLI) similar to the one a particular vendor developed some twenty years ago. It’s not that these things are necessarily better, but they are well-known.

Ethernet, however, goes far beyond providing physical layer connectivity. Because many applications rely on using Ethernet semantics directly, many networks are built with some physical form of Ethernet (or something claiming to be like Ethernet), with IP on top of this. On top of the IP there is some form of tunnel encapsulation, such as VXLAN (itself carried in UDP), GRE, or perhaps even MPLS over UDP. On top of these layers rides … Ethernet. On which IP runs. On which TCP or UDP, or potentially VXLAN, runs. It turns out it is easier to add another layer of indirection to solve many of the problems caused by applications that expect Ethernet than it is to solve them with IP—or any other transport protocol. You’ve heard of turtles all the way down—today we have Ethernet all the way down.
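
One way to see the cost of all this indirection is simply to add up the headers. The sketch below uses the standard header sizes for an inner Ethernet frame carried in VXLAN over UDP over IPv4 over an outer Ethernet link (no VLAN tags, no IPv6 outer header), so treat it as a rough illustration rather than a description of any particular network.

```python
# Header cost of carrying Ethernet inside VXLAN over UDP over IP over Ethernet.
# Standard header sizes, IPv4 outer header, no VLAN tags -- illustrative only.

headers = [
    ("outer Ethernet", 14),
    ("outer IPv4", 20),
    ("outer UDP", 8),
    ("VXLAN", 8),
    ("inner Ethernet", 14),
]

for name, size in headers:
    print(f"{name:15s} {size:3d} bytes")

# Everything after the outer Ethernet header must fit inside the link MTU.
mtu = 1500
inside_mtu = sum(size for name, size in headers if name != "outer Ethernet")
print(f"Encapsulation overhead inside the MTU: {inside_mtu} bytes")   # 50 bytes
print(f"Room left for the inner IP packet:     {mtu - inside_mtu} bytes")
```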

Many other suggestions of this type have been made in network engineering and protocol design across the years, but none of them seem to have been as widely deployed as Ethernet over IP over Ethernet. For instance, RFC3252 notes it has always been difficult to understand the contents of Ethernet, IP, and other packets as they are transmitted from host to host. The eXtensible Markup Language (XML) is, on the other hand, designed to be both machine- and human-readable. A logical solution to the problem of unreadable packets, then, is to add another layer of indirection by formatting all packets, including Ethernet and IP, into XML. Once this is done, there would be no need for expensive or complex protocol analyzers, as anyone could simply capture packets off the wire and read them directly. Adding another layer, in this case, could save many hours of troubleshooting time, and generally reduce the cost of operating a network significantly.
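
As a purely hypothetical illustration (this is not the schema RFC3252 actually defines), the sketch below renders a few IPv4 header fields as XML to show how readable, and how bulky, such packets would become.

```python
# A hypothetical (not RFC3252's actual schema) XML rendering of a few
# IPv4 header fields, to show how readable -- and how bulky -- it gets.

fields = {
    "version": 4,
    "ttl": 64,
    "protocol": "tcp",
    "source": "192.0.2.1",          # documentation addresses (RFC5737)
    "destination": "198.51.100.7",
}

xml = "<ipv4>\n" + "".join(
    f"  <{name}>{value}</{name}>\n" for name, value in fields.items()
) + "</ipv4>"

print(xml)
print(f"XML size: {len(xml)} bytes versus a 20-byte binary IPv4 header")
```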

Once the idea of adding another layer has been fully grasped, the range of problems which can be solved becomes almost limitless. Many companies struggle to find some way to provide secure remote access to their employees, contractors, and even customers. The systems designed to solve this problem are often complex, difficult to deploy, and almost impossible to troubleshoot. RFC5514, however, provides an alternate solution: simply layer an IPv6 transport stream on top of the social media networks everyone already uses. Everyone, after all, already has at least one social media account, and can already reach that social media account using at least one device. Creating an IPv6 stream across social media would provide universal cloud-based access to anyone who desires it.

Such streams could even be encrypted to ensure the operators and users of the underlying social media network cannot see any private information transmitted across the IPv6 channel created in this way.

It is Easier to Move a Problem than Solve it (RFC1925, Rule 6)

Early on in my career as a network engineer, I learned the value of sharing. When I could not figure out why a particular application was not working correctly, it was always useful to blame the application. Conversely, the application owner was often quite willing to share their problems with me, as well, by blaming the network.

A more cynical way of putting this kind of sharing is the way RFC1925, rule 6, puts it: “It is easier to move a problem around than it is to solve it.”

Of course, the general principle applies far beyond sharing problems with your co-workers. There are many applications in network and protocol design, as well. Perhaps the most widespread case deployed in networks today is the movement to “let the controller solve the problem.” Distributed routing protocols are hard? That’s okay, just implement routing entirely on a controller. Understanding how to deploy individual technologies to solve real-world problems is hard? Simple—move the problem to the controller. All that’s needed is to tell the controller what we intend to do, and the controller can figure the rest out. If you have trouble solving any problem, just call it Software Defined Networking, or better yet Intent-Based Networking, and the problems will melt away into the controller.

Pay no attention to the complexity of the code on the controller, or those pesky problems with the CAP theorem, or any protests on the part of long-term engineering staff who talk about total system complexity. They are probably just old curmudgeons who are protecting their territory in order to ensure they have a job tomorrow. Once you’ve automated the process, you can safely ignore how the process works; the GUI is always your best guide to understanding the overall complexity of the system.

Examples of moving the problem abound in network engineering. For instance, it is widely known that managing customers is one of the hardest parts of operating a network. Customers move around, buy new hardware, change providers, and generally make it very difficult for providers by losing their passwords and personally identifying information (such as their Social Security Number in the US). To solve this problem, RFC8567 suggests moving the problem of storing enough information to uniquely identify each person into the Domain Name System. Moving the problem from the customer to DNS clearly solves the problem of providers (and governments) being able to track individuals on a global basis. The complexity and scale of the DNS system is not something to be concerned with, as DNS “just works,” and some method of protecting the privacy of individuals in such a system can surely be found. After all, it’s just software.

If the DNS system becomes too complex, it is simple enough to relieve DNS of the problem of mapping IP addresses to the names of hosts. Instead, each host can be assigned a single address that is used regardless of where it is attached to the network, and hence uniquely identifies the host throughout its lifetime. Such a system is suggested in RFC2100 and appears to be widely implemented in many networks already, at least from the perspective of application developers. Because DNS is “too slow,” application developers find it easier to move the problem DNS is supposed to solve into the routing system by assigning permanent IP addresses.

Another great example of moving a problem rather than solving it is RFC3251, Electricity over IP. Every building in the world, from houses to commercial storefronts, must have multiple cabling systems installed in order to support multiple kinds of infrastructure. If RFC3251 were widely implemented, a single kind of cable (or even fiber optics, if you want your electricity faster) could be installed in all buildings, and power carried over the IP network running on these cables (once the IP network is up and running). Many ancillary problems could be solved with such a scheme—for instance, power usage could be measured on a per-packet basis, rather than the sloppier kilowatt-hour system currently in use. Any bootstrap problems can be referred to the controller. After all, it’s just software.

The bottom line is this: when you cannot figure out how to solve a problem, just move it to some other system, especially if that system is “just software,” so the problem now becomes “software defined.” This is also especially useful if moving the problem can be accomplished by claiming the result is a form of automation.

Moving problems around is always much easier than solving them.