According to RFC1925, the second fundamental truth of networking is: No matter how hard you push and no matter what the priority, you can’t increase the speed of light.

However early this problem was first observed in the world of network engineering (see, for instance, Tanenbaum’s “station wagon” example in Computer Networks), human impatience is forever trying to overcome the limitations of the physical world and push more data down the pipe than Mother Nature intended (or Shannon’s theory allows).

One attempt at solving this problem is the infinitely fat pipe (helpfully called an “infan(t)”) described in RFC5984. While packets would still need to be clocked onto such a network, incurring serialization delay, the ability to clock an infinite number of packets onto the network at the same moment in time would represent a massive gain in capacity, potentially reaching speeds faster than the speed of light. The authors of RFC5984 describe several attempts to build such a network, including black fiber, on which the absence of light is what conveys the data. This is problematic, however, because a lack of information can be interpreted differently depending on the context. A pregnant pause has a far different meaning than a shocked pause, for instance, or just a plain pause.

The team experimenting with faster-than-light communication also tried locking netcats up in boxes, but this seemed to work and not work at the same time. Finally, the researchers settled on ESP-based forwarding, in which two people with a telepathic link transmit data over long distances. They compute the delay of such communication at around 350ms, regardless of the distance involved. This is clearly a potential faster-than-light communication medium.

Another plausible option for building infinitely fat pipes is to broadcast everything. If you could reach an entire region in some way at once, it might be possible to build a full mesh of hosts, each transmitting to every other host in the region at the same time, ultimately constituting an infinitely fat pipe. Such a system is described in RFC6217, which covers the transmission of broadcast packets across entire regions using air as a medium. This kind of work is a logical extension of the stretched Ethernet segments often used between widely separated data centers and campuses, only using a more easily accessed medium (the air). The authors of this RFC note the many efficiencies gained from using broadcast-only transmission modes, such as not needing destination addresses, the TCP three-way handshake, or acknowledgements (which reportedly consume an inordinate amount of bandwidth).

Foreseeing the time when faster-than-light networking would be possible, R. Hinden wrote a document detailing some of the design considerations for such networks, which was published as RFC6921. This document is primarily concerned with the ability of the TCP three-way handshake to support an environment where the network’s speed of transmission is so much faster than the speed at which packets are processed or clocked onto the network that an acknowledgement is received before the original packet is transmitted. Hinden suggests it might be possible to use packet drops in normal networks to emulate this behavior, and to find some way to solve it in case faster-than-light networks, such as the ESP network described in RFC5984, become generally available.

More recent, and more realistic, work on faster-than-light networking has been undertaken by the proposed Quantum Networking Research Group in the IRTF, which has published a proposed architecture for a quantum Internet.

From time immemorial, humor has served to capture truth, and the world of computer networks is no different. A notable example is the April 1 RFC series published by the IETF; RFC1925, The Twelve Networking Truths, will serve as our guide.

According to RFC1925, the first fundamental truth of networking is: it has to work. While this might seem overly simplistic, it has proven, over the years, to be much more difficult to implement in real life than it looks in a slide deck. Those with extensive experience of failure can often make a better guess at what can be made to work than those without such experience. The good news is that the experience of failure can be shared, especially through self-deprecating humor.

Consider RFC748, the TELNET RANDOMLY-LOSE Option, the first April 1 RFC published by the IETF. This RFC describes a set of additional signals in the TELNET protocol (for those too young to remember, TELNET is what people used to communicate with hosts before SSH and web browsers!) that instruct the server not to provide random losses through such things as “system crashes, lost data, incorrectly functioning programs, etc., as part of their services.” The RFC notes that many systems apparently have undocumented features that provide such losses, frustrating users and system administrators. The option proposed would instruct the server to disable features which cause these random losses.

Lesson learned? Although one of the general rules of application design is that the network is not reliable, the counter rule suggested by RFC748 is that the application is not reliable, either. This is a key point in the race to Mean Time to Innocence (MTTI). RFC1882, published many years after RFC748, is a veritable guidebook for finding problems in a network, including transceiver failures, databases with broken b-trees, unterminated contacts, and a plethora of other places to look. Published just before Christmas, RFC1882 is an ideal guide for those who want to spend time with their families during the most festive times of the year.

Another common problem in large-scale networks is services that want to operate from the safety and security of an anonymous connection. RFC6593 describes the Domain Pseudonym System, specifically designed to support services that do not wish to be discovered. The specification describes two parties to the protocol, the first being the seeker, or “it,” and the second being the service attempting to hide from it. The seeker begins by sending a transmission declaring the start of the search sequence, called the “ready or not,” followed by a countdown during which “it” is not allowed to peek at a list of available services. During this countdown, the service may change its name or location, although it will be penalized if discovered doing so. The Domain Pseudonym System is the perfect counterpart to the Domain Name System normally used to discover services on large-scale networks, as shown by the many networks that already deploy such a hide-and-seek approach to managing services.

What if all the above guidance for network operators fails, and you are stuck troubleshooting a problem? RFC2321 has an answer: RITA, the Reliable Internetwork Troubleshooting Agent. The typical RITA is described as 51.25cm in length and yellow/orange in color. The first test the operator can perform with the RITA is placing it on the documentation for the suspect system, or on top of the suspect system itself. If the RITA eventually flies away, there is a greater than 90% chance there is a defect in the system tested. There is no guarantee, however, that the defects in the tested system are the root cause of the problem the operator is currently troubleshooting. The RITA has such a high success rate because it is believed that 100% of systems in operation do, in fact, contain defects. The 10% failure rate primarily occurs in cases where the RITA itself dies during the test, or decides to go to sleep rather than flying to some other location.

Each of these methods can help the network operator fulfill the first rule of networking: it has to work.

The truth is, however, that while protocol designers may talk about these things, and network designers study them, very few networks today are built using any of these models. What is often used instead is what might be called the Infinitely Layered Functional Indirection (ILFI) model of network engineering. In this model, nothing is solved at a particular layer of the network if it can be moved to another layer, whether successfully or not.

Sarcasm warning: take the following post with a large grain of salt.

A thousand years from now, when someone is writing the history of computer networks, one thing they will notice (at least I think they will) is how we tend to reduce our language so that as many terms as possible have precisely the same meaning. They might attribute this to marketing, or the hype cycle, or… but whatever the cause, this is clearly a trend in the networking world. Some examples might be helpful, so… forthwith, the reduced terminology of the networking world.

Software Defined Networking (SDN): Used to mean a standardized set of interfaces that enabled open access to the forwarding hardware. Came to mean some form of control plane centralization. Now means automated configuration and management of network devices, centralized control planes, traffic engineering, and just about anything else that seems remotely related to these.

Fabric: Used to mean a regular, non-planar, repeating network topology with scale-out characteristics. Now means any vaguely hierarchical topology (not a ring) with a lot of links.

DevOps: Used to mean applying software development processes to the configuration, operation, and troubleshooting of server and network devices. Now means the same thing as SDN.

Clos: Used to mean a three-stage fabric in which every device in a prior stage is connected to every device in the next stage, all devices have the same number of ports, all traffic is east/west, and the whole design has scale-out characteristics. Now means the same thing as fabric, and is spelled CLOS (aren’t all four-letter words abbreviations?). External links are now commonly attached to the “core” of the Clos, because… well, it kind of looks hierarchical, after all.
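Since the original meaning is worth holding onto, here is a minimal sketch of why that older definition implies scale-out, assuming identical switches and a simple folded (leaf-and-spine) arrangement; the function and variable names are purely illustrative, not drawn from any RFC or vendor documentation.

```python
def clos_capacity(ports: int) -> dict:
    """Rough sizing of a folded three-stage Clos (leaf-and-spine)
    built from identical switches, each with `ports` ports.

    Illustrative assumptions: each leaf splits its ports evenly,
    half toward hosts and half toward spines, and every leaf
    connects to every spine exactly once.
    """
    uplinks = ports // 2            # leaf ports facing the spine stage
    spines = uplinks                # one spine per leaf uplink
    leaves = ports                  # each spine port attaches one leaf
    hosts = leaves * (ports // 2)   # remaining leaf ports face hosts
    fabric_links = leaves * spines  # every leaf connects to every spine
    return {"spines": spines, "leaves": leaves,
            "hosts": hosts, "fabric_links": fabric_links}

# 32-port switches: 16 spines, 32 leaves, 512 host-facing ports.
# 64-port switches: 32 spines, 64 leaves, 2048 host-facing ports.
print(clos_capacity(32))
print(clos_capacity(64))
```

Doubling the port count roughly quadruples the host count without adding layers or aggregation, which is the scale-out characteristic the term originally described; nothing about the topology is hierarchical, and nothing about it has a natural “core.”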

Hierarchical Design: Used to mean a modular, layered network design with specific functions tied to each layer. Generally there were two or three layers, with clear failure domain separation through aggregation and summarization of control plane information. Now means the same thing as fabric.

Cloud: Used to mean the centralization and abstraction of resources to support agile development strategies. Now means… well… the meaning is cloudy at this time, but generally applied to just about anything. Will probably end up meaning the same thing as DevOps, SDN, and fabric.

Network Topology: Used to mean a description of the interconnection system used in building a network. Some kinds of topologies were hub-and-spoke, ring, partial mesh, Clos, Benes, butterfly, full mesh, etc. Now means the same as fabric.

Routing Protocol: Used to mean the protocol, including the semantics and algorithm or heuristic, used to calculate the set of loop-free paths through a network. Includes instances such as IS-IS, EIGRP, and OSPF. Now means BGP, as this is the only protocol used in any production network (except SDN).

Router: Used to mean a device that determines the next hop to which the packet should be forwarded using the layer 3 address, replacing the layer 2 header in the process of forwarding the packet. Now means the same thing as a switch.

Switch: Used to mean a device that determines the port through which a packet should be forwarded based on the layer 2 header, without modifying the packet. Now means any device that forwards packets; has generally replaced “router.”

Security: Used to mean thinking through attack surfaces, understanding protocols and their operation, and how to build a system that is difficult to attack. Now means inserting a firewall into the network.

We used to have a rich set of terms to describe different kinds of topologies, devices, and ways of building networks. Now we seem intent on merging as many terms as possible until they all mean the same thing; we are quickly reducing ourselves to fabric, switch, SDN, and cloud to describe everything.

Which makes me wonder sometimes: what are they teaching in network engineering classes nowadays?