Have you ever looked at your wide area network and wondered … what would the traffic flows look like if this link or that router failed? Traffic modeling of this kind has generally only been available in commercial tools, which means it’s been hard to play with them, learn how they work, and understand how they can be effective. There is, however, an open source alternative: pyNTM.
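
If you want a feel for this kind of what-if analysis before picking up pyNTM itself, the sketch below shows the basic idea using networkx rather than pyNTM’s own API; the topology, link metrics, and the single demand are all invented for the illustration.

```python
# A toy what-if failure analysis: remove a link and see how a demand's
# path and per-link load change. This is a generic illustration using
# networkx, not pyNTM's API; the topology, metrics, and the single
# demand between A and D are invented for the example.
import networkx as nx

def place_demand(graph, src, dst, traffic):
    """Route one demand on the lowest-metric path; return per-link load and the path."""
    path = nx.shortest_path(graph, src, dst, weight="metric")
    loads = {(u, v): traffic for u, v in zip(path, path[1:])}
    return loads, path

g = nx.Graph()
g.add_edge("A", "B", metric=10)
g.add_edge("B", "D", metric=10)
g.add_edge("A", "C", metric=20)
g.add_edge("C", "D", metric=20)

loads, path = place_demand(g, "A", "D", traffic=400)  # a 400Mbit/s demand
print("normal path:", path, loads)

# Now fail the B-D link and place the same demand again.
g.remove_edge("B", "D")
loads, path = place_demand(g, "A", "D", traffic=400)
print("after B-D failure:", path, loads)
```

pyNTM builds this same kind of what-if workflow out into a full traffic modeling library.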

download

Token Ring, in its original form, was—on paper—a very capable physical transport. For instance, because of the token passing capabilities, it could make use of more than 90% of the available bandwidth. In contrast, Ethernet systems, particularly early Ethernet systems using a true “single wire” broadcast domain, cannot achieve nearly that kind of utilization—hence “every device on its own switch port.” The Fiber Distributed Data Interface, or FDDI, is like Token Ring in many ways. For instance, FDDI uses a token to determine when a station connected to the ring can transmit, enabling efficient use of bandwidth.

And yet, Ethernet is the common carrier of almost all wired networks today, and even wireless standards mimic Ethernet’s characteristics. What happened to FDDI?

FDDI is a 100Mbit/second optical standard based on token bus, using an improved, timed-token version of the token passing scheme developed for token ring networks. This physical layer standard had a number of distinctive features. It was a 100Mbit/second standard at a time when Ethernet offered a top speed of 10Mbit/second. Since FDDI could operate over single-mode fiber, it could support distances of up to 200 kilometers, or around 120 miles. Because it was based on a token passing system, FDDI could also support larger frame sizes; while Ethernet was limited to 1500 octet frames, FDDI could support 4352 octet frames.
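
To make the “timed token” idea a bit more concrete, here is a minimal sketch of the rule a station follows; the target rotation time and the measured rotation times are invented for the example, and real FDDI adds synchronous bandwidth allocations, late counters, and claim/recovery procedures.

```python
# Rough sketch of FDDI's timed-token rule (simplified). The TTRT and the
# measured rotation times below are made-up example values.

TTRT_MS = 8.0  # negotiated Target Token Rotation Time, in milliseconds

def async_holding_time(measured_trt_ms):
    """A station may send asynchronous traffic only if the token arrived
    early; it may hold the token for roughly TTRT minus the measured
    rotation time."""
    return max(0.0, TTRT_MS - measured_trt_ms)

for trt in (3.0, 7.5, 9.0):  # time since this station last saw the token
    print(f"token rotation {trt} ms -> may transmit async traffic for "
          f"{async_holding_time(trt):.1f} ms")
# An early token (3.0 ms) leaves plenty of transmit time; a late token
# (9.0 ms) means the ring is busy, so the station passes the token on.
```

This is where the efficiency comes from: stations never collide, and no station holds the token longer than the negotiated target allows.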

FDDI provided different ways to connect a station to the dual ring, which uses one ring as the primary and the second as a backup. A Single Attached Station (SAS) connects only to the primary ring, normally through a concentrator. A Dual Attached Station (DAS) connects to both rings, which allows a DAS to send and receive at 200Mbit/second. A concentrator is dual attached as well. FDDI can react to the failure of a node by optically bypassing the failed node, and can react to a failed link in the ring by wrapping all traffic onto the second ring. Error detection relies on the loss of the token, so even the failure of an optically bypassed node, whether the bypass was triggered by a failure or used simply for faster switching, can be easily detected.

If this is your first encounter with FDDI, you might be wondering: how did Ethernet win against this technology? It seems, on first examination, like the legendary war between VHS and Betamax.

The primary reason given for FDDI’s loss is the cost of the hardware. Once Ethernet reached 100Mbit/second speeds, the lower cost of Ethernet simply made the choice between the two obvious. This is a simple answer, but as with almost all technology stories, it is too simple.

Consider all the terrific features FDDI has natively. It takes much of the load of error detection and correction off the upper layer protocols, although the upper layer protocols still need error correction across the end-to-end path. FDDI has the ability to wrap traffic onto the second ring, but this means maintaining two optical systems, including the fiber, transceivers, CMOS, and ASIC ports; half of these resources will rarely be used. To make matters worse, the two fibers were often buried together and terminated in the same physical hardware, so they shared fate in many regards. The chipsets required to provide optical bypass, wrapping, and all the detection and other mechanisms are very expensive to produce, and the process of building these higher end features into software and hardware is prone to defects and problems. So, yes, FDDI was more expensive than Ethernet, primarily because it tried to bury a lot of functionality in a single layer, using advanced optics and silicon.

The story of FDDI, then, is that it was introduced at just the point where Ethernet was gaining momentum, largely in an attempt to create a better token passing physical layer that would use links more efficiently. Laying fiber, even in a campus, is expensive; it just seemed better to make the best use of the available fiber by transferring costs to the hardware rather than laying more. As the market shifted, and laying fiber became less expensive, the return on investment in all that fancy silicon and software fell until Ethernet had a similar ROI.

A bottom line lesson network engineers can take from this? Solving hard problems requires highly complex systems. Pushing the complexity into a single layer might seem simpler at first, but the costs in that single layer will multiply until the ROI makes the solution unattractive. The fruits of complexity sell, but the costs can be overwhelming.

This is a repost from the ECI blog, which is apparently being taken down.

QUIC is a relatively new data transport protocol developed by Google, and currently in line to become the default transport for the upcoming HTTP standard. Because of this, it behooves every network engineer to understand a little about this protocol, how it operates, and what impact it will have on the network. We did record a History of Networking episode on QUIC, if you want some background.

In a recent Communications of the ACM article, a group of researchers (Kakhki et al.) used a modified implementation of QUIC to measure its performance under different network conditions, directly comparing it to TCP’s performance under the same conditions. Since the current implementations of QUIC use the same congestion control as TCP, Cubic, the only differences in performance should be code tuning in estimating the round-trip time (RTT) for congestion control, QUIC’s ability to form a session in a single RTT, and QUIC’s ability to carry multiple streams in a single connection. The researchers asked two questions in this paper: how does QUIC interact with TCP flows on the same network, and does QUIC perform better than TCP in all situations, or only in some?

To answer the first question, the authors tried running QUIC and TCP over the same network in different configurations, including a single QUIC session with a single TCP session, a single QUIC session with multiple TCP sessions, and so on. In each case, they discovered that QUIC consumed about 50% of the bandwidth; if there were multiple TCP sessions, they would be starved for bandwidth when running in parallel with the QUIC session. For network folk, this means an application implemented using QUIC could well cause performance issues for other applications on the network, something to be aware of. It might be best, if possible, to push QUIC-based applications onto a separate virtual or physical topology with strict bandwidth controls if they cause other applications to perform poorly.

Does QUIC’s ability to consume more bandwidth mean applications developed on top of it will perform better? According to the research in this paper, the answer is: how many balloons fit in a bag? In other words, it all depends. QUIC does perform better when its multi-stream capability comes into play and the network is stable, for instance when transferring variably sized objects (files) across a network with stable jitter and delay. In situations with high jitter or delay, however, TCP consistently outperforms QUIC.

TCP outperforming QUIC is a bit of a surprise in any situation; how is this possible? The researchers used information from their additional instrumentation to discover QUIC does not tolerate out-of-order packet delivery very well because of its fast packet retransmission implementation. Presumably, it should be possible to modify these parameters somewhat to make QUIC perform better.
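
As a rough illustration of why an aggressive retransmission rule hurts under reordering, consider the sketch below. This is not QUIC’s actual code; the threshold values and the delivery order are made up, but it shows how a low reordering threshold turns a late packet into a spurious retransmission, while a more tolerant threshold does not.

```python
# Simplified sketch of a fast-retransmit style rule: a packet is declared
# lost once enough higher-numbered packets have arrived ahead of it.
# The thresholds and delivery order are invented for illustration; this
# is not QUIC's (or TCP's) actual loss-detection code.

def spurious_retransmits(delivery_order, reordering_threshold):
    """Count packets declared lost (and retransmitted) even though every
    packet eventually arrives, given the order in which packets arrive."""
    received = set()
    declared_lost = set()
    highest = -1
    for pkt in delivery_order:
        received.add(pkt)
        highest = max(highest, pkt)
        # Any unseen packet trailing the highest received number by at
        # least the threshold is declared lost and retransmitted.
        for missing in range(highest):
            if missing not in received and missing not in declared_lost:
                if highest - missing >= reordering_threshold:
                    declared_lost.add(missing)
    return len(declared_lost)

# Packets 0-9 all arrive, but packet 2 is delivered late (reordered).
delivery = [0, 1, 3, 4, 5, 6, 7, 2, 8, 9]

for threshold in (3, 6):
    count = spurious_retransmits(delivery, threshold)
    print(f"reordering threshold {threshold}: {count} spurious retransmission(s)")
```

Newer QUIC loss detection specifications define both packet-count and time-based reordering thresholds, which are presumably the kind of parameters the researchers have in mind.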

This would still leave the second problem the researchers found with QUIC’s performance: a large difference between its performance on desktop and mobile platforms. The difference between the two comes down to where QUIC is implemented. Desktop devices (and/or servers) often have smart NICs which implement TCP in the ASIC to speed up packet processing. QUIC, because it runs in user space, only runs on the main processor (it is hard to see how a user space application could run on a NIC; it would probably require a specialized card of some type, but I’ll have to think about this more). The result is that QUIC’s performance depends heavily on the speed of the processor. Since mobile devices have much slower processors, QUIC performs much more slowly on mobile devices.

QUIC is an interesting new transport protocol—one everyone involved in designing or operating networks is eventually going to encounter. This paper gives good insight into the “soul” of this new protocol.

When you think of new Ethernet standards, you probably think about faster and optical. There is, however, an entire world of buildings out there with older copper cabling, particularly in the industrial realm, that could see dramatic improvements in productivity if their control and monitoring systems could be moved to IP. In these cases, what is needed is an Ethernet standard that runs over a single copper pair, and yet offers enough speed to support industrial use cases. Peter Jones joins Jeremy Filliben and Russ White to discuss single pair Ethernet.

download

The IETF works on many things beyond IP and routing—the Media Operations (MOPS) working group is gathering input on media-related operational issues and practices, including “proposed technologies related to the deployment, engineering, and operation of media streaming and manipulation protocols and procedures in the global Internet (inter-domain) and within-domain networking.” Leslie Daigle and Eric Vyncke, the co-chairs of the MOPS working group, join Alvaro Retana and Russ White to discuss the work they are doing.

download

There was a time when Software Defined Networking was going to take over the entire networking world—just like ATM, FDDI, and … so many others before. Whatever happened to SDN, anyway? What is its enduring legacy in the world of network engineering? Terry Slattery, Tom Ammon, and Russ White gather at the hedge to have a conversation about whatever happened to SDN.

download