Technologies that Didn’t: The Fiber Distributed Data Interface
Token Ring, in its original form, was, on paper, a very capable physical transport. Because a station transmits only when it holds the token, a token ring network can make use of more than 90% of the available bandwidth. In contrast, Ethernet systems, particularly early Ethernet systems sharing a true "single wire" broadcast domain, cannot achieve anywhere near that kind of utilization, because stations contend for the wire and collide; hence "every device on its own switch port." The Fiber Distributed Data Interface, or FDDI, is like Token Ring in many ways. For instance, FDDI uses a token to determine when a station connected to the ring can transmit, enabling efficient use of bandwidth.
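To see why contention hurts so much, consider a toy model; this is closer to slotted ALOHA than to real CSMA/CD, and the station counts and transmit probabilities are made up, but it shows the shape of the problem: a slot on a shared wire only carries data when exactly one station decides to transmit.

```python
import random

# Toy model of a shared "single wire": a time slot carries data only
# when exactly one station transmits; two or more talkers collide and
# the slot is wasted. (Closer to slotted ALOHA than to real CSMA/CD,
# but it shows why contention caps utilization.)
def shared_wire_utilization(n_stations, p_transmit, slots=100_000):
    useful = 0
    for _ in range(slots):
        talkers = sum(random.random() < p_transmit for _ in range(n_stations))
        if talkers == 1:  # exactly one talker: the slot is useful
            useful += 1
        # 0 talkers: idle slot; 2+ talkers: collision, slot wasted
    return useful / slots

# Even at the optimal offered load (each station transmitting with
# probability 1/n), the shared wire tops out near 37% utilization.
# A token passing ring, by contrast, always has exactly one sender,
# so every slot some station has data for is used.
print(shared_wire_utilization(n_stations=20, p_transmit=1 / 20))
```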
And yet, Ethernet is the common carrier of almost all wired networks today, and even wireless standards mimic Ethernet’s characteristics. What happened to FDDI?
FDDI is a 100Mbits/second optical standard based on token bus, which used an improved, timed token version of the token passing scheme developed for token ring networks. This physical layer standard had a number of distinctive features. It ran at 100Mbits/second at a time when Ethernet offered a top speed of 10Mbits/second. Because FDDI could operate over single-mode fiber, it could support distances of 200 kilometers, or around 120 miles. And because it was based on a token passing system, FDDI could also support larger frame sizes: while Ethernet was carrying 1500 octet frames, FDDI could carry 4352 octet frames.
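The timed token idea is worth making concrete. Here is a minimal sketch of the rule each station applies when the token arrives, assuming a negotiated Target Token Rotation Time (TTRT) and simplified units; real stations negotiate the TTRT through claim frames and run these timers in hardware.

```python
# Sketch of FDDI's timed token rule (simplified; not the full protocol).

TTRT_MS = 8.0  # Target Token Rotation Time, negotiated at ring startup

class Station:
    def __init__(self, name, sync_allocation_ms):
        self.name = name
        # Guaranteed transmit time per rotation (synchronous traffic).
        self.sync_allocation_ms = sync_allocation_ms
        self.last_token_arrival_ms = 0.0

    def on_token_arrival(self, now_ms):
        # Token Rotation Time: how long the token took to come around.
        trt = now_ms - self.last_token_arrival_ms
        self.last_token_arrival_ms = now_ms

        # Synchronous traffic always gets its pre-allocated slice.
        budget = self.sync_allocation_ms

        # Asynchronous traffic is allowed only when the token is early,
        # i.e. the ring is running ahead of the target rotation time.
        if trt < TTRT_MS:
            budget += TTRT_MS - trt  # Token Holding Time for async frames

        return budget  # time this station may transmit before passing the token

# A hypothetical station with a 1 ms synchronous allocation whose token
# arrives 2 ms early may send 1 ms of sync plus 2 ms of async traffic.
s = Station("hypothetical-A", sync_allocation_ms=1.0)
print(s.on_token_arrival(now_ms=6.0))  # -> 3.0
```

A late token shuts off asynchronous traffic at every station, so the ring self-regulates back toward the target rotation time; this is how the timed token scheme keeps utilization high and latency bounded without any collisions.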
FDDI provides three different ways to connect a station to the ring. A Single Attached Station (SAS) connects only to the primary ring, normally through a concentrator, leaving the second ring as a backup. A Dual Attached Station (DAS) connects to both rings, which allows a DAS to send and receive at 200Mbits/second. A concentrator is dual attached as well. FDDI can react to the failure of a node by optically bypassing it, and can react to a failed link by wrapping all traffic onto the second ring. Failures are detected through the loss of the token, so even the failure of an optically bypassed node, whether it was bypassed because it failed or simply to speed switching through the ring, can be easily detected.
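A toy model of the wrap operation, with hypothetical station names and the topology reduced to an ordered list, may help show what the dual ring buys: when a link on the primary ring fails, the two stations adjacent to the break splice the primary onto the counter-rotating secondary ring, and the two physical rings become one longer logical ring.

```python
# Toy model of an FDDI ring wrap (hypothetical station names; real
# rings perform this in hardware under station management control).

stations = ["A", "B", "C", "D"]  # order around the primary ring

def wrapped_ring(failed_link):
    """Given a failed primary-ring link (i, j) between adjacent
    stations, return one full cycle of the wrapped logical ring."""
    i, j = failed_link
    # The primary ring still works from station j all the way around
    # to station i (the break sits between stations i and j)...
    primary_leg = stations[j:] + stations[: i + 1]
    # ...then station i wraps onto the counter-rotating secondary ring,
    # which carries traffic back through the intermediate stations to
    # station j, where it wraps onto the primary ring again.
    secondary_leg = list(reversed(primary_leg))[1:-1]
    return primary_leg + secondary_leg

# If the link between B and C fails, every station is still reachable,
# at the cost of traversing both rings on each rotation.
print(wrapped_ring((1, 2)))  # -> ['C', 'D', 'A', 'B', 'A', 'D']
```

Note the cost hiding in this resilience: the second ring, and the silicon to wrap onto it, must be built and paid for whether or not it ever carries traffic.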
If this is your first encounter with FDDI, you might be wondering: how did Ethernet win against this technology? It seems, on first examination, like the legendary war between VHS and Betamax.
The primary reason given for FDDI's loss is the cost of the hardware. Once Ethernet reached 100Mbits/second speeds, the lower cost of Ethernet simply made the choice between the two obvious. This is a simple answer, but, as with almost all technology stories, it is too simple.
Consider all the terrific features FDDI has natively. It takes much of the load of error detection and correction off the upper layer protocols, but the upper layer protocols still need error correction in the end-to-end path, so the work is duplicated rather than eliminated. FDDI can wrap traffic onto the second ring when a link or node fails, but this means maintaining two complete optical systems, including the fiber, the transceivers, and the CMOS and ASIC ports, half of which will only rarely be used. To make matters worse, the two fibers were often buried together and terminated in the same physical hardware, so they share fate in many regards. The chipsets required to provide optical bypass, the ability to wrap, and all the detection and other mechanisms are very expensive to produce, and the process of building these higher end features into software and hardware is prone to defects. So, yes, FDDI was more expensive than Ethernet, primarily because it tried to bury a lot of functionality in a single layer, using advanced optics and silicon.
The story of FDDI, then, is that it was introduced just as Ethernet was gaining momentum, largely as an attempt to create a better token passing physical layer that would use links more efficiently. Laying fiber, even in a campus, was expensive; it seemed better to make the best use of the available fiber by shifting costs into the hardware rather than laying more. As the market changed and laying fiber became less expensive, the return on investment in all that fancy silicon and software fell until Ethernet offered a similar, and then a better, return.
A bottom line lesson network engineers can take from this? Solving hard problems requires highly complex systems. Pushing the complexity into a single layer might seem simpler at first, but the costs in that single layer will multiply until the ROI makes the solution unattractive. The fruits of complexity sell, but the costs can be overwhelming.
This is a repost from the ECI blog, which is apparently being taken down.