Technologies that Didn’t: Directory Services

One of the most important features of the network operating systems available in the mid-1980s, such as Banyan VINES and Novell NetWare, was their integrated directory system. These directory systems allowed for the automatic discovery of many different kinds of devices attached to a network, such as printers, servers, and computers. Printers, of course, were the most important item in this list, because printers have always been the bane of the network administrator’s existence. An example of one such system, an early version of Active Directory, is shown in the illustration below.

Users, devices, and resources, such as file mounts, were stored in a tree. The root of the tree was (generally) the organization, with Organizational Units (OUs) under this root. Users and devices could belong to an OU, and be given access to devices and services in other OUs through a fairly simple drag-and-drop or GUI-based checkbox interface. These systems were highly developed, making it fairly easy to find any sort of resource, including email addresses of other users in the organization, services such as shared file servers, and—yes—even printers.

The original system of this kind was Banyan’s StreetTalk, which did not have the depth or expressiveness of later systems, like the one shown above from Windows NT, or Novell’s Directory Services. A similar system existed in another network operating system called LANtastic, which was never really widely deployed (although I worked on a LANtastic system in the late 1980s).

The usual “pitch” for deploying these systems was the ease of access control they brought to the administration side, along with the ease of finding resources from the user’s perspective. Suppose you were sitting at your desk and needed to know whom you could contact in some other department, say accounting, about some sort of problem or idea. If you had one of these directory services up and running, the solution was simple: open the directory, look for the accounting OU within the tree, and look for a familiar name. Once you had found someone, you could send them an email, find their phone number, or even—if you had permission—print a document at a printer near their desk for them to pick up. Better than a FAX machine, right?

What if you had multiple organizations that needed to work together? Or what if you wanted a standard way to build these kinds of directories, rather than being required to run one of the network operating systems that supported such a system? There were two industry-wide standards designed to address these kinds of problems: X.500 and LDAP.

The OUs, CNs, and other elements shown in the illustration above are actually an expression of the X.500 directory system. As X.500 was standardized through the late 1980s and 1990s, these network operating systems changed their native directory systems to match the X.500 schema. The ultimate goal was to make these various directory services interoperate through X.500 connectors.
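To make the tree-and-DN structure concrete, here is a minimal sketch in Python. Every name in it (the `Entry` class, the example organization, the helper functions) is my own illustration, not any vendor’s or standard’s API; it simply shows how a distinguished name such as `CN=Jane Doe,OU=Accounting,O=Example` resolves by walking the tree from its root:

```python
# Hypothetical sketch of an X.500-style directory information tree.
# Each entry is named by a Relative Distinguished Name (RDN) such as
# O=, OU=, or CN=; a full Distinguished Name (DN) is the chain of RDNs.

class Entry:
    def __init__(self, rdn, attrs=None):
        self.rdn = rdn              # e.g. "OU=Accounting"
        self.attrs = attrs or {}    # e.g. {"mail": "..."}
        self.children = {}

    def add(self, child):
        self.children[child.rdn] = child
        return child

def lookup(root, dn):
    """Resolve a DN by walking from the root (rightmost RDN) down."""
    node = root
    for rdn in reversed(dn.split(",")):
        node = node.children.get(rdn.strip())
        if node is None:
            return None
    return node

# Build a tiny tree: O=Example -> OU=Accounting -> CN=Jane Doe
directory = Entry("")  # anonymous root of the tree
org = directory.add(Entry("O=Example"))
ou = org.add(Entry("OU=Accounting"))
ou.add(Entry("CN=Jane Doe", {"mail": "jane@example.com"}))

entry = lookup(directory, "CN=Jane Doe,OU=Accounting,O=Example")
print(entry.attrs["mail"])  # jane@example.com
```

The “find someone in accounting” scenario described earlier is exactly this walk: descend to the right OU, then scan the entries beneath it.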

Given all this background, what happened to these systems? Why aren’t these kinds of directories widely available today? While there are many reasons, two stand out.

First, these systems are complex and heavy. Their complexity made them very hard to code and maintain; I can well remember working on a large NetWare Directory Services deployment where objects fell into the wrong place on a regular basis, drive mapping did not work correctly, and objects had to be deleted and recreated to force their permissions to reset.

Large, complex systems tend to be unstable in unpredictable ways. One lesson the information technology world has not learned across the years is that abstraction is not enough; the underlying systems themselves must be simplified in a way that makes the abstraction more closely resemble the underlying reality. Abstraction can cover problems up as easily as it can solve problems.

Second, these systems fit better into a world of proprietary protocols and network operating systems than into a world of open protocols. The complexity driven into the network by trying to route IP, Novell’s IPX, Banyan’s VIP, DECnet, Microsoft’s protocols, Apple’s protocols, etc., made building and managing networks ever more complex. Again, while the interfaces were pretty abstractions, the underlying network was reminiscent of a large bowl of spaghetti. There were even attempts to build IPX/VIP/IP packet translators so a host running VINES could communicate with devices on the then-nascent global Internet.

Over time, the simplicity of IP, combined with the complexity and expense of these kinds of systems, drove them from the scene. Some remnants live on in the directory structure contained in email and office software packages, but they are a shadow of StreetTalk, NDS, and the Microsoft equivalent. The more direct descendants of these systems are the single sign-on and OAuth systems that allow you to use a single identity to log into multiple places.

But the primary function of finding things, rather than authenticating them, has long been left behind. Today, if you want to know someone’s email address, you look them up on your favorite social media network. Or you don’t bother with email at all.

Technologies that Didn’t: ARCnet

In the late 1980s, I worked at a small value added reseller (VAR) around New York City. While we deployed a lot of thinnet (RG-58 coax-based Ethernet, for those who don’t know what thinnet is), we also had multiple customers who used ARCnet.

Back in the early days of personal computers like the Amiga 500, the 8088-based XT (running at 4.77MHz), and the 80286-based AT, all networks were effectively wide area, used to connect PDP-11s and similar gear between college campuses and research institutions. ARCnet was developed in 1976, and became popular in the early 1980s, because it was, at that point, the only available local area networking solution for personal computers.

ARCnet was not an accidental choice in the networks I supported at the time. While thinnet was widely available, it required running coax cable. The only twisted pair Ethernet standard available at the time required new cables to be run through buildings, which could often be an expensive proposition. For instance, one of the places that relied heavily on ARCnet was a legal office in north-central New Jersey. This law office had started out in an older home over a shop on the square of a smaller town—a truly historic building well over a hundred years old. As the law office grew, it purchased adjacent buildings and created connecting corridors through closets and existing halls by carefully opening up passages between the buildings. The basements of the buildings were more-or-less connected anyway, so the original telephone cabling was tied together to create a unified system.

When the law office decided to bring email and shared printers up on Novell NetWare, they called in the VAR I worked for to figure out how to make it all work. The problem we encountered was that the building had been insulated at some point with asbestos fiber filling the walls. Wiring on the surface of the walls and baseboards was rejected because it would destroy the historical character of the building. Running cable through the walls would only be possible if the asbestos was torn out—which would mean removing the walls, again encountering major problems with the historical nature of the building.

The solution? ARCnet can run on the wiring used for plain old telephone circuits. Not very fast, of course; the original specification was 2.5Mbit/s. On the other hand, it was fast enough for printers and email before the days of huge image files and cute cat videos. ARCnet could also run in a “star” configuration, meaning a centralized hub (which we would today call a switch) with each host attached as a spoke or point on the star. This kind of wiring had just been introduced for Ethernet, and so was considered novel, but not widely deployed.

ARCnet was deployed in well over ten thousand networks globally (a lot of networks for that time period), and then was rapidly replaced by Ethernet. The official reason for this rapid replacement was the greater speed of Ethernet—but as I noted above, most of the applications for networks in those days did not really make use of all that bandwidth, even in larger networks. Routers were not a “thing” at this time, but you could still connect several hundred hosts to a single ARCnet or Ethernet segment and expect it to work with the common traffic requirements of the day.

At the small VAR I worked at, we had another reason for replacing ARCnet: it blew up too much. The cables over which POTS services run are unshielded, and hence liable to pick up induced high-voltage spikes from other sources. For instance, we had to be quite intentional about not using POTS lines located within a certain distance of the older wiring in the buildings where it was deployed; a voltage spike could not only cause the network to “blank out” for some amount of time, it could actually put enough voltage on the wires to destroy the network interface cards. We purchased ARCnet interface cards by the case, it seemed. After any heavy thunderstorm, the entire shop went from one ARCnet customer to another replacing cards. At some point, replacing cases of interface cards becomes more expensive than performing asbestos mitigation, or even just running the shielded cable Ethernet on twisted pair requires. It becomes cheaper to replace ARCnet than to keep it running.

An interesting twist to this story—there is current work in the Ethernet working group of the IEEE to make Ethernet run on … the cabling used for very old POTS services. This is effectively the same use case ARCnet filled for many VARs in the late 1980s. The difference, today, is that much more is understood about how to build electronics that can survive high-voltage spikes while still being able to discriminate a signal on a poor transmission medium. Much of this work has been done for the wireless world already.

So ARCnet failed largely because it was a technology ahead of its time in terms of its use case, but merely of its time in its physical and electronic design.

 

Technologies that Didn’t: Asynchronous Transfer Mode

One of the common myths of the networking world is that there were no “real” networks before the early days of packet-based networks. As myths go, this is not even a very good one; the world had very large-scale voice and data networks long before distributed routing, before packet-based switching, and before any of the packet protocols such as IP. I participated in replacing a large-scale voice and data network, including hundreds of inverse multiplexers that tied a personnel system together, in the middle of the 1980s. I also installed hundreds of terminal emulation cards in Zenith Z100 and Z150 systems in the same time frame to allow these computers to connect to mainframes and newer minicomputers on the campus.

All of these systems ran over circuit-switched networks, which simply means the two endpoints would set up a circuit over which data would travel before the data actually traveled. Packet-switched networks were seen as more efficient at the time because of the complexity of setting these circuits up, along with the massive waste of bandwidth caused by circuits that were always overprovisioned and underused.

The problem, at that time, with packet-based networks was the sheer overhead of switching packets. While frames of data could be switched in hardware, packets could not. Each packet could be a different length, and each packet carried an actual destination address, rather than some sort of circuit identifier—a tag. Packet switching, however, was quickly becoming the “go-to” technology solution for a lot of problems because of its efficient use of network resources and simplicity of operation.

Asynchronous Transfer Mode, or ATM, was widely seen as a compromise technology that would provide the best of circuit and packet switching in a single technology. Data would be input into the network in the form of either packets or circuits. The data would then be broken up into fixed-size cells, which would be switched based on a fixed label-based header. This would allow hardware to switch the cells in a way much like circuit switching, while retaining many of the advantages of a packet-switched network. In fact, ATM allowed circuit- and packet-switched paths to both be used in the same network.

With all this goodness under one technical roof, why didn’t ATM take off? The charts from the usual prognosticators showed markets that were forever “up and to the right.”

The main culprit in the demise of ATM turned out to be the size of the cell. In order to support a good mix of voice and data traffic, the cell size was set to 53 octets, 48 of which carry payload. A 48-octet packet, then, would fit in a single cell, with the five-octet cell header as overhead. Larger packets, in theory, should be able to be broken into multiple cells and carried over the network with some level of efficiency as well. The promise of the future was ATM to the desktop, which would solve the cell size overhead problem, since applications would generate streams pre-divided into correctly sized packets to use the fixed cell size efficiently.

The reality, however, was far different. The small cell size, combined with the large overhead of carrying a network layer header, the ATM cell header, and the lower layer data link header, caused ATM to be massively inefficient. Some providers at the time had research showing that while they were filling upwards of 80% of any given link’s bandwidth, the goodput—the amount of useful data actually delivered over the link—was less than 40% of the available bandwidth. There were problems with out-of-order cells and reassembly to add on top of this, causing entire packets’ worth of data, spread across multiple cells, to be discarded. The end was clearly near when articles appeared in popular telecommunications journals comparing ATM to shredding the physical mail, attaching small headers to each resulting chad, and reassembling the original letters at the recipient’s end of the postal system. The same benefits were touted—being able to pack mail trucks more tightly, being able to carry a wider array of goods over a single service, etc.
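The cell-size arithmetic behind these numbers is easy to sketch. The following is my own illustration, not the providers’ research: it segments a packet into 53-octet cells carrying 48 octets of payload each, pads the final cell, and computes how much of the link bandwidth is left carrying actual packet data:

```python
import math

CELL_SIZE = 53  # octets per ATM cell
PAYLOAD = 48    # payload octets per cell (the other 5 are cell header)

def atm_efficiency(packet_size):
    """Fraction of link bandwidth carrying packet data once a packet
    is segmented into fixed-size cells (final cell padded)."""
    cells = math.ceil(packet_size / PAYLOAD)
    return packet_size / (cells * CELL_SIZE)

# A packet that exactly fills its cells still pays the 5-octet header:
print(round(atm_efficiency(48), 3))    # 0.906
# One octet past a cell boundary forces a nearly empty final cell:
print(round(atm_efficiency(49), 3))    # 0.462
# A typical 1500-octet IP packet:
print(round(atm_efficiency(1500), 3))  # 0.884
```

Note this counts only the cell header and padding; stacking the network layer and data link headers mentioned above on top pushes the goodput down further, toward the sub-40% figure the providers reported.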

In the end, ATM to the desktop never materialized, and the inefficiencies of ATM on long-haul links doomed the technology to extinction.

Lessons learned? First, do not count on supporting “a little inefficiency” while the ecosystem catches up to a big new idea. Either the system has immediate, measurable benefits, or it does not. If it does not, it is doomed from the first day of deployment. Second, do not try to solve all the problems in the world at once. Build simple, use it, and then build it better over time. While we all hate being beta testers, sometimes real-world beta testing is the only way to know what the world really wants or needs. Third, up-and-to-the-right charts are easy to justify and draw. They are beautiful and impressive on glossy magazine pages, and in flashy presentations. But they should always be considered carefully. The underlying technology, and how it matches real-world needs, are more important than any amount of forecasting and hype.

The Hedge 51: Tim Fiola and pyNTM

Have you ever looked at your wide area network and wondered … what would the traffic flows look like if this link or that router failed? Traffic modeling of this kind has traditionally only been available in commercial tools, which means it’s been hard to play with these kinds of tools, learn how they work, and understand how they can be effective. There is, however, an open source alternative—pyNTM.


Technologies that Didn’t: The Fiber Distributed Data Interface

Token Ring, in its original form, was—on paper—a very capable physical transport. For instance, because of its token passing scheme, it could make use of more than 90% of the available bandwidth. In contrast, Ethernet systems, particularly early Ethernet systems using a true “single wire” broadcast domain, cannot achieve nearly that kind of utilization—hence “every device on its own switch port.” The Fiber Distributed Data Interface, or FDDI, is like Token Ring in many ways. For instance, FDDI uses a token to determine when a station connected to the ring can transmit, enabling efficient use of bandwidth.

And yet, Ethernet is the common carrier of almost all wired networks today, and even wireless standards mimic Ethernet’s characteristics. What happened to FDDI?

FDDI is a 100Mbit/s optical standard based on token bus, which used an improved, timed-token version of the token passing scheme developed for Token Ring networks. This physical layer standard had a number of fairly unique features. It was a 100Mbit/s standard at a time when Ethernet offered a top speed of 10Mbit/s. Since FDDI could operate over single mode fiber, it could support distances of up to 200 kilometers, or around 120 miles. Because it was based on a token passing system, FDDI could also support larger frame sizes; while Ethernet supported 1500-octet frames, FDDI could support 4352-octet frames.
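A rough way to see why the larger frame size mattered: bigger frames mean fewer headers and fewer per-frame processing events for the same amount of data. The header sizes below are illustrative assumptions of mine (framing overhead varies with the exact frame format and encapsulation), not exact figures from either standard:

```python
import math

ETH_MTU, FDDI_MTU = 1500, 4352  # payload sizes from the text
ETH_HDR, FDDI_HDR = 18, 28      # illustrative framing overheads, not exact

def frames_for(data_octets, mtu):
    """Number of maximum-size frames needed to carry a block of data."""
    return math.ceil(data_octets / mtu)

def overhead_fraction(mtu, header):
    """Fraction of a maximum-size frame consumed by framing overhead."""
    return header / (mtu + header)

# Moving one megabyte of data:
print(frames_for(1_000_000, ETH_MTU))   # 667 Ethernet frames
print(frames_for(1_000_000, FDDI_MTU))  # 230 FDDI frames
print(round(overhead_fraction(ETH_MTU, ETH_HDR), 4))    # 0.0119
print(round(overhead_fraction(FDDI_MTU, FDDI_HDR), 4))  # 0.0064
```

Roughly a third as many frames, and less framing overhead per octet carried—one of the efficiency arguments made for FDDI at the time.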

FDDI provided three different ways to connect a station to the ring. A Single Attached Station (SAS) used one ring as the primary and the second as a backup. A Dual Attached Station (DAS) connected to both rings, which allowed a DAS to send and receive at 200Mbit/s. A concentrator was dual attached as well. FDDI could react to the failure of a node by optically bypassing the failed node, and could react to a failed link in the ring by looping all traffic onto the second ring. Failures were detected through the loss of the token, so even a node that had been optically bypassed—whether because it failed or simply for faster switching—could be easily detected.

If this is your first encounter with FDDI, you might be wondering: how did Ethernet win against this technology? It seems, on first examination, like the legendary war between VHS and Betamax.

The primary reason given for FDDI’s loss is the cost of the hardware. Once Ethernet reached 100Mbit/s speeds, the lower cost of Ethernet simply made the choice between the two obvious. This is a simple answer, but as with almost all technology stories, it is too simple.

Consider all the terrific features FDDI has natively. It takes much of the load of error detection and correction off upper layer protocols—but upper layer protocols still need error correction in the end-to-end path. FDDI has the ability to wrap onto the second ring, but this means maintaining two optical systems, including the fiber, transceivers, CMOS, and ASIC ports. Half of these resources will only rarely be used. To make matters worse, the two fibers were often buried together, and terminated in the same physical hardware, so they share fate in many regards. The chipsets required to provide optical bypass, the ability to wrap, and all the detection and other mechanisms included are very expensive to produce, and the process of building these higher-end features into software and hardware is prone to defects and problems. So, yes—FDDI was more expensive than Ethernet. This is primarily because it tried to bury a lot of functionality in a single layer, using advanced optics and silicon.

The story of FDDI, then, is that it was introduced at just the point where Ethernet was gaining momentum, largely in an attempt to create a better token passing physical layer that would use links more efficiently. Laying fiber, even in a campus, is expensive; it just seemed better to make the best use of the available fiber by transferring costs to the hardware rather than laying more. As the market shifted, and laying fiber became less expensive, the return on investment in all that fancy silicon and software fell until Ethernet had a similar ROI.

A bottom line lesson network engineers can take from this? Solving hard problems requires highly complex systems. Pushing the complexity into a single layer might seem simpler at first, but the costs in that single layer will multiply until the ROI makes the solution unattractive. The fruits of complexity sell, but the costs can be overwhelming.

This is a repost from the ECI blog, which is apparently being taken down.

Is QUIC really Quicker?

QUIC is a relatively new data transport protocol developed by Google, and currently in line to become the default transport for the upcoming HTTP/3 standard. Because of this, it behooves every network engineer to understand a little about this protocol, how it operates, and what impact it will have on the network. We did record a History of Networking episode on QUIC, if you want some background.

In a recent Communications of the ACM article, a group of researchers (Kakhi et al.) used a modified implementation of QUIC to measure its performance under different network conditions, directly comparing it to TCP’s performance under the same conditions. Since the current implementations of QUIC use the same congestion control as TCP—Cubic—the only differences in performance should be code tuning in estimating the round-trip time (RTT) for congestion control, QUIC’s ability to form a session in a single RTT, and QUIC’s ability to carry multiple streams in a single connection. The researchers asked two questions in this paper: how does QUIC interact with TCP flows on the same network, and does QUIC perform better than TCP in all situations, or only some?
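QUIC’s single-RTT session setup is easiest to see by counting round trips before the first byte of application data arrives. The following is a deliberately simplified model of my own—ignoring TCP Fast Open, TLS 1.3’s shorter handshake, and 0-RTT resumption—not a measurement from the paper:

```python
def time_to_first_byte(rtt_ms, setup_rtts):
    """Handshake round trips plus one request/response round trip."""
    return (setup_rtts + 1) * rtt_ms

RTT = 50  # milliseconds; illustrative only

# Classic stack: TCP 3-way handshake (1 RTT) + TLS 1.2 handshake (2 RTTs)
print(time_to_first_byte(RTT, setup_rtts=3))  # 200 ms
# QUIC: transport and crypto handshakes combined into a single RTT
print(time_to_first_byte(RTT, setup_rtts=1))  # 100 ms
```

The longer the path’s RTT, the more this setup saving matters relative to the transfer itself.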

To answer the first question, the authors tried running QUIC and TCP over the same network in different configurations, including single QUIC and TCP sessions, a single QUIC session with multiple TCP sessions, etc. In each case, they discovered that QUIC consumed about 50% of the bandwidth; if there were multiple TCP sessions, they would be starved for bandwidth when running in parallel with the QUIC session. For network folk, this means an application implemented using QUIC could well cause performance issues for other applications on the network—something to be aware of. This might mean it is best, if possible, to push QUIC-based applications into a separate virtual or physical topology with strict bandwidth controls if it causes other applications to perform poorly.

Does QUIC’s ability to consume more bandwidth mean applications developed on top of it will perform better? According to the research in this paper, the answer is: how many balloons fit in a bag? In other words, it all depends. QUIC does perform better when its multi-stream capability comes into play and the network is stable—for instance, when transferring variably sized objects (files) across a network with stable jitter and delay. In situations with high jitter or delay, however, TCP consistently outperforms QUIC.
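The multi-stream advantage is essentially about head-of-line blocking. In this toy model (my own illustration, not from the paper), a single lost packet on one in-order TCP bytestream stalls every object queued at or behind it, while on QUIC’s independent streams it stalls only the stream it belongs to:

```python
def stalled_objects(num_objects, lost_object, independent_streams):
    """Objects that must wait while one lost packet is retransmitted.

    On a single in-order TCP bytestream, every object at or behind
    the loss waits for the retransmission. With QUIC's independent
    streams, only the object whose stream lost the packet waits.
    """
    if independent_streams:
        return [lost_object]
    return list(range(lost_object, num_objects))

# Ten objects being transferred, with a loss in object 3:
print(stalled_objects(10, 3, independent_streams=False))  # objects 3..9 stall
print(stalled_objects(10, 3, independent_streams=True))   # only object 3
```

This also hints at why the advantage evaporates on unstable paths: when losses and reordering are frequent, every stream is stalling anyway, and QUIC’s retransmission behavior becomes the bottleneck instead.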

TCP outperforming QUIC is a bit of a surprise in any situation; how is this possible? The researchers used information from their additional instrumentation to discover QUIC does not tolerate out-of-order packet delivery very well because of its fast packet retransmission implementation. Presumably, it should be possible to modify these parameters somewhat to make QUIC perform better.

This would still leave the second problem the researchers found with QUIC’s performance—a large difference between its performance on desktop and mobile platforms. The difference between these two comes down to where QUIC is implemented. Desktop devices (and/or servers) often have smart NICs which implement TCP in the ASIC to speed up packet processing. QUIC, because it runs in user space, only runs on the main processor (it seems hard to see how a user space application could run on a NIC—it would probably require a specialized card of some type, but I’ll have to think about this more). The result is that QUIC’s performance depends heavily on the speed of the processor. Since mobile devices have much slower processors, QUIC performs much more slowly on mobile devices.

QUIC is an interesting new transport protocol—one everyone involved in designing or operating networks is eventually going to encounter. This paper gives good insight into the “soul” of this new protocol.