Technologies that Didn’t: Asynchronous Transfer Mode
One of the common myths of the networking world is that there were no “real” networks before the early days of packet-based networking. As myths go, this is not even a very good one; the world had very large-scale voice and data networks long before distributed routing, before packet switching, and before any of the packet protocols such as IP. In the mid-1980s I participated in replacing a large-scale voice and data network, including hundreds of inverse multiplexers that tied a personnel system together. In the same time frame I also installed hundreds of terminal emulation cards in Zenith Z100 and Z150 systems to allow those computers to connect to mainframes and newer minicomputers on campus.
All of these systems ran over circuit-switched networks, which simply means the two endpoints would set up a circuit before any data actually traveled over it. Packet-switched networks were seen as more efficient at the time because of the complexity of setting these circuits up, along with the massive waste of bandwidth caused by circuits that were always overprovisioned and underused.
The problem with packet-based networks, at that time, was the sheer overhead of switching packets. While frames of data could be switched in hardware, packets could not: each packet could be a different length, and each packet carried an actual destination address rather than some sort of circuit identifier, or tag. Packet switching, however, was quickly becoming the “go-to” technology for a wide range of problems because of its efficient use of network resources and its simplicity of operation.
Asynchronous Transfer Mode, or ATM, was widely seen as a compromise that would provide the best of both circuit and packet switching in a single technology. Data would enter the network in the form of either packets or circuits, would be broken up into fixed-size cells, and the cells would then be switched based on a short, fixed, label-based header. This would allow hardware to switch the cells in a way that is like circuit switching, while retaining many of the advantages of a packet-switched network. In fact, ATM allowed circuit- and packet-switched paths to be used in the same network.
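To make the idea of switching on a short, fixed, label-based header concrete, here is a minimal sketch in Python. The names, ports, and forwarding table contents are purely illustrative, not taken from any real switch; only the fixed numbers (53-octet cells, 5-octet headers, VPI/VCI labels) are standard ATM values.

```python
# Minimal sketch of label-based cell switching, with illustrative names and
# an invented forwarding table. Only the cell sizes and the VPI/VCI idea
# come from ATM itself.
from dataclasses import dataclass

CELL_SIZE = 53                            # every ATM cell is exactly 53 octets
HEADER_SIZE = 5                           # fixed header carrying the VPI/VCI label
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE    # 48 octets of payload

@dataclass
class Cell:
    vpi: int                              # virtual path identifier
    vci: int                              # virtual circuit identifier
    payload: bytes                        # always exactly 48 octets

# Because the label sits at a fixed offset in a fixed-size cell, forwarding is
# a single table lookup plus a label rewrite -- no variable-length parsing and
# no longest-prefix match on a destination address.
forwarding_table = {
    # (incoming VPI, VCI) -> (outgoing port, outgoing VPI, outgoing VCI)
    (0, 100): ("port2", 1, 200),
    (1, 42): ("port3", 0, 77),
}

def switch_cell(cell: Cell) -> tuple:
    port, out_vpi, out_vci = forwarding_table[(cell.vpi, cell.vci)]
    return port, Cell(out_vpi, out_vci, cell.payload)

port, out = switch_cell(Cell(0, 100, bytes(PAYLOAD_SIZE)))
print(port, out.vpi, out.vci)             # port2 1 200
```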
With all this goodness under one technical roof, why didn’t ATM take off? The charts from the usual prognosticators showed markets that were forever “up and to the right.”
The main culprit in the demise of ATM turned out to be the size of the cell. To support a good mix of voice and data traffic, the cell size was set at 53 octets: a 5-octet header and 48 octets of payload. A small packet, then, should fit in a single cell with little wasted space. Larger packets, in theory, could be broken into multiple cells and carried across the network with some reasonable level of efficiency as well. The promise of the future was ATM to the desktop, which would solve the cell-size overhead problem, since applications would generate streams pre-divided into correctly sized packets that used the fixed cell size efficiently.
The reality, however, was far different. The small cell size, combined with the overhead of carrying a network layer header, the ATM header, and the lower-layer data link header, made ATM massively inefficient. Some providers at the time had research showing that while they were filling upwards of 80% of any given link’s bandwidth, the goodput, the amount of useful data actually delivered across the link, was less than 40% of the available bandwidth. On top of this there were problems with out-of-order cells and reassembly, which could cause an entire packet’s worth of data, spread across multiple cells, to be discarded because a single cell was lost. The end was clearly near when articles appeared in popular telecommunications journals comparing ATM to shredding the physical mail, attaching a small header to each resulting chad, and reassembling the original letters at the recipient’s end of the postal system. The same benefits were touted: being able to pack mail trucks more tightly, being able to carry a wider array of goods over a single service, and so on.
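The cell tax itself is easy to illustrate with a bit of arithmetic. The sketch below assumes AAL5-style encapsulation (an 8-octet trailer, padding up to a multiple of 48 octets, then a 5-octet header on each 53-octet cell) and ignores SONET framing, LLC/SNAP encapsulation, signaling, and idle cells; it also ignores the retransmissions triggered by lost or misordered cells, which is why the numbers it produces are still far kinder to ATM than the goodput figures providers were actually measuring.

```python
# Rough cell-tax arithmetic under AAL5-style assumptions: 8-octet trailer,
# padding to a 48-octet multiple, 5-octet header on every 53-octet cell.
# All other overhead (SONET, LLC/SNAP, signaling, retransmissions) is ignored.
import math

CELL, PAYLOAD, AAL5_TRAILER = 53, 48, 8

def cells_needed(packet_octets: int) -> int:
    return math.ceil((packet_octets + AAL5_TRAILER) / PAYLOAD)

def efficiency(packet_octets: int) -> float:
    return packet_octets / (cells_needed(packet_octets) * CELL)

for size in (40, 64, 576, 1500):
    print(f"{size:>5}-octet packet -> {cells_needed(size):>2} cells, "
          f"{efficiency(size):.0%} of the line rate is user data")

# Roughly: a 40-octet TCP ACK fills one cell at about 75% efficiency, a
# 64-octet packet spills into two cells and drops to about 60%, and even a
# full 1500-octet packet tops out near 88% before any other overhead.
```

Even under these generous assumptions, small packets pay a heavy tax, and a packet that spills just past a cell boundary pays the most; pile the other headers and the packet-level discards on top, and the goodput numbers above stop looking surprising.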
In the end, ATM to the desktop never materialized, and the inefficiencies of ATM on long-haul links doomed the technology to extinction.
Lessons learned? First, do not count on the market tolerating “a little inefficiency” while the ecosystem catches up to a big new idea. Either the system has immediate, measurable benefits, or it does not; if it does not, it is doomed from the first day of deployment. Second, do not try to solve all the problems in the world at once. Build something simple, use it, and then build it better over time. While we all hate being beta testers, sometimes real-world beta testing is the only way to learn what the world really wants or needs. Third, up-and-to-the-right charts are easy to draw and easy to justify. They look beautiful and impressive on glossy magazine pages and in flashy presentations, but they should always be considered carefully. The underlying technology, and how well it matches real-world needs, matters more than any amount of forecasting and hype.
It was also a matter of timing. I worked at an integrator in the UK during the early ’90s, and we did a couple of ATM25-to-the-desktop installs for deep-pocketed customers. At the same time, switched 16 Mb/s Token Ring was a niche option for Token Ring shops that wanted higher performance and per-drop fault isolation, and FDDI was relatively common for connectivity within the data center. On the LAN, nothing could compete on cost with 100 Mb/s Ethernet: as the NIC and switch chipsets became inexpensive fast, the more complex solutions became an impossible sell, even if they were arguably more elegant (and more interoperable at the time; remember how many variants of 100 meg there were?). ATM LANE survived longer than the other two as a technology between campus switches, with Gigabit Ethernet not fully displacing it until well after 2000.
ATM survives today if you know where to look … 🙂 Thanks for stopping by and adding to the history here!