Papers

The Cloud Begins with Coal

The information economy is a blue-whale economy with its energy uses mostly out of sight. Based on a mid-range estimate, the world’s Information-Communications-Technologies (ICT) ecosystem uses about 1,500 TWh of electricity annually, equal to all the electric generation of Japan and Germany combined—as much electricity as was used for global illumination in 1985. The ICT ecosystem now approaches 10% of world electricity generation. Or in other energy terms—the zettabyte era already uses about 50% more energy than global aviation.

Local Copy
Public Copy

Jupiter Rising

We present our approach for overcoming the cost, operational complexity, and limited scale endemic to datacenter networks a decade ago. Three themes unify the five generations of datacenter networks detailed in this paper. First, multi-stage Clos topologies built from commodity switch silicon can support cost-effective deployment of building-scale networks. Second, much of the general, but complex, decentralized network routing and management protocols supporting arbitrary deployment scenarios were overkill for single-operator, pre-planned datacenter networks.
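
The scaling argument is easy to make concrete. As a rough sketch (not the Jupiter design itself), a 3-stage k-ary fat-tree, one standard folded-Clos construction, built from k-port commodity switches supports k³/4 hosts:

```python
# Back-of-the-envelope sizing for a 3-stage k-ary fat-tree (a folded Clos).
# This illustrates the scaling argument only; the Jupiter fabrics use
# related but more elaborate multi-stage Clos topologies.

def fat_tree_capacity(k: int) -> dict:
    """Capacity of a 3-stage fat-tree built from k-port switches (k even)."""
    assert k % 2 == 0, "the fat-tree construction needs an even port count"
    return {
        "pods": k,
        "edge_switches": k * k // 2,        # k/2 edge switches per pod
        "aggregation_switches": k * k // 2,
        "core_switches": (k // 2) ** 2,
        "hosts": k ** 3 // 4,               # k/2 hosts per edge switch
    }

# Commodity 64-port switch silicon already yields a building-scale network:
print(fat_tree_capacity(64))  # hosts: 65536
```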

Local Copy
Public Copy

No Silver Bullet

All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. … Therefore it appears that the time has come to address the essential parts of the software task, those concerned with fashioning abstract conceptual structures of great complexity.

Local Copy
Public Copy

Science and Complexity

Science has led to a multitude of results that affect men’s lives. Some of these results are embodied in mere conveniences of a relatively trivial sort. Many of them, based on science and developed through technology, are essential to the machinery of modern life. Many other results, especially those associated with the biological and medical sciences, are of unquestioned benefit and comfort. Certain aspects of science have profoundly influenced men’s ideas and even their ideals. Still other aspects of science are thoroughly awesome.

Local Copy
JSTOR Copy

On the criteria to be used in decomposing systems into modules

This paper discusses modularization as a mechanism for improving the flexibility and comprehensibility of a system while allowing the shortening of its development time. The effectiveness of a “modularization” is dependent upon the criteria used in dividing the system into modules. Two system design problems are presented, and for each, both a conventional and unconventional decomposition are described. It is shown that the unconventional decompositions have distinct advantages for the goals outlined. The criteria used in arriving at the decompositions are discussed. The unconventional decomposition, if implemented with the conventional assumption that a module consists of one or more subroutines, will be less efficient in most cases. An alternative approach to implementation which does not have this effect is sketched.
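
The unconventional criterion the paper argues for is what we now call information hiding: each module hides one design decision behind an interface. A minimal sketch, loosely modeled on the line-storage module from the paper's KWIC index example (names here are illustrative, not the paper's):

```python
# Information hiding, the paper's "unconventional" decomposition criterion:
# this module hides HOW lines are stored; callers see only the interface.

class LineStorage:
    def __init__(self):
        # Hidden decision: lines as lists of words. Could equally be packed
        # characters or an index into a file without affecting callers.
        self._lines: list[list[str]] = []

    def add_line(self, words: list[str]) -> None:
        self._lines.append(list(words))

    def word(self, line: int, pos: int) -> str:
        return self._lines[line][pos]

    def line_count(self) -> int:
        return len(self._lines)

# If the representation changes, only this module changes; the circular-shift
# and alphabetizing modules built on top of it are untouched.
```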

Local Copy
Public Copy

Loop-Free Routing Using Diffusing Computations

A family of distributed algorithms for the dynamic computation of the shortest paths in a computer network or internet is presented, validated, and analyzed. According to these algorithms, each node maintains a vector with its distance to every other node. Update messages from a node are sent only to its neighbors; each such message contains a distance vector of one or more entries, and each entry specifies the length of the selected path to a network destination, as well as an indication of whether the entry constitutes an update, a query, or a reply to a previous query.

This is the first description of DUAL, the shortest path algorithm used in EIGRP and in some more recent protocols.
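
A minimal sketch of the state and messages the abstract describes: each node keeps a distance vector, and sends entries to its neighbors tagged as an update, a query, or a reply. This shows only the structures; the diffusing-computation machinery that makes DUAL loop-free is omitted.

```python
# Per-node state and neighbor messages as described in the abstract.
# Structures only; the feasibility conditions and diffusing computations
# that guarantee loop freedom are not shown.

from dataclasses import dataclass
from enum import Enum

class EntryKind(Enum):
    UPDATE = "update"   # new distance information
    QUERY = "query"     # asks neighbors for an alternative path
    REPLY = "reply"     # answers a previous query

@dataclass
class VectorEntry:
    destination: str    # a network destination
    distance: float     # length of the selected path to it
    kind: EntryKind

@dataclass
class Node:
    name: str
    distance_vector: dict[str, float]   # destination -> current distance

    def entry_for_neighbors(self, dest: str, kind: EntryKind) -> VectorEntry:
        return VectorEntry(dest, self.distance_vector[dest], kind)

a = Node("A", {"B": 1.0, "C": 3.0})
print(a.entry_for_neighbors("C", EntryKind.UPDATE))
```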

Local Copy
Public Copy

End-to-end Arguments in System Design

This paper presents a design principle that helps guide placement of functions among the modules of a distributed computer system. The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples discussed in the paper include bit error recovery, security using encryption, duplicate message suppression, recovery from system crashes, and delivery acknowledgement. Low level mechanisms to support these functions are justified only as performance enhancements.
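
The bit-error-recovery example is easy to make concrete: even if every link verifies its own frames, corruption introduced between hops (in a router's buffers, on disk) is caught only by a check performed end to end. A minimal sketch, assuming a simple transfer; the function names are illustrative:

```python
# End-to-end bit error recovery, the paper's first example: the sending
# application computes a digest over the data and the receiving application
# verifies it after the transfer. Per-link CRCs cannot catch corruption that
# happens between links, so low-level checks are justified only as
# performance enhancements, not as the correctness guarantee.

import hashlib

def send(data: bytes) -> tuple[bytes, str]:
    """Sender side: ship the data together with an end-to-end digest."""
    return data, hashlib.sha256(data).hexdigest()

def receive(data: bytes, digest: str) -> bytes:
    """Receiver side: verify end to end; request a retransmit on failure."""
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError("end-to-end check failed; retransmit")
    return data

payload, digest = send(b"important file contents")
receive(payload, digest)   # raises only if the data was corrupted in transit
```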

Local Copy
Public Copy

Hints for Computer System Design

Studying the design and implementation of a number of computer systems has led to some general hints for system design. They are described here and illustrated by many examples, ranging from hardware such as the Alto and the Dorado to application programs such as Bravo and Star.

Local Copy
Public Copy

High Performance Data Center Networks

This book describes the design and engineering tradeoffs of datacenter networks. It describes interconnection networks from topology and network architecture to routing algorithms, and presents opportunities for taking advantage of the emerging technology trends that are influencing router microarchitecture. With the emergence of “many-core” processor chips, it is evident that we will also need “many-port” routing chips to provide a bandwidth-rich network to avoid the performance limiting effects of Amdahl’s Law. We provide an overview of conventional topologies and their routing algorithms and show how technology, signaling rates and cost-effective optics are motivating new network topologies that scale up to millions of hosts. The book also provides detailed case studies of two high performance parallel computer systems and their networks.
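
The Amdahl's Law point is easy to quantify: if a fraction p of the work parallelizes across N processors, speedup is capped at 1/((1 − p) + p/N), and communication that does not scale behaves like the serial term. A worked sketch:

```python
# Amdahl's Law, which the blurb invokes to argue for bandwidth-rich networks:
# a many-core chip starved by a weak network sees little of its potential
# speedup, because non-scaling communication acts like serial work.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n processors when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even at 95% parallel work, 64 processors give only ~15.4x, not 64x:
print(round(amdahl_speedup(0.95, 64), 1))   # 15.4
```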

Local Copy
Public Copy

The Network is Reliable: An informal survey of real-world communications failures

“The network is reliable” tops Peter Deutsch’s classic list, “Eight fallacies of distributed computing” (https://blogs.oracle.com/jag/resource/Fallacies.html), “all [of which] prove to be false in the long run and all [of which] cause big trouble and painful learning experiences.” Accounting for and understanding the implications of network behavior is key to designing robust distributed programs—in fact, six of Deutsch’s “fallacies” directly pertain to limitations on networked communications. This should be unsurprising: the ability (and often requirement) to communicate over a shared channel is a defining characteristic of distributed programs, and many of the key results in the field pertain to the possibility and impossibility of performing distributed computations under particular sets of network conditions.

Local Copy
Public Copy

Hey, You Have Given Me Too Many Knobs!

Configuration problems are not only prevalent, but also severely impair the reliability of today’s system software. One fundamental reason is the ever-increasing complexity of configuration, reflected by the large number of configuration parameters (“knobs”). With hundreds of knobs, configuring system software to ensure high reliability and performance becomes a daunting, error-prone task. This paper makes a first step in understanding a fundamental question of configuration design: “do users really need so many knobs?”

Local Copy
Public Copy

On the Co-Existence of Distributed and Centralized Routing Control-Planes

Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and OpenFlow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works.

Local Copy
(There used to be a public copy here, but this site now gives an error)