Whither Network Engineering? (Part 3)

In the previous two parts of this series, I have looked at the reasons I think the networking ecosystem is bound to change and why I think disaggregation is going to play a major role in that change. If I am right about the changes happening, what will become of network engineers? The bifurcation of knowledge, combined with the kinds of networks and companies noted in the previous posts in this series, points the way. There will, I think, be three distinct careers where the “network engineer” of today exists on the operational side:

  1. Moving up the stack, towards business, the more management-focused role. This will be captured primarily by the companies that operate in market verticals deep and narrow enough to survive without a strong focus on data, and hence can survive a transition to black box, fully integrated solutions. This position will largely be focused on deploying, integrating, and automating vertically integrated, vendor-driven systems and managing vendor relationships.
  2. Moving up the stack, towards software and business, the disaggregated network engineering role (I don’t have a better name for this presently). This will be in support of companies that value data to the point of focusing on its management as a separate “thing.” The network will no longer be a “separate line item,” however, but rather part of a larger system revolving around the data that makes the company “go.”
  3. Moving down the stack, towards the hardware, the network hardware, rack-and-stack, cabling, power, etc., engineer. Again, I do not have a good name for this role right now.

There will still be a fairly strong “soft division” between design and troubleshooting in the second role. Troubleshooting will primarily be handled by the vendor in the first role.

Perhaps the diagram below will help illustrate what I think is happening, and will continue to happen, in the network engineering field.

The old network engineering role, shown in the lower left corner of the two halves of the illustration, focused on the appliances and circuits used to build networks, with some portion of the job interacting with protocols and management tools. The goal was to provide the movement of data as a service, with minimal regard to the business value of that data. This role will, in my opinion, transition to the entire left side of the illustration as a company moves to black box solutions. The real value offered in this new role will be in managing the contracts and vendors used to supply what is essentially a commodity.

On the right side is what I think the disaggregated path looks like. Here the network engineering role has largely moved away from hardware; this will increasingly become a largely specialized, vendor-driven realm of work. On the other end, the network engineer will focus more on software, from protocols to applications, and how they drive and add value to the business. Again, the role will need to move up the stack towards the business to continue adding value; away from hardware, and towards software.

I could well be wrong, of course, and I have no emotional stake in being right or wrong here.

None of these are invalid choices to make, or bad roles to fill. I do not know what role fits “you” best, your life, nor your interests. I am simply observing what I think is happening in the market, and trying to understand where things are going, because I think this kind of thinking helps provide clarity in a confusing world.

In both the first and second roles, you must move up the stack to add value. This is what happened in the worlds of electronic engineering and personal computers as they both disaggregated away from an appliance model. Living through these past experiences is part of what leads me to believe this same kind of movement will happen in the world of networking technology. Further, I think I already see these changes happening in parts of the market, and I cannot see any reason these kinds of changes should not move throughout the entire market fairly rapidly.

What is the percentage of these two roles in the market? Some people think the second role will simply not exist, in fact, other than at vendors. Others think the second role will be a vanishingly small part of the market. I tend to think the percentages will be more balanced because of shifts in the business environment happening in parallel with (or rather driving) these changes. Ultimately, however, the number of people in each role will be driven by the business environment, rather than the networking world.

Will there be “network engineers” in the future?

If we look at the progress of time from left to right, there is a big bulge ahead, followed by a slope off, and then a long tail. This is my understanding of the current network engineering skill set. We are at A as I write this, just before the big bulge of radical change at B, and I think much farther along than many others believe. At C, there will still be network engineers in the mold of the network engineers of today. They will be valiantly deploying appliance-based networks for those companies who have a vertical niche deep enough to survive. There will be vendors still supporting these companies and engineers, too. There will just be very few of them. Like COBOL and FORTRAN coders today, they will live on the long tail of demand. I suspect a number of the folks who live in this long tail will even consider themselves the “real legacy” of network engineering, while seeing the rest of the network operations and engineering market as more of “software engineers” and “administrators.”

That’s all fine by me; I just know I’d rather be in the bubble of demand than the long tail. 🙂

What should I do as a network engineer? This is the tricky question.

First, I cannot tell you which path to take of the ones I have presented. I cannot, in fact, tell you precisely what these roles are going to look like, nor whether there will be other roles available. For instance, I have not discussed what I think vendors look like after this change at all; there will be some similar roles, and some different ones, in that world.

Second, all the roles I’ve described (other than the hardware focused role) involve moving up the stack into a more software and business focus. This means that to move into these roles, you need to gain some business acumen and some software skills. If this is all correct, then now is the time to gain those skills, rather than later. I intend to post more on these topics in the future, so watch this space.

Third, don’t be fatalistic about any of this. I hear a lot of people say things like “I don’t have any influence over the market or my company.” Wrong. Rather than throwing our hands up in frustration and waiting for our fates (or heads) to be handed to us on a silver platter, I want to suggest a way forward. I know that none of us can entirely control the future—my worldview does not allot the kind of radical freedom this would entail to individual humans. At the same time, I am not a fatalist, and I tend to get frustrated with people who argue they have no control, so we should just “sit back, relax, and enjoy the ride.” We have freedom to do different things in the future within the context and guard rails set by our past decisions (and other things outside the scope of a technical blog).

My suggestion is this: take a hard look at what I have written here, decide for yourself where you think I am right and where I am wrong, and make career decisions based on what you think is going to happen. I have seen multiple people end up at age 50 or 60 with a desire to work, and yet with no job. I cannot tell you what percentage of any particular person’s situation is because of ageism, declining skills, or just being in the wrong place at the wrong time (I tend to think all three play a different role in every person’s situation). On the other hand, if you focus on what you can change—your skills, attitude, and position—and stop worrying so much about the things you cannot change, you will be a happier person.

Fourth, this fatalism stretches to the company you work for, and anyplace you might work in the future. There is a strong belief that network engineers cannot influence business leadership. Let me turn this around: If you stop talking about chipsets and optical transceivers, and start talking about the value of data and how the company needs to think about that value, then you might get a seat at the table when these discussions are taking place. You are not helpless here; if you learn how to talk to the business, there is at least some chance (depending on the company, of course) that you can shape the future of the company you work for. If nothing else, you can use your thinking in this area to help you decide where you want to work next.

Now, let’s talk about some risk factors. While these trends seem strong to me, it is still worth asking: what could take things in a different direction? One thing that would certainly change the outlook would be a major economic crash or failure like the Great Depression. This might seem unthinkable to most people, but more than a few of the thinkers I follow in the economic and political realms are suggesting this kind of thing is possible. If this happens, companies will be holding things together with tin cans, baling wire, and duct tape; in this case, all bets are off. Another could be the collapse of the entire disaggregation ecosystem. Perhaps another could be someone discovering how to break the State/Optimization/Surface triad, or somehow beat the CAP theorem.

There is also the possibility that people, at large, will reject the data driven economy that is developing, intentionally moving back to a more personally focused world with local shopping, and offline friends rather than online. I would personally support such a thing, but while I think such a move could happen, I do not see it impacting every area of life. The “buy local” mantra is largely focused on bookstores, food, and some other areas. Notice this, however: if “buy local” really means what it says, it means buying from locally owned stores, rather than shifting from an online retailer to a large chain mixed online/offline retailer. Buy local is not a panacea for appliance-based network engineering, and may even help drive the changes I see ahead.

So there you have it: in this first week of 2019, this is what I think is going to happen in the world of networking technology. I could be way wrong, and I am sticking my neck out a good bit in publishing this little series.

As always, this is more of a two-way conversation than you imagine. I read the comments here and on LinkedIn, and even (sometimes) on Twitter, so tell me what you think the future of network engineering will be. I am not so old, and certain of myself, that I cannot learn new things! 🙂

Whither Network Engineering? (Part 2)

In the first post of this series at the turn of 2019, I considered the forces I think will cause network engineering to radically change. What about the timing of these changes? I hear a lot of people say: “this stuff isn’t coming for twenty years or more, so don’t worry about it… there is plenty of time to adapt.” This optimism seems completely misplaced to me. Markets and ideas are like that old house you pass all the time—you know the one. No-one has maintained it for years, but it is so … solid. It was built out of the best timber, by people who knew what they were doing. The foundation is deep, and it has lasted all these years.

Then one day you pass a heap of wood on the side of the road and realize—this is that old house that seemed so solid just a few days ago. Sometime in the night, that house that was so solid collapsed. The outer shell was covering up a lot of inner rot. Kuhn, in The Structure of Scientific Revolutions, argues this is the way ideas always go. They appear to be solid one day, and then all the supports that looked so solid just moments before the collapse are all shown to be full of termites. The entire system of theories collapses in what seems like a moment compared to the amount of time the theory has stood. History has borne this way of looking at things out.

The point is: we could wake up in five years’ time and find the entire structure of the network engineering market has changed while we were asleep at the console running traceroute. I hear a lot of people talk about how it will take tens of years for any real change to take place because some class of businesses (usually the enterprise) do not take up new things very quickly. This line of thinking assumes the structure of business will remain the same—I think this tends to underestimate the symbiotic relationship between business and information technology. We have become so accustomed to seeing IT as a cost center that has little bearing on the overall business that it is hard to shift our thinking to the new realities that are starting to take hold.

While some niche retailers are doing okay, most of the broad-based ones are in real trouble. Shopping malls are like ghost towns, bookstores are closing in droves; even grocery stores are struggling in many areas. This is not about second-day delivery—this is about data. Companies must either be in a deep niche or learn to work with data to survive. Companies that can most effectively combine and use data to anticipate, adapt to, and influence consumer behavior will survive. The rest will not.

Let me give some examples that might help. Consider Oak Island Hardware, a local hardware store, Home Depot, Sears, and Amazon. First, there are two kinds of businesses here; while all four have products that overlap, they service two different kinds of needs. In the one case, Home Depot and Oak Island Hardware cater to geographically localized wants where physical presence counts. When your plumbing starts to leak, you don’t have time to wait for next-day delivery. If you are in the middle of rebuilding a wall or a cabinet and you need another box of nails, you are not waiting for a delivery. You will get in your car and drive to the nearest place that sells such things. To some degree, Oak Island Hardware and Home Depot are in a separate kind of market than Sears and Amazon.

Consider Sears and Amazon as a pair first. Amazon internalized its data handling, and builds semi-custom solutions to support that data handling. Sears tried to focus on local stores, inventory management, and other traditional methods of retail. Sears is gone, Amazon remains. So Home Depot and Oak Island Hardware have a “niche” that protects them (to some degree) from the ravages of the data focused world. Now consider Oak Island Hardware versus Home Depot. Here the niche is primarily geographical—there just is not enough room on Oak Island to build a Home Depot. When people need a box of nails “now,” they will often choose the closer store to get those nails.

On the other hand, what kind of IT needs does a stand-alone store like Oak Island Hardware have? I do not think they will be directly hiring any network engineers in the near future. Instead, they will be purchasing IT services in the form of cloud-based applications. These cloud-based applications, in turn, will be hosted on … disaggregated stacks run by providers.

The companies in the broader markets that are doing well have built fully- or semi-customized systems to handle data efficiently. The network is no longer treated as a “thing” to be built; it is just another part of a larger data delivery system. Ultimately, businesses in broader markets that want to survive need to shift their thinking to data. The most efficient way to do this is to shift to a disaggregated, layered model similar to the one the web- and hyper-scalers have moved to.

I can hear you out there now, reading this, saying: “No! They can’t do this! The average IT shop doesn’t have the skilled people, the vision, the leadership, the… The web- and hyper-scalers have specialized systems built for a single purpose! This stuff doesn’t apply to enterprise networks!”

In answer to this plethora of objections, let me tell you a story.

Once, a long time ago, I was sent off to work on installing a project called PC3, a new US Air Force personnel management system. My job was primarily on the network side of the house, running physical circuits through the on-base systems, installing inverse multiplexers, and making certain the circuits were up and running. At the same time, I had been working on the Xerox STAR system on base, as well as helping design the new network core running a combination of Vines and Netware over optical links connecting Cabletron devices. We already had a bunch of other networks on base, including some ARCnet, token bus, thicknet, thinnet, and a few other things, so packet switching was definitely already a “thing.”

In the process of installing this PC3 system, I must have said something about how this was such old technology, and packet switching was eventually going to take over the world. In return, I got an earful or two from one of the older techs working on the job with me. “Russ,” he said, “you just don’t understand! Packet switching is going to be great for some specialized environments, but circuit switching has already solved the general purpose cases.”

Now, before you laugh at the old codger, he made a bunch of good points. At that time, we were struggling to get a packet switched network up between seven buildings, and then trying to figure out how to feed the packet switched network into more buildings. The circuit switched network, on the other hand, already had more bandwidth into every building on base than we could figure out how to bring to those seven buildings. Yes, we could push a lot more bandwidth across a couple of rooms, but even scaling bandwidth out to an entire large building was a challenge.

What changed? The ecosystem. A lot of smart people bought into the vision of packet switched networking and spent a lot of time figuring out how to make it do all the things no-one thought it could do, and apply it to problems no-one thought it could apply to. They learned how to take the best pieces of circuit-switched technology and apply them in the packet switched world (remember the history of MPLS).

So before you say “disaggregation does not apply to the enterprise,” remember the lesson of packet switched networks—and the lessons of a million other similar technologies. Disaggregation might not apply in the same way to web- and hyper-scale networks and enterprise networks, but this does not mean it does not apply at all. Do not throw the baby out with the bathwater.

As the disaggregation ecosystem grows—and it will grow—the options will become both broader and deeper. Rather than seeing the world as standards versus open-source, we will need to learn to see standards plus open source. Instead of seeing the ecosystem as commercial versus open source, we will need to learn to see commercial plus open source. Instead of seeing protocols on appliances supporting applications, we will need to learn to see hardware and software. As the ecosystem grows, we will learn to learn from many places, including appliance-based networking, the world of servers, application development, and … the business. We will need to directly apply what makes sense and learn wisdom from the rest.

What does this mean for network engineering skills? That is the topic of the third post in this series.

Whither Network Engineering? (Part 1)

An article on successful writers who end up driving delivery trucks. My current reading in epistemology for an upcoming PhD seminar. An article on the bifurcation of network engineering skills. Several conversations on various slacks I participate in. What do these things have in common? Just this:

What is to become of network engineering?

While it seems obvious network engineering is changing, it is not so easy to say how it is changing, and how network engineers can adapt to those changes. To better understand these things, it is good to back up and take in a larger view. A good place to start is to think about how networks are built today.

Networks today are built using an appliance and circuit model. To build a network, an “engineer” (we can argue over the meaning of that word) tries to gauge how much traffic needs to be moved between different points in the business’ geographical space, and then tries to understand the shape of that traffic. Is it layer 2, or layer 3? Which application needs priority over some other application?

Once this set of requirements is drawn up, a long discussion follows over the right appliances and circuits to purchase to fulfill them. There may be some thought put into the future of the business, and perhaps some slight interaction with the application developers, but, in general, the network is seen pretty much as plumbing. So long as the water glass is filled quickly, and the toilets flush, no-one really cares how it works.

There are many results of building networks this way. First, the appliances tend to be complex devices with many different capabilities. Since a single appliance must serve many different roles for many different customers running many different applications, each appliance must be like a multitool, or those neat kitchen devices you see on television (it slices, it dices, it can even open cans!). While this is neat, it tends to cause technologies to be misapplied, and means each appliance is running tens of millions of lines of code—code very few people understand.

This situation has led, on the one hand, to a desire to simplify. The first way operators are simplifying is to move all their applications to the cloud. Many people see this as replacing just the data center, but this misunderstands the draw of cloud, and why businesses are moving to it. I have heard people say, “oh, there will still be the wide area, and there will still be the campus, even if my company goes entirely to the cloud.” In my opinion, this answer does not effectively grapple with the concept of cloud computing.

If a business desires to divest itself of its network, it will not stop with the data center. 5G, SD-WAN, and edge computing are going to fundamentally change the way campus and WAN are done. If you could place your application in a public cloud service and have the data and application distributed to every remote site without needing a data center, on site equipment, and circuits into each of those remote sites, would you do it? To ask is to know the answer.

If most companies move all their data to cloud service, then the only network engineers who survive will be at those providers, transit providers, and other supporting roles. The catch here is that cloud providers do not treat the network as a separate “thing,” and hence they do not really have “network engineers” in the traditional sense. So in this scenario, the network engineer still changes radically, and there are very few of them around, mostly working for providers of various kinds.

On the other hand, the drive to simplify has led to strongly vertically integrated vendor-based solutions consisting of hardware and software. The easy button, the modern mainframe, or whatever you want to call it. In this case, the network engineer works at the vendor rather than the enterprise. They tend to have very specialized knowledge, and there are few of them.

There is a third option, of course: disaggregation.

In this third option, the company will invest in the network and applications as a single, combined strategic asset. Like a cloud provider or web scaler, these companies will not see the network as a “thing” to be invested in separately. Here there will be engineers of one kind or another, and a blend of things purchased from vendors and things built in-house. They will see the applications through the hardware as a complete system, rather than as an investment in appliances and circuits. Perhaps the following diagram will help.

The left side of this diagram is how we build networks today: appliances connected through the control plane, with network management and applications riding on top. The disaggregated view of the network treats the control plane somewhat like an application, and the operating system like any other operating system. The hardware is fit to task; this does not mean it is a “commodity,” but rather that the hardware life cycle and tuning is untied from the optimization of the software operating environment. In the disaggregated view, the software stack is fit to the company and its business, rather than to the hardware. This is the crucial difference between the two models.

There are two ways to view the competition between the company that moves to the cloud, the company that moves to black box integrated solutions, and the company that disaggregates. My view is that the companies that move to the cloud, or choose the black box, will only survive if they live in a fairly narrow niche where the data they collect, produce, and rely on is narrow in scope—or rather, not generally usable.

Those companies that try to live in the broader market, and give their data to a cloud provider, or give their IT systems entirely to a vendor, will be eaten. Why do I think this? Because data is the new oil. Data grants and underlies every kind of power that relates to making any sort of money any longer—political power, social power, supply-chain efficiency, and anything else you can name. There are no chemical companies, there are only data companies. This is the new normal, and companies that do not understand this new normal will either need to be in a niche small enough that their data is unique in a way that protects them, or they will be eaten. George Gilder’s Knowledge and Power is one of the better explanations of this process you can pick up.

If data is at the heart of your business and you either give it to someone else, or you fail to optimize your use of it, you will be at a business disadvantage. That business disadvantage will grow over time until it becomes an economic millstone around the company itself. Can you say Sears? What about Toys-R-Us?

Technology like 5G, edge computing, and cloud, mixed in with the pressure to reduce the complexity of running a network and subsuming it into the larger life of IT, are forming a wrecking ball directed at network engineering as we know it. Which leaves us with the question: whither network engineering?

Ossification and Fragmentation: The Once and Future ‘net

Mostafa Ammar, out of Georgia Tech (not my alma mater, but many of my engineering family are alumni there), recently posted an interesting paper titled The Service-Infrastructure Cycle, Ossification, and the Fragmentation of the Internet. I have argued elsewhere that we are seeing the fragmentation of the global Internet into multiple smaller pieces, primarily based on the centralization of content hosting combined with the rational economic decisions of the large-scale hosting services. The paper in hand takes a slightly different path to reach the same conclusion.

cross posted at CircleID

TL;DR

  • Networks are built based on a cycle of infrastructure modifications to support services
  • When new services are added, pressure builds to redesign the network to support these new services
  • Networks can ossify over time so they cannot be easily modified to support new services
  • This causes pressure, and eventually a more radical change, such as the fracturing of the network

The author begins by noting networks are designed to provide a set of services. Each design paradigm not only supports the services it was designed for, but also allows for some headroom, which allows users to deploy new, unanticipated services. Over time, as newer services are deployed, the requirements on the network change enough that the network must be redesigned.

This cycle, the service-infrastructure cycle, relies on a well-known process of deploying something that is “good enough,” which allows early feedback on what does and does not work, followed by quick refinement until the protocols and general design can support the services placed on the network. As an example, the author cites the deployment of unicast routing protocols. He marks the beginning of this process as 1962, when Prosser was first deployed, and then as 1995, when BGPv4 was deployed. Across this time routing protocols were invented, deployed, and revised rapidly. Since around 1995, however—a period of over 20 years at this point—routing has not changed all that much. So there were around 35 years of rapid development, followed by what is now over 20 years of stability in the routing realm.

Ossification, for those not familiar with the term, is a form of hardening. Petrified wood is an ossified form of wood. An interesting property of petrified wood is that it is fragile; if you pound a piece of “natural” wood with a hammer, it dents, but does not shatter. Petrified, or ossified, wood shatters, like glass.

Multicast routing is held up as an opposite example. Based on experience with unicast routing, the designers of multicast attempted to “anticipate” the use cases, such that early iterations were clumsy, and failed to attain the kinds of deployment required to get the cycle of infrastructure and services started. Hence multicast routing has largely failed. In other words, multicast ossified too soon; the cycle of experience and experiment was cut short by the designers trying to anticipate use cases, rather than allowing them to grow over time.

Some further examples might be:

  • IETF drafts and RFCs were once short, and used few technical terms, in the sense of a term defined explicitly within the context of the RFC or system. Today RFCs are veritable books, and require a small dictionary to read.
  • BGP security, which is mentioned by the author as a victim of ossification, is actually another example of early ossification destroying the experiment/enhancement cycle. Early on, a group of researchers devised the “perfect” BGP security system (which is actually by no means perfect—it causes as many security problems as it resolves), and refused to budge once “perfection” had been reached. For the last twenty years, BGP security has not notably improved; the cycle of trying and changing things has been stopped this entire time.

There are weaknesses in this argument as well. It can be argued that the reason for the failure of widespread multicast is because the content just wasn’t there when multicast was first considered—in fact, that multicast content still is not what people really want. The first “killer app” for multicast was replacing broadcast television over the Internet. What has developed instead is video on demand; multicast is just not compelling when everyone is watching something different whenever they want to.

The solution to this problem is novel: break the Internet up. Or rather, allow it to break up. The creation of a single network from many networks was a major milestone in the world of networking, allowing the open creation of new applications. If the Internet were not ossified through business relationships and the impossibility of making major changes in the protocols and infrastructure, it would be possible to undertake radical changes to support new challenges.

The new challenges offered include IoT, the need for content providers to have greater control over the quality of data transmission, and the unique service demands of new applications, particularly gaming. The result has been the flattening of the Internet, followed by the emergence of bypass networks—ultimately leading to the fragmentation of the Internet into many different networks.

Is the author correct? It seems the Internet is, in fact, becoming a group of networks loosely connected through IXPs and some transit providers. What will the impact be on network engineers? One likely result is deeper specialization in sets of technologies—the “enterprise/provider” divide that had almost disappeared in the last ten years may well show up as a divide between different kinds of providers. For operators who run a network that indirectly supports some other business goal (what we might call “enterprise”), the result will be a wide array of different ways of thinking about networks, and an expansion of technologies.

But one lesson engineers can certainly take away is this: the concept of agile must reach beyond the coding realm, and into the networking realm. There must be room “built in” to experiment, deploy, and enhance technologies over time. This means accepting and managing risk rather than avoiding it, and having a deeper understanding of how networks work and why they work that way, rather than the blind focus on configuration and deployment we currently teach.

Reaction: The Importance of Open APIs

Over at CIMI, Tom Nolle considers whether the open API is a revolution, or a cynical trap. The line of argument primarily relates to accessing functions in a Virtual Network Function (VNF), which is then related to Network Function Virtualization (NFV). The broader point is made in this line:

One important truth about an API is that it effectively imposes a structure on the software on either side of it. If you design APIs to join two functional blocks in a diagram of an application, it’s likely that the API will impose those blocks on the designer.

This is true—if you design the API first, it will necessarily impose information flow between the different system components, and even determine, at least to some degree, the structure of the software modules on either side of the API. For instance, if you decide to deploy a single network appliance vendor, then your workflow for building packet filters will be similar across all devices. However, if you add a second vendor into the mix, you might find the way packet filters are described and deployed is completely different, requiring a per-device module that moves from intent to implementation.
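To make the per-device module concrete, here is a minimal sketch in Python. The vendor names and configuration syntaxes are invented for illustration—they only resemble real device styles, and do not represent any actual vendor API:

```python
# Hypothetical sketch: one packet-filter intent rendered by two per-vendor
# modules. The syntaxes below are invented, loosely echoing familiar styles.

def filter_vendor_a(src: str, dst: str, port: int) -> str:
    """Render the intent in a line-oriented, numbered-ACL style."""
    return f"access-list 101 deny tcp {src} {dst} eq {port}"

def filter_vendor_b(src: str, dst: str, port: int) -> str:
    """Render the same intent in a block-structured, term-based style."""
    return (f"firewall filter BLOCK term t1 "
            f"from source-address {src} destination-address {dst} "
            f"destination-port {port} then discard")

# One intent, two completely different descriptions on the device:
print(filter_vendor_a("10.1.1.0/24", "10.2.2.0/24", 443))
print(filter_vendor_b("10.1.1.0/24", "10.2.2.0/24", 443))
```

The interesting point is not the syntax but the shape of the software: each vendor forces its own module, and the API between intent and implementation ends up imposing that structure on the designer.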

While this problem will always exist, there is another useful way of looking at the problem. Rather than seeing the API as a set of calls and a set of data structures, you can break things up into the grammar and the dictionary. The grammar is the context in which words are placed so they relate to other words, and the dictionary is the meaning of the words. We often think of an API as being almost purely dictionary; if I push this data structure to the device, then something happens; in grammatical terms, the verb is implied in the subject and object pair you feed to the device.

Breaking things up in this way allows you to see the problem in a different way. There is no particular reason the dictionary or the grammar must be implied. Rather than being built into the API, they can be carried through the API. An example would be really helpful here.

In the old/original OSPF, all fields were fixed length. There was no information about what any particular field was, because the information being carried was implied by the location of the data in the packet. In this case, the grammar determined the dictionary, both of which had to be coded into the OSPF implementation. The grammar and dictionary are essentially carried from one implementation to another through the OSPF standards or specifications. IS-IS, on the other hand, carries all its information in TLVs, which means at least some information required to interpret the data is carried alongside the data itself. This additional information is called metadata.
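The TLV idea is easy to see in code. Here is a minimal sketch of a TLV parser in Python—the one-byte type and length fields match the general IS-IS TLV shape, but the type numbers and payloads are invented for illustration:

```python
import struct

def parse_tlvs(data: bytes) -> list[tuple[int, bytes]]:
    """Walk a byte stream of (type, length, value) records.

    The type byte is metadata carried alongside the data: a receiver
    that does not recognize a type can still skip over it cleanly,
    because the length tells it where the next TLV begins."""
    tlvs = []
    offset = 0
    while offset + 2 <= len(data):
        t, length = struct.unpack_from("!BB", data, offset)
        value = data[offset + 2 : offset + 2 + length]
        tlvs.append((t, value))
        offset += 2 + length
    return tlvs

# Two TLVs: type 1 (known to this implementation) and type 99 (unknown).
packet = bytes([1, 4]) + b"\x0a\x00\x00\x01" + bytes([99, 2]) + b"\xbe\xef"
for t, v in parse_tlvs(packet):
    handled = "known" if t == 1 else "skipped (unknown type)"
    print(t, v.hex(), handled)
```

A fixed-field parser, by contrast, would have the offsets and meanings compiled in; an unrecognized field is not skippable, it is a protocol error.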

There are a couple of tradeoffs here (if you haven’t found the tradeoffs, you haven’t looked hard enough). First, the software on both ends of the connection that reads and interprets the information is going to be much more complex. Second, the amount of data carried on the wire increases, as you are not only carrying the data, you are also carrying the metadata. The upside is that IS-IS is easier to extend; implementations can ignore TLVs they don’t understand, new TLVs with new metadata can be added, etc.

If you want to understand this topic more deeply, this is the kind of thing Computer Networking Problems and Solutions discusses in detail.

In the world of network configuration and management, an example of this kind of system is YANG, which intentionally carries metadata alongside the data itself. In this way, the dictionary is, in a sense, carried with the data. There will always be different words for different objects, of course, but translators can be built to allow one device to talk to another.
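A toy example may help here. This is not real YANG—just a Python sketch of the idea that each leaf carries metadata describing itself, so a translator only needs to map the dictionary (the words), while the grammar travels with the data:

```python
import json

# A toy model (not real YANG): each leaf carries metadata about its type
# and units alongside the value itself.
message = {
    "interface": {
        "name":  {"type": "string", "value": "eth0"},
        "mtu":   {"type": "uint16", "units": "bytes", "value": 9000},
        "speed": {"type": "uint32", "units": "mbps",  "value": 10000},
    }
}

# Hypothetical vocabulary for a second vendor that calls the MTU something
# else; only the words need translating, the metadata rides along.
vendor_terms = {"mtu": "max-frame-size"}

def translate(leafs: dict, mapping: dict) -> dict:
    """Rename leaves according to a dictionary mapping, keeping metadata."""
    return {mapping.get(k, k): v for k, v in leafs.items()}

print(json.dumps(translate(message["interface"], vendor_terms), indent=2))
```

The point of the sketch: because the type and units are carried with the data, the receiving side does not need the sender’s word list compiled in ahead of time to make sense of what it receives.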

There is still the problem of flow, or grammar, which can make it difficult to configure two devices from one set of primitives. “I want to filter packets from this application” can still be expressed by a different process on two different vendor implementations. However, the ability to translate the dictionary part of the problem between devices is a major step forward in solving the problem of building software that will work across multiple devices.

This is why YANG, JSON, and their associated ecosystems really matter.

Think Like an Engineer, not a Cheerleader

When you see a chart like this—

—you probably think: if I were staking my career on technologies, I would want to jump from the older technology to the new just at the point where that adoption curve starts to really drive upward.

Over at ACM Queue, Peter J. Denning has an article up on just this topic. He argues that if you understand the cost curve and tipping point of any technology, you can predict—with some level of accuracy—the point at which the adoption s-curve is going to begin its exponential growth phase.
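The adoption s-curve Denning describes is commonly modeled as a logistic function. Here is a minimal sketch, with invented parameters, just to show why the “tipping point” matters—growth is steepest exactly at the inflection:

```python
import math

def adoption(t: float, k: float = 1.0, t0: float = 0.0) -> float:
    """Logistic s-curve: slow start, exponential middle, saturation.

    k is the growth rate and t0 the tipping point (inflection), where
    adoption crosses 50% and the curve is at its steepest."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

# Sampling around the tipping point shows the shape of the curve:
for t in (-4, -2, 0, 2, 4):
    print(t, round(adoption(t), 3))
```

Jumping at the tipping point means riding the steep part of the curve; jumping well before it means waiting through the long, flat tail—if the curve ever takes off at all.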

Going back many years, I recognize this s-curve. It was used for FDDI, ATM, Banyan Vines, Novell Netware, and just about every new technology that has ever entered the market.

TL;DR

  • There are technology jump points where an entire market will move from one technology to another
  • From a career perspective, it is sometimes wise to move to a new technology at the early stages of such a jump
  • However, there are risks involved, such as hidden costs that prevent the jump from occurring
  • Hence, you need to be cautious and thoughtful when considering jumping to a new technology


The problem with this curve, especially when applied to every new technology ever invented, is that it often makes it seem inevitable that some new technology is going to replace an older, existing technology. This, however, makes a few assumptions that are not always warranted.

First, there is an underlying assumption that a current exponential reduction in technology costs will continue until the new technology is cheaper than the old. There are several problems in this neighborhood. Sometimes, for instance, the obvious or apparent costs are much less expensive, but the overall costs of adoption are not. To give one example, many people still heat their homes with some form of oil-based product. Since electricity is so much less expensive—or at least it seems to be at first glance—why is this so? I’m not an economist, but I can take some wild guesses at the answer.

For instance, electricity must be generated from heat. Someplace, then, heat must be converted to electricity, the electricity transported to the home, and then the electricity must be converted back to heat. A crucial question: is the cost of the double conversion and transportation more than the cost of simply transporting the original fuel to the home? If so, by how much? Many of these costs can be hidden—if every person in the world converted to electric heat, what would be the cost of upgrading and maintaining an electric grid that could support this massive increase in power usage?
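A back-of-envelope sketch makes the double-conversion point concrete. Every number below is an assumption invented for illustration—not a real efficiency figure for any plant, grid, or furnace:

```python
# Back-of-envelope sketch of the double-conversion cost. All efficiencies
# here are invented assumptions for illustration only.
plant_efficiency   = 0.40  # heat -> electricity at the power plant
grid_efficiency    = 0.93  # transmission and distribution losses
heater_efficiency  = 1.00  # electricity -> heat in the home
furnace_efficiency = 0.90  # fuel burned directly in a home furnace

electric_path = plant_efficiency * grid_efficiency * heater_efficiency
direct_path   = furnace_efficiency

print(f"fuel -> plant -> grid -> heat: {electric_path:.0%} of the energy")
print(f"fuel -> furnace -> heat:       {direct_path:.0%} of the energy")
```

Under these assumed numbers, the electric path delivers well under half the original energy as heat, while the direct path delivers most of it—which is one way hidden system-level costs can swamp an apparently cheaper technology.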

Hidden costs, and our inability to see the entire system at once, often make it more difficult than it might seem to predict the actual “landing spot” on the cost curve of a technology. Nor is it always possible to assume that once a technology has reached a “landing spot,” it will stay there. Major advances in some new technology may actually cross over into the older technology, so that both cost curves are driven down at the same time.

Second, there is the problem of “good enough.” Why are there no supersonic jets flying regularly across the Atlantic Ocean? Because people who fly, as much as they might complain (like me!), have ultimately decided with their wallets that the current technology is “good enough” to solve the problem at hand—that increasing the speed of flight just isn’t worth the risks and the costs.

Third, as Mike Bushong recently pointed out in a member’s Q&A at The Network Collective, many times a company (startup) will fail because it is too early in the cycle, rather than too late. I will posit that technologies can go the same way; a lot of people can invest in a technology really early and find it just does not work. The idea, no matter how good, will then go on the back burner for many years—perhaps forever—until someone else tries it again.

The Bottom Line

The bottom line is this: just because the curves seem to be converging does not mean a technology is going to follow the s-curve up and to the right. If you are thinking in terms of career growth, you have to ask hard questions, think about the underlying principles, and think about what the failure scenarios might look like for this particular technology.

Another point to remember is the tried and true rule 11. What problem does this solve, and how does it solve it? How is this solution like solutions attempted in the past? If those solutions failed, what will cause the result to be different this time? Think also in terms of complexity—is the added complexity driving real value?

I am not saying you should not bet on a new technology for your future. Rather—think like an engineer, rather than a cheerleader.

Is BGP Good Enough?

In a recent podcast, Ivan and Dinesh ask why there is a lot of interest in running link state protocols on data center fabrics. They begin with this point: if you have less than a few hundred switches, it really doesn’t matter what routing protocol you run on your data center fabric. Beyond this, there do not seem to be any problems to be solved that BGP cannot solve, so… why bother with a link state protocol? After all, BGP is much simpler than any link state protocol, and we should always solve all our problems with the simplest protocol possible.

TL;DR

  • BGP is both simple and complex, depending on your perspective
  • BGP is sometimes too much, and sometimes too little for data center fabrics
  • We are in danger of treating every problem as a nail, because we have decided BGP is the ultimate hammer

Will these contentions stand up to a rigorous challenge?

I will begin with the last contention first—BGP is simpler than any link state protocol. Consider the core protocol semantics of BGP and a link state protocol. In a link state protocol, every network device must have a synchronized copy of the Link State Database (LSDB). This is more challenging than BGP’s requirement, which is very distance-vector like; in BGP you only care if any pair of speakers have enough information to form loop-free paths through the network. Topology information is (largely) stripped out, metrics are simple, and shared information is minimized. It certainly seems, on this score, like BGP is simpler.

Before declaring a winner, however, this simplification needs to be considered in light of the State/Optimization/Surface triad.

When you remove state, you are always also reducing optimization in some way. What do you lose when comparing BGP to a link state protocol? You lose your view of the entire topology—there is no LSDB. Perhaps you do not think an LSDB in a data center fabric is all that important; the topology is somewhat fixed, and you probably are not going to need traffic engineering if the network is wired with enough bandwidth to solve all problems. Building a network with tons of bandwidth, however, is not always economically feasible. The more likely reality is there is a balance between various forms of quality of service, including traffic engineering, and throwing bandwidth at the problem. Where that balance is will probably vary, but to always assume you can throw bandwidth at the problem is naive.

There is another cost to this simplification, as well. Complexity is inserted into a network to solve hard problems, and the most common hard problem complexity is used to solve is guarding against environmental instability. Again, a data center fabric should be stable; the topology should never change, reachability should never change, etc. We all know this is simply not true, however, or we would be running static routes in all of our data center fabrics. So why aren’t we?

Because data center fabrics, like any other network, do change. And when they do change, you want them to converge somewhat quickly. Is this not what all those ECMP parallel paths are for? In some situations, yes. In others, those ECMP paths actually harm BGP convergence speed. A specific instance: move an IP address from one ToR on your fabric to another, or from one virtual machine to another. In this situation, those ECMP paths are not working for you, they are working against you—this is, in fact, one of the worst BGP convergence scenarios you can face. IS-IS, specifically, will converge much faster than BGP in the case of detaching a leaf node from the graph and reattaching it someplace else.

Complexity can be seen from another perspective, as well. When considering BGP in the data center, we are considering one small slice of the capabilities of the protocol.

In the center of the illustration above there is a small grey circle representing the core features of BGP. The sections of the ten-sided figure around it represent the feature sets that have been added to BGP over the years to support the many places it is used. When we look at BGP for one specific use case, we see the one “slice,” the core functionality, and what we are building on top. The reality of BGP, from a code base and complexity perspective, is the total sum of all the different features added across the years to support every conceivable use case.

Essentially, BGP has become not only a nail, but every kind of nail, including framing nails, brads, finish nails, roofing nails, and all the other kinds. It is worse than this, though. BGP has also become the universal glue, the universal screw, the universal hook-and-loop fastener, the universal building block, etc.

BGP is not just the hammer with which we turn every problem into a nail, it is a universal hammer/driver/glue gun that is also the universal nail/screw/glue.

When you run BGP on your data center fabric, you are not just running the part you want to run. You are running all of it. The L3VPN part. The eVPN part. The intra-AS parts. The inter-AS parts. All of it. The complexity may appear to be low, because you are only looking at one small slice of the protocol. But the real complexity, under the covers, where attack and interaction surfaces live, is very high. In fact, by any reasonable measure, BGP might have the simplest set of core functions, but it is the most complicated routing protocol in existence.

In other words, complexity is sometimes a matter of perspective. In this perspective, IS-IS is much simpler. Note—don’t confuse our understanding of a thing with its complexity. Many people consider link state protocols more complex simply because they don’t understand them as well as BGP.

Let me give you an example of the problems you run into when you think about the complexity of BGP—problems you do not hear about, but exist in the real world. BGP uses TCP for transport. So do many applications. When multiple TCP streams interact, complex problems can result, such as the global synchronization of TCP streams. Of course we can solve this with some cool QoS, including WRED. But why do you want your application and control plane traffic interacting in this way in the first place? Maybe it is simpler just to separate the two?
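For reference, the RED curve underlying WRED is simple to sketch. The thresholds and maximum probability below are example values chosen for illustration, not recommendations:

```python
# Minimal sketch of the RED drop-probability curve (the basis of WRED).
# min_th, max_th, and max_p are illustrative values, not recommendations.
def red_drop_probability(avg_queue: float, min_th: float = 20,
                         max_th: float = 60, max_p: float = 0.1) -> float:
    """Probability of dropping an arriving packet given the average queue
    depth: no drops below min_th, ramping linearly to max_p at max_th,
    and tail-drop behavior beyond max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for q in (10, 30, 50, 70):
    print(q, red_drop_probability(q))
```

By dropping a few packets probabilistically before the queue fills, RED desynchronizes the TCP flows sharing the link—which is exactly the mechanism you are leaning on if you let control plane and application traffic share the same queues.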

Is BGP really simpler? From one perspective, it is simpler. From another, however, it is more complex.

Is BGP “good enough?” For some applications, it is. For others, however, it might not be.

You should decide what to run on your network based on application and business drivers, rather than “because it is good enough.” Which leads me back to where I often end up: If you haven’t found the trade-offs, you haven’t looked hard enough.