Simpler is Better… Right?
A few weeks ago, I was in the midst of a conversation about EVPNs, how they work, and the use cases for deploying them, when one of the participants exclaimed: “This is so complicated… why don’t we stick with the older way of doing things with multi-chassis link aggregation and virtual chassis devices?” Sometimes it does seem like we create complex solutions when a simpler solution is already available. Since simpler is always better, why not just use them? After all, simpler solutions are easier to understand, which means they are easier to deploy and troubleshoot.
The problem is we too often forget the other side of the simplicity equation—complexity is required to solve hard problems and adapt to demanding environments. While complex systems can be fragile (primarily through ossification), simple solutions can flat out fail just because they can’t cope with changes in their environment.
As an example, consider MLAG. On the surface, MLAG is a useful technology. If you have a server with two network interfaces, running an application that only supports a single IP address as a service access point, MLAG is a neat and useful solution. Connect a single (presumably high-reliability) server to two different upstream switches through two different network interface cards; the two switches present themselves as a single logical switch, and the two links act like a single logical Ethernet connection. If one of the two network devices, optics, cables, etc., fails, the server still has network connectivity.
Neat.
But MLAG has well-known downsides, as well. There is a little problem with the physical locality of the cables and systems involved. If you have a service split among multiple servers, MLAG is no longer useful. If the upstream switches are widely separated, then you have lots of cabling fun and various problems with jitter and delay to look forward to.
There is also the little problem of MLAG solutions being mostly proprietary. When something fails, your best hope is a clueful technical assistance engineer on the other end of a phone line. If you want to switch vendors for some reason, you have the fun of taking the entire server out of operation for a maintenance window to make the change, along with living with vendor lock-in when something fails, etc.
EVPN, with its ability to attach a single host through multiple virtual Ethernet connections across an IP network, looks a lot more complex on the surface. But looks can be deceiving… In the case of EVPN, you see the complexity “upfront,” which means you can (largely) understand what it is doing and how it is doing it. There is no “MLAG black box,” which is useful in the cases where you must troubleshoot.
And there will be cases where you will need to troubleshoot.
Further, because EVPN is standards-based and implemented by multiple vendors, you can switch over one virtual connection at a time; you can switch vendors without taking the server down. EVPN is a much more flexible solution overall, opening up possibilities around multihoming across different pods in a butterfly fabric, or allowing the use of default MAC addresses to reduce table sizes.
Virtual chassis systems can often solve some of the same problems as EVPN, but again—you are dealing with a black box. The black box will likely never scale to the same size, nor cover the same use cases, as a community-built standard like EVPN.
The bottom line
Sometimes starting with a more complex set of base technologies will result in a simpler overall system. The right system will not avoid complexity, but rather reduce it where possible and contain it where it cannot be avoided. If you find your system is inflexible and difficult to manage, maybe it’s time to go back to the drawing board and start with something a little more complex, or where the complexity is “on the surface” rather than buried in an abstraction. The result might actually be simpler.
Data Gravity and the Network
One “sideways” place to look for value in the network is in a place that initially seems far away from infrastructure: data gravity. Data gravity is not something you might often think about directly when building or operating a network, but it is something you think about indirectly. For instance, speeds and feeds, quality of service, and convergence time are all, in one way or another, side effects of data gravity.
As with all things in technology (and life), data gravity is not one thing, but two, one good and one bad—and there are tradeoffs. Because if you haven’t found the tradeoffs, you haven’t looked hard enough. All of this is, in turn, related to the CAP Theorem.
Data gravity is, first, a relationship between applications and data location. Suppose you have a set of applications that contain customer information, such as forms of payment, a record of past purchases, physical addresses, and email addresses. Now assume your business runs on three applications. The first analyzes past order information and physical location to try to predict potential interests in current products. The second fills in information on an order form when a customer logs into your company’s ecommerce site. The third is used by marketing to build and send email campaigns.
Think about where you would store this information physically, especially if you want the information used by all three applications to be as close to accurate as possible in near-real time. The CAP theorem dictates that if you choose to partition the data, say by having one copy of the customer address information in an on-premises fabric for analysis and another in a public cloud service for the ecommerce front end, you are going to have to live with either locking one copy of the record or the other while it is being updated, or living with one of the two copies of the record being out of date while it is being used (the database can either have a real-time locking system or be eventually consistent). If you really want the data used by all three applications to be accurate in near-real time, the only solution you have is to run all three applications on the same physical infrastructure so there can be one unified copy of the data.
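To make the tradeoff concrete, here is a minimal Python sketch, purely illustrative, with made-up record contents and an artificial sleep standing in for the network delay between the two sites:

```python
# Toy illustration of the CAP tradeoff described above: two copies of a
# customer record, one "on-prem" and one "in the cloud."
import threading
import time

class Replica:
    def __init__(self, name):
        self.name = name
        self.record = {"email": "old@example.com"}
        self.lock = threading.Lock()

on_prem = Replica("on-prem")
cloud = Replica("cloud")

def update_consistent(new_email):
    # Strong consistency: lock both copies for the duration of the write.
    # Readers block (lose availability) while the update is in flight.
    with on_prem.lock, cloud.lock:
        on_prem.record["email"] = new_email
        cloud.record["email"] = new_email

def update_eventual(new_email):
    # Eventual consistency: write locally, replicate later.
    # Readers of the cloud copy may see stale data until replication completes.
    on_prem.record["email"] = new_email
    def replicate():
        time.sleep(0.5)   # stand-in for network delay between sites
        cloud.record["email"] = new_email
    threading.Thread(target=replicate).start()

update_eventual("new@example.com")
print(cloud.record["email"])   # likely still "old@example.com" (stale read)
time.sleep(1)
print(cloud.record["email"])   # now "new@example.com" (converged)
```

Neither branch is “wrong;” they are simply different positions on the same tradeoff, and the quality of the network between the two sites determines how painful each position is.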
This causes the first instance of data gravity. If you want the ordering front end to work well, you need to unify the data stores, and move them physically close to the application. If you want the other applications that interact with this same data to work well, you need to move them to the same physical infrastructure as the data they rely on. Data gravity, in this case, acts like a magnet; the more data that is stored in a location, the more applications will want to be in that same location. The more applications that run on a particular infrastructure, the more data will be stored there, as well. Data follows applications, and applications follow data. Data gravity, however, can often work against the best interests of the business. The public cloud is not the most efficient place to house every piece of data, nor is it the best place to do all processing.
Building and operating a network that moves data efficiently minimizes the impact of the CAP theorem, and with it the impact of this kind of data gravity.
The second instance of data gravity is sometimes called Kai-Fu Lee’s Virtuous Cycle, or the KL-VC. Imagine the same set of applications, only now look at the analysis part of the system more closely. Clearly, if you knew more about your customers than their location, you could perform better analysis, know more about their preferences, and be able to target your advertising more carefully. This might (or might not—there is some disagreement over just how fine-grained advertising can be before it moves from useful to creepy and productive to counterproductive) lead directly to increased revenues flowing from the ecommerce site.
But obtaining, storing, and processing this data means moving this data, as well—a point not often considered when thinking about how rich of a shopping experience “we might be able to offer.” Again, the network provides the key to moving the data so it can be obtained, stored, and processed.
At first glance, this might all still seem like commodity stuff—it is still just bits on the wire that need to be moved. Deeper inspection, however, reveals that this simply is not true. Understanding where data lives and how applications need to use it, the tradeoffs the CAP theorem imposes on that data, and the changes in the way the company does business if that data could be moved more quickly and effectively all matter. Understanding which data flow has real business impact, and when, is a gateway into understanding how to add business value.
If you could say to your business leaders: let’s talk about where data is today, where it needs to be tomorrow, and how we can build a network where we can consciously balance between network complexity, network cost, and application performance, would that change the game at all? What if you could say: let’s talk about building a network that provides minimal jitter and fixed delay for defined workloads, and yet allows data to be quickly tied to public clouds in a way that allows application developers to choose what is best for any particular use case?
Data, applications, and the meaning of the network
Two things seem to be universally true in the network engineering space right at this moment. The first is that network engineers are convinced their jobs will not exist, or that there will only be network engineers “in the cloud,” within the next five years. The second is a mad scramble to figure out how to add value to the business through the network. These two movements are, of course, mutually exclusive visions of the future. If there is absolutely no way to add value to a business through the network, then it only makes sense to outsource the whole mess to a utility-level provider.
The result, far too often, is for the folks working on the network to run around like they’ve been in the hot aisle so long their hair is on fire. This result, however, somehow seems less than ideal.
I will suggest there are alternate solutions available if we just learn to think sideways and look for them. Burning hair is not a good look (unless it is an intentional part of some larger entertainment). What sort of sideways thinking am I looking for? Let’s begin by going back to basics and asking a question that might be a bit dangerous to ask—do applications really add business value? They certainly seem to. After all, when you want to know or do something, you log into an application that either helps you find the answer or provides a way to get it done.
But wait—what underlies the application? Applications cannot run on thin air (although I did just read someplace that applications running on “the cloud” are the only ones that add business value, implying applications running on-premises do not). They must have data, or information, in order to do their jobs (like producing reports, or allowing you to order something). In fact, one of the major problems developers face when switching from one application to another to handle a task is figuring out how to transfer the data.
This seems to imply that data, rather than applications, is at the heart of the business. When I worked for a large enterprise, one of my favorite points to make in meetings was that we are not a widget company… we are a data company. I normally got blank looks from both the IT and the business folks sitting in the room when I said this—but just because the folks in the room did not understand it does not mean it is not true.
What difference does this make? If the application is the center of the enterprise world, then the network is well and truly a commodity that can, and should, be replaced with the cheapest version possible. If, however, data is at the heart of what a business does, then the network and the application are equal partners in information technology. It is not that one is “more important” while the other is “less important;” rather, the network and the applications just do different things for and to one of the core assets of the business—information.
After all, we call it information technology, rather than application technology. There must be some reason “information” is in there—maybe it is because information is what really drives value in the business?
How does changing our perspective in this way help? After all, we are still “stuck” with a view of the network that is “just about moving data,” right? And moving data is just about as exciting as moving, well… water through pipes, right?
No, not really.
Once information is the core, then the network and applications become “partners” in drawing value out of data in a way that adds value to the business. Applications and the network are both “fungible,” in that either can be replaced with something newer, more effective, better, etc., but neither is really more important than the other.
This post has gone on a bit long in just “setting the stage,” so I’ll continue this line of thought next week.
Dealing with Lock-In
A post on Martin Fowler’s blog this week started me thinking about lock-in—building a system that only allows you to purchase components from a single vendor so long as the system is running. The point of Martin’s piece is that lock-in exists in all systems, even open source, and hence you need to look at lock-in as a set of tradeoffs, rather than always being a negative outcome. Given that lock-in is a tradeoff, and that lock-in can happen regardless of the systems you decide to deploy, I want to go back to one of the foundational points Martin makes in his post and think about avoiding lock-in a little differently than just choosing between open source and vendor-based solutions.
If you cannot avoid lock-in either by choosing a vendor-based solution or by choosing open source, then you have two choices. The first is to just give up and live with the results of lock-in. In fact, I have worked with a lot of companies who have done just this—they have accepted that lock-in is just a part of building networks, that lock-in results in a good transfer of risk from the operator to the vendor, or that lock-in results in a system that is easier to deploy and manage.
Giving in to lock-in, though, does not seem like a good idea on the surface, because architecture is about creating opportunity. If you cannot avoid lock-in and yet lock-in is antithetical to good architecture, what are your other options?
First, you can deepen your understanding of where and how lock-in matters, and when it doesn’t. Architecture often fails in one of three ways. The first way is designing something that is so closely fit-to-purpose today that it is inflexible against future requirements. Think of a pair of pants that fit perfectly today; if you gain a few pounds, you might have to start over with your body-covering pant solution after that upcoming holiday. The second way is designing something that is so flexible it is wasteful. Think of a pair of pants sized to fit you no matter what your body size might be. Perhaps you might try to create a pair of pants you can wear when you are two, but also when you are seventy-two? You could design such a thing, I’m certain, but the amount of material you’d need to use in creating it would probably involve enough waste that you might as well just go ahead and sew multiple sets of pants across your lifetime. The third way is building something that is flexible, but only through ossification: something which can only be changed by adding or multiplying, and never through subtraction or division. Think of a pair of pants that can be changed to any color by adding a thin layer of new material each time.
All three of these are overengineered in one sense or another. They are trying to solve problems that probably don’t exist, that may never exist, or that can only be solved by adding another layer on top. All three of these are also forms of lock-in which should be avoided.
In order to buy the right kind of pants, you have to know how long you expect the pants to fit, and you have to know how your body is going to change over that time period. This is where the architect needs to connect with the business and try to understand what kinds of changes are likely, and what those changes might mean for the network. This is also the place where the architect needs to feed some technical reality back into the business; most of the time business leaders don’t think about technology challenges because no-one has ever explained to them that networks and information technology systems are not infinitely pliable balls of wax.
Second, you can try to contain lock-in, and intentionally refuse to be locked-in above a modular level. For instance, say you build a data center fabric using a marketecture sold by a vendor. If that marketecture can be contained within a single data center fabric, and you have more than one fabric, then you can avoid some degree of lock-in by building your other fabrics with different solutions. If the marketecture doesn’t allow for this, then maybe you should just say “no.” Modular design is not just good for controlling the leakage of state between modules, it can also be good for controlling the leakage of lock-in.
Finally, you can consider proper modularization vertically in your network, from layer to layer in your network stack. Enforce IPv4 or IPv6 as a single “wasp waist” in the network transport; don’t allow Ethernet to wander all over the place, across modules and layers, as if it were the universal transport. Enforce separate underlay and overlay control planes. Enforce a single form of storing information (such as all YANG-formatted data, or all JSON-formatted data, etc.) in your network management systems.
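As a small illustration of what “a single form of storing information” might look like in practice, here is a hedged Python sketch; the vendor names, field names, and target schema are all hypothetical, invented only to show the normalization step:

```python
# A minimal sketch of enforcing one storage format at the management layer:
# normalize per-vendor interface data into a single (hypothetical) JSON schema
# before anything else in the toolchain is allowed to consume it.
import json

def normalize_interface(vendor, raw):
    """Map vendor-specific keys onto one common schema (names are illustrative)."""
    if vendor == "vendor_a":
        return {"name": raw["ifName"], "mtu": raw["ifMtu"], "enabled": raw["adminUp"]}
    if vendor == "vendor_b":
        return {"name": raw["interface"], "mtu": raw["mtu"], "enabled": raw["admin_state"] == "up"}
    raise ValueError(f"unknown vendor: {vendor}")

# Two devices report the same interface in two different shapes...
a = {"ifName": "eth0", "ifMtu": 9100, "adminUp": True}
b = {"interface": "eth0", "mtu": 9100, "admin_state": "up"}

# ...but everything stored by the management system is one format.
print(json.dumps(
    [normalize_interface("vendor_a", a), normalize_interface("vendor_b", b)],
    indent=2,
))
```

The point is not the specific schema; it is that the boundary where lock-in is allowed to stop is drawn at the data format, not at the vendor’s toolchain.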
This is what open standards are good for. It will be painful to build to these standards, but these standards represent a point of abstraction where you can split lock-in systems apart, preventing them from entangling with one another in a way that prevents ever replacing one component with another.
Lock-in is not something you can avoid. On the other hand, you can learn to accept it when you know it will not impact your future options, and contain it when you know it will.
(Effective) Habits of the Network Expert
For any field of study, there are some mental habits that will make you an expert over time. Whether you are an infrastructure architect, a network designer, or a network reliability engineer, what are the habits of mind that mark out expertise in building and operating networks?
Experts involve the user
Experts don’t just listen to the user, they involve the user. This means taking the time to teach the developer or application owner how their applications interact with the network, showing them how their applications either simplify or complicate the network, and the impact of these decisions on the overall network.
Experts think about data
Rather than applications. What does the data look like? How does the business use the data? Where does the data need to be, when does it need to be there, how often does it need to go, and what is the cost of moving it? What might be in the data that can be harmful? How can I protect the data while at rest and in flight?
Experts think in modules, surfaces, and protocols
Devices and configurations can, and should, change over time. The way a problem is broken up into modules and the interaction surfaces (interfaces) between those modules can be permanent. Choosing the wrong protocol means choosing a different protocol to solve every problem, leading to accretion of complexity, ossification, and ultimately brittleness. Break the problem up right the first time, and choose the protocols carefully, and let the devices and configurations follow.
Choosing devices first is like selecting the hammer you’re going to use to build a house, and then selecting the design and materials used in the house based on what you can use the hammer for.
Experts think about tradeoffs
State, optimization, and surface form an ironclad tradeoff. If you increase state, you increase complexity while also increasing optimization. If you increase surfaces through abstraction, you are both increasing and decreasing state, which has an impact on both complexity and optimization. All nontrivial abstractions leak. Every time you move data you are facing the speed of serialization, queueing, and light, and hence you are dealing with the choice between consistency, availability, and partition tolerance.
If you haven’t found the tradeoffs, you haven’t looked hard enough.
Experts focus on the essence
Every problem has an essential core—something you are trying to solve, and a reason for solving it. Experts know how to divide between the essential and the nonessential. Experts think about what they are not designing, and what they are not trying to accomplish, as well as what they are. This doesn’t mean the rest isn’t there, it just means it’s not quite in focus all the time.
Experts are mentally stimulated to simulate
Labs are great—but moving beyond the lab and thinking about how the system works as a whole is better. Experts mentally simulate how the data moves, how the network converges, how attackers might try to break in, and other things besides.
Experts look around
Interior designers go to famous spaces to see how others have designed before them. Building designers walk through cities and famous buildings to see how others have designed before them. The more you know about how others have designed, the more you know about the history of networks, the more of an expert you will be.
Experts reshape the problem space
Experts are unafraid to think about the problem in a different way, to say “no,” and to try solutions that have not been tried before. Best common practice is a place to start, not a final arbiter of all that is good and true. Experts do not fall prey to the “is/ought” fallacy.
Experts treat problems as opportunities
Whether the problem is a mistake or a failure, or even a little bit of both, every problem is an opportunity to learn how the system works, and how networks work in general.
IPv6 Backscatter and Address Space Scanning
Backscatter is often used to detect various kinds of attacks, but how does it work? The paper under review today, Who Knocks at the IPv6 Door, explains backscatter usage in IPv4, and examines how effectively this technique might be used to detect scanning of IPv6 addresses, as well. The best place to begin is with an explanation of backscatter itself; the following network diagram will be helpful—

Assume A is scanning the IPv4 address space for some reason—for instance, to find some open port on a host, or as part of a DDoS attack. When A sends an unsolicited packet to C, a firewall (or some similar edge filtering device), C will attempt to discover the source of this packet. It could be there is some local policy set up allowing packets from A, or perhaps A is part of some domain none of the devices behind C should be connecting to. In order to discover more, the firewall will perform a reverse lookup. To do this, C takes advantage of the PTR DNS record, looking up the IP address to see if there is an associated domain name (this is explained in more detail in my How the Internet Really Works webinar, which I give every six months or so). This reverse lookup generates what is called backscatter; these backscatter events can be used to find hosts scanning the IP address space. Sometimes these scans are innocent, such as a web spider searching for HTML servers; other times, they could be a prelude to some sort of attack.
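As a concrete illustration of the mechanism, here is a short Python sketch of the kind of reverse (PTR) lookup C performs; the address shown is from a documentation prefix, not a real scanner:

```python
# A small sketch of the reverse (PTR) lookup described above, the kind of
# query a firewall might issue when it sees an unsolicited packet; the
# aggregate of these queries is the backscatter the researchers measure.
import socket

def reverse_lookup(ip):
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None   # no PTR record exists for this address

# 192.0.2.1 is a documentation address (RFC 5737); swap in a real source IP.
print(reverse_lookup("192.0.2.1"))
```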
Kensuke Fukuda and John Heidemann. 2018. Who Knocks at the IPv6 Door?: Detecting IPv6 Scanning. In Proceedings of the Internet Measurement Conference 2018 (IMC ’18). ACM, New York, NY, USA, 231-237. DOI: https://doi.org/10.1145/3278532.3278553
Scanning the IPv6 address space is much more difficult because there are 2^128 addresses rather than 2^32. The paper under review here is one of the first attempts to understand backscatter in the IPv6 address space, which can lead to a better understanding of the ways in which IPv6 scanners are optimizing their search through the larger address space, and also to begin understanding how backscatter can be used in IPv6 for many of the same purposes as it is in IPv4.
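A quick back-of-the-envelope calculation shows why brute-force scanning simply does not carry over from IPv4 to IPv6 (the probe rate here is an arbitrary assumption):

```python
# Why brute-force scanning does not scale to IPv6: even at an aggressive one
# million probes per second, the two address spaces differ by roughly
# twenty-nine orders of magnitude in time-to-scan.
PROBES_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

ipv4_seconds = 2**32 / PROBES_PER_SECOND
ipv6_years = 2**128 / PROBES_PER_SECOND / SECONDS_PER_YEAR

print(f"IPv4: {ipv4_seconds / 3600:.1f} hours")   # roughly 1.2 hours
print(f"IPv6: {ipv6_years:.2e} years")            # roughly 1.1e25 years
```

This is why understanding how scanners optimize their search through the IPv6 space matters so much; exhaustive scanning is off the table.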
The researchers begin by setting up a backscatter testbed across a subset of hosts for which IPv4 backscatter information is well-known. They developed a set of heuristics for identifying the kind of service or host performing the reverse DNS lookup, classifying them into major services, content delivery networks, mail servers, etc. They then examined the number of reverse DNS lookups requested versus the number of IP packets each received.
It turns out that about ten times as many backscatter incidents are reported for IPv4 as for IPv6, which indicates either that IPv6 hosts perform reverse lookup requests about ten times less often than IPv4 hosts, or that IPv6 hosts are ten times less likely to be monitored for backscatter events. Either way, this result is not promising—it appears, on the surface, that IPv6 hosts will be less likely to cause backscatter events, or IPv6 backscatter events are ten times less likely to be reported. This could indicate that widespread deployment of IPv6 will make it harder to detect various kinds of attacks on the DFZ. A second result from this research is that, using backscatter, the researchers determined IPv6 scanning is increasing over time; while the IPv6 space is not currently a prime target for attacks, it might become more so over time, if the scanning rate is any indicator.
The bottom line is—IPv6 hosts need to be monitored as closely as, or more closely than, IPv4 hosts for scanning events. The techniques used for scanning the IPv6 address space are not well understood at this time, either.
SDN, AI, and DevOps
According to the recent SONAR report, 52% of respondents reported they are using Software Defined Networking (SDN) tools to automate their networks, while 57% reported they are using network management tools. The report notes “52% may be slightly exaggerated, depending on how one defines SDN…” This leads naturally to the question: what is the difference between SDN and DevOps, and how does AI figure into either (or both) of these? SDN, DevOps, and AI describe separate and overlapping movements in the design, deployment, and management of networks. While they are easy to confuse, they have three different origins and meanings.
Software Defined Networking grew out of research efforts to build and deploy experimental control planes, either distributed or centralized. SDN, however, quickly became associated with replacing some or all of the functions of a distributed control plane with a centralized controller, particularly in order to centralize policy related to the control plane, such as traffic engineering. SDN solutions always work through a programmatic interface designed primarily to supply forwarding information to network devices.
Development Operations, or DevOps, is a movement away from human-centered interfaces towards machine-centered interfaces for the deployment, operation, and troubleshooting of networks. DevOps is centered on the deployment, configuration, and management of the entire device, rather than providing the information required to forward traffic. DevOps can either use a programmatic interface, such as YANG, or “screen scraping,” to configure and manage network devices.
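As a sketch of the difference between these two interface styles, the fragment below uses the open-source ncclient library for the programmatic (NETCONF/YANG) path; the host, port, and credentials are placeholders, and a real deployment would handle authentication and parsing more carefully:

```python
# A hedged sketch of the two interface styles mentioned above, using the
# open-source ncclient library for the programmatic (NETCONF/YANG) path.
from ncclient import manager

def fetch_running_config(host):
    # Programmatic path: structured, model-driven data over NETCONF.
    with manager.connect(
        host=host, port=830,
        username="admin", password="admin",   # placeholder credentials
        hostkey_verify=False,
    ) as m:
        return m.get_config(source="running").data_xml

# The alternative, "screen scraping," sends CLI commands over SSH and parses
# the human-oriented text that comes back; it works, but every change to the
# CLI output format risks breaking the parser.
```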
Finally, Artificial Intelligence, or AI, in the context of computer networks, is focused on the use of data gathered from the network to improve operations, from decreasing the time required to troubleshoot a problem to making the network adapt more quickly to shifting application and business requirements. AI, applied to networks, is narrow in scope, so it is Artificial Narrow Intelligence, or ANI. Real implementations of AI in the networking field are often applications of Machine Learning, or ML; while these two terms are often used interchangeably, they are not quite the same thing.
The following illustration will be useful in understanding the relationship between these three concepts.

In the figure, the SDN and DevOps controllers interact with two different aspects of the network devices forwarding traffic; both SDN and DevOps can be deployed in the same network to solve different problems. For instance, DevOps might be used to configure network devices to reach the SDN controller so they can receive the information they need to forward packets. Or the DevOps system might be used to configure a distributed control plane, such as IS-IS, on all the network devices, and also to configure a centralized controller which can override the local decisions of the distributed routing protocol for traffic engineering.
There are some situations where the difference between SDN and DevOps solutions is not obvious. The most common example is using DevOps to configure routing information on each network device, performing the same function as an SDN controller. In this case, what is the difference?
First, an SDN solution is intended specifically to replace the distributed control plane, rather than to configure the entire device. Second, the configuration pushed to a device through DevOps is normally persistent; if a device reboots, the configuration pushed through DevOps will be loaded and enabled, impacting the operation of the device. In contrast, any information pushed to a device through an SDN controller is normally ephemeral; when the device is rebooted, information pushed by the SDN controller will be lost.
Finally, AI and self-healing are shown on the right side of this diagram as a way to turn telemetry into actionable input for either the DevOps or the SDN system. The ability of ML systems to find and recognize patterns in streams of data means they are well suited to finding new patterns of network behavior and alerting an operator, or to matching current conditions against the past, anticipating future failures or finding an otherwise unnoticed problem.
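As a toy example of turning telemetry into actionable input, here is a minimal sketch that flags interface-utilization samples falling well outside a recent baseline; the window size, threshold, and sample values are arbitrary, and real ML-driven systems use far richer models than a rolling mean and standard deviation:

```python
# Flag telemetry samples that drift well outside the recent baseline.
from collections import deque
from statistics import mean, stdev

WINDOW = 30        # samples of history to keep
THRESHOLD = 3.0    # how many standard deviations counts as "anomalous"

history = deque(maxlen=WINDOW)

def check_sample(utilization):
    """Return True if this sample looks anomalous against recent history."""
    anomalous = False
    if len(history) >= 10:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(utilization - mu) > THRESHOLD * sigma:
            anomalous = True
    history.append(utilization)
    return anomalous

# Steady traffic around 40% utilization, then a sudden spike.
for sample in [41, 39, 40, 42, 38, 40, 41, 39, 40, 42, 40, 95]:
    if check_sample(sample):
        print(f"alert: utilization {sample}% is outside the recent baseline")
```

Whether the resulting alert feeds a human operator, a DevOps pipeline, or an SDN controller is exactly the architectural choice the diagram is meant to illustrate.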
While SDN, DevOps, and AI overlap, then, they serve different purposes in the realm of network engineering and operations. There are many areas of overlap, but they are also different enough to argue the three terms should be cleanly separated, with each adding a different kind of value to the overall system.
