Archive for 2019
The Hedge 13: Ivan Pepelnjak
In this episode of the Hedge, Tom Ammon and Russ White are joined by Ivan Pepelnjak of ipSpace.net to talk about being old, knowing about how things are going to break before they do, and being negative. Along the way, we discuss the IETF, open source, and many other aspects of the world of network engineering.
IPv6 and Leaky Addresses
One of the recurring myths of IPv6 is that its very large address space somehow confers a higher degree of security. The theory goes something like this: there is so much more IPv6 address space to test in order to find out what is connected to the network that it would take too long to scan the entire space looking for devices. The first problem with this myth is that it simply is not true—it is quite possible to scan the IPv6 address space rather quickly by probing enough addresses to perform a tree-based search for attached devices. The second problem is that it assumes the only modes of attack available in IPv4 will carry directly across to IPv6. But every protocol has its own set of tradeoffs, and therefore its own set of attack surfaces.
Assume, for instance, you follow the “quick and easy” way of configuring IPv6 addresses on devices as they are deployed in your network. The usual process for building an IPv6 address for an interface is to take the prefix, learned from the advertisement of a locally attached router, and the MAC address of one of the locally attached interfaces, combining them into an IPv6 address (SLAAC). The size of the IPv6 address space proves very convenient here, as it allows the MAC address, which is presumably unique, to be used in building a (presumably unique) IPv6 address.
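To make this concrete, here is a minimal sketch, in Python, of how a modified EUI-64 interface identifier is built from a MAC address and combined with an advertised /64 prefix. The prefix (a documentation prefix) and the MAC address are illustrative values, and the slaac_address helper is just a name chosen for this example.

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Build a SLAAC address from a /64 prefix and a MAC (modified EUI-64)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert FF:FE in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return net[iid]                                    # prefix + interface identifier

# Illustrative values only: a documentation prefix and an example MAC address.
print(slaac_address("2001:db8:0:1::/64", "00:1b:63:84:45:e6"))
# 2001:db8:0:1:21b:63ff:fe84:45e6
```

The last 64 bits of the resulting address are nothing more than the MAC address with one bit flipped and a fixed filler inserted, which is exactly what the observations below take advantage of.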
According to RFC7721, this process opens several new attack surfaces that did not exist in IPv4, primarily because the device has exposed more information about itself through the IPv6 address. First, the IPv6 address now contains at least some part of the OUI for the device. This OUI can be converted directly to a device manufacturer using any of the freely available OUI lookup pages. In fact, in many situations you can determine where and when a device was manufactured, and often what class of device it is. This kind of information gives attackers an “inside track” on determining what kinds of attacks might be successful against the device.
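Going the other direction is just as easy. The sketch below, again a hypothetical illustration, reverses the process on the address built above; the oui_vendors table is a tiny stand-in for a real OUI database.

```python
import ipaddress

def mac_from_slaac(addr: str) -> str:
    """Recover the MAC address embedded in a modified EUI-64 SLAAC address."""
    iid = int(ipaddress.IPv6Address(addr)) & ((1 << 64) - 1)
    b = list(iid.to_bytes(8, "big"))
    b[0] ^= 0x02                      # flip the universal/local bit back
    mac = b[:3] + b[5:]               # drop the FF:FE filler
    return ":".join(f"{x:02x}" for x in mac)

# Hypothetical mapping; a real lookup would query a full OUI database.
oui_vendors = {"00:1b:63": "ExampleVendor"}

mac = mac_from_slaac("2001:db8:0:1:21b:63ff:fe84:45e6")
print(mac, "->", oui_vendors.get(mac[:8], "unknown vendor"))
```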
Second, if the IPv6 address is calculated from a local MAC address, the host bits of a device’s IPv6 address will remain the same regardless of where the device connects to the network. For instance, I may normally connect my laptop to a port in a desk in the Raleigh area. When I visit Sunnyvale, however, I will likely connect my laptop to a port in a conference room there. If I connect to the same web site from both locations, the site can infer I am using the same laptop from the host bits of the IPv6 address. Across time, an attacker can track my activities regardless of where I am physically located, allowing them to correlate my activities. Using the common lower bits, an attacker can also infer my location at any point in time.
Third, knowing what network adapters an organization is likely to use reduces the amount of raw address space that must be scanned to find active devices. If you know an organization uses Juniper routers, and you are trying to find all their routers in a data center or IX fabric, you don’t really need to scan the entire IPv6 address space. All you need to do is probe those addresses which would be formed using SLAAC with OUIs assigned to Juniper.
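To see how much this shrinks the search space: given a known /64 prefix and a single known OUI, an attacker only has to sweep the 24 device-specific bits of the MAC address, roughly 16.7 million candidates instead of 2^64 possible interface identifiers. A rough sketch of generating that candidate list follows; the prefix and OUI are, again, illustrative values rather than anything tied to a real vendor.

```python
import ipaddress

def candidate_addresses(prefix: str, oui: str):
    """Yield every SLAAC address a device with this OUI could form on this prefix."""
    net = ipaddress.IPv6Network(prefix)
    oui_bytes = bytes(int(b, 16) for b in oui.split(":"))
    top = bytes([oui_bytes[0] ^ 0x02]) + oui_bytes[1:] + b"\xff\xfe"
    for low in range(1 << 24):                 # 2^24 candidates, not 2^64
        iid = int.from_bytes(top + low.to_bytes(3, "big"), "big")
        yield net[iid]

# Illustrative prefix and OUI; a real scan would probe each candidate address.
gen = candidate_addresses("2001:db8:0:1::/64", "00:1b:63")
print(next(gen), next(gen))
```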
Beyond RFC7721, many devices also return their MAC address when responding to ICMPv6 probes in the time exceeded response. This directly exposes information about the host, so the attacker does not need to infer information from SLAAC-derived MAC addresses.
What can be done about these sorts of attacks?
The primary solution is to use semantically opaque identifiers when building IPv6 addresses using SLAAC—perhaps even using a cryptographic hash to create the base identifiers from which IPv6 addresses are created. The bottom line is, though, that you should examine the vendor documentation for each kind of system you deploy—especially infrastructure devices—as well as using packet capture tools to understand what kinds of information your IPv6 addresses may be leaking and how to prevent it.
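RFC 7217 describes one approach along these lines: derive a stable but semantically opaque interface identifier by hashing the prefix, the interface, and a locally held secret, so the address stays stable on a given network, reveals nothing about the hardware, and changes whenever the host moves to a different prefix. The sketch below captures only the general idea, not the exact RFC 7217 algorithm, and all of the inputs are made up.

```python
import hashlib
import ipaddress

def opaque_iid_address(prefix: str, iface: str, secret: bytes) -> ipaddress.IPv6Address:
    """Stable, semantically opaque interface identifier (simplified RFC 7217-style)."""
    net = ipaddress.IPv6Network(prefix)
    digest = hashlib.sha256(net.network_address.packed + iface.encode() + secret).digest()
    iid = int.from_bytes(digest[:8], "big")    # use 64 bits of the hash as the IID
    return net[iid]

# Made-up inputs; the secret would normally be generated once and kept on the host.
print(opaque_iid_address("2001:db8:0:1::/64", "eth0", b"local-secret-key"))
```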
The Hedge 12: Cyberinsecurity with Andrew Odlyzko
There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. Andrew Odlyzko joins Tom Ammon and me to talk about cyber insecurity.
Simpler is Better… Right?
A few weeks ago, I was in the midst of a conversation about EVPNs, how they work, and the use cases for deploying them, when one of the participants exclaimed: “This is so complicated… why don’t we stick with the older way of doing things with multi-chassis link aggregation and virtual chassis devices?” Sometimes it does seem like we create complex solutions when a simpler solution is already available. Since simpler is always better, why not just use them? After all, simpler solutions are easier to understand, which means they are easier to deploy and troubleshoot.
The problem is we too often forget the other side of the simplicity equation—complexity is required to solve hard problems and adapt to demanding environments. While complex systems can be fragile (primarily through ossification), simple solutions can flat out fail just because they can’t cope with changes in their environment.
As an example, consider MLAG. On the surface, MLAG is a useful technology. If you have a server that has two network interfaces but is running an application that only supports a single IP address as a service access point, MLAG is a neat and useful solution. Connect a single (presumably high reliability) server to two different upstream switches through two different network interface cards, which act like one logical Ethernet network. If one of the two network devices, optics, cables, etc., fails, the server still has network connectivity.
Neat.
But MLAG has well-known downsides, as well. There is a little problem with the physical locality of the cables and systems involved. If you have a service split among multiple servers, MLAG is no longer useful. If the upstream switches are widely separated, then you have lots of cabling fun and various problems with jitter and delay to look forward to.
There is also the little problem of MLAG solutions being mostly proprietary. When something fails, your best hope is a clueful technical assistance engineer on the other end of a phone line. If you want to switch vendors for some reason, you get the fun of taking the entire server out of operation for a maintenance window to make the change, along with the other costs of vendor lock-in.
EVPN, with its ability to attach a single host through multiple virtual Ethernet connections across an IP network, is a lot more complex on the surface. But looks can be deceiving… In the case of EVPN, you see the complexity “upfront,” which means you can (largely) understand what it is doing and how it is doing it. There is no “MLAG black box,” which is useful in cases where you must troubleshoot.
And there will be cases where you will need to troubleshoot.
Further, because EVPN is standards-based and implemented by multiple vendors, you can switch over one virtual connection at a time; you can switch vendors without taking the server down. EVPN is a much more flexible solution overall, opening up possibilities around multihoming across different pods in a butterfly fabric, or allowing the use of default MAC addresses to reduce table sizes.
Virtual chassis systems can often solve some of the same problems as EVPN, but again—you are dealing with a black box. The black box will likely never scale to the same size, and cover the same use cases, as a community-built standard like EVPN.
The bottom line
Sometimes starting with a more complex set of base technologies will result in a simpler overall system. The right system will not avoid complexity, but rather reduce it where possible and contain it where it cannot be avoided. If you find your system is inflexible and difficult to manage, maybe it’s time to go back to the drawing board and start with something a little more complex, or where the complexity is “on the surface” rather than buried in an abstraction. The result might actually be simpler.
The Hedge 11: Roland Dobbins on Working Remotely
I failed to include the right categories the first time, so this didn’t make it into the podcast catcher feeds correctly…
Network engineering and operations are both “mental work” that can largely be done remotely—but working remote is not only great in many ways, it is also often fraught with problems. In this episode of the Hedge, Roland Dobbins joins Tom and Russ to discuss the ins and outs of working remote, including some strategies we have found effective at removing many of the negative aspects.
Data Gravity and the Network
One “sideways” place to look for value in the network is in a place that initially seems far away from infrastructure: data gravity. Data gravity is not something you might often think about directly when building or operating a network, but it is something you think about indirectly. For instance, speeds and feeds, quality of service, and convergence time are all, in one way or another, side effects of data gravity.
As with all things in technology (and life), data gravity is not one thing, but two, one good and one bad—and there are tradeoffs. Because if you haven’t found the tradeoffs, you haven’t looked hard enough. All of this is, in turn, related to the CAP Theorem.
Data gravity is, first, a relationship between applications and data location. Suppose you have a set of applications that contain customer information, such as forms of payment, a record of past purchases, physical addresses, and email addresses. Now assume your business runs on three applications. The first analyzes past order information and physical location to try to predict potential interests in current products. The second fills in information on an order form when a customer logs into your company’s ecommerce site. The third is used by marketing to build and send email campaigns.
Think about where you would store this information physically, especially if you want the information used by all three applications to be as close to accurate as possible in near-real time. The CAP theorem dictates that if you choose to partition the data, say by keeping one copy of the customer address information in an on-premises fabric for analysis and another in a public cloud service for the ecommerce front end, you are going to have to live with either locking one copy of the record or the other while it is being updated, or with one of the two copies of the record being out-of-date while it is being used (the database can either have a real-time locking system or be eventually consistent). If you really want the data used by all three applications to be accurate in near-real time, the only solution is to run all three applications on the same physical infrastructure, so there can be one unified copy of the data.
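A toy sketch makes the second option, eventual consistency, concrete: a write lands on one copy immediately, while readers of the other copy see stale data until replication catches up. The ReplicatedRecord class and the delay values below are purely illustrative.

```python
import threading
import time

class ReplicatedRecord:
    """Two copies of a customer record, replicated asynchronously."""
    def __init__(self, value):
        self.on_prem = value          # copy used by the analysis application
        self.cloud = value            # copy used by the ecommerce front end

    def update_on_prem(self, value, replication_delay=0.5):
        self.on_prem = value          # the local copy is current immediately
        def replicate():
            time.sleep(replication_delay)   # WAN transfer, batching, and so on
            self.cloud = value
        threading.Thread(target=replicate).start()

record = ReplicatedRecord("old address")
record.update_on_prem("new address")
print("cloud copy right after the write:", record.cloud)        # still "old address"
time.sleep(1)
print("cloud copy once replication completes:", record.cloud)   # now "new address"
```

Locking the record instead would keep both copies consistent, but every update would then pay the round-trip cost across the network before either application could proceed.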
This causes the first instance of data gravity. If you want the ordering front end to work well, you need to unify the data stores, and move them physically close to the application. If you want the other applications that interact with this same data to work well, you need to move them to the same physical infrastructure as the data they rely on. Data gravity, in this case, acts like a magnet; the more data that is stored in a location, the more applications will want to be in that same location. The more applications that run on a particular infrastructure, the more data will be stored there, as well. Data follows applications, and applications follow data. Data gravity, however, can often work against the best interests of the business. The public cloud is not the most efficient place to house every piece of data, nor is it the best place to do all processing.
Building and operating a network that moves data efficiently minimizes the impact of the CAP theorem, and with it the impact of this kind of data gravity.
The second instance of data gravity is sometimes called Kai-Fu Lee’s Virtuous Cycle, or the KL-VC. Imagine the same set of applications, only now look at the analysis part of the system more closely. Clearly, if you knew more about your customers than their location, you could perform better analysis, know more about their preferences, and be able to target your advertising more carefully. This might (or might not—there is some disagreement over just how fine-grained advertising can be before it moves from useful to creepy and productive to counterproductive) lead directly to increased revenues flowing from the ecommerce site.
But obtaining, storing, and processing this data means moving this data, as well—a point not often considered when thinking about how rich a shopping experience we might be able to offer. Again, the network provides the key to moving the data so it can be obtained, stored, and processed.
At first glance, this might all still seem like commodity stuff—it is still just bits on the wire that need to be moved. Deeper inspection, however, reveals this simply is not true. What matters is understanding where data lives and how applications need to use it, the tradeoffs CAP imposes on that data, and how the company’s way of doing business would change if that data could be moved more quickly and effectively. Understanding which data flow has real business impact, and when, is a gateway into understanding how to add business value.
If you could say to your business leaders: let’s talk about where data is today, where it needs to be tomorrow, and how we can build a network where we can consciously balance between network complexity, network cost, and application performance, would that change the game at all? What if you could say: let’s talk about building a network that provides minimal jitter and fixed delay for defined workloads, and yet allows data to be quickly tied to public clouds in a way that allows application developers to choose what is best for any particular use case?
Data, applications, and the meaning of the network
Two things seem to be universally true in the network engineering space at this moment. The first is that network engineers are convinced their jobs will not exist within the next five years, or that the only network engineers left will be “in the cloud.” The second is a mad scramble to figure out how to add value to the business through the network. These two movements are, of course, mutually exclusive visions of the future. If there is absolutely no way to add value to a business through the network, then it only makes sense to outsource the whole mess to a utility-level provider.
The result, far too often, is for the folks working on the network to run around like they’ve been in the hot aisle so long their hair is on fire. This result, however, somehow seems less than ideal.
I will suggest there are alternate solutions available if we just learn to think sideways and look for them. Burning hair is not a good look (unless it is an intentional part of some larger entertainment). What sort of sideways thinking am I looking for? Let’s begin by going back to basics and asking a question that might be a bit dangerous to ask—do applications really add business value? They certainly seem to. After all, when you want to know or do something, you log into an application that either helps you find the answer or provides a way to get it done.
But wait—what underlies the application? Applications cannot run on thin air (although I did just read someplace that applications running on “the cloud” are the only ones that add business value, implying applications running on-premises do not). They must have data, or information, in order to do their jobs (like producing reports, or allowing you to order something). In fact, one of the major problems developers face when switching from one application to another for a given task is figuring out how to transfer the data.
This seems to imply that data, rather than applications, is at the heart of the business. When I worked for a large enterprise, one of my favorite points to make in meetings was we are not a widget company… we are a data company. I normally got blank looks from both the IT and the business folks sitting in the room when I said this—but just because the folks in the room did not understand it does not mean it is not true.
What difference does this make? If the application is the center of the enterprise world, then the network is well and truly a commodity that can, and should, be replaced with the cheapest version possible. If, however, data is at the heart of what a business does, then the network and the application are equal partners in information technology. It is not that one is “more important” while the other is “less important;” rather, the network and the applications just do different things for and to one of the core assets of the business—information.
After all, we call it information technology, rather than application technology. There must be some reason “information” is in there—maybe it is because information is what really drives value in the business?
How does changing our perspective in this way help? After all, we are still “stuck” with a view of the network that is “just about moving data,” right? And moving data is just about as exciting as moving, well… water through pipes, right?
No, not really.
Once information is the core, then the network and applications become “partners” in drawing value out of data in a way that adds value to the business. Applications and the network are both “fungible,” in that they can be replaced with something newer, more effective, better, etc., but neither is really more important than the other.
This post has gone on a bit long in just “setting the stage,” so I’ll continue this line of thought next week.