Learning to Trust

The state of automation among enterprise operators has been a matter of some interest this year, with several firms undertaking studies of the space. Juniper, for instance, recently released the first yearly edition of the SONAR report, which surveyed many network operators to set a baseline for a better future understanding of how automation is being used. Another recent report in this area is Enterprise Network Automation for 2020 and Beyond, conducted by Enterprise Management Associates.

While these reports are, themselves, interesting for understanding the state of automation in the networking world, one correlation noted on page 13 of the EMA report caught my attention: “Individuals who primarily engage with automation as users are less likely to fully trust automation.” This observation is set in parallel with two others on that same page: “Enterprises that consider network automation a high priority initiative trust automation more,” and “Individuals who fully trust automation report significant improvement in change management capacity.” It seems somewhat obvious these three are related in some way, but how? The answer to this, I think, lies in the relationship between the person and the tool.

We often think of tools as “abstract objects,” a “thing” that is “out there in the world.” It’s something we use to get a particular result, but which does not, in turn, impact “me as a person” in any way. This view of tools is not borne out in the real world. To illustrate, consider a simple situation: you are peacefully driving down the road when another driver runs a traffic signal, causing your two cars to collide. When you jump out of your car, do you say… “you caused your vehicle to hit mine?” Nope. You say: “you hit me.”

This kind of identification between the user and the tool is widely recognized and remarked upon. Going back to 1904, Thorstein Veblen writes that the “machine throws out anthropomorphic habits of thought,” forcing the worker to adapt to the work, rather than the work to the worker. Marshall McLuhan says “Students of computer programming have had to learn how to approach all knowledge structurally,” shaping the way they organize information so the computer can store and process it.

What does any of this have to do with network automation and trust? Two things. First, the more “involved” you are with a tool, the more you will trust it. People trust hammers more than they do cars (in general) because the use of hammers is narrow, and the design and operation of the tool is fairly obvious. Cars, on the other hand, are complex; many people simply drive them, rather than learning how they work. If you go to a high-speed or off-road driving course, the first thing you will be taught is how a car works. This is not an accident—in learning how a car works, you are learning to trust the tool. Second, the more you work with a tool, the more you will understand its limits, and hence the more you will know when you can, and cannot, trust it.

If you want to trust your network automation, don’t just be a user. Be an active participant in the tools you use. This explains the correlation between the level of trust, level of engagement, and level of improvement. The more you participate in the development of the tooling itself, and the more you work with the tools, the more you will be able to trust them. Increased trust will, in turn, result in increased productivity and effectiveness. To some degree, your way of thinking will be shaped to the tool—this is just a side effect of the way our minds work.

You can extend this lesson to all other areas of network engineering—for instance, if you want to trust your network, then you need to go beyond just configuring routing, and really learn how it works. This does not mean you need in-depth knowledge of any particular implementation, nor does it mean knowing how every possible configuration option works in detail, but it does mean knowing how the protocol converges, what the limits to the protocol are, etc. Rinse and repeat for routers, storage systems, quality of service, etc.—and eventually you will not only be able to trust your tools, but also be very productive and effective with them.

Weekend Reads 120619

Vulnerability assessments are useful for detecting security issues within your environment. By identifying potential security weaknesses, these assessments help organizations reduce the risk of a digital criminal infiltrating their systems. These assessments also help organizations learn more about their assets in a meaningful way that allows them to improve their overall security posture. —Ben Layer

Older information-technology professionals are being passed over by employers, even as IT job openings soar to record highs and employers say recruiting tech talent is a challenge. —Angus Loten

Security researchers at SRLabs have found a number of vulnerabilities with the way carriers around the world are implementing RCS, the new messaging standard designed to replace SMS, Motherboard reports. In some cases, these issues could compromise a user’s location data, they could allow their text messages or calls to be intercepted, or they might allow their phone number to be spoofed. —Jon Porter

A newly discovered vulnerability in the Android operating system could let attackers abuse legitimate apps to deliver malware. In doing so, they could track users without their knowledge. —Kelly Sheridan

On 1 October, APNIC introduced a special type of inet[6]num (that is, either an inetnum or an inet6num) record, called a whois stub record, into the APNIC Whois Database. It aims to fill in a few gaps in the data and improve query results, as will be demonstrated later in this article. —Rafael Cintra

Today, we’re happy to announce that 80% of Android apps are encrypting traffic by default. The percentage is even greater for apps targeting Android 9 and higher, with 90% of them encrypting traffic by default. —Bram Bonné

However, there are customers who prefer to have the compute and the rest of the infrastructure hosted within their own data centers. Their reasons include security, data governance and low latency. The AWS Outposts solution is designed to bring AWS Cloud services to customers’ on-premises data centers. An AWS Outposts Rack is delivered to a customer site as a preconfigured standalone rack, requiring only power and network connectivity to begin providing service in customers’ data centers. —Chris Spain

Let’s step back into the blockchain jungle and take a look at the current state of the ecosystem and the projects trying to solve some of the limitations of blockchain technology: speed and throughput, cross-blockchain information and value exchange, governance, and identity and account management. —Axel Smith

On the ‘Net: So many selfies, so little self

As the Industrial Revolution began to gain momentum, thinkers often decried technological progress as an “atomizing force” that split communities by emphasizing the individual over the group. Living alone in a crowd is documented in a number of books—including Bowling Alone, Alone Together, and Antisocial Media. Perhaps there is no more poignant expression of atomization than Moby & the Void Pacific Choir’s “Are You Lost in the World Like Me?”

On the ‘Net: RFC1925 Rule 6A

The truth is, however, that while protocol designers may talk about these things, and network designers study them, very few networks today are built using any of these models. What is often used instead is what might be called the Infinitely Layered Functional Indirection (ILFI) model of network engineering. In this model, nothing is solved at a particular layer of the network if it can be moved to another layer, whether successfully or not.

The Hedge 14: Ron Bonica and SRm6

SRv6 uses IPv6 header fields to perform many of the same traffic engineering, fast reroute, and other functions available through MPLS. The size of the header with a large label stack, however, can be problematic from a performance perspective. Further, adding the concept of actions to SRv6 would bring a lot of new functionality into view. On this episode of the Hedge podcast, Ron Bonica joins Russ White to talk about SRm6, or Segment Routing Mapped to the v6 address space, which compacts the label stack and actions into a smaller space, resulting in an easier to deploy version of SRv6.


Lessons in Location and Identity through Remote Peering

We normally encounter four different kinds of addresses in an IP network; we tend to think about each of these as:

  • The MAC address identifies an interface on a physical or virtual wire
  • The IP address identifies an interface on a host
  • The DNS name identifies a host
  • The port number identifies an application or service running on the host

There are other address-like things, of course, such as the protocol number, a router ID, an MPLS label, etc. But let’s stick to these four for the moment. Looking through this list, the first thing you should notice is that we often use the IP address as if it identified a host—which is generally not a good thing. There have been some efforts in the past to split the locator from the identifier, but the IP protocol suite was designed with a separate locator and identifier already: the IP address is the locator and the DNS name is the identifier.
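These four kinds of addresses are easy to see from code. Here is a minimal Python sketch using only the standard library; `localhost` and port 443 are illustrative stand-ins for any DNS name and service, and `uuid.getnode()` returns the MAC of one local interface (or a random value if none can be read):

```python
import socket
import uuid

# DNS name: identifies a host. Resolving it yields the locator (IP address).
name = "localhost"  # illustrative; any resolvable name works
ip, port = socket.getaddrinfo(name, 443, proto=socket.IPPROTO_TCP)[0][4][:2]

print("DNS name:", name)    # identifier for the host
print("IP address:", ip)    # locates an interface on that host
print("Port:", port)        # identifies the service (443 = HTTPS)

# MAC address: identifies an interface on the (physical or virtual) wire.
print("MAC (hex):", f"{uuid.getnode():012x}")
```

Note that the resolver hands back the locator/port pair together in the `sockaddr` tuple—the protocol suite bundles “where” and “which service” at connection time, while the DNS name (“who”) is discarded once resolution completes.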

Even if you split up the locator and the identifier, however, the word locator is still quite ambiguous because we often equate the geographical and topological locations. In fact, old police procedural shows used to include scenes where a suspect was tracked down because they were using an IP address “assigned to them” in some other city… When the topic comes up this way, we can see the obvious flaw. In other situations, conflating the IP address with the location of the device is less obvious, and causes more subtle problems.

Consider, for instance, the concept of remote peering. Suppose you want to connect to a cloud provider who has a presence in an IXP that’s just a few hundred miles away. You calculate the costs of placing a router on the IX fabric, add it to the cost of bringing up a new circuit to the IX, and … well, there’s no way you are ever going to get that kind of budget approved. Looking around, though, you find there is a company that already has a router connected to the IX fabric you want to be on, and they offer a remote peering solution, which means they offer to build an Ethernet tunnel across the public Internet to your edge router. Once the tunnel is up, you can peer your local router to the cloud provider’s network using BGP. The cloud provider thinks you have a device physically connected to the local IX fabric, so all is well, right?

In a recent paper, a group of researchers looked at the combination of remote peering and anycast addresses. If you are not familiar with anycast addresses, the concept is simple: take a service which is replicated across multiple locations and advertise every instance of the service using a single IP address. This is clever because when you send packets to the IP address representing the service, you will always reach the closest instance of the service. So long as you have not played games with stretched Ethernet, that is.
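The “closest instance” behavior, and how a stretched Ethernet tunnel breaks it, can be modeled in a few lines. This is a toy sketch with hypothetical RTT numbers and made-up IX names—nothing here queries a real network. Routing simply delivers the packet to the lowest-cost replica, but a tunnel hides the client’s real distance from the routing system:

```python
# Hypothetical topological costs (RTT in ms) from one client to each
# replica of an anycast service. All replicas share one IP address;
# routing delivers packets to the lowest-cost instance.
instances = {
    "IX-Frankfurt": 12.0,
    "IX-London": 9.0,
    "IX-Ashburn": 95.0,
}

def anycast_pick(costs):
    """Return the instance routing would select: the lowest-cost one."""
    return min(costs, key=costs.get)

print(anycast_pick(instances))  # the topologically "closest" replica

# Remote peering: a stretched-Ethernet tunnel makes a distant client look
# locally attached, so routing still selects the same replica even though
# the real path now includes the tunnel's extra latency.
tunnel_rtt = 110.0  # hypothetical RTT across the Ethernet tunnel
real_latency = instances[anycast_pick(instances)] + tunnel_rtt
print(real_latency)  # far worse than the 95.0 ms direct path to IX-Ashburn
```

The point of the sketch: the anycast selection is still “correct” from the routing system’s view, but the client would have been better served by the replica that looks farthest away.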

In the paper, the researchers used one set of mechanisms to figure out where remote peering was taking place, and another to discover services being advertised using anycast (normally DNS or CDN services). Using the intersection of these two, they determined whether remote peering was impacting the performance of any of these services. I am shocked, shocked, to tell you the answer is yes. I would never have expected stretched Ethernet to have a negative impact on performance. 😊

To quote the paper directly:

…we found that 38% (126/332) of RTTs in traceroutes towards anycast prefixes potentially affected by remote peering are larger than the average RTT of prefixes without remote peering. In these 126 traceroute probes, the average RTT towards prefixes potentially affected by remote peering is 119.7 ms while the average RTT of the other prefixes is 84.7 ms.

The bottom line: “An average latency increase of 35.1 ms.” This is partially because the two different meanings of the word location come into play when you are interacting with services like CDNs and DNS. These services will always try to serve your requests from a physical location close to you. When you are using Ethernet stretched over IP, however, your topological location (where you connect to the network) and your geographical location (where you are physically located on the face of the Earth) can be radically different. Think about the mental dislocation when you call someone with an area code that is normally tied to an area of the west coast of the US, and yet you know they now live around London, say…

We could probably add in a bit of complexity to solve these problems, or (even better) just include your GPS coordinates in the IP header. After all, what’s the point of privacy? … 🙂 The bottom line is this: remote peering might be a good idea when everything else fails, of course, but if you haven’t found the tradeoffs, you haven’t looked hard enough. It might be that application performance across a remote peering session is poor enough that paying for a direct connection turns out to be cheaper in the end.

In the meantime, wake me up when we decide that stretching Ethernet over IP is never a good thing.

Weekend Reads 112919

The United States on Tuesday set out a procedure to protect its telecommunications networks and their supply chains from national security threats, saying it would consider whether to bar transactions on a case-by-case basis. —David Shepardson

Last Thursday, Tesla CEO Elon Musk unveiled Tesla’s latest innovation, the Cybertruck (or, as he prefers to say, CYBRTRCK). Tesla already has—if Musk’s cryptic tweet embedded below is correct—at least 200,000 preorders (though the fact that only a $100 down payment is required means that enthusiasm is not very expensive)… —Brendan Dixon

The proportion for the Golden Ratio is 1:1.618. It is a mathematical equation that has found its way into design practices as well. The golden ratio has been scientifically proven beautiful. The best example to understand the importance of the Golden Ratio can be traced back to one of the most famous paintings: the Mona Lisa. The painting itself uses the golden ratio. —Harsh Raval

After all these “cybersecurity” rules are in place, no foreign company may encrypt data so that it cannot be read by the Chinese central government and the Communist Party of China. In other words, businesses will be required to turn over encryption keys. —Gordon G. Chang

Bean counters have noted that many iconic businesses (Uber, Lyft, Airbnb, WeWork, etc.) are not very profitable. A market shakeout will probably raise the cost of urban living. Here’s some background… —Denyse O’Leary

The minimum viable product (MVP) approach is the minimal or “lean” way to give consumers what they want without it necessarily being a fully realized idea. Given how the cloud works and its unprecedented ability to test incomplete ideas, the MVP approach has become the dominant methodology for pushing ideas out into the world. —John Maeda

By eliminating all the check-out steps required to buy something online, 1-click gave Amazon a decisive edge against cart abandonment, which, according to some studies, averages 70 percent and remains one of the two or three biggest challenges to online retailers. 1-click made impulse buys on the web actually impulsive. —Cliff Kuang

Have you ever worked with someone that has the most valuable time in the world? Someone that counts each precious minute in their presence as if you’re keeping them from something very, very important that they could use to solve world hunger or cure cancer? If you haven’t then you’re a very lucky person indeed. Sadly, almost everyone, especially those in IT, has had the misfortune to be involved with someone whose time is more precious than platinum-plated saffron. —Tom Hollingsworth

The growing adoption of multifactor authentication (MFA) has resulted in a proportionate rise in cyberattacks that target MFA technologies. In a recent Private Industry Notification (PIN), the Federal Bureau of Investigation (FBI) recognized how recent cyberattack campaigns are focusing directly on circumventing MFA. The FBI outlined three specific and comprehensive tactics that hackers have been developing in order to bypass MFA. —Tanner Johnson