Hedge 90: Andrew Wertkin and a Naïve Reliance on Automation

Automation is surely one of the best things to come to the networking world—the ability to consistently apply a set of changes across a wide array of network devices has increased the speed at which network engineers can respond to customer requests, increased the security of the network, and reduced the number of hours required to build and maintain large-scale systems. There are downsides to automation, as well—particularly when operators begin to rely on automation to solve problems that really should be solved someplace else.

In this episode of the Hedge, Andrew Wertkin from Bluecat Networks joins Tom Ammon and Russ White to discuss the naïve reliance on automation.


The Hedge 85: Terry Slattery and the ROI of Automation

It’s easy to assume automation can solve anything and that it’s cheap to deploy—that there are a lot of upsides to automation, and no downsides. In this episode of the Hedge, Terry Slattery joins Tom Ammon and Russ White to discuss something we don’t often talk about, the Return on Investment (ROI) of automation.



The Hedge 79: Brooks Westbrook and the Data Driven Lens

Many networks are designed and operationally driven by the configuration and management of features supporting applications and use cases. For network engineering to catch up to the rest of the operational world, it needs to move rapidly towards data driven management based on a solid understanding of the underlying protocols and systems. Brooks Westbrook joins Tom Ammon and Russ White to discuss the data driven lens in this episode of the Hedge.


The Hedge 73: Daniel Teycheney and Open Source in Networking

Combining, or stitching together, open source projects to build something unique for your network is becoming more common. What does this look like in the real world? What are some of the positive and negative aspects of building things this way? How do open source projects interact with the commercial world? Daniel Teycheney joins Tom Ammon, Jeff Tantsura, and Russ White to discuss open source software in networking, particularly around network monitoring and management.


Underhanded Code and Automation

So, software is eating the world—and you thought this was going to make things simpler, right? If you haven’t found the tradeoffs, you haven’t looked hard enough. I should trademark that or something! 🙂 While a lot of folks are thinking about code quality and supply chain security, there are a lot of little “side trails” organizations do not tend to think about. One such was recently covered in a paper on underhanded code, which is code designed to pass a standard review but which can be used to harm the system later on. For instance, you might see something like this at some spot—

if (buffer_size=REALLYLONGDECLAREDVARIABLENAMEHERE) {
    /* do some stuff here */
} /* end of if */

Can you spot what the problem might be? In C, the = (assignment) is different from the == (comparison). Which should it really be here? Even astute reviewers can easily miss this kind of detail—not least because it could be an intentional construction. Using a strongly typed language like Rust can help prevent this kind of thing (listen to this episode of the Hedge for more information on Rust), but nothing beats having really good code formatting rules, even if they are apparently arbitrary, for catching these things.
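
Compilers will also flag this for you if you let them. Here is a minimal sketch (not from the paper): the constant value, the function name, and the long variable name are all hypothetical, chosen only to mirror the fragment above, and building with warnings enabled (for example, gcc or clang with -Wall) produces a “suggest parentheses around assignment used as truth value” style warning on the first test.

#include <stddef.h>

/* hypothetical limit; the long name stands in for the one in the fragment above */
static const size_t REALLYLONGDECLAREDVARIABLENAMEHERE = 4096;

void check_buffer(size_t buffer_size) {
    /* -Wall warns here: an assignment is being used as a truth value */
    if (buffer_size = REALLYLONGDECLAREDVARIABLENAMEHERE) {
        /* do some stuff here */
    }

    /* the comparison almost certainly intended */
    if (buffer_size == REALLYLONGDECLAREDVARIABLENAMEHERE) {
        /* do some stuff here */
    }
}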

The paper above lists these—

  • Use syntax highlighting and typefaces that clearly distinguish characters. You should be able to easily tell the difference between a lowercase l and a 1.
  • Require all comments to be on separate lines. This is actually pretty hard in C, however.
  • Prettify code into a standard format not under the attacker’s control.
  • Use compiler warnings and static analysis.
  • Forbid unneeded dangerous constructions.
  • Use runtime memory corruption detection.
  • Use fuzzing.
  • Watch your test coverage.

Not all of these are directly applicable to the network engineer dealing with automation, but they do provide some good pointers, or places to start. A few more…

Yoda assignments are named after Yoda’s habit of placing the verb before the subject: “succeed you will…” It’s not technically wrong in terms of grammar, but it is just hard enough to parse that it makes you listen carefully and think a bit harder. In software development, the variable taking the assignment should be on the left, and the thing being assigned should be on the right. Reversing these is a Yoda assignment; it’s technically correct, but it’s harder to read.
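
A minimal sketch of the difference, using hypothetical names and a hypothetical limit of 4096; both forms compile to exactly the same thing, the second is just the reversed, Yoda-style ordering.

#include <stddef.h>

void yoda_example(size_t buffer_size) {
    if (buffer_size == 4096) {      /* conventional order: variable on the left */
        /* do some stuff here */
    }

    if (4096 == buffer_size) {      /* Yoda order: technically correct, harder to read */
        /* do some stuff here */
    }
}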

Arbitrary standardization is useful when there are many options that ultimately result in the same outcome. Don’t let options proliferate just because you can.

Use macros!
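
One way to read this (a sketch under that assumption, not something taken from the paper): wrap an easy-to-get-wrong test in a single, reviewed definition and reuse it everywhere, so there is only one place for an underhanded = to hide. The macro and function names below are hypothetical.

#include <stddef.h>

/* one reviewed definition of the test, reused everywhere */
#define BUFFER_AT_LIMIT(size, limit) ((size) == (limit))

void macro_example(size_t buffer_size, size_t limit) {
    if (BUFFER_AT_LIMIT(buffer_size, limit)) {
        /* do some stuff here */
    }
}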

There are probably plenty more, but this is an area where we really are not paying attention right now.

Tradeoffs Come in Threes

On a Spring 2019 walk in Beijing I saw two street sweepers at a sunny corner. They were beat-up looking and grizzled but probably younger than me. They’d paused work to smoke and talk. One told a story; the other’s eyes widened and then he laughed so hard he had to bend over, leaning on his broom. I suspect their jobs and pay were lousy and their lives constrained in ways I can’t imagine. But they had time to smoke a cigarette and crack a joke. You know what that’s called? Waste, inefficiency, a suboptimal outcome. Some of the brightest minds in our economy are earnestly engaged in stamping it out. They’re winning, but everyone’s losing. —Tim Bray

This, in a nutshell, is what is often wrong with our design thinking in the networking world today. We want things to be efficient, wringing the last little dollar and the last little bit of bandwidth out of everything.

This is also, however, a perfect example of the problem of triads and tradeoffs. In the case of the street sweeper, we might think, “well, we could replace those folks sitting around smoking a cigarette and cracking jokes with a robot, making things much more efficient.” We might notice the impact on the street sweepers’ salaries—but after all, it’s a boring job, and they are better off doing something else anyway, right?

We’re actually pretty good at finding, and “solving” (for some meaning of “solving,” of course), these kinds of immediately obvious tradeoffs. It’s obvious the street sweepers are going to lose their jobs if we replace them with a robot. What might not be so obvious is the loss of the presence of a person on the street. That’s a pair of eyes who can see when a child is being taken by someone who’s not a family member, a pair of ears that can hear the rumble of a car that doesn’t belong in the neighborhood, a pair of hands that can help someone who’s fallen, etc.

This is why these kinds of tradeoffs always come in (at least) threes.

Let’s look at the street sweepers in terms of the SOS triad. Replacing the street sweepers with a robot or machine certainly increases optimization. According to the triad, though, increasing optimization in one area should result in some increase in complexity someplace, and some loss of optimization in other places.

What about surfaces? The robot must be managed, and it must interact with people and vehicles on the street—which means people and vehicles must also interact with the robot. Someone must build and maintain the robot, so there must be some sort of system, with a plethora of interaction surfaces, to make this all happen. So yes, there may be more efficiency, but there are also more interaction surfaces to deal with now. These interaction surfaces increase complexity.

What about state? In a sense, there isn’t much change in state other than moving it—purely in terms of sweeping the street, anyway. The sweeper and the robot must both understand when and how to sweep the street, etc., so the state doesn’t seem to change much here.

On the other hand, that extra set of eyes and ears, that extra mind, that is no longer on the street in a personal way represents a loss of state. The robot is an abstraction of the person who was there before, and abstraction always represents a loss of state in some way. Whether this loss of state decreases the optimal handling of local neighborhood emergencies is probably a non-trivial problem to consider.

The bottom line is this—when you go after efficiency, you need to think in terms of efficiency of what, rather than efficiency as a goal-in-itself. That’s because there is no such thing as “efficiency-in-itself,” there is only something you are making more efficient—and a lot of things you are potentially making less efficient.

Automate your network, certainly, or even buy a system that solves “all the problems.” But remember there are tradeoffs—often a large number of tradeoffs you might not have thought about—and those tradeoffs have consequences.

It’s not “if you haven’t found the tradeoff, you haven’t looked hard enough…” It’s “if you haven’t found the tradeoffs, you haven’t looked hard enough.” It’s plural for a reason; don’t stop at one.

Learning from Failure at Scale

One of the difficulties for the average network operator trying to understand their failure rates and reasons is they just don’t have enough devices, or enough incidents, to make informed observations. If you have a couple of dozen switches, it is often hard to understand how often software defects take a device down versus human error (Mean Time Between Mistakes, or MTBM). As networks become larger, however, more information becomes available, and more interesting observations can be made. A recent paper written in conjunction with Facebook uses information from Facebook’s data center fabrics to make some observations about the rate and severity of different kinds of failures—needless to say, the results are fairly interesting.

To produce the study, the authors took data from Facebook’s ticket logging system over 6 years, from 2011 through 2018. They used language-based systems to classify each event based on severity, kind of remediation, and root cause. Once the events were classified, the researchers plotted and tried to understand the results. For instance, table 2 lists the most common root causes of data center fabric incidents: 17% were maintenance, 13% misconfiguration, 13% hardware, and 12% software defects (bugs).

Given Facebook’s network is completely automated, with a full code review/canary process for validating changes before they are put into production, misconfiguration failures should be lower than in a manually operated network. That 13% of failures are still accounted for by misconfiguration shows even the best automation program cannot eliminate failures from misconfiguration. This number is also interesting because it implies networks without this degree of automation must have much higher failure rates due to misconfiguration. While the raw number of failures is not given, this seems to provide both an idea of how much improvement automation can create and a sort of “cap” on how much improvement operators can expect by automating.

If misconfiguration causes 13% of all failures, and software defects cause 12%, then 25% of all failures are caused by human error. I don’t know of any other studies of this kind, but 25% sounds about right based on years of experience. Whether this 25% is spread across failures in vendor code and operator configuration, or across operator-created code and operator configuration, the percentage of failures seems to remain about the same. It is not likely you can eliminate failures caused by human error, nor are you likely to drive the number down by more than a couple of percentage points.

Another interesting finding here is larger networks increase the time humans take to resolve incidents. As the size of the network scales up, the MTTR scales up with it. This is intuitive—larger networks tend to have more complex configurations, leading to more time spent trying to chase down and understand a problem. One thing the paper does not discuss, but might be interesting, is how modularization impacts these numbers. Intuitively, containing failures within a module (whether horizontally along topological lines or vertically through virtualization) should decrease the scope in which a network engineer needs to search to find a problem and resolve it. This is, on the other hand, likely to be offset somewhat by the increased complexity and reduction in visibility caused by segmentation—so it’s hard to determine what the overall effect of deeper segmentation in a network might be.

Overall, this is an interesting paper to parse through and understand—there are lots of great insights here for network operators at any scale.