SKILLS
Hedge 175: Mike B on Personal Superpowers
When the economy starts contracting, career advisors start talking about the importance of “soft skills.” What are “soft skills,” exactly, and why are they “soft?” Mike Bushong joins Tom Ammon and Russ White to talk about why these skills are important, why they are not “soft,” and how we should talk about people skills instead. They are “superpowers,” and there isn’t anything “soft” about them.
Hedge 174: Javier Antich and Cloud AI
ChatGPT has broken through the hype barrier and brought AI hype to the larger world. But what does AI mean to network engineers? We’ve talked about AI-driven network management for years, and commercial products abound, but what does it really mean to move from automation-driven configuration to AI-driven decision-making? Javier Antich joins Tom Ammon and Russ White for this episode of the Hedge to talk about cloud AI for network engineers.
Mean Time to Innocence is not Enough
A long time ago, I supported a wind speed detection system consisting of an impeller, a small electric generator, a 12-gauge cable running a few miles, and a voltmeter. The entire thing was calibrated through a resistive bridge: attach an electric motor to the generator, run it at a series of fixed speeds, and adjust the resistive bridge until the voltmeter, marked in knots of wind speed, read correctly.
The primary problem in this system was the several miles of 12-gauge cable. It was often damaged, requiring us to dig the cable up (shovel-ready jobs!), strip the cable back, splice the correct pairs together, seal it all in a plastic container filled with goo, and bury it all again. There was one instance, however, when we could not get the wind speed system adjusted correctly, no matter how we tried to tune the resistive bridge. We pulled things apart and determined there must be a problem in one of the (many) splices in the several miles of cable.
At first, we ran a Time Domain Reflectometer (TDR) across the cable to see if we could find the problem. The TDR turned up a couple of hot spots, so we dug those points up … and found there were no splices there. Hmmm … So we called in a specialized cable team. They ran the same TDR tests, dug up the same places, and then did some further testing and found … the cable was innocent.
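As an aside, the math a TDR relies on is simple: the instrument sends a pulse down the cable, times the reflection from any impedance discontinuity, and converts the round-trip time into a distance using the cable’s velocity factor. Here is a minimal sketch; the velocity factor of 0.66 is a typical value for copper pairs, assumed purely for illustration:

```python
# Estimate the distance to a cable fault from a TDR round-trip time.
C = 299_792_458  # speed of light in a vacuum, meters/second

def distance_to_fault(round_trip_seconds: float, velocity_factor: float = 0.66) -> float:
    """Return the one-way distance (meters) to an impedance discontinuity.

    The pulse travels to the fault and back, so the total path length
    is divided by two. velocity_factor scales c down to the propagation
    speed in this particular cable (an assumed, typical value here).
    """
    return (C * velocity_factor * round_trip_seconds) / 2

# A reflection arriving 10 microseconds after the pulse was sent
# indicates a discontinuity roughly 989 meters down the cable.
print(round(distance_to_fault(10e-6), 1))  # 989.3
```

The catch, as the story shows, is that a “hot spot” on a TDR trace only tells you where the impedance changes, not whether that change is actually the fault you’re hunting.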
This set off an argument, running all the way up to the base commander, between our team and the cable team. Whose fault was this mess? Our inability to measure the wind speed at one end of the runway was impacting flight operations, so this had to be fixed. But rather than fixing the problem, we were spending our time arguing about whose fault the problem was, and who should fix it.
This line from a recent CAIDA research paper rang very true:
“Measurement is political, and often adversarial.”
In Internet terms, speed, congestion, and even usage are often political and adversarial. Just like with the wind speed system, two teams were measuring the same thing to prove the problem wasn’t theirs, rather than to figure out what the problem was and how to fix it.
In other words, our goal is too often Mean Time to Innocence (MTTI), rather than Mean Time to Repair (MTTR).
MTTI is not enough. We need to work with our application counterparts to find and fix problems, rather than against them. Measurement should not be adversarial; it should be cooperative.
We need to learn to fix the problem, not the blame.
This is a cultural issue, but it also impacts the way we do telemetry. For instance, in the case of the wind speed indicator, the problem was ultimately a connection that “worked,” but with capacitive reactance high enough that some kinds of signals were attenuated while others were not. None of us were testing the cable using the right kind of signal, so we all just sat around arguing about whose problem it was rather than solving the problem.
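To make the physics concrete: the reactance of a capacitive fault falls as frequency rises, so the sharp, high-frequency edge of a TDR pulse can sail through a partly failed splice that badly attenuates the slow, near-DC signal a voltmeter depends on. A quick sketch of the standard reactance formula, Xc = 1/(2πfC), using an assumed 1 µF of stray capacitance purely for illustration:

```python
import math

def capacitive_reactance(frequency_hz: float, capacitance_farads: float) -> float:
    """Reactance (ohms) of a capacitance at a given frequency: Xc = 1/(2*pi*f*C)."""
    return 1.0 / (2 * math.pi * frequency_hz * capacitance_farads)

STRAY_C = 1e-6  # 1 microfarad of stray capacitance at the splice (assumed)

# Near-DC measurement signal: the splice looks like thousands of ohms in series.
print(round(capacitive_reactance(60, STRAY_C)))       # 2653 ohms

# High-frequency content of a fast TDR edge: the splice is nearly transparent.
print(round(capacitive_reactance(1e6, STRAY_C), 3))   # 0.159 ohms
```

Two tests, both “correct,” measuring two different electrical realities in the same cable: exactly the kind of gap that keeps teams arguing about innocence instead of fixing the fault.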
When a user brings a problem to you, resist the urge to go prove yourself, or your system, innocent. Even if your system isn’t the problem, it can provide information that helps solve the problem. Treat problems as opportunities to help rather than as opportunities to swish your superhero cape and prove your expertise.
Learning to Ride
Have you ever taught a kid to ride a bike? Kids always begin the process by shifting their focus from the handlebars to the pedals, trying to feel out how to keep the right amount of pressure on each pedal, control the handlebars, and keep moving … so they can stay balanced. During this initial learning phase, the kid will keep their eyes down, looking at the pedals, the handlebars, and … the ground.
After some time of riding, though, managing the pedals and handlebars is embedded in “muscle memory,” allowing them to get their heads up and focus on where they’re going rather than on the mechanical process of riding. After a lot of experience, bike riders can start doing wheelies, jumps, or off-road riding that goes far beyond basic balance.
Network engineering (any kind of engineering, really) is the same way.
At first, you need to focus on what you are doing. How is this configured? What specific output am I looking for in this show command? What field do I need to use in this data structure to automate that? Where do I look to find out about these fields, defects, etc.?
The problem is that it is easy to get stuck at this level, focusing on configurations, automation, and the “what” of things.
You’re not going to be able to get your head up and think about the longer term—the trail ahead, the end-point you’re trying to reach—until you commit these things to muscle memory.
The point, with technology, is learning to stop focusing on the pedals, the handlebars, and the ground, and start focusing on the goal, whether it’s nailing this jump, conquering this trail, or making it there.
Transitioning is often hard, of course, but it’s just like riding a bike. You won’t make the transition until you trust your muscle memory a bit at a time.
Learning the theory of how and why things work the way they do is a key part of this transition. Configuration is just the intersection of “how this works” with “what am I trying to do.” If you know how (and why) protocols work, and you know what you’re trying to do, configuration and automation become a matter of asking the right questions.
Learn the theory, and riding the bike will become second nature—rather than something you must focus on constantly.
Privacy for Providers
While this talk is titled privacy for providers, it really applies to just about every network operator. This is meant to open a conversation on the topic, rather than providing definitive answers. I start by looking at some of the kinds of information network operators work with, and whether this information can or should be considered “private.” In the second part of the talk, I work through some of the various ways network operators might want to consider when handling private information.
Hedge 128: Network Engineering at College

Have you ever thought about getting a college degree in computer networking? What are the tradeoffs between this and getting a certification? What is the state of network engineering at colleges—what do current students in network engineering programs think about their programs, and what do they wish was there that isn’t? Rick Graziani joins Tom Ammon and Russ White in a broad-ranging discussion on network engineering and college. Rick teaches network engineering full time in the Valley.
Keith’s Law (1)
I sometimes reference Keith’s Law in my teaching, but I don’t think I’ve ever explained it. Keith’s Law runs something like this:
Any large external step in a system’s capability is the result of many incremental changes within the system.
The reason incremental changes within a system appear as a single large step to outside observers is that the smaller changes are normally hidden by abstraction. This is, in fact, the purpose of abstraction: to hide small changes inside a system from external view. Keith’s Law is closely related to Clarke’s third law, that “any sufficiently advanced technology is indistinguishable from magic.” What looks like magic from the outside is really just a bunch of smaller things, each easier to understand on its own, combined into one single “thing” through abstraction.
If you’ve read this far, you’re probably thinking: what does this have to do with network engineering?
Well, several things, really.
First, the network is just an abstraction that moves packets for its users. Moving packets seems so … simple … to network users. You put data in here, and data comes out over there. All the little stuff that goes into making a network work is lost in the abstraction of the virtual connection between two hosts.
If you want users to understand why building a network is hard, you’re going to have to work hard at it. And you’re not likely to succeed—it’s often better just to live with the reality that users aren’t going to understand. Of course, this isn’t necessarily a bad thing, at least until it’s time to buy hardware and software to make all this magic work.
Second, no one outside the network is ever going to understand, on their own, the refactoring, simplification, and new features you’re trying to build into the network. Users will only understand these things when they are related to some bigger picture, something they can see beyond the abstraction the network presents.
If you’re going to justify doing new things, you need to do so in terms of “larger things,” things that can be seen from outside the abstraction.
Third, no one is going to pat you on the back for all the little things that need to be done to deploy a major new service. From the outside, that new service, or new cost savings, or whatever it might be, is all just indistinguishable from magic.
Keith’s Law is both good and bad. It means you need to learn how to frame your work in a way that users, who don’t have access to the inner workings of the network, can understand why you’re doing what you’re doing.
Turning this around, it also means you shouldn’t accept the “magic” of vendor products. That brilliant new capability your vendor is showing you is really made up of a lot of smaller components. The abstraction is just that: an abstraction. If you really want to understand the positive and negative consequences of deploying something new, you need to look beyond the abstraction.
