Hedge 91: Leslie Daigle and IP Addresses Acting Badly
What if you could connect a lot of devices to the Internet—without any kind of firewall or other protection—and observe attackers trying to find their way “in”? What might you learn from such an exercise? One thing you might learn is that a lot of attacks seem to originate from within a relatively small group of IP addresses—IP addresses acting badly. Listen in as Leslie Daigle of Thinking Cat and the Techsequences podcast, Tom Ammon, and Russ White discuss just such an experiment and its results.
Free Speech is More than Words
A couple of weeks ago, I joined Leslie Daigle and Alexa Reid on Techsequences to talk about free speech and the physical platform—does the right to free speech include the right to build and operate physical facilities like printing presses and web hosting? I argue it does. Listen in if you want to hear my argument, and how this relates to situations such as the “takedown” of Parler.
NATs, PATs, and Network Hygiene
While reading a research paper on address spoofing from 2019, I ran into a passage describing two ways NAT (really PAT) implementations fail when handling packets sourced from addresses outside their translation policy.
The authors state 49% of the NATs they discovered in their investigation of spoofed addresses fail in one of these two ways. From what I remember way back when the first NAT/PAT device (the PIX) was deployed in the real world (I worked in TAC at the time), there was a lot of discussion about what a firewall should do with packets sourced from addresses not indicated anywhere.
If I have an access list including 192.168.1.0/24, and I get a packet sourced from 192.168.2.24, what should the NAT do? Should it forward the packet, assuming it’s from some valid public IP space? Or should it block the packet because there’s no policy covering this source address?
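The two options can be sketched as a simple decision procedure. This is a minimal illustration, not any vendor's actual implementation; the pool, function names, and `default_forward` flag are all hypothetical.

```python
from ipaddress import ip_address, ip_network

# Hypothetical translation policy: the only inside network the NAT knows about.
INSIDE_POOL = ip_network("192.168.1.0/24")

def nat_decision(src: str, default_forward: bool = True) -> str:
    """Decide what to do with a packet arriving on the inside interface.

    default_forward=True mirrors the permissive behavior described above:
    a source outside the translation policy is passed through untranslated.
    default_forward=False drops anything the policy does not cover.
    """
    if ip_address(src) in INSIDE_POOL:
        return "translate"  # covered by policy: translate and forward
    return "forward-untranslated" if default_forward else "drop"

print(nat_decision("192.168.1.10"))                          # translate
print(nat_decision("192.168.2.24"))                          # forward-untranslated
print(nat_decision("192.168.2.24", default_forward=False))   # drop
```

The permissive default is what lets spoofed (or simply misconfigured) inside sources leak out with their original addresses intact.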
This is similar to the discussion about whether BGP speakers should send routes to an external peer if there is no policy configured. The IETF (though not all vendors) eventually came to the conclusion that BGP speakers should not advertise to external peers without some form of policy configured.
My instinct is the NATs here are doing the right thing—these packets should be forwarded—but network operators should be aware of this failure mode and configure their intentions explicitly. I suspect most operators don’t realize this is the way most NAT implementations work, and hence they aren’t explicitly filtering source addresses that don’t fall within the source translation pool.
In the real world, there should also be a box just outside the NATing device running unicast reverse path forwarding checks. This would prevent these sorts of spoofed packets from being forwarded into the DFZ—but uRPF is rarely implemented by edge providers, and most edge-connected operators (enterprises) don’t think about the importance of uRPF to their security.
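The idea behind strict-mode uRPF can be shown in a few lines. This is a toy sketch with an invented two-entry FIB, not router code: accept a packet only if the best route back to its source points out the interface the packet arrived on.

```python
from ipaddress import ip_address, ip_network

# Toy FIB: prefix -> interface the router would use to reach that prefix.
FIB = {
    ip_network("192.168.1.0/24"): "eth0",   # customer edge
    ip_network("0.0.0.0/0"):      "eth1",   # default route toward the DFZ
}

def urpf_strict(src: str, in_iface: str) -> bool:
    """Strict uRPF: pass only if the longest-match route back to the
    source address exits via the interface the packet came in on."""
    matches = [(p, i) for p, i in FIB.items() if ip_address(src) in p]
    best_iface = max(matches, key=lambda m: m[0].prefixlen)[1]
    return best_iface == in_iface

print(urpf_strict("192.168.1.10", "eth0"))  # True: source routed via eth0
print(urpf_strict("192.168.2.24", "eth0"))  # False: spoofed source, dropped
```

In this sketch the spoofed 192.168.2.24 source only matches the default route, which points toward the DFZ rather than back at the customer edge, so the check fails—exactly the leak a box outside the NAT would catch.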
All this to say—if you’re running a NAT or PAT, make certain you understand how it works. Filters are tricky in the best of circumstances. NAT and PATs just make filters trickier.
Hedge 90: Andrew Wertkin and a Naïve Reliance on Automation

Automation is surely one of the best things to come to the networking world—the ability to consistently apply a set of changes across a wide array of network devices has increased the speed at which network engineers can respond to customer requests, increased the security of the network, and reduced the number of hours required to build and maintain large-scale systems. There are downsides to automation, as well—particularly when operators begin to rely on automation to solve problems that really should be solved someplace else.
In this episode of the Hedge, Andrew Wertkin from Bluecat Networks joins Tom Ammon and Russ White to discuss the naïve reliance on automation.
Details and Complexity
What is the first thing almost every training course in routing protocols begins with? Building adjacencies. What is considered the “deep stuff” in routing protocols? Knowing packet formats and processes down to the bit level. What is considered the place where the rubber meets the road? How to configure the protocol.
I’m not trying to cast aspersions at widely available training, but I sense we have this all wrong—and this is a sense I’ve had ever since my first book was released in 1999. It’s always hard for me to put my finger on why I consider this way of thinking about network engineering less-than-optimal, or why we approach training this way.
Here, however, is one thing I think is going on—
We believe that by knowing ever-deeper reaches of detail about a protocol, we are not only more educated engineers, but we will be able to make better decisions in the design and troubleshooting spaces.
To some degree, we think we are managing the complexity of the protocol by “making our knowledge practical”—by knowing the bits, bytes, and configurations. This natural tendency to “dig in,” to learn more detail, turns out to be counterproductive. Continuing from the same article—
The scientific opinion of many psychologists and behavioral scientists suggests the key to time-sensitive decision making in complex and chaotic situations is simplicity, not complexity. Simple-to-remember rules of thumb, or heuristics, speed the cognitive process, enabling faster decisionmaking and action. Recognizing that heuristics have limitations and are not a substitute for basic research and analysis, they nevertheless help break complexity-induced paralysis and support the development of good plans that can achieve timely and acceptable results. The best heuristics capture useful information in an intuitive, easy-to-recall way. Their utility is in assisting decision makers in complex and chaotic situations to make better and timelier decisions.
Knowing why a protocol works the way it does—understanding what it’s doing and why—from an abstract perspective is, I believe, a more important skill for the average network engineer than knowing the bits and bytes—or the configuration.
Abstract correctly—but abstract more. Get back to the basics and know why things work the way they do. It’s easier to fill in the details if you know the how and why.
Hedge 89: Dana Iskoldski and A House Divided
Bluecat, in cooperation with an outside research consultant, just finished a survey and study on the lack of communication and divisions between the cloud and networking teams in deployments to support business operations. Dana Iskoldski joins Tom Ammon and Russ White to discuss the findings of their study, and make some suggestions about how we can improve communication between the two teams.
Please find a copy of the study at http://bluecatnetworks.com/hedge.
The Hedge 88: Todd Palino and Getting Things Done
I often feel like I’m “behind” on what I need to get done. Being a bit metacognitive, however, I often find this feeling is more related to not organizing things well, which means I often feel like I have so much to do “right now” that I just don’t know what to do next—hence “processor thrashing on process scheduler.” Todd Palino joins this episode of the Hedge to talk about the “Getting Things Done” technique (or system) of, well … getting things done.
