On the ‘net: Networking Models

I’m writing a series on network models over at Packet Pushers; links to the first three are below.

Looking back at my career in network engineering, beyond some basic concepts and naming conventions, I cannot remember using the OSI model once. I have used the concept of layering, but never the OSI model specifically.

First, models are not sacrosanct. A model is just a tool. If the model you are using is not working for you, feel free to modify it. I find simpler yet extensible models far more effective than complex models for channeling my thoughts. In the past I have tried building huge models that “put everything in one place,” but they do not work for me.

How does a host differ from a middlebox? Hosts are designed to run applications that create and consume packets rather than to forward traffic quickly through the network. The primary difference is that the source and destination are not two different ports on the same device, but rather a virtual interface representing an application on one side and a physical interface on the other.
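A rough sketch of that distinction (my own framing for illustration, not code from the article): on a middlebox, forwarding connects two physical ports, while on a host, one endpoint is an application's virtual interface.

```python
# Toy models of the two forwarding endpoints described above.
# The interface names are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Middlebox:
    # Traffic enters one physical port and leaves another.
    ingress_port: str
    egress_port: str

@dataclass
class Host:
    # Traffic originates or terminates at an application's virtual
    # interface; only one side touches a physical interface.
    app_interface: str       # e.g., a socket bound to an application
    physical_interface: str

router = Middlebox(ingress_port="eth0", egress_port="eth1")
server = Host(app_interface="socket:443", physical_interface="eth0")
```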

Hedge 218: Longer than /24s

Most providers will only accept a /24 or shorter IPv4 route because routers have always had limited amounts of forwarding table space. In fact, many hardware and software IPv4 forwarding implementations are optimized for a /24 or shorter prefix length. Justin Wilson joins Tom Ammon and Russ White to discuss why the DFZ might need to be expanded to longer prefix lengths, and the tradeoffs involved in doing so.
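To get a feel for the scale involved (a back-of-the-envelope sketch of my own, not figures from the episode), each additional bit of accepted prefix length doubles the number of distinct prefixes that could, in principle, appear in a forwarding table:

```python
# Illustration: how the space of possible IPv4 prefixes grows as the
# DFZ accepts longer prefix lengths. Real table growth depends on what
# operators actually advertise, not this theoretical ceiling.

def max_prefixes(prefix_len: int) -> int:
    """Number of distinct IPv4 prefixes of exactly this length."""
    return 2 ** prefix_len

for plen in (24, 25, 26, 27, 28):
    print(f"/{plen}: up to {max_prefixes(plen):,} distinct prefixes")
```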

download

AI Assistants

I have written elsewhere about the danger of AI assistants leading to mediocrity. Humans tend to rely on authority figures rather strongly (see Obedience to Authority by Stanley Milgram as one example), and we often treat “the computer” as an authority figure.

The problem, of course, is that Large Language Models (and AI of all kinds) are mostly pattern-matching machines, or Chinese Rooms. A pattern-matching machine can be pretty effective at many interesting things, but it will always be, in essence, a summary of "what a lot of people think." If you choose the right people to summarize, you might get close to the truth. Finding the right people to summarize, however, is beyond the powers of a pattern-matching machine.

Just because many “experts” say the same thing does not mean the thing is true, valid, or useful.

AI assistants can make people more productive, at least in terms of sheer output. Someone using an AI assistant will write more words per minute than someone who is not. Someone using an AI assistant will write more code daily than someone who is not.

But is it just more, or is it better?

Measuring the pull toward mediocrity that comes from using AI systems, even as assistants, is difficult. We have the example of drivers using a GPS who never really learn how to get anywhere (and probably lose any larger sense of geography), but these effects are hard to measure.

However, a recent research paper on programming and security has shown at least one place where this effect can be measured. Noting that most kinds of social research are problematic (they are hard to replicate, it’s hard to infer valid results accurately, etc.), this one seems well set up and executed, so I’m inclined to put at least some trust in the results.

The researchers asked programmers worldwide to write software to perform six different tasks. They constructed a control group that did not use AI assistants and a test group that did.

The result? In almost every case, participants using the AI assistant wrote much less secure code, including mistakes in building encryption functions, creating a sandbox, allowing SQL injection attacks, local pointers, and integer overflows. Participants made about the same number of mistakes in randomness—a problem not many programmers have taken the time to study—and fewer mistakes in buffer overflows.
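As one illustration of the kind of mistake the study flags (my own sketch, not code from the paper), compare a query built by string concatenation with a parameterized one:

```python
# Demonstrates the SQL injection class of error using an in-memory
# SQLite database. The table and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # Vulnerable pattern: attacker-controlled input is concatenated
    # directly into the SQL text, so it can rewrite the query.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name: str):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injection string turns the WHERE clause into a tautology:
print(lookup_unsafe("' OR '1'='1"))  # leaks every row: [('s3cret',)]
print(lookup_safe("' OR '1'='1"))    # returns nothing: []
```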

It is possible, of course, for companies to create programming-specific AI assistants that might resolve these problems. Domain-specific AI assistants will always be more accurate and useful than general-purpose assistants.

Relying on AI assistants improves productivity but also seems to create mediocre results. In many cases, mediocre results will be “good enough.”

But what about when “good enough” isn’t … good enough?

Humans are creatures of habit. We do what we practice. If you want to become a better coder, you need to practice coding—and remember that practice does not make perfect. Perfect practice makes perfect.


Hedge 216: Automation Success Stories

One thing we often hear about automation is that it's hard because there are so many different interfaces. On this episode of the Hedge, Daniel Teycheney joins Ethan Banks and Russ White to discuss how they started from a simple idea and ended up building an automation system that crosses vendor boundaries, within a larger discussion about automation and APIs.

download

Hedge 215: Old Engineering Books (3)

Reading people from the past can sometimes show us where today's blind spots are, but sometimes we just find the blind spots of the people who lived then. In this episode of the Hedge, Tom, Eyvonne, and Russ finish going through a selection of quotes from an engineering book published in 1911. This time, we find some things to agree with, but also some to disagree with.