SKILLS
AI Assistants
I have written elsewhere about the danger of AI assistants leading to mediocrity. Humans tend to rely on authority figures rather strongly (see Obedience to Authority by Stanley Milgram as one example), and we often treat “the computer” as an authority figure.
The problem, of course, is that Large Language Models—and AI of all kinds—are mostly pattern-matching machines, or Chinese Rooms. A pattern-matching machine can be pretty effective at many interesting things, but it will always be, in essence, a summary of “what a lot of people think.” If you choose the right people to summarize, you might get close to the truth. Finding the right people to summarize, however, is beyond the powers of a pattern-matching machine.
Just because many “experts” say the same thing does not mean the thing is true, valid, or useful.
AI assistants can make people more productive, at least in terms of sheer output. Someone using an AI assistant will write more words per minute than someone who is not. Someone using an AI assistant will write more code daily than someone who is not.
But is it just more, or is it better?
Measuring the mediocrity-inducing effect of using AI systems, even as assistants, is difficult. We have the example of drivers who rely on a GPS and never really learn how to get anyplace (and probably lose any larger sense of geography), but these things are hard to measure.
However, a recent research paper on programming and security shows at least one place where this effect can be measured. While most kinds of social research are problematic (hard to replicate, hard to draw valid inferences from, etc.), this study seems well set up and executed, so I’m inclined to put at least some trust in the results.
The researchers asked programmers worldwide to write software to perform six different tasks. They constructed a control group that did not use AI assistants and a test group that did.
The result? In almost every case, participants using the AI assistant wrote much less secure code, including mistakes in building encryption functions, creating a sandbox, preventing SQL injection attacks, handling local pointers, and avoiding integer overflows. Participants made about the same number of mistakes in randomness—a problem not many programmers have taken the time to study—and fewer mistakes in buffer overflows.
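To make the kind of mistake the researchers counted a little more concrete, here is a minimal sketch (not taken from the paper, and using a hypothetical users table) of the difference between building a SQL query through string concatenation and using a parameterized query:

# Minimal sketch (hypothetical table and data) of a SQL injection mistake
# versus the parameterized form; the study itself used different tasks.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_insecure(name):
    # Vulnerable: the input is spliced into the SQL text, so a value like
    # "' OR '1'='1" changes the meaning of the query.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_parameterized(name):
    # Safer: the driver binds the value separately from the SQL text.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_insecure("' OR '1'='1"))       # returns every row in the table
print(lookup_parameterized("' OR '1'='1"))  # returns an empty list

The point is not the specific language or library; it is that the concatenated version looks perfectly reasonable at a glance, which is exactly the kind of code an assistant can hand you with complete confidence.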
It is possible, of course, for companies to create programming-specific AI assistants that might resolve these problems. Domain-specific AI assistants will always be more accurate and useful than general-purpose assistants.
Relying on AI assistants improves productivity but also seems to create mediocre results. In many cases, mediocre results will be “good enough.”
But what about when “good enough” isn’t … good enough?
Humans are creatures of habit. We do what we practice. If you want to become a better coder, you need to practice coding—and remember that practice does not make perfect. Perfect practice makes perfect.
On Writing Complexity
I’ve been on a bit of a writer’s break after finishing the CCST book, but it’s time to rekindle my “thousand words a day” habit. As always, one part of this is thinking about how I write—is there anything I need to change? Tools, perhaps, or style?
What about the grade level complexity of my writing? I’ve never really paid attention to this, but I’m working on contributing regularly to a site that does. So maybe I should.
I tend to write at the tenth- or eleventh-grade level, even when writing “popular material,” like blog posts. The recommended target is around the eighth grade. Is this something I need to change?
It seems the average person considers anything above the eighth-grade reading level “too hard” to read, so they give up. Every reading level calculation I’ve looked at essentially uses word and sentence length as proxies for complexity. Long words and sentences intimidate people.
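As a rough illustration, here is a sketch of the standard Flesch-Kincaid grade level formula, using a crude vowel-group syllable counter rather than a pronunciation dictionary; the calculation really does reduce to counting words, sentences, and syllables:

# Rough sketch of the Flesch-Kincaid grade level calculation; the syllable
# counter is a simple vowel-group heuristic, not a pronunciation dictionary.
import re

def count_syllables(word):
    # Approximate syllables as runs of vowels in the word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(round(fk_grade_level("Long words and sentences intimidate people."), 1))

Nothing in the formula looks at meaning, structure, or argument; it only sees length.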
On the other hand, measuring the reading grade level can seem futile. There are plenty of complex concepts described by one- and two-syllable words. Short sentences can still have lots of meaning.
Further, the reading grade level does not tell you whether a sentence makes sense. A famous politician recently said, “… it’s time for us to do what we have been doing, and that time is every day.” This sentence scores at about a sixth-grade reading level—but saying nothing is still saying nothing, even if you say it at a sixth-grade level.
While reading level complexity might be important, it is more important to say something.
Sometimes, using long words and sentences stops people from paying attention to your words. However, replacing long words and sentences with shorter ones sometimes removes your words’ real meaning (or at least flavor). I am not, at this point, certain how to balance these. I suspect I will have to consider the tradeoff in every situation.
When you write—and if you are doing your job as a network engineer well, you do write—you might want to consider the complexity of your writing. I will use the grade level as “another tool” in my set, which means I’ll be thinking about writing complexity more—but I’m not going to allow it to drive my writing style. If I can reduce the complexity of my writing without losing meaning, I may … sometimes … or I might not. 😊
Looking at the other side of the coin—what about reading grade level from a reader’s point of view? Should we only read easy-to-read things? The answer should be obvious: no.
There is a bit of a feeling that text above a certain reading level is “sheer nonsense.” Again, though, the grade level has nothing to do with the value of the content. Sometimes, saying complex things just requires complex text. Readers (all of us) need to learn to read complex text.
Reading grade level is a good tool in many situations—but it is one tool among many.
Hedge 211: Learning About Learning
How much have you thought about the way you learn—or how to effectively teach beginners? There is a surprising amount of research into how humans learn, and how best to create material to teach them. In this roundtable episode, Tom, Eyvonne, and Russ discuss a recent paper from the Communications of the ACM, 10 Things Software Developers Should Learn about Learning.
Modern Network Troubleshooting
I’ve reformatted and rebuilt my network troubleshooting live training for 2023, and am teaching it on the 26th of January (in three weeks). You can register at Safari Books Online. From the site:
The first way to troubleshoot faster is not to troubleshoot at all, or rather, to build resilient networks. The first section of this class considers the nature of resilience, and how design tradeoffs result in different levels of resilience. The class then moves into a theoretical understanding of failures, how network resilience is measured, and how the Mean Time to Repair (MTTR) relates to human and machine-driven factors. One of these factors is the unintended consequences arising from abstractions, covered in the next section of the class.
The class then moves into troubleshooting proper, examining the half-split formal troubleshooting method and how it can be combined with more intuitive methods. This section also examines how network models can be used to guide the troubleshooting process. The class then covers two examples of troubleshooting reachability problems in a small network, and considers using ChatGPT and other LLMs in the troubleshooting process. A third, more complex example, set in a data center fabric, is then covered.
A short section on proving causation is included, followed by a final example of troubleshooting problems in Internet-level systems.
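For anyone unfamiliar with the term, the half-split method mentioned in the blurb is essentially a binary search over the path between two points, cutting the search space in half with each test. Here is a minimal sketch, assuming a hypothetical probe function (in practice a ping, traceroute, or telemetry query) and assuming reachability fails at one point in the path and stays failed beyond it:

# Minimal sketch of half-split troubleshooting: binary-search an ordered list
# of hops for the first one where a probe fails. The probe and the hop names
# are hypothetical; the sketch assumes failures are contiguous past one point.
def first_failing_hop(hops, probe):
    lo, hi = 0, len(hops) - 1
    first_bad = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(hops[mid]):
            lo = mid + 1            # everything up to mid works; look later
        else:
            first_bad = hops[mid]   # mid fails; the fault is here or earlier
            hi = mid - 1
    return first_bad

path = ["access1", "dist1", "core1", "fw1", "core2", "dist2", "access2"]
reachable = {"access1", "dist1", "core1", "fw1"}
print(first_failing_hop(path, lambda hop: hop in reachable))  # prints core2

With seven hops this saves only a probe or two over walking the path hop by hop, but on a long path or a deep dependency chain the savings add up quickly; the class pairs this formal approach with the more intuitive methods most engineers actually use.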
Hedge 203: Terry Slattery on Network Automation
Terry Slattery joins Tom and Russ to continue the conversation on network automation—and why networks are not as automated as they should be. This is part one of a two-part series; the second part will be published in two weeks as Hedge episode 204.
Hedge 180: Network Operations Survey with Josh S
What has been happening in the world of network automation—and more to the point, what is coming in the future? Josh Stephens from Backbox joins Tom Ammon, Eyvonne Sharp, and Russ White to discuss the current and future network operations and automation landscape.