SECURITY
AI Assistants
I have written elsewhere about the danger of AI assistants leading to mediocrity. Humans tend to rely on authority figures rather strongly (see Obedience to Authority by Stanley Milgram as one example), and we often treat “the computer” as an authority figure.
The problem is, of course, that Large Language Models (and AI of all kinds) are mostly pattern-matching machines, or Chinese Rooms. A pattern-matching machine can be pretty effective at many interesting things, but it will always be, in essence, a summary of “what a lot of people think.” If you choose the right people to summarize, you might get close to the truth. Finding the right people to summarize, however, is beyond the powers of a pattern-matching machine.
Just because many “experts” say the same thing does not mean the thing is true, valid, or useful.
AI assistants can make people more productive, at least in terms of sheer output. Someone using an AI assistant will write more words per minute than someone who is not. Someone using an AI assistant will write more code daily than someone who is not.
But is it just more, or is it better?
Measuring the mediocratic effect of using AI systems, even as an assistant, is difficult. We have the example of drivers who rely on GPS never really learning how to get anywhere on their own (and probably losing any larger sense of geography), but effects like these are hard to quantify.
However, a recent research paper on programming and security has shown at least one place where this effect can be measured. While most kinds of social research are problematic (results are hard to replicate, valid inferences are hard to draw, etc.), this study seems well designed and executed, so I’m inclined to put at least some trust in the results.
The researchers asked programmers worldwide to write software to perform six different tasks. They constructed a control group that did not use AI assistants and a test group that did.
The result? In almost every case, participants using the AI assistant wrote much less secure code, making mistakes in building encryption functions, creating a sandbox, preventing SQL injection attacks, handling local pointers, and avoiding integer overflows. Participants made about the same number of mistakes in randomness (a problem not many programmers have taken the time to study) and fewer mistakes in buffer overflows.
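Of the mistake classes the study measured, SQL injection is the easiest to show in a few lines. Here is a minimal sketch in Python (the table, column, and function names are hypothetical, not drawn from the study): the first lookup splices user input directly into the query string, which is the mistake, while the second binds it as a parameter.

```python
import sqlite3

# A throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name):
    # Vulnerable: the input becomes part of the SQL text, so a value
    # like "' OR '1'='1" rewrites the WHERE clause and returns every row.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_unsafe("' OR '1'='1"))  # [('admin',)] -- leaks all rows
print(lookup_safe("' OR '1'='1"))    # [] -- no such user
```

The safe version takes no more code than the vulnerable one; the difference is purely a matter of habit, which circles back to the point about practice below.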
It is possible, of course, for companies to create programming-specific AI assistants that might resolve these problems. Domain-specific AI assistants will always be more accurate and useful than general-purpose assistants.
Relying on AI assistants improves productivity but also seems to create mediocre results. In many cases, mediocre results will be “good enough.”
But what about when “good enough” isn’t … good enough?
Humans are creatures of habit. We do what we practice. If you want to become a better coder, you need to practice coding—and remember that practice does not make perfect. Perfect practice makes perfect.
Hedge 178: Defined Trust Transport with Kathleen Nichols
The Internet of Things is still “out there.” Operators and individuals are deploying millions of Internet-connected devices every year. IoT, however, poses some serious security challenges. Devices can be conscripted into botnets for DDoS attacks, attackers can take over appliances, etc. While previous security efforts have largely focused on stronger passwords and keeping software updated, Kathleen Nichols is working on a new solution: defined trust transport in limited domains.
Join us for this episode of the Hedge with Kathleen to talk about the problems of trusted transport, the work she’s putting into finding solutions, and potential use cases beyond IoT.
You can find Kathleen at Pollere, LLC, and her slides on DeftT here.
Chatbot Attack Vectors
My monthly post is up over at Packet Pushers.
Hedge 161: Going Dark with Geoff Huston
Encrypt everything! Now! We don’t often do well with absolutes like this in the engineering world; we tend to focus on “get it done” without thinking very much about the side effects or unintended consequences. What are the unintended consequences of encrypting all traffic all the time? Geoff Huston joins Tom Ammon and Russ White to discuss the problems with going dark.
Hedge 158: The State of DDoS with Roland Dobbins
DDoS attacks remain a persistent threat to organizations of all sizes and in all markets. Roland Dobbins joins Tom Ammon and Russ White to discuss current trends in DDoS attacks, including their increasing scope and scale, as well as the shifting methods attackers are using.
Hedge 153: Security Perceptions and Multicloud Roundtable
Tom, Eyvonne, and Russ hang out at the hedge on this episode. The first topic of discussion is our perception of security: is the way IT professionals treat security and privacy helpful for those who aren’t involved in the IT world? Do we discourage users from taking security seriously by making it so complex and hard to use? Our second topic is whether multicloud is being oversold for the average network operator.
On the ‘net: Privacy and Networking
The final three posts in my series on privacy for infrastructure engineers are up over at Packet Pushers. While privacy might not seem like a big deal to infrastructure folks, it really is an issue we should all be considering and addressing, if for no other reason than that privacy and security are closely related topics. The primary “thing” you’re trying to secure when you think about networking is data, and securing data is, in large part, a matter of protecting privacy.