AI Assistants
I have written elsewhere about the danger of AI assistants leading to mediocrity. Humans tend to rely on authority figures rather strongly (see Obedience to Authority by Stanley Milgram as one example), and we often treat “the computer” as an authority figure.
The problem, of course, is that Large Language Models (and AI of all kinds) are mostly pattern-matching machines, or Chinese Rooms. A pattern-matching machine can be quite effective at many interesting things, but it will always be, in essence, a summary of “what a lot of people think.” If you choose the right people to summarize, you might get close to the truth. Finding the right people to summarize, however, is beyond the powers of a pattern-matching machine.
Just because many “experts” say the same thing does not mean the thing is true, valid, or useful.
AI assistants can make people more productive, at least in terms of sheer output: someone using an AI assistant will write more words per minute, and more code per day, than someone who is not using one.
But is it just more, or is it better?
Measuring the mediocratic effect of using AI systems, even as assistants, is difficult. We have the example of drivers who rely on GPS and never really learn how to get anywhere (probably losing any larger sense of geography along the way), but effects like these are hard to quantify.
However, a recent research paper on programming and security shows at least one place where this effect can be measured. Most social research is problematic (hard to replicate, hard to draw valid inferences from, and so on), but this study seems well designed and executed, so I’m inclined to put at least some trust in its results.
The researchers asked programmers worldwide to write software to perform six different tasks. They constructed a control group that did not use AI assistants and a test group that did.
The result? In almost every case, participants using the AI assistant wrote much less secure code: they made mistakes in building encryption functions and sandboxes, and wrote code vulnerable to SQL injection, mishandled local pointers, and integer overflows. Participants made about the same number of mistakes involving randomness (a problem few programmers have taken the time to study) and fewer mistakes involving buffer overflows.
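To make one of those categories concrete, here is a minimal Python sketch of the kind of SQL injection flaw the researchers were counting. The table and queries here are hypothetical, not drawn from the study; the point is how small the difference between the insecure and secure versions is.

```python
# Illustrative only: a hypothetical lookup, not code from the study.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so an input like "' OR '1'='1" rewrites the query and matches every row.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Safer: a parameterized query lets the database driver handle escaping,
    # so the same malicious string is treated as a literal value.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # [('alice', 'admin')] -- injected
print(find_user_safe("' OR '1'='1"))    # [] -- input matched as a literal
```

The two functions differ by only a line or two, which is exactly why an assistant summarizing “what a lot of people write” can so easily suggest the insecure version.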
It is possible, of course, for companies to create programming-specific AI assistants that might resolve these problems; a domain-specific assistant will always be more accurate and useful than a general-purpose one.
Relying on AI assistants improves productivity but also seems to create mediocre results. In many cases, mediocre results will be “good enough.”
But what about when “good enough” isn’t … good enough?
Humans are creatures of habit. We do what we practice. If you want to become a better coder, you need to practice coding—and remember that practice does not make perfect. Perfect practice makes perfect.