Agreed, although LLMs definitely qualify as enabling developers, compared to the social media, Steam, consoles, and other distractions of today.
The Internet itself is full of distractions. My younger self spent a crazy amount of time on IRC. So it's no different from spending time on, say, Discord today.
LLMs have pretty much a direct parallel with Google: the quality of the response depends heavily on the quality of the prompt, just as it did with the query. If anything, it's the overwhelming nature of LLMs that might be the problem. Back in the day, if you had, say, library access, the problem was knowing what to look for. With LLMs, discoverability explodes.
As for LLMs as auto-complete, there is an argument to be made that typing a lot reinforces knowledge in the human brain, much like handwriting does. That is getting lost, but in exchange for productivity gains.
Watching my juniors constantly fight the nonsense auto completion suggestions their LLM editor of choice put in front of them, or worse watching them accept it and proceed to get entirely lost in the sauce, I’m not entirely convinced that the autocompletion part of it is the best one.
Tools like Claude Code with ask/plan mode seem better in my experience, though I absolutely do wonder whether the lack of typing causes a lack of memory formation.
A rule I set myself a long time ago was to never copy paste code from stack overflow or similar websites. I always typed it out again. Slower, but I swear it built the comprehension I have today.
> Watching my juniors constantly fight the nonsense auto completion suggestions their LLM editor of choice put in front of them, or worse watching them accept it and proceed to get entirely lost in the sauce, I’m not entirely convinced that the autocompletion part of it is the best one.
That's not an LLM problem, they'd do the same thing 10 years ago with stack overflow: argue about which answer is best, or trust the answer blindly.
No, it is qualitatively different because it happens in-line and much faster. If it’s not correct (which it seems it usually isn’t), they spend more time removing whatever garbage it autocompleted.
People do it with plain autocomplete as well, so I guess there's not that much of a difference with LLMs. It likely depends on the language, but people who are inexperienced in C++ can over-rely on autocomplete to the point that it looks comical, as you'll see if you ever sit next to one of them helping to debug something.
For sure, but these new tools spit out a lot more and a lot faster, and it’s usually correct “enough” that the compiler won’t yell. It’s been wild to see its suggestions be wrong far more often than they are right, so I wonder how useful they really are at all.
Normal autocomplete plus a coding tool like Claude Code or similar seems far more useful to me.
I spent the first two years or so of my coding career writing PHP in notepad++ and only after that switched to an IDE. I rarely needed to consult the documentation on most of the weird quirks of the language because I'd memorized them.
Nowadays I'm back to a text editor rather than an IDE, though fortunately one with much more creature comforts than n++ at least.
I'm glad I went down that path, though I can't say I'd really recommend it, as things felt a bit simpler back then.
I have the same policy. I do the same thing for example code in the official documentation. I also put in a comment linking to the source if I end up using it. For me, it’s like the RFD says, it’s about taking responsibility for your output. Whether you originated it or not, you’re the reason it’s in the codebase now.
I have worked with a lot of junior engineers, and I’ll take comprehension any day. Developing their comprehension is a huge part of my responsibility to them and to the company. It’s pretty wasteful to take a human being with a functioning brain and ask them to churn out half understood code that works accidentally. I’m going to have to fix that eventually anyway, so why not get ahead of it and have them understand it so they can fix it instead of me?
LLMs are in a strange position: on one end they're the promised driver of most expected economic growth and a tool to improve programmer productivity and skill, while on the other they're merely better than doomscrolling? That comparison undermines the integrity of the argument you're trying to make.