Is that interesting? Computers accomplish all sorts of tasks that require thinking from humans... without thinking. Chess engines have been much better than me at chess for a long time, but I can't say there's much thinking involved.
Well, most programming is pattern matching. It might seem novel to those who haven't done it before, but it may well have been done many times previously.
It requires as much thinking as it did for me to copy-paste code I did not understand from Stack Overflow to make a program 15 years ago. The program worked, just about. Similarly, you can generate endless love sonnets just by blindly putting words into a form.
For some reason we naturally anthropomorphise machines without giving it a second thought. But your toaster is still not in love with you.
Like many other human endeavours, producing a computer program does not necessarily require thinking. And judging by the quality of software out there, there are indeed quite a few human programmers who do not think about what they do.
That is indeed the case. It becomes very obvious with lesser-known, vendor-specific scripting languages that don't have much training data available. LLMs try to map them onto the training data they do have and start hallucinating functions and other language constructs that exist only in other languages.
When I tried to use LLMs to create Zabbix templates to monitor network devices, they were utterly useless and made things up all the time. The illusion of thinking lasts only as long as you stay on the happy path of major languages like C, JS, or Python.
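A sketch of the kind of output I mean (reconstructed for illustration, not actual model output, and the YAML layout only approximates the Zabbix 6.x template export format):

    zabbix_export:
      version: '6.0'
      templates:
        - template: 'Network device'
          items:
            - name: 'Interface speed'
              # Plausible-looking but invented: this borrows the net.if.*
              # naming pattern from real agent keys (net.if.in, net.if.out)
              # for a key the agent does not actually provide.
              key: net.if.speed[eth0]

The shape of the file is right, so it looks fine at a glance; the substance is made up, and it falls apart the moment Zabbix tries to resolve the key.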
Yep, seen that myself. If you want to generate code in a language that is heavily represented in the training data (e.g. JS), they do very well. If you want to generate something outside those scenarios, they fail over and over and over.

This is why I think anyone paying a modicum of attention should know they aren't thinking. A human, when confronted with a programming language not in his "training data" (experiences), will go out and read the docs, look up code examples, ask questions of other practitioners, and reason about how to use the language from there. An LLM doesn't do that, because it's not thinking. It's glorified autocomplete. That isn't to say autocomplete is useless (even garden-variety autocomplete can be useful), but it's good to recognize it for what it is.