They aren't saying they're the same; I'm not sure how you got that interpretation. It's very clear they're highlighting the hypocrisy of claiming to be against automating away aspects of programming while relying on tools that do exactly that for you - only being OK with it as long as those tools aren't called "AI".
The crux of why this is a bad analogy is that everyone talking about "automating" things with LLMs is misusing the word "automation". A machine can automate a repetitive manual task. A computer can automate the operation of machinery. A machine instruction set is an abstraction on top of circuitry that automates away the labor of extrapolating the logic physically executed by that circuitry into human-comprehensible routines. In the same way, a programming language implementation (e.g. a compiler) can somewhat automate programming, in the sense that it describes the same thing at a higher level of abstraction, saving labor while keeping determinism.

What do these things have in common? We can reliably make them approach deterministic behavior - in the case of compilers, completely, reliably, and transparently so. Just because you haven't bothered to read what a compiler is doing doesn't mean someone can't verify what it's doing. Physical machines are less reliable, but we have reliable ways to test them, reliable error margins, reliable failure modes, reliable variance. When you sit on a stack of abstractions - a programming language on top of a compiler on top of transistors on top of a machine - an error at the top of that stack has far-reaching implications.

A tool that probabilistically generates code is not automation. We have no guarantees about how and when it will get things wrong, how often that will happen, or what kinds of things it will get wrong. We have no way to audit its results that generalizes to every problem we hand it. We have no way to reliably measure improvement in consistency, let alone reliably shrink that margin of error. The idea that this is automation at all is nonsense.
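To make the contrast concrete, here's a minimal sketch (assuming gcc is on the PATH; the file name and the toy "generator" are made up for illustration, not a real LLM). Compiling the same source twice yields byte-identical output you can hash and verify; a sampling-based generator gives a different answer from run to run, with nothing comparable to diff against:

    import hashlib
    import random
    import subprocess

    SOURCE = "hello.c"  # illustrative file name
    with open(SOURCE, "w") as f:
        f.write('#include <stdio.h>\nint main(void) { puts("hi"); return 0; }\n')

    def compile_and_hash(out_path):
        # Same source, same flags: the compiler emits the same bytes every time
        # (reproducible-build caveats like embedded timestamps aside).
        subprocess.run(["gcc", "-O2", "-o", out_path, SOURCE], check=True)
        with open(out_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def sample_generator(prompt):
        # Stand-in for a probabilistic code generator: same input, varying output.
        return prompt + random.choice(
            [" return a + b; }", " return b + a; }", " return a - b; }  /* oops */"])

    print(compile_and_hash("a.out") == compile_and_hash("b.out"))  # True: auditable, repeatable
    print(sample_generator("int add(int a, int b) {"))             # differs between runs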
How do you figure? People were making directly analogous arguments about compilers back in the day. (Not trying to argue that they are 'the same', but there is definitely a spectrum of code-generation methods, with widely varying kinds of guarantees, suiting a widely varying range of use cases.)
I get the point that they involve different magnitudes of the unknown, but the analogy is still pretty good when it comes to the median programmer, who has no idea what goes on inside either one. And if you argue that compilers are ultimately deterministic, that argument technically holds for an LLM as well: the model itself is a deterministic function of its weights and input, and the randomness only enters at the sampling step.
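A toy sketch of that point (not a real LLM, just a made-up next-token weight table): the scores are a pure function of the input, and the run-to-run variation comes only from sampling, so greedy decoding or a pinned seed makes the same prompt produce the same output every time:

    import random

    # Toy "model": a fixed next-token weight table. The scores are a pure function
    # of the prompt; all run-to-run variation comes from the sampling step below.
    NEXT_TOKEN_WEIGHTS = {
        "int add(int a, int b) {": {
            " return a + b; }": 0.7,
            " return b + a; }": 0.2,
            " return a - b; }": 0.1,
        }
    }

    def generate(prompt, temperature=1.0, seed=None):
        weights = NEXT_TOKEN_WEIGHTS[prompt]
        if temperature == 0:
            # Greedy decoding: always take the highest-weight continuation.
            return prompt + max(weights, key=weights.get)
        rng = random.Random(seed)
        return prompt + rng.choices(list(weights), weights=list(weights.values()))[0]

    p = "int add(int a, int b) {"
    print(generate(p, temperature=0) == generate(p, temperature=0))  # True: greedy is repeatable
    print(generate(p, seed=42) == generate(p, seed=42))              # True: pinned seed is repeatable
    print(generate(p))                                               # varies from run to run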
The biggest difference to me is that we have humans who claim they can explain why compilers work the way they do. But I might as well trust someone who says the same about LLMs, because honestly I have no way to verify whether they speak the truth. So I am already offloading a lot of the burden of proof about the systems I work on to others. And why does this "other" need to be a human?
This is like saying “I don’t understand how airplanes fly, so I’ll happily board an airplane designed by an LLM. The reality is determined by how much I know about it.”
No, the other way around. I am saying it is not a smart take to say "a safe airplane cannot be built if LLMs were used in the process in any way, because reasons". The safety of the airplane (or, more generally, the outcome of any venture) can be measured in ways other than leaning on some rule that you cannot use an LLM for help at any stage because LLMs are not always correct.