You can run an LLM locally (and distributed compilation, where the compiler runs in the cloud, is a thing too), so where the tool runs doesn't really distinguish the two.
Likewise, many optimization techniques involve some randomness, whether it's approximating an NP-hard subproblem or using PGO guided by statistical sampling. People might disable those in pursuit of reproducible builds, but no one would claim that enabling those features makes GCC or LLVM no longer a compiler. So nondeterminism isn't really the distinguishing factor either.
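To make that concrete, classic instrumented PGO with GCC is a two-step build. A minimal sketch follows; -fprofile-generate, -fprofile-use, and Clang's -fprofile-sample-use are real flags, but the toy program and file names are made up for illustration:

    /* hot.c: a branch whose bias PGO can learn from a training run. */
    #include <stdio.h>

    int main(void) {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            if (i % 100 == 0)   /* rare path; profile data tells GCC so */
                sum -= i;
            else                /* hot path; GCC lays it out for fall-through */
                sum += i;
        }
        printf("%ld\n", sum);
        return 0;
    }

    /* Step 1: instrumented build, then a training run that writes hot.gcda:
     *   gcc -O2 -fprofile-generate hot.c -o hot && ./hot
     * Step 2: rebuild using the recorded profile:
     *   gcc -O2 -fprofile-use hot.c -o hot
     * Sampling-based PGO (e.g. Clang's -fprofile-sample-use fed by perf
     * samples) replaces the instrumented run with statistical sampling,
     * which is where the nondeterminism mentioned above comes in. */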
My favorite benchmark for LLMs and agents is having them port a medium-complexity library to another programming language. A model that can do that well is capable of real tasks. So far I always have to spend a lot of time fixing errors, and there are often deep issues that aren't obvious until you actually start using the result.
Comments on here often dismiss ports as easy for LLMs, since there's plenty of training data and the tests already exist, making them less complex than real-world tasks.
It's worth noting that many, if not most, games on Steam don't have DRM. You can often just take the .exe and game files out of the install directory and play. Sometimes you need a stub for Steam's client API, but that's usually it.
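Such a stub can be surprisingly small. Here's a minimal sketch in C, assuming the game only touches the three classic Steamworks entry points shown; real games usually call many more, which is what projects like the Goldberg emulator exist to cover:

    /* stub_steam_api.c: fake just enough of the Steamworks C API for a
     * game's startup checks to pass. Build as a drop-in shared library:
     *   gcc -shared -fPIC stub_steam_api.c -o libsteam_api.so
     */
    #include <stdbool.h>
    #include <stdint.h>

    /* Pretend the Steam client is running and initialization succeeded. */
    bool SteamAPI_Init(void) { return true; }

    /* Returning false tells the game no relaunch through the (absent)
     * Steam client is needed, so it keeps running. */
    bool SteamAPI_RestartAppIfNecessary(uint32_t unOwnAppID) {
        (void)unOwnAppID;
        return false;
    }

    /* Nothing to tear down. */
    void SteamAPI_Shutdown(void) { }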
> You don't generally need specific versions of GCC or Clang to build it I'm pretty sure.
These days you need a C11 compiler with support for loads of non-standard GNU extensions. Note that for a very long time you couldn't compile the Linux kernel with Clang, because it lacked some of that GCC-specific behavior.
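Clang has long handled the most common GNU extensions; the historical blockers were things like variable-length arrays in structs, which the kernel side eventually removed. As a taste of the flavor of GNU C the kernel leans on, here's a simplified sketch (not the kernel's actual max() definition):

    /* gnu_ext.c: compiles in gnu11 mode (gcc or clang), but not with
     * strict -std=c11, because it uses two GNU extensions:
     *   - typeof      (only standardized much later, in C23)
     *   - ({ ... })   (statement expressions)
     */
    #include <stdio.h>

    #define max(a, b) ({            \
        typeof(a) _a = (a);         \
        typeof(b) _b = (b);         \
        _a > _b ? _a : _b;          \
    })

    int main(void) {
        printf("%d\n", max(3, 7));   /* prints 7; each arg evaluated once */
        return 0;
    }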
I'm not really sure you can turn around and say "oh, but now we feel differently about the C standard" given how much of it is still non-standard. For instance, I don't believe Intel's C compiler will compile the kernel.