If we ignore human readability (as the author suggests), the answer is context. The token count explodes as you fall down the abstraction rabbit hole, and the more context you burn, the worse the reasoning gets.
In turn, this means expressiveness matters to LLMs just as much as it matters to us. A functional map/reduce can be far simpler to write and edit than an imperative loop. Type safety and borrow checking free an LLM from having to reason about types and data races. Interpreted languages let an LLM explore and iterate rapidly. Good luck live-reloading a GUI in C.
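To make that concrete, here's a minimal sketch (Rust, purely for illustration) of how much less a reader, or an LLM, has to hold in context with the functional form:

    // Imperative loop: mutable state, indices, and intermediate
    // variables all have to be tracked at every step.
    fn sum_of_squares_imperative(xs: &[i64]) -> i64 {
        let mut total = 0;
        for i in 0..xs.len() {
            let sq = xs[i] * xs[i];
            total += sq;
        }
        total
    }

    // Functional map/reduce: the same intent in one expression,
    // with no indices or mutable accumulator to reason about.
    fn sum_of_squares_functional(xs: &[i64]) -> i64 {
        xs.iter().map(|x| x * x).sum()
    }

    fn main() {
        let xs: [i64; 4] = [1, 2, 3, 4];
        assert_eq!(sum_of_squares_imperative(&xs), sum_of_squares_functional(&xs));
        println!("{}", sum_of_squares_functional(&xs)); // 30
    }

Same behavior, but the functional version carries no loop bookkeeping that has to be kept consistent across tokens.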
And if you were to force the LLM to do all that in C, at some point it might decide to write its own language.
Or, for even greater hell, why not binary?