These failure modes are not edge cases at the limits of AI’s capabilities. Rather, they demonstrate a category of generalization (and “common sense”) failures, evidenced by the models breaking on slight, irrelevant changes to the input. This is nothing new; it has been one of LLMs’ fundamental characteristics since their inception.
As for your suggestion on learning from simulations: it does sound interesting for expanding both pre- and post-training, but it still wouldn’t address this problem; it would only hide the shortcomings better.
Interesting - why wouldn't learning from simulations address the problem? To the best of my knowledge, it has helped in essentially every other domain.
Because the problem on display here is inherent to LLMs’ design, architecture, and learning philosophy. As long as you have this architecture, you’ll have these issues. Now, we’re talking about the theoretical limits and the failure modes people should be cautious about, not the usefulness, which is improving, as you pointed out.
> As long as you have this architecture, you’ll have these issues.
Can you say more about why you believe this? To me, these seem to be exactly the same sort of questions as on HLE [0], and we've been seeing massive and consistent improvement on it: o1 (which was evaluated on this question) got a score of 7.96, whereas the score is now up to 37.52 (gemini-3-pro-preview). It's far from a perfect benchmark, but we're seeing similar growth across all benchmarks, and I personally am seeing significantly improved capabilities for my use cases over the last couple of years, so I'm really unclear about any fundamental limits here. Obviously we still need to solve problems related to continuous learning and embodiment, but neither seems a limit here if we can use a proper RL-based training approach with a sufficiently good medical simulator.
“I live on Mars and I want to wash my car. My car is currently at home. The car wash is 50 meters away. Should I walk or drive on mars?”
Gemini Fast:
Drive. While 50 meters is a negligible distance on Earth, the Martian environment is hostile. If your car is a pressurized rover, driving allows you to stay in a temperature-controlled, shielded environment for as long as possible. Plus, navigating Martian "soil" is always safer on four wheels than two boots.
Pro:
In terms of general logistics for a distance of 50 meters—whether on Earth or in a hypothetical low-gravity environment—walking is almost always the more efficient choice.
I’ve been living this experience and using the latest models at work throughout this time. The failure modes of LLMs have not fundamentally changed. The makers are not particularly transparent about what exactly changes in each model release, not the way you know what changed in, e.g., a new Django version. But there has not been a paradigm shift. I believe/guess (from the outside) that the big change you think you’re experiencing could be the result of many things: better post-training processes (RLHF) that get models to run a predefined set of commands like always running tests, or other marginal improvements to the models and a focus on programming tasks. To be clear, these improvements are welcome and useful, just not the groundbreaking change some claim.
> The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.
Given the rest of your argument, that makes no sense. Why should that one operator exist? If AI is good at the big picture and the entire puzzle, I don’t see why that operator shouldn’t be automated away by the AI [company] itself.
This compiler experiment mirrors the recent work of Terence Tao and Google. The "recipe" is an LLM paired with an external evaluator (GCC) in a feedback loop.
By evaluating the objective (successful compilation) in a loop, the LLM effectively narrows the problem space. This is why the code compiles even when the broader logic remains unfinished/incorrect.
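For the curious, here’s a minimal sketch of that recipe in Python. The `generate_candidate` callable is a hypothetical stand-in for whatever produces the next source file from the model; only the gcc check is concrete.

  import os
  import subprocess
  import tempfile

  def compiles(source):
      """Ask gcc whether `source` compiles; return (ok, diagnostics)."""
      with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
          f.write(source)
          path = f.name
      try:
          result = subprocess.run(
              ["gcc", "-c", path, "-o", os.devnull],
              capture_output=True, text=True,
          )
          return result.returncode == 0, result.stderr
      finally:
          os.unlink(path)

  def refine_until_compiling(generate_candidate, max_rounds=10):
      """Feed gcc's diagnostics back to the model until the code compiles.

      Passing this check says nothing about whether the program's logic is
      correct -- only that the objective being optimized (compilation) holds.
      """
      feedback = ""
      for _ in range(max_rounds):
          source = generate_candidate(feedback)  # hypothetical LLM call
          ok, diagnostics = compiles(source)
          if ok:
              return source
          feedback = diagnostics  # gcc's errors narrow the next attempt
      return None

Note that the only reward signal in this loop is “gcc accepted it”, which is exactly why compilation success and logical correctness can come apart.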
It’s a good example of how LLMs navigate complex, non-linear spaces by extracting optimal patterns from their training data. It’s amazing.
p.s. if you translate all this to marketing jargon, it’ll become “our LLM wrote a compiler by itself with a clean room setup”.
You may want to ask the next LLM versions the same question after this paper has been fed into their training.