I can attest to that. I was using Gemini to help with some spherical geometry that I just couldn't figure out myself. This was for an engineering tool to define and avoid attitude deadzones on a platform that can rotate arbitrarily.
About 75% of the time the code snippets it provided did what it said they did. But the other 25% was killer. Luckily I made a visualization system and was able to see when it made mistakes, but I think if I had tried to vibe code this months ago I'd still be trying.
(These were things like "how can I detect whether an arbitrary small-circle arc on a unit sphere intersects a circle of arbitrary size projected onto the surface of the sphere?" With the right MATLAB setup this was easy to visualize and check, but I'm quite convinced it would have taken me far longer to work out the geometry and derive the equations myself than it took me to complete the whole tool.)
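For reference, here's one way that check can be posed analytically. This is my own minimal sketch in Python, not the code Gemini produced, and the arc representation (axis, angular radius, a reference direction, and a parameter range) is an assumption on my part. The idea: a point on the arc is p(t) = cos(r_a)·a + sin(r_a)·(cos(t)·u + sin(t)·v), so its dot product with the circle's axis is K + M·cos(t − φ), which you can solve directly against cos(ρ).

```python
import numpy as np

def arc_intersects_circle(a, r_a, u_ref, t0, t1, c, rho, eps=1e-12):
    """Does a small-circle arc intersect a small circle, on the unit sphere?

    Arc: axis `a`, angular radius `r_a`, swept over t in [t0, t1], with t
    measured from `u_ref` (projected into the plane perpendicular to `a`).
    Circle: all points p with dot(p, c) = cos(rho).
    """
    a = a / np.linalg.norm(a)
    c = c / np.linalg.norm(c)
    u = u_ref - np.dot(u_ref, a) * a          # in-plane reference direction
    u = u / np.linalg.norm(u)
    v = np.cross(a, u)
    # dot(p(t), c) = K + cu*cos(t) + cv*sin(t) = K + M*cos(t - phi)
    K = np.cos(r_a) * np.dot(a, c)
    cu = np.sin(r_a) * np.dot(u, c)
    cv = np.sin(r_a) * np.dot(v, c)
    M = np.hypot(cu, cv)
    phi = np.arctan2(cv, cu)
    rhs = np.cos(rho) - K
    if M < eps:                                # circles share an axis:
        return abs(rhs) < eps                  # coincident or disjoint
    x = rhs / M
    if abs(x) > 1.0:
        return False                           # the full circles never meet
    dt = np.arccos(np.clip(x, -1.0, 1.0))
    for t in (phi + dt, phi - dt):             # up to two crossing points
        t_wrapped = t0 + (t - t0) % (2.0 * np.pi)
        if t_wrapped <= t1:                    # does a crossing lie on the arc?
            return True
    return False

# Quarter of the equator vs. a 0.3 rad circle around the +x pole: they meet.
z, x = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
print(arc_intersects_circle(z, np.pi / 2, x, 0.0, np.pi / 2, x, 0.3))  # True
```

Exactly the kind of thing that's trivial to get subtly wrong (wrapping, tangency, the degenerate coaxial case) and where a visualization to check against pays for itself.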
One of my standard coding tests for LLMs is a spherical geometry problem: a near-triangle with all three corners being 90 degrees.
Until GPT-5, no model got it right; they only operated in the space of a Euclidean projection. Perhaps notably, while GPT-5 did get it right, it did so by writing and running a Python script that imported a suitable library, not with its own world model.
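For anyone who wants to see why this triangle exists: the canonical example is the octant triangle, one vertex on each positive coordinate axis. A quick numpy check (my own sketch of a verification, not the actual test prompt):

```python
import numpy as np

# Vertices of the octant triangle: one on each positive coordinate axis.
A, B, C = np.eye(3)

def corner_angle(p, q, r):
    """Angle at vertex p of spherical triangle pqr: the angle between the
    great-circle tangent directions from p toward q and toward r."""
    tq = q - np.dot(q, p) * p   # project q into the tangent plane at p
    tr = r - np.dot(r, p) * p
    cosang = np.dot(tq, tr) / (np.linalg.norm(tq) * np.linalg.norm(tr))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

for p, q, r in [(A, B, C), (B, C, A), (C, A, B)]:
    print(corner_angle(p, q, r))   # each prints 90.0

# Angle sum is 270 degrees; the 90-degree spherical excess equals the
# area (pi/2 steradians, one eighth of the sphere). In a flat Euclidean
# projection the angles must sum to 180, which is why models reasoning
# in that space insist the triangle is impossible.
```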