This aligns with my experience. I've seen LLMs produce "code" that the person requesting it is unable to understand or debug. It usually almost works. It's possible the person writing the prompt didn't actually understand the problem, so they got a half-baked solution as a result. Either way, they end up needing a human with more experience to figure it out.
Tbh, if I don't understand generated code perfectly, meaning it uses something I don't quite know, I usually spend about the same amount of time understanding the generated code as I would writing it myself.