Perhaps this OpenAI paper would be interesting then (published September 4th):
https://arxiv.org/pdf/2509.04664
Hallucination is still absolutely an issue, and it doesn't go away by reframing it as user error: saying the user didn't know what they were doing, didn't know what they needed from the LLM, or couldn't describe it well enough.