No LLM output is a hallucination: the model is doing token prediction 100% of the time. Throw enough tokens at it and it can follow a coherent, relevant token curve; throw even more at it and that curve may even contain information that is agreed to be factual.
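
A minimal sketch of what "token prediction 100% of the time" looks like in practice, assuming the Hugging Face transformers library and GPT-2 purely as a stand-in model (both are illustrative choices, not anything the comment specifies). The point is that the loop is identical whether the decoded text ends up factual or not: sample one token from the predicted distribution, append it, repeat.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 used only as a small, widely available example model
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits      # scores over the vocabulary at each position
            next_logits = logits[0, -1]           # distribution for the next token only
            probs = torch.softmax(next_logits, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)   # sample one token
            input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Nothing in this loop distinguishes a "correct" continuation from an "incorrect" one; any such judgment is applied by the reader after the fact.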
