You are judging this based on what the LLM outputs, not on its internals. When we peer into its internals, it seems that LLMs actually have a pretty good representation of what they do and don't know; this just isn't reflected in the output because the relevant information is lost in future context.
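To make the "peer into its internals" part concrete, here is a minimal sketch of the kind of probing experiment that claim usually rests on: extract a hidden state from a middle layer and fit a linear probe to predict whether the model knows the answer. The model choice ("gpt2"), the tiny toy dataset, the labels, and the layer index are all illustrative assumptions, not a specific paper's setup.

  # Sketch: linear probe on hidden states for "does the model know this?"
  # All data and hyperparameters below are placeholders.
  import torch
  from sklearn.linear_model import LogisticRegression
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
  model.eval()

  # Hypothetical labels: 1 = model answers correctly, 0 = it does not.
  # In a real experiment these would come from grading sampled outputs.
  questions = [
      "The capital of France is",
      "The capital of Australia is",
      "The chemical symbol for gold is",
      "The year the Berlin Wall fell was",
  ]
  labels = [1, 0, 1, 0]

  def hidden_state(prompt, layer=6):
      """Last-token hidden state at an (arbitrarily chosen) middle layer."""
      inputs = tokenizer(prompt, return_tensors="pt")
      with torch.no_grad():
          out = model(**inputs)
      return out.hidden_states[layer][0, -1].numpy()

  X = [hidden_state(q) for q in questions]
  probe = LogisticRegression(max_iter=1000).fit(X, labels)

  # If a probe like this generalizes to held-out questions, the
  # "do I know this?" signal exists internally even when the sampled
  # text never expresses it.
  print(probe.predict([hidden_state("The capital of Japan is")]))

The point of the sketch is only that the uncertainty signal is linearly readable from activations, which is separate from whether the decoding process ever surfaces it in the output.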
