
> Have LLMs learned to say "I don't know" yet?

Can they fundamentally do that, given the current technology?

Architecturally, they have no concept of "not knowing." They can say "I don't know," but that only means it was the most likely continuation given the training data.
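
A toy sketch of why that is (hypothetical candidate strings and numbers, not any real model's output): decoding just turns scores into probabilities and picks the most likely continuation, with no step that consults whether the model actually "knows" anything.

    import math

    # Toy sketch, not a real model: the decoder only produces
    # unnormalized scores (logits) over candidate continuations.
    # The candidates and numbers below are made up for illustration.
    logits = {"e4": 2.1, "Kxe5": 1.7, "I don't know": 0.3}

    # Softmax turns logits into probabilities; decoding then samples or
    # takes the argmax. Nothing here checks a rulebook or any internal
    # notion of "knowing" -- "I don't know" is just another continuation
    # competing on probability mass.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    choice = max(probs, key=probs.get)  # greedy decoding
    print(probs)   # roughly {'e4': 0.54, 'Kxe5': 0.37, "I don't know": 0.09}
    print(choice)  # 'e4' -- emitted even if e4 happens to be illegal on the board
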

A perfect example: an LLM citing chess rules and still making an illegal move: https://garymarcus.substack.com/p/generative-ais-crippling-a...

Heck, it can even say the move would have been illegal. And it would still make it.



My original point was that if the current technology does not allow them to sincerely say "I don't know, let me check it out," then they are not AGI.

I am aware that the LLM companies are starting to integrate this quality -- and I strongly approve. But again, being self-critical, and with it a degree of self-awareness, is one of the qualities I would ascribe to an AGI.



