
A conscious system is aware of itself. ChatGPT is aware of itself; that's the first thing they tell it in the system prompt.


ChatGPT is not aware. It has no self.

It produces token strings that are statistically likely continuations of a given prompt, based on the text it has been fed. But the reason it gets things wrong is not the same reason we get things wrong: the probabilities matched, but the actual meaning did not.
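
Concretely, each generation step is just a weighted draw over the vocabulary. A minimal sketch in Python, where the four-token vocabulary and the logits are made up stand-ins for real model outputs:

    import math, random

    # Toy next-token step: the model assigns a score (logit) to every
    # token in its vocabulary given the prompt so far, and the sampler
    # picks one in proportion to its softmax probability.
    vocab  = ["Paris", "London", "banana", "the"]
    logits = [4.2, 2.1, 0.3, 1.0]   # hypothetical model outputs

    exps  = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]   # softmax: statistical likelihood

    token = random.choices(vocab, weights=probs, k=1)[0]
    print(dict(zip(vocab, probs)), "->", token)

Nothing in that loop checks whether the sampled token is true; it only checks whether it is likely.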

It sounds plausible that RAG stands for Recursively Assembled Grammar (in a discussion about LLMs), only it doesn't. The LLM even generated text explaining RAG as a recursive grammar applied in the context of LLM usage, but RAG actually stands for Retrieval-Augmented Generation. When I pointed this out, the system output "Oh, of course, that is also a thing and it means..." But the system had no awareness. Not of the meaning of the text it was producing, and not of any sort of self.
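
For what it's worth, actual RAG is a simple pattern: retrieve relevant text first, then hand it to the model alongside the question. A toy sketch, where the corpus, the keyword-overlap retriever, and the generate() stub are all hypothetical stand-ins:

    # Minimal sketch of Retrieval-Augmented Generation.
    corpus = {
        "rag": "RAG stands for Retrieval-Augmented Generation.",
        "llm": "LLMs predict the next token given a context window.",
    }

    def retrieve(query):
        # Naive keyword overlap; real systems use embedding similarity.
        return max(corpus.values(),
                   key=lambda doc: len(set(query.lower().split())
                                       & set(doc.lower().split())))

    def generate(prompt):
        return f"<model completion for: {prompt!r}>"  # stand-in for an LLM call

    question = "What does RAG stand for?"
    context  = retrieve(question)
    print(generate(f"Context: {context}\n\nQuestion: {question}"))

The retrieved context only biases the token probabilities; it doesn't give the model any understanding of what the acronym means.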


On an Etch-a-Sketch I can write that it's aware of itself... that doesn't mean it actually is.


You probably claim to be aware too.



