
You can program a concept of truth into them, or maybe punish them for making mistakes instead of just rewarding them for replicating text. Nobody knows how to do that in a way that gets intelligent results today, but we do know how to code things that output or check truths in other contexts; Wolfram Alpha, for example, can solve tons of problems and isn't wrong.
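
To make "punish it for mistakes instead of rewarding replication" concrete, here is a minimal sketch in Python. It only works because the claims are checked against a closed-world knowledge base (the easy, Wolfram-Alpha-like case); `check_claim`, the penalty weight, and the claim format are all hypothetical stand-ins, and building a verifier for open-ended text is exactly the unsolved part.

    # Toy sketch: a language-modeling score combined with a penalty from a
    # truth verifier. `check_claim` is a hypothetical stand-in for an oracle
    # nobody knows how to build for open-ended text.

    def check_claim(claim: str, knowledge_base: dict) -> bool:
        """Verify a claim against a closed-world knowledge base."""
        subject, _, value = claim.partition(" is ")
        return knowledge_base.get(subject) == value

    def training_score(log_likelihood: float, claims: list, kb: dict,
                       penalty_weight: float = 2.0) -> float:
        """Reward fluent text, but subtract a penalty for every claim the
        verifier rejects, instead of only rewarding replication."""
        mistakes = sum(1 for c in claims if not check_claim(c, kb))
        return log_likelihood - penalty_weight * mistakes

    kb = {"the boiling point of water at sea level": "100 C"}
    print(training_score(-3.2, ["the boiling point of water at sea level is 100 C"], kb))
    print(training_score(-3.2, ["the boiling point of water at sea level is 90 C"], kb))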

> (or any concepts at all).

Nobody here said that; that is your interpretation. Not everyone who is skeptical of current LLM architectures' potential as AGI thinks that computers are unable to solve these things. Most people here who argue against LLMs don't think the problems are unsolvable, just not solvable by the current style of LLMs.



> You can program a concept of truth into them, ...

The question was: how do you do that?

> Nobody here said that; that is your interpretation.

What is my interpretation?

I don't think the problems are unsolvable, but we don't know how to solve them now. Thinking you can "just program the truth into them" shows a lack of understanding of the magnitude of the problem.

Personally I'm convinced that we'll never reach any kind of AGI with LLMs. They lack any kind of model of the world that can be reasoned about, and the concept of reasoning itself.


> The question was: how do you do that?

And I answered: we don't know how to do that, which is why we don't do it currently.

> Personally I'm convinced that we'll never reach any kind of AGI with LLMs. They lack any kind of model of the world that can be reasoned about, and the concept of reasoning itself.

Well, for some definition of LLM we probably could, though probably not the way they are architected today. There is nothing stopping a large language model from adding different things to its training steps to enable new kinds of reasoning.

> What is my interpretation?

Well, I read your post as being on the other side. I believe it is possible to make a model that can reason about truthiness, but I don't think current-style LLMs will lead there. I don't know exactly what will take us there, but I wouldn't rule out an alternate way to train LLMs that looks more like how we teach students in school.
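
A "teach like school" loop might look like the curriculum-style sketch below: train on easy material first and only advance when the model passes a test. The lesson ordering, `train_on`, and `passes_exam` are all hypothetical placeholders meant to illustrate "teach, test, then promote", not a real training recipe.

    # Hypothetical curriculum loop: promote the model to the next "grade"
    # only once it passes an exam on the current lesson. `train_on` and
    # `passes_exam` are placeholders, not real APIs.

    curriculum = [
        ("arithmetic", ["2+2=4", "3*5=15"]),
        ("basic facts", ["Water boils at 100 C at sea level."]),
        ("multi-step reasoning", ["If A implies B and B implies C, then A implies C."]),
    ]

    def train_on(model: dict, examples: list) -> None:
        model.setdefault("seen", []).extend(examples)   # stand-in for a gradient step

    def passes_exam(model: dict, examples: list) -> bool:
        return all(e in model["seen"] for e in examples)  # stand-in for evaluation

    model = {}
    for lesson, examples in curriculum:
        while True:
            train_on(model, examples)
            if passes_exam(model, examples):
                print(f"passed: {lesson}")
                break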




