
> I'm saying the model is "intelligent enough" to solve a maze.

And I don't agree. I think that at best the model is "intelligent enough to use a tool that can solve mazes" (which is an entirely different thing), and at worst it is no different from a circus horse that "can do math". Being able to repeat more tricks, and to select which trick to execute based on the expected reward, is not a measure of intelligence.

I would encourage you to read the code it produced. It's not just a simple "solve maze" function. There are plenty of "smart" choices in there, made to achieve the goal given my very vague instructions, and as a result of the model analyzing why it failed at first and then adjusting.

I don't know how else to get my point across: there is nothing "smart" about an automaton that has to resort to an implementation of the A* algorithm to "solve" a problem that any 4-year-old child can solve just by looking at it.
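To be concrete about what "resorting to A*" means: the whole trick is a few dozen lines of textbook graph search. A minimal sketch in Python follows; the grid encoding, coordinates, and function name are my own illustration, not the code from the exchange above:

    import heapq

    def solve_maze(grid, start, goal):
        """A* over a grid maze (0 = open, 1 = wall); returns a shortest
        path from start to goal as a list of (row, col), or None."""
        def h(cell):
            # Manhattan distance: admissible on a 4-connected, unit-cost grid
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        rows, cols = len(grid), len(grid[0])
        open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
        came_from = {}
        best_g = {start: 0}
        while open_heap:
            _, g, cell = heapq.heappop(open_heap)
            if cell == goal:
                path = [cell]  # walk the predecessor chain back to start
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            if g > best_g.get(cell, float("inf")):
                continue  # stale heap entry; a cheaper route was already found
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < best_g.get(nxt, float("inf")):
                        best_g[nxt] = ng
                        came_from[nxt] = cell
                        heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
        return None  # goal unreachable

    maze = [[0, 1, 0, 0],
            [0, 1, 0, 1],
            [0, 0, 0, 1],
            [1, 1, 0, 0]]
    print(solve_maze(maze, (0, 0), (3, 3)))
    # -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (3, 3)]

That's all the "solving" amounts to: a priority queue and a distance estimate, mechanically expanded. A 4-year-old does none of this explicitly.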

Where you are seeing "intelligence" and "an existential crisis", I see "a huge pattern-matching system with an ever-increasing vocabulary".

LLMs are useful. They will certainly cause a lot of disruption by automating all kinds of white-collar work. They will definitely lead to all sorts of economic and social disruption (good and bad). I'm not dismissing them as just another fad... but none of that depends on LLMs being "intelligent" in any way.
