Hacker News

Turns out Markov chains with a large context can do a lot, and yet no one has figured out why LLMs cannot solve Sudoku puzzles. Why do you think that is, if the goalposts have moved so much?


Perhaps because intelligence is multi-faceted, and the aspect required for Sudoku puzzles is not modelled well enough by an LLM-style backend.
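To make the contrast concrete (my own illustration, not something the parent stated): Sudoku is a constraint-satisfaction problem, conventionally solved by backtracking search, which tries a value, recurses, and undoes the choice on failure. That try-and-undo loop is a different kind of computation from a single forward pass of next-token prediction. A minimal sketch:

```python
def valid(grid, r, c, v):
    """Check whether value v can legally be placed at (r, c)."""
    if v in grid[r]:                                  # row constraint
        return False
    if any(grid[i][c] == v for i in range(9)):        # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)               # 3x3 box constraint
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill empty cells (0) in place by backtracking; return True if solved."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo the guess and try the next value
                return False            # no value fits here: backtrack
    return True                          # no empty cells left
```

Whether an LLM can emulate this kind of depth-first search in-context is exactly the open question being debated above.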


Perhaps.


“Perhaps”? Are you suggesting that intelligence is not multi-faceted? What exactly did the user you’re replying to say that is questionable?



