
LLMs do not have internal reasoning, so the yapping is an essential part of producing a correct answer, insofar as the extra tokens are what complete the computation of it.

Reasoning models mostly work by arranging things so that the yapping happens first and is marked up so the UI can hide it.
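
Roughly what that looks like on the client side, as a sketch rather than any vendor's actual implementation: assume the model wraps its reasoning in <think>...</think> tags (the convention DeepSeek-R1 uses; the markers vary by model), and the UI simply strips that span before display.

    import re

    def split_reasoning(raw_completion: str) -> tuple[str, str]:
        """Separate a marked 'thinking' block from the visible answer.

        Assumes the reasoning is wrapped in <think>...</think> tags;
        other models use different markers or a separate field.
        """
        match = re.search(r"<think>(.*?)</think>", raw_completion, flags=re.DOTALL)
        if match is None:
            return "", raw_completion.strip()          # no reasoning block found
        reasoning = match.group(1).strip()             # hidden by the UI
        answer = raw_completion[match.end():].strip()  # shown to the user
        return reasoning, answer

    reasoning, answer = split_reasoning(
        "<think>The user wants a haiku, so count syllables...</think>Here is your haiku: ..."
    )
    print(answer)  # only the part after the closing tag is displayed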



You can see a good example of this in the DeepSeek website chat when you enable thinking mode.

You can see it spew pages and pages before it answers.
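
The same split is visible through their API, where the hidden stream comes back as a field separate from the answer. A minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint and the reasoning_content field it documents for deepseek-reasoner (details may have changed since):

    # Sketch only: the model name, base_url and reasoning_content field
    # are taken from DeepSeek's public docs and may change.
    from openai import OpenAI

    client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")
    resp = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[{"role": "user", "content": "Is 9.11 greater than 9.9?"}],
    )
    msg = resp.choices[0].message
    print(len(msg.reasoning_content))  # often pages of hidden "thinking"
    print(msg.content)                 # the much shorter visible answer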


My favorite is when it does all that thinking and then the answer completely ignores it.

If you ask it to write a story, for example, I find it often considers five or so plots or sets of character names in the thinking, but the answer ends up being entirely different.


I've also noticed that when I ask difficult questions, the real solution is somewhere in the pages of "reasoning" but not in the actual answer.



