
Just when you thought the chatbot was dead


This seems like it's just feeding the output back into the model and using more compute to try to get better answers. If that's all it is, I don't see how it fundamentally solves any of the issues currently present in LLMs; at best a marginal improvement in accuracy at the cost of more expensive computation. And you don't even get to see the so-called reasoning tokens.
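
For anyone trying to picture what "feeding the output back into the model" means concretely, here's a minimal sketch of a self-refinement loop against the OpenAI chat API. The model name, prompts, and round count are illustrative placeholders, not what any vendor's reasoning pipeline actually does; the point is just that each round costs another full inference pass:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def refine(question: str, rounds: int = 3) -> str:
        """Ask once, then repeatedly feed the previous answer back for critique."""
        messages = [{"role": "user", "content": question}]
        answer = ""
        for _ in range(rounds):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=messages,
            )
            answer = reply.choices[0].message.content
            # Feed the model's own output back in and ask it to improve it.
            messages.append({"role": "assistant", "content": answer})
            messages.append({
                "role": "user",
                "content": "Review your answer above for mistakes and give an improved one.",
            })
        return answer

    print(refine("How many r's are in 'strawberry'?"))

Whether the extra rounds actually help depends on the model being able to catch its own errors, which is exactly the assumption being questioned.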



