Hacker News

> Modern LLMs in an agentic loop can self correct

If the problem as stated is "performing an LLM query at newly inflated cost $X is an iffy value proposition because I'm not sure it will give me a correct answer," then I don't see how "use a tool that keeps generating queries until it gets it right" (which seems to be basically what you're advocating for) is the solution.

I mean, yeah, the result will be more correct answers than if you just made one-off queries to the LLM, but the costs spiral out of control even faster, because the agent is going to generate multiple costly queries to reach that answer.
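The cost spiral can be made concrete with a toy model: if each attempt costs the same as a single query and succeeds independently with some probability p, a retry-until-correct loop makes 1/p attempts on average (a geometric distribution). A minimal sketch, where the names, the fixed per-attempt cost, and the independence assumption are all mine rather than anything from the thread:

```python
def expected_loop_cost(cost_per_query: float, p_correct: float) -> float:
    """Expected total spend for a retry-until-correct loop.

    Assumes each attempt costs `cost_per_query` and succeeds
    independently with probability `p_correct`, so the number of
    attempts is geometric with mean 1 / p_correct.
    """
    if not 0.0 < p_correct <= 1.0:
        raise ValueError("p_correct must be in (0, 1]")
    return cost_per_query / p_correct


# If a single query costs $X and is right one time in four,
# the loop's expected spend is 4X, not X.
print(expected_loop_cost(1.0, 0.25))  # 4.0
```

In practice agentic loops are worse than this toy model suggests: each retry usually carries the growing conversation context, so later attempts cost more tokens than the first, not the same.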



