
> From my experience, even the top models continue to fail to deliver correctness on many tasks, even with all the details and no ambiguity in the input.

You may feel that the prompt contains all the details and no ambiguity, but it may still be missing parts: examples, structure, a plan, or division into smaller sub-tasks (the model can do that quite well if explicitly asked). If you give too many details at once, it gets confused, but there are ways to let the model access context incrementally as it progresses through the task, as sketched below.
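
For concreteness, here is a minimal sketch of that incremental approach in Python. The call_model(prompt) wrapper is hypothetical, standing in for whatever client you actually use; the point is the structure, not the API:

    def call_model(prompt: str) -> str:
        # Hypothetical wrapper around your actual LLM client.
        raise NotImplementedError("plug in a real client here")

    def run_in_parts(task: str, parts: list[str]) -> list[str]:
        # Feed one sub-task at a time, carrying a short running
        # summary instead of the full transcript, so the model never
        # sees more context than the current step needs.
        summary, results = "", []
        for part in parts:
            result = call_model(
                f"Overall task: {task}\n"
                f"Progress so far: {summary or 'none'}\n"
                f"Current sub-task: {part}\n"
                "Complete only the current sub-task."
            )
            results.append(result)
            summary = call_model(
                f"Summarize the progress in two sentences:\n"
                f"{summary}\n{result}"
            )
        return results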

And models are just one part of the equation. Other parts include the orchestrating agent, its tools, the model's awareness of the tools available, documentation, and maybe even a human in the loop.
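
A rough sketch of what such an orchestration loop might look like, with a human approving each tool call. The text protocol, the tool registry, and the approval prompt are all invented for illustration:

    from typing import Callable

    def orchestrate(task: str,
                    tools: dict[str, Callable[[str], str]],
                    call_model: Callable[[str], str],
                    max_steps: int = 10) -> str:
        # Agent loop: the model is told which tools exist, picks one
        # per step, and a human vetoes calls before they execute.
        tool_list = "\n".join(f"- {name}" for name in tools)
        history = ""
        for _ in range(max_steps):
            reply = call_model(
                f"Task: {task}\nTools available:\n{tool_list}\n"
                f"History:{history}\n"
                "Reply with 'TOOL <name> <arg>' or 'DONE <answer>'."
            )
            if reply.startswith("DONE"):
                return reply[len("DONE"):].strip()
            fields = reply.split(" ", 2)
            if len(fields) != 3 or fields[0] != "TOOL":
                history += f"\nUnparseable reply: {reply}"
                continue
            _, name, arg = fields
            if name not in tools:
                history += f"\nNo such tool: {name}"
                continue
            # Human in the loop: confirm before anything runs.
            if input(f"Run {name}({arg!r})? [y/N] ").lower() != "y":
                history += f"\nHuman rejected {name}({arg})"
                continue
            history += f"\n{name}({arg}) -> {tools[name](arg)}"
        return "Gave up after hitting the step limit."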



I've given thousands of well-detailed prompts. Of those, a large enough portion yielded results that diverged from unambiguous instructions that I stopped, long ago, being fooled into thinking LLMs faithfully interpret instructions.

But if, from your perspective, it does work, more power to you, I suppose.




