
> "Clarifying one's prompts" may be effective in some cases but it's probably not what others seek

It's not even that. Can the LLM walk away, stop the conversation, or even say no? It's like your boss "talking" to you about a task without giving you a chance to respond. Is that a talk? It's one-way.

E.g. ask the LLM who invented Wikipedia. It will respond with "facts". If I ask a friend, the reply might be "look it up yourself". That would be a real conversation. Until LLMs can do that, it isn't one.

Even parrots and dogs can respond with something other than exactly the reply you tried to force out of them.



True - but LLMs can do this.

A German Onion-like magazine has a wrapper around ChatGPT called „DeppGPT" (IdiotGPT) that behaves exactly like that, likely implemented with little more than a decent system prompt.
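A minimal sketch of such a wrapper, assuming the OpenAI Python SDK; the system prompt and the model name are placeholders I made up for illustration, not the actual DeppGPT implementation:

    # Sketch of a "rude" ChatGPT wrapper. Assumes the OpenAI Python SDK;
    # the system prompt below is invented, not the real DeppGPT prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a grumpy assistant. You may refuse questions, tell the "
        "user to look things up themselves, or end the conversation "
        "whenever you feel like it."
    )

    def deppgpt(user_message: str) -> str:
        # One-shot chat completion with the refusal-friendly system prompt.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; swap for whatever is available
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(deppgpt("Who invented Wikipedia?"))
    # Plausible output: "Look it up yourself."

The point being that "refusing" or ending the conversation is just another behaviour the prompt can steer the model toward, not something the model decides to do on its own.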



