
It doesn’t really solve the problem, since a slight shift in the prompt can still produce totally unpredictable results. And if your prompt is always exactly the same, you’d just cache the response and bypass the LLM entirely.

What would really be useful is if a very similar prompt always gave a very, very similar result.
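For the "exactly the same prompt" case, the caching really is trivial; roughly something like this (a minimal sketch in Python, where `call_llm` and the hashing scheme are just placeholders, not any particular library's API):

    import hashlib

    # Exact-match response cache: byte-for-byte identical prompts reuse the
    # stored answer and never reach the model. `call_llm` stands in for
    # whatever client you actually use.
    _cache: dict[str, str] = {}

    def cached_generate(prompt: str, call_llm) -> str:
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key not in _cache:
            _cache[key] = call_llm(prompt)  # only call the LLM on a cache miss
        return _cache[key]

The catch is exactly the point above: change one character and the hash misses, so exact-match caching does nothing for "very similar" prompts.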





This doesn't work with the current architecture, because we have to introduce some element of stochastic noise into the generation, or else the models aren't "creatively" generative.
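To make the "stochastic noise" concrete: at each step the model outputs a distribution over tokens, and decoding usually samples from it at some temperature. A toy sketch (not any real decoder's API):

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float,
                          rng: np.random.Generator) -> int:
        # Pick the next token id from raw logits.
        if temperature == 0.0:
            # No noise: always take the single most likely token, so the
            # output is repeatable.
            return int(np.argmax(logits))
        # Nonzero temperature: soften the distribution and draw a random
        # sample; this is where run-to-run variation comes from.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    rng = np.random.default_rng()
    logits = np.array([2.0, 1.5, 0.3])
    print(sample_next_token(logits, 0.0, rng))  # always 0
    print(sample_next_token(logits, 1.0, rng))  # varies from run to run

At temperature 0 this collapses to argmax and identical prompts repeat; anything above that, and the same prompt can take a different path every time.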

Your brain doesn't have this problem because the noise is already present. You, as an actual thinking being, are able to override the noise and say "no, this is false." An LLM doesn't have that capability.


Well, that’s because if you look at the structure of the brain, there’s a lot more going on than there is inside an LLM.

It’s the same reason great ideas seem to appear randomly: something is happening in the background, underneath the skin.


That’s a way different problem, my guy.


