
The seed and the actual randomness are properties of the inferencing infrastructure, not the LLM. The LLM essentially just outputs probabilities.
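
To illustrate the separation, here's a minimal sketch (the toy model, the function names, and the seeded sampler are all hypothetical, not from the paper or any particular serving stack): the model deterministically maps a prompt to a distribution over the next token, and the sampler, which is where the seed lives, belongs to the serving layer.

    import numpy as np

    def next_token_probs(prompt_tokens):
        # Stand-in for the LLM forward pass: in reality this runs the
        # transformer and softmaxes the logits. The model itself is
        # deterministic -- same prompt in, same distribution out.
        logits = np.array([2.0, 1.0, 0.5, 0.1])  # toy vocabulary of 4 tokens
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    def sample_next_token(prompt_tokens, seed, temperature=1.0):
        # The randomness lives here, in the serving infrastructure: the
        # seed and temperature are not part of the model at all.
        probs = next_token_probs(prompt_tokens)
        if temperature != 1.0:
            probs = probs ** (1.0 / temperature)
            probs = probs / probs.sum()
        rng = np.random.default_rng(seed)
        return rng.choice(len(probs), p=probs)

    # Same model output, different sampled tokens depending on the seed
    # the infrastructure chooses:
    print(sample_next_token([1, 2, 3], seed=0))
    print(sample_next_token([1, 2, 3], seed=1))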

The paper is not claiming that you can take a dump of ChatGPT responses off the network and figure out what prompts were given. It's much more about an internal property of the LLM.


