
> Except much worse, because it could allow spurious or even harmful facts to accrue

It already did, even in the "purely human" era. I think LLM text will gradually become more trustworthy than a random website once training sets are consistency-filtered.
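One minimal way to picture "consistency filtering" (purely a sketch; the function name and the exact-string-matching shortcut are my own assumptions, not an established pipeline): keep a candidate fact only when several independent samples or sources agree on it, and drop outliers before they enter the training set.

  from collections import Counter

  def consistency_filter(samples, min_agreement=3):
      # samples: normalized claim strings extracted from different
      # sources or model generations (hypothetical input format).
      # Keep a claim only if at least `min_agreement` samples agree.
      counts = Counter(samples)
      return {claim for claim, n in counts.items() if n >= min_agreement}

  # Hypothetical usage: claims pulled from several independent sources
  claims = [
      "water boils at 100 C at sea level",
      "water boils at 100 C at sea level",
      "water boils at 100 C at sea level",
      "water boils at 90 C at sea level",
  ]
  print(consistency_filter(claims))
  # -> {'water boils at 100 C at sea level'}

In practice one would match claims semantically rather than by exact string, but the idea is the same: agreement across independent sources stands in for trustworthiness.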



Unfortunately, it is more than likely that the training inputs to upcoming LLMs will be partly drawn from older LLM outputs.



