> Except much worse, because it could allow spurious or even harmful facts to accrue
It already did, even in the "purely human" era. I think LLM text will gradually become more trustworthy than a random website, as training sets get consistency-filtered.