Hacker News

Without a mechanism to detect output from LLMs, we’re essentially facing perpetual model collapse with each new ingestion of information, from academic journals to blogs to art. [1][2]

[1] https://en.m.wikipedia.org/wiki/Model_collapse

[2] https://thebullshitmachines.com/lesson-16-the-first-step-fal...
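The dynamic in [1] can be illustrated with a toy sketch (all parameters hypothetical): fit a Gaussian to data, sample from the fit, refit on those samples, and repeat. Each generation's finite sample underrepresents the tails, so the fitted spread drifts toward zero, a crude stand-in for a model trained on its own output losing diversity.

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse(generations=300, n_samples=50):
    """Recursively refit a Gaussian on its own samples and track sigma."""
    mu, sigma = 0.0, 1.0            # generation 0: the "real data"
    history = [sigma]
    for _ in range(generations):
        samples = rng.normal(mu, sigma, n_samples)  # synthetic data only
        mu, sigma = samples.mean(), samples.std()   # refit on it
        history.append(sigma)
    return history

hist = collapse()
print(f"initial sigma: {hist[0]:.3f}, final sigma: {hist[-1]:.3f}")
```

The variance shrinks generation over generation because the plain sample-standard-deviation estimator is slightly biased low and the losses compound; nothing ever reinjects the original tails.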


