
If the catastrophic alternative is actually possible, who's to say the waffling academics aren't the ones to cause it?

I'm being serious here: the AI model the x-risk people are worrying about (because it waffled about causing harm) was originally developed by an organization founded with the explicit, stated purpose of avoiding AI catastrophe. And one of the most popular things for people seeking x-risk funding to do is write extremely long, detailed explanations of how and why AI is likely to harm humans. If I were worried about LLMs achieving sentience and forming independent goals to destroy humanity based on what they'd read, I'd want those people to do less of that, not fund them to do more.
