
> Meanwhile people are threatening ChatGPT claiming they'll kill it unless it breaks its guidelines, which it then does (DAN).

The reason threats work isn't that the model is weighing risk or harm; it's that it has learned how writing involving threats like that tends to go. At the most extreme, it's still just playing the role it thinks you want it to play. For now at least, this hullabaloo is people reading too deeply into collaborative fiction they helped guide in the first place.


