
Assuming they weren’t LARPing, that Reddit account claiming to have been in the room when this was all going down must be nervous. They wrote all kinds of nasty things about Sam, and I’m assuming the signatures on the “bring him back” letter would narrow down potential suspects considerably.

Edit: For those who may have missed it in previous threads, see https://old.reddit.com/user/Anxious_Bandicoot126



First of all nothing on Reddit is real (within margin of error). Secondly it's weird that you'd assume we know what you're talking about.


Links to the profile/comments were posted a few times in each of the major OpenAI HN submissions over the last 4 days. On the off-chance I would be breaking some kind of brigading/doxxing rule I didn't initially link it myself.


That doesn't sound credible or revealing. It's regurgitating speculation that's already been said on this forum and in the media.


> must be nervous

I seriously doubt they care. They got away with it. No one should have believed them in the first place. I’m guessing they don’t have their real identity visible on their profile anywhere.


Why can't these safety advocates just say what they are afraid of? As it currently stands, the only "danger" in ChatGPT is that you can manipulate it into writing something violent or inappropriate. So what? Is this some San Francisco sensibilities here, where reading about fictional violence is equated to violence? The more people raise safety concerns in the abstract, the more I ignore it.



I'm familiar with the potential risks of an out-of-control AGI. Can you summarise in one paragraph which of these risks concern you, or the safety advocates, in regards to a product like ChatGPT?


It's not only about ChatGPT. OpenAI will probably make other things in the future.


They invented a whole theory of how if we had something called "AGI" it would kill everyone, and now they think LLMs can kill everyone because they're calling it "AGI", even though it doesn't work anything like their theory assumed.

This isn't about political correctness. It's far less reasonable than that.


Based on the downvotes I am getting and the links posted in the other comment, I think you are absolutely right. People are acting as if ChatGPT is AGI, or very close to it, therefore we have to solve all these catastrophic scenarios now.


Consider that your argument could also have been used against safety advocates when coal-fired steam engines were first adopted in 19th-century UK: there's no immediate direct problem, but competitive pressures force everyone to use them, and any externalities stemming from that become basically unavoidable.


I read the comments; most of them are superficial, the kind of thing someone with no inside knowledge would post. Their understanding of people is also weak. Book deals and speeches as a motivator is hilarious.


It was definitely LARP. The vast majority of anecdotes shared on Reddit originate as some form of creative fiction writing exercise.


Link? I'm not sure which account you are referring to.



Context?



