
How are you even going to moderate such content? How will the website operator even know if it's a real human or an AI agent controlling a computer?


So far HN users seem to be doing a pretty good job of flagging them.

Of course, the big question is what to do if/when they're smart enough to fool everybody.


By definition, in that limit they'll be genuinely adding to the discourse, so presumably they should stay.

Edit: More correctly, they'll be making contributions to the discourse that closely mimic the human distribution, so from a pure content perspective they won't be making the discourse any worse in the very short term.


I made a similar point a while ago (maybe last year) and there were some pretty good objections to it. Unfortunately I couldn't find that post when I looked for it last night!


One obvious counterpoint is that using AI tools allows manipulation of the discussion in a similar way to using a bullhorn in a coffee shop, only without revealing that you're the one holding the bullhorn. 10,000 bots "contributing to the discourse" in accordance with prompts and parameters controlled by one or a few individuals is quite different from the genuine opinions of 10,000 actual humans, especially if anyone is trying to use the discussion to get a sense of what humans actually think.


That's a good counterpoint and, IIRC, it's in addition to the other good counterpoints that I still can't find.


Offer a Turing award to the bot-trainer?


Turns out Hacker News was actually the long play to train AGI.


hm, wouldn't you almost by definition think you were doing a good job of flagging them at any level of actual effectiveness?


Not if I were seeing a bunch of them get through.



