> Reddit is considered one of the most human spaces left on the internet, but mods and users are overwhelmed with slop posts in the most popular subreddits.
Is it truly considered as such by actual end users, or by company leadership with a vested interest in cultivating that idea for marketers? I've been on Reddit since 2011. Between 2011 and 2016, the site felt very human. From 2016 onward, it has progressively felt less and less so.
This is, of course, anecdotal and runs contrary to the growing number of users the platform reports every year, but it mirrors scores of complaints from other users about how inauthentic the platform has felt.
Perhaps relatedly, I've noticed that the posts occupying Top/Hot/Best are increasingly made by accounts without so much as a 'Verified Email' badge [1], something Reddit has historically not enforced heavily but which is easily abused by bad actors when the barrier to entry is effectively non-existent. These accounts share similar traits: generally palatable posts (usually reposts) scattered across various subreddits, and a comment history (if any) in the same style of palatable, non-controversial statements that are easily upvoted.
Roughly seven years ago, u/KeyserSosa [2] acknowledged an influence campaign on Reddit. An evergreen comment from that thread:
> I am worried by just how... normal these accounts seem. How can we ever hope to weed out influencers who subvert social platforms like this one if they are so good at hiding it? Can neural algorithms even deal with this?
The ubiquity of inauthentic, AI-generated content placed in front of real human end users, enabled by that very low barrier to entry, will invite more articles like this in the months and years ahead unless Reddit makes some sort of qualitative change -- and the pattern of its previous behavior doesn't inspire confidence.
A favorite comment I've read here on HN is this one [3], and it applies well to the modern social media ecosystem.
> My take is, if a community is constrained by quality (eg moderation, self-selecting invite-only etc) then the only way it grows is by lowering the threshold. Inevitably that means lower quality content. To some extent, more people can make up for it. Eg if I go from 10 excellent artists to 1000 good ones, chances are that the top 10% artwork created actually gets better.
> But eventually if you grow by lowering quality, then, well, quality drops.
> I suppose for very small societies, they may be limited by discoverability/cliquiness and not quality, so their growth doesn’t mesh with quality and so they could also get better with size.
> Note, “quality” doesn’t have to mean good/bad but also just “property”. When Facebook started, it was for kids from elite schools. It then gradually diluted that by lowering that particular bar. Then it was for kids from all schools. Then young people. Then their parents too. Clearly, it’s far from dying in absolute terms, but it’s certainly no longer what it initially was. To many initial users, it’s as good as dead though.
> Canada should never collaborate with any US authorities!
Cross-border collaboration is a good thing. Our agencies regularly collaborate to bring people who feel insulated and emboldened to account for their crimes. This works both ways.
As someone who has dealt with media of me as a minor (around 11 years old) from Omegle being shared across the internet, the role archivers play in keeping illegal content “alive” isn’t well recognized. Thankfully, the Internet Archive has a mature process for purging pages that host illegal content.
We do not know what the investigation is for. All is up to speculation. Not all investigations are bad.
Here is an example on archive.is. I submitted multiple complaints to NCMEC but didn’t get results. Germany, though, was able to get the archives purged.
> In response to a request we received from 'jugendschutz.net' the page is not currently available.
That page held many, many images of minors. It is good that it is gone.
August 12, 2025 - Canadian Man Sentenced to 188 Months for Attempted Online Enticement of a Minor and Possessing Child Pornography [1]
August 21, 2024 - Canadian National Extradited To The United States Pleads Guilty To Production Of Child Sex Abuse Material And Enticement Of Minors [2]
December 20, 2024 - Extradited Canadian National Sentenced To Life In Federal Prison for producing child sexual abuse material and enticement of a minor [3]
IMO it's only a good thing when it's a good thing. There are plenty of reasons it could be a bad thing, too. For example, Edward Snowden probably would have been hanged by now if Russia engaged in this kind of cross-border collaboration.
Not sure we can confidently state how the shooter feels about firearms one way or the other at this time. As of right now, we know the rifle used in the murder was an x/years_old family heirloom given to the suspect as a gift, but the police have not shared anything substantive beyond those details.
We are likely to hear more about the shooter's position on firearms at a more granular level at trial, as prosecutors build a profile of Robinson to present to the jury.
Violent crimes are generally impulsive: the accessibility of the firearm absolutely lent itself to the murder occurring, but possession of a rifle, in general, doesn't offer much genuine insight beyond speculation.
Over the last three or so years I've passively followed Meta's negative PR campaign against TikTok; a lot of the outrage felt manufactured, particularly outlandish claims like the 'slap a teacher challenge', which, upon investigation, didn't actually exist. [0]
While I'm hesitant to accept anything posted on social media [Reddit, in this case] as something that translates to real-life sentiment of the average person, there has been quite a bit of celebration and encouragement towards damaging Tesla vehicles on there over the last several weeks.
In addition, there was a story I saw here of a note and a brick left on a car owner's windshield.
I don't think you are accounting for the discouraging effect on buying a new Tesla, and being prone to vandalism might also factor into insurance companies' calculations.
[1] https://www.redditstatic.com/awards2/verified_email-40.png
[2] https://www.reddit.com/r/announcements/comments/9bvkqa/an_up...
[3] https://news.ycombinator.com/item?id=31363953