
It’s like coders (and now their agents) are re-creating biology. As a former software engineer who changed careers to biology, it’s kind of cool to see this! There is an inherent fuzziness to biological life, and now AI is also becoming increasingly fuzzy. We are living in a truly amazing time. I don’t know what the future holds, but to be at this point in history and to experience this, it’s quite something.

The issue is that for most things we don't want the fuzzy nature of biology in our systems. Yet some people try to shoehorn it into everything. It is OK for chat or other natural-language features directed at a human, but most other systems we would like to be 100% reliable, not 99%, and not failing after a few years. At the very least, we want them to behave predictably, so that we can fix any mistakes we made when writing that software.

As a middle aged (gen x) woman, my facebook feed is pretty good. It's filled with posts from friends and interest groups that I am a part of. The reason I no longer use FB has nothing to do with the feed, it's because Mark Zuckerberg is an awful person, and I refuse to use his product. The cognitive dissonance is great here, because I still use WhatsApp; it's the best way to stay in contact with my relatives in Europe, and I still use IG, albeit mostly for work, and sparingly.

I'm still a FB user even though most friends and relatives have disengaged due to toxicity. But what I've noticed consistently is that any group on FB that has more than 1000 members will end up surfacing so much toxic sentiment that I have to unsubscribe. I'm talking about innocuous topics such as local road conditions. That one became full of rants about out-of-state drivers, drivers who don't understand English, people posting license plates of bad drivers, etc. This has led me to a theory that humans just can't behave nicely beyond some threshold group size.

> But what I've noticed consistently is that any group on FB that has more than 1000 members will end up surfacing so much toxic sentiment that I have to unsubscribe.

It depends on the group and how well it is moderated.

I live in an area where everything depends on Facebook. There are multiple FB groups for the town, the largest of which has 80k members. Not perfect, but not toxic. The same in other similar groups.

I am an admin of another with 30k members. It has a tight focus (exams and qualifications for home ed kids in the UK - GCSEs/IGCSEs mostly, but other things too), membership is only for parents of such kids (there are membership questions), the group is private, posts require approval, irrelevant comments get deleted, repeat offenders get kicked out. We do not have a lot of problems (some attempts at spam by tutors, but they get kicked out).


> This has led me to a theory that humans just can't behave nicely beyond some threshold group size.

I think what happens is that the risk of including a critical mass of "toxics" (for lack of a better word), enough that they can keep a conversation going, increases with FB group size. Without active moderators it doesn't take much.


I think it is important to remember that only a tiny, tiny fraction of most facebook groups is actually posting, commenting, or even viewing the group at any given moment. Most people who view don't post/comment. (True of reddit and other social media as well.)

And the thing about poorly moderated groups (especially on platforms with rage-boosting algorithms) that let assholes go off without consequences is: the people who both a) actually look at the group ever and b) aren't assholes either leave entirely, stop looking at the group, and stop posting/commenting to the group (if they ever did in the first place). They go find places to hang out where there aren't a bunch of assholes. Nobody wants to hang out with the assholes when they can easily just not.

And at the same time, the assholes all gravitate to the same few places because they get kicked out of all the other places. Or if they don't get kicked out outright, they get shouted down or ignored, which they hate. So instead they congregate where they can get away with or get praised for saying whatever vile things they want.


The Dunbar number is about 150 for humans, but that only measures the ability to maintain a group; maybe the "behave nicely" number is smaller.

I think after a certain group size people feel immune, or feel that their alternative viewpoint might have a better chance of landing with someone.

Once a group gets big enough...

> This has led me to a theory that humans just can't behave nicely beyond some threshold group size.

I think you're generalizing far too broadly. The problem you're describing is more-or-less exclusively a problem with online, open-membership groups.

Consider: if the groups you describe were in-person groups, these ranters would constantly be getting disengaged/off-put/disgusted reactions from the "silent majority" of the people in the group. And just these reactions — together with a lack of any positive engagement — would, almost always, be enough to make them stop or go somewhere else.

(Or, to put a finer point on that: "annoyed, judgemental silence, and then turning away / back to the person you were talking to" would always put off the vast majority of people, with just a few — people who have trouble understanding non-verbal signals — persisting because they aren't "getting the message." And in an in-person context, these few would still eventually be taken aside and given a talking-to, because if they're butting into other in-person conversations with this behavior, they're being far more disruptive than "random new conversation threads" tend to be felt as. Even though "random new conversation threads" can kill a group just as dead.)

The problem with decorum / respect-for-purpose in unmoderated online open-membership groups seems to mostly stem from the fact that people underestimate the importance of non-verbal signals in moderating/regulating behavior. And so there is a dearth of such signals available in such groups. Our brains didn't evolve to play the game of socializing without these signals, any more than ants evolved to coordinate without pheromones. So many people's brains begin to play the game in degenerate / anti-social ways.

From what I've been able to gather, from personal interactions with many people who admit to being "Internet trolls" at some point in their lives... their behavior was almost never intentional maliciousness/active-disregard-for-others on their part. It's rather an emergent behavior — something they "just ended up doing" — given a lack of (non-verbal-signal-alike) calibrating feedback.

And why is there so little non-verbal-signal-alike communication online?

Well, for one thing, we often aren't even aware we're giving off such signals; and so, if we need to consciously choose to communicate them (as we do in online contexts), then we simply fail to do so, because the majority of these signals never even rise to our conscious attention as something to be communicated.

And even when we do become aware of them, we often don't feel them to be important enough to be "worth" going to the effort of translating into some more conscious/explicit/non-subtextual form of communication.

And then, even when a strong desire to communicate a nonverbal signal does bubble up within us... most online chat/forum systems are horrible at transmitting such signals with any degree of fidelity, when they transmit them at all. Especially the kinds of signals used for intra-group behavior regulation.

Facebook, for example, has reaction emojis on both posts and comments — but no reaction emoji that transmits a sentiment like "I disapprove of you saying this; please stop" (e.g. U+1F611 EXPRESSIONLESS FACE or U+1FAE4 FACE WITH DIAGONAL MOUTH). Rather, the only reaction emoji available are those meant to react sympathetically to the emotive content of the post/comment — e.g. with anger, sadness, etc. (People do try to use the "anger" reaction to express disapproval of posts; but when the content itself is often "ragebait" / meant to evoke anger, the poster won't necessarily understand that these reactions are being directed at them, rather than at their post.)
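For reference, the two codepoints named above are easy to inspect programmatically. A minimal Python sketch (this just looks up the characters in Python's Unicode database; it has nothing to do with Facebook's actual reaction set):

```python
# Inspect the two "mild disapproval" emoji mentioned above by codepoint.
import unicodedata

expressionless = "\U0001F611"  # U+1F611 EXPRESSIONLESS FACE (Unicode 6.0)
diagonal_mouth = "\U0001FAE4"  # U+1FAE4 FACE WITH DIAGONAL MOUTH (Unicode 14.0;
                               # name lookup for it requires a recent Python)

# unicodedata.name() returns the official character name for characters
# known to the running interpreter's Unicode database.
print(unicodedata.name(expressionless))  # EXPRESSIONLESS FACE
print(hex(ord(diagonal_mouth)))          # 0x1fae4
```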

Further, no chat system or forum I'm aware of has participant-visible signals of "detach rate" — i.e. there's no way for people to know when others are clicking on their posts, reading one line, doing a 180 and running away as fast as they can. (YouTube videos expose this metric to their creators; I think it's actually very helpful for them. It could do with being implemented far more widely.)

(And, to be a conspiracy theorist for a moment: I think, in both cases, this is probably intentional. The explicit purpose of signals that "regulate behavior", after all, is to make people engage less in certain anti-social behaviors. Making available any such tools, will therefore inevitably make any kind of platform-aggregate "engagement metrics" go down! If they were ever temporarily introduced, they'd have been quickly removed again with this justification.)


Great analysis. I do not think it's conspiracy theorizing to believe it to be intentional, or at least a result of KPIs.

One thing I think you are missing is that in-person groups are usually far smaller. Anything with 1,000 people would be organised, and there would be rules of behaviour, moderation of discussion, etc. Most often if something is that big, it's mostly an audience.

I think the other difference from real-life groups is that there is no community or real relationships. If you annoy people in real life it has consequences. In an FB group there are none.


My Facebook feed is great, my X feed is great. I don't use Facebook and X because I like Mark Zuckerberg and Elon Musk but because I genuinely read interesting things and I interact with people I like.

That being said, I don't spend too much time on social networks because I have lots of other things to do.




Sometimes you have to stand for something even if it’s inconvenient.

It's working too. All my friends stopped using Facebook for similar reasons. My feed went from a 24/7 pleasant reunion to a fetid swamp and now I also have stopped using it.



> You also don’t systematically evaluate all CEOs of all products to use.

We certainly evaluate companies on their CEOs if their CEOs make themselves high profile enough.

You are certainly judged here if you have a Tesla because of Musk, which is why sales have dropped 50%.

Other companies that don't have as high profile CEOs can get away with terrible points of view.


Oh yeah? How is the CEO of your power company? Your refrigerator? Your garage door?

I just am very skeptical any of this is based on a harm based model of morality. Instead it smells like concern about perception or status:

> You are certainly judged here if you have a Tesla


> Other companies that don't have as high profile CEOs can get away with terrible points of view.

> > Oh yeah? How is the CEO of your power company? Your refrigerator? Your garage door?

If they hide their terrible opinions then it's hard to make judgements.


Exactly, so your morality is actually based on media prominence and status, not harm.

I agree with you, but it's a tool that should only be used very sparingly because tariffs can be incredibly difficult to get rid of. See for example the "chicken tax" for light trucks which was instituted in 1964 (because the Europeans tariffed US chicken exports).

Green algae, which are essentially being farmed by the fungus, are closely related to Plantae and are often included in the kingdom in the broad sense (Plantae sensu lato).

> LLMs aren’t built around truth as a first-class primitive.

neither are humans

> They optimize for next-token probability and human approval, not factual verification.

while there are outliers, most humans also tend to tell people what they want to hear and to fit in.

> factuality is emergent and contingent, not enforced by architecture.

like humans; as far as we know, there is no "factuality" gene, and we lie to ourselves, to others, in politics, scientific papers, to our partners, etc.

> If we’re going to treat them as coworkers or exoskeletons, we should be clear about that distinction.

I don't see the distinction. Humans exhibit many of the same behaviours.


There's a ground truth to human cognition in that we have to feed ourselves and survive. We have to interact with others, reap the results of those interactions, and adjust for the next time. This requires validation layers. If you don't see them, it's because they're so intrinsic to you that you can't see them.

You're just indulging in a sort of idle, cynical judgement of people. To lie well even requires a careful, truthful evaluation of the possible effects of that lie and the likelihood and consequences of being caught. If you yourself claim to have observed a lie, and can verify that it was a lie, then you understand a truth; you're confounding truthfulness with honesty.

So that's the (obvious) distinction. A distributed algorithm that predicts likely strings of words doesn't do any of that, and doesn't have any concerns or consequences. It doesn't exist at all (even if calculation is existence - maybe we're all reductively just calculators, right?) after your query has run. You have to save a context and feed it back into an algorithm that hasn't changed an iota from when you ran it the last time. There's no capacity to evaluate anything.

You'll know we're getting closer to the fantasy abstract AI of your imagination when a system gets more out of the second time it trains on the same book than it did the first time.


Strangely, the GP replaced the ChatGPT-generated text you're commenting on by an even worse and more misleading ChatGPT-generated one. Perhaps in order to make a point.

If an employee repeatedly makes factually incorrect statements, we will (or could) hold them accountable. That seems to be one difference.

You can cancel your AI subscription too.

Then you lose all your employees. And all the new candidates are just the old ones in new costumes.

They share a common ancestor with ancient fish, yes. Land mammals also share a common ancestor with starfish. That doesn't make them starfish.

The "humans are fish" idea comes from cladistics.

"Fish", if taken as a monophyletic term, includes land mammals because tetrapods are osteichthyans -- bony fish.

In common use, however, "fish" is a paraphyletic group which excludes tetrapods but otherwise includes all other osteichthyans.

Since starfishes don't include tetrapods, nor vice versa (nor do they share biological features to be in a polyphyletic grouping like "crabs"), the relevant common term is "Animalia" -- "animal".


Do you have a reference or at least some hard numbers for your "fun fact"?

Long gun homicides (justified and unjustified, "assault weapons" and grandpa's 30-06 combined) are typically sub-500 per year, see: FBI crime stats for the last N decades.

Pick whatever demise: falling off of ladders, roofs, etc. - it's not hard to exceed this number in any given year.


The genus Gnetum does not have a good fossil record, as is the case for many tropical taxa, but based on molecular clock data, the genus dates to around the Cretaceous-Paleogene boundary (66 mya). Gnetum is a really weird gymnosperm, not a flowering plant, although it does produce fleshy "cones" (strobili). In Indonesia, the seeds are smashed and deep fried to make crisps called "emping".

https://onlinelibrary.wiley.com/doi/10.12705/642.12


Claude, please remove all invasives

I think it's smart to be skeptical of any "review" site that depends on affiliate links for income. The incentive is no longer to provide advice, it's to sell you something. Anything. Click the link. Good. Now buy something. That's right. Add it to your basket. It doesn't matter what you buy. Yes, higher priced items are better. Checkout. We get our sweet kickback, nice.

Unfortunately, every review site uses affiliate links. Even organizations with very high ethical standards like Consumer Reports use them now. At least CR still gets most of its income from subscriptions and memberships. I guess that's something.


> Yes, higher priced items are better.

This is the real reason I don't trust sources that make money off affiliate links. The incentive is to recommend the more expensive items due to % kickback.


Wirecutter is part of NYTimes and depends on crosswords for income.

I haven't always agreed with them and sometimes the articles are clearly wrong because they're several years old, but they're usually good.

(I think I last seriously disagreed with them about a waffle maker.)


Wirecutter does an interesting thing: I don't necessarily disagree with their reviews of the products they chose, but I'm baffled why they didn't review the overwhelmingly most popular item in the category. Those omissions are what seem most suspect to me.

Sometimes at the bottom of reviews they mention a lot more products than appeared in the main review. Not always, though. I'm not disagreeing about the decline in reliability, just noting this because it can be easy to miss, and when they do include it I find it helpful.

Wirecutter has stated in the past, maybe it was on their podcast, that they get a lot of their income from affiliate links. They have done some fairly suspicious things like their “gift guide”s for Christmas which are little more than long lists of products with affiliate links. Same for their “sales guide” for Black Friday, and there have been other cases. That doesn’t mean their reviews are bad, I just approach them with a certain amount of skepticism.

Seems in line with their original purpose still. They seemed to always want to be a source to suggest a product that is good enough for a consumer, to help avoid decision paralysis, and avoid fake products that are both expensive and flawed. Suggesting a list of gifts that are suitable and not deeply flawed is exactly what a lot of people are probably looking for around Black Friday.
