Is it superstition to deduce that I get gassy after eating beans? I need a scientific study to tell me this? Same for if a screen hurts my eyes (not long term, like truly my eyes hurt) when using bright white colors at night.
Yes, actually, if someone has direct scientific evidence contrary to the claim (I doubt such evidence exists for your first example; to the best of my knowledge, the relationship between beans and gastrointestinal changes is well understood).
Your eyes could hurt for a variety of reasons - brightness, too long screen time, being dry for external reasons, etc. Most humans are poor at identifying the cause of one-off events: you may think it's because you turned on a blue-light filter, but it actually could be because you used your phone for an hour less.
That's why we have science to actually isolate variables and prove (or at least gather strong evidence for) things about the world, and why doctors don't (or at least shouldn't) make health-related recommendations based on vibes.
It's pretty clear, even on a monitor: a night-and-day difference at the push of a button. I'm not arguing about whether this helps you sleep better, but it is pretty arrogant of you to tell me I can't figure out from my own experience whether something is comfortable or not.
It’s about the equivalent of someone claiming my saying I find woollen clothing directly touching my skin to be irritating / itchy requires double blind randomised controlled studies to determine whether this is true at the population level.
There are eight billion of us; we can't all be different. There must be at least some categories we can be sorted into: maybe those who find woollen clothing itchy and those who don't, and those who find blue-light reduction more comfortable and those who don't.
One of my pet theories is that this hyperfixation on The Ultimate Truth via The Scientific Method is what happens when a society mints PhDs at an absurd rate. We end up with a lot of people who learn more and more about less and less, and a set of people who idolise those people and their output.
If your eyes routinely hurt when doing something, and then they stop routinely hurting after you make a change, that's pretty good reason to believe that there's a causal effect there.
Sometimes the causality is clear enough that you don't need sophisticated science to figure it out. Did you know that the only randomized controlled trial on the effectiveness of parachutes at preventing injury and death when jumping out of an airplane found that there is no effect? Given that, do you believe there really is no effect?
There are a few viral shorts lately about tricking LLMs. I suspect they trick the dumbest models.
I tried one with Gemini 3, and it basically called me out in the first few sentences for trying to trick/test it, but decided to humour me just in case I wasn't.
And you believe the other open source models are a signal for ethics?
I don't have a dog in this fight and haven't done enough research to proclaim any LLM provider ethical, but I'm pretty sure the reason Meta has an open-source model isn't that they're the good guys.
That's probably why you don't get it, then. Facebook was the primary contributor behind PyTorch, which basically set the stage for early GPT implementations.
For all the issues you might have with Meta's social media, Facebook AI Research has an excellent reputation in the industry and contributed greatly to where we are now. The same goes for Google Brain/DeepMind despite Google's advertising monopoly; things aren't ethically black and white.
A hired assassin can have an excellent reputation too. What does that have to do with ethics?
Say I'm your neighbor and I make a move on your wife, your wife tells you this. Now I'm hosting a BBQ which is free for all to come, everyone in the neighborhood cheers for me. A neighbor praises me for helping him fix his car.
Someone asks if you're coming to the BBQ, and you say nah, you don't like me. They go, "WHAT? jack_pp? He rescues dogs and helped fix my roof! How can you not like him?"
Hired assassins aren't a monoculture. Maybe a retired gangster visits Make-A-Wish kids, and has an excellent reputation for it. Maybe another is training FOSS SOTA LLMs and releasing them freely on the internet. Do they not deserve an excellent reputation? Are they prevented from making ethically sound choices because of how you judge their past?
The same applies to tech. PyTorch didn't have to be FOSS, nor TensorFlow. In that timeline, CUDA might have had a total monopoly on consumer inference. Out of all the myriad ways AI could have been developed and proliferated, we are very lucky that it happened in a public, friendly rivalry between two companies with money to burn. The ethical consequences of AI being monopolized by a proprietary prison warden like Nvidia or Apple would be comparatively apocalyptic.
A gangster will give out free turkeys on Thanksgiving while also selling drugs to the same community, enslaving them in the process. Very good analogy you found, thank you.
My problem is that you seem naive enough to believe Zuck decided to open-source stuff out of the goodness of his heart, and not because he did some math in his head and decided it was advantageous to him, from a game-theoretic standpoint, to commoditize LLMs.
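The game-theoretic claim above is the classic "commoditize your complement" argument. A minimal toy sketch, with numbers I invented purely for illustration (none of them come from the thread or from Meta's actual finances): if LLMs are a complement to an ad-driven core business, giving the model away can beat licensing it, even though licensing earns direct revenue.

```python
# Toy "commoditize your complement" payoff comparison.
# All figures are made-up illustrative units, not real data.

def platform_payoff(open_sourced: bool) -> int:
    """Total payoff for a hypothetical ad-funded platform."""
    ad_revenue = 100                                   # core business
    license_revenue = 0 if open_sourced else 10        # forgone if open
    # Open models erode competitors' moats and grow the ecosystem
    # around the platform; in this toy model that boost outweighs
    # the forgone licensing revenue.
    ecosystem_boost = 25 if open_sourced else 0
    return ad_revenue + license_revenue + ecosystem_boost

print(platform_payoff(open_sourced=True))   # 125
print(platform_payoff(open_sourced=False))  # 110
```

Under these assumed numbers, open-sourcing is simply the higher-payoff move; no goodness of heart is required for the strategy to make sense.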
To even have the audacity to claim Meta is ETHICAL is baffling to me. Have you ever used FB/Instagram? Meta is literally the gangster selling drugs while also playing the philanthropist where it costs him nothing and might even bring him more money in the long term.
You must have no notion of good and evil if you believe for a second that one person can create Facebook, with all its dark patterns and blatant anti-user tactics, and also be ethical, just because he open-sourced stuff he couldn't make money from.
IMO in a company (or rather, a conglomerate) as big as Meta, you can have teams that are genuinely good people and also have teams that don't have principles or refuse to live by them. In other words, divisions of big companies aren't homogeneous.
HN already has a mechanism for flagging posts. If flagging low-effort/trivial Show HN posts were normalized, I suspect it would work just fine: if a human can't easily decide whether or not to flag a post, they likely wouldn't.
As dang posted above, I think it's better to frame the problem as "an influx of low-quality posts" rather than as policies having to do explicitly with AI. I'm not sure I even know what "AI" is anymore.
If they immediately make another low-quality PR, that's when you ban them, because they're clearly behaving like a bad actor. But providing even trivial, boilerplate feedback like that is an easy way of drawing a bright line for contributors: you're not going to review contributions that are blatantly low-quality, and that's why they must refrain from posting raw AI slop.
> Can you provide examples in the wild of LLMs creating bad descriptions of code? Has it ever happened to you?
Yes. The docs it produces are generally very generic, as if they could be the docs for anything, with project specifics sprinkled in, plus pieces that are flatly incorrect about how the code works.
> for some stuff we have to trust LLMs to be correct 99% of the time
The above post is an example of the LLM providing a bad description of the code. "Local first" with its default support being for OpenAI and Anthropic models... that makes it local... third?
Can you provide examples in the wild of LLMs creating good descriptions of code?
>Somehow I doubt at this point in time they can even fail at something so simple.
I think it depends on your expectations. Writing good documentation is not simple.
First, good API documentation should explain how to combine the functions of the API to achieve specific goals. It should warn of incorrect assumptions and potential mistakes that might easily happen. It should explain how potentially problematic edge cases are handled.
And second, good API documentation should avoid committing to implementation details; simply verbalising the code is the opposite of that. Where the function signatures do not formally and exhaustively define everything the API promises, the documentation should fill in the gaps.
This happens to me all the time. I always ask Claude to re-check the generated docs and test each example/snippet, sometimes more than once; more often than not, there are issues.
That's a moral debate, not suitable for this discussion.
The discussion at hand is about purity and efficiency. Some people are process-oriented perfectionists, purists who take great pride in how they made something, even if the thing they made isn't useful to anyone except to stroke their own ego.
Others are more practical and see a tool as a tool: not every hammer you make needs to be beautiful and made from the best materials money can buy.
Depending on the context, either approach can be correct. For some things, being a detail-oriented perfectionist is good: things like a web framework, a programming language, or an OS. But for most things, just being practical and finding a cheap, clever way to get where you want to go will outperform over-engineering.
It sure is myopic to claim that the debate over whether the ends justify the means is solely a moral consideration, and then literally list cases where weighing the means against the ends is a judgment call that results in "it depends".
1. They're too stupid to understand what they're truly funding.
2. They understand but believe they can control it for their benefit, basically want to "rule the world" like any cartoon villain.
3. They understand but are optimists who believe AGI will be a benevolent construct that brings us to a post-scarcity society. There are a lot of rich people/entrepreneurs who still believe they are working to make the world a better place (one SaaS at a time, but alas, they believe it).
4. They don't believe that AGI is close or even possible