More accurate, albeit less sensational headline:

"Blindly following instructions from an LLM would have killed me."

Not exactly shocking when you take into consideration that, at their core, they're simply number predictors.
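For concreteness, here is a minimal sketch of what "number predictor" means (toy vocabulary and scores invented for illustration, not any real model's internals): the model assigns a score to each candidate next token, converts the scores to probabilities, and samples one.

    import math, random

    # Toy next-token prediction: the model gives each candidate token a
    # score (logit), softmax turns scores into probabilities, and one
    # token is sampled. Vocabulary and scores here are made up.
    logits = {"fridge": 2.1, "pantry": 1.9, "freezer": 0.3}

    def softmax(scores):
        m = max(scores.values())
        exps = {t: math.exp(s - m) for t, s in scores.items()}
        total = sum(exps.values())
        return {t: e / total for t, e in exps.items()}

    probs = softmax(logits)
    next_token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", next_token)

Nothing in that loop knows whether the sampled continuation is safe advice; it only reflects which continuation looked statistically likely.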



We know that. The mainstream discussion, use, and personification of LLMs do not suggest that.


Slight correction: the well-informed nerds of today know this, but the average person doesn't have the interest, math background, or software development experience to really grok it. What a time to be alive! Haha. Things keep getting weirder.


Even people on HN with that background make it abundantly clear that they'll believe or say anything that makes the number go up.


Even "We know that" in "we" are a minority it would seem. The majority is convinced about scaling laws and an acquired deeper meaning in LLMs and that LLMs already exhibit sparks of AGI and what not.


I think this is one of those examples where many people wouldn’t think about botulism because both garlic and oil are common and store safely out of the fridge uncombined.

Things like meat people might be more skeptical about, but imo this goes back to whether Google et al. really trust their LLMs to give definitive answers on food safety. If it were me, this is one area where I'd have the LLM refuse or hedge all answers, as with other sensitive topics.


So, in a way, like this? Someone drove into a river following their GPS. https://www.youtube.com/watch?v=YsxpdX2dB4M


Except GPS is a fixed problem.


It’s all fun and games when it’s obvious the answer comes from a chat bot. So far this is not shocking at all.

Wait until the whole internet has LLM content mixed in with other seemingly legit content, without advertising that it's LLM-generated.


It is like saying "Blindly navigating by Google Maps would have killed me." when a person is shot for trespassing on a classified military site that Google had deliberately removed from its maps.

Normal LLMs are number predictors, yes. But Google Gemini is not a normal LLM: it is a lobotomized model with an unknown training dataset, run through supposedly moral-educational filters, from which information about poisonous substances has been cut out.

Specific people are liable for pushing a hallucinating word generator into Google Search. Specific people are liable for censoring this "model". And the fact that responsibility for this censorship gets shifted to end users plays very much into their hands.

Update: I provided examples of different Gemini responses depending on version/censorship settings in https://news.ycombinator.com/item?id=40728686



