Slight correction: the well-informed nerds of today know this, but the average person doesn't have the interest, math background, or software development experience to really grok it. What a time to be alive! Haha. Things keep getting weirder.
Even "We know that" in "we" are a minority it would seem. The majority is convinced about scaling laws and an acquired deeper meaning in LLMs and that LLMs already exhibit sparks of AGI and what not.
I think this is one of those examples where many people wouldn’t think about botulism because both garlic and oil are common and store safely out of the fridge uncombined.
People might be more skeptical about things like meat, but imo this goes back to whether Google et al. really trust their LLMs to give definitive answers on food safety. If it were me, this is one area where I'd have the LLM refuse or hedge all answers, like with other sensitive topics.
It's like saying "Blindly navigating by Google Maps would have killed me." when a person was shot for trespassing on a classified military installation that Google had deliberately removed from its maps.
Normal LLMs are number predictors, yes. But Google Gemini is not a normal LLM: it is a lobotomized model, trained on an unknown dataset and filtered for supposedly moral-educational reasons, from which information about poisonous substances has been cut out.
Specific people are liable for pushing a hallucinating word generator into Google Search. Specific people are liable for censoring this "model". And the fact that responsibility for this censorship shifts to the end users plays very much into their hands.
"Blindly following instructions from an LLM would have killed me."
Not exactly shocking if you take into consideration that, at their core, they're simply number predictors.
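For anyone who hasn't poked at one directly, here's a minimal sketch of what "number predictor" means in practice. It assumes the Hugging Face transformers library and uses GPT-2 purely as an illustration (not Gemini, whose internals aren't public); the prompt is just an example.

    # Greedy next-token prediction: the model only ever scores which token
    # number is most likely to come next, over and over.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "Garlic stored in oil at room temperature is"
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits      # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()      # pick the single highest-scoring token number
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Nothing in that loop checks whether the continuation is true or safe; it just extends the sequence with whichever number scores highest.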