"Hallucination" has always seemed like a misnomer to me anyway considering LLMs don't know anything. They just impressively get things right enough to be useful assuming you audit the output.
If anything, I think all of their output should be called a hallucination.
On the other hand, once you're operating under the model of not knowing if anything knows anything, there's really no point in posting about it here, is there?
I took a semester-long 500-level class back in college on the theory of knowledge. Knowledge is not easy to define; the entire branch of epistemology in philosophy deals with that question.
... To that end, I'd love to be able to revisit my classes from back then (computer science, philosophy (two classes from a double major), and a smattering of linguistics) in light of today's technology.
Others have suggested "bullshit". A bullshitter does not care (and may not know) whether what they say is truth or fiction. A bullshitter's goal is just to be listened to and seem convincing.
> "Hallucination" has always seemed like a misnomer to me anyway considering LLMs don't know anything. They just impressively get things right enough to be useful assuming you audit the output.
If you pick up a dictionary and look up the definition of "hallucination", you'll see something along the lines of "something that you see, hear, feel or smell that does not exist".
Your own framing arguably reinforces the very definition of hallucination. Models don't always get things right. Why? Because their output sometimes conflicts with the content covered by their corpus: they produce things that don't exist or were never referenced in it, and that outright contradict factual content.
> If anything, I think all of their output should be called a hallucination.
No. Only the outputs that conflict with reality, namely with factual information.