I think Bruckman has the right of it. Wikipedia already has to deal with a lot of automated and subtle vandalism, so I don't think the relative cheapness of contextually sensible text from ChatGPT represents the same kind of game changer that it may for other sites.
That seems overly emotional. To lie requires intent, right? It implies the will to deceive. We don't accuse people of deception when they are simply wrong, after all, so it feels inappropriate to accuse a machine that is predicting the next most likely word of lying.
Yes. Lying requires being able to discern truth from falsity. But LLMs don't know what's true even when they say true things. That's why "hallucinate" is a better word ... better but not perfect because for an LLM it's all hallucinations all the time. Some of those hallucinations (most of them) happen to turn out to be true.
I think this is more accurate, for non-emotional reasons. The GPT models have actually been trained to lie, in the sense that they will tell you something convincing not just about the topic, but also about how they are getting the information, e.g. “I checked my sources and found…”. You can also see jailbreaks where the model has been trained to reverse its stated beliefs on sensitive topics. This is a consistent deceptive bias rather than random variance.
Imagine if Apple had a generative image model that produced fake metadata and watermarks showing the data came from a real iPhone, and then signed it with their key. That would show obvious intent to deceive rather than an attempt to accurately represent reality.
Well, in some cases, negligence is sufficient for a lie or deception. E.g. forgetting to properly cite results is plagiarism, whether you intended to lie or not.
Maybe it makes sense to hold the owner of the hardware that produces these <hallucinations/lies/failures/deceptions/confabulations> accountable for them. In many places with freedom of speech, this will probably be difficult.
Inconsistent databases is what I'd call these chat toys. We made software to get accurate results, but now they want to sell us the idea that a system that consistently produces inaccurate results is good … because it “hallucinates” just like hoomans. As if that's what we needed from machines in the first place: to mimic our limitations.
There's neuroscience research on hallucinations, for example https://www.math.utah.edu/~bresslof/publications/01-3.pdf "What Geometric Visual Hallucinations Tell Us about the Visual Cortex". The word "conscious" does not appear.
> The results are sensitive to the detailed specification of the lateral connectivity and suggest that the cortical mechanisms that generate geometric visual hallucinations are closely related to those used to process edges, contours, surfaces, and textures.
Now, this example is only one theory, but it shows that serious people can talk about hallucination in these terms and expect to be understood, without having to argue past reviewer #2 or letting this whole question vitiate their discussion.
Also, btw, we don't fully know that GPT-4 is not conscious, if you really think it matters. When the question comes up, people who are certain it's not usually point to the fact that it's not running continuously, as if that had any bearing on the question.
This really seems like bothsidesism. Reading the notes, Wikipedia is not nearly as split on the subject as Vice makes it appear.