I find the funniest aspect of hallucinations and the like to be that we designed and trained these models based on our knowledge of biological brains and learning.
We expect these models to both act like a biological brain and yet be absolutely perfect (i.e., not act like a biological brain at all).
Same thing for image recognition and pretty much everything else. Machine: "I think that kinda sorta looks like a cat." Some meatbag who says "you too" when the server says "have a good meal": "ha ha, dumb robot, that's a dog."