> Great! Now show me a system that can verify that list for accuracy as well. Not to be flippant, but this is the complaint. You can't approach outputs uncritically.
In general you can't, but surely it's not that big a deal if ChatGPT offers an inaccurate summary of a movie you're about to use to kill time on a flight? I suppose accuracy becomes important if, e.g., you're relying on it to tell you whether a movie is appropriate for children. But if you're just asking whether a movie is worth watching, that question doesn't have an objective, factual answer anyway, so a hallucinated answer is probably about as useful as one from a reviewer you've never heard of.
> If I invested money into a film, I would want its representation in the world to reflect what the movie is about at the very least.
Sure, but that's the filmmaker's interest. As someone sitting on a plane trying to decide whether to watch a movie, I care about my interest, not that of the person who made it. I'm not particularly arguing for the use of ChatGPT here (I wouldn't use it), just that the risks it usually poses are fairly minimal in this case.
You're forgetting the information hazard: five years from now someone mentions a movie, you say "oh, I didn't want to watch that because of the car chase," and everyone looks at you funny because it's a film set in the 1700s about a carriage driver.