Not one single solitary soul has ever claimed that misinformation didn't exist before AI, so it's not clear who you're arguing with. People are rightly concerned about the scale of misinformation that AI is unlocking.
What I'm responding to is the strong tendency to discount our very long history of dealing with factually incorrect information and of ascertaining truth from sources both dubious and trustworthy.
Entire institutions exist to handle these very real problems, which together still dwarf the problem of hallucinations in GPT.
From a social perspective, non-GPT falsehoods are even more insidious, because we are inclined to trust and believe those we like and those who are like us.
Again, people are in the habit of discounting just how often we are wrong in our everyday lives. Hallucinations therefore appear more singular a problem than they actually are.