One person's "plausibility" is another person's "barely reasoned bullshit". I think you're being generous, because LLMs explicitly don't deal in facts; they deal in making stuff up that is vaguely reminiscent of fact. Only a few companies are even trying to make reasoning (as in axioms-cum-deductions, i.e., logic per se) a core part of their models, and they're really struggling to hand-engineer the topology and methodology necessary for that to work even roughly as a facsimile of technical reasoning.
I’m not really being generous. I merely think that if I’m gonna condemn something as high-profile snake oil for the tragically gullible, it’s helpful to have a solid basis for doing so. And it’s also important to allow oneself to be wrong about something, however remote the possibility may currently seem, and preferably without having to revise one’s principles to recognise it.