The underlying problem is politicisation. You'd assume that if one political party wants something and another party wants the opposite, you could find a set of impartial experts who would provide hard data and settle the question. In the real world, there are two sets of experts, holding opposite views and providing contradictory data. Everybody makes a big noise, and eventually nobody knows what happened; they just know the question got muddied and they aren't so sure about anything anymore.
The problem underlying politicisation is confidence. Science isn't binary: it's a set of circles of decreasing confidence, spreading out from a core of propositions we're very confident about - more or less what you'd learn on an undergrad physics course - to a set of increasingly tentative hypotheses.
A lot of arguments about science are really arguments about confidence. E.g. most climate change scientists are fairly sure about their models, but the lack of absolute certainty makes it possible for deniers to cherry-pick a tiny collection of outlier scientists who will argue in public that it's all nonsense.
Policy makers and the media are some combination of corrupt and clueless, so they're happy to go with the false equivalence this creates.
One way to depoliticise science would be to have an international science foundation, funded independently of any individual government.
Of course there would be squeals of disapproval from vested interests, but that would simply highlight the problem - the vested interests don't want independent criticism or oversight. Their entire MO is based on regulatory capture, which gives them the freedom (for themselves only) to operate as they want, with no personal or financial consequences.
Scientific accountability would set them on the path to democratic accountability, which is the last thing they'll accept.
> A lot of arguments about science are really arguments about confidence. E.g. most climate change scientists are fairly sure about their models, but the lack of absolute certainty makes it possible for deniers to cherry-pick a tiny collection of outlier scientists who will argue in public that it's all nonsense.
I think scale/proportion is also a problem. Humans seem to place a lot of value on narratives/stories, but we aren't so good with quantities (e.g. https://en.wikipedia.org/wiki/Conjunction_fallacy ). Pretty much everything (economics, climate, etc.) has factors pushing it in different directions, so we can always find a counterargument to any position (e.g. we can rebut climate change by pointing to solar cycles, CO2 causing extra plant growth, etc.); that's fine, but some factors are overwhelmingly more important than others, yet we cling to these stories/narratives and give them more equal weighting than they deserve.
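The conjunction fallacy above boils down to one inequality: a conjunction can never be more probable than either of its parts, even when the combined story feels more plausible. A toy sketch, using made-up numbers in the style of the classic "Linda" example (none of these probabilities are real data):

```python
# Toy illustration of the conjunction fallacy: for any events A and B,
# P(A and B) <= P(A), yet a vivid combined story often *feels* likelier.
# All probabilities below are invented purely for illustration.

p_teller = 0.05                  # P(Linda is a bank teller) - assumed
p_feminist_given_teller = 0.60   # P(feminist | bank teller) - assumed

# Joint probability of the conjunction:
p_both = p_teller * p_feminist_given_teller

# The conjunction can never beat the single event:
assert p_both <= p_teller

print(f"P(teller) = {p_teller}, P(teller and feminist) = {p_both:.3f}")
```

The assertion holds for any choice of numbers, which is the point: the story adds detail, and every added detail can only lower (or at best preserve) the probability.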
As a concrete example, a family member used to leave their lights on overnight, claiming that "they use more energy than normal when they're first switched on". Whilst true, the saving is cancelled out within seconds (e.g. https://www.energy.gov/energysaver/when-turn-your-lights ).
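The break-even arithmetic is easy to sketch. The wattages and surge figures below are rough assumptions for a small household bulb, not measurements:

```python
# Rough break-even calculation for "switching lights off wastes energy".
# All numbers are illustrative assumptions, not measured values.

running_watts = 15.0    # steady-state power of the bulb (assumed)
surge_watts = 30.0      # extra power drawn during startup (assumed)
surge_seconds = 0.1     # duration of the startup surge (assumed)

# Extra energy consumed by one startup, in joules (W * s = J):
startup_penalty_j = surge_watts * surge_seconds

# Every second the bulb is off saves `running_watts` joules, so the
# time it must stay off for the saving to exceed the penalty is:
break_even_s = startup_penalty_j / running_watts

print(f"Break-even after {break_even_s:.2f} seconds off")
```

Even if you multiply the assumed surge by an order of magnitude, the break-even time stays in the seconds range, nowhere near a whole night.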
There is also the issue of getting what you measure, since humans game systems to their benefit. Look at standardized tests - they guided education from an early age, as opposed to actual educational outcomes. I remember vividly being in elementary school, where we had multiple workbooks with pages of analogies and the occasional ambiguous answer. There wasn't any real learning, just a bunch of drilling that depended on existing knowledge. Then the SAT dropped analogies for a writing section and they practically disappeared off the face of the earth. They showed up four times a year at most - literally - usually because the quarterly state tests had one question with them.