Science needs reason to be trusted

by Judith Curry
Two excellent articles about science, facts, and post-factualism.

Sabine Hossenfelder just published a superb essay in Nature Physics, entitled Science needs reason to be trusted.  Subtitle: That we now live in the grip of post-factualism would seem naturally repellent to most physicists. But in championing theory without demanding empirical evidence, we’re guilty of ignoring the facts ourselves.
Most unfortunately, this essay is behind a paywall. [read here via readcube]. Here are some excerpts:
I’m afraid the public has good reasons to mistrust scientists and — sad but true — I myself find it increasingly hard to trust them too.
The reproducibility crisis is a problem, but at least it’s a problem that has been recognized and is being addressed. From where I sit, however, in a research area that can be roughly summarized as the foundations of physics, I have a front-row seat to a much bigger problem.
But we have a crisis of an entirely different sort: we produce a huge amount of new theories and yet none of them is ever empirically confirmed. Let’s call it the overproduction crisis. We use the approved methods of our field, see they don’t work, but don’t draw consequences. Like a fly hitting the window pane, we repeat ourselves over and over again, expecting different results. But my issue isn’t the snail’s pace of progress per se, it’s that the current practices in theory development signal a failure of the scientific method.
In particle physics, jumping on a hot topic in the hope of collecting citations is so common it even has a name: ‘ambulance chasing’, referring to the practice of lawyers following ambulances in the hope of finding new clients. What worries me is that this flood of papers is a stunning demonstration for how useless the current quality criteria are. 
Current observational data can’t distinguish the different models. And even if new data comes in, there will still be infinitely many models left to write papers about. The likelihood that any of these models describes reality is vanishingly small — it’s roulette on an infinitely large table. But according to current quality criteria, that’s first-rate science.  The accepted practice is instead to adjust the model so that it continues to agree with the lack of empirical support.
But in the absence of good quality measures, the ideas that catch on are the most fruitful ones, even though there is no evidence that a theory’s fruitfulness correlates with its correctness.
The underlying problem is that science, like any other collective human activity, is subject to social dynamics. Unlike most other collective human activities, however, scientists should acknowledge threats to their objective judgment and find ways to avoid them. But this doesn’t happen.
If scientists are selectively exposed to information from likeminded peers, if they are punished for not attracting enough attention, if they face hurdles to leave a research area when its promise declines, they can’t be counted on to be objective. That’s the situation we’re in today — and we have accepted it.
To me, our inability — or maybe even unwillingness — to limit the influence of social and cognitive biases in scientific communities is a serious systemic failure. We don’t protect the values of our discipline. The only response I see are attempts to blame others: funding agencies, higher education administrators or policy makers. But none of these parties is interested in wasting money on useless research. They rely on us, the scientists, to tell them how science works.
Last year, the Brexit campaign and the US presidential campaign showed us what post-factual politics looks like — a development that must be utterly disturbing for anyone with a background in science. Ignoring facts is futile. But we too are ignoring the facts: there’s no evidence that intelligence provides immunity against social and cognitive biases, so their presence must be our default assumption. And just as we have guidelines to avoid systematic bias in data analysis, we should also have guidelines to avoid systematic bias stemming from the way human brains process information.
Why hasn’t it been taken seriously so far? Because scientists trust science. It’s always worked, and most scientists are optimistic it will continue to work — without requiring their action. But this isn’t the eighteenth century. Scientific communities have changed dramatically in the past few decades. There are more of us, we collaborate more, and we share more information than ever before. All this amplifies social feedback, and it’s naive to believe that when our communities change we don’t have to update our methods too.
How can we blame the public for being misinformed because they live in social bubbles if we’re guilty of it too?
JC note:  I’ve not previously encountered the writings of Sabine Hossenfelder.  I’m now following her on twitter @skdh and also her blog Back Reaction.  Of particular relevance, see her recent article Academia is fucked-up. So why isn’t anyone doing anything about it?
Facts and reason
I also spotted this article in The Guardian from a few months ago by Mark Carnall entitled Facts are the reason science is losing during the current war on reason. Excerpts:
A controversial paper, When science becomes too easy: Science popularization inclines laypeople to underrate their dependence on experts, published at the end of last year in the journal Public Understanding of Science, suggests that it’s the rise of science communication (or scicomm) that could be the cause of rising distrust in experts. Use of the word laypeople aside, could it be that non-scientists, emboldened by easy-to-digest science stories in the media, now have the confidence to reject what scientists say, or go with their gut feeling instead? As well as misunderstanding, there’s also deliberate pig-headed ignorance for furthering political agendas to contend with too.
This is the disadvantage for science communication. Do you listen to the scientific analysis – which is full of probably, maybe, possibly, roughly, estimated, hypothesised – or do you just agree with someone who sounds convincing and shouts down/shuts down dissenting opinions? Media coverage and bad science communication sometimes gives the impression that scientists are always changing their minds on climate models, whether chocolate or wine will kill or cure you or whether Pluto is a planet or not. This wrongly creates the impression that scientists are a pretty fickle lot.
Despite the reputation for being about facts, there are very few hard facts in nature or science’s understanding of it.
By not flagging up what we don’t know here, we create a false sense of certainty that’s potentially later undermined by a new analysis, discovery or alternative explanation.
Conversely, does flagging up the limits of our knowledge, as happened with modelling and predicting climate change, undermine the confidence in the scientific method even with unprecedented consensus on whether or not climate change exists?
You can boil the answer down even more to: we don’t know exactly. Science rarely deals with absolutes, but knowing this comes from scientific training. But not knowing exactly is not the same as anyone’s guess is good enough. What we currently know could be overturned tomorrow with discoveries or with the use of [new] techniques.
More often than not, the “facts” of science are actually a series of ever-increasing likelihoods. This is why we train students to question every assumption, fact or proposition in science. Check where it came from, go back to the source and critically evaluate the author, the limitations on methodologies and the assumptions made. 
It’s a skill that we all need to keep practising now that “alternative facts” are muddying the understanding of what “scientific” facts are in the first place.
JC reflections
Both of these articles echo themes discussed in my recent Congressional testimony, and they have enriched my thinking about these issues.
I find this statement by Sabine Hossenfelder to be particularly profound:
Why hasn’t it been taken seriously so far? Because scientists trust science. It’s always worked, and most scientists are optimistic it will continue to work — without requiring their action. But this isn’t the eighteenth century. Scientific communities have changed dramatically in the past few decades. There are more of us, we collaborate more, and we share more information than ever before. All this amplifies social feedback, and it’s naive to believe that when our communities change we don’t have to update our methods too.
My testimony emphasized the perils of groupthink and cognitive biases, and I argued that individual scientists need to fight against this, and that it is particularly difficult when the institutions that support science are rewarding those who are biased.  Sabine argues that it’s not just about the individuals and the institutions, but that the problem derives from the social nature of the scientific method in the 21st century.
The idea of ‘update our methods’ is very intriguing.  Perhaps Lamar Smith needs a new hearing that is actually on challenges to the scientific method in the 21st century (NOT focused on climate change).
Carnall’s article about scicomm is also very insightful.  The focus on attempting to ‘increase science literacy’ and close the public’s ‘knowledge deficit’ amounts to an attempt to indoctrinate people with facts.  Carnall argues that this is leading to a distrust of science, and I agree.  Educating the public (not to mention students!) about the scientific process, reasoning and critical thinking would be a much better approach.  However, such an approach is of no interest to those who are using ‘facts’ about science to drive a political agenda.
I continue to think that the issues I raised in my testimony go to the heart of the problem, as do these two articles.  I realize that my testimony may have been too esoteric for that audience, but these issues do need to be confronted, both intellectually and in the context of science policy.
