What is the measure of scientific ‘success’?

by Judith Curry
Science has been extraordinarily successful at taking the measure of the world, but paradoxically the world finds it extraordinarily difficult to take the measure of science — or any type of scholarship for that matter. – Stephen Curry

The problem
The Higher Education Funding Council for England (HEFCE) is reviewing the idea of using metrics (such as citation counts) in research assessment.
At occamstypewriter, Stephen Curry writes:
The REF has convulsed the whole university sector — driving the transfer market in star researchers who might score extra performance points and the hiring of additional administrative staff to manage the process — because the judgements it delivers will have a huge effect on funding allocations by HEFCE for at least the next 5 years.
This issue of metrics has a stark realization at King's College London, which is firing 120 scientists.  The main criterion for the firings appears to be the amount of grant funding.
Using metrics to assess academic researchers is hardly new.  In my experience with university promotion and tenure, the number of publications, the number of citations (and the H-index), and research funding dollars all receive heavy consideration.  It is my impression that the more prestigious institutions pay less attention to such metrics and rely more on peer review (both internal and external).  In my experience on the AMS Awards Committee and the AGU Fellows Selection Committee, the number of publications and the H-index figure prominently.
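Since the H-index comes up repeatedly below, here is a minimal sketch of how it is computed: it is the largest h such that a researcher has at least h papers with at least h citations each. The citation counts in the example are purely illustrative.

```python
def h_index(citations):
    """Return the H-index: the largest h such that the author has
    at least h papers each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Illustrative example: six papers with these citation counts give an H-index of 4.
print(h_index([25, 8, 5, 4, 3, 0]))  # -> 4
```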
What are the responses of scientists to this?  Well, most just play the game in a way that ensures they maintain job security.  A few interesting perspectives on all this have emerged in recent weeks:
The most thought-provoking essay is from The Disorder of Things, excerpts:
Whilst metrics may capture some partial dimensions of research ‘impact’, they cannot be used as any kind of proxy for measuring research ‘quality’. We suggest that it is imperative to disaggregate ‘research quality’ from ‘research impact’ – not only do they not belong together logically, but running them together itself creates fundamental problems which change the purposes of academic research.
Why do academics cite each others’ work? This is a core question to answer if we want to know what citation count metrics actually tell us, and what they can be used for. Possible answers to this question include:

  • It exists in the field or sub-field we are writing about
  • It is already well-known/notorious in our field or sub-field so is a useful reader shorthand
  • It came up in the journal we are trying to publish in, so we can link our work to it
  • It says something we agree with/that was correct
  • It says something we disagree with/that was incorrect
  • It says something outrageous or provocative
  • It offered a specifically useful case or insight
  • It offered a really unhelpful/misleading case or insight

[Citations] cannot properly differentiate between ‘positive’ impact or ‘negative’ impact within a field or sub-discipline – i.e. work that ‘advances’ a debate, or work that makes it more simplistic and polarised.  Indeed, the overall pressure it creates is simply to get cited at all costs. This might well lead to work becoming more provocative and outrageous for the sake of citation, rather than making more disciplined and rigorous contributions to knowledge.
On ‘originality’ – work may be cited because it is original, but it may also be cited because it is a more famous academic making the same point. Textbooks and edited collections are widely cited because they are accessible – not because they are original. Moreover, highly original work may not be cited at all because it has been published in a lower-profile venue, or because it radically differs from the intellectual trajectories of its sub-field. There is absolutely no logical or necessary connection between originality and being cited.
Using citation counts will systematically under-count the ‘significance’ of work directed at more specialised sub-fields or technical debates, or that adopts more dissident positions. [If] we understand ‘significance’ as ‘the development of the intellectual agenda of the field’, then citation counts are not an appropriate proxy.
To the extent that more ‘rigorous’ pieces may be more theoretically and methodologically sophisticated – and thus less accessible to ‘lay’ academic and non-academic audiences, there are reasons to believe that the rigour of a piece might well be inversely related to its citation count.
An article in Times Higher Education reports:
Academics’ desire to be judged on the basis of their publication in high-impact journals indicates their lack of faith in peer review panels’ ability to distinguish genuine scientific excellence, a report suggests.
Specifically with regard to using research funding as a metric:
Philip Moriarty has a post How Universities Incentivise Academics to Short-Change the Public.  Excerpts:
What’s particularly galling, however, is that the annual grant income metric is not normalised to any measure of productivity or quality. So it says nothing about value for money. Time and time again we’re told by the Coalition that in these times of economic austerity, the public sector will have to “do more with less”. That we must maximise efficiency. And yet academics are driven by university management to maximise the amount of funding they can secure from the public pot.
Cost effectiveness doesn’t enter the equation. Literally.
Consider this. A lecturer recently appointed to a UK physics department, Dr. Frugal, secures a modest grant from the Engineering and Physical Sciences Research Council for, say, £200k. She works hard for three years with a sole PhD student and publishes two outstanding papers that revolutionise her field.
Her colleague down the corridor, Prof. Cash, secures a grant for £4M and publishes two solid, but rather less outstanding, papers.
Who is the more cost-effective? Which research project represents better value for money for the taxpayer?
…and which academic will be under greater pressure from management to secure more research income from the public purse?
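To make the cost-effectiveness point concrete, here is a minimal sketch of the arithmetic behind Moriarty's hypothetical example, contrasting a ranking by grant income (which the metric rewards) with a ranking by cost per paper (which it ignores).

```python
# Hypothetical figures taken from Moriarty's example above.
researchers = {
    "Dr. Frugal": {"grant_gbp": 200_000, "papers": 2},
    "Prof. Cash": {"grant_gbp": 4_000_000, "papers": 2},
}

# Ranking by annual grant income, as the metric does:
top_by_income = max(researchers, key=lambda r: researchers[r]["grant_gbp"])

# Ranking by cost per paper, which the metric never considers:
top_by_value = min(
    researchers, key=lambda r: researchers[r]["grant_gbp"] / researchers[r]["papers"]
)

print("Rewarded by the grant-income metric:", top_by_income)  # Prof. Cash (£2M per paper)
print("Better value for money per paper:   ", top_by_value)   # Dr. Frugal (£100k per paper)
```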
And finally, a letter to the editor of PNAS entitled Systemic addiction to research funding.
Trending
Daniel McCabe has an essay on The Slow Science Movement.  Excerpts:
Today’s research environment pushes for the quick fix, but successful science needs time to think.
There is a growing school of thought emerging out of Europe that urges university-based scientists to take careful stock of their lives – and to try to slow things down in their work.
According to the proponents of the budding “slow science” movement, the increasingly frenetic pace of academic life is threatening the quality of the science that researchers produce. As harried scientists struggle to churn out enough papers to impress funding agencies, and as they spend more and more of their time filling out forms and chasing after increasingly elusive grant money, they aren’t spending nearly enough time mulling over the big scientific questions that remain to be solved in their fields.
Among those who have sounded the alarm is University of Nice anthropologist Joël Candau. “Fast science, like fast food, favours quantity over quality,” he wrote in an appeal he sent off to several colleagues in 2010. “Because the appraisers and other experts are always in a hurry too, our CVs are often solely evaluated by their length: how many publications, how many presentations, how many projects?”
From Dylan's Desk:  Watch this multi-billion-dollar industry evaporate overnight.   Excerpts:
Imagine an industry where a few companies make billions of dollars by exerting strict control over valuable information — while paying the people who produce that information nothing at all.  That’s the state of academic, scientific publishing today. And it’s about to be blown wide open by much more open, Internet-based publishers.
Indeed, Academia.edu, PLOS, and Arxiv.org are doing something remarkable: They’re mounting a full-frontal assault on a multi-billion-dollar industry and replacing it with something that makes much, much less money. They’re far more efficient and fairer, and they vastly increase the openness and availability of research information. I believe this will be nothing but good for the human race in the long run. But I’m sure the executives of Elsevier, Springer, and others are weeping into their lattes as they watch this industry evaporate.  Maybe they can get together with newspaper executives to commiserate.
Dorothy Bishop has a post Blogging as post-publication peer review: reasonable or unfair?  Excerpts:
Finally, a comment on whether it is fair to comment on a research article in a blog, rather than going through the usual procedure of submitting an article to a journal and having it peer-reviewed prior to publication. The authors’ reactions: “The items you are presenting do not represent the proper way to engage in a scientific discourse”. 
I could not disagree more. [W]hat has come to be known as ‘post-publication peer review’ via the blogosphere can allow for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing. In addition, it brings the debate to the attention of a much wider readership.  I don’t enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative. 
From occamstypewriter on altmetrics:
One thing that has changed of course is the rise of alternative metrics — or altmetrics — which are typically based on the interest generated by publications on various forms of social media, including Twitter, blogs and reference management sites such as Mendeley. They have the advantage of focusing minds at the level of the individual article, which avoids the well known problems of judging research quality on the basis of journal-level metrics such as the impact factor.
Social media may be useful for capturing the buzz around particular papers and thus something of their reach beyond the research community. There is potential value in being able to measure and exploit these signals, not least to help researchers discover papers that they might not otherwise come across — to provide more efficient filters as the authors of the altmetrics manifesto would have it. But it would be quite a leap from where we are now to feed these alternative measures of interest or usage into the process of research evaluation. Part of the difficulty lies in the fact that most of the value of the research literature is still extracted within the confines of the research community. That may be slowly changing with the rise of open access, which is undoubtedly a positive move that needs to be closely monitored, but at the same time — and it hurts me to say it — we should not get over-excited by tweets and blogs.
JC reflections
Research universities in the 21st century are in a transition period, as the fundamental value proposition of the research university is being questioned in the face of funding pressures. It's time to start re-imagining the 21st-century research university.  More on this topic will be forthcoming.
My main reflection on metrics is that you get what you count.  If you count numbers, then numbers are what you will get.  If you want originality, significance, robustness, then counting citations, dollars, and numbers of publications won’t help. If you want impact beyond the ivory tower, such as research that stimulates or supports industry or informs policy making, then counting won’t help either.
In looking back at my own history of funding, publication productivity, and citations, here is what I see.  My time at the University of Colorado (mid-1990s to 2002) stands out as the period when I brought in large research budgets ($1M+ per year) and cranked out a large number of papers, only a few of which I regard as important.  I was definitely in ‘no time to think’ mode, spending my time writing grant proposals and editing graduate student manuscripts.   With regard to citations, my papers with the largest number of citations are the 2005 hurricane paper and a review article on Arctic clouds. The papers that I truly regard as scientifically significant have relatively few citations, although the citations on these fundamental papers keep trickling in.
My own rather extended postdoc period (4 years) allowed me lots of time to think; I despair for the current generation of young scientists who are under enormous pressure to crank out the publications and to start bringing in research funds so they can be competitive for a faculty position.
I suspect that the dynamics of all this will change, fueled largely by the internet.  So does anyone wonder why academic climate researchers crank out lots of papers, try to get them published in Nature, Science, or PNAS, and don't worry too much about whether their papers will stand the test of time?  Scientists are following the reward structure set by their employers and by the professional societies that dish out awards.
 
 
 Filed under: Sociology of science
