Research that goes on to be influential is more likely to cite important papers than is research that does little to advance science, a large study1 of citation patterns suggests.
The report, which its authors hope will shape funding policies, queries the value of papers with a low or medium impact, which it says are less likely to be fodder for the breakthroughs of others.
The authors analysed every paper published in 2003 in the life, health, physical and social sciences — a total of nearly 870,000 articles, indexed by publishing company Elsevier's Scopus database. In all fields, the papers which went on to be most highly cited were more likely to reference previous highly cited work than were less popular papers. The findings are published in the journal PLoS ONE.1
"Isaac Newton seems to have been right when he said that he had been able to 'see further only by standing on the shoulders of giants'," says Lutz Bornmann, a bibliometrics researcher at the Max Planck Society in Munich, Germany, who led the work.
Giants, icebergs and chance
The study tests what the authors say are three theories for how research advances. The first, extrapolated from Newton's aphorism, states that top-level research is directly connected to former top-level research. The second, the 'iceberg' model, says that top-level research cannot be successful without a mass of medium-level research on which to rest. The third theory states that scientific advancement depends simply on chance.
Bornmann's team argues that citations are a good measurable proxy for scientific impact or influence. The researchers divided the papers that they examined into six classes, depending on the number of citations they received, and found that the top-class papers in each subject — the 1% that went on to receive the most citations — were much more likely to reference other highly cited papers than were their medium and lower cited counterparts. "With every step down to lower cited papers, the references to highly cited papers diminishes," says Bornmann.
In the life sciences, which showed the strongest connection between former and current top-level research, 52% of references in the top-cited articles were themselves top-cited articles, compared with an average of 25% in all the life-sciences papers taken together.
The comparable figure for health-science papers was 47% of references in top-cited articles, against a background of 21% from all articles; for physical sciences, 31% against a background of 14%; and for social sciences, which showed the weakest connection, 16% against a background of 8%.
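The comparison behind these figures can be illustrated with a short sketch. All paper IDs, citation counts and reference lists below are invented for illustration (the actual study classified nearly 870,000 Scopus-indexed papers into six citation classes and used the top 1% as its highest class; with a toy set of eight papers, the top 25% is used instead):

```python
# Toy illustration of the study's comparison: do the most-cited papers
# reference other most-cited papers more often than papers do on average?
# All data here are made up; none of it comes from the actual study.

# citations: paper id -> number of times the paper was cited
citations = {
    "A": 500, "B": 450, "C": 30, "D": 25,
    "E": 10, "F": 8, "G": 3, "H": 1,
}

# references: paper id -> list of papers it cites
references = {
    "A": ["B", "C"],
    "B": ["A", "D", "E"],
    "C": ["E", "F"],
    "D": ["G", "H"],
}

# Define the "top-cited" set as the most-cited fraction of papers
# (top 25% of this toy set, standing in for the study's top 1%).
n_top = max(1, len(citations) // 4)
ranked = sorted(citations, key=citations.get, reverse=True)
top_cited = set(ranked[:n_top])

def top_cited_share(paper_ids):
    """Fraction of outgoing references that point at top-cited papers."""
    refs = [r for p in paper_ids for r in references.get(p, [])]
    if not refs:
        return 0.0
    return sum(r in top_cited for r in refs) / len(refs)

share_top = top_cited_share(top_cited)    # references made by top papers
share_all = top_cited_share(references)   # references made by all papers
print(f"top papers: {share_top:.0%} of references are top-cited")
print(f"all papers: {share_all:.0%} of references are top-cited")
```

In this toy data, as in the study's results, the top-cited papers devote a larger share of their references to other top-cited work than the field does on average.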
There are "clear differences" between disciplines, says Bornmann. Medium-level research seems to have a greater role in advancing physical and social sciences than it does in life and health sciences. This might be because the former topics have fewer "commonly shared" research goals than do life or health sciences, he speculates.
Funds for the best
Bornmann says that, in cash-strapped times, the analysis "shows that concentrating funds on top-level researchers is a good strategy", and it makes less sense to distribute funds evenly amongst scholars. He praises programmes that fund individual investigators on the basis of their publication track records rather than their proposals.
The study has limitations, as Bornmann admits. Citations are only one way of measuring scientific impact, and do not necessarily indicate quality. "Bibliometrics on their own can be misleading, and they must be combined with more-informed studies for a useful evaluation," says Jonathan Adams, a bibliometrics expert based in London, who is director of research evaluation at information company Thomson Reuters.
Furthermore, the results do not necessarily conflict with the idea that a mass of medium-level research is needed to facilitate top-class research. Despite top-level scholars' leanings towards citing high-impact work, they also reference a sizable body of medium-impact research. Bornmann stresses that the study does not mean medium-impact research is unimportant; it is just "less important", he says.
These caveats mean that funders should be wary of taking the study too seriously and funding "citation giants" to the exclusion of others, cautions Stephen Curry, a structural biologist at Imperial College London. He points out that early-career researchers would fare badly under such a system, and warns that many middle-ranking scientists could become disillusioned and leave the system altogether — taking away not only their contributions to science, but also their potentially high-class teaching.
References
- Bornmann, L., de Moya Anegón, F. & Leydesdorff, L. Do scientific advancements lean on the shoulders of giants? A bibliometric investigation of the Ortega hypothesis. PLoS ONE 5, e13327 (2010); doi:10.1371/journal.pone.0013327
The analysis presented here gives the impression that the successful scientist is a conformist, working on subjects that a large body of other scientists work on and not deviating from accepted paradigms, so that his or her research resonates with others and thereby becomes citable. Louis Pasteur and Gregor Mendel might have been very unsuccessful in such a context.
How will significant paradigm shifts emerge if one strives to be citable? One of the traits of paradigm-shifting work is that it is at first rejected or ignored by the scientific community. Also, what is the time lag for citation? How rapidly do novel ideas influence other work? And what about "orphan" scientific questions, that is, questions whose interest or importance only a few scientists understand? And furthermore, isn't there scientific work that contributes mostly to changes in the practices of industries or organisations that do not publish scientific papers, but instead take actions that affect society and the economy?
Overall, this citation analysis gives the impression that scientific programs should be defined in terms of their "marketability".
C.E. Morris
Avignon, France