04 August 2012

Citation Manipulation and Academic Publishing

Retraction Watch has a fascinating chronology of a recent case of citation manipulation:
In what appears to be a first, two papers have been retracted for including citations designed to help another journal improve its impact factor rankings. The articles in The Scientific World Journal cited papers in Cell Transplantation, which in turn appears to have cited to a high degree other journals with shared board members.
A little academic navel-gazing seems worthwhile here. What real value do impact factor services actually add to science and medicine? One would think common sense alone would be enough to make every college administrator advise their libraries not to subscribe to Thomson Reuters' Journal Citation Reports, and similarly advise their faculty to ignore impact factors.
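For reference, the two-year impact factor is nothing more than a ratio, which makes its susceptibility to citation-stacking easy to see. A minimal sketch in Python (the journal numbers below are invented for illustration):

```python
def impact_factor(citations, citable_items):
    """Two-year journal impact factor: citations received this year
    to items published in the previous two years, divided by the
    number of citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 200 citations in 2012 to its 2010-2011 articles,
# of which there were 100 citable items.
print(impact_factor(200, 100))  # 2.0

# A handful of arranged citations moves the needle noticeably:
print(impact_factor(200 + 50, 100))  # 2.5
```

Because the denominator is small for most journals, even a modest number of coordinated citations can shift the ranking, which is precisely the incentive the retractions above illustrate.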

Trying to get published in journals because they have good impact factors is rather like seeking blue ribbons for raising prize chickens. Sure, it might be nice to get a ribbon, but at the end of the day most people know that a chicken is a chicken. And the chicken doesn't know, and it doesn't care.

The conceit of such 'metricadolatry' is that the best science will be published in the best journals. Of course, this can get things precisely backwards, because the existence of the metric introduces perverse incentives. Metrics ensure, for example, that the most faddish science will always be more attractive to journals than less fashionable topics.

The implication is that people who have pursued excellence in research on topics that are highly important and useful, but not especially attention-grabbing, will struggle to publish their work. Meanwhile, attention-grabbing work, even of substandard quality, may find a pathway. And since, as a general rule, people who work on faddish topics like to mention that they work on those topics (a lot!), the general shrillness of the crowd all but assures that they will continue to cite each other (and thus the journals), giving the topic the critical mass that further bolsters those journals' impact factors.

But perhaps the best criticism of impact factors is historical: they are useless for identifying good science as it happens. Consider Gregor Mendel's paper "Experiments on Plant Hybrids," published in 1866. That paper, which established modern genetics, was largely ignored for some 35 years. Its five-year impact would likely have been exactly zero.

While it's rather amusing to contemplate Thomson Reuters endeavoring to assess Mendel's scholarly impact five years after he published, the salient point is that it can take a great deal of time for people to recognize the significance of scientific discoveries. If anything, researchers should want ways to mitigate the obfuscation of important findings. The conceit of impact factors is that they do this. The reality is that they probably bury much of real value in the publishing periphery while aggrandizing the normal science that does little more than add to the noise.
