Thursday, 20 June 2013

The impact factor season

Roger Watson, Editor-in-Chief, JAN

The world of editing seems to be divided into two camps: those who agree with the use of impact factors and those who don't; I get the impression that the latter camp is larger and growing.  However, I defy any editor of a journal on the list produced annually by Thomson Reuters to deny that they take a look to see where their journal sits on it.  I am in the camp which agrees that the impact factor exists (on that, at least, I am 100% correct), that I'll probably make the most of it whatever the score, and that I will rejoice loudly if my journal is at the top of the list.  After all, whatever your view, a great many authors, universities and research funding bodies use it to judge where to send a paper, whom to promote and to whom research funds should be awarded.

In Editor-in-Chief of JAN mode, I am happy to announce an increase in impact factor from approximately 1.4 to approximately 1.5; we do seem rather desperate when we report impact factor to three decimal places.  I am not as happy to report that we have moved down both the 'league tables' in which we appear, the Nursing (Social Science) list and the Nursing (Science) list: from 11th to 14th in the former and from 12th to 16th in the latter.  Whatever we are doing at JAN to improve the citability of our papers must be working; however, whatever some other journals are doing is working even better.
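For readers unfamiliar with how the score behind those league tables is produced, the two-year impact factor is a simple ratio, and the sketch below shows the arithmetic. The numbers used are invented for illustration only; they are not JAN's actual figures.

```python
# A minimal sketch of the standard two-year impact factor calculation.
# All numbers here are hypothetical, purely to show the arithmetic.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to items published in the previous
    two years, divided by the number of citable items published in
    those two years."""
    return citations_this_year / citable_items_prev_two_years

# e.g. 600 citations in 2012 to papers published in 2010 and 2011,
# across 400 citable items published in those two years:
score = impact_factor(600, 400)
print(round(score, 3))  # reported, somewhat desperately, to three decimal places
```

Because the denominator counts only 'citable items' (typically articles and reviews) while the numerator counts citations to anything the journal published, the precise score depends on classification decisions as well as on citations, which is one reason small differences between journals carry little meaning.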

'So we do try to improve the cite-ability of our papers, do we?' I hear people say.  Yes.  Of course we do.  If you publish in a journal we assume that, at the least, your paper will be read, at most cited, and at the very most cited many times.  Lots of citations to an individual paper – even if, on rare occasions, for the wrong reasons – must tell us something about the paper and about the author, and it must be related, to some extent, to the 'quality' of the journal.  Impact factor is one such measure of quality.  There are others.

For the first time there has been a concerted effort to dissuade scientists, and all who have an interest in assessing the quality of research, from using the impact factor, and this is encapsulated in the San Francisco Declaration on Research Assessment.  I am very tempted to say 'good luck with that one lads' as I am sure that impact factor addiction will be with us for many years.  However, it is a sincere and authoritative effort to gather all the arguments against the impact factor into one place and to propose that 'enough is enough'.  Nevertheless, I would be more sympathetic to the cause of the declaration if its authors had suggested a credible – or, indeed, any – alternative to the impact factor.

There are many alternatives, including the journal h-index, the SCImago index and the Eigenfactor.  However, these are all citation dependent.  While – with the exception of the Scopus impact factor – they use citations in different ways from the Thomson Reuters impact factor, they are all highly correlated with it.  Plus ça change!

Finally, that old chestnut about how easy it is for editors to manipulate the impact factor.  Most editors, including me, will joke that they have been trying to do this for years...and failed.  Frankly, the penalty is too great: journals can be removed from the Thomson Reuters list for excessive self-citation; 37 were removed in 2013, and Nursing Science Quarterly has been off the list since 2009.  No editor or publisher wants that.