DEVELOPMENT AND APPLICATION OF NEW JOURNAL IMPACT MEASURES

Thed N. van Leeuwen and Henk F. Moed
(Centre for Science & Technology Studies (CWTS), Leiden University, the Netherlands)
Journal impact measures have been widely used within the scientific community. The best known measure is the Journal Impact Factor (IF), developed by the Institute for Scientific Information (ISI) (Garfield, 1979). For more than two decades this journal impact measure has had a great influence on the scientific community: on librarians (who use IFs in decisions on journal collections), on scientific editors (who use IFs as an indication of the impact of their journal compared to that of competing journals), on individual scientists (who use IFs to decide where to publish in order to generate the highest possible impact), and on science policy makers and evaluators (who use the IF as a tool in research performance assessment). The present authors have critically analyzed the use and validity of the ISI IF in several previous publications (Moed and van Leeuwen, 1995, 1996; Moed et al., 1996, 1998, 1999; van Leeuwen et al., 1999).

Our criticisms have focused on four main points. First, ISI has made errors in calculating the IF values of many journals. In the numerator of the IF, ISI counts citations to all types of documents published in a journal, whereas in the denominator it includes only the numbers of articles, notes, and reviews. However, other document types, such as editorials and letters, are also frequently cited. These types contribute to the IF's numerator but are not included in its denominator, so citations to them are, in a sense, 'for free' (Moed and van Leeuwen, 1996; Moed et al., 1996). A second methodological flaw is that the IF does not take into account the composition of a journal in terms of the percentages of articles, notes, and reviews. As a result, journals containing a high proportion of review articles tend to have higher IFs than other journals. Third, the IF ignores differences in citation characteristics among scientific fields. As a result, biochemical journals tend to have considerably higher IF values than, for instance, mathematical or even pharmacological journals. Finally, a citation window of one to two years may be too short to measure a journal's impact adequately (Moed et al., 1998; van Leeuwen et al., 1999).

We have developed an alternative journal impact indicator that addresses all four of these problems. This indicator, the Journal to Field Impact Score (JFIS), is based only on articles, letters, notes, and reviews (Moed et al., 1998), and on the citations received by these document types. JFIS weights the average impact of the documents in a particular journal against the average impact of all documents in the field(s) to which the journal is assigned. In this normalization procedure, the type of document and the year of publication are taken into account as well.
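To make the counting and normalization issues above concrete, the following minimal sketch (with invented document records; the function names are ours, and this is not the actual CWTS procedure) shows one way a journal-to-field comparison of this kind could be computed: only citable document types enter both the numerator and the denominator, and each document is compared with the field average for its own document type and publication year.

    # Illustrative sketch only: invented data structures, not the CWTS implementation.
    from collections import defaultdict

    CITABLE_TYPES = {"article", "letter", "note", "review"}

    def journal_impact(documents):
        """Mean citations per citable document. Editorials etc. are excluded from
        both numerator and denominator, so no citations are counted 'for free'."""
        citable = [d for d in documents if d["type"] in CITABLE_TYPES]
        return sum(d["cites"] for d in citable) / len(citable) if citable else 0.0

    def field_baselines(field_documents):
        """Average citation rate in the field for each (document type, publication year)."""
        totals, counts = defaultdict(float), defaultdict(int)
        for d in field_documents:
            if d["type"] in CITABLE_TYPES:
                key = (d["type"], d["year"])
                totals[key] += d["cites"]
                counts[key] += 1
        return {key: totals[key] / counts[key] for key in counts}

    def journal_to_field_score(journal_documents, field_documents):
        """Ratio of the citations actually received by the journal's citable documents
        to the citations expected if each document were cited at the field average
        for its type and year. A score of 1.0 means the journal performs exactly at
        the field average, given its own mix of document types and publication years."""
        baselines = field_baselines(field_documents)
        citable = [d for d in journal_documents if d["type"] in CITABLE_TYPES]
        actual = sum(d["cites"] for d in citable)
        expected = sum(baselines.get((d["type"], d["year"]), 0.0) for d in citable)
        return actual / expected if expected else 0.0

For journals assigned to more than one field (such as the journals in Table I that belong to both Neurosciences and Clinical Neurology), the expected rates would be drawn from all fields to which the journal is assigned.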
TABLE I
Listing of top-15 ranking journals, based on ISI IF and JFIS, in Neurosciences

Journal | ISI IF 98 | Ranking ISI IF | JFIS 93-97 | Ranking JFIS | Diff
ANN REV NEUROSCIENCE ** | 22.09 | 1 | 3.82 | 3 | -2
NEURON ** | 15.92 | 2 | 4.82 | 2 | 0
TR NEUROSCIENCES ** | 11.67 | 3 | 2.86 | 5 | -2
CURR OPINION NEUROBIOLOGY ** | 8.51 | 4 | 1.61 | 14 | -10
J NEUROSCIENCE ** | 7.91 | 5 | 2.58 | 6 | -1
BRAIN RES REV ** | 7.73 | 6 | 1.76 | 11 | -5
CEREBRAL CORTEX ** | 7.04 | 7 | 1.68 | 13 | -6
CEREBROVASCULAR & BRAIN METABOLISM REV * | 6.90 | 8 | 0.70 | 76 | -68
FRONT NEUROENDOCRINOLOGY * | 6.42 | 9 | 1.32 | 25 | -16
ANN NEUROLOGY ** | 6.39 | 10 | 2.47 | 7 | 3
MOLECULAR & CELLULAR NEUROSCIENCE * | 5.88 | 11 | 1.46 | 19 | -8
HUMAN BRAIN MAPPING ** | 5.75 | 12 | 2.94 | 4 | 8
PROG NEUROBIOLOGY * | 5.74 | 13 | 0.92 | 52 | -39
BRAIN ** | 5.63 | 14 | 2.09 | 8 | 6
J COGNITIVE NEUROSCIENCE * | 4.99 | 15 | 1.38 | 21 | -6
J CEREBRAL BLOOD FLOW & METABOLISM # | 4.83 | 16 | 1.70 | 12 | 4
J NEUROPATHOLOGY & EXPERIMENTAL NEUROLOGY # | 4.68 | 17 | 1.82 | 10 | 7
NEUROBIOLOGY DISEASE # | 4.25 | 20 | 5.99 | 1 | 19
BRAIN PATHOLOGY # | 3.96 | 22 | 2.06 | 9 | 13
J NEUROPHYSIOLOGY # | 3.36 | 30 | 1.60 | 15 | 15

** In Top-15 rankings of both indicators
* In Top-15 ranking based on IF
# In Top-15 ranking based on JFIS
A JFIS of 1.0 is assigned to journals whose documents have, on average, an impact equal to the subfield average, given their particular distribution of documents among document types and publication years.

There are large differences between the IF and the JFIS, even within a subfield. Interestingly, the largest differences between the rankings by ISI IF and by JFIS are found in the field of Neurosciences. Overall, we observed that 80% of all journals change five or more positions in the ranking when their ISI IF and JFIS are compared. Table I lists the top-15 journals in Neurosciences based on each of the two indicators, together with a comparison of the two rankings. The ISI IF for the year 1998 was calculated by the authors and does not include citations 'for free'. JFIS values relate to articles, letters, notes, and reviews published during 1993-1997 that were cited from the year of publication up through 1997; for documents published in 1993, for instance, citations are counted during the five-year period 1993-1997 (this variable citation window is illustrated in the brief sketch at the end of this note).

The most striking difference between the two rankings is the relative downward movement of the review journals Cerebrovascular and Brain Metabolism Reviews, Frontiers in Neuroendocrinology, and Progress in Neurobiology. Another review journal, Annual Review of Neuroscience, drops only two positions in the JFIS ranking; however, its JFIS relative to the other journals is considerably lower than its relative ISI IF score. These declines indicate that the normalization procedure used in calculating JFIS tends to reduce the impact scores of review journals. Another important factor is field normalization. For instance, Brain, J Neuropathology & Experimental Neurology, Neurobiology Disease, and Brain Pathology are assigned to both Neurosciences and Clinical Neurology. These journals move upwards in the JFIS ranking, owing to the somewhat lower average citation rates in the latter field. A first analysis of the effect of citation periods reveals that the window underlying the ISI IF is too short in the field of Neurosciences.

This study has shown that whatever journal impact measure one chooses to apply, its outcomes have to be interpreted and used with care, and such measures need to be scrutinized continually. The authors do not claim to have discovered the 'ultimate' journal impact measure. The results presented above are preliminary, and further analyses will generate more insight. However, we feel that we have improved journal impact measurement significantly, and we hope to continue to contribute to its further development.
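As a brief aside, the sketch below spells out the variable citation windows used for the JFIS values in Table I (publication years 1993-1997, citations counted from the year of publication up through 1997); the function name is hypothetical and purely illustrative.

    # Citations are counted from a document's publication year up through 1997,
    # so earlier publication years get longer citation windows.
    def citation_window(publication_year, last_year=1997):
        """Years in which citations to a document published in publication_year are counted."""
        return list(range(publication_year, last_year + 1))

    for year in range(1993, 1998):
        window = citation_window(year)
        print(year, "->", len(window), "year window:", window[0], "-", window[-1])
    # Output runs from "1993 -> 5 year window: 1993 - 1997" to "1997 -> 1 year window: 1997 - 1997".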
REFERENCES

GARFIELD E. Citation Indexing. Its Theory and Applications in Science, Technology and Humanities. New York: Wiley, 1979.
MOED HF and VAN LEEUWEN ThN. Improving the accuracy of Institute for Scientific Information's journal impact factors. Journal of the American Society for Information Science (JASIS), 46: 461-467, 1995.
MOED HF and VAN LEEUWEN ThN. Impact factors can mislead. Nature, 381: 186, 1996.
MOED HF, VAN LEEUWEN ThN, and REEDIJK J. A critical analysis of the journal impact factors of Angewandte Chemie and The Journal of the American Chemical Society: Inaccuracies in published impact factors based on overall citations only. Scientometrics, 37: 105-116, 1996.
MOED HF, VAN LEEUWEN ThN, and REEDIJK J. A new classification system to describe the ageing of scientific journals and their impact factors. Journal of Documentation, 54: 387-419, 1998.
MOED HF, VAN LEEUWEN ThN, and REEDIJK J. Towards appropriate indicators of journal impact. Scientometrics, 46: 575-589, 1999.
VAN LEEUWEN ThN, MOED HF, and REEDIJK J. Critical comments on Institute for Scientific Information impact factors: a sample of inorganic molecular chemistry journals. Journal of Information Science (JIS), 25: 489-498, 1999.

Thed N. van Leeuwen, Centre for Science & Technology Studies (CWTS), Leiden University, PO Box 9555, 2300 RB Leiden, the Netherlands. E-mail:
[email protected]